\section{Introduction and results} Spectral invariants were introduced in Viterbo's seminal work \cite{Viterbo:Generating}. Since their appearance they have become one of the most fundamental tools of quantitative symplectic topology. We do not intend to give an overview of their development and many applications here; instead we direct the reader to work by Oh \cite{Oh:Spectral} for a thorough introduction to the subject from a modern perspective. Very briefly, spectral invariants in the symplectic case consist of functions $$c \colon Ham(X,\omega) \to \mathbf{R}$$ from the group of Hamiltonian diffeomorphisms to the real numbers, which satisfy a list of axioms that will be omitted here. The spectral invariants that we consider here are constructed as follows. For a pair of exact Lagrangian submanifolds $L_0,L_1 \subset (X,d\lambda)$ (the symplectic manifold is thus necessarily exact) one can associate to any Hamiltonian diffeomorphism $\phi \in Ham(X,\omega)$ the Floer complex $CF(L_0,\phi(L_1))$, endowed with its canonical action filtration. Spectral invariants are certain real numbers which encode information about this filtered chain complex. In order to make this precise, we utilise the language of the {\bf barcode} from persistent homology in topological data analysis, which goes back to work by Carlsson--Zomorodian--Collins--Guibas \cite{Carlsson}; see Section \ref{sec:barcode} for our description and also \cite{Polterovich:Persistence} for a thorough introduction. The barcode can be defined for any chain complex $(C,\partial,\mathfrak{a})$ with a filtration by subcomplexes $\mathfrak{a}^{-1}[-\infty,c] \subset C_*$ defined by an "action" function $$\mathfrak{a} \colon C \to \{-\infty\} \cup \mathbf{R}.$$ Phrased in this language, the spectral invariants are the values of the starting points of the semi-infinite bars of the barcode associated to the Floer complex. In fact, the main interest here is not the spectral invariants themselves, but rather the following derived concepts (see Definition \ref{dfn:main}): \begin{itemize} \item The {\bf spectral range} of a filtered complex, denoted by $$\rho(C,\partial,\mathfrak{a}) \in \{-\infty\} \cup [0,+\infty].$$ This quantity is defined as the maximal distance between the starting points of two semi-infinite bars in the corresponding barcode. \item The {\bf boundary depth} of a filtered complex, denoted by $$\beta(C,\partial,\mathfrak{a}) \in \{-\infty\} \cup [0,+\infty].$$ This quantity is defined as the maximal length of a finite bar in the corresponding barcode. \end{itemize} For the Floer complex $CF(L,\phi^1_H(L))$ of a Lagrangian and its Hamiltonian deformation, the spectral range coincides with the {\bf spectral norm}, which we define as $$ \gamma(CF(L,\phi^1_H(L))) \coloneqq \rho(CF(L,\phi^1_H(L))).$$ In general, the spectral norm can be defined whenever the complex satisfies Poincar\'{e} duality in a certain technical sense. (Since we do not go into the details of the axioms of spectral invariants here, the difference between the concepts of spectral range and spectral norm necessarily becomes obscure.) We also need a generalisation of the above spectral invariants to contact manifolds. Since we will only consider contact manifolds of a very particular type, namely contactisations $$(Y,\alpha)=(X \times \mathbf{R},dz+\lambda)$$ of exact symplectic manifolds $(X,d\lambda)$ (see Section \ref{sec:contact}), this can be done by relying on well-established techniques.
From our point of view, the spectral invariants of a contact manifold are defined for the group of contactomorphisms which are contact-isotopic to the identity, and yield functions $$ c \colon Cont_0(Y,\alpha) \to \mathbf{R}.$$ Note that the value does depend on the choice of contact form $\alpha$ here, and not just on the contact structure $\ker\alpha \subset TY$. It should be noted that spectral invariants in the contact setting are much less studied and developed than the symplectic version. However, the original formulation of the spectral invariants, which appeared in \cite{Viterbo:Generating} for symplectic cotangent bundles $(X,\omega)=(T^*M,d(p\,dq))$, admits a straightforward generalisation to the standard contact jet-space $$(J^1M=T^*M\times \mathbf{R},dz-p\,dq),$$ as shown by Zapolsky \cite{Zapolsky:Jet}. In fact, the spectral invariants in \cite{Viterbo:Generating} are based on a version of Floer homology defined using generating families, and this theory can be generalised to invariants of Legendrian isotopies inside jet-spaces by work of Chekanov \cite{Chekanov:Generating}. Note that jet-spaces are particular cases of contactisations. The spectral invariants considered here can be defined either by using generating families as in \cite{Zapolsky:Jet}, or by using a Floer homology constructed from the Chekanov--Eliashberg algebra, as first done in \cite{DualityLeg} by Ekholm--Etnyre--Sabloff; also see work \cite{Dimitroglou:Cthulhu} by the author together with Chantraine--Ghiggini--Golovko. (Strictly speaking, not all axioms of the spectral invariants have yet been established in the latter setting, but this does not affect the results here.) More precisely, given a pair of Legendrians $\Lambda_0$ and $\Lambda_1$, the spectral invariants are associated to the barcode of the Floer complex $CF(\Lambda_0,\phi(\Lambda_1))$, where $\phi$ is a contactomorphism which is contact-isotopic to the identity. In fact, the Floer homology for the exact Lagrangians that we use here will be defined by using exactly the same technique; we lift the exact Lagrangian submanifold to a Legendrian submanifold of the corresponding contactisation, and then use the Floer complex in the contact setting. More details are given in Section \ref{sec:floer}. Viterbo conjectured in \cite{Viterbo:Homogen} that the spectral norm $\gamma(CF(0_{\mathbf{T}^n},\phi(0_{\mathbf{T}^n})))$ of the Floer complex of the zero section $0_{\mathbf{T}^n} \subset T^*\mathbf{T}^n$ satisfies a uniform bound whenever $\phi \in Ham(T^*\mathbf{T}^n)$ maps the zero section into the unit co-disc bundle, i.e.~$\phi(0_{\mathbf{T}^n}) \subset DT^*\mathbf{T}^n$. In recent work by Shelukhin \cite{Shelukhin:Viterbo1}, \cite{Shelukhin:Viterbo2} this was finally shown to hold, even for a wide range of cotangent bundles beyond the torus case. The main point of our work here is to give examples of geometric settings beyond symplectic co-disc bundles where the analogous boundedness of the spectral norm fails. It should be stressed that, at the time of writing of this article, there are still many cases of cotangent bundles for which the original formulation of the problem remains open: does the spectral norm of an exact Lagrangian inside $DT^*M$ which is Hamiltonian isotopic to the zero-section satisfy a uniform bound for any closed smooth manifold $M$?
As a first result, in Part (1) of Theorem \ref{mainthm}, we show that the spectral norm of Legendrians inside the contactisation $DT^*S^1 \times \mathbf{R} \subset J^1S^1$ which are Legendrian isotopic to the zero section does not satisfy a uniform bound. Recall that any Hamiltonian isotopy of $0_{S^1} \subset DT^*S^1$ lifts to a Legendrian isotopy of the zero section $j^10 \subset J^1S^1$ (see Lemma \ref{lma:lift}); consequently, one way to formulate Part (1) of Theorem \ref{mainthm} is by saying that Viterbo's conjecture cannot be generalised to Legendrian isotopies. Below we denote by $$F_{\theta_0,z_0}=\{\theta=\theta_0,z=z_0\} \subset J^1S^1$$ the Legendrian lift of a cotangent fibre $T^*_{\theta_0}S^1$. \begin{mainthm} \label{mainthm} \begin{enumerate} \item There exists a Legendrian isotopy of the zero section $j^10 \subset J^1S^1$ which satisfies $$\phi^t \colon j^10 \hookrightarrow DT^*S^1 \times \mathbf{R}=(S^1 \times [-1,1]) \times \mathbf{R},$$ and for which the complexes $CF(j^10,\phi^t(j^10))$ are all generated by precisely two mixed Reeb chords, whose difference in action moreover becomes arbitrarily large as $t\to+\infty$. In particular, the spectral norm $\gamma(CF(j^10,\phi^t(j^10)))$ becomes arbitrarily large as $t \to +\infty$. \item There exists a Legendrian isotopy of the standard unknot $\Lambda_0 \subset J^1S^1$ shown in Figure \ref{fig:c} which satisfies $$\phi^t \colon \Lambda_0 \hookrightarrow DT^*S^1 \times \mathbf{R}_{>0}=(S^1 \times [-1,1]) \times \mathbf{R}_{>0} \subset J^1S^1,$$ and for which the boundary depth $\beta(CF(\phi^t(\Lambda_0),F_{\theta_0,z_0}))$ becomes arbitrarily large as $t\to+\infty$. In addition, we may assume that $\phi^t$ is supported inside some subset $\{z \ge c\}$ for which the inclusion $\{z \ge c \} \cap \Lambda_0 \subset \Lambda_0$ is strict. \end{enumerate} \end{mainthm} In recent work \cite[Section 6.2]{Biran:Bounds} Biran--Cornea showed that a bound $\gamma(CF(0_M,L)) \le C$ on the spectral norm of the Floer complex of a Lagrangian $L \subset T^*M$, where $L$ is Hamiltonian isotopic to the zero section, implies the bound $\beta(CF(L,T^*_{pt}M)) \le 2C$ on the boundary depth of the Floer complex of $L$ and a cotangent fibre. The Legendrians produced by Part (2) of Theorem \ref{mainthm} can be used to show that the analogous result cannot be generalised to Legendrian isotopies. More precisely: \begin{maincor} There exists a Legendrian isotopy of the zero section $j^10 \subset J^1S^1$ that satisfies $$\phi^t \colon j^10 \hookrightarrow DT^*S^1 \times \mathbf{R}=(S^1 \times [-1,1]) \times \mathbf{R},$$ and for which the spectral norm $\gamma(CF(j^10,\phi^t(j^10)))$ is uniformly bounded for all $t \ge 0$, while the boundary depth $\beta(CF(\phi^t(j^10),F_{\theta_0,z_0}))$ becomes arbitrarily large as $t \to +\infty$. \end{maincor} \begin{proof} Take a cusp-connected sum of a $C^1$-small perturbation of the zero-section $j^10$ and any unknot $\Lambda_t$ from the family produced by Part (2) of Theorem \ref{mainthm}; the case of $\Lambda_0$ is shown in Figure \ref{fig:c}. We refer to \cite{Dimitroglou:Ambient} for the definition of cusp-connected sum (also called ambient Legendrian 0-surgery) along a Legendrian arc (the so-called surgery disc). We perform the cusp-connected sum along a Legendrian arc which is contained inside the region $\{z<c\}$ and hence disjoint from the support of the Legendrian isotopy of the unknots.
Note that the Legendrian resulting from the cusp-connected sum is Legendrian isotopic to the zero-section, as shown in Figure \ref{fig:c}. It follows that the same is true for the cusp-connected sum of $j^10$ and any Legendrian $\Lambda_t$ from the family. Finally, in order to evaluate the effect of the ambient surgery on the barcodes of the Floer complexes we apply Theorem \ref{thm:ambientsurgery}. To that end, the following two facts are needed. First, $CF(\Lambda_t,j^10)$ is acyclic, and thus its barcode consists of only finite bars. This follows by invariance under Legendrian isotopy. (After a translation of $\Lambda_t$ sufficiently far in the negative $z$-direction, all generators of the Floer complex disappear.) Second, $$CF(j^1f \cup \Lambda_t, j^10)=CF(j^1f,j^10)\oplus CF(\Lambda_t,j^10)$$ is a direct sum of complexes. The barcode is thus the union of the barcodes. \end{proof} \begin{figure}[htp] \vspace{3mm} \labellist \pinlabel $\color{blue}\Lambda_0$ at 75 50 \pinlabel $\color{blue}j^10$ at 75 19 \pinlabel $\color{blue}\Lambda_-$ at 225 32 \pinlabel $\color{blue}\Lambda_+$ at 360 32 \pinlabel $z$ at 56 89 \pinlabel $z$ at 192 89 \pinlabel $z$ at 327 89 \pinlabel $\theta$ at 122 27 \pinlabel $\theta$ at 257 27 \pinlabel $\theta$ at 392 27 \pinlabel $c$ at 63 36 \pinlabel $\frac{1}{2}$ at 110 35 \pinlabel $-\frac{1}{2}$ at 0 35 \endlabellist \includegraphics{c} \caption{\emph{Left:} the front projection of the zero section $j^10 \subset J^1(\mathbf{R}/\mathbf{Z})=J^1S^1$ and a standard Legendrian unknot $\Lambda_0$. \emph{Middle:} The result of a Legendrian \emph{RI}-move on each component; $\Lambda_-$ denotes the union of the two components. \emph{Right:} The Legendrian $\Lambda_+$ which is the result after a cusp-connected sum along the dotted arc shown in the middle picture. $\Lambda_+$ is Legendrian isotopic to the zero section ($\Lambda_+$ is obtained by performing two \emph{RI}-moves on the zero-section).} \label{fig:c} \end{figure} \begin{mainthm} \label{thm:ambientsurgery} Assume that $\Lambda_+$ is a Legendrian obtained from $\Lambda_-$ by a Legendrian ambient surgery. After making the surgery region sufficiently small, we can assume that there is an action-preserving isomorphism $$CF((\Lambda_+,\varepsilon_+),(\Lambda,\varepsilon)) \to CF((\Lambda_-,\varepsilon_-),(\Lambda,\varepsilon))$$ of complexes, where $(\Lambda,\varepsilon)$ is an arbitrary but fixed Legendrian, and where the augmentation $\varepsilon_+$ is induced by pulling back the augmentation $\varepsilon_-$ under the DGA-morphism induced by the standard Lagrangian handle-attachment cobordism. In particular, the barcodes of the two Floer complexes coincide. \end{mainthm} Similar results were found in \cite[Section 5.3]{Biran:Bounds} in the setting of exact Lagrangian cobordisms, in the sense of Arnol'd, between exact Lagrangian submanifolds. Finally, we present a Hamiltonian isotopy of a closed exact Lagrangian inside a Liouville domain for which the spectral norm becomes arbitrarily large. The simplest example of such a Liouville domain is the two-torus with an open ball removed; we denote it by $(\Sigma_{1,1},d\lambda)$ and depict it in Figure \ref{fig:t1}. The detailed construction is given in Section \ref{sec:torus}.
\begin{mainthm} \label{mainthm:torus} There exists a closed exact Lagrangian submanifold $L \subset (\Sigma_{1,1},\omega)$ and a compactly supported Hamiltonian $H\colon \Sigma_{1,1} \to \mathbf{R}$ for which the induced compactly supported Hamiltonian isotopy $\phi^t_{H} \colon (\Sigma_{1,1},\omega) \to (\Sigma_{1,1},\omega)$ satisfies the property that the spectral norm $\gamma(CF(L,\phi^t_{H}(L)))$ becomes arbitrarily large as $t \to +\infty$. \end{mainthm} \subsection{Why the proofs of uniform bounds fail for Legendrians} \label{sec:why} The techniques that are used in \cite{Shelukhin:Viterbo2} and \cite{Biran:Bounds} to prove the results in the case of the cotangent bundle are not yet fully developed in the case of Legendrians in contactisations. This includes the closed-open map, which is a crucial ingredient in \cite{Shelukhin:Viterbo2}, and a unital $A_\infty$-structure on the Floer complex with relevant PSS-isomorphisms, which is crucial in \cite{Biran:Bounds}. Nevertheless, we still expect that these operations can also be defined for the Floer homology of Legendrians in contactisations. In fact, the $A_\infty$-structure was recently extended to this setting by Legout \cite{Legout}. So this should not be the reason why the proofs break down. What, then, goes wrong in the proofs if one tries to generalise to the Legendrian case? First we recall the properties of the Floer homology complex of a Legendrian and itself; see e.g.~\cite{DualityLeg} for the details. In order to define $CF(\Lambda,\Lambda)$ one must first make the mixed Reeb chords transverse by a Legendrian perturbation of the second copy of $\Lambda$. We do this by replacing $\Lambda$ with a section $j^1f$ in its standard contact jet-space neighbourhood, where $f \colon \Lambda \to \mathbf{R}$ is a $C^1$-small Morse function. In this manner we obtain $$CF(\Lambda,\Lambda)=C^{\mathrm{Morse}}(f;\mathbf{k}) \oplus \bigoplus_{c \in \mathcal{Q}(\Lambda)} \left(\mathbf{k} p_c \oplus \mathbf{k} q_c\right), $$ where $\mathcal{Q}(\Lambda)$ denotes the set of Reeb chords on $\Lambda$, and $C^{\mathrm{Morse}}(f;\mathbf{k})$ is the Morse homology complex with basis given by the critical points of the function $f \colon \Lambda \to \mathbf{R}$. The actions of the Reeb-chord generators are approximately equal to $\mathfrak{a}(p_c)=\ell(c)$ and $\mathfrak{a}(q_c)=-\ell(c)$, while the action of a Morse generator $x$ is equal to $\mathfrak{a}(x)=f(x)$. What is important to notice here is that the Morse generators may be assumed to have arbitrarily small action, while this is not the case for the generators that correspond to pure Reeb chords. When $\Lambda$ is the Legendrian lift of an exact Lagrangian embedding, there are of course only generators of the Morse type. This turns out to be the crucial difference between the symplectic and the contact case. \emph{Example in Part (1) of Theorem \ref{mainthm}:} The proof in \cite{Shelukhin:Viterbo2} uses the closed-open map. More precisely, a crucial ingredient in the proof is the action-preserving property of the operations $P'_{a}$ on the Floer homology $CF(0_M,\phi^1_H(0_M))$, which are defined using the length-0 part $\phi^0(a)$ and length-1 part $\phi^1(a,\cdot)$ of the closed-open map for certain elements $a \in SH(T^*M)$ in symplectic homology. In the case when the Legendrian has pure Reeb chords (i.e.~it is not the lift of an exact Lagrangian embedding), the chain $\phi^0(a) \in CF(\Lambda,\Lambda)$ may thus consist of generators whose action does not vanish (since they do not correspond to Morse generators).
In this case the action-preserving property of $P'_{a}$, in terms of merely the action of the element $a \in SH(T^*M)$, is lost. \emph{Example in Part (2) of Theorem \ref{mainthm}:} The proof in \cite[Section 6.2]{Biran:Bounds} uses the fact that there are continuation elements $a \in CF(\phi^1_H(0_M),0_M)$ and $b \in CF(0_M,\phi^1_H(0_M))$ for which $\mu_2(a,b) \in CF(\phi^1_H(0_M),\phi^1_H(0_M))$ is the unique maximum of a suitable Morse function. In the Legendrian case the element $\mu_2(a,b) \in CF(\phi^1(j^10),\phi^1(j^10))$ is still a homology unit; however, it is not necessarily a Morse chord, but can be of significant action. In particular, multiplication with the element $\mu_2(a,b)$ is not necessarily the identity on the chain level, nor is it necessarily homotopic to the identity by a chain homotopy of small action. The geometrically induced chain homotopy $\mu_3(a,b,\cdot)$ between $\mu_2(a,\mu_2(b,\cdot))$ and $\mu_2(\mu_2(a,b),\cdot)$ increases action by at most the spectral norm, and is used in \cite{Biran:Bounds} for establishing the bound on the boundary depth. However, this chain homotopy no longer does the job, since we also need an additional chain homotopy (of unknown action properties) to take us from the map $\mu_2(\mu_2(a,b),\cdot)$ to the chain-level identity. \section{Background} \subsection{Contact geometry of jet-spaces and contactisations} \label{sec:contact} An {\bf exact symplectic manifold} is a smooth $2n$-dimensional manifold $(X^{2n},d\lambda)$ equipped with a choice of a primitive one-form $\lambda$ for an exact symplectic two-form $\omega=d\lambda$, i.e.~$\omega$ is skew-symmetric, non-degenerate, and closed. Note that the primitive $\lambda$ should be considered as part of the data describing the exact symplectic manifold. A compact exact symplectic manifold with boundary $(W,d\lambda)$ is a {\bf Liouville domain} if the {\bf Liouville vector field}, i.e.~the vector field $\zeta$ given as the symplectic dual of $\lambda$ via the equation $\iota_\zeta d\lambda=\lambda$, is transverse to the boundary $\partial W$. The flow generated by $\zeta$ is called the {\bf Liouville flow} and satisfies $(\phi^t_\zeta)^*\lambda=e^t\lambda$. An open exact symplectic manifold $(\overline{W},d\lambda)$ is a {\bf Liouville manifold} if all critical points of the Liouville vector field are contained inside some compact Liouville domain $W \subset (\overline{W},d\lambda)$, and if the Liouville flow is complete. A {\bf Hamiltonian isotopy} is a smooth isotopy of $X$ which is generated by a time-dependent vector field $V_t \in \Gamma(TX)$ that satisfies $\iota_{V_t}d\lambda=-dH_t$ for some smooth time-dependent function $$H \colon X \times \mathbf{R}_t \to \mathbf{R},$$ which is called the {\bf Hamiltonian}. A diffeomorphism of $X$ which is the time-$t$ flow generated by such a vector field preserves the symplectic form (but not the primitive) and is denoted by $$\phi^t_{H} \colon (X,\omega) \to (X,\omega);$$ we call such a map a {\bf Hamiltonian diffeomorphism}. Conversely, any choice of Hamiltonian function induces a Hamiltonian isotopy $\phi^t_H$ in the above manner.
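To make the sign conventions concrete, consider the following standard example (included here purely as an illustration, and not part of the constructions below). On $(\mathbf{R}^2_{q,p},d\lambda)$ with $\lambda=p\,dq$, so that $d\lambda=dp\wedge dq$, the equation $\iota_\zeta d\lambda=\lambda$ yields the Liouville vector field $$\zeta=p\,\partial_p, \qquad \phi^t_\zeta(q,p)=(q,e^tp), \qquad (\phi^t_\zeta)^*\lambda=e^t\lambda,$$ while the equation $\iota_{V_t}d\lambda=-dH_t$ yields the Hamiltonian vector field $$V_t=\partial_pH_t\,\partial_q-\partial_qH_t\,\partial_p.$$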
Since we consider exact symplectic manifolds, a smooth isotopy $\phi^t \colon X \to X$ is a Hamiltonian isotopy if and only if $(\phi^t)^*\lambda=\lambda+dG_t$ holds for some smooth function $$G \colon X \times \mathbf{R}_t \to \mathbf{R}.$$ Note that the Hamiltonian function that corresponds to a Hamiltonian isotopy is determined only up to the addition of a constant. Any exact $2n$-dimensional symplectic manifold $(X^{2n},d\lambda)$ gives rise to a $(2n+1)$-dimensional contact manifold $(X \times \mathbf{R}_z,dz+\lambda)$ called its {\bf contactisation}, which is equipped with the canonical contact one-form $\alpha_{st}\coloneqq dz+\lambda$. The contactisations induced by choices of primitives $\lambda$ and $\lambda'=\lambda+df$ of the symplectic form that differ by the exterior differential of $f \colon X \to \mathbf{R}$ are isomorphic via the coordinate change $z\mapsto z-f$. Recall that the contact condition is equivalent to $d\alpha_{st}$ being non-degenerate on the contact planes $\ker \alpha_{st} \subset T(X \times \mathbf{R})$. A {\bf contact isotopy} is a smooth isotopy which preserves the distribution $\ker \alpha_{st}$ (but not necessarily the contact form). The contraction $\iota_{V_t}\alpha_{st}$ of the contact form with the infinitesimal generator gives a bijective correspondence between contact isotopies and smooth time-dependent functions on $X \times \mathbf{R}$; the latter are called {\bf contact Hamiltonians}. We refer to \cite{Geiges:Intro} for more details. \begin{lma} \label{lma:lift} A Hamiltonian isotopy $\phi^t_H \colon (X,d\lambda) \to (X,d\lambda)$ with a choice of Hamiltonian $H_t \colon X \to \mathbf{R}$ lifts to a contact isotopy \begin{gather*} X \times \mathbf{R} \to X \times \mathbf{R},\\ (x,z) \mapsto (\phi^t_H(x),z-G_t(x)), \end{gather*} where the function $G \colon X \times \mathbf{R}_t \to \mathbf{R}$ is defined by $$G_t(x)=\int_0^t \left(\lambda(V_s(\phi^s_{H}(x)))-H_s(\phi^s_H(x))\right)ds$$ and satisfies the property $$(\phi^t_{H})^*\lambda=\lambda+d\,G_t.$$ Moreover, this contact isotopy preserves the contact form $\alpha_{st}$ and is generated by the time-dependent contact Hamiltonian $H_t \circ \operatorname{pr}_X \colon X \times \mathbf{R}_z \to \mathbf{R}$. \end{lma} (That the lift preserves the contact form is a direct computation: the pullback of $\alpha_{st}$ under $(x,z) \mapsto (\phi^t_H(x),z-G_t(x))$ equals $d(z-G_t)+(\phi^t_H)^*\lambda=dz-dG_t+\lambda+dG_t=\alpha_{st}$.) A smooth immersion of an $n$-dimensional manifold $$\Lambda \looparrowright (X^{2n} \times \mathbf{R},dz+\lambda)$$ in the contactisation is {\bf Legendrian} if it is tangent to $\ker \alpha_{st}$, while a smooth $n$-dimensional immersion $L \looparrowright (X^{2n},d\lambda)$ in an exact symplectic manifold is {\bf exact Lagrangian} if $\lambda$ pulls back to an \emph{exact} one-form. The following relation between Legendrians and exact Lagrangians is immediate: \begin{lma} \label{lma:laglift} The canonical projection of a Legendrian immersion to $(X,d\lambda)$ is an exact Lagrangian immersion. Conversely, any exact Lagrangian immersion lifts to a Legendrian immersion in the contactisation $X \times \mathbf{R}$. Moreover, the lift is uniquely determined by the choice of a primitive $f \colon L \to \mathbf{R}$ of the pull-back $\lambda|_{TL}=df$, via the formula $z=-f$. \end{lma} Transverse double points of Lagrangian immersions are stable. On the other hand, generic Legendrian immersions are in fact \emph{embedded}; however, there are self-intersections of Legendrians that appear stably in one-parameter families. Recall the following standard fact; again we refer to e.g.~\cite{Geiges:Intro} for details.
\begin{lma} A compactly supported smooth isotopy $\phi^t(\Lambda) \subset X \times \mathbf{R}$ through Legendrian embeddings can be generated by an ambient contact isotopy. \end{lma} \subsubsection{The cotangent bundle and jet-space} There is a canonical exact symplectic two-form $-d(p\,dq)$ on any smooth cotangent bundle $T^*M$, whose primitive $-p\,dq$ is the tautological one-form with a minus sign. The cotangent bundle is a Liouville manifold and any co-disc bundle is a Liouville domain. The zero-section $0_M \subset T^*M$ is obviously an exact Lagrangian embedding. The contactisation of $T^*M$ is the one-jet space $J^1M=T^*M \times \mathbf{R}_z$ with the canonical contact one-form $dz-p\,dq$. The zero-section in $T^*M$ lifts to the one-jet $j^1c$ of any constant function $c$ (obviously the one-jet $j^1f$ of an arbitrary function $f \colon M \to \mathbf{R}$ is Legendrian isotopic to $j^10$). For us the most relevant example is actually the two-dimensional symplectic cotangent bundle $T^*S^1=S^1 \times \mathbf{R}_p$ equipped with the exact symplectic two-form $-d(p\,d\theta)$, and its corresponding contactisation, i.e.~the three-dimensional contact manifold $$(J^1S^1=T^*S^1 \times \mathbf{R}_z,dz-p\,d\theta)$$ (note the sign convention for the Liouville form). In order to describe Legendrians in $J^1M$ we will make use of the {\bf front projection}, by which one simply means the canonical projection $$\Pi_F \colon J^1M \to M \times \mathbf{R}_z.$$ A Legendrian immersion is uniquely determined by its post-composition with the front projection. A generic Legendrian knot in $J^1S^1$ has a front projection whose singular locus consists of \begin{itemize} \item non-vertical cubical cusps and \item transverse self-intersections. \end{itemize} On the other hand, note that the front projection has no vertical tangencies by the Legendrian condition. Two sheets of the front projection that have the same slopes (i.e.~$p$-coordinates) above some given point in the base project to a double point inside $T^*M$. There is a bijection between double points of this projection and Reeb chords, where a Reeb chord is an integral curve of $\partial_z$ with both endpoints on the Legendrian. The difference between the $z$-coordinates of the endpoint and the starting point of a Reeb chord $c$ is called its {\bf length} and is denoted by $\ell(c)\ge 0$. Double points of the Legendrian immersion itself correspond to self-tangencies of the front projection. This is not a stable phenomenon, and double points of Legendrians generically occur only in one-parameter families. These double points can be considered as Reeb chords of length zero. Two Legendrian knots inside $J^1\mathbf{R}$ or $J^1S^1$ with generic fronts are Legendrian isotopic if and only if their front projections can be related by a sequence of \emph{Legendrian Reidemeister moves} \cite{LegendrianReidemeister} together with an ambient isotopy of the front inside $S^1 \times \mathbf{R}_z$; see \cite{Etnyre:Legendrian} for an introduction to Legendrian knots.
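The correspondence between fronts and Reeb chords can be made explicit as follows (a standard local computation, included for convenience). If two sheets of a front are graphs $z=f_1(q)$ and $z=f_2(q)$, then a Reeb chord between the corresponding sheets of the Legendrian sits above a point $q_0$ where the slopes agree, i.e.~$f_1'(q_0)=f_2'(q_0)$, and has length $$\ell=|f_1(q_0)-f_2(q_0)|.$$ In particular, the Reeb chords between $j^10$ and $j^1f$ inside $J^1M$ correspond to the critical points $x$ of $f$, the corresponding chord having length $|f(x)|$.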
\begin{figure}[htp] \vspace{3mm} \labellist \pinlabel $z$ at 4 82 \pinlabel $z$ at 117 82 \pinlabel $RI$ at 94 50 \pinlabel $x$ at 82 4 \pinlabel $x$ at 195 4 \endlabellist \includegraphics{r1} \caption{\emph{RI}: The first Legendrian Reidemeister move in the front projection.} \label{fig:r1} \end{figure} \begin{figure}[htp] \vspace{3mm} \labellist \pinlabel $z$ at 4 82 \pinlabel $z$ at 117 82 \pinlabel $RII$ at 94 50 \pinlabel $x$ at 82 4 \pinlabel $x$ at 195 4 \endlabellist \includegraphics{r2} \caption{\emph{RII}: The second Legendrian Reidemeister move in the front projection.} \label{fig:r2} \end{figure} \begin{figure}[htp] \vspace{3mm} \labellist \pinlabel $z$ at 4 82 \pinlabel $z$ at 117 82 \pinlabel $RIII$ at 93 50 \pinlabel $x$ at 82 4 \pinlabel $x$ at 195 4 \endlabellist \includegraphics{r3} \caption{\emph{RIII}: The third Legendrian Reidemeister move in the front projection.} \label{fig:r3} \end{figure} For convenience we will also introduce a composite move that we will make repeated use of; this is the one shown in Figure \ref{fig:r}, which involves taking two cusp edges with different slopes and making them cross each other (it is important that the cusps have different slopes). \begin{figure}[htp] \vspace{3mm} \labellist \pinlabel $z$ at 4 82 \pinlabel $z$ at 117 82 \pinlabel $2\times RII$ at 94 50 \pinlabel $x$ at 82 4 \pinlabel $x$ at 195 4 \endlabellist \includegraphics{r} \caption{A composite move: the front to the right is obtained by performing two consecutive \emph{RII}-moves on the front to the left together with an isotopy.} \label{fig:r} \end{figure} \subsubsection{The punctured torus} \label{sec:torus} Here we construct an example of a two-dimensional non-planar Liouville domain: the two-torus minus an open ball, which we denote by $(\Sigma_{1,1},d\lambda).$ First, consider the primitive $$\lambda_0=\frac{1}{2}(p\,dq-q\,dp)$$ of the standard linear symplectic form $dp\wedge dq$ on $\mathbf{R}^2.$ We have the identities \begin{align*} & \lambda_0-d\left(\frac{pq}{2}\right)=-q\,dp,\\ & \lambda_0+d\left(\frac{pq}{2}\right)=p\,dq. \end{align*} Take a smooth function $\sigma \colon \mathbf{R}^2 \to \mathbf{R}$ which in the standard coordinates $(p,q) \in \mathbf{R}^2$ is given by \begin{itemize} \item $\sigma(p,q)=-pq/2$ on $\{|q| \le 1,|p|>2\}$, while it is of the form $-g(p)q/2$ for some smooth function $g$ with $g(p),g'(p) \ge 0$ on $\{|q| \le 1, |p| \ge 1\}$; \item $\sigma(p,q)=pq/2$ on $\{|q|>2,|p| \le 1\}$, while it is of the form $g(q)p/2$ for some smooth function $g$ with $g(q),g'(q) \ge 0$ on $\{|p| \le 1, |q| \ge 1\}$; and \item $\sigma(p,q)=0$ on $\{|q|<1,|p|<1\}$. \end{itemize} Consider the exact symplectic manifold $(X,d\lambda)$ which is obtained by taking the cross-shaped domain $$ \{ p \in [-2,2], q \in [-1,1] \} \cup \{ q \in [-2,2], p \in [-1,1] \} \subset \mathbf{R}^2$$ and identifying $\{p=2\}$ with $\{p=-2\}$, and $\{q=-2\}$ with $\{q=2\}$, in the obvious manner. Topologically the result is a punctured torus. The Liouville form $\lambda_0+d\sigma$ on $\mathbf{R}^2$ descends to a Liouville form $\lambda$ on this punctured torus; indeed, along the identified boundary faces it coincides with the one-forms $-q\,dp$ and $p\,dq$ by the above identities, and these one-forms are invariant under the translations used for the identifications. The punctured torus has a skeleton $Sk \subset X$ which is the image of the cross $\{pq=0\}$ under the quotient; in other words, $Sk \subset X$ is the union of two smooth Lagrangian circles that intersect transversely in a single point.
Note that $$ Sk=\bigcap_{T=1}^\infty \phi^{-T}_\zeta(X),$$ where $\phi^t_\zeta$ denotes the Liouville flow. We claim that the sought Liouville domain $(\Sigma_{1,1},d\lambda)$ can be realised as a suitable subset of this exact symplectic manifold, simply by smoothing its corners; see Figure \ref{fig:t1}. Since $(\Sigma_{1,1},\lambda)$ is a surface with non-empty boundary, it admits a symplectic trivialisation of its tangent bundle. This implies that all Lagrangian submanifolds of $\Sigma_{1,1}$ have a well-defined Maslov class; see Section \ref{sec:Maslov} for more details. We will make heavy use of the fact that the Maslov class depends on the choice of a symplectic trivialisation; in this case, symplectic trivialisations up to homotopy can be identified with homotopy classes of maps $$\Sigma_{1,1} \to S^1,$$ i.e.~cohomology classes in $H^1(\Sigma_{1,1};\mathbf{Z}).$ \subsection{Barcode of a filtered complex and notions from spectral invariants} \label{sec:barcode} A {\bf filtered complex} over some field $\mathbf{k}$ is a chain complex $(C,\partial,\mathfrak{a})$ in which each element is endowed with an action $\mathfrak{a}(c) \in \mathbf{R} \sqcup \{-\infty\}$ such that the following properties are satisfied: \begin{itemize} \item $\mathfrak{a}(c)=-\infty$ if and only if $c=0$, \item $\mathfrak{a}(r\cdot c)=\mathfrak{a}(c)$ for any $r \in \mathbf{k}^*$, \item $\mathfrak{a}(a+b) \le \max\{\mathfrak{a}(a),\mathfrak{a}(b)\}$, and \item $\mathfrak{a}(\partial(a))<\mathfrak{a}(a)$ for any $a \neq 0$. \end{itemize} The subset $$C^{<a}=\mathfrak{a}^{-1}(\{-\infty\} \cup (-\infty,a))$$ is a $\mathbf{k}$-subspace by the first three bullet points; this subspace is a subcomplex by the last bullet point. We say that a basis $\{e_i\}$ is {\bf compatible} with the filtration if the action of a general element $c \in C$ is given by \begin{equation} \label{eq:compatible} \mathfrak{a}(r_1e_1+\ldots+r_ne_n) = \max\{\mathfrak{a}(e_i); \: r_i \neq 0 \}, \:\:\: r_i \in \mathbf{k}. \end{equation} Such a basis always exists for any filtered complex by a result due to Barannikov \cite{Barannikov}; also see \cite[Lemma 2.2]{Dimitroglou:Persistence}. (For a general basis one would have to replace the equality "$=$" in Formula \eqref{eq:compatible} with an \emph{inequality} "$\le$".) Given a basis with a specified action on each basis element, one can also use the above formula to \emph{construct} a filtration on the entire complex, under the assumption that the differential decreases action. The Floer complexes described below are endowed with filtrations in precisely this manner, i.e.~by specifying an action for each canonical and geometrically induced basis element. To every complex of vector spaces equipped with a filtration one can associate a {\bf barcode}; we refer to \cite[Section 2]{Dimitroglou:Persistence} for the details of the presentation that we rely on here. The barcode is a set of intervals of the form $[a,b)$ and $[a,+\infty)$, where $a,b\in \mathbf{R}$ and multiplicities are allowed. Instead of giving the usual definition of the barcode, we give the following alternative characterisation. \begin{lma}[Lemma 2.6 in \cite{Dimitroglou:Persistence}] The barcode can be recovered from the following data: \begin{enumerate} \item For any basis which is compatible with the action filtration, there is a bijection between the set of actions of basis elements and the union of start and endpoints of bars (counted with multiplicities).
\item For any two numbers $a<b$, the number of bars of $C_*$ whose endpoint $e$ satisfies $e \in (b,+\infty]$ and whose starting point $s$ satisfies $s \in [a,b)$ is equal to $\dim H(C^{<b}/C^{<a})$. \end{enumerate} \end{lma} \begin{cor} \label{cor:depth} Assume that the barcode contains a finite bar $[a,b)$. Then, for any compatible basis $\{e_i\}$, we can deduce the existence of basis elements $e_i$ and $e_j$ with $\mathfrak{a}(e_i)=b$, $\mathfrak{a}(e_j)=a$, and for which $\langle \partial e_i,e_j \rangle \neq 0$. Conversely, if there exists a compatible basis $\{e_i\}$ for which $\partial e_i = r e_j$ for some coefficient $r \neq 0$, then the barcode contains the finite bar $[\mathfrak{a}(e_j),\mathfrak{a}(e_i))$. \end{cor} \begin{rmk} It is important that the barcode considered here does not depend on the grading in any way. An efficient way to deduce properties of the barcode is nonetheless to find a grading for the compatible basis which makes the differential an operation of degree $-1$. This imposes restrictions on the differential, which in view of the previous corollary imposes restrictions on the barcode. This technique will be crucial when studying our examples. \end{rmk} To a filtered complex as above we associate the following important notions. \begin{dfn} \label{dfn:main} \begin{enumerate} \item The {\em spectral range} $\rho(C,\partial,\mathfrak{a}) \in \{-\infty\} \cup [0,+\infty]$ is the supremum of the distances between starting points of the semi-infinite bars in the barcode. \item The {\em boundary depth} $\beta(C,\partial,\mathfrak{a}) \in \{-\infty\} \cup [0,+\infty]$ is the supremum of the lengths of the finite bars in the barcode. \end{enumerate} \end{dfn} An important feature of the barcode is that it remains invariant under simple bifurcations of the complex, i.e.~action-preserving handle-slides and birth/deaths. Legendrian isotopies induce one-parameter families of the Floer complexes considered here, which undergo bifurcations of precisely this type; hence the corresponding barcode undergoes continuous deformations under Legendrian isotopies. Since this property will not be needed, we do not give more details here, but instead direct the interested reader to \cite{Dimitroglou:Persistence}. \subsection{Floer theory in the setting of exact Lagrangians and Legendrians} \label{sec:floer} Floer homology for pairs $(L_0,L_1)$ of closed exact Lagrangian submanifolds of cotangent bundles was originally defined by Floer \cite{Floer:Morse}. For any such pair one obtains the Floer chain complex $CF(L_0,L_1)$ with a basis given by the intersection points $L_0 \cap L_1$, which here are assumed to be transverse. Floer also showed that the homology of the complex -- the so-called Floer homology $HF(L_0,L_1)$ -- is invariant under Hamiltonian isotopy of either Lagrangian $L_i$. Moreover, in the case when $L_1$ is a $C^1$-small Hamiltonian perturbation of $L_0$, the Floer complex $CF(L_0,L_1)=C^{\mathrm{Morse}}(f)$ is the Morse complex of a $C^1$-small Morse function $f \colon L_0 \to \mathbf{R}$. (This is no longer true for the Floer homology of Legendrians; see Section \ref{sec:why}.) Nowadays there are several different techniques available for constructing Floer homology.
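Before specifying the construction that we use, let us illustrate the notions of Section \ref{sec:barcode} in a minimal example (included purely for concreteness, and not part of the constructions below). Consider a complex with compatible basis $e_1,e_2,e_3,e_4$, actions $$\mathfrak{a}(e_1)=0,\quad \mathfrak{a}(e_2)=1,\quad \mathfrak{a}(e_3)=3,\quad \mathfrak{a}(e_4)=2,$$ and differential determined by $\partial e_3=e_2$ and $\partial e_1=\partial e_2=\partial e_4=0$. By Corollary \ref{cor:depth} the pair $(e_3,e_2)$ contributes the finite bar $[1,3)$, while $e_1$ and $e_4$ contribute the semi-infinite bars $[0,+\infty)$ and $[2,+\infty)$. In the notation of Definition \ref{dfn:main} we thus have $\rho=2$ and $\beta=2$.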
Here we will consider the setting of Legendrian submanifolds of contactisations $(\overline{W} \times \mathbf{R},\alpha_{st})$ of a Liouville manifold $(\overline{W},d\lambda)$, in which Floer homology associates a chain complex $CF(\Lambda_0,\Lambda_1)$ to a pair of Legendrian submanifolds equipped with additional data. In this case, the homology of the complex is invariant under Legendrian isotopy of either Legendrian $\Lambda_i$. This is the version that we will use also in the case of exact Lagrangian embeddings in $(\overline{W},d\lambda).$ To that end, recall that exact Lagrangians admit lifts to Legendrians by Lemma \ref{lma:laglift}, and that a Hamiltonian isotopy of the Lagrangian induces a Legendrian isotopy of the Legendrian lift by Lemma \ref{lma:lift}. In the case when $\overline{W}=T^*M$, and thus $\overline{W} \times \mathbf{R} =J^1M$, Zapolsky in \cite{Zapolsky:Jet} relied on Floer homology defined using the theory of generating families due to Chekanov \cite{Chekanov:Generating} in order to define spectral invariants. Since we will work with contactisations that are more general than jet-spaces, we instead follow the techniques from \cite{DualityLeg} by Ekholm--Etnyre--Sabloff, where the Floer chain complex is constructed as the linearised Legendrian contact-homology complex associated to the Chekanov--Eliashberg algebra \cite{DiffAlg}, \cite{ContHomP}. First we outline the general set-up of Floer homology in this setting, which applies equally well to either the version used here or the version defined by using generating families (when applicable). Given a pair of Legendrians $\Lambda_0,\Lambda_1\subset \overline{W} \times \mathbf{R}$, equipped with additional data denoted by $\varepsilon_i$ to be specified below (in the version defined using generating families, this additional data is simply the choice of a generating family), one obtains a filtered chain complex $$(CF_*((\Lambda_0,\varepsilon_0),(\Lambda_1,\varepsilon_1)),\partial,\mathfrak{a}),$$ graded in $\mathbf{Z}$ or $\mathbf{Z}/\mu\mathbf{Z}$ depending on the Maslov class as described in Section \ref{sec:Maslov}, with a canonical compatible basis as a $\mathbf{k}$-vector space given by the \begin{itemize} \item Reeb chords $c$ from $\Lambda_0$ to $\Lambda_1$ of action $\mathfrak{a}(c)=\ell(c)$ equal to the Reeb chord length; together with the \item Reeb chords $c$ from $\Lambda_1$ to $\Lambda_0$ of action $\mathfrak{a}(c)=-\ell(c)$ equal to minus the Reeb chord length. \end{itemize} We assume that all Reeb chords are transversely cut out, and hence that they form a discrete subset, which thus is finite whenever the Legendrians are closed. With our conventions the differential is \emph{strictly action-decreasing and of degree $-1$.} The Floer complex satisfies the following important properties; see \cite{DualityLeg} for details. \begin{itemize} \item A Legendrian isotopy of the Legendrian $\Lambda_i$ induces a canonical continuation of the additional data $\varepsilon_i$, and the resulting one-parameter family of Floer complexes undergoes only simple bifurcations, i.e.~handle-slides and births/deaths. In particular, the homology of the complex is not changed under such a deformation.
\item In the case when $\Lambda \subset \overline{W} \times \mathbf{R}$ has no Reeb chords (i.e.~it is the lift of an exact Lagrangian embedding), and when $\Lambda'$ is a $C^1$-small Legendrian perturbation, the induced Floer complex $$(CF((\Lambda,\varepsilon),(\Lambda',\varepsilon')),\partial,\mathfrak{a})=C^{\mathrm{Morse}}(f;\mathbf{k})$$ is the Morse homology complex of some $C^1$-small Morse function $f \colon \Lambda \to \mathbf{R}$. \end{itemize} Again we refer to Section \ref{sec:why} for a description of the complex in the presence of pure Reeb chords; in this case the Morse complex is only realised as a quotient complex of a subcomplex. \subsubsection{Floer complex as the linearised Chekanov--Eliashberg algebra} Here we give the relevant technical details for the particular construction of Floer homology used here, i.e.~the one relying on the Chekanov--Eliashberg algebra for Legendrians in contactisations from \cite{ContHomP}. Assume that $\Lambda_0,\Lambda_1 \subset \overline{W} \times \mathbf{R}$ are two Legendrian submanifolds. Further, assume that the Chekanov--Eliashberg algebras of $\Lambda_i$ admit augmentations $$\varepsilon_i \colon (\mathcal{A}(\Lambda_i),\partial) \to \mathbf{k};$$ recall that the Chekanov--Eliashberg algebra is a unital DGA generated by the Reeb chords of the Legendrian, and that an augmentation is a unital DGA morphism to the ground field. In particular, when the Legendrian $\Lambda_i$ has no Reeb chords, the Chekanov--Eliashberg algebra takes the simple form $\mathcal{A}(\Lambda_i)=\mathbf{k},$ and there is a canonical augmentation. An important property of augmentations is that they can be pushed forward under a Legendrian isotopy; see e.g.~\cite{DiffAlg} and \cite{Dimitroglou:Cthulhu}. Typically one wants more additional data than just an augmentation. For instance, in order to use coefficients in a field of characteristic different from two, one also needs to fix the choice of a spin structure on both Legendrians $\Lambda_i$. In order to endow the Floer complex with a $\mathbf{Z}$-grading, we need to specify a Maslov potential; we refer to Subsection \ref{sec:Maslov} for more details concerning the grading, which will play an important role for us. The Floer complex $$ CF((\Lambda_0,\varepsilon_0),(\Lambda_1,\varepsilon_1)) $$ is generated by the chords that have one endpoint on $\Lambda_0$ and one endpoint on $\Lambda_1$ (either being a starting point). These Reeb chords on $\Lambda_0 \cup \Lambda_1$ are called the {\bf mixed} Reeb chords. In order to define the differential, we will identify the above vector space with the underlying vector space of the linearised Legendrian contact homology complex of the link $\Lambda_0 \cup \phi^T_{\partial_z}(\Lambda_1)$, i.e.~the $\mathbf{k}$-vector space generated by all Reeb chords that start on $\Lambda_0$ and end on the translation $\phi^T_{\partial_z}(\Lambda_1)$ of $\Lambda_1$ in the positive $z$-direction. Note that the mixed chords on $\Lambda_0 \cup \Lambda_1$ are in bijective correspondence with the mixed chords on $\Lambda_0 \cup \phi^T_{\partial_z}(\Lambda_1)$. Here we require that $T \gg 0$ has been chosen sufficiently large, so that no chord starts on $\phi^T_{\partial_z}(\Lambda_1)$ and ends on $\Lambda_0$.
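To spell out the effect of the translation on lengths (a direct computation from the definitions, included for convenience): a mixed chord from $\Lambda_0$ to $\Lambda_1$ of length $\ell$, i.e.~of action $\mathfrak{a}=+\ell$, becomes a chord from $\Lambda_0$ to $\phi^T_{\partial_z}(\Lambda_1)$ of length $\ell+T$, while a mixed chord from $\Lambda_1$ to $\Lambda_0$ of length $\ell$, i.e.~of action $\mathfrak{a}=-\ell$, becomes a chord from $\Lambda_0$ to $\phi^T_{\partial_z}(\Lambda_1)$ of length $T-\ell$ (which is positive precisely when $T>\ell$).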
In particular, the length of a mixed chord $c$ above depends on the parameter $T$ and is not equal to the action $\mathfrak{a}(c)$ defined above; the relation between action and length is precisely $$\ell(c)=\mathfrak{a}(c)+T.$$ The remaining Reeb chords on the link $\Lambda_0 \cup \phi^T_{\partial_z}(\Lambda_1)$ have both endpoints either on $\Lambda_0$ or on $\phi^T_{\partial_z}(\Lambda_1)$, and are called {\bf pure}. Note that the Reeb chords on $\phi^T_{\partial_z}(\Lambda_1)$ are in bijective correspondence with those of $\Lambda_1$. In fact, their Chekanov--Eliashberg algebras are even canonically isomorphic. The differential is the linearised Legendrian contact homology differential induced by a choice of almost complex structure, together with the augmentations $\varepsilon_i$ for the Chekanov--Eliashberg algebras $\mathcal{A}(\Lambda_i)$ generated by the pure chords. This version of a Floer complex defined via the Chekanov--Eliashberg algebra was originally considered in \cite{DualityLeg}; also see \cite{Dimitroglou:Cthulhu} for a more recent realisation. We now give a sketch of the definition of the differential. It is, roughly speaking, defined by counts of rigid pseudoholomorphic discs in $\overline{W}$ with \begin{itemize} \item boundary on the Lagrangian immersion $\Pi(\Lambda_0 \cup \phi^T_{\partial_z}(\Lambda_1)) \subset \overline{W}$; \item precisely one positive puncture at a double point which corresponds to a mixed chord -- this is the input; \item precisely one negative puncture at a double point which corresponds to a mixed chord -- this is the output; and \item several additional negative punctures at double points which correspond to pure chords. \end{itemize} When counting such a disc, one weights the count by the values of the augmentations $\varepsilon_i$ on the pure chords at the additional negative punctures. This is a part of the so-called linearised differential induced by the augmentation, as defined in \cite{DiffAlg}; also see the notion of bilinearised Legendrian contact homology as defined by Bourgeois--Chantraine in \cite{Bilinearised}. From positivity of the symplectic area of such pseudoholomorphic discs together with Stokes' theorem one obtains that the Reeb chord length of the input chord must be larger than that of the output chord. In other words, the complex is filtered in the precise sense defined in Section \ref{sec:barcode}. From the index formula for the expected dimension of pseudoholomorphic discs, it follows that the degree of the input is one greater than the degree of the output; i.e.~the differential is of degree $-1$. \subsubsection{Maslov potential and grading} \label{sec:Maslov} In order to define the grading in Lagrangian Floer homology the technique of Maslov potentials is useful. The construction of a Maslov potential is originally due to Seidel \cite{Seidel:Graded}. If a Maslov potential can be defined then the grading is well-defined in $\mathbf{Z}$; in general, the potential is only well-defined modulo the Maslov number $\mu \in \mathbf{Z}$ (the generator of the subgroup of $\mathbf{Z}$ which is the image of the Maslov class), and in that case the grading is only defined in $\mathbf{Z}/\mu\mathbf{Z}$. In any case the differential is of degree $-1$ with our conventions (i.e.~it decreases the grading). Assume that $\overline{W}$ has vanishing first Chern class; this is e.g.~the case when $\overline{W}$ has a symplectic trivialisation, which is automatic when $\dim_{\mathbf{R}} \overline{W}=2$. The $\mathbf{Z}$-grading of the generators is defined as follows.
First, one makes the choice of a trivialisation of the determinant bundle $$\mathbf{C}^* \to \det_{\mathbf{C}} T\overline{W} \to \overline{W}$$ induced by some choice of a compatible almost complex structure. There is an induced bundle with fibre $$\mathbf{C}^*/{\sim}=(\mathbf{R}^2\setminus \{0\})/{\sim}=\mathbf{R} P^1 \cong S^1,$$ which admits a lift to an affine $\mathbf{R}$-bundle via the universal cover $\mathbf{R} \to S^1.$ Second, one makes the choice of a Maslov potential for each of the Legendrians $\Lambda_i$; this is a lift of the canonically defined section $\det_{\mathbf{R}}T\Pi(\Lambda_i)/{\sim}$ of the above $S^1$-bundle to the associated $\mathbf{R}$-bundle. Recall that a non-zero Maslov class is the obstruction to the existence of such a lift. When a Maslov potential exists and the Legendrian is connected, two different choices of Maslov potentials differ by the addition of an integer. Finally, the grading of a generator $c \in CF_*((\Lambda_0,\varepsilon_0),(\Lambda_1,\varepsilon_1))$ is obtained in the following manner. Consider the path of Lagrangian planes given by rotating the Lagrangian plane $T_{\Pi(c)}\Pi(\Lambda_0) \subset T_{\Pi(c)}\overline{W}$ to $T_{\Pi(c)}\Pi(\Lambda_1) \subset T_{\Pi(c)}\overline{W}$ through the smallest possible positive K\"{a}hler angles. This path induces a continuous deformation of the Maslov potential of $\Lambda_0$ at the point $\Pi(c)$; denote by $\mu_0 \in \mathbf{R}$ the new value, and by $\mu_1$ the Maslov potential of $\Lambda_1$ at $\Pi(c)$. By construction, the deformed Maslov potential of $\Lambda_0$ at $\Pi(c)$ and the Maslov potential of $\Lambda_1$ at $\Pi(c)$ are now lifts to $\mathbf{R}$ of the same point in $S^1$. The grading is the number $\mu_0-\mu_1 \in \mathbf{Z}$, which is an integer by the last property. \begin{lma} \label{lma:Maslov} \begin{enumerate} \item Let $\phi^1 \colon W \times \mathbf{R} \to W \times \mathbf{R}$ be the time-one map of a compactly supported contact isotopy. For any choice of Maslov potential on the Legendrian $\Lambda$ there is an induced Maslov potential on its image $\phi^1(\Lambda) \subset W \times \mathbf{R}$, uniquely defined by the property that the Maslov potentials extend over the exact Lagrangian cobordism from $\Lambda$ to $\phi^1(\Lambda)$ induced by the isotopy. \item If $\phi^1$ is the time-one map of a generic $C^1$-small contact isotopy, then the small chords of $\Lambda \cup \phi^1(\Lambda)$ are in bijective correspondence with the critical points of a $C^1$-small Morse function $f \colon \Lambda \to \mathbf{R}$, and the above grading coincides with the Morse index. \end{enumerate} \end{lma} \begin{proof} (1) The trace of the Legendrian isotopy can be made into a Lagrangian cylinder inside the symplectisation $$(\mathbf{R}_t \times \overline{W} \times \mathbf{R}_z,d(e^t\alpha_{st}))$$ with cylindrical ends over the initial and final Legendrian; see work \cite{LagrConc} by Chantraine. The Maslov potential of $\Lambda$ induces a Maslov potential on the negative end of this cobordism. This Maslov potential can be extended to the entire cobordism by elementary topology (it is a Lagrangian cylinder). The induced Maslov potential on the positive end is the sought Maslov potential on $\phi^1(\Lambda)$. (2) This computation is standard, and can be performed in a small neighbourhood of $\Lambda$. Recall that any Legendrian $\Lambda$ has a standard neighbourhood which is contactomorphic to a neighbourhood of the zero section $j^10 \subset J^1\Lambda$, under which $\Lambda$ moreover is identified with $j^10$; see \cite{Geiges:Intro}.
The perturbation can be assumed to be given by the one-jet $j^1f$ of some $C^1$-small smooth function $f \colon \Lambda \to \mathbf{R}$ in the same neighbourhood. \end{proof} Note that in the case when $\overline{W}$ is of dimension $\dim_{\mathbf{R}}\overline{W}=2$, the tangent bundle $T\overline{W}$ always admits a symplectic trivialisation. In the case when $\overline{W}=T^*S^1$ there is a canonically defined trivialisation in which the zero-section has a constant field of non-zero tangent vectors. With this trivialisation the zero-section obviously has a Maslov potential, which moreover is constant. The different symplectic trivialisations of $T^*S^1$ up to homotopy are in bijection with homotopy classes of maps $T^*S^1 \to S^1$, i.e.~cohomology classes in $H^1(T^*S^1)$. Note that there is a unique trivialisation for which the zero-section has a vanishing Maslov class; for the remaining trivialisations the zero-section does not admit a Maslov potential. \section{Examples that exhibit unbounded spectral norms} The following auxiliary result facilitates our computations, and will be invoked repeatedly. \begin{lma} \label{lma:spectralnormcomp} \begin{enumerate} \item Let $\phi^t \colon \Lambda_0 \hookrightarrow \overline{W} \times \mathbf{R}$ be a Legendrian isotopy of a closed Legendrian $\Lambda_0$ that admits a Maslov potential, and endow $\phi^1(\Lambda_0)$ with the Maslov potential induced from $\Lambda_0$ via the isotopy, as described in Part (1) of Lemma \ref{lma:Maslov}. Further assume that $\Lambda_0$ has no Reeb chords. If the complex $CF(\Lambda_0,\phi^1(\Lambda_0))$ has unique Reeb chord generators $c$ and $d$ in degrees $|d|=0$ and $|c|=\dim \Lambda_0$, then the spectral range satisfies $$ \rho(CF(\Lambda_0,\phi^1(\Lambda_0))) \ge |\ell(c)-\ell(d)|.$$ (In fact, it is even true that the spectral range is \emph{equal} to $\ell(c)-\ell(d)$, where this quantity moreover is positive, but we will not show this.) \item Assume that the complex $CF(\Lambda_0,\Lambda_1)$ is $\mathbf{Z}$-graded, acyclic, and has no generators in degrees $i+1$ or $i-2.$ If there are unique Reeb chords $c,d$ in the degrees $|c|=i$ and $|d|=i-1$, for some choice of symplectic trivialisation and Maslov potential, then the boundary depth satisfies the bound $$\beta(CF(\Lambda_0,\Lambda_1)) \ge \ell(c)-\ell(d).$$ \end{enumerate} \end{lma} \begin{proof} (1): This follows from invariance properties of the Floer homology. Note that the homology of $CF(\Lambda_0,\Lambda_0)$ has unique generators in degrees $0$ and $\dim\Lambda_0$, which represent the point class and the fundamental class in Morse homology. For degree reasons the Reeb chord generators $c$ and $d$ must both be cycles which are not boundaries. The two corresponding semi-infinite bars in the barcode have starting points that are separated by precisely $|\ell(c)-\ell(d)|$, as sought. (2): Acyclicity together with the degree assumptions implies that $\partial c=r \cdot d$ for some $r \neq 0$. The statement then follows by the second part of Corollary \ref{cor:depth}, since the Reeb chords form a compatible basis. \end{proof} \subsection{Legendrian isotopy of the unknot (Proof of Part (2) of Theorem \ref{mainthm})} \label{sec:unknot} Consider the contact manifold $J^1\mathbf{R}=\mathbf{R}_q \times \mathbf{R}_p \times \mathbf{R}_z$ with coordinates $q,p,z$ and contact form $dz-p\,dq$. Under the quotient $\mathbf{R}_q \to \mathbf{R}/\mathbf{Z}=S^1$ we obtain the angular coordinate $\theta$ induced by $\theta=q \mod 1$.
In other words, the aforementioned contact manifold $J^1\mathbf{R}$ is the universal cover of the contact manifold $J^1S^1=S^1 \times \mathbf{R}_p \times \mathbf{R}_z$ equipped with the standard contact form $dz-p\,d\theta$. First consider the standard Legendrian unknot $\Lambda_0 \subset J^1S^1$ with front projection as shown in Figure \ref{fig:u1}, which thus is contained inside the subset $J^1[-1/2,1/2] \subset J^1S^1$. The $p$-coordinate of this particular representative can be estimated in terms of the ratio of $a$ and $b$, which yields $$\Lambda_0 \subset \{|p| \le 2a/b\}.$$ Recall the well-known fact that $\Lambda_0$ has vanishing Maslov class and hence admits a Maslov potential. Further, this Legendrian has a unique transverse Reeb chord and its Chekanov--Eliashberg algebra is equal to the polynomial algebra in one variable of degree $1$ with no differential (either for $\mathbf{k}=\mathbf{Z}_2$ or for arbitrary $\mathbf{k}$ and the choice of bounding spin structure); see \cite{Etnyre:LegendrianContact}. In particular, its Chekanov--Eliashberg algebra admits the trivial augmentation. We also fix a Legendrian fibre $$F=F_{1/4,0}=\{1/4\} \times \mathbf{R}_p \times \{0\} \subset J^1[-1/2,1/2] \subset J^1S^1.$$ Note that the Reeb chords between any Legendrian $\Lambda$ and $F$ are in bijective correspondence with the intersection points of $\Lambda$ and the hypersurface $\{\theta=1/4\}$. Since $F$ has no Reeb chords, its Chekanov--Eliashberg algebra trivially admits an augmentation. We can thus define the Floer homology complex $CF(\Lambda_0,F)$, which is generated by two Reeb chords $c$ and $d$, where $0>\mathfrak{a}(c)>\mathfrak{a}(d)$ and $|c|=|d|+1$. Note that $CF(\Lambda_0,F)$ is an acyclic complex by invariance under Legendrian isotopy; after shrinking the unknot sufficiently, all mixed chords disappear. The goal is to construct a Legendrian isotopy $\Lambda_t \subset J^1S^1$ of the unknot confined to the subset $$\{|p| \le 2a/b\} \subset J^1 S^1$$ for which the boundary depth of $CF(\Lambda_t,F)$ becomes arbitrarily large as $t \to +\infty$. This isotopy will be constructed as the projection of an isotopy $\tilde{\Lambda}_t \subset J^1\mathbf{R}$ of the unknot inside the universal cover $J^1\mathbf{R} \to J^1S^1$. In fact, the Legendrian isotopy $\tilde{\Lambda}_t$ is very simple; it is the rescaling of $$\tilde{\Lambda}_0=\Lambda_0 \subset J^1[-1/2,1/2] \subset J^1\mathbf{R}$$ under the map $(q,p,z) \mapsto (e^t\cdot q,p,e^t\cdot z)$. It is easy to check that $CF(\tilde{\Lambda}_t,F)$ satisfies the property that the boundary depth goes to $+\infty$ as $t \to +\infty$. Indeed, these complexes are generated by the two unique transversely cut out Reeb chords $c_t$ and $d_t$ between $\tilde{\Lambda}_t$ and $F$ for all values $t>0$. These chords correspond to the two points of $\tilde{\Lambda}_t$ above $q=1/4$, which come from the two points of $\Lambda_0$ above $q=e^{-t}/4$; since the rescaling multiplies all $z$-coordinates by $e^t$, the difference $\ell(c_t) - \ell(d_t)$ becomes arbitrarily large as $t \to +\infty$; c.f.~Part (2) of Lemma \ref{lma:spectralnormcomp}. What remains to prove are the following two claims for the projection $\Lambda_t \subset J^1S^1$ of the Legendrian rescaling $\tilde{\Lambda}_t \subset J^1\mathbf{R}$. First, we claim that $\Lambda_t$ indeed is a Legendrian isotopy. Second, we show that the boundary depth of $CF(\Lambda_t,F)$ goes to $+\infty$ as $t \to +\infty$. The fact that $\Lambda_t$ is a Legendrian isotopy can be seen by considering the sequence of front projections; see Figures \ref{fig:u2} and \ref{fig:u3}.
In addition to an isotopy of the front, the front also undergoes a sequence of \emph{RIII}-moves together with the composite move shown in Figure \ref{fig:r}. Then we need to estimate the boundary depth of the sequence of Floer complexes $CF(\Lambda_t,F)$. In addition to the Reeb chords $c_t$ and $d_t$, which correspond to the mixed Reeb chords on the lift and have exactly the same actions, there are additional Reeb chords between $\Lambda_t$ and $F$ that appear as $t \to +\infty$. Nevertheless, we claim that the boundary depth of $CF(\Lambda_t,F)$ is still bounded from below by the boundary depth $\beta(CF(\tilde{\Lambda}_t,F)).$ To see the last claim, we will consider different gradings of the complexes $CF(\Lambda_t,F)$, obtained by changing the symplectic trivialisation of $T^*S^1$. Note that $\Lambda_0$ is null-homotopic inside $J^1S^1$ and thus has a vanishing Maslov class independently of the choice of symplectic trivialisation. Moreover, the chords $c_t$ and $d_t$ always satisfy $|c_t|-|d_t|=1$ regardless of the choice of Maslov potential and symplectic trivialisation. We claim that, after changing the symplectic trivialisation of $T^*S^1$ by introducing a sufficiently large number $N \gg 0$ of rotations of the standard symplectic frame as one traverses $\theta=1/2$, all generators $c'$ in the complex except $c_t$ and $d_t$ acquire degrees that satisfy $$|c'|- |c_t| \notin [-10,10].$$ (Roughly speaking, such a change of trivialisation shifts the degree of a generator by a multiple of $N$ determined by how many times the corresponding strand of $\Lambda_t$ wraps around the base $S^1$; taking $N$ sufficiently large pushes every additional generator out of the above range of degrees.) Since these degree properties can be achieved, the statement now follows directly by Part (2) of Lemma \ref{lma:spectralnormcomp}.\qed \begin{figure}[htp] \vspace{3mm} \labellist \pinlabel $a$ at -5 53 \pinlabel $b$ at 68 -7 \pinlabel $\color{blue}\Lambda_0$ at 55 63 \pinlabel $\color{red}d_0$ at 78 58 \pinlabel $\color{red}c_0$ at 89 33 \pinlabel $F$ at 77 32 \pinlabel $z$ at 68 101 \pinlabel $\theta$ at 135 38 \pinlabel $\frac{1}{2}$ at 122 29 \pinlabel $-\frac{1}{2}$ at 10 29 \endlabellist \includegraphics{u1} \vspace{3mm} \caption{The standard Legendrian unknot $\Lambda_0$ and the Legendrian fibre $F$. Note that there are precisely two transverse Reeb chords $c_0,d_0$ between $F$ and $\Lambda_0$.} \label{fig:u1} \end{figure} \begin{figure}[htp] \vspace{3mm} \labellist \pinlabel $\color{blue}\Lambda_2$ at 55 73 \pinlabel $\color{blue}\tilde{\Lambda}_2$ at 55 170 \pinlabel $\color{red}d_2$ at 97 42 \pinlabel $\color{red}c_2$ at 110 22 \pinlabel $\color{red}d_2$ at 97 150 \pinlabel $\color{red}c_2$ at 110 131 \pinlabel $z$ at 87 89 \pinlabel $z$ at 87 198 \pinlabel $\theta$ at 153 26 \pinlabel $q$ at 183 135 \pinlabel $\frac{1}{2}$ at 140 144 \pinlabel $\frac{1}{2}$ at 34 144 \pinlabel $\frac{1}{2}$ at 140 34 \pinlabel $\frac{1}{2}$ at 34 34 \endlabellist \includegraphics{u2} \caption{Above: $\tilde{\Lambda}_2$ has a front which is a linear rescaling of the front of $\Lambda_0$ inside $J^1\mathbf{R}$. Below: $\Lambda_2$ is the projection of $\tilde{\Lambda}_2$ inside $J^1S^1$. 
In addition to the mixed chords $c_t$ and $d_t$ that exist for the lift, there are now additional mixed chords.} \label{fig:u2} \end{figure} \begin{figure}[htp] \vspace{3mm} \labellist \pinlabel $\color{blue}\Lambda_t$ at 40 87 \pinlabel $\color{red}d_t$ at 67 42 \pinlabel $\color{red}c_t$ at 80 22 \pinlabel $z$ at 57 108 \pinlabel $\theta$ at 123 26 \pinlabel $\frac{1}{2}$ at 110 34 \pinlabel $\frac{1}{2}$ at 4 34 \endlabellist \includegraphics{u3} \caption{This shows the projection $\Lambda_t$ of the rescaling $\tilde{\Lambda}_t$ under the universal cover $J^1\mathbf{R} \to J^1S^1$.} \label{fig:u3} \end{figure} \subsection{Legendrian isotopy of the zero-section (Proof of Part (1) of Theorem \ref{mainthm})} We use the same coordinates as in Section \ref{sec:unknot} above. In fact, the sought Legendrian isotopy is also constructed in a manner similar to the construction of $\Lambda_t$ given there, by performing a rescaling of a part of the front inside the universal cover $J^1\mathbf{R}$ (and then projecting back to $J^1S^1$). The isotopy is shown in Figures \ref{fig:d1} and \ref{fig:d4}. One starts by considering a Legendrian perturbation $j^1f$ of the zero-section $j^10$ for which there are precisely two Reeb chords between the two. Then one performs an \emph{RII}-move. Rescaling the front of the Legendrian introduced by the \emph{RII}-move in the universal cover $\mathbf{R}^2$ and then projecting back to $S^1 \times \mathbf{R}$ is again a Legendrian isotopy. In Figure \ref{fig:d4} one sees that there are exactly two chords between $j^10$ and the produced Legendrians, while the difference in action between these two generators grows indefinitely as $t \to +\infty$. \qed \begin{figure}[htp] \vspace{3mm} \labellist \pinlabel $z$ at 56 90 \pinlabel $\theta$ at 122 27 \pinlabel $z$ at 203 90 \pinlabel $\theta$ at 269 27 \pinlabel $\frac{1}{2}$ at 110 35 \pinlabel $-\frac{1}{2}$ at -1 35 \pinlabel $\frac{1}{2}$ at 257 35 \pinlabel $-\frac{1}{2}$ at 146 35 \pinlabel $\color{red}d$ at 3 17 \pinlabel $\color{red}d$ at 101 17 \pinlabel $\color{red}c$ at 52 32 \pinlabel $\color{red}d$ at 150 17 \pinlabel $\color{red}d$ at 248 17 \pinlabel $\color{red}c_0$ at 197 40 \pinlabel $\color{blue}\Lambda$ at 35 7 \pinlabel $\color{blue}\Lambda_0$ at 185 7 \endlabellist \includegraphics{d1} \caption{Left: A Legendrian perturbation of the zero section. The vertical chords denote the two Reeb chords between the zero-section $j^10$ and the perturbation. 
Right: The perturbed version of the zero-section after a suitable Legendrian \emph{RI}-move.} \label{fig:d1} \end{figure} \begin{figure}[htp] \vspace{3mm} \labellist \pinlabel $\frac{1}{2}$ at 166 170 \pinlabel $\frac{2}{2}$ at 215 170 \pinlabel $-\frac{2}{2}$ at 6 170 \pinlabel $-\frac{1}{2}$ at 56 170 \pinlabel $z$ at 113 243 \pinlabel $z$ at 113 115 \pinlabel $\theta$ at 178 26 \pinlabel $q$ at 235 161 \pinlabel $\frac{1}{2}$ at 166 34 \pinlabel $-\frac{1}{2}$ at 56 34 \pinlabel $\color{red}d$ at 157 153 \pinlabel $\color{red}d$ at 59 153 \pinlabel $\color{red}c_t$ at 107 182 \pinlabel $\color{red}d$ at 157 17 \pinlabel $\color{red}d$ at 59 17 \pinlabel $\color{red}c_t$ at 107 46 \pinlabel $\color{blue}\Lambda_t$ at 100 10 \endlabellist \includegraphics{d4} \caption{$\Lambda_t$ is obtained from $\Lambda_0$ by a linear rescaling of the front inside $\{ z \ge 0\}$ in the universal cover $J^1\mathbf{R}$ followed by the canonical projection $J^1\mathbf{R} \to J^1S^1.$ The front of $\Lambda_t$ undergoes the composite move shown in Figure \ref{fig:r} consisting of two consecutive \emph{RII}-moves along with \emph{RIII}-moves.} \label{fig:d4} \end{figure} \subsection{Hamiltonian isotopy on the punctured torus (Proof of Theorem \ref{mainthm:torus})} We consider the exact Lagrangian embedding $L \subset (\Sigma_{1,1},d\lambda)$ of $S^1$ which is given as the image of $\{p=0\} \subset \mathbf{R}^2$ under the quotient construction in Section \ref{sec:torus}; see Figure \ref{fig:t1}. We then take a small Hamiltonian perturbation $L'$ of $L$ that intersects the original Lagrangian transversely in precisely two points $c$ and $d$. The spectral norm is thus $\gamma(CF(L,L'))=\ell(c)-\ell(d)$. Then consider the autonomous Hamiltonian $$\rho \colon \Sigma_{1,1} \to \mathbf{R}_{\ge 0}$$ with support inside $\{q \in [-\delta,\delta]\}$ for some small $\delta>0$, and which is given by a smooth bump-function $\rho(q) \ge 0$ in one variable of the form \begin{itemize} \item $\rho(q)\equiv 1$ in a neighbourhood of $q=0$; \item $\rho(q)=\rho(-q)$; \item and $\rho'(q) \ge 0$ for $q<0$. \end{itemize} The Hamiltonian isotopy $\phi^t_{\rho}$ wraps the region $q \in (-\delta,0)$ in the negative $p$-direction, while it wraps the region $q \in (0,\delta)$ in the positive $p$-direction. We claim that $CF(L,\phi^t_\rho(L'))$ has a spectral norm which becomes arbitrarily large as $t \to +\infty$. What is clear is that $\ell(c)-\ell(d) \to +\infty$ as $t \to +\infty$. (Use e.g.~Lemma \ref{lma:lift}.) Again there are additional generators that appear as $t \to +\infty$, so knowing that $\ell(c)-\ell(d) \to +\infty$ is not sufficient. As in Section \ref{sec:unknot}, a change of symplectic trivialisation can again give us what we need. First consider the canonical symplectic trivialisation, induced by the trivialisation of $\mathbf{R}^2$ and the quotient projection. Then deform this trivialisation by making a number $N \gg 0$ of full rotations of the standard symplectic frame (relative to the constant one) as one traverses the cycle $\{p=1\}$. Note that the Lagrangian corresponding to $\{p=0\}$ still has a Maslov potential after this change of trivialisation. Again it is readily seen that all generators $c'$ except $c$ and $d$ satisfy the property $$|c'|-|c| \notin [-10,10]$$ after choosing $N \gg 0$ sufficiently large, while $|c|-|d|=1$ is always satisfied. The spectral norm can finally be computed by invoking Part (1) of Lemma \ref{lma:spectralnormcomp}. 
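Concretely, the wrapping behaviour of $\phi^t_\rho$ can be read off from the Hamiltonian vector field. With the sign convention $\iota_{X_\rho}\,d\lambda=d\rho$ and $d\lambda=dq\wedge dp$ (the choice of conventions that matches the wrapping directions described above), one computes
$$X_\rho=-\rho'(q)\,\partial_p,$$
which vanishes near $q=0$ and outside of $\{q \in [-\delta,\delta]\}$, points in the negative $p$-direction on $\{q \in (-\delta,0)\}$ where $\rho' \ge 0$, and points in the positive $p$-direction on $\{q \in (0,\delta)\}$ where $\rho' \le 0$. In particular, a neighbourhood of the core $\{q=0\}$ is fixed, as in Figure \ref{fig:t2}.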
\begin{figure}[htp] \vspace{3mm} \labellist \pinlabel $q$ at 182 86 \pinlabel $p$ at 87 184 \pinlabel $\color{red}c$ at 91 80 \pinlabel $\color{red}d$ at 148 81 \pinlabel $L$ at 122 80 \pinlabel $\color{blue}L'$ at 122 102 \pinlabel $\Sigma_{1,1}$ at 295 25 \pinlabel $\color{blue}L'$ at 344 80 \pinlabel $L$ at 305 70 \endlabellist \includegraphics{t1} \caption{The left depicts a domain in $\mathbf{R}^2$ with piecewise smooth boundary. After identifying the two horizontal pieces of the boundary, as well as the two vertical pieces, one obtains the Liouville domain shown on the right, with Liouville form described in Section \ref{sec:torus}. The closed exact Lagrangian $L$ is the image of $\{p=0\}$ and $L'$ is a small Hamiltonian perturbation of $L$.} \label{fig:t1} \end{figure} \begin{figure}[htp] \vspace{3mm} \labellist \pinlabel $q$ at 384 86 \pinlabel $p$ at 289 184 \pinlabel $q$ at 182 86 \pinlabel $p$ at 87 184 \pinlabel $\color{red}c$ at 91 80 \pinlabel $\color{red}d$ at 148 81 \pinlabel $L$ at 122 80 \pinlabel $L$ at 333 80 \pinlabel $\color{blue}\phi^t_{\rho}(L')$ at 130 102 \pinlabel $\color{blue}\phi^{t'}_{\rho}(L')$ at 330 102 \endlabellist \includegraphics{t2} \caption{A Hamiltonian isotopy that wraps the Lagrangian $\{p=0\}$ around the one-handle with core $\{q=0\}$, while fixing a neighbourhood of the latter core. Note that the Hamiltonian function is positive but constant near $q=0$. Here $t'>t$.} \label{fig:t2} \end{figure} \section{Proof of Theorem \ref{thm:ambientsurgery}} By definition, our two Floer complexes are the linearised Legendrian contact homology complexes, each generated as a $\mathbf{k}$-vector space by the mixed Reeb chords on the Legendrian link $$ \Lambda_\pm \cup \phi^T_{\partial_z}(\Lambda).$$ Here $T \gg 0$ is fixed but sufficiently large. The cusp-connected sum performed on $\Lambda_- \cup \phi^T_{\partial_z}(\Lambda)$ produces $\Lambda_+ \cup \phi^T_{\partial_z}(\Lambda)$ (of course, only the first component is affected). There is an associated exact standard Lagrangian handle-attachment cobordism $$ \mathcal{L} \subset (\mathbf{R}_t \times \overline{W} \times \mathbf{R}_z,d(e^t\alpha_{st})) $$ inside the symplectisation as constructed in \cite{Dimitroglou:Ambient}. This is a cobordism with cylindrical ends from $$\Lambda_- \cup \phi^T_{\partial_z}(\Lambda) \:\:\:\text{to}\:\:\:\Lambda_+ \cup \phi^T_{\partial_z}(\Lambda),$$ i.e.~from the Legendrian link before surgery (at the concave end) to the link after surgery (at the convex end). One component of this cobordism is simply the trivial cylinder $\mathbf{R} \times \phi^T_{\partial_z}(\Lambda)$. This Lagrangian cobordism induces a unital DGA-morphism $$\Phi_{\mathcal{L}} \colon \mathcal{A}(\Lambda_+ \cup \phi^T_{\partial_z}(\Lambda)) \to \mathcal{A}(\Lambda_- \cup \phi^T_{\partial_z}(\Lambda))$$ of the Chekanov--Eliashberg algebras. In particular, the choice of augmentation $\varepsilon_-$ of the Chekanov--Eliashberg algebra of $\Lambda_-$ pulls back to an augmentation $\varepsilon_+=\varepsilon_-\circ\Phi_{\mathcal{L}}$ of the Chekanov--Eliashberg algebra of $\Lambda_+$. The DGA morphism $\Phi_{\mathcal{L}}$ between the Chekanov--Eliashberg algebras after and before the surgery was computed in \cite[Theorem 1.1]{Dimitroglou:Ambient} under the assumption that the handle-attachment is sufficiently small. 
This computation in particular shows that the mixed chords $c$ on $\Lambda_+ \cup \phi^T_{\partial_z}(\Lambda)$ are mapped to $$\Phi_{\mathcal{L}}(c)=c+\sum_i r_i\mathbf{d}_i,\:\: r_i \in \mathbf{k},$$ where $\mathbf{d}_i$ are words of Reeb chords that each contain an \emph{odd} number of mixed chords of $\Lambda_- \cup \phi^T_{\partial_z}(\Lambda)$, and in which every mixed chord moreover is of length strictly less than $\ell(c)$. It now follows by purely algebraic considerations that the map $$ CF_*((\Lambda_+,\varepsilon_+),(\Lambda,\varepsilon)) \to CF_*((\Lambda_-,\varepsilon_-),(\Lambda,\varepsilon))$$ induced by linearising the DGA-morphism $\Phi_{\mathcal{L}}$ using the augmentations $\varepsilon$ and $\varepsilon_-$ (see \cite{Bilinearised} and \cite{Dimitroglou:Cthulhu}) is an action-preserving isomorphism of the Floer complexes, as claimed. \qed \bibliographystyle{alphanum}
\section{\modelName for the missing-at-random scenario} \label{app: MAR} In this section, we briefly explain why \modelName can handle the MAR problem by leveraging the results from \citet{rubin1976inference}. Let us denote by $\boldsymbol{r}_i$ the missing mask, where $r_{i,d}=1$ indicates that $x_{i,d}$ is observed. For the corresponding random variable $R$, we use $p_\phi(\boldsymbol{r}|{\mathbf x})$ as the missing mechanism with parameters $\phi$. To make the dependence of \modelName on its model parameters $\theta$ explicit, we use $p_\theta({\mathbf x})$ to denote the corresponding model density. We can now formally define the concept of MAR. \begin{definition}[Missing at Random \citep{rubin1976inference}] The missing data are missing at random if, for each value of $\phi$, $p_\phi(\boldsymbol{r}|{\mathbf x})$ takes the same value for all ${\mathbf x}_u$. Namely, $p_\phi(\boldsymbol{r}|{\mathbf x})=p_\phi(\boldsymbol{r}|{\mathbf x}_o)$. \end{definition} Recall that our \modelName is trained by maximizing the ELBO (Eq.\ref{eq:ELBO}) based on the observed values ${\mathbf x}_o$. However, this formulation ignores the missing mechanism $p_\phi(\boldsymbol{r}|{\mathbf x})$. In order to perform missing value imputation, one needs to ensure that the inference for $\theta$ is correct. In the following, we show that, under MAR, ignoring the missing mechanism does not affect the correctness of inferring $\theta$ with the ELBO. The following proof is an adaptation of Theorem 7.1 in \citet{rubin1976inference}. When explicitly modelling the missing mechanism, the joint likelihood can be written as \begin{align*} &\log p_{\theta,\phi}({\mathbf x}_o,\boldsymbol{r})\\ &=\log \int p_\theta({\mathbf x})p_\phi(\boldsymbol{r}|{\mathbf x})d{\mathbf x}_u\\ &=\log \int p_\theta({\mathbf x},\boldsymbol{z},\boldsymbol{G})p_\phi(\boldsymbol{r}|{\mathbf x})d\boldsymbol{z}d\boldsymbol{G}d{\mathbf x}_u\\ &=\log p_\phi(\boldsymbol{r}|{\mathbf x}_o)+\log\int p_\theta({\mathbf x},\boldsymbol{z},\boldsymbol{G})d\boldsymbol{z}d\boldsymbol{G}d{\mathbf x}_u\\ &\geq \log p_\phi(\boldsymbol{r}|{\mathbf x}_o)+\text{ELBO}(\theta), \end{align*} where the third equality follows from the definition of MAR and the last inequality is the standard ELBO derivation. The above derivation lower bounds the joint likelihood by two separate terms involving $\phi$ and $\theta$, respectively. Thus, when performing inference over $\theta$, one can safely ignore the missing mechanism involving $\phi$, resulting in the same optimization objective as Eq.\ref{eq:ELBO}. \section{Does \modelName respect the graph G in observational space} \label{app: respect graph G} From the formulation of the decoder in \modelName (Eq.\ref{eq:n2e} and \ref{eq:e2n}), the inferred graph ${\mathbf G}$ seems to define whether the information flow between nodes is allowed or not. Namely, when $G_{ij}=1$, information is allowed to pass from $z_{i}$ to $z_j$ at each iteration $t$. Thus, ${\mathbf G}$ directly defines a structure for the latent space ${\mathbf Z}$, and indirectly defines a structure in the observation ${\mathbf X}$ through the GNN updates and the final read-out layer. A natural question to ask is whether the resulting observations ${\mathbf x}$ from \modelName also respect the graph ${\mathbf G}$. In the following, we show that when the GNN is in equilibrium and the read-out layer is invertible without additional observational noise ($\sigma_x=0$), \modelName is in fact an SEM for the observation ${\mathbf x}$, which respects the graph ${\mathbf G}$. 
{In the following, for clarity of notation, we consider structure learning between individual variables. For group-wise relations it is trivial to generalize, since going from variable-wise to group-wise only changes the read-out layer, where we use $M$ different MLPs instead of one.} First, let us clarify what we mean by ``respecting a graph ${\mathbf G}$''. \begin{definition}[Respect a graph ${\mathbf G}$] For a given \modelName model $p({\mathbf x},{\mathbf z};{\mathbf G})$ with a specific graph ${\mathbf G}$, we say that the model $p({\mathbf x};{\mathbf G})=\int p({\mathbf x},{\mathbf z};{\mathbf G})d{\mathbf z}$ respects the graph ${\mathbf G}$ if it can be factorized as \[ p({\mathbf x};{\mathbf G})=\prod_{d=1}^Dp(x_d|PA(d);{\mathbf G}), \] where $PA(d)$ is the set of parents of node $d$ specified by the graph ${\mathbf G}$. \end{definition} \subsection{GNN at steady state} From the GNN message passing equations, we can re-organize Eq.\ref{eq:n2e} and \ref{eq:e2n} into one equation: \begin{equation} z_i^t=F(PA(i)^{t-1},z_i^{t-1}), \label{eq: abstract GNN update equation} \end{equation} where $PA(i)^{t-1}$ is the set of the parents' values for node $i$ at iteration $t-1$, $z_i^t$ is the value of node $i$ at iteration $t$, and $F(\cdot)$ represents the GNN message passing updates. The above equation resembles a fixed-point iteration procedure for the function $F$. Indeed, in the context of GNNs, this is a standard procedure for finding the equilibrium state due to its exponential convergence (\citealt{dai2018learning}, Eq.1; \citealt{gu2020implicit}, Eq.2(b)). Thus, we assume that the GNN update $F$ has unique equilibrium states given the initial conditions $AN^0(i)\cup z_i^0$ for each $i$, where $AN^0(i)$ represents the initial values of the ancestors of node $i$. For a sufficient condition for existence, one can refer to \citet[Theorem 4.1]{gu2020implicit}. We note that this is only a sufficient condition, meaning that a GNN without the conditions in \citet{gu2020implicit} can still have equilibrium states. Since discussing necessary and sufficient conditions for the existence of the equilibrium state is out of the scope of this paper, we simply assume that the function $F$ has steady states. The reason we consider the initial ancestor values rather than just the parent values is the message passing nature of the updates, where the value $PA(i)^t$ contains information from the nodes that are at most $t$ hops away. Since the graph ${\mathbf G}$ represents a DAG, one can always find a permutation $\pi$ of the original indices $i=1,\ldots,D$ based on a topological order. For concise notation, we assume the identity permutation. When the GNN is in equilibrium, we can rewrite Eq.\ref{eq: abstract GNN update equation} as \begin{equation} z_i^\infty = F(PA^\infty(i),z_i^\infty), \label{eq: Equilibrium GNN} \end{equation} where the superscript $\infty$ represents the steady state of the node. From the assumption, since the steady state $z_i^\infty$ depends on the initial values $AN^0(i)\cup z_i^0$, it is straightforward to see that the steady state $PA^\infty(i)$ depends on $AN^0(i)$. Therefore, the steady state $z_i^\infty$ is uniquely determined by $PA^\infty(i)$ and $z_i^0$. Namely, \begin{equation} z_i^\infty = H_i(PA^\infty(i),z_i^0) \label{eq: Equilibrium GNN z_0} \end{equation} for $i=1,\ldots,D$, where $H_i$ is a mapping from $PA^\infty(i)$ and $z_i^0$ to the steady state of node $i$. This is exactly the general form of a \emph{structural equation model} (SEM) defined by the graph ${\mathbf G}$. 
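For concreteness, the following is a minimal PyTorch sketch of iterating the update in Eq.\ref{eq: abstract GNN update equation}. This is our own illustration rather than the exact implementation: the module names and the dense representation of ${\mathbf G}$ are assumptions, and only forward messages (MLP$^f$) are shown, whereas the decoder of \modelName also uses backward messages (MLP$^b$).

\begin{verbatim}
import torch
import torch.nn as nn

class MaskedMessagePassing(nn.Module):
    # Iterates z_i^t = F(PA(i)^{t-1}, z_i^{t-1}), where the (soft)
    # adjacency matrix G gates which messages are allowed to flow.
    def __init__(self, dim, T=3):
        super().__init__()
        self.T = T
        # Edge message computed from a (sender, receiver) pair of states.
        self.mlp_f = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                   nn.Linear(dim, dim))
        # Node update computed from the aggregated incoming messages.
        self.mlp_e2n = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                     nn.Linear(dim, dim))

    def forward(self, z, G):
        # z: (D, dim) node states; G: (D, D) with G[i, j] = edge i -> j.
        D = z.size(0)
        for _ in range(self.T):
            src = z.unsqueeze(1).expand(D, D, -1)            # sender z_i
            dst = z.unsqueeze(0).expand(D, D, -1)            # receiver z_j
            msg = self.mlp_f(torch.cat([src, dst], dim=-1))  # (D, D, dim)
            agg = (G.unsqueeze(-1) * msg).sum(dim=0)         # mask, sum senders
            z = z + self.mlp_e2n(agg)                        # update node states
        return z
\end{verbatim}

Running the loop until $\|z^t-z^{t-1}\|$ is small is exactly the fixed-point search for the steady state $z^\infty$ discussed above; in practice a fixed number $T$ of iterations is used instead.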
If we further assume that the read-out layer $g(\cdot)$ is invertible, we can obtain \begin{equation} x_i = g\left(H_i\left(g^{-1}\left(PA^\infty_x(i)\right),z_i^0\right)\right), \label{eq: Equilibrium GNN x} \end{equation} where $PA^\infty_x(i)$ denotes the parents' steady-state values mapped to observation space, which is also an SEM based on ${\mathbf G}$ for the observation ${\mathbf x}$. Thus, under the above assumptions, \modelName respects the graph ${\mathbf G}$. In practice, due to the exponential convergence of the fixed-point iteration, we found that one does not need a large number of iterations. To balance performance and computational cost, we found that $3$ iterations of GNN message passing are enough to obtain reasonable performance. \section{Experimental details}\label{app:details} Here we specify the complete experimental details for full reproducibility. We first provide all the details for the synthetic experiment. Then we explain the differences for the neuropathic pain and the Eedi topics experiments. \subsection{Synthetic experiment}\label{app:synthetic} \textbf{Data generation process}. To understand how the number of variables affects \modelName, we use $D=5,7,9$ variables (five datasets for each value of $D$). We first sample the underlying true structure. An edge from variable $i$ to variable $j$ is sampled with probability $0.5$ if $i<j$, and probability $0$ if $i\ge j$ (this ensures that the true structure is a DAG, which is just a standard scenario, and not a requirement for any of the compared algorithms). Then, we generate the data points. Root nodes (i.e. nodes with no parents, like variables 1 and 2 in \autoref{fig:synthetic_graph}(a) in the paper) are sampled from $\mathcal{N}(0,1)$. Any other node $v_i$ is obtained from its parents $\mathrm{Pa}(i)$ as $v_i=\sum_{j\in\mathrm{Pa}(i)}\sin(3v_j) + \varepsilon$, where $\varepsilon\sim\mathcal{N}(0,0.01)$ is Gaussian noise (see the code sketch below). We use the $\sin$ function to induce non-linear relationships between variables. Notice that the factor of $3$ inside the $\sin$ encourages the whole period of the $\sin$ function to be used (to favor non-linearity). To evaluate the imputation methods, 30\% of the test values are dropped. As an example of the data generation process, \autoref{fig:synthetic_data} below shows the pair plot for the dataset generated from the graph in \autoref{fig:synthetic_graph}(a) in the paper. \textbf{Model parameters}. We start by specifying the parameters associated with the generative process. We use a prior probability $p_{ij}=0.05$ in ${\mathrm{p}}({\mathbf G})$ for all the edges. This favours sparse graphs, and can be adjusted depending on the problem at hand. The prior ${\mathrm{p}}({\mathbf Z})$ is a standard Gaussian distribution, i.e. $\sigma_z^2=1$. This provides a standard regularisation for the latent space. The output noise is set to $\sigma_x^2=0.02$, which favours the accurate reconstruction of samples. As for the decoder, we perform $T=3$ iterations of GNN message passing. All the MLPs in the decoder (i.e. MLP$^f$, MLP$^b$, MLP$^{e2n}$ and $g$) have two linear layers with ReLU non-linearity. The dimensionality of the hidden layer, which is the dimensionality of each latent subspace, is $256$. Regarding the encoder, it is given by a multi-head neural network that defines the mean and standard deviation of the latent representation. The neural network is an MLP with two standard linear layers with ReLU non-linearity. The dimension of the hidden layer is also $256$. When using groups, there are as many such MLPs as groups. 
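The following is a minimal NumPy sketch of the data generation process described at the beginning of this subsection (our own reconstruction from the description, not the code used for the experiments):

\begin{verbatim}
import numpy as np

def simulate_dataset(D=5, n=1500, seed=0):
    rng = np.random.default_rng(seed)
    # Upper-triangular adjacency: edge i -> j with probability 0.5 for
    # i < j, which guarantees that the sampled structure is a DAG.
    G = np.triu(rng.random((D, D)) < 0.5, k=1).astype(int)
    X = np.zeros((n, D))
    for j in range(D):  # the indices are already topologically ordered
        parents = np.flatnonzero(G[:, j])
        if parents.size == 0:
            X[:, j] = rng.normal(0.0, 1.0, size=n)  # root node ~ N(0, 1)
        else:
            # v_j = sum_{i in Pa(j)} sin(3 v_i) + eps, eps ~ N(0, 0.01),
            # i.e. a noise standard deviation of sqrt(0.01) = 0.1.
            X[:, j] = (np.sin(3 * X[:, parents]).sum(axis=1)
                       + rng.normal(0.0, 0.1, size=n))
    return G, X
\end{verbatim}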
Finally, recall that the variational posterior ${\mathrm{q}}({\mathbf G})$ is the product of independent Bernoulli distributions over the edges, with a probability ${\mathbf G}_{ij}$ to be estimated for each edge. These values are all initialised to ${\mathbf G}_{ij}=0.5$. \textbf{Training hyperparameters}. We use the Adam optimizer with learning rate $0.001$. We train for $300$ epochs with a batch size of $100$ samples. Each one of the two stages described in the two-step training takes half of the epochs. The percentage of data dropped during training for each instance is sampled from a uniform distribution. When doing the reparametrization trick (i.e. when sampling from ${\mathbf Z}_n$), we obtain one sample during training ($100$ samples at test time). For the Gumbel-softmax sample, we use a temperature $\tau=0.5$. The rest of the hyperparameters are the standard ones in \texttt{torch.nn.functional.gumbel\_softmax}; in particular, we use soft samples. To compute the DAG regulariser $\mathcal{R}({\mathbf G})$, we use the exponential matrix implementation in \texttt{torch.matrix\_exp}. This is in contrast to previous approaches, which resort to approximations \cite{zheng2018dags, yu2019dag}. When applying the encoder, missing values in the training data are replaced with the value $0$ (continuous variables). \textbf{Baselines details}. Regarding the structure learning baselines, we ran both PC and GES with the Causal Command tool offered by the Center for Causal Discovery \url{https://www.ccd.pitt.edu/tools/}. We used the default parameters in each case (i.e. disc-bic-score for GES and cg-lr-test for PC). NOTEARS (L), NOTEARS (NL) and DAG-GNN were run with the code provided by the authors on GitHub: \url{https://github.com/xunzheng/notears} (NOTEARS (L) and NOTEARS (NL)) and \url{https://github.com/fishmoon1234/DAG-GNN} (DAG-GNN). In all cases, we used the default parameters proposed by the authors. Regarding the imputation baselines, Majority Vote and Mean Imputing were implemented in Python. MICE and Missforest were used from the Scikit-learn library with default parameters \url{https://scikit-learn.org/stable/modules/generated/sklearn.impute.IterativeImputer.html#sklearn.impute.IterativeImputer}. For PVAE, we use the authors' implementation with their proposed parameters, see \url{https://github.com/microsoft/EDDI}. \textbf{Other experimental details}. \modelName is implemented in PyTorch. The code is available in the supplementary material. The experiments were run using a local Tesla K80 GPU and a compute cluster provided by the Azure Machine Learning platform with NVIDIA Tesla V100 GPUs. \subsection{Neuropathic pain experiment}\label{app:neuropathic} \textbf{Data generation process}. We use the Neuropathic Pain Diagnosis Simulator in \url{https://github.com/TURuibo/Neuropathic-Pain-Diagnosis-Simulator}. We simulate five datasets with 1500 samples, and split each one randomly into 1000 training and 500 test samples. To evaluate the imputation methods, 30\% of the test values are dropped. These five datasets are used for the five independent runs reported in the experimental results. \textbf{Model and training hyperparameters}. Most of the hyperparameters are identical to the synthetic experiment. However, in this case we have to deal with 222 variables, many more than before. In particular, the number of possible edges is 49062. 
Therefore, we reduce the dimensionality of each latent subspace to $32$, the batch size to $25$, and the number of test-time samples for ${\mathbf Z}_n$ to $10$ (in training we still use one sample, as before). Moreover, we reduce the initial posterior probability for each edge to $0.2$. The reason is that, with the $0.5$ initialization, the DAG regulariser $\mathcal{R}({\mathbf G})$ evaluates to extremely high and unstable values for the $222\times 222$ matrix. Since this is a more complex problem, we run the algorithm for $1000$ epochs. When applying the encoder, missing values in the training data are replaced with the value $0.5$ (binary variables). \subsection{Eedi topics experiment}\label{app:eedi} \textbf{Data pre-processing}. The real-world Eedi topics dataset contains 6147 samples, and can be downloaded from the website \url{https://eedi.com/projects/neurips-education-challenge} (task3\_4 folder). The mapping from each question to its topics (also called ``subjects'') is given by the file ``data/metadata/question\_metadata\_task\_3\_4.csv''. For those questions that have more than one topic associated at the same level, we randomly sample one of them. The hierarchy of topics (recall \autoref{fig:eedi_hierarchy} in the paper) is given by the file ``data/metadata/subject\_metadata.csv''. We use a random 80\%-10\%-10\% train-validation-test split. The validation set is used to perform Bayesian Optimization (BO) as described below. The five runs reported in the experimental section come from different (random) initializations of the model parameters. \textbf{Model and training hyperparameters}. Here, we follow the same specifications as in the neuropathic pain dataset. The only difference is that we perform BO for three hyperparameters: the dimensionality of the latent subspaces, the number of GNN message passing iterations, and the learning rate. The possible choices for each hyperparameter are $\{5, 10, 15, 20, 25, 30, 35, 40, 45, 50\}$, $\{3, 5, 8, 10, 12, 14, 16, 18, 20\}$, and $\{10^{-4}, 10^{-3}, 10^{-2}\}$, respectively. We perform $39$ runs of BO with the hyperdrive package in the Azure Machine Learning platform \url{https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive?view=azure-ml-py}. We use validation accuracy as the target metric. The best configuration obtained through BO was $15$, $8$ and $10^{-4}$, respectively. \textbf{Baselines details}. As explained in the paper, in this experiment DAG-GNN is adapted to deal with missing values and groups of arbitrary size. For the former, we adapt the DAG-GNN code to replace missing values with the constant value $0.5$, as in \modelName. For the latter, we also follow \modelName and use as many different neural networks as groups (as described in the paper), all of them with the same architecture as the one used in the original code (\url{https://github.com/fishmoon1234/DAG-GNN}). \textbf{Other experimental details}. The list of relationships found by \modelName (\autoref{tab:full_relationships_vicause}) and DAG-GNN (\autoref{tab:full_relationships_dag_gnn}) aggregates the relationships obtained in the five independent runs. This is done by setting a threshold of $0.35$ on the posterior probability of an edge (which is initialized to $0.2$) and considering the union over the different runs. This resulted in $50$ relationships for \modelName and $57$ for DAG-GNN. For \textit{Random}, we simulated $50$ random relationships. 
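A minimal sketch of this aggregation step (our own illustration, assuming that each run yields a $D\times D$ matrix of posterior edge probabilities):

\begin{verbatim}
import numpy as np

def aggregate_runs(prob_matrices, threshold=0.35):
    # prob_matrices: list of (D, D) posterior edge-probability matrices,
    # one per independent run. An edge is kept if its probability passes
    # the threshold in at least one run (union over runs); the reported
    # probability is the average over the runs.
    probs = np.stack(prob_matrices)           # (runs, D, D)
    union = (probs > threshold).any(axis=0)
    mean_prob = probs.mean(axis=0)
    return [(i, j, mean_prob[i, j]) for i, j in zip(*np.nonzero(union))]
\end{verbatim}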
Also, the probability reported in the first column of \autoref{tab:full_relationships_vicause} is the average of the probabilities obtained for that relationship in the five different runs. \section{Additional figures and results} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{images/structured_latent_space.pdf} \vspace{3mm} \caption{Structured latent space. (a) At the level of variables. Each variable in ${\mathbf x}_n$ (each color) has its own latent subspace, which is given by a row in ${\mathbf Z}_n$. (b) At the level of groups of variables. Here, each group of variables (each color) has its own latent subspace, which is given by a row in ${\mathbf Z}_n$.} \label{fig:structured_latent_space} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{images/structured_mappings_cropped.pdf} \caption{The encoder respects the structure of the latent space. (a) At the level of variables. All the variables use the same encoding functions. (b) At the level of groups of variables. Each group of variables uses different encoding functions. \label{fig:structured_mappings}} \end{figure} \begin{figure*}[h] \centering \includegraphics[width=0.6\textwidth]{images/pairplot.png} \caption{Pair-plot for the dataset generated from the graph in \autoref{fig:synthetic_graph}(a) in the paper. We observe different types of relationships between variables, including non-linear ones.} \label{fig:synthetic_data} \end{figure*} \begin{table*}[h] \centering \begin{tabular}{rcccc} \toprule {} & \multicolumn{3}{c}{Number of variables} & \multirow{2}{*}{Average} \\ \cmidrule[0.5pt](lr){2-4} {} & 5 & 7 & 9 & {}\\ \midrule Majority vote & 0.5507$\pm$0.0056 & 0.5391$\pm$0.0050 & 0.5427$\pm$0.0050 & 0.5442$\pm$0.0032 \\ Mean imputing & 0.2351$\pm$0.0104 & 0.2124$\pm$0.0112 & 0.2143$\pm$0.0064 & 0.2206$\pm$0.0061 \\ MICE & 0.1352$\pm$0.0044 & 0.1501$\pm$0.0095 & 0.1230$\pm$0.0025 & 0.1361$\pm$0.0046 \\ Missforest & 0.1279$\pm$0.0040 & 0.1403$\pm$0.0030 & 0.1258$\pm$0.0022 & 0.1313$\pm$0.0025 \\ PVAE & 0.1324$\pm$0.0048 & 0.1536$\pm$0.0095 & 0.1360$\pm$0.0019 & 0.1407$\pm$0.0043 \\ \modelName & \textbf{0.1146$\pm$0.0026} & \textbf{0.1251$\pm$0.0055} & \textbf{0.1191$\pm$0.0015} & \textbf{0.1196$\pm$0.0024} \\ \bottomrule \end{tabular} \caption{Imputation results for the synthetic experiment in terms of RMSE (not aggregating by number of variables, $D=5,7,9$). The values are the mean and standard error over five different simulations. 
} \label{tab:synthetic_imputation_extended} \end{table*} \begin{table*}[h] \centering \begin{tabular}{rl} \toprule Index & Topic name \\ \midrule 1 & Decimals \\ 2 & Factors, Multiples and Primes \\ 3 & Fractions, Decimals and Percentage Equivalence \\ 4 & Fractions \\ 5 & Indices, Powers and Roots \\ 6 & Negative Numbers \\ 7 & Straight Line Graphs \\ 8 & Inequalities \\ 9 & Sequences \\ 10 & Writing and Simplifying Expressions \\ 11 & Angles \\ 12 & Circles \\ 13 & Co-ordinates \\ 14 & Construction, Loci and Scale Drawing \\ 15 & Symmetry \\ 16 & Units of Measurement \\ 17 & Volume and Surface Area \\ 18 & Basic Arithmetic \\ 19 & Factorising \\ 20 & Solving Equations \\ 21 & Formula \\ 22 & 2D Names and Properties of Shapes \\ 23 & Perimeter and Area \\ 24 & Similarity and Congruency \\ 25 & Transformations \\ \bottomrule \end{tabular} \caption{Mapping between indexes for row/column names in \autoref{tab:eedi_adj_level_2_vicause} and \autoref{tab:eedi_adj_level_2_random} and the actual level-2 topic names.} \label{tab:map_level_2} \end{table*} \begin{table* \centering \small \setlength{\tabcolsep}{3pt} \begin{tabular}{crccccccc} \toprule {} & {} & \multicolumn{3}{c}{Adjacency} & \multicolumn{3}{c}{Orientation} & \multirow{2}{*}[-0.2em]{\shortstack{Causal\\Accuracy}} \\ \cmidrule[0.5pt](lr){3-5} \cmidrule[0.5pt](lr){6-8} {} & {} & Recall & Precision & F$_{1}$-score & Recall & Precision & F$_{1}$-score & {} \\ \midrule \multirow{6}{*}{5} & PC & 0.464$\pm$0.099 & 0.610$\pm$0.117 & 0.526$\pm$0.107 & 0.364$\pm$0.098 & 0.490$\pm$0.127 & 0.416$\pm$0.111 & 0.436$\pm$0.076 \\ {} & GES & 0.414$\pm$0.067 & 0.507$\pm$0.071 & 0.446$\pm$0.065 & 0.257$\pm$0.103 & 0.327$\pm$0.117 & 0.285$\pm$0.110 & 0.368$\pm$0.072 \\ {} & NOTEARS (L) & 0.186$\pm$0.052 & 0.400$\pm$0.089 & 0.247$\pm$0.063 & 0.119$\pm$0.049 & 0.300$\pm$0.110 & 0.167$\pm$0.065 & 0.119$\pm$0.049 \\ {} & NOTEARS (NL) & 0.331$\pm$0.057 & 0.470$\pm$0.078 & 0.384$\pm$0.065 & 0.264$\pm$0.047 & 0.370$\pm$0.053 & 0.304$\pm$0.049 & 0.264$\pm$0.047 \\ {} & DAG-GNN & 0.381$\pm$0.130 & 0.433$\pm$0.121 & 0.399$\pm$0.127 & 0.231$\pm$0.067 & 0.283$\pm$0.073 & 0.249$\pm$0.068 & 0.231$\pm$0.067 \\ {} & \modelName & 0.971$\pm$0.026 & 0.598$\pm$0.059 & 0.730$\pm$0.047 & 0.574$\pm$0.111 & 0.356$\pm$0.085 & 0.432$\pm$0.093 & 0.971$\pm$0.026 \\ \midrule \multirow{6}{*}{7} & PC & 0.396$\pm$0.110 & 0.639$\pm$0.154 & 0.468$\pm$0.112 & 0.113$\pm$0.043 & 0.193$\pm$0.083 & 0.134$\pm$0.050 & 0.324$\pm$0.088 \\ {} & GES & 0.429$\pm$0.087 & 0.647$\pm$0.042 & 0.501$\pm$0.076 & 0.208$\pm$0.067 & 0.279$\pm$0.081 & 0.235$\pm$0.073 & 0.345$\pm$0.091 \\ {} & NOTEARS (L) & 0.222$\pm$0.059 & 0.526$\pm$0.124 & 0.309$\pm$0.078 & 0.176$\pm$0.041 & 0.436$\pm$0.109 & 0.248$\pm$0.058 & 0.176$\pm$0.041 \\ {} & NOTEARS (NL) & 0.315$\pm$0.094 & 0.513$\pm$0.119 & 0.382$\pm$0.104 & 0.269$\pm$0.074 & 0.453$\pm$0.105 & 0.330$\pm$0.084 & 0.269$\pm$0.074 \\ {} & DAG-GNN & 0.396$\pm$0.109 & 0.539$\pm$0.123 & 0.446$\pm$0.111 & 0.318$\pm$0.082 & 0.445$\pm$0.102 & 0.361$\pm$0.085 & 0.318$\pm$0.082 \\ {} & \modelName & 0.813$\pm$0.088 & 0.694$\pm$0.057 & 0.725$\pm$0.053 & 0.559$\pm$0.134 & 0.447$\pm$0.070 & 0.480$\pm$0.089 & 0.701$\pm$0.103 \\ \midrule \multirow{6}{*}{9} & PC & 0.406$\pm$0.072 & 0.654$\pm$0.053 & 0.491$\pm$0.060 & 0.176$\pm$0.020 & 0.302$\pm$0.045 & 0.219$\pm$0.024 & 0.229$\pm$0.041 \\ {} & GES & 0.514$\pm$0.065 & 0.553$\pm$0.050 & 0.525$\pm$0.049 & 0.282$\pm$0.057 & 0.308$\pm$0.068 & 0.291$\pm$0.061 & 0.379$\pm$0.069 \\ {} & NOTEARS (L) & 0.172$\pm$0.026 & 0.403$\pm$0.076 & 
0.238$\pm$0.036 & 0.151$\pm$0.023 & 0.366$\pm$0.082 & 0.211$\pm$0.035 & 0.151$\pm$0.023 \\ {} & NOTEARS (NL) & 0.338$\pm$0.042 & 0.485$\pm$0.053 & 0.394$\pm$0.045 & 0.297$\pm$0.034 & 0.429$\pm$0.044 & 0.347$\pm$0.036 & 0.297$\pm$0.034 \\ {} & DAG-GNN & 0.551$\pm$0.067 & 0.554$\pm$0.053 & 0.547$\pm$0.057 & 0.508$\pm$0.061 & 0.516$\pm$0.054 & 0.508$\pm$0.055 & 0.508$\pm$0.061 \\ {} & \modelName & 0.705$\pm$0.061 & 0.615$\pm$0.042 & 0.652$\pm$0.044 & 0.356$\pm$0.092 & 0.297$\pm$0.065 & 0.322$\pm$0.076 & 0.526$\pm$0.081 \\ \bottomrule \end{tabular} \caption{Structure learning results for the synthetic experiment (not aggregating by number of variables, $D=5,7,9$). The values are the mean and standard error over five different simulations.} \label{tab:synthetic_causality_extended} \end{table*} \begin{table*} \setlength{\tabcolsep}{3pt} \centering \small \begin{tabular}{rllllllll} \toprule {} & \multicolumn{8}{c}{Number of variables} \\ \cmidrule[0.5pt](lr){2-9} {} & 4 & 8 & 16 & 32 & 64 & 128 & 256 & 512 \\ \midrule PC & 2.49$\pm$0.62 & 5.19$\pm$1.01 & 8.14$\pm$1.64 & 14.99$\pm$1.59 & 21.65$\pm$2.19 & 26.11$\pm$1.70 & 30.21$\pm$2.01 & 35.43$\pm$1.56 \\ GES & 0.21$\pm$0.02 & 1.12$\pm$0.41 & 1.80$\pm$0.80 & 2.28$\pm$0.78 & 2.76$\pm$1.01 & 3.34$\pm$0.52 & 3.87$\pm$0.66 & 4.10$\pm$0.71 \\ NOTEARS (L) & 8.91$\pm$2.34 & 21.04$\pm$3.43 & 38.53$\pm$2.52 & 56.11$\pm$3.23 & 91.11$\pm$4.15 & 140.33$\pm$3.53 & 331.04$\pm$6.55 & 378.21$\pm$9.12 \\ NOTEARS (NL) & 12.94$\pm$2.18 & 31.03$\pm$3.11 & 54.08$\pm$4.10 & 89.35$\pm$4.11 & 99.32$\pm$5.12 & 240.43$\pm$5.39 & 364.92$\pm$3.22 & 469.43$\pm$4.77 \\ DAG-GNN & 13.62$\pm$2.93 & 30.04$\pm$2.48 & 52.01$\pm$3.81 & 88.12$\pm$79 & 112.33$\pm$5.01 & 255.11$\pm$6.93 & 371.22$\pm$5.32 & 498.09$\pm$5.01 \\ \modelName & 10.27$\pm$1.98 & 25.11$\pm$5.21 & 47.98$\pm$3.12 & 76.12$\pm$4.40 & 101.12$\pm$4.23 & 201.59$\pm$6.33 & 340.10$\pm$8.22 & 421.11$\pm$5.33\\ MMHC & 7.85$\pm$1.02 & 69.10$\pm$5.32 & 542.92$\pm$9.82 & 1314.76$\pm$9.10 & NA & NA & NA & NA \\ Tabu & 2.01$\pm$0.72 & 7.45$\pm$1.03 & 24.08$\pm$5.93 & 57.87$\pm$3.85 & 77.87$\pm$5.52 & 128.67$\pm$4.09 & 163.33$\pm$6.55 & 219.05$\pm$3.42 \\ HillClimb & 1.52$\pm$0.64 & 6.98$\pm$1.23 & 22.10$\pm$5.32 & 51.78$\pm$4.06 & 75.29$\pm$5.84 & 121.92$\pm$4.71 & 157.82$\pm$6.87 & 209.54$\pm$5.01 \\ \bottomrule \end{tabular} \caption{Running times (in minutes) for different structure learning approaches in an extended synthetic experiment. For each number of variables, three datasets were simulated following the same data generation process described above, and the results show the mean and standard error. Notice that we have considered three additional baselines (MMHC, Tabu, HillClimb). We observe three different types of methods. MMHC scales poorly (NA means that the training took more than 24 hours), probably due to its hybrid nature that combines constraint-based and score-based approaches. \modelName and the other deep learning based methods (DAG-GNN, NOTEARS) can scale to large numbers of variables. Of course, simpler methods such as PC, GES, Tabu, and HillClimb are significantly faster than \modelName (note also that these baselines are from highly optimized libraries that leverage e.g. dynamic programming and parallelization), but their performance is worse. 
Indeed, the structure learning performance for this experiment is shown in \autoref{tab:causality_appendix}.\label{tab:synthetic_times}} \end{table*} \begin{table* \centering \small \begin{tabular}{rrrrrrrrrrrrrrrrrrrrrrrrrr} \toprule {} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 \\ \midrule 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 0 & 2 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 6 & 0 & 5 & 0 & 0 & 1 & 6 & 0 & 0 & 0 & 2 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 4 & 0 & 0 & 2 & 0 & 0 & 0 & 0 \\ 7 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 9 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 10 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 2 & 0 & 0 & 0 & 0 \\ 11 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 12 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 13 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 14 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 15 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 16 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 17 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 18 & 0 & 3 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 19 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 20 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 21 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 22 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 23 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 24 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 25 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \bottomrule \end{tabular} \caption{How the 50 relationships found by \modelName are distributed across level 2 topics. The item $(i,j)$ refers to edges in the direction $i\to j$. There are 18 relationships inside level 2 topics (36\%). 
See \autoref{tab:map_level_2} for a mapping between indexes shown here in row/column names and the actual level-2 topic names.} \label{tab:eedi_adj_level_2_vicause} \end{table*} \begin{table* \centering \small \begin{tabular}{rrrrrrrrrrrrrrrrrrrrrrrrrr} \toprule {} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 \\ \midrule 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 4 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 5 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 6 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 7 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 9 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 10 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 11 & 0 & 3 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 3 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & 0 \\ 12 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 13 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 \\ 14 & 0 & 2 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ 15 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 16 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 17 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 18 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\ 19 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 20 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 21 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 22 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 23 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 24 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 25 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 2 \\ \bottomrule \end{tabular} \caption{How the 57 relationships found by DAG-GNN are distributed across level 2 topics. The item $(i,j)$ refers to edges in the direction $i\to j$. There are 8 relationships inside level 2 topics (14\%). 
See \autoref{tab:map_level_2} for a mapping between indexes shown here in row/column names and the actual level-2 topic names.} \label{tab:eedi_adj_level_2_dag_gnn} \end{table*} \begin{table* \centering \small \begin{tabular}{rrrrrrrrrrrrrrrrrrrrrrrrrr} \toprule {} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 \\ \midrule 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 4 & 0 & 3 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 7 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 9 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 10 & 0 & 2 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 11 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 12 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 13 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 14 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 15 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 16 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 17 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 18 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 19 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 20 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 21 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 22 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 23 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\ 24 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 25 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \bottomrule \end{tabular} \caption{How the 50 relationships found by \textit{Random} are distributed across level 2 topics. The item $(i,j)$ refers to edges in the direction $i\to j$. There are 3 relationships inside level 2 topics (6\%). 
See \autoref{tab:map_level_2} for a mapping between indexes shown here in row/column names and the actual level-2 topic names.} \label{tab:eedi_adj_level_2_random} \end{table*} \begin{landscape} \newpage \begin{table \centering \tiny \begin{tabular}{rllrrrr} \toprule Prob & Topic 1 (from) & Topic 2 (to) & Adj1 & Ori1 & Adj2 & Ori2 \\ \midrule 0.44 & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Ordering Negative Numbers [Negative Numbers] [Number] & 5 & 1 & 5 & 1 \\ 0.38 & Mental Multiplication and Division [Basic Arithmetic] [Number] & Multiples and Lowest Common Multiple [Factors, Multiples and Primes] [Number] & 5 & 5 & 5 & 5 \\ 0.37 & Mental Multiplication and Division [Basic Arithmetic] [Number] & Factors and Highest Common Factor [Factors, Multiples and Primes] [Number] & 5 & 5 & 5 & 5 \\ 0.37 & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Multiples and Lowest Common Multiple [Factors, Multiples and Primes] [Number] & 2 & 2 & 2 & 1 \\ 0.36 & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Factors and Highest Common Factor [Factors, Multiples and Primes] [Number] & 2 & 1 & 2 & 1 \\ 0.35 & Mental Multiplication and Division [Basic Arithmetic] [Number] & Place Value [Basic Arithmetic] [Number] & 4 & 2 & 4 & 2 \\ 0.35 & Mental Multiplication and Division [Basic Arithmetic] [Number] & BIDMAS [Basic Arithmetic] [Number] & 5 & 5 & 5 & 5 \\ 0.35 & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & 5 & 5 & 5 & 5 \\ 0.35 & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & BIDMAS [Basic Arithmetic] [Number] & 4 & 4 & 4 & 3 \\ 0.35 & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Multiplying and Dividing Negative Numbers [Negative Numbers] [Number] & 4 & 4 & 5 & 4 \\ 0.35 & Mental Multiplication and Division [Basic Arithmetic] [Number] & Squares, Cubes, etc [Indices, Powers and Roots] [Number] & 5 & 5 & 5 & 5 \\ 0.34 & Factors and Highest Common Factor [Factors, Multiples and Primes] [Number] & Mental Multiplication and Division [Basic Arithmetic] [Number] & 5 & 1 & 5 & 1 \\ 0.34 & Basic Angle Facts (straight line, opposite, around a point, etc) [Angles] [Geometry and Measure] & Angle Facts with Parallel Lines [Angles] [Geometry and Measure] & 4 & 4 & 4 & 4 \\ 0.34 & Multiplying and Dividing Negative Numbers [Negative Numbers] [Number] & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & 4 & 2 & 5 & 2 \\ 0.34 & Writing Expressions [Writing and Simplifying Expressions] [Algebra] & Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & 5 & 2 & 5 & 2 \\ 0.34 & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Squares, Cubes, etc [Indices, Powers and Roots] [Number] & 2 & 2 & 2 & 2 \\ 0.33 & Ordering Negative Numbers [Negative Numbers] [Number] & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & 5 & 5 & 5 & 5 \\ 0.33 & Basic Angle Facts (straight line, opposite, around a point, etc) [Angles] [Geometry and Measure] & Measuring Angles [Angles] [Geometry and Measure] & 3 & 2 & 5 & 2 \\ 0.33 & Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & Writing Expressions [Writing and Simplifying Expressions] [Algebra] & 4 & 4 & 4 & 4 \\ 0.33 & Measuring Angles [Angles] [Geometry and Measure] & Basic Angle Facts (straight line, 
opposite, around a point, etc) [Angles] [Geometry and Measure] & 3 & 3 & 5 & 3 \\ 0.33 & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Place Value [Basic Arithmetic] [Number] & 4 & 1 & 4 & 1 \\ 0.33 & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Prime Numbers and Prime Factors [Factors, Multiples and Primes] [Number] & 2 & 2 & 2 & 1 \\ 0.33 & Multiplying and Dividing Negative Numbers [Negative Numbers] [Number] & BIDMAS [Basic Arithmetic] [Number] & 4 & 4 & 4 & 4 \\ 0.32 & Factors and Highest Common Factor [Factors, Multiples and Primes] [Number] & BIDMAS [Basic Arithmetic] [Number] & 3 & 2 & 3 & 2 \\ 0.32 & Mental Multiplication and Division [Basic Arithmetic] [Number] & Prime Numbers and Prime Factors [Factors, Multiples and Primes] [Number] & 5 & 5 & 5 & 5 \\ 0.32 & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Mental Multiplication and Division [Basic Arithmetic] [Number] & 2 & 1 & 2 & 1 \\ 0.32 & Factors and Highest Common Factor [Factors, Multiples and Primes] [Number] & Multiples and Lowest Common Multiple [Factors, Multiples and Primes] [Number] & 3 & 3 & 3 & 3 \\ 0.32 & Linear Equations [Solving Equations] [Algebra] & Substitution into Formula [Formula] [Algebra] & 4 & 2 & 4 & 2 \\ 0.32 & Factors and Highest Common Factor [Factors, Multiples and Primes] [Number] & Squares, Cubes, etc [Indices, Powers and Roots] [Number] & 3 & 2 & 3 & 2 \\ 0.32 & Angle Facts with Parallel Lines [Angles] [Geometry and Measure] & Basic Angle Facts (straight line, opposite, around a point, etc) [Angles] [Geometry and Measure] & 4 & 2 & 4 & 2 \\ 0.32 & Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & Substitution into Formula [Formula] [Algebra] & 2 & 2 & 2 & 2 \\ 0.32 & Writing Expressions [Writing and Simplifying Expressions] [Algebra] & Substitution into Formula [Formula] [Algebra] & 4 & 3 & 4 & 3 \\ 0.32 & Mental Multiplication and Division [Basic Arithmetic] [Number] & Time [Units of Measurement] [Geometry and Measure] & 4 & 4 & 4 & 4 \\ 0.32 & Multiplying and Dividing Negative Numbers [Negative Numbers] [Number] & Ordering Negative Numbers [Negative Numbers] [Number] & 4 & 2 & 4 & 2 \\ 0.32 & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Substitution into Formula [Formula] [Algebra] & 5 & 5 & 5 & 5 \\ 0.32 & Multiplying and Dividing Negative Numbers [Negative Numbers] [Number] & Prime Numbers and Prime Factors [Factors, Multiples and Primes] [Number] & 2 & 1 & 2 & 1 \\ 0.31 & Factors and Highest Common Factor [Factors, Multiples and Primes] [Number] & Prime Numbers and Prime Factors [Factors, Multiples and Primes] [Number] & 5 & 5 & 5 & 5 \\ 0.31 & Basic Angle Facts (straight line, opposite, around a point, etc) [Angles] [Geometry and Measure] & Types, Naming and Estimating [Angles] [Geometry and Measure] & 4 & 2 & 5 & 2 \\ 0.31 & Ordering Negative Numbers [Negative Numbers] [Number] & Multiplying and Dividing Negative Numbers [Negative Numbers] [Number] & 4 & 4 & 4 & 4 \\ 0.31 & Substitution into Formula [Formula] [Algebra] & Writing Expressions [Writing and Simplifying Expressions] [Algebra] & 4 & 3 & 4 & 3 \\ 0.31 & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Writing Expressions [Writing and Simplifying Expressions] [Algebra] & 2 & 2 & 2 & 1 \\ 0.31 & BIDMAS [Basic Arithmetic] [Number] & Place Value [Basic Arithmetic] [Number] & 4 & 2 & 4 & 1 \\ 0.31 & Multiples and Lowest Common Multiple [Factors, Multiples and Primes] 
[Number] & Mental Multiplication and Division [Basic Arithmetic] [Number] & 4 & 2 & 4 & 2 \\ 0.31 & Multiplying and Dividing Negative Numbers [Negative Numbers] [Number] & Factors and Highest Common Factor [Factors, Multiples and Primes] [Number] & 4 & 2 & 4 & 2 \\ 0.30 & Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & Multiplying and Dividing Negative Numbers [Negative Numbers] [Number] & 2 & 2 & 1 & 1 \\ 0.30 & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Time [Units of Measurement] [Geometry and Measure] & 2 & 2 & 2 & 2 \\ 0.30 & Ordering Negative Numbers [Negative Numbers] [Number] & Substitution into Formula [Formula] [Algebra] & 3 & 3 & 2 & 2 \\ 0.30 & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Angles in Polygons [Angles] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ 0.30 & Factors and Highest Common Factor [Factors, Multiples and Primes] [Number] & Place Value [Basic Arithmetic] [Number] & 3 & 2 & 3 & 1 \\ 0.28 & Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & Mental Multiplication and Division [Basic Arithmetic] [Number] & 2 & 1 & 2 & 1 \\ \bottomrule \end{tabular} \caption{Full list of relationships found by \modelName in the Eedi topics dataset. Each row refers to one relationship (one edge). From left to right, the columns are the posterior probability of the edge, the sending node (topic), the receiving node (topic), and the adjacency and orientation evaluations from each expert. For each topic, the brackets contain its parent level 2 and level 1 topics.} \label{tab:full_relationships_vicause} \end{table} \end{landscape} \begin{landscape} \newpage \begin{table \centering \tiny \begin{tabular}{llrrrr} \toprule Topic 1 (From) & Topic 2 (To) & Adj1 & Ori1 & Adj2 & Ori2 \\ \midrule Missing Lengths [Perimeter and Area] [Geometry and Measure] & Midpoint Between Two Co-ordinates [Co-ordinates] [Algebra] & 4 & 4 & 4 & 5 \\ Construct Triangle [Construction, Loci and Scale Drawing] [Geometry and Measure] & Place Value [Basic Arithmetic] [Number] & 1 & 1 & 1 & 1 \\ Squares, Cubes, etc [Indices, Powers and Roots] [Number] & Volume of Prisms [Volume and Surface Area] [Geometry and Measure] & 4 & 5 & 5 & 4 \\ Converting between Fractions and Percentages [Fractions, Decimals and Percentage Equivalence] [Number] & Volume of Prisms [Volume and Surface Area] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ Angles in Triangles [Angles] [Geometry and Measure] & Parts of a Circle [Circles] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ Types, Naming and Estimating [Angles] [Geometry and Measure] & Angle Facts with Parallel Lines [Angles] [Geometry and Measure] & 4 & 5 & 5 & 5 \\ Mental Multiplication and Division [Basic Arithmetic] [Number] & Measuring Angles [Angles] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ Angles in Polygons [Angles] [Geometry and Measure] & Compound Area [Perimeter and Area] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ Squares, Cubes, etc [Indices, Powers and Roots] [Number] & Solving Linear Inequalities [Inequalities] [Algebra] & 2 & 1 & 3 & 1 \\ Construct Triangle [Construction, Loci and Scale Drawing] [Geometry and Measure] & Solving Linear Inequalities [Inequalities] [Algebra] & 1 & 1 & 1 & 1 \\ Written Multiplication [Basic Arithmetic] [Number] & Translation and Vectors [Transformations] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ Enlargement [Transformations] [Geometry and Measure] & Reflection [Transformations] [Geometry and Measure] & 5 & 
2 & 5 & 3 \\ Rotation [Transformations] [Geometry and Measure] & Reflection [Transformations] [Geometry and Measure] & 4 & 3 & 5 & 2 \\ Construct Angle and Line Bisectors [Construction, Loci and Scale Drawing] [Geometry and Measure] & Length Scale Factors in Similar Shapes [Similarity and Congruency] [Geometry and Measure] & 1 & 1 & 2 & 1 \\ Angles in Triangles [Angles] [Geometry and Measure] & Properties of Quadrilaterals [2D Names and Properties of Shapes] [Geometry and Measure] & 4 & 3 & 5 & 3 \\ Naming Co-ordinates in 2D [Co-ordinates] [Algebra] & Properties of Quadrilaterals [2D Names and Properties of Shapes] [Geometry and Measure] & 1 & 1 & 3 & 1 \\ Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Properties of Quadrilaterals [2D Names and Properties of Shapes] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ Construct Angle and Line Bisectors [Construction, Loci and Scale Drawing] [Geometry and Measure] & Properties of Quadrilaterals [2D Names and Properties of Shapes] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ Written Multiplication [Basic Arithmetic] [Number] & Perimeter [Perimeter and Area] [Geometry and Measure] & 2 & 1 & 2 & 1 \\ Basic Angle Facts (straight line, opposite, around a point, etc) [Angles] [Geometry and Measure] & Perimeter [Perimeter and Area] [Geometry and Measure] & 2 & 1 & 4 & 1 \\ Naming Co-ordinates in 2D [Co-ordinates] [Algebra] & Area of Simple Shapes [Perimeter and Area] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ Types, Naming and Estimating [Angles] [Geometry and Measure] & Writing Expressions [Writing and Simplifying Expressions] [Algebra] & 1 & 1 & 1 & 1 \\ Substitution into Formula [Formula] [Algebra] & Writing Expressions [Writing and Simplifying Expressions] [Algebra] & 4 & 2 & 3 & 1 \\ Naming Co-ordinates in 2D [Co-ordinates] [Algebra] & Linear Equations [Solving Equations] [Algebra] & 1 & 1 & 1 & 1 \\ Multiples and Lowest Common Multiple [Factors, Multiples and Primes] [Number] & Factorising into a Single Bracket [Factorising] [Algebra] & 4 & 5 & 5 & 4 \\ Linear Equations [Solving Equations] [Algebra] & Factorising into a Single Bracket [Factorising] [Algebra] & 4 & 3 & 5 & 3 \\ Converting between Fractions and Decimals [Fractions, Decimals and Percentage Equivalence] [Number] & BIDMAS [Basic Arithmetic] [Number] & 1 & 1 & 1 & 1 \\ Reflection [Transformations] [Geometry and Measure] & Place Value [Basic Arithmetic] [Number] & 1 & 1 & 1 & 1 \\ Length, Area and Volume Scale Factors [Similarity and Congruency] [Geometry and Measure] & Mental Multiplication and Division [Basic Arithmetic] [Number] & 5 & 1 & 4 & 1 \\ Naming Co-ordinates in 2D [Co-ordinates] [Algebra] & Midpoint Between Two Co-ordinates [Co-ordinates] [Algebra] & 5 & 5 & 5 & 5 \\ Enlargement [Transformations] [Geometry and Measure] & Time [Units of Measurement] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ Rotational Symmetry [Symmetry] [Geometry and Measure] & Midpoint Between Two Co-ordinates [Co-ordinates] [Algebra] & 1 & 1 & 2 & 1 \\ Factors and Highest Common Factor [Factors, Multiples and Primes] [Number] & Horizontal and Vertical Lines [Straight Line Graphs] [Algebra] & 1 & 1 & 1 & 1 \\ Angles in Triangles [Angles] [Geometry and Measure] & Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & 1 & 1 & 1 & 1 \\ Naming Co-ordinates in 2D [Co-ordinates] [Algebra] & Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & 1 & 1 & 1 & 1 \\ Rotational Symmetry [Symmetry] [Geometry and Measure] 
& Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & 1 & 1 & 1 & 1 \\ Types, Naming and Estimating [Angles] [Geometry and Measure] & Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & 1 & 1 & 1 & 1 \\ Equivalent Fractions [Fractions] [Number] & Converting Mixed Number and Improper Fractions [Fractions] [Number] & 5 & 5 & 5 & 5 \\ Multiplying and Dividing with Decimals [Decimals] [Number] & Prime Numbers and Prime Factors [Factors, Multiples and Primes] [Number] & 1 & 1 & 1 & 1 \\ Construct Angle and Line Bisectors [Construction, Loci and Scale Drawing] [Geometry and Measure] & Prime Numbers and Prime Factors [Factors, Multiples and Primes] [Number] & 1 & 1 & 1 & 1 \\ Construct Angle [Construction, Loci and Scale Drawing] [Geometry and Measure] & Prime Numbers and Prime Factors [Factors, Multiples and Primes] [Number] & 1 & 1 & 1 & 1 \\ Types, Naming and Estimating [Angles] [Geometry and Measure] & Prime Numbers and Prime Factors [Factors, Multiples and Primes] [Number] & 1 & 1 & 1 & 1 \\ Angle Facts with Parallel Lines [Angles] [Geometry and Measure] & Factors and Highest Common Factor [Factors, Multiples and Primes] [Number] & 1 & 1 & 1 & 1 \\ Measuring Angles [Angles] [Geometry and Measure] & Factors and Highest Common Factor [Factors, Multiples and Primes] [Number] & 1 & 1 & 1 & 1 \\ Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & 5 & 1 & 4 & 5 \\ Squares, Cubes, etc [Indices, Powers and Roots] [Number] & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & 1 & 1 & 3 & 1 \\ Multiplying and Dividing Negative Numbers [Negative Numbers] [Number] & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & 5 & 2 & 5 & 1 \\ Ordering Negative Numbers [Negative Numbers] [Number] & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & 5 & 5 & 5 & 5 \\ Rotation [Transformations] [Geometry and Measure] & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & 1 & 1 & 3 & 1 \\ Reflection [Transformations] [Geometry and Measure] & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & 1 & 1 & 3 & 1 \\ Perimeter [Perimeter and Area] [Geometry and Measure] & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & 1 & 1 & 1 & 1 \\ Types, Naming and Estimating [Angles] [Geometry and Measure] & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & 1 & 1 & 1 & 1 \\ Converting between Fractions and Percentages [Fractions, Decimals and Percentage Equivalence] [Number] & Ordering Negative Numbers [Negative Numbers] [Number] & 1 & 1 & 1 & 1 \\ Construct Angle and Line Bisectors [Construction, Loci and Scale Drawing] [Geometry and Measure] & Ordering Negative Numbers [Negative Numbers] [Number] & 1 & 1 & 1 & 1 \\ Perimeter [Perimeter and Area] [Geometry and Measure] & Ordering Negative Numbers [Negative Numbers] [Number] & 1 & 1 & 1 & 1 \\ Construct Angle and Line Bisectors [Construction, Loci and Scale Drawing] [Geometry and Measure] & Time [Units of Measurement] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ Written Multiplication [Basic Arithmetic] [Number] & BIDMAS [Basic Arithmetic] [Number] & 5 & 4 & 5 & 3 \\ \bottomrule \end{tabular} \caption{Full list of relationships found by DAG-GNN in the Eedi topics dataset. Each row refers to one relationship (one edge). 
From left to right, the columns are the sending node (topic), the receiving node (topic), and the adjacency and orientation evaluations from each expert. For each topic, the brackets contain its parent level 2 and level 1 topics.} \label{tab:full_relationships_dag_gnn} \end{table} \end{landscape}
\begin{landscape} \newpage \begin{table} \centering \tiny \begin{tabular}{llrrrr} \toprule Topic 1 (From) & Topic 2 (To) & Adj1 & Ori1 & Adj2 & Ori2 \\ \midrule
Midpoint Between Two Co-ordinates [Co-ordinates] [Algebra] & Angles in Triangles [Angles] [Geometry and Measure] & 1 & 1 & 1 & 1 \\
Solving Linear Inequalities [Inequalities] [Algebra] & Enlargement [Transformations] [Geometry and Measure] & 1 & 1 & 1 & 1 \\
Squares, Cubes, etc [Indices, Powers and Roots] [Number] & Written Multiplication [Basic Arithmetic] [Number] & 4 & 1 & 5 & 1 \\
Substitution into Formula [Formula] [Algebra] & Written Multiplication [Basic Arithmetic] [Number] & 4 & 1 & 3 & 1 \\
Linear Sequences (nth term) [Sequences] [Algebra] & Mental Multiplication and Division [Basic Arithmetic] [Number] & 5 & 1 & 5 & 2 \\
Measuring Angles [Angles] [Geometry and Measure] & Construct Angle [Construction, Loci and Scale Drawing] [Geometry and Measure] & 5 & 5 & 5 & 5 \\
Dividing Fractions [Fractions] [Number] & Volume of Prisms [Volume and Surface Area] [Geometry and Measure] & 2 & 2 & 2 & 2 \\
Multiplying and Dividing Negative Numbers [Negative Numbers] [Number] & Parts of a Circle [Circles] [Geometry and Measure] & 1 & 1 & 1 & 1 \\
Types, Naming and Estimating [Angles] [Geometry and Measure] & Parts of a Circle [Circles] [Geometry and Measure] & 2 & 2 & 2 & 1 \\
Angles in Polygons [Angles] [Geometry and Measure] & Basic Angle Facts (straight line, opposite, around a point, etc) [Angles] [Geometry and Measure] & 5 & 1 & 5 & 1 \\
Angles in Polygons [Angles] [Geometry and Measure] & Compound Area [Perimeter and Area] [Geometry and Measure] & 1 & 1 & 1 & 1 \\
Length, Area and Volume Scale Factors [Similarity and Congruency] [Geometry and Measure] & Linear Sequences (nth term) [Sequences] [Algebra] & 1 & 1 & 2 & 1 \\
Substitution into Formula [Formula] [Algebra] & Rotation [Transformations] [Geometry and Measure] & 1 & 1 & 1 & 1 \\
Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Gradient Between Two Co-ordinates [Co-ordinates] [Algebra] & 5 & 5 & 5 & 5 \\
Compound Area [Perimeter and Area] [Geometry and Measure] & Reflection [Transformations] [Geometry and Measure] & 1 & 1 & 1 & 1 \\
BIDMAS [Basic Arithmetic] [Number] & Reflection [Transformations] [Geometry and Measure] & 1 & 1 & 1 & 1 \\
Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & Properties of Quadrilaterals [2D Names and Properties of Shapes] [Geometry and Measure] & 1 & 1 & 1 & 1 \\
Compound Area [Perimeter and Area] [Geometry and Measure] & Properties of Quadrilaterals [2D Names and Properties of Shapes] [Geometry and Measure] & 3 & 1 & 3 & 1 \\
Rotational Symmetry [Symmetry] [Geometry and Measure] & Perimeter [Perimeter and Area] [Geometry and Measure] & 3 & 1 & 3 & 1 \\
Converting between Fractions and Percentages [Fractions, Decimals and Percentage Equivalence] [Number] & Area of Simple Shapes [Perimeter and Area] [Geometry and Measure] & 1 & 1 & 1 & 1 \\
Angles in Triangles [Angles] [Geometry and Measure] & Types, Naming and Estimating [Angles] [Geometry and Measure] & 4 & 3 & 5 & 2 \\
Length Scale Factors in Similar Shapes [Similarity and Congruency] [Geometry and Measure] & Types, Naming and Estimating [Angles] [Geometry and
Measure] & 1 & 1 & 1 & 1 \\ Factorising into a Single Bracket [Factorising] [Algebra] & Types, Naming and Estimating [Angles] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ Enlargement [Transformations] [Geometry and Measure] & BIDMAS [Basic Arithmetic] [Number] & 1 & 1 & 1 & 1 \\ Linear Sequences (nth term) [Sequences] [Algebra] & Time [Units of Measurement] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ Horizontal and Vertical Lines [Straight Line Graphs] [Algebra] & Adding and Subtracting Negative Numbers [Negative Numbers] [Number] & 1 & 1 & 1 & 1 \\ Area of Simple Shapes [Perimeter and Area] [Geometry and Measure] & Multiplying and Dividing Negative Numbers [Negative Numbers] [Number] & 1 & 1 & 1 & 1 \\ Writing Expressions [Writing and Simplifying Expressions] [Algebra] & Factors and Highest Common Factor [Factors, Multiples and Primes] [Number] & 1 & 1 & 1 & 1 \\ Squares, Cubes, etc [Indices, Powers and Roots] [Number] & Midpoint Between Two Co-ordinates [Co-ordinates] [Algebra] & 1 & 1 & 1 & 1 \\ Writing Expressions [Writing and Simplifying Expressions] [Algebra] & Naming Co-ordinates in 2D [Co-ordinates] [Algebra] & 1 & 1 & 1 & 1 \\ BIDMAS [Basic Arithmetic] [Number] & Line Symmetry [Symmetry] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & Length, Area and Volume Scale Factors [Similarity and Congruency] [Geometry and Measure] & 1 & 1 & 1 & 1 \\ Converting Mixed Number and Improper Fractions [Fractions] [Number] & Horizontal and Vertical Lines [Straight Line Graphs] [Algebra] & 1 & 1 & 1 & 1 \\ Construct Angle and Line Bisectors [Construction, Loci and Scale Drawing] [Geometry and Measure] & Horizontal and Vertical Lines [Straight Line Graphs] [Algebra] & 1 & 1 & 1 & 1 \\ Multiplying and Dividing with Decimals [Decimals] [Number] & Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & 1 & 1 & 1 & 1 \\ Reflection [Transformations] [Geometry and Measure] & Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & 1 & 1 & 1 & 1 \\ Substitution into Formula [Formula] [Algebra] & Dividing Fractions [Fractions] [Number] & 4 & 1 & 3 & 1 \\ Factorising into a Single Bracket [Factorising] [Algebra] & Dividing Fractions [Fractions] [Number] & 2 & 1 & 2 & 1 \\ Fractions of an Amount [Fractions] [Number] & Multiplying Fractions [Fractions] [Number] & 5 & 4 & 5 & 2 \\ Time [Units of Measurement] [Geometry and Measure] & Converting Mixed Number and Improper Fractions [Fractions] [Number] & 4 & 1 & 4 & 1 \\ Length Scale Factors in Similar Shapes [Similarity and Congruency] [Geometry and Measure] & Converting Mixed Number and Improper Fractions [Fractions] [Number] & 4 & 1 & 5 & 1 \\ Place Value [Basic Arithmetic] [Number] & Equivalent Fractions [Fractions] [Number] & 4 & 4 & 3 & 5 \\ Reflection [Transformations] [Geometry and Measure] & Equivalent Fractions [Fractions] [Number] & 1 & 1 & 1 & 1 \\ Writing Expressions [Writing and Simplifying Expressions] [Algebra] & Fractions of an Amount [Fractions] [Number] & 1 & 1 & 1 & 1 \\ Dividing Fractions [Fractions] [Number] & Prime Numbers and Prime Factors [Factors, Multiples and Primes] [Number] & 1 & 1 & 1 & 1 \\ Adding and Subtracting Fractions [Fractions] [Number] & Prime Numbers and Prime Factors [Factors, Multiples and Primes] [Number] & 1 & 1 & 1 & 1 \\ Simplifying Expressions by Collecting Like Terms [Writing and Simplifying Expressions] [Algebra] & Multiples and Lowest Common 
Multiple [Factors, Multiples and Primes] [Number] & 1 & 1 & 1 & 1 \\ Adding and Subtracting Fractions [Fractions] [Number] & Multiples and Lowest Common Multiple [Factors, Multiples and Primes] [Number] & 1 & 1 & 2 & 1 \\ Mental Multiplication and Division [Basic Arithmetic] [Number] & Multiples and Lowest Common Multiple [Factors, Multiples and Primes] [Number] & 5 & 5 & 5 & 5 \\ Length Scale Factors in Similar Shapes [Similarity and Congruency] [Geometry and Measure] & BIDMAS [Basic Arithmetic] [Number] & 1 & 1 & 1 & 1 \\ \bottomrule \end{tabular} \caption{Full list of relationships found by \textit{Random} in the Eedi topics dataset. Each row refers to one relationship (one edge). From left to right, the columns are the sending node (topic), the receiving node (topic), and the adjacency and orientation evaluations from each expert. For each topic, the brackets contain its parent level 2 and level 1 topics.} \label{tab:full_relationships_random} \end{table} \end{landscape} \begin{table*} \setlength{\tabcolsep}{4pt} \centering \small \begin{tabular}{rccccccc} \toprule {} & \multicolumn{3}{c}{Adjacency} & \multicolumn{3}{c}{Orientation} & \multirow{2}{*}[-0.2em]{\shortstack{Causal\\accuracy}} \\ \cmidrule[0.5pt](lr){2-4} \cmidrule[0.5pt](lr){5-7} {} & Recall & Precision & F$_{1}$-score & Recall & Precision & F$_{1}$-score & {} \\ \midrule PC & 0.312$\pm$0.072 & 0.508$\pm$0.077 & 0.381$\pm$0.057 & 0.123$\pm$0.042 & 0.202$\pm$0.069 & 0.136$\pm$0.063 & 0.211$\pm$0.065 \\ GES & 0.295$\pm$0.057 & 0.473$\pm$0.043 & 0.378$\pm$0.051 & 0.132$\pm$0.055 & 0.200$\pm$0.071 & 0.138$\pm$0.058 & 0.238$\pm$0.048 \\ NOTEARS (L) & 0.123$\pm$0.047 & 0.401$\pm$0.063 & 0.219$\pm$0.040 & 0.093$\pm$0.044 & 0.291$\pm$0.050 & 0.129$\pm$0.047 & 0.090$\pm$0.035 \\ NOTEARS (NL) & 0.222$\pm$0.048 & 0.434$\pm$0.052 & 0.293$\pm$0.050 & 0.157$\pm$0.068 & 0.332$\pm$0.061 & 0.228$\pm$0.044 & 0.189$\pm$0.041 \\ DAG-GNN & 0.332$\pm$0.071 & 0.413$\pm$0.059 & 0.354$\pm$0.072 & 0.262$\pm$0.049 & \textbf{0.336$\pm$0.060} & 0.285$\pm$0.051 & 0.257$\pm$0.066 \\ \modelName & \textbf{0.698$\pm$0.052} & 0.588$\pm$0.042 & \textbf{0.635$\pm$0.049} & \textbf{0.417$\pm$0.065} & 0.314$\pm$0.069 & \textbf{0.359$\pm$0.059} & \textbf{0.615$\pm$0.074} \\ MMHC & 0.612$\pm$0.057 & \textbf{0.602$\pm$0.051} & 0.601$\pm$0.046 & 0.355$\pm$0.041 & 0.261$\pm$0.041 & 0.298$\pm$0.058 & 0.471$\pm$0.061 \\ Tabu & 0.332$\pm$0.042 & 0.461$\pm$0.053 & 0.390$\pm$0.047 & 0.121$\pm$0.052 & 0.198$\pm$0.054 & 0.128$\pm$0.051 & 0.240$\pm$0.050 \\ HillClimb & 0.291$\pm$0.054 & 0.452$\pm$0.060 & 0.361$\pm$0.051 & 0.134$\pm$0.049 & 0.196$\pm$0.061 & 0.130$\pm$0.050 & 0.221$\pm$0.044 \\ \bottomrule \end{tabular} \caption{Structure learning results for the extended synthetic experiment described in \autoref{tab:synthetic_times}. Each value is the mean and standard error over twenty-four datasets. In general, the results are qualitatively similar to those obtained in the synthetic experiment in the paper (recall \autoref{tab:synthetic_causality}), with \modelName obtaining superior performance compared to the previous and the new baselines. Notice that the new baseline MMHC is close to \modelName, being superior in adjacency-precision. However, as shown in \autoref{tab:synthetic_times}, MMHC scales poorly.\label{tab:causality_appendix}} \end{table*} \begin{figure} \centering \includegraphics[scale=0.55]{images/topic_relationship.pdf} \caption{The structures discovered by \modelName based on an alternative \cz{education} dataset. 
In particular, a \textcolor{myblue}{blue node} represents a topic that should be taught later in the curriculum, while a \textcolor{mygreen}{green node} represents a fundamental topic that should be taught earlier.} \label{fig:eedi_topic_relationship} \end{figure}
\section{Conclusions}\label{sec:conclusions} \vspace{-2mm}
We introduced \modelName, a novel approach that simultaneously performs group-wise structure discovery and learns to impute missing values. Both tasks are performed jointly: imputation is informed by the discovered relationships and vice-versa, leading to improved performance for both tasks. Moreover, motivated by a real-world problem, \modelName shows its impact in the education domain by aiding domain experts in setting up curricula.
\section{Experiments}\label{sec:exp}
We evaluate the performance of \modelName on three different problems: a synthetic experiment where the data generation process is controlled, a semi-synthetic problem (simulated data from a real-world problem) with many more variables (Neuropathic Pain), and the real-world problem that motivated the development of group-level structure learning (Eedi). To compare the model performance with related work, the first two datasets are at the variable level, which means that each group contains only one variable. In the education setting, we focus on the real-world usage of the method and have worked closely with domain experts to evaluate the results. Additional experiments are presented in the appendix.
\textbf{Baselines}. We consider five baselines for the structure discovery task at the variable level. PC \cite{spirtes2000causation} and GES \cite{chickering2002optimal} are the most popular methods among constraint-based and score-based approaches, respectively. We also consider three recent algorithms based on continuous optimization and deep learning: NOTEARS \citep{zheng2018dags}, the non-linear (NL) extension of NOTEARS \citep{zheng2020learning}, and DAG-GNN \citep{yu2019dag}. Unlike \modelName, these baselines cannot deal with missing values in the training data. Therefore, we work with fully observed training data in the first two sections when using these baselines. In contrast, the real-world data in the last section comes with partially observed training data, and the goal is to discover group-wise relationships; these baselines are therefore not applicable. For the missing data imputation task, we also consider five baselines. Mean Imputing and Majority Vote are popular techniques used as references, Missforest \citep{stekhoven2012missforest} and MICE \citep{buuren2010mice} are two of the most widely-used imputation algorithms, and PVAE \cite{eddi} is a recent algorithm based on amortized inference.
\textbf{Metrics}. Imputation performance is evaluated with standard metrics such as RMSE (continuous variables) and accuracy (binary variables). For binary variables, we also provide the area under the ROC and Precision-Recall curves (AUROC and AUPR, respectively), which are especially useful for imbalanced data (such as Neuropathic Pain). We follow common practice \citep{glymour2019review, mvpc} regarding structure discovery performance, and consider metrics on the \emph{adjacency} and the \emph{orientation}. While the former does not take into account the direction of the edges, the latter does. For both adjacency and orientation, we compute recall, precision and F$_1$-score.
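For concreteness, the following minimal sketch (an illustration, not our exact evaluation code) computes the adjacency and orientation recall, precision and F$_1$-score from two binary adjacency matrices; the helper name \texttt{edge\_metrics} and the convention of comparing undirected skeletons for adjacency are simplifying assumptions of the sketch.
\begin{verbatim}
import numpy as np

def edge_metrics(G_true, G_pred):
    # Recall/precision/F1 for directed edges (orientation) and for
    # the undirected skeletons (adjacency); inputs are 0/1 matrices.
    def prf(t, p):
        tp = np.sum((t == 1) & (p == 1))
        fp = np.sum((t == 0) & (p == 1))
        fn = np.sum((t == 1) & (p == 0))
        rec = tp / max(tp + fn, 1)
        prec = tp / max(tp + fp, 1)
        f1 = 2 * prec * rec / max(prec + rec, 1e-12)
        return rec, prec, f1

    skeleton = lambda G: ((G + G.T) > 0).astype(int)
    adjacency = prf(skeleton(G_true), skeleton(G_pred))
    orientation = prf(G_true, G_pred)
    return adjacency, orientation
\end{verbatim}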
We also provide \emph{causal accuracy}, a popular structure discovery metric that considers edge orientation \citep{claassen2012bayesian}.
\subsection{Synthetic experiment}\label{sec:exp_synthetic}
We simulate fifteen synthetic datasets. For each simulated dataset, we first sample the true structure ${\mathbf G}$; see \autoref{fig:synthetic_graph}(a) for an example. We then generate the samples by computing each variable from its parents through a non-linear mapping based on the $\sin$ function. The appendix provides further details, including a visualisation of the generated data in \autoref{fig:synthetic_data}. For each dataset, we simulate $5000$ training and $1000$ test samples.
\textbf{Imputation performance}. \modelName outperforms the baselines in terms of imputation across all synthetic datasets (\autoref{tab:synthetic_imputation}). The results grouped by the number of variables are presented in \autoref{tab:synthetic_imputation_extended} in the appendix. This indicates that \modelName exploits the learned graph to improve imputation by avoiding spurious correlations.
\textbf{Structure discovery performance}. \modelName obtains better performance than the baselines, see \autoref{tab:synthetic_causality}. The results split by the number of variables are shown in the appendix, \autoref{tab:synthetic_causality_extended}. Notice that NOTEARS (NL) is slightly better in terms of orientation precision. However, this is at the expense of a significantly lower capacity to detect true edges; see the recall and the trade-off between both (F$_1$-score). In this small synthetic experiment, it is possible to visually inspect the predicted graph. \autoref{tab:synthetic_prob_edge} shows the posterior probability of each edge (i.e. the estimated matrix ${\mathbf G}$) for the simulated dataset that uses the true graph in \autoref{fig:synthetic_graph}(a). Using a threshold of 0.5, we obtain the predicted graph in \autoref{fig:synthetic_graph}(b). We observe that all the true edges are captured by \modelName, with some additional edges due to finite data and non-convex optimization. Finally, \modelName can scale to large data both in terms of the number of data points (benefiting naturally from its SGD-based optimization) and dimensionality (thanks to the continuous optimization over the graph space). We demonstrate the computational efficiency with synthetic data ranging from 4 to 512 nodes in the appendix, \autoref{tab:synthetic_times}.
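As an illustration of the generation process described at the beginning of this subsection, the sketch below samples data from a $\sin$-based structural equation model over a known DAG. The exact non-linear mapping, the noise scale and the three-node graph are assumptions made for the example; the precise setup is given in the appendix.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate(G, n_samples):
    # G is a binary DAG whose nodes are assumed topologically ordered.
    D = G.shape[0]
    X = np.zeros((n_samples, D))
    for j in range(D):
        parents = np.nonzero(G[:, j])[0]
        noise = 0.1 * rng.standard_normal(n_samples)
        X[:, j] = np.sin(X[:, parents].sum(axis=1)) + noise
    return X

G = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])          # a toy 3-node DAG
X_train, X_test = simulate(G, 5000), simulate(G, 1000)
\end{verbatim}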
\subsection{Neuropathic pain dataset}\label{sec:exp_neuropathic}
\begin{table}[h] \centering \scriptsize \begin{tabular}{r@{\hskip 2mm}c@{\hskip 2mm}c@{\hskip 2mm}c} \toprule {} & Accuracy & AUROC & AUPR \\ \midrule Majority vote & 0.9268$\pm$0.0003 & 0.5304$\pm$0.0003 & 0.3366$\pm$0.0025 \\ Mean imputing & 0.9268$\pm$0.0003 & 0.8529$\pm$0.0012 & 0.3262$\pm$0.0034 \\ MICE & 0.9469$\pm$0.0007 & 0.9319$\pm$0.0010 & 0.6513$\pm$0.0046 \\ Missforest & 0.9305$\pm$0.0004 & 0.8915$\pm$0.0093 & 0.5227$\pm$0.0033 \\ PVAE & 0.9415$\pm$0.0003 & 0.9270$\pm$0.0007 & 0.5934$\pm$0.0046 \\ \modelName & \textbf{0.9471$\pm$0.0006} & \textbf{0.9392$\pm$0.0008} & \textbf{0.6597$\pm$0.0053} \\ \bottomrule \end{tabular} \vspace{-3mm} \captionof{table}{Imputation results for neuropathic pain data (mean and std error over five runs).\label{tab:neuropathic_imputation}} \end{table}
We evaluate our method using a machine learning benchmark in healthcare applications \citep{tu2019neuropathic}. The dataset contains records of patients regarding the symptoms associated with neuropathic pain. There are 222 variables in this dataset. Unlike the previous experiment with continuous data, this dataset has binary variables indicating the symptoms. The train and test sets have $1000$ and $500$ patients, respectively.
\textbf{Imputation performance}. \modelName shows competitive or superior performance when compared to the baselines; see \autoref{tab:neuropathic_imputation}. Notice that AUROC and AUPR allow for an appropriate threshold-free assessment in this imbalanced scenario. Indeed, as expected from medical data, the minority of values are 1 (symptoms); here, the prevalence of symptoms is around $8\%$ in the test set. Interestingly, AUPR is precisely where the gap between \modelName and the rest of the baselines is largest, with the exception of MICE, whose performance is very similar to that of \modelName on this dataset.
\textbf{Structure discovery results}. As in the synthetic experiment, \modelName outperforms the causality-based baselines; see \autoref{tab:neuropathic_causality}. Notice that NOTEARS (NL) is slightly better in terms of adjacency-precision, i.e. the edges that it predicts are slightly more reliable. However, this is at the expense of a significantly lower capacity to detect true edges; see the recall and the trade-off between both (F$_{1}$-score).
\subsection{Eedi topics dataset}\label{sec:exp_eedi}
Finally, we evaluate our method on an even more challenging real-world dataset in education, which requires group-wise structure discovery. This is an important real-world problem in the field of AI-powered educational systems \citep{wang2021results, wang2020educational}. In this setting, we are interested in relationships between topics, while the observations are question-answer pairs under these topics. The dataset is very sparse, with 74.1\% of the values missing. It contains the responses of 6147 students to 948 mathematics questions. The 948 variables are binary (1 if the student provided the correct answer and 0 otherwise). These 948 questions target very specific mathematical concepts and are grouped within a meaningful hierarchy of \emph{topics}; see \autoref{fig:eedi_hierarchy}.
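To make the grouping concrete, the toy sketch below builds the per-topic index sets consumed by the group-wise model from a question-to-topic assignment. The mapping shown is a hypothetical stand-in; the real assignment comes with the Eedi metadata.
\begin{verbatim}
from collections import defaultdict

# Hypothetical question id -> level-3 topic assignment.
question_to_topic = {0: "Ordering Negative Numbers",
                     1: "Measuring Angles",
                     2: "Ordering Negative Numbers",
                     3: "Equivalent Fractions"}

groups = defaultdict(list)          # topic -> index set of its questions
for q, topic in sorted(question_to_topic.items()):
    groups[topic].append(q)

index_sets = list(groups.values())  # e.g. [[0, 2], [1], [3]]
# Each group of responses may therefore have a different size.
\end{verbatim}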
Here we apply our proposed model to find the relationships among the topics using the third level of the topic hierarchy (\autoref{fig:eedi_hierarchy}), resulting in 57 group-level nodes.
\begin{figure*} \centering \begin{tikzpicture} \tikzstyle{square} = [rectangle, draw, fill=white!11, text width=1.2em, text badly centered, inner sep=2.5pt] \tikzstyle{box} = [rectangle,thick,draw=blue!75,fill=blue!20,minimum size=2mm,rounded corners=.2ex] \tikzstyle{arrowline} = [draw,color=black, -latex] \tikzstyle{surround} = [thick,draw=black,rounded corners=1mm]
\node [box] at (-0.5,0) (Maths) {\scriptsize Maths}; \node [box] at (-4.9,-1) (Algebra) {\scriptsize Algebra}; \node [box] at (-0.5,-1) (Number) {\scriptsize Number}; \node [box] at (3.3,-1) (Geometry) {\scriptsize Geometry and measure}; \node [box] at (-6.8,-2) (A_1) {\scriptsize...}; \node [box] at (-4.9,-2) (A_2) {\scriptsize Solving equations}; \node [box] at (-3,-2) (A_3) {\scriptsize...}; \node [box] at (-2.3,-2) (N_1) {\scriptsize...}; \node [box] at (-0.5,-2) (N_2) {\scriptsize Negative numbers}; \node [box] at (1.3,-2) (N_3) {\scriptsize...}; \node [box] at (2.3,-2) (G_1) {\scriptsize...}; \node [box] at (3.3,-2) (G_2) {\scriptsize Angles}; \node [box] at (4.3,-2) (G_3) {\scriptsize...}; \node [box] at (-6.9,-3) (AA_1) {\scriptsize...}; \node [box] at (-4.9,-3) (AA_2) {\scriptsize Quadratic equations}; \node [box] at (-2.9,-3) (AA_3) {\scriptsize...}; \node [box] at (-1.6,-3) (NN_1) {\scriptsize...}; \node [box] at (-0.5,-3) (NN_2) {\scriptsize Ordering}; \node [box] at (0.6,-3) (NN_3) {\scriptsize...}; \node [box] at (1.7,-3) (GG_1) {\scriptsize...}; \node [box] at (3.3,-3) (GG_2) {\scriptsize Circle theorems}; \node [box] at (4.9,-3) (GG_3) {\scriptsize...}; \node [] at (6,0) (Level 0) {\scriptsize Level 0}; \node [] at (6,-1) (Level 1) {\scriptsize Level 1}; \node [] at (6,-2) (Level 2) {\scriptsize Level 2}; \node [] at (6,-3) (Level 3) {\scriptsize Level 3};
\path [arrowline] (Maths) to (Number); \path [arrowline] (Maths) to (Algebra); \path [arrowline] (Maths) to (Geometry); \path [arrowline] (Algebra) to (A_1); \path [arrowline] (Algebra) to (A_2); \path [arrowline] (Algebra) to (A_3); \path [arrowline] (Number) to (N_1); \path [arrowline] (Number) to (N_2); \path [arrowline] (Number) to (N_3); \path [arrowline] (Geometry) to (G_1); \path [arrowline] (Geometry) to (G_2); \path [arrowline] (Geometry) to (G_3); \path [arrowline] (A_2) to (AA_1); \path [arrowline] (A_2) to (AA_2); \path [arrowline] (A_2) to (AA_3); \path [arrowline] (N_2) to (NN_1); \path [arrowline] (N_2) to (NN_2); \path [arrowline] (N_2) to (NN_3); \path [arrowline] (G_2) to (GG_1); \path [arrowline] (G_2) to (GG_2); \path [arrowline] (G_2) to (GG_3); \end{tikzpicture} \vspace{-2mm}
\caption{Hierarchy of topics in the Eedi data. All the questions are related to maths (the level 0 topic). There are 3, 25 and 57 topics at levels 1, 2 and 3, respectively.
Each question is associated with only one topic at level 3 (and thus with only one topic at each higher level).} \label{fig:eedi_hierarchy} \vspace{-4mm} \end{figure*}
\begin{table} \centering \scriptsize \begin{tabular}{r@{\hskip 3mm}c@{\hskip 2mm}c@{\hskip 2mm}c} \toprule {} & Accuracy & AUROC & AUPR \\ \midrule Majority vote & 0.6260$\pm$0.0000 & 0.6208$\pm$0.0000 & 0.7465$\pm$0.0000 \\ Mean imputing & 0.6260$\pm$0.0000 & 0.6753$\pm$0.0000 & 0.6906$\pm$0.0000 \\ MICE & 0.6794$\pm$0.0005 & 0.7453$\pm$0.0007 & 0.7483$\pm$0.0010 \\ Missforest & 0.6849$\pm$0.0005 & 0.7219$\pm$0.0007 & 0.7478$\pm$0.0008 \\ PVAE & 0.7138$\pm$0.0005 & \textbf{0.7852$\pm$0.0001} & \textbf{0.8204$\pm$0.0002} \\ \modelName & \textbf{0.7147$\pm$0.0007} & 0.7815$\pm$0.0008 & 0.8179$\pm$0.0006 \\ \bottomrule \end{tabular} \vspace{-3mm} \caption{Imputation results for the Eedi topics dataset (mean and standard error over five runs). \label{tab:eedi_imputation}} \vspace{-5mm} \end{table}
\begin{table}[h] \centering \scriptsize \begin{tabular}{r@{\hskip 2mm}c@{\hskip 2mm}c@{\hskip 2mm}c@{\hskip 2mm}c} \toprule & \multicolumn{2}{c}{Adjacency} & \multicolumn{2}{c}{Orientation} \\ \cmidrule[0.5pt](r){2-3} \cmidrule[0.5pt](l){4-5} & Expt 1 & Expt 2 & Expt 1 & Expt 2 \\ \midrule \textit{Random} & 2.04 & 2.08 & 1.44 & 1.40 \\ DAG-GNN & 2.04 & 2.32 & 1.68 & 1.68 \\ \modelName & \textbf{3.60} & \textbf{3.70} & \textbf{2.76} & \textbf{2.60} \\ \bottomrule \end{tabular} \vspace{-3mm} \captionof{table}{Average expert evaluation of the topic relationships. Cohen's $\kappa$ inter-annotator agreement is $0.72$ for adjacency and $0.76$ for orientation (substantial agreement).\label{tab:relationships_summary}} \end{table}
\begin{table*}[t] \scriptsize \centering \begin{tabular}{c@{\hskip 2mm}c@{\hskip 2mm}c} \begin{tabular}{r @{\hskip 1mm}c@{\hskip 1mm}c@{\hskip 1mm}c} \toprule \textit{\modelName} & Number & Algebra & Geometry \\ \midrule Number & 30 & 4 & 3 \\ Algebra & 2 & 6 & 0 \\ Geometry & 0 & 0 & 5 \\ \bottomrule \end{tabular} & \begin{tabular}{r @{\hskip 1mm}c@{\hskip 1mm}c@{\hskip 1mm}c} \toprule DAG-GNN & Number & Algebra & Geometry \\ \midrule Number & 8 & 3 & 6 \\ Algebra & 1 & 5 & 2 \\ Geometry & 14 & 7 & 11 \\ \bottomrule \end{tabular} & \begin{tabular}{r @{\hskip 1mm}c@{\hskip 1mm}c@{\hskip 1mm}c} \toprule \textit{Random} & Number & Algebra & Geometry \\ \midrule Number & 7 & 4 & 6 \\ Algebra & 8 & 1 & 6 \\ Geometry & 6 & 3 & 9 \\ \bottomrule \end{tabular} \end{tabular} \vspace{-3mm} \captionof{table}{Distribution of the relationships across level 1 topics. The item $(i,j)$ refers to edges in the direction $i\to j$. \label{tab:eedi_adj_level_1} } \vspace{-5mm} \end{table*}
\textbf{Imputation results.} \modelName achieves competitive or superior performance when compared to the baselines (\autoref{tab:eedi_imputation}). Although the dataset is relatively balanced (54\% of the values are 1), we provide AUROC and AUPR for completeness. Notice that this setting is more challenging than the previous ones, since we learn relationships between groups of variables (topics). Indeed, whereas the group extension allows for more meaningful relationships, the information flow happens at a less granular level. Interestingly, even in this case, \modelName obtains similar or improved imputation results compared to the baselines.
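For reference, the minimal sketch below shows how the threshold-based and threshold-free imputation metrics reported in these tables can be computed; the toy arrays are placeholders, and AUPR is computed as average precision, a standard surrogate for the area under the Precision-Recall curve.
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# y_true: held-out binary responses; p_hat: imputed probabilities.
y_true = np.array([1, 0, 1, 1, 0, 1])             # toy placeholders
p_hat  = np.array([0.9, 0.4, 0.7, 0.6, 0.2, 0.8])

accuracy = np.mean((p_hat > 0.5) == y_true)
auroc = roc_auc_score(y_true, p_hat)
aupr  = average_precision_score(y_true, p_hat)
\end{verbatim}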
\textbf{Structure discovery results between groups.} Most of the baselines used so far cannot be applied here because (i) they cannot deal with partially observed training data or (ii) they cannot learn relationships between groups of variables. DAG-GNN is the only one that can be adapted to satisfy both properties. For the first one, we adapt DAG-GNN following the same strategy as in \modelName, i.e. replacing missing values with a constant value. For the second one, notice that DAG-GNN can be used for vector-valued variables according to the original formulation \citep{yu2019dag}; however, all variables then need to have the same dimensionality. To cope with arbitrary groups, we apply the group-specific mappings described for \modelName. Finally, as an additional reference, we also compare with randomly generated relationships, which we refer to as \textit{Random}. Moreover, as there are no ground truth relationships in this real-world application, we ask two experts (teachers) to assess the validity of the relationships found by \modelName, DAG-GNN, and \textit{Random}. For each relationship, they evaluate the adjacency (whether it is sensible to connect the two topics) and the orientation (whether the first one is a prerequisite for the second one). They provide an integer value from 1 (strongly disagree) to 5 (strongly agree), i.e. the higher, the better. The complete list of relationships and expert evaluations for \modelName, DAG-GNN, and \textit{Random} can be found in the appendix; see \autoref{tab:full_relationships_vicause}, \autoref{tab:full_relationships_dag_gnn}, and \autoref{tab:full_relationships_random}, respectively. In summary, \autoref{tab:relationships_summary} reports the average evaluations: the relationships discovered by \modelName score much higher on both criteria than those of the baseline models. Another interesting aspect is how the relationships between level-3 topics are distributed across higher-level topics (recall \autoref{fig:eedi_hierarchy}). Intuitively, it is expected that most of the relationships happen \emph{inside} higher-level topics (e.g. Number-related concepts are more likely to be related to each other than to Geometry-related ones). \autoref{tab:eedi_adj_level_1} shows such a distribution for the compared methods. Indeed, notice that the percentage of inside-topic relationships is higher for \modelName (82\%) and DAG-GNN (42\%) than for \textit{Random} (34\%). An analogous analysis for the 25 level-2 topics is provided in the appendix; see \autoref{tab:eedi_adj_level_2_vicause} (\modelName), \autoref{tab:eedi_adj_level_2_dag_gnn} (DAG-GNN), and \autoref{tab:eedi_adj_level_2_random} (\textit{Random}). In particular, whereas 6\% of the connections happen inside level 2 topics for \textit{Random}, it is 14\% for DAG-GNN and 36\% for \modelName.
\textbf{Education impact.} Lastly, to make a real-world impact, an education organization provided us with an additional dataset in the same format as Eedi, to help provide insights for mathematics curriculum building. The final structure among all topics found by \modelName is shown in \autoref{fig:eedi_topic_relationship} in the appendix. The predicted relationships allowed insights into which topics are foundational and need to be covered earlier (topics with many originating edges), as well as which topics are more complex and should be covered later (topics with many incoming edges).
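This degree-based reading of the learned graph can be made concrete with the following sketch, where the topic names and the posterior edge probabilities are toy stand-ins rather than the actual learned values.
\begin{verbatim}
import numpy as np

topics = ["Place Value", "Negative Numbers", "Angles"]  # toy stand-ins
G_prob = np.array([[0.0, 0.8, 0.3],   # posterior edge probabilities,
                   [0.1, 0.0, 0.6],   # row (from) -> column (to)
                   [0.0, 0.1, 0.0]])

G = (G_prob > 0.5).astype(int)        # same 0.5 threshold as before
out_deg = G.sum(axis=1)               # many outgoing edges: foundational
in_deg  = G.sum(axis=0)               # many incoming edges: teach later

teach_first = [topics[i] for i in np.argsort(-out_deg)]
teach_later = [topics[i] for i in np.argsort(-in_deg)]
\end{verbatim}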
These insights allowed us to re-evaluate the order of topics in a nationally used secondary curriculum. Specifically, topics such as ``arithmetic'' or ``properties of shapes'' were moved earlier in the curriculum, while topics such as ``negative numbers'' or ``proportion and similarity'' were moved to a later stage. Another interesting example found by the domain experts is the Venn diagram, which was originally taught in years 9/10 and is now suggested to be moved to year 7: experts found that the Venn diagram is a useful tool for teaching other topics that are currently taught before year 10, so moving it earlier should help students learn those topics better. This emphasises the real-world impact that our model, \modelName, can have in planning curricula.
\section{Introduction}\label{sec:intro}
Understanding the structural relationships among different variables provides critical insights in many real-world applications, such as medicine, economics and education \citep{sachs2005causal,zhang2013integrated}. However, in many real-world applications it is impossible to perform randomized controlled trials due to ethical or cost considerations. Thus, learning graphs from observed data, known as structure learning, has recently made remarkable progress \citep{fatemi2021slaps,yu2019dag,zheng2018dags,zheng2020learning}. For many applications, variables in the data can be gathered into semantically meaningful groups, and the useful insights are at the group level. For example, in finance, one may be interested in how a financial situation influences different industries (i.e. groups) instead of individual companies (i.e. variables). Similarly, in education, the data can contain student responses to thousands of individual questions (i.e. variables), where each question belongs to a broader topic (i.e. groups). Again, it is insightful to find relationships between topics instead of individual questions. Moreover, real-world data such as educational data is inherently sparse, since it is not feasible to ask every question to every student; in addition, the dimensions of the data, in terms of both the number of variables and the number of observations, are very high, posing a scalability challenge. Despite the progress in structure learning, no existing method can discover group-wise relationships given large-scale, partially observed data. In this work, we present \modelName (missing \underline{v}alue \underline{i}mputation with \underline{s}tructural \underline{l}earning), a novel approach that simultaneously tackles group-wise structure learning and missing value imputation, driven by the real-world topic relationship discovery problem in an education setting. This is accomplished by combining variational inference with a generative model that leverages a structured latent space and a decoder based on message-passing Graph Neural Networks (GNNs) \citep{gilmer2017neural}. Namely, the structured latent space endows each group of variables with its own latent subspace, and the interactions between the subspaces are regulated by a GNN whose behavior depends on the graph inferred through variational inference; see \autoref{fig:vicause}(a). \modelName satisfies all the desired properties: it leverages continuous optimization for structure learning to achieve scalability \citep{zheng2018dags, zheng2020learning}; its formulation naturally handles missing values; and it can discover relations at different levels of granularity with pre-defined groups.
Empirically, we evaluate \modelName on one synthetic and two real-world problems, including the aforementioned education scenario. \modelName shows improved performance in both missing data imputation and structure learning accuracy compared to popular and recent approaches for each task. We worked closely with education domain experts to evaluate the learned topic relationships, and our model has provided insightful results, as recognized by the experts.
\section{Model Description}\label{sec:model}
In the following, we present the formulation of \modelName for scalable group-wise structure learning with partial observations, using a novel framework based on a deep generative model.
\subsection{Problem setting}\label{sec:model_notation}
Assume a training data set ${\mathbf X}=\{{\mathbf x}_n\}_{n=1}^N$ with ${\mathbf x}_n\in\mathbb{R}^{D}$. The observed and missing values are denoted as ${\mathbf X}_O$ and ${\mathbf X}_U$, respectively, where we assume the data are missing completely at random (MCAR) or missing at random (MAR). In \autoref{app: MAR}, we explain how to handle MAR. In particular, variables can be gathered into $M$ pre-defined groups, where each group can be denoted as ${\boldsymbol{\chi}}_{n,m}=[x_{n,i}]_{i\in \mathcal{I}_m}$, with $\mathcal{I}_m$ containing the variable indices belonging to group $m$ (e.g., $\mathcal{I}_2=[4,5,6]$ indicates that group $2$ includes the $4^{\text{th}}$, $5^{\text{th}}$ and $6^{\text{th}}$ variables). Note that the index sets $\mathcal{I}_m$ may have different sizes for different $m$ (i.e.~varying group sizes). The goal of \modelName is to (i) perform missing value imputation for test samples and (ii) infer structures between groups of variables. We use the adjacency matrix ${\mathbf G}\in\{0,1\}^{M\times M}$ to represent a graph, where $G_{ij}=1$ or $0$ indicates whether there is a directed edge from the $i$-th to the $j$-th group or not. In the context of the education domain, the above formulation can be rephrased as follows: the vector ${\mathbf x}_n$ contains student $n$'s responses to a set of questions, and $x_{n,j}=1$ indicates that student $n$ answered question $j$ correctly. Groups can be defined by the topic associated with each question: $\mathcal{I}_m$ contains the IDs of the questions that belong to the same topic, and ${\boldsymbol{\chi}}_{n,m}$ represents the group of responses related to that topic. Clearly, not all students can answer every question. Thus, ${\mathbf X}_O$ and ${\mathbf X}_U$ represent the existing responses and the unanswered questions, respectively. The goal of \modelName is to (i) predict students' responses to unanswered questions, which by itself is important in the education domain \cite{wang2020educational, wang2021results}, and (ii) discover the relationships between topics, which can help education experts optimize the learning experience and the curriculum. For structure learning, we adopt a Bayesian approach for graphs \citep{heckerman2006bayesian}. Namely, we seek to maximize the posterior probability of ${\mathbf G}$ given the partially observed training data ${\mathbf X}_O$ within the space of all DAGs:
\begin{align} \label{eq:causal_objective} {\mathbf G}_\star = & \textstyle\argmax_{{\mathbf G}\in\textrm{DAGs}}{\color{blue}{\mathrm{p}}({\mathbf X}_O|{\mathbf G}){\mathrm{p}}({\mathbf G})}.
\end{align}
To optimize over the structure with the DAG constraint in \autoref{eq:causal_objective}, we resort to recent continuous optimization techniques \citep{castle, zheng2018dags, zheng2020learning}, where a differentiable measure of ``DAG-ness'', $\mathcal{R}({\mathbf G})=\mathrm{tr}(e^{{\mathbf G} \odot {\mathbf G}})-M$, was proposed; it is zero if and only if ${\mathbf G}$ is a DAG. To leverage this DAG-ness characterisation, we follow \citet{castle, yu2019dag} and introduce a {\color{ForestGreen}regulariser} based on $\mathcal{R}({\mathbf G})$ to favour the DAG-ness of the solution, i.e.
\begin{equation}\label{eq:causal_objective_with_reg} {\mathbf G}_\star = \textstyle\argmax_{{\mathbf G}} \left( {\color{blue}{\mathrm{p}}({\mathbf X}_O|{\mathbf G}){\mathrm{p}}({\mathbf G})}-\lambda{\color{ForestGreen}\mathcal{R}({\mathbf G})}\right). \end{equation}
In the following two sections, we present the detailed formulation, training and imputation algorithms of \modelName, which allow the model to infer the latent structure ${\mathbf G}$ and impute the missing values $\tilde{{\mathbf x}}_U$ in a test sample $\tilde{{\mathbf x}}\in\mathbb{R}^D$ based on the observed $\tilde{{\mathbf x}}_O$.
\subsection{Generative model and variational inference}\label{sec:model_and_inference}
\begin{wrapfigure}{r}{0.5\columnwidth} \begin{minipage}{0.5\columnwidth} \begin{algorithm}[H] \caption{Generative process}\label{alg:gen_story} \begin{algorithmic} \small \STATE ${\mathbf G}_{ij} \sim \text{Bernoulli}(p_{ij})$ \FOR{$n\in\{1, 2, \cdots, N\}$} \STATE ${\mathbf Z}_n \sim \mathcal{N}(\mathbf{0}, \sigma_z^2{\mathbf I})$ \STATE ${\mathbf x}_n \sim \mathcal{N}(f_{\theta}({\mathbf Z}_n, {\mathbf G}),\sigma_x^2{\mathbf I})$ \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \end{wrapfigure}
For the generation of the observations ${\mathbf X}$, we adopt the latent variable model of \autoref{fig:vicause}. Particularly, given an inferred graph ${\mathbf G}$ and latent ${\mathbf Z}$, the generative path from ${\mathbf Z}$ to ${\mathbf X}$ is provided in \autoref{alg:gen_story}, where we use a graph neural network (GNN) decoder that respects the learned graph structure ${\mathbf G}$ and the provided grouping structure. Then the joint model likelihood is
\begin{gather}\label{eq:full_model} {\mathrm{p}}\left({\mathbf X}, {\mathbf Z}, {\mathbf G}\right) = {\mathrm{p}}({\mathbf G})\textstyle\prod_n {\mathrm{p}}({\mathbf x}_n|{\mathbf Z}_n,{\mathbf G}){\mathrm{p}}({\mathbf Z}_n). \end{gather}
\noindent\textbf{Amortized variational inference.} The true posterior distribution over ${\mathbf Z}$ and ${\mathbf G}$ in \autoref{eq:full_model} is intractable since we use a complex deep learning architecture. Therefore, we resort to efficient amortized variational inference as in \citet{kingma2013auto, kingma2019introduction}. Here, we consider a fully factorized variational distribution ${\mathrm{q}}({\mathbf Z},{\mathbf G})= {\mathrm{q}}_\phi({\mathbf G})\prod_{n=1}^N {\mathrm{q}}_\phi({\mathbf Z}_n|{\mathbf x}_n)$, where ${\mathrm{q}}_\phi({\mathbf Z}_n|{\mathbf x}_n)$ is a Gaussian whose mean and (diagonal) covariance matrix are given by an \emph{encoder}. For ${\mathrm{q}}({\mathbf G})$, we consider the product of independent Bernoulli distributions over the edges; that is, the presence of each edge from $i$ to $j$ is associated with a probability $p_{ij}$ to be estimated.
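The DAG-ness regulariser introduced above is straightforward to evaluate numerically; the sketch below (an illustration, not our training code) checks that it vanishes on a DAG and is positive in the presence of a cycle.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def dagness(G):
    # R(G) = tr(exp(G (.) G)) - M, zero iff G encodes a DAG;
    # G * G is the elementwise (Hadamard) square.
    M = G.shape[0]
    return np.trace(expm(G * G)) - M

dag = np.array([[0.0, 0.9],
                [0.0, 0.0]])
cyc = np.array([[0.0, 0.9],
                [0.9, 0.0]])
print(dagness(dag))   # ~0.0
print(dagness(cyc))   # > 0
\end{verbatim}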
With the above formulation, the evidence lower bound (ELBO) is
\begin{align}\label{eq:ELBO} \nonumber\textrm{ELBO} &= \textstyle\sum_{n} \big\{ \mathbb{E}_{{\mathrm{q}}_\phi({\mathbf Z}_n|{\mathbf x}_n){\mathrm{q}}({\mathbf G})}\left[\log{\mathrm{p}}({\mathbf x}_n|{\mathbf Z}_n,{\mathbf G})\right] \\ & \quad - \textrm{KL}[{\mathrm{q}}_\phi({\mathbf Z}_n|{\mathbf x}_n)||{\mathrm{p}}({\mathbf Z}_n)]\big\} - \textrm{KL}[{\mathrm{q}}({\mathbf G})||{\mathrm{p}}({\mathbf G})]. \end{align}\vspace{-12pt}
Next, we explain our choice of the generator (decoder), which uses a GNN over the learned graph ${\mathbf G}$ to model the interactions between the latent variables, representing the information about each group. Then, we focus on the inference network (encoder), representing the mapping from each group of observed variables to its corresponding latent representation.
\noindent\textbf{Generator}. The generator (i.e., decoder) takes ${\mathbf Z}_n$ and ${\mathbf G}$ as inputs and outputs the reconstructed $\hat{\mathbf x}_n=f_{\theta}({\mathbf Z}_n,{\mathbf G})$, where $\theta$ are the decoder parameters. In order to respect the pre-defined group structure, as shown in \autoref{fig:vicause}, ${\mathbf Z}_n$ is partitioned into $M$ parts, where ${\mathbf z}_{n,m}$ represents the latent variable for the group of observations ${\boldsymbol{\chi}}_{n,m}$. This defines a group-wise structured latent space. We adopt a two-step process for the generative path from ${\mathbf Z}_n$ to ${\mathbf x}_n$: (i) GNN message passing with respect to the learned graph ${\mathbf G}$ between the latent variables ${\mathbf z}_{n,m}$; (ii) a final read-out layer to generate ${\mathbf x}_n$.
\noindent\textbf{GNN message passing in the generator}. In message passing, the information flows between nodes in $T$ consecutive node-to-edge (n2e) and edge-to-node (e2n) operations \citep{gilmer2017neural}. At the $t$-th step, we compute an embedding ${\mathbf h}^f_{i\to j}$ for each edge $i\to j$, called the \emph{forward} embedding, which summarizes the information sent from node $i$ to node $j$. Specifically, the n2e/e2n operations in \modelName are
\begin{align} \label{eq:n2e} \textrm{n2e}: &\quad {\mathbf h}^{(t),f}_{i\to j} = \mathrm{MLP}^{f}\!\left(\left[{\mathbf z}_i^{(t-1)}, {\mathbf z}_j^{(t-1)}\right]\right),\\\label{eq:e2n} \textrm{e2n}: &\quad {\mathbf z}_i^{(t)} = \mathrm{MLP}^{e2n}\!\left(\textstyle\sum_{k\neq i}{\mathbf G}_{ki}\cdot {\mathbf h}_{k\to i}^{(t),f}\right). \end{align}
Here, $t$ refers to the $t$-th iteration of message passing (that is, ${\mathbf Z}^{(0)}={\mathbf Z}_n$; notice that we omit the subindex $n$ for clarity), and $\mathrm{MLP}^f$ and $\mathrm{MLP}^{e2n}$ are MLPs to be trained. Interestingly, the message passing updates indicate that information flows between latent nodes only if a directed edge is specified in the graph ${\mathbf G}$. Hence, the inferred structure ${\mathbf G}$ directly defines relations in the latent space ${\mathbf Z}$, which contains the information of the pre-defined groups. We show that, under certain conditions, the inferred graph ${\mathbf G}$ also represents the group-wise structure in the observational space, and the corresponding model can be reformulated as a general \emph{structural equation model} (SEM) \citep{peters2017elements}; see \autoref{app: respect graph G}.
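To make the two operations concrete, the following PyTorch sketch runs one n2e/e2n round of \autoref{eq:n2e} and \autoref{eq:e2n}. The latent width, the two-layer MLPs and the random toy graph are assumptions made for the example, not our exact architecture.
\begin{verbatim}
import torch
import torch.nn as nn

M, d = 4, 8   # number of groups (nodes), latent width per group
mlp_f   = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))
mlp_e2n = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

def message_passing_step(Z, G):
    # n2e: embed every ordered pair [z_i, z_j] into h_{i->j}.
    pairs = torch.cat([Z.unsqueeze(1).expand(M, M, d),
                       Z.unsqueeze(0).expand(M, M, d)], dim=-1)
    H = mlp_f(pairs)                     # H[i, j] = h_{i->j}
    # e2n: node i aggregates G_{ki} * h_{k->i} over senders k.
    agg = torch.einsum('ki,kid->id', G, H)
    return mlp_e2n(agg)                  # updated Z^{(t)}

Z = torch.randn(M, d)                    # Z^{(0)} for one sample
G = torch.bernoulli(torch.full((M, M), 0.3))
G.fill_diagonal_(0)                      # no self-loops (the sum k != i)
for _ in range(3):                       # T = 3 rounds
    Z = message_passing_step(Z, G)
\end{verbatim}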
\noindent\textbf{Read-out layer in the generator}. After $T$ iterations of GNN message passing, we have ${\mathbf Z}^{(T)}$. We then apply a final function that maps ${\mathbf Z}^{(T)}$ to the reconstructed $\hat{\mathbf x}$, which also respects the pre-defined group structure. Since the observation ${\mathbf x}=[{\boldsymbol{\chi}}_1,\ldots,{\boldsymbol{\chi}}_M]$ may contain groups ${\boldsymbol{\chi}}_m$ with different dimensions, we adopt $M$ different MLPs, one for each group, as the final read-out layer to respect the group structure. Namely, $\hat{{\mathbf x}}=(g^1({\mathbf z}_1^{(T)}),\ldots,g^M({\mathbf z}_M^{(T)}))^\top$, where $g^m$ represents the MLP for group $m$. Thus, the decoder parameters $\theta$ include the parameters of the following neural networks: $\textrm{MLP}^f$, $\textrm{MLP}^{e2n}$ and $g^m$ for $m=1,\ldots,M$.
\noindent\textbf{Inference network}. As in standard VAEs, the encoder maps a sample ${\mathbf x}_n$ to its latent representation ${\mathbf Z}_n$. As discussed before, ${\mathbf Z}_n$ is partitioned into $M$ parts, where each ${\mathbf z}_{n,m}$ contains the information of the observations in group $m$. Similar to the read-out layer, we utilize $M$ MLPs to map the groups of observations to the means/variances of the latent variables:
\begin{align}\label{eq:encoder_variables} {\boldsymbol{\mu}}_n&=\left(\mu^1_{\phi_{\mu_1}}({\boldsymbol{\chi}}_{n,1}),\dots,\mu^M_{\phi_{\mu_M}}({\boldsymbol{\chi}}_{n,M})\right)^\intercal,\\ \nonumber {\boldsymbol{\sigma}}_n&=\left(\sigma^1_{\phi_{\sigma_1}}({\boldsymbol{\chi}}_{n,1}),\dots,\sigma^M_{\phi_{\sigma_M}}({\boldsymbol{\chi}}_{n,M})\right)^\intercal. \end{align}
Here, $\mu^m_{\phi_{\mu_m}}$ and $\sigma^m_{\phi_{\sigma_m}}$ are neural networks for group $m$. When missing values are present, we replace them with a constant, as in \citet{nazabal2020handling}. A graphic representation of how the encoder respects the structure of the latent space is shown in the appendix, \autoref{fig:structured_mappings}(b).
\subsection{Training \modelName}\label{sec:model_training}
Given the model described above, we propose the following training objective, to be minimized w.r.t. $\theta$, $\phi$ and ${\mathbf G}$:
\begin{equation}\label{eq:training_loss} \mathcal{L}_{\textrm{\modelName}}(\theta,\phi,{\mathbf G}) = {\color{blue}-\mathrm{ELBO}} + \lambda {\color{ForestGreen} \mathbb{E}_{{\mathrm{q}}({\mathbf G})}\left[\mathcal{R}({\mathbf G})\right]}, \end{equation}
where the ELBO is given by \autoref{eq:ELBO} and the DAG regulariser $\mathcal{R}({\mathbf G})$ was introduced in \autoref{eq:causal_objective_with_reg} to favour the DAG-ness of the learned graph ${\mathbf G}$.
\noindent\textbf{Evaluating the training loss $\mathcal{L}_{\textrm{\modelName}}$}. \modelName can work with any type of data. The log-likelihood term ($\log p_\theta({\mathbf x}_n|{\mathbf Z}_n,{\mathbf G})$ in \autoref{eq:ELBO}) is defined according to the data type. We use a Gaussian likelihood for continuous variables and a Bernoulli likelihood for binary ones. For the inference of ${\mathbf Z}$ and ${\mathbf G}$, the standard reparametrization trick is used to sample ${\mathbf Z}_n$ from the Gaussian ${\mathrm{q}}_\phi({\mathbf Z}_n|{\mathbf x}_n)$ \cite{kingma2013auto,kingma2019introduction}. To backpropagate the gradients through the discrete variable ${\mathbf G}$, we resort to the Gumbel-softmax trick to sample from ${\mathrm{q}}({\mathbf G})$ \cite{jang, maddison2016concrete}.
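The Gumbel-softmax step for the edges admits a compact implementation; the sketch below uses the binary special case (often called the Binary Concrete distribution), with the temperature value chosen arbitrarily for illustration.
\begin{verbatim}
import torch

def sample_G(edge_logits, tau=0.5):
    # Relaxed Bernoulli sample of the adjacency matrix: a logistic
    # perturbation of the logits followed by a tempered sigmoid,
    # so the sample is differentiable w.r.t. the logits.
    u = torch.rand_like(edge_logits).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log(1 - u)
    return torch.sigmoid((edge_logits + noise) / tau)

edge_logits = torch.zeros(3, 3, requires_grad=True)  # p_ij = 0.5
G_soft = sample_G(edge_logits)       # soft adjacency in (0, 1)
G_soft.sum().backward()              # gradients reach the logits
\end{verbatim}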
The $\textrm{KL}[{\mathrm{q}}_\phi({\mathbf Z}_n|{\mathbf x}_n)||{\mathrm{p}}({\mathbf Z}_n)]$ and $\textrm{KL}[{\mathrm{q}}({\mathbf G})||{\mathrm{p}}({\mathbf G})]$ terms can be obtained in closed form, since they involve Gaussian distributions and independent Bernoulli distributions over the edges, respectively. This formulation brings additional advantages in real-life applications, since one can easily incorporate domain knowledge and prior information into the \modelName framework. For example, if the existence (or absence) of a specific edge is known a priori, the corresponding edge probability can be set to $1$ (or $0$) in the prior distribution. Finally, the DAG regulariser in \autoref{eq:training_loss} can be computed by evaluating the function $\mathcal{R}$ on a Gumbel-softmax sample from ${\mathrm{q}}({\mathbf G})$. To adapt the model to different missingness levels in the training data ${\mathbf X}$, we adopt the \emph{masking} strategy \citep{eddi, gong2019icebreaker}, which drops a random percentage of the observed values during training. The entire training procedure for \modelName is summarised in \autoref{alg:training}.
\begin{algorithm}[t] \small \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Training dataset ${\mathbf X}$, possibly with missing values.} \For{each batch of samples $\{{\mathbf x}_n\}_{n\in B}$}{ Drop a percentage of the observed data for each sample ${\mathbf x}_n$.\; Encode ${\mathbf x}_n$ through the reparametrization trick to sample ${\mathbf Z}_n\sim\mathcal{N}(\boldsymbol{\mu}_\phi({\mathbf x}_n), \boldsymbol{\sigma}_\phi^2({\mathbf x}_n))$ using \autoref{eq:encoder_variables}.\; Use the Gumbel-softmax trick to sample ${\mathbf G}$ from ${\mathrm{q}}({\mathbf G})$.\; Use the decoder to reconstruct $\hat{\mathbf x}_n=f_{\theta}({\mathbf Z}_n, {\mathbf G})$.\; Calculate the training loss $\mathcal{L}_{\textrm{\modelName}}$ (\autoref{eq:training_loss}).\; Gradient step w.r.t. $\phi$ (encoder parameters), $\theta$ (decoder parameters) and ${\mathbf G}$ (posterior edge probabilities).\; } \Output{Encoder parameters $\phi$, decoder parameters $\theta$, and posterior probabilities over the edges ${\mathbf G}$.} \caption{Training \modelName. \label{alg:training}} \end{algorithm}
\noindent\textbf{Two-stage training}. After training, we obtain the posterior of the graph ${\mathbf G}$, which respects the underlying structure of the groups, as shown in \autoref{app: respect graph G}. With the trained network, we can impute missing values in the groups whose ancestors contain some observations; however, if a group has no ancestors, no information can be propagated to it during imputation. After learning the graph structure, and to facilitate the imputation task, we therefore introduce a \emph{backwards} edge: for each learned edge $i\to j$, we denote the backwards edge information as ${\mathbf h}^b_{i\to j}$, which codifies the information that the $i\to j$ edge lets flow back from the $j$-th to the $i$-th node.
It is defined in the same way as \autoref{eq:n2e}, i.e. ${\mathbf h}^{(t),b}_{i\to j} = \mathrm{MLP}^{b}\!\left(\left[{\mathbf z}_i^{(t-1)}, {\mathbf z}_j^{(t-1)}\right]\right)$, where $\mathrm{MLP}^b$ is the backward MLP; and the e2n update (\autoref{eq:e2n}) is modified to $ {\mathbf z}_i^{(t)} = \mathrm{MLP}^{e2n}\left(\textstyle\sum_{k\neq i}{\mathbf G}_{ki}\cdot \left\{ {\mathbf h}_{k\to i}^{(t),f} +{\mathbf h}_{i\to k}^{(t),b}\right\} \right)$. In summary, we propose a two-stage training process, where the first stage (described in the previous sections) focuses on discovering the edge directions between nodes without the $\mathrm{MLP}^b$ (i.e., we do not train the $\mathrm{MLP}^b$). In the second stage, we fix the graph structure ${\mathbf G}$ and continue to train the model with the backward MLP. This two-stage training process allows \modelName to leverage the backward MLP for the imputation task without updating the graph structure.
\noindent\textbf{Revisiting the learning objectives}. The optimal graph of relationships, denoted as ${\mathbf G}_\star$ in \autoref{eq:causal_objective_with_reg}, is given by the estimated posterior probabilities of the graph ${\mathbf G}$. In addition, the regulariser $\mathcal{R}({\mathbf G})$ provides a way to evaluate whether the resulting graph is a DAG. By tuning the regulariser strength $\lambda$, one can ensure that the resulting ${\mathbf G}_\star$ represents a proper DAG. For imputation, similar to \citet{eddi, nazabal2020handling}, the trained model can impute missing values for a test instance $\widetilde{\mathbf x}$ as
\begin{equation}\label{eq:imputation} {\mathrm{p}}(\widetilde{\mathbf x}_U|\widetilde{\mathbf x}_O, {\mathbf X})= \mathbb{E}_{{\mathrm{q}}_\phi({\mathbf Z}|\widetilde{\mathbf x}){\mathrm{q}}({\mathbf G})}{\mathrm{p}}(\widetilde{\mathbf x}_U|{\mathbf Z},{\mathbf G}). \end{equation}
Therefore, the distribution over $\widetilde{\mathbf x}_U$ (the missing values) is obtained by applying the encoder and decoder with $\widetilde{\mathbf x}$ as input. One important distinction of \modelName compared to \citet{eddi,nazabal2020handling} is that it incorporates the learned structure ${\mathbf G}$ into the imputation, which helps the model avoid over-fitting due to spurious correlations \citep{castle}.
\noindent\textbf{Special case: variable-wise relations}. In the above formulation, we have defined \modelName for group-wise structure learning. Variable-wise relations can be regarded as a special case. In particular, we can set $M=D$ and $\mathcal{I}_m=\{m\}$ (see \autoref{fig:structured_latent_space}(a) in the appendix), i.e. each group contains only a single variable. Through this modification, we can further simplify the encoder and the read-out layer. Instead of using $M$ different MLPs, a single MLP can be shared across all variables, since each group has dimension $1$. The mean function for the encoder is then defined as
\begin{equation} \boldsymbol{\mu}_n=\left(\mu_\phi(x_{n,1}),\ldots,\mu_\phi(x_{n,D})\right). \end{equation}
One can define the encoder variance $\boldsymbol{\sigma}$ (\autoref{fig:structured_mappings}(a) in the appendix) and the read-out layer $g$ analogously.
\section{Related Work}
Since \modelName simultaneously tackles missing value imputation and structure learning, we review both fields. Moreover, we review recent works that utilize structure learning to improve the performance of another deep learning task, similar to \modelName.
\section{Related Work} Since \modelName simultaneously tackles missing value imputation and structure learning, we review both fields. Moreover, we review recent works that utilize structure learning to improve the performance of another deep learning task, similar to \modelName. Finally, as one of the focused applications of this work is in the education domain, we review recent advances of AI in education. \noindent\textbf{Structure learning}. Structure learning aims to infer the underlying structures associated with some observations. There are mainly three types of methods: constraint-based, score-based, and hybrid. Constraint-based methods exploit (conditional) independence tests to find the underlying structure, such as PC \cite{spirtes1991algorithm} and Fast Causal Inference (FCI) \cite{spirtes2000causation}. They have recently been extended to handle partially observed data through test-wise deletion and adjustments \cite{strobl2018fast, mvpc}. Score-based methods find the structure by optimizing a proper scoring function. The core difficulty lies in the number of possible graphs growing super-exponentially with the number of nodes \citep{chickering2004large}. Thus, explicitly solving the optimization can only be done up to a few nodes \citep{ott2003finding,singh2005finding,cussens2017polyhedral}, resulting in significant limitations in scalability. Therefore, approximation methods have been proposed to ease the computational burden, including searching over topological orderings \citep{teyssier2012ordering,scanagatta2015learning,scanagatta2016learning}, greedy search \citep{chickering2002optimal,ramsey2017million}, and coordinate descent \citep{fu2013learning,aragam2015concave,gu2019penalized}. Recently, a continuous-optimization approach to structure learning, called \emph{Notears}, has become very popular among score-based methods \citep{zheng2018dags}. \emph{Notears} proposed a differentiable algebraic characterization of DAG-ness, allowing model parameters and graph structure to be learned jointly via an equality-constrained optimization problem. \emph{Notears} has inspired the development of other methods, such as \emph{Notears-MLP} and \emph{Notears-Sob} \citep{zheng2020learning}, \emph{Grandag} \citep{lachapelle2019gradient}, and \emph{DAG-GNN} \citep{yu2019dag}, which extend the original formulation to model nonlinear relationships between variables. However, their formulations cannot handle missing values and have been observed to be sensitive to data scaling \citep{kaiser2021unsuitability}. In particular, \emph{DAG-GNN} also adopts a specially designed GNN to perform structure learning \citep{yu2019dag}. Compared to our formulation, there are three key distinctions: (i) our model is designed to discover group-wise relationships, while DAG-GNN and other structure discovery methods focus on variable-level structure learning; (ii) our model is capable of performing missing value imputation and group-wise structure learning simultaneously, whereas the original formulation of DAG-GNN and related work can only handle complete data; (iii) \modelName adopts Bayesian learning for the underlying graphs, whereas DAG-GNN uses a point estimate. \noindent\textbf{Structured deep learning}. Continuous optimization for learning structures has been used to boost performance in classification. In CASTLE \cite{castle}, structure learning is introduced as a regulariser for a deep learning classification model. This regulariser reconstructs only the most relevant causal features, leading to improved out-of-sample predictions. In SLAPS \cite{fatemi2021slaps}, the classification objective is supplemented with a self-supervised task that learns a graph of interactions between variables through a GNN.
However, these works focus on the supervised classification task and do not advance the performance of structure learning itself. \noindent\textbf{Missing value imputation}. The relevance of missing data in real-world problems has motivated a long history of research \citep{dempster1977maximum, rubin1976inference}. A popular approach for this task is to estimate the missing values based on the observed ones through different techniques \citep{Scheffer02dealingwith}. Here, we find popular methods such as MissForest \citep{stekhoven2012missforest}, which relies on random forests, and MICE \citep{buuren2010mice}, which is based on Bayesian ridge regression. Also, the efficiency of amortized inference in generative models has motivated its use for missing value imputation. This is explored in \citet{wu2018conditional}, although fully observed training data is required. This limitation is addressed both in \citet{nazabal2020handling}, where a zero-imputation strategy is used for partially observed data, and in \citet{eddi}, where a permutation-invariant set encoder is utilized to handle missing values. \modelName also leverages amortized inference, although the discovered relationships inform the imputation through a GNN. \noindent\textbf{AI in education}. Recently, there has been tremendous progress in using AI for educational applications. Examples include knowledge tracing \citep{lan2014time,vie2019knowledge,naito2018predictive}, which tracks the evolution of a student's knowledge; grading students' performance \citep{waters2015bayesrank}; and generating feedback for students working on coding challenges \citep{wu2019zero}. Most related to \modelName is work on imputing missing values in students' responses to questions. \citet{wang2020educational} adopts a partial VAE \citep{eddi} to perform missing value imputation and personalization. However, the partial VAE does not consider the structural relations between questions/topics and cannot perform structure learning. With the additional insights from structure learning, \modelName can offer teachers more than imputations alone, providing information that can support curriculum design.
\section{Introduction}\label{sec:introduction} \IEEEPARstart{C}{onsider} the classical classification setup \cite[p2]{dgl}: Let $(X,Y), (X_1,Y_1), \cdots, (X_n,Y_n) \overset{iid}{\sim} F_{XY}$, where the feature vector $X$ lives in $\Re^d$ and the class label $Y$ lives in $[K] = \{1,\cdots,K\}$. Denote the training data by $\mathcal{T}_n = \{(X_1,Y_1), \cdots, (X_n,Y_n)\}$. Our goal is to learn a classifier $g: \Re^d \times (\Re^d \times [K])^n \to [K]$ using $\mathcal{T}_n$ to predict the true but unobserved class label $Y$ based on the observed test feature vector $X$. Performance is measured by the conditional probability of error, \begin{equation} L(g) = \mathbb{P}[g(X;\mathcal{T}_n) \neq Y|\mathcal{T}_n]. \label{eqn:error} \end{equation} Now consider the setting wherein we do not observe the $Y_i$ but rather noisy labels $Z_i$. For $P_i \in [0,1]$, let the noisy class label $Z_i$ be given by $\mathbb{P}[Z_i = Y_i] = 1-P_i$, with $Z_i$ distributed on $[K] \setminus \{Y_i\}$ with probability $P_{i}$; $P_i = 0$ means no noise in $Z_i$, and $P_i = (K-1)/K$ means no information in the noisy labels $Z_i$. Common label noise structures include class-dependent noise and instance-dependent noise. Class-dependent noise assumes $P_i$ is the same for all instances in the same class, which can be modeled by a noise transition matrix $A \in \Re^{K \times K}$, where $\mathbb{P}[Z_i = l \mid Y_i = k] = A_{kl}$; symmetric label noise further assumes that $A$ has diagonal entries $1- \alpha$ and off-diagonal entries $\alpha/(K-1)$. Thus, we have $(X_i,Y_i,Z_i,P_i) \overset{iid}{\sim} F_{X,Y,Z,P}$. Again: $X$ is the feature vector and $Y$ is the true class label; now $Z$ is the noisy class label and $P$ characterizes the label noise. The classifier $g$ is trained on the noisy dataset $\tilde{\mathcal{T}}_n = \{(X_1,Z_1), \cdots, (X_n,Z_n)\}$, and evaluated against the clean labels: \begin{equation} L(\tilde{g}) = \mathbb{P}[g(X;\tilde{\mathcal{T}}_n) \neq Y|\tilde{\mathcal{T}}_n]. \label{eqn:noisy-error} \end{equation} It is well known that the optimal classifier is given by the Bayes decision rule: \begin{equation} g^* (x) = \arg \max_{1\le k \le K} \mathbb{P}[Y = k | X=x], \end{equation} with the Bayes error given by \begin{equation} L(g^*) = \mathbb{P}[g^*(x) \ne Y] = 1 - \mathbb E [\max_k p_k(X)], \end{equation} where $p_k(x) = \mathbb{P}[Y = k | X=x]$ for $k \in [K]$ denotes the a posteriori probabilities. One natural decision rule is to approximate the a posteriori probabilities given the training data. In the non-noisy setting, it is well known that if the posterior estimates are $L_1$ (or $L_2$) consistent, then the plug-in Bayes classifier (which maximizes the a posteriori probabilities) is consistent \cite[Section 2.5]{dgl}. However, with a noisy label dataset, one can only hope to estimate the noisy label posterior $q_k(x) = \mathbb{P}[Z = k | X=x] \, (k \in [K])$ via an empirical estimate $q_{kn}(x)$. Consider the plug-in classifier again, but built from the noisy $\tilde{\mathcal{T}}_n$: \begin{equation} \tilde{g}_n (x) = \arg \max_k q_{kn}(x). \label{eqn:consistent-g} \end{equation} If the posterior estimates are $L_1$-consistent, albeit for the noisy label posterior $q_k(x)$, how well does the noisy plug-in classifier $\tilde{g}_n(x)$ perform compared to the Bayes optimal classifier?
Remarkably, for binary classification with symmetric label noise, the Bayes decision rule based on the noisy posterior $q_k(x)$ remains the same as that of the clean posterior $p_k(x)$ up to the information-theoretic threshold $P_i = 1/2$ \cite{lugosi1992learning, natarajan2013learning, menon2015learning}. Thus, if $q_{kn}(x)$ is an $L_1$-consistent estimator of $q_k(x)$, then $\tilde{g}_n(x)$ yields Bayes-optimal performance asymptotically \cite{lugosi1992learning}. We now turn to deep neural network classifiers (DNNs) and ask the same question: How well does the noisy plug-in DNN perform compared to the Bayes optimal classifier? In other words, can DNNs be robust against massive label noise while using noisy posteriors without any mitigation? Empirically, DNNs can memorize arbitrary noisy labels during training and may generalize poorly~\cite{zhang2021understanding,arpit2017closer}. This phenomenon motivates many follow-up works to design robust deep learning models by mitigating the effect of label noise, including model-free methods that do not explicitly model the noise structure, and model-based methods that assume or estimate the label noise structure (see \cite{algan2021image} for a recent survey). \vspace{1em} \noindent \textbf{Related work. } In the model-free literature, recent theoretical results show that imposing regularization on DNNs, such as early stopping \cite{pmlr-v108-li20j} or weight regularization \cite{liu2020earlylearning, xia2021robust, arpit2017closer}, constrains the model to ignore noisy labels during gradient updates and thus mitigates the effect of label noise. More precisely, \cite{pmlr-v108-li20j} showed that, with early stopping, a one-hidden-layer fully-connected neural network is robust to label noise up to a class-dependent noise probability of $\frac{1}{4(K-1)}$\footnote{Assuming that the $K$ class labels lie in $[-1,1]$ and labels from different classes have Euclidean distance at least $\delta$ (so that $\delta \le \frac{2}{K-1}$), Theorem 2.2 in \cite{pmlr-v108-li20j} proves robustness up to noise probability $\frac{\delta}{8} \le \frac{1}{4(K-1)}$.}. Their analysis relies on the key assumption that the Jacobian of the network has a low-rank structure, which implies the network ``fits the correct labels essentially ignoring the noisy labels,'' as stated in \cite{pmlr-v108-li20j}. However, they conjecture that the tolerance bound can be improved up to the order of $n$ noisy labels. Similarly, \cite{liu2020earlylearning} observed that ``...early in training, the gradients corresponding to the correctly labeled examples dominate the dynamics---leading to early progress towards the true optimum---but that the gradients corresponding to wrong labels soon become dominant'' and proposed regularization to prevent memorization of noisy labels. In the model-based literature, the most relevant work is \cite{patrini2017making}, which shows that by performing loss correction, DNNs can tolerate label noise as long as the noise transition matrix $A$ is invertible (i.e., a tolerance threshold of up to $\frac{K-1}{K}$ for symmetric noise); this tighter bound, compared to \cite{pmlr-v108-li20j}, is obtained under the extra assumption that the label noise is known or can be perfectly estimated from the data. \vspace{1em} \noindent \textbf{Our contribution.} In this paper, we show that when the symmetric label noise is bounded by $\frac{K-1}{K}$, DNNs trained with noisy data can achieve Bayes optimal performance asymptotically, \textit{without the need for any label noise mitigation}.
The key observation is that DNNs are universally consistent \cite{farago1993strong, lin2021universal, Drews2022} and thus $L_1$-consistent. This allows us to make use of a generalized version of the results in \cite{lugosi1992learning}, extending from the binary setting to the multiclass setting for symmetric label noise. We answer the conjecture in \cite{pmlr-v108-li20j} affirmatively in the special setting of symmetric label noise, without requiring the restrictive assumption in \cite{patrini2017making} that the noise structure be perfectly estimated. Our results also hold for other $L_1$-consistent estimators, which may be of independent interest. \section{Main Results} \label{sec:stat} To prove that DNNs trained on symmetric noisy labels can achieve Bayes optimality asymptotically, we first generalize the characterization in \cite[Theorem 2.3]{lugosi1992learning} for binary classification to multiclass classification. We then proceed to show that DNNs are $L_1$-consistent estimators of the (noisy) posteriors, based on the universal consistency results for DNNs from \cite{farago1993strong, lin2021universal, Drews2022}. To present our main results, we recall the following definitions and key results from \cite{lugosi1992learning}. \begin{definition}[Consistency]\label{defn:consistent} Consider the setup introduced in Section \ref{sec:introduction}. A sequence of posterior estimates $\{q_{kn}\}$ is called $L_1$-consistent for a certain distribution $F_{XY}$ if \begin{equation} \lim_{n \to \infty} \mathbb{E}( \sum_{k=1}^K |q_{kn}(X) - q_k(X)|) = 0. \label{eqn: L1-consistency} \end{equation} It is called $L_2$-consistent for a certain distribution $F_{XY}$ if \begin{equation} \lim_{n \to \infty} \mathbb{E}( \sum_{k=1}^K (q_{kn}(X) - q_k(X))^2) = 0. \label{eqn: L2-consistency} \end{equation} Universal consistency requires consistency to hold for all distributions $F_{XY}$ with $\mathbb{E}(Y^2) < \infty$. \end{definition} \begin{theoremLugosi} \label{thm:lugosi} Consider the binary classification setting, where $\alpha, \beta$ denote the label noise probabilities for classes $0, 1$ respectively. Let the classifier $\tilde{g}_n(x)$ be defined as in \eqref{eqn:consistent-g}, which applies the maximum a posteriori (MAP) decision rule to an $L_1$-consistent estimator $q_{kn}(X)$. Assume $\max(\alpha,\beta) < 1/2$. Asymptotically, if $\alpha, \beta$ are known, then \begin{equation} L(\tilde{g}_n) \to L (g^*); \label{eqn:lugosi_known} \end{equation} if $\alpha, \beta$ are unknown, then \begin{equation} L(\tilde{g}_n) \to L (g^*) \left[1+\frac{2|\alpha-\beta|}{1-2 \max (\alpha, \beta)}\right]. \label{eqn:lugosi_unknown} \end{equation} \end{theoremLugosi} In practice, $\alpha, \beta$ are typically unknown. Yet for symmetric label noise (i.e., $\alpha = \beta$), $\tilde{g}_n(x)$ is asymptotically Bayes-optimal as long as the noise probability is below $0.5$. On the other hand, for class-dependent label noise, higher asymmetry implies worse performance: the asymptotic risk is a constant multiple of the Bayes risk. Hereafter, we refer to the maximum label noise threshold that preserves Bayes optimality as the statistical limit. We are ready to present our main results, which extend the binary setting in \cite{lugosi1992learning} to the multiclass setting for symmetric label noise. \begin{theorem} \label{thm:main} Consider the multiclass classification setting with $K \ge 2$ classes and symmetric label noise with noise probability $\alpha$.
Let the classifier $\tilde{g}_n(x)$ be defined as in \eqref{eqn:consistent-g}, which applies MAP to an $L_1$-consistent estimator $q_{kn}(X)$. If $\alpha < \frac{K-1}{K}$, then as $n \to \infty$, for both known and unknown $\alpha$, $$ L(\tilde{g}_n) \to L (g^*). $$ \end{theorem} \begin{proof} Let $\alpha$ denote the noisy label probability (i.e., $\mathbb{P}[Z_i = Y_i] = 1 - \alpha$). Observe that the symmetric noise transition matrix is given by \begin{equation} A_{ij} = \begin{cases} 1-\alpha,& \text{if } i=j\\ \frac{\alpha}{K-1}, & \text{otherwise.} \end{cases} \label{eqn:sym_noise} \end{equation} In the case where $\alpha$ is known (and thus $A$ is known), observe that the noisy posteriors and the true posteriors are related by \begin{equation} [q_{1}(x), \cdots, q_K(x)] = [p_1(x), \cdots, p_K(x)] A. \label{eqn:transition} \end{equation} Therefore, the invertibility of $A$ yields a necessary and sufficient condition for estimating the true posteriors $p_k(x)$ from the noisy posteriors $q_k(x)$, and thus obtaining the Bayes optimal decision. Further observe that for symmetric label noise, $$ A = \frac{\alpha}{K-1} \mathbf{1}_{K \times K} + (1-\alpha - \frac{\alpha}{K-1}) I, $$ where $ \mathbf{1}_{K \times K}$ denotes the all-ones matrix in $\Re^{K \times K}$. Thus, $A$ is singular precisely when $1-\alpha - \frac{\alpha}{K-1} = 0$, that is, at $\alpha = \frac{K-1}{K}$, and this coefficient is positive for all smaller $\alpha$. In other words, the noisy plug-in classifier can tolerate label noise up to the breakdown point at $\alpha = \frac{K-1}{K}$. When $K=2$, we recover eqn \eqref{eqn:lugosi_known} of \cite[Theorem 2.2]{lugosi1992learning}. In the case where $\alpha$ and thus $A$ are unknown (while $A$ is known to have the form \eqref{eqn:sym_noise}), we can write the true posterior as a function of the noisy posterior using eqn \eqref{eqn:transition}, \begin{align} p_k(x) &= (1- \alpha - \frac{\alpha}{K-1})^{-1} (q_k(x) - \frac{\alpha}{K-1}) . \label{eqn:mono} \end{align} When $\alpha < \frac{K-1}{K}$, the coefficient $(1- \alpha - \frac{\alpha}{K-1})^{-1} > 0$, and so $p_k(x)$ is monotonically increasing in $q_k(x)$. Therefore, by monotonicity, if the noisy posteriors satisfy $q_1(x) \ge \ldots \ge q_K(x)$, then $p_1(x) \ge \ldots \ge p_K(x)$. In other words, the noisy decision coincides with the Bayes decision: $\arg \max_{k \in [K]} q_k(x) = \arg \max_{k \in [K]} p_k(x)$. Now, since $q_{kn}$ is an $L_1$-consistent estimator, the empirical noisy posterior $q_{kn}(x) \to q_k(x)$ as $n \to \infty$; hence the noisy posterior is recovered in the asymptotic limit, and Bayes optimal performance follows. \end{proof}
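The monotonicity argument is easy to check numerically. The following small sketch (illustrative only, and not part of our experiments) draws random clean posteriors, pushes them through the symmetric transition matrix \eqref{eqn:sym_noise} via \eqref{eqn:transition}, and confirms that the noisy and clean MAP decisions agree whenever $\alpha < \frac{K-1}{K}$:
\begin{verbatim}
import numpy as np

K, alpha = 10, 0.85                        # alpha < (K-1)/K = 0.9
A = alpha / (K - 1) * np.ones((K, K)) \
    + (1 - alpha * K / (K - 1)) * np.eye(K)   # eqn (sym_noise)

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(K), size=1000)   # random clean posteriors p_k(x)
q = p @ A                                  # noisy posteriors, eqn (transition)
assert (p.argmax(axis=1) == q.argmax(axis=1)).all()   # decisions agree
\end{verbatim}
Setting $\alpha$ above $\frac{K-1}{K}$ makes the diagonal coefficient negative, reversing the ordering and breaking the assertion.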
\begin{remark} \label{rem1} Even when the noise probability is unknown, symmetric label noise (up to the information-theoretic threshold) effectively maintains the ordering of the true posteriors, and therefore leads to Bayes optimality based on the noisy posteriors. However, class-dependent label noise typically leads to sub-optimality, as shown in eqn \eqref{eqn:lugosi_unknown} for binary classification and further discussed in \cite{scott2013classification}. A natural mitigation strategy relies on estimating $A$ from data \cite{patrini2017making, menon2015learning}: if $A$ can be perfectly recovered from data, then it is possible to achieve Bayes optimality for unknown, class-dependent label noise, as shown in \cite[Thm 3]{patrini2017making}. \end{remark} \begin{remark} \label{rem2} Theorem \ref{thm:main} is applicable to any $L_1$-consistent estimator. For example, AdaBoost is universally consistent when using appropriate early stopping and sufficiently rich base learners \cite{bartlett2006adaboost}; therefore it can tolerate massive symmetric label noise. However, AdaBoost without consistency guarantees is highly susceptible to symmetric label noise \cite{long2008random}. \end{remark} \begin{remark} \label{rem3} Although $L_1$-consistency is sufficient to derive robustness against label noise, it is not necessary. For example, \cite{ghosh2017robustness} show that decision trees based on Gini-impurity splitting can achieve label noise tolerance up to the statistical limit, while such estimators are not universally consistent \cite[p.~338]{dgl}. \end{remark} It remains to show that DNNs are $L_1$-consistent. Observe that $L_2$-consistency implies $L_1$-consistency \cite[Cor.~6.2]{dgl}, since for each $k \in [K]$, \begin{align} \mathbb{E}(|q_{kn}(X) - q_k(X)|) &= \int_{\Re^d} |q_{kn}(x) - q_k(x)| \mu(dx) \nonumber \\ & \le \Big( \int_{\Re^d} |q_{kn}(x) - q_k(x)|^2 \mu(dx) \Big)^{1/2}. \label{eqn:L2-consistency} \end{align} Thus, our results follow immediately from the ($L_2$) universal consistency results for DNNs in \cite{farago1993strong, lin2021universal, Drews2022}. More precisely, universal consistency of under-parameterized neural networks was established in \cite{farago1993strong, barron1994approximation} for fully-connected neural networks and in \cite{lin2021universal} for convolutional neural networks (CNNs), where by \textit{under-parameterized} we mean that, in the asymptotic limit, the ratio of the number of parameters of the DNN to the number of data samples is less than $1$. This is in contrast to \textit{over-parameterized} networks, where this ratio is greater than $1$. Remarkably, \cite{Drews2022} recently showed that over-parameterized networks can also be universally consistent, given a proper setup of the gradient descent optimization (e.g., initialization, step size, and the number of iterations). To conclude, we establish the following: \begin{cor} Consider the $K$-class classification setting of Theorem \ref{thm:main}, where the classifier $\tilde{g}_n(x)$ is an $L_1$-consistent deep neural network (DNN). If the symmetric noise probability $\alpha < \frac{K-1}{K}$, then such a DNN trained on noisy data without mitigation can achieve Bayes optimality asymptotically. \end{cor} \section{Numerical Evidence} To demonstrate our results, we conduct numerical simulations training CNNs on noisy benchmark datasets (see Appendix \ref{app} for full details). As shown in Figure \ref{fig:gap}, when training with symmetric label noise, the classification performance degrades very slowly until the statistical limit $(K-1)/K$ (yellow dotted line), whereas the tolerance bound $\frac{1}{4(K-1)}$ (grey dotted line) in \cite{pmlr-v108-li20j} is much looser. Similar empirical evidence can be found in \cite{markermap2022}, which shows that variational auto-encoder classifiers are robust to symmetric label noise up to the statistical limit. \begin{figure}[htb!] \centering \includegraphics[width=7cm]{gap.png} \caption{CNNs trained with \textit{symmetric} label noise on the MNIST and CIFAR10 datasets. Experimental results agree with the statistical analysis, and demonstrate that deep learning models can be surprisingly robust against massive symmetric label noise.} \label{fig:gap} \end{figure} As discussed in Remark \ref{rem1}, class-dependent label noise can be more harmful than symmetric label noise.
We illustrate this phenomenon in Figure \ref{fig:gap_asym}, where the class-dependent noise transition matrix is given by \begin{equation} A_{ij} = \begin{cases} 1-\alpha,& \text{if } i=j\\ \alpha, & \text{if } j=(i+1)\;\mathrm{mod}\;10 \\ 0, & \text{otherwise.} \end{cases} \label{eqn:asym_noise} \end{equation} Note that each row of $A$ in \eqref{eqn:asym_noise} has only two nonzero entries, and thus such class-dependent noise effectively reduces the multiclass problem to the binary setting (conditional on each class). Yet when $K=2$, the statistical limit $1/2$ is still more optimistic than the tolerance bound $1/4$ in \cite{pmlr-v108-li20j}, and it is achievable, as shown in Figure \ref{fig:gap_asym}. \begin{figure}[htb!] \centering \includegraphics[width=7cm]{gap_asym.png} \caption{CNNs trained with \textit{class-dependent} label noise on the MNIST and CIFAR10 datasets. Here, each class label $k \in \{1, \ldots, 10\}$ is flipped to class $(k+1)\;\mathrm{mod}\;10$ with probability $\alpha$, and remains unchanged with probability $1-\alpha$. This class-dependent label noise structure effectively reduces the multiclass setting to the binary setting. Yet our statistical limit is still achievable and tighter than the bound in \cite{pmlr-v108-li20j}.} \label{fig:gap_asym} \end{figure} \section{Discussion} This short note establishes the statistical limits of deep learning classifiers trained with label noise: deep neural networks can be surprisingly robust against symmetric label noise without mitigation. Such robustness guarantees hold for any $L_1$-consistent DNN, including both under-parameterized and over-parameterized models. Empirical simulations confirm that the statistical limit is achievable. We hope that the statistical limit might provide an impetus for efforts to understand deep learning under label noise. One interesting direction is to investigate whether the $L_1$-consistency condition can be relaxed. Our numerical experiments suggest this is plausible ($L_1$-consistency was not enforced in the models), and Remark \ref{rem3} points out a potential path by connecting ReLU-based DNNs to partition-based methods such as decision trees. In future work, we aim to study the statistical limit under general label noise structures, including class-dependent and instance-dependent noise. Based on our current results, we conjecture that mitigation strategies that make use of the noisy data, such as using them to estimate the noise structure, will outperform those that ignore the noisy data. \section*{Acknowledgements} The authors thank George A Kevrekidis, Joshua Agterberg, and Youngser Park for their valuable comments on the paper. Cong Mu and Teresa Huang are partially supported by the Johns Hopkins Mathematical Institute for Data Science (MINDS) Data Science Fellowship. Soledad Villar is supported by NSF DMS 2044349, EOARD FA9550-18-1-7007, and NSF-Simons MoDL (NSF DMS 2031985). \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} Given a coin that turns heads (denoted by 0) with probability $p$, the probability of turning tails (1) thus being $q=1-p$, von Neumann's trick takes two coin flips and returns an output by the following rule \cite{VonNeumann51}: \begin{equation}\label{eq:vNrule} 00\mapsto \lambda,\; 01\mapsto 0,\; 10\mapsto 1,\; 11\mapsto \lambda, \end{equation} where $\lambda$ indicates ``no output.'' Because $\Pr(01)=\Pr(10)=pq$, the resulting bit is unbiased. By repeating this process, we obtain a sequence of random bits, and the {\em output rate\/}, the average number of output bits per input bit, is $pq\le 1/4$ (see, for example, Exercise 5.1-3 in \cite{stein2001introduction}). Formalizing this idea~\cite{elias72,peres92,pae06-randomizing,pae-loui05}, an {\em extracting function} $f\colon\{0,1\}^n \rightarrow \{0,1\}^*$ takes $n$ independent bits of bias $p$, called a Bernoulli source of bias $p$, and returns independent and unbiased random bits, and its output rate is bounded by the Shannon entropy $H(p)=-(p\lg p+q\lg q)$. When $p=1/3$, the output rate of von Neumann's procedure is $pq=2/9\approx 0.22$, while the entropy bound is $H(1/3)\approx0.92$; the discrepancy is quite large. But there are {\em asymptotically optimal\/} extracting functions that achieve rates arbitrarily close to the entropy bound. Consider the functions defined on $\{0,1\}^2$ as follows, where $\Psi_1$ is the {\em von Neumann} function defined by the rule (\ref{eq:vNrule}): \begin{equation}\label{eq:peres-table} \begin{tabular}{c|c|c|c|c} \hline $x$& $\Pr(x)$&$\Psi_1(x)$&$u(x)$ & $v(x)$\\ \hline 00&$p^2$&$\lambda$&0&0\\ 01&$pq$&0&1&$\lambda$\\ 10&$pq$&1&1&$\lambda$\\ 11&$q^2$&$\lambda$&0&1\\ \hline \end{tabular}\bigskip \end{equation} Extend the three functions $\Psi_1$, $u$, and $v$ to $\{0,1\}^*$: for the empty string, \[ \Psi_1(\lambda)=u(\lambda)=v(\lambda)=\lambda, \] for a nonempty even-length input, define (and similarly for $u$ and $v$) \begin{equation}\label{eq:vN-ext} \Psi_1(x_1x_2\dots x_{2n})=\Psi_1(x_1x_2)*\cdots*\Psi_1(x_{2n-1}x_{2n}), \end{equation} where $*$ is concatenation, and for an odd-length input, drop the last bit and apply the function to the remaining even-length prefix. Now, define the {\em Peres function\/} $\Psi:\{0,1\}^*\to\{0,1\}^*$ by the recursion \begin{equation}\label{eq:peres-def} \left\{ \begin{split} \Psi(x) &= \Psi_1(x) * \Psi(u(x)) * \Psi(v(x)),\\ \Psi(\lambda) &=\lambda. \end{split} \right. \end{equation} This simple recursive function is extracting for each input length and, rather surprisingly, asymptotically optimal~\cite{peres92}. Its implementation is straightforward and runs very fast, in $O(n\log n)$ time for input length $n$, with a small footprint. Another advantage over other asymptotically optimal extracting algorithms, for example, the Elias algorithm~\cite{elias72}, is its uniformity. To achieve asymptotic optimality, these algorithms need to take increasingly longer inputs. While the Peres algorithm does this with the same simple fixed function $\Psi$, the Elias algorithm needs a separate computation for each input length, with increasing complexity. However, it appears harder to explain why the Peres algorithm works than the Elias algorithm, and it is quite tempting to say that the algorithm works almost like magic because of the simplicity of its definition and the complexity of its justification. So a natural question is whether we can find similar recursively defined extracting functions.
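Before moving on, we note that the recursion \eqref{eq:peres-def} translates almost verbatim into code. The following Python sketch (ours; a bit string is represented as a list of 0's and 1's) implements the rule \eqref{eq:vNrule} and the recursion:
\begin{verbatim}
def vn(x):   # Psi_1: output a on pairs ab with a != b (01 -> 0, 10 -> 1)
    return [a for a, b in zip(x[0::2], x[1::2]) if a != b]

def u(x):    # u(ab) = a XOR b
    return [a ^ b for a, b in zip(x[0::2], x[1::2])]

def v(x):    # v(ab) = a, defined only when a == b
    return [a for a, b in zip(x[0::2], x[1::2]) if a == b]

def peres(x):   # Psi(x) = Psi_1(x) * Psi(u(x)) * Psi(v(x))
    if len(x) < 2:
        return []
    return vn(x) + peres(u(x)) + peres(v(x))
\end{verbatim}
For odd-length inputs, the pairwise \texttt{zip} silently drops the last bit, matching the convention above; for example, \texttt{peres([0,1,1,0,0,0,1,1])} returns the extracted bits of an 8-bit input.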
The question remained elusive for a while, and it was only recently that its generalizations to many-valued sources were discovered~\cite{pae15}. By a {\em Peres-style\/} recursion we mean a recursion of the form \[ \Psi(x)=\Psi_1(x)*\Psi(u_1(x))*\dots*\Psi(u_l(x)), \] which defines an asymptotically optimal extracting function, where $\Psi_1$ is extracting and $u_1,\dots,u_l$ are auxiliary functions defined on inputs of a fixed finite length and extended to arbitrary-length inputs as in~\eqref{eq:vN-ext}. If one or more auxiliary functions are omitted, then the resulting recursive function is still extracting but no longer asymptotically optimal. We always call the base function of the recursion the von Neumann function and write it as $\Psi_1$. We report a new way to understand and justify Peres-style recursive algorithms using the recently developed {\em binarization tree\/}~\cite{DBLP:conf/isit/Pae16}. It provides a simple and unified viewpoint that explains the inner workings of the Peres algorithm and its recently found generalizations. Furthermore, we report new Peres-style recursive algorithms that would arguably have been hard to come by without this new tool. \section{Binarization Tree and Peres Algorithm} Summarized below is the necessary background on extracting functions and binarization trees. In particular, for a binarization tree, we give a ``structure lemma'' and an ``entropy lemma.'' The entropy lemma is also known as the ``leaf entropy theorem'' (see, for example, Section 6.2.2 of \cite{Knuth:art3}), and it is mainly related to the asymptotic optimality of Peres-style recursive algorithms defined by a binarization tree. The structure lemma was first discussed in~\cite{DBLP:conf/isit/Pae16}, and, in our context, it is used to show that our algorithms are extracting. For more rigorous treatments of these subjects, see \cite{pae06-randomizing,pae15,DBLP:conf/isit/Pae16}. Then, using these new tools, we give a proof that the original Peres algorithm is extracting and asymptotically optimal. \subsection{Extracting Functions} \begin{definition}[\cite{peres92,pae06-randomizing}]\label{def:extracting} A function $f\colon\{0,1,\dots,m-1\}^{n}\to \{0,1\}^\ast$ is {\em $m$-extracting} if for each pair $z_1,z_2$ in $\{0,1\}^\ast$ such that $|z_1|=|z_2|$, we have $\Pr(f(x)=z_1)=\Pr(f(x)=z_2)$, regardless of the distribution $\dist{p_0,\dots,p_{m-1}}$. \end{definition} \noindent Denote by $S_{(n_0,n_1,\dots,n_{m-1})}$ the subset of $\{0,1,\dots,m-1\}^{n}$ that consists of strings with $n_i$ $i$'s. Then \[ \{0,1,\dots,m-1\}^{n}=\bigcup_{n_0+n_1+\dots+n_{m-1}=n} S_{(n_0,n_1,\dots,n_{m-1})}, \] and each $S_{(n_0,n_1,\dots,n_{m-1})}$ is an {\em equiprobable\/} subset of elements whose probability of occurrence is $p_0^{n_0}p_1^{n_1}\cdots p_{m-1}^{n_{m-1}}$. When $m=2$, an equiprobable set $S_{(l,k)}$ is also written as $S_{n,k}$, where $n=l+k$, and its size can be written as an equivalent binomial coefficient as well as a multinomial one: \[ {n\choose k}={n\choose{l,k}}. \] An equivalent condition for a function to be extracting is that it sends equiprobable sets to multisets consisting of copies of the full sets $\{0,1\}^N$ of binary strings, for various lengths $N$. For example, Table~\ref{tbl:example} shows how the von Neumann and Peres functions send equiprobable sets to such multisets.
\begin{table}[h] \begin{center} \begin{tabular}{c|c|c|c|c||c|c} \hline $k$ ($n=6$)& $\Pr(x)$ &$|S_{n,k}|$ & von Neumann ($\Psi_1$)& bits& Peres ($\Psi$) & bits\\ \hline $k=0$ & $p^6$ & 1 & $\{\lambda\}$ &0& $\{\lambda\}$ &0\\ $k=1$ & $p^5q$ & 6& $3\cdot\{0,1\}$ &6& $\{0,1\},\{0,1\}^2$&10 \\ $k=2$ & $p^4q^2$&15 &$3\cdot\{\lambda\},\, 3\cdot\{0,1\}^2$ & 24& $\{\lambda\},\, \{0,1\},\, \{0,1\}^2,\, \{0,1\}^2$&34\\ $k=3$&$p^3q^3$&20&$6\cdot\{0,1\},\,\{0,1\}^3$&28&$\{0,1\}^2,\,3\cdot\{0,1\}^3$&56\\ $k=4$ & $p^2q^4$&15 &$3\cdot\{\lambda\},\, 3\cdot\{0,1\}^2$ &24& $\{\lambda\},\, \{0,1\},\, \{0,1\}^2,\, \{0,1\}^2$ &34 \\ $k=5$ & $pq^5$ & 6& $3\cdot\{0,1\}$ &6& $\{0,1\},\{0,1\}^2$ &10\\ $k=6$ & $q^6$ & 1 & $\{\lambda\}$ &0& $\{\lambda\}$&0 \\ \hline \end{tabular} \caption{Multiset images of equiprobable sets under extracting functions}\label{tbl:example} \end{center} \end{table} \begin{definition}[\cite{pae15}] A multiset $A$ of bit strings is {\em extracting} if, for each $z$ that occurs in $A$, every bit string of length $|z|$ occurs in $A$ the same number of times as $z$ does. \end{definition} \begin{lemma}[\cite{pae15}]\label{lemma:extracting-multiset} A function $f\colon\{0,1,\dots,m-1\}^{n}\to \{0,1\}^\ast$ is extracting if and only if its multiset image of each equiprobable set $S_{(n_0,n_1,\dots,n_{m-1})}$ is extracting. \end{lemma} \subsection{Binarization Tree} Let $X$ be a random variable over $\{0, 1,\dots, m-1\}$ (or, rather, a die with $m$ faces) with probability distribution $\dist{p_0,\dots,p_{m-1}}$. A sequence $x=x_1\dots x_n\in\{0,1,\dots,m-1\}^{n}$ is regarded as $n$ samples taken from $X$. Given a function $\phi\colon\{0,1,\dots,m-1\}\to\{\lambda,0,1,\dots,k-1\}$, the random variable $\phi(X)$ has distribution $\dist{\pi_0,\dots,\pi_{k-1}}$, where \[ \pi_0=\sum_{\phi(i)=0}p_i/s,\,\dots,\,\pi_{k-1}=\sum_{\phi(i)=k-1}p_i/s,\,\text{and}\; s=\sum_{\phi(i)\not=\lambda}p_i. \] Extend $\phi$ to $\{0,1,\dots,m-1\}^n$ by letting, for $x=x_1\dots x_n$, $\phi(x)=\phi(x_1)*\dots*\phi(x_n)$. Then, for an equiprobable set $S=S_{(n_0,\dots,n_{m-1})}$, its image under $\phi$ is equiprobable, that is, \[ \phi(S)=S_{(l_0,\dots,l_{k-1})}, \] where \[ l_0=\sum_{\phi(i)=0}n_i,\dots, l_{k-1}=\sum_{\phi(i)=k-1}n_i. \] Consider a tree with $m$ external nodes labeled uniquely with $0,1,\dots,m-1$. For an internal node $v$ of degree $k$, define a function $\phi_v\colon \{0,1,\dots,m-1\}\to\{\lambda,0,1,\dots,k-1\}$ as follows: \[ \phi_v(x)=\left\{\begin{array}{ll} i,&\text{if $x\in\mathrm{leaf}_i(v)$, for $i=0,\dots,k-1$,}\\ \lambda,&\text{otherwise,} \end{array}\right. \] where $\mathrm{leaf}_i(v)$ is the set of external nodes in the $i$th subtree of $v$.
When $X$ is an $m$-valued source, call such a tree an {\em $m$-binarization tree\/} over $X$ and the $\phi_v$ its {\em component functions.} For example, the following tree with 10 external nodes \begin{equation}\label{eq:example-tree} \raisebox{-.5\height}{\includegraphics[scale=1]{figs-31.mps}} \end{equation} defines the following component functions: \begin{equation}\label{eq:yet-another-bin-eg} \medskip \begin{tabular}{c|c|c|c|c|c} \hline $x$& $\Phi_1(x)$&$\Phi_2(x)$ & $\Phi_3(x)$ & $\Phi_4(x)$ & $\Phi_5(x)$\\ \hline 0&2&$\lambda$&0&1&1\\ 1&2&$\lambda$&0&0&$\lambda$\\ 2&0&0&$\lambda$&$\lambda$&$\lambda$\\ 3&2&$\lambda$&2&$\lambda$&$\lambda$\\ 4&2&$\lambda$&0&1&0\\ 5&0&1&$\lambda$&$\lambda$&$\lambda$\\ 6&1&$\lambda$&$\lambda$&$\lambda$&$\lambda$\\ 7&2&$\lambda$&1&$\lambda$&$\lambda$\\ 8&2&$\lambda$&0&1&2\\ 9&2&$\lambda$&0&1&3\\ \hline \end{tabular} \end{equation} For $S=S_{(n_0,n_1,\dots,n_{m-1})}$, we have \begin{align*} \Phi_1(S)&=S_{(n_2+n_5,n_6,n_0+n_1+n_3+n_4+n_7+n_8+n_9)},\\ \Phi_2(S)&=S_{(n_2,n_5)},\\ \Phi_3(S)&=S_{(n_0+n_1+n_4+n_8+n_9,n_7,n_3)},\\ \Phi_4(S)&=S_{(n_1,n_0,n_4,n_8,n_9)},\\ \Phi_5(S)&=S_{(n_4,n_0,n_8,n_9)}, \end{align*} and the sizes $|S|$ and $|\Phi_i(S)|$ satisfy \begin{align*} |S|&={n\choose{n_0,\dots,n_9}}\\ &={n\choose{n_2+n_5,n_6,n_0+n_1+n_3+n_4+n_7+n_8+n_9}} {n_2+n_5\choose{n_2,n_5}}\dots {n_4+n_0+n_8+n_9\choose{n_4,n_0,n_8,n_9}}\\ &=\prod|\Phi_i(S)|. \end{align*} In fact, we have a stronger claim. A proof is given in the Appendix. \begin{lemma}[Structure Lemma]\label{lemma:structure} Let $\Phi=\{\Phi_1,\dots,\Phi_{M}\}$ be the set of component functions defined by an $m$-binarization tree. Then the mapping $\Phi\colon x\mapsto \Phi(x)= (\Phi_1(x),\dots,\Phi_{M}(x))$ gives a one-to-one correspondence between an equiprobable subset $S=S_{(n_0,n_1,\dots,n_{m-1})}$ and $\Phi_1(S)\times\cdots\times\Phi_{M}(S)$, the Cartesian product of the equiprobable sets $\Phi_j(S)$. \end{lemma} For a node $v$ of degree $k$ in a binarization tree $T$, let \[ P(v)=\sum_{i\in\mathrm{leaf}(v)}p_i, \] where $\mathrm{leaf}(v)=\bigcup_{i=0,\dots,k-1} \mathrm{leaf}_i(v)$, and let \begin{align*} \pi_0(v)&=\sum_{i\in\mathrm{leaf}_0(v)}p_i/P(v),\\ &\vdots\\ \pi_{k-1}(v)&=\sum_{i\in\mathrm{leaf}_{k-1}(v)}p_i/P(v), \end{align*} so that $\phi_v(X)$ has the distribution $\pi(v)=\dist{\pi_0(v),\dots,\pi_{k-1}(v)}$. Then we have the following lemma, which is also well known as the leaf entropy theorem. See, for example, Lemma E in Section 6.2.2 of~\cite{Knuth:art3}. \begin{lemma}[Entropy Lemma]\label{lemma:entropy} \[ H(X)=\sum_{v\in T}P(v)H(\pi(v)). \] \end{lemma} \subsection{Peres Algorithm Revisited} Let $Y$ be a Bernoulli random variable with distribution $\dist{p,q}$. Consider the following binarization tree over $X=Y^2$, the random variable with values $\{0,1\}^2=\{00,01,10,11\}$ and distribution $\dist{p^2,pq,pq,q^2}$: \begin{equation}\label{eq:original-peres-tree} \raisebox{-.5\height}{\includegraphics[scale=1]{figs-11.mps}} \end{equation} Then the component functions $\{u,v,\Psi_1\}$ defined by this binarization tree are exactly the same as those of the Peres algorithm given in \eqref{eq:peres-table}! \begin{theorem}\label{thm:peres-extracting} The Peres function $\Psi$ is extracting. \end{theorem} \begin{proof} We claim that, for an equiprobable set $S\subset\{0,1\}^{2n}$, $\Psi(S)=\Psi_1(S)*\Psi(u(S))*\Psi(v(S))$ as multisets; this does not hold for arbitrary sets.
But, for equiprobable sets, we have the one-to-one correspondence $\Phi$ given by the binarization tree~\eqref{eq:original-peres-tree}, and $\Phi(S)=\Psi_1(S)\times u(S)\times v(S)$ by the structure lemma. Consider a function $\Psi'$ on $\{0,1\}^*\times\{0,1\}^*\times\{0,1\}^*$ defined by $\Psi'(x,u,v)=x*\Psi(u)*\Psi(v)$. For sets $A,B,$ and $C$, we have $\Psi'(A\times B\times C)=A*\Psi(B)*\Psi(C)$. Since $\Psi=\Psi'\circ\Phi$, we conclude that $\Psi(S)=\Psi_1(S)*\Psi(u(S))*\Psi(v(S))$. Note, here, that $u(S)$ and $v(S)$ are equiprobable. Now, by induction on the length of strings, $\Psi(u(S))$ and $\Psi(v(S))$ are extracting. Since $\Psi_1$ is extracting, so is $\Psi_1(S)$. So their concatenation $\Psi(S)$ is extracting, and thus $\Psi$ is extracting. \end{proof} \begin{theorem}\label{thm:peres-optimal} The Peres function $\Psi$ is asymptotically optimal. \end{theorem} \begin{proof} By the entropy lemma, \[ H(Y^2)=2pqH(\Psi_1(Y^2))+H(u(Y^2))+(p^2+q^2)H(v(Y^2)). \] The nodes of our binarization tree have distributions \begin{align*} u(Y^2)&:\quad \dist{p^2+q^2,2pq},\\ v(Y^2)&:\quad \dist{p^2/(p^2+q^2), q^2/(p^2+q^2)},\\ \Psi_1(Y^2)&:\quad \dist{\frac12,\frac12}. \end{align*} Since $H(Y^2)=2H(p)$ and $H(\Psi_1(Y^2))=1$, we have \begin{equation}\label{eq:original-peres-entropy} H(p)=pq+\frac12 H(p^2+q^2)+\frac12(p^2+q^2)H(p^2/(p^2+q^2)). \end{equation} Consider the truncated versions of the Peres function, whose recursion depths are bounded by $\nu$, defined as follows: \begin{align*} \Psi_\nu(x)&=\Psi_1(x)*\Psi_{\nu-1}(u(x))*\Psi_{\nu-1}(v(x)),\\ \Psi_0(x)&=\lambda. \end{align*} So the von Neumann function $\Psi_1$ has recursion depth 1, and if $|x|\le 2^\nu$, then $\Psi(x)=\Psi_\nu(x)$. The output rate $r_\nu(p)$ of $\Psi_\nu$, for the source distribution $\dist{p,q}$, satisfies the recursion (see \cite{peres92,pae15,DBLP:journals/ipl/Pae13}) \begin{align}\label{eq:original-peres-rate-rec} r_\nu(p) = r_1(p) + \frac12 r_{\nu-1}(p^2+q^2) + \frac12 (p^2+q^2) r_{\nu-1}(p^2/(p^2+q^2)). \end{align} Note here that $u(Y^2)$ and $v(Y^2)$ have distributions $\dist{p^2+q^2,2pq}$ and $\dist{p^2/(p^2+q^2), q^2/(p^2+q^2)}$, respectively, and that $r_1(p)=pq$ and $r_0(p)=0$. Consider the operator $T$ on $\{f:[0,1]\to\bfR\mid \lim_{t\to0}f(t)=\lim_{t\to1}f(t)=0\}$ defined by \begin{equation}\label{eq:original-peres-op} T(f)(p)=r_1(p)+ \frac12 f(p^2+q^2) + \frac12 (p^2+q^2) f(p^2/(p^2+q^2)). \end{equation} Then $r_\nu(p)=T^{(\nu)}(r_0)(p)$ is increasing in $\nu$ and bounded by $H(p)$. By \eqref{eq:original-peres-entropy}, $H(p)$ is a fixed point of $T$, and thus $\lim_{\nu\to\infty}r_\nu(p)=H(p)$. \end{proof} In the rest of the paper, for each Peres-style recursion, we give a binarization tree whose component functions define the von Neumann function $\Psi_1$ and the auxiliary functions $u_1,\dots,u_l$. The resulting recursive function $\Psi(x)=\Psi_1(x)*\Psi(u_1(x))*\dots*\Psi(u_l(x))$ is extracting in exactly the same manner as in Theorem~\ref{thm:peres-extracting}; an equiprobable set $S$ is sent to an extracting multiset $\Psi_1(S)$ and the Cartesian product of equiprobable sets $u_1(S)\times\dots\times u_l(S)$, which in turn yields the extracting multiset $\Psi(u_1(S))*\dots*\Psi(u_l(S))$, so that $\Psi(S)$ is extracting.
For asymptotic optimality, the operator $T$ is again defined by the binarization tree to be \begin{equation}\label{eq:T-def} T(f)(\ap)=r_1(\ap)+ P(u_1) f(\pi(u_1))+\dots+ P(u_l) f(\pi(u_l)), \end{equation} where $r_1(\ap)$ is the output rate of the von Neumann function $\Psi_1$, $P(u_i)$ is the probability of the node corresponding to the component function $u_i$, and $\pi(u_i)$ is that node's branching probability distribution. In the same manner as in Theorem~\ref{thm:peres-optimal}, the output rates of the truncated recursive functions form a monotone increasing sequence converging to the Shannon entropy: by the entropy lemma, the entropy is a fixed point of the operator $T$ defined by the binarization tree. \section{Peres-Style Recursive Algorithms} Sections~\ref{sec:three-face} and \ref{sec:four-face} present the binarization trees for the recently found generalizations of the Peres algorithm in~\cite{pae15}, and the rest of the paper discusses brand-new Peres-style recursive algorithms. \subsection{3-Face Peres Algorithm}\label{sec:three-face} Suppose that we want to find a generalization of the Peres algorithm that works on a 3-faced source $Y$ with distribution $\dist{p,q,r}$. As in the original Peres algorithm, we take two samples and use the obvious generalization of the von Neumann function, for which we keep the same name $\Psi_1$. We devise a binarization tree with $3^2=9$ external nodes, whose component functions include $\Psi_1$: \[ \includegraphics[scale=1]{figs-121.mps} \] One can verify that the component functions are the same as the auxiliary functions of the 3-faced Peres function given in~\cite{pae15}, except that the functions $\Psi_{11}$, $\Psi_{12}$ and $\Psi_{13}$, which have disjoint supports, are combined, which we denote by $\Psi_1=\Psi_{11}\oplus \Psi_{12}\oplus \Psi_{13}$: \begin{center} \begin{tabular}{c|c|c|c|c|c} \hline $x$ & $\Pr(x)$ & $\Psi_1(x)$ & $u(x)$ & $v(x)$ & $w(x)$\\ \hline 00& $p^2$ & $\lambda$ & 0 & 0 & $\lambda$\\ 01& $pq$ & 0 & 1 & $\lambda$ & 1\\ 02& $pr$ & 0 & 1 & $\lambda$ & 2\\ 10& $pq$ & 1 & 1 & $\lambda$ & 1\\ 11& $q^2$ & $\lambda$ & 0 & 1 & $\lambda$\\ 12& $qr$ & 0 & 1 & $\lambda$ & 0\\ 20& $pr$ & 1 & 1 & $\lambda$ & 2\\ 21& $qr$ & 1 & 1 & $\lambda$ & 0\\ 22& $r^2$ & $\lambda$ & 0 & 2 & $\lambda$\\ \hline \end{tabular} \end{center} Since $\Psi_{11}$, $\Psi_{12}$ and $\Psi_{13}$ are extracting and have disjoint supports, $\Psi_1(x)=\Psi_{11}(x)*\Psi_{12}(x)*\Psi_{13}(x)$, and thus $\Psi_1$ is extracting. The resulting recursive function $\Psi(x)=\Psi_1(x)*\Psi(u(x))*\Psi(v(x))*\Psi(w(x))$ is then extracting and asymptotically optimal, with entropy bound $H(p,q,r)$; the component functions admit a compact closed form, sketched below.
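Reading the table row by row: $\Psi_1(ab)$ outputs $0$ if $a<b$ and $1$ if $a>b$; $u(ab)$ records whether $a\neq b$; $v(ab)=a$ when $a=b$; and $w(ab)=(a+b)\bmod 3$ when $a\neq b$. The following Python sketch (ours, in the same style as the binary one above, with inputs given as lists over $\{0,1,2\}$) implements these functions and the resulting recursion:
\begin{verbatim}
def psi1(x):   # combined base function on pairs ab with a != b
    return [int(a > b) for a, b in zip(x[0::2], x[1::2]) if a != b]

def u3(x):     # 0 on equal pairs, 1 otherwise
    return [int(a != b) for a, b in zip(x[0::2], x[1::2])]

def v3(x):     # the common symbol of an equal pair
    return [a for a, b in zip(x[0::2], x[1::2]) if a == b]

def w3(x):     # (a + b) mod 3 on unequal pairs
    return [(a + b) % 3 for a, b in zip(x[0::2], x[1::2]) if a != b]

def peres3(x): # Psi(x) = Psi_1(x) * Psi(u(x)) * Psi(v(x)) * Psi(w(x))
    if len(x) < 2:
        return []
    return psi1(x) + peres3(u3(x)) + peres3(v3(x)) + peres3(w3(x))
\end{verbatim}
Note that the same four functions apply unchanged to the binary-valued strings produced by $u$, so the recursion needs no special cases.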
\subsection{4-Face Peres Algorithm}\label{sec:four-face} A 4-face Peres function is given in \cite{pae15}, and it is defined by the following binarization tree with $4^2=16$ external nodes: \[ \includegraphics[scale=1]{figs-13.mps} \] whose component functions are as follows, where, again as in \cite{pae15}, $\Psi_1=\Psi_{11}\oplus\Psi_{12}\oplus\dots\oplus\Psi_{16}$: \begin{center} \begin{tabular}{c|c|c|c|c|c} \hline $x$ & $\Psi_1(x)$ & $u(x)$ & $v(x)$ & $w_1(x)$ & $w_2(x)$ \\ \hline 00& $\lambda$ & 0 & 0 & $\lambda$ & $\lambda$\\ 01& 0 & 1 & $\lambda$ & 0 & $\lambda$ \\ 02& 0 & 1 & $\lambda$ & 1 & $\lambda$ \\ 03& 0 & 1 & $\lambda$ & 2 & $\lambda$ \\ 10& 1 & 1 & $\lambda$ & 0 & $\lambda$ \\ 11& $\lambda$ & 0 & 1 & $\lambda$ & $\lambda$\\ 12& 0 & 1 & $\lambda$ & 3 & $\lambda$ \\ 13& 0 & 2 & $\lambda$ & $\lambda$ & 0 \\ 20& 1 & 1 & $\lambda$ & 1 & $\lambda$ \\ 21& 1 & 1 & $\lambda$ & 3 & $\lambda$ \\ 22& $\lambda$ & 0 & 2 & $\lambda$ & $\lambda$\\ 23& 0 & 2 & $\lambda$ & $\lambda$ & 1 \\ 30& 1 & 1 & $\lambda$ & 2 & $\lambda$ \\ 31& 1 & 2 & $\lambda$ & $\lambda$ & 0 \\ 32& 1 & 2 & $\lambda$ & $\lambda$ & 1 \\ 33& $\lambda$ & 0 & 3 & $\lambda$ & $\lambda$\\ \hline \end{tabular} \end{center} Alternatively, consider, for example, \[ \includegraphics[scale=1]{figs-131.mps} \] with the corresponding component functions as follows: \begin{center} \begin{tabular}{c|c|c|c|c|c} \hline $x$ & $\Psi_1(x)$ & $u(x)$ & $v(x)$ & $w_1(x)$ & $w_2(x)$ \\ \hline 00& $\lambda$ & 0 & 0 & $\lambda$ & $\lambda$\\ 01& 0 & 1 & $\lambda$ & 0 & $\lambda$ \\ 02& 0 & 1 & $\lambda$ & 1 & $\lambda$ \\ 03& 0 & 1 & $\lambda$ & 2 & $\lambda$ \\ 10& 1 & 1 & $\lambda$ & 0 & $\lambda$ \\ 11& $\lambda$ & 0 & 1 & $\lambda$ & $\lambda$\\ 12& 0 & 1 & $\lambda$ & 3 & $\lambda$ \\ 13& 0 & 1 & $\lambda$ & $\lambda$ & 0 \\ 20& 1 & 1 & $\lambda$ & 1 & $\lambda$ \\ 21& 1 & 1 & $\lambda$ & 3 & $\lambda$ \\ 22& $\lambda$ & 0 & 2 & $\lambda$ & $\lambda$\\ 23& 0 & 1 & $\lambda$ & $\lambda$ & 1 \\ 30& 1 & 1 & $\lambda$ & 2 & $\lambda$ \\ 31& 1 & 1 & $\lambda$ & $\lambda$ & 0 \\ 32& 1 & 1 & $\lambda$ & $\lambda$ & 1 \\ 33& $\lambda$ & 0 & 3 & $\lambda$ & $\lambda$\\ \hline \end{tabular} \end{center} \subsection{3-bit Peres Algorithm} Now consider a brand-new situation in which $m=2$ but the component functions are defined on 3 bits instead of 2 bits, as in all the examples given above: \begin{equation}\label{eq:three-bit-peres} \raisebox{-.5\height}{\includegraphics[scale=1]{figs-14.mps}} \end{equation} \[ \begin{tabular}{c|c|c|c|c|c|c|c} \hline $x$&$\Pr(x)$&$u(x)$&$v(x)$&$v_1(x)$&$v_2(x)$&$\Psi_1(x)$&$w(x)$\\ \hline 000&$p^3$&0&0&0&$\lambda$& $\lambda$&$\lambda$ \\ 001&$p^2q$&1&$\lambda$&$\lambda$&$\lambda$& 0&0 \\ 010&$p^2q$&1&$\lambda$&$\lambda$&$\lambda$& 1&0 \\ 011&$pq^2$&1&$\lambda$&$\lambda$&$\lambda$& 0&1 \\ 100&$p^2q$&0&1&$\lambda$&0& $\lambda$&$\lambda$ \\ 101&$pq^2$&1&$\lambda$&$\lambda$&$\lambda$& 1&1 \\ 110&$pq^2$&0&1&$\lambda$&1& $\lambda$&$\lambda$ \\ 111&$q^3$&0&0&1&$\lambda$& $\lambda$&$\lambda$ \\ \hline \end{tabular} \] The three-bit von Neumann function $\Psi_1=\Psi_{11}\oplus\Psi_{12}$ does not utilize the inputs 100 and 110, and the output rate $2(p+q)pq/3=2pq/3$ is strictly smaller than the rate $pq$ of the two-bit case. Therefore, even though the 3-bit Peres algorithm is asymptotically optimal, its convergence to the entropy bound must be slower. \subsection{4-bit Peres Algorithm} If we wanted a 4-bit Peres function in this fashion, could we use $E_4$, the Elias function of input size 4, as the base of the recursion?
Note that, in the three-bit case, the $\Psi_1$ of \eqref{eq:three-bit-peres} is actually $E_3$. With $E_4$, among the 16 inputs, only 2, namely 0000 and 1111, are wasted. Consider the following binarization tree: \begin{equation}\label{eq:four-bit-peres} \raisebox{-.5\height}{\includegraphics[scale=1]{figs-15.mps}} \vspace{.5cm} \end{equation} So we have the following recursion: \[ \Psi(x)=E_4(x)*\Psi(u(x))*\Psi(v(x))*\Psi(w(x))*\Psi(w_1(x))*\Psi(w_2(x)) \] The rate of $E_4$ is \begin{align*} \frac14\left(2\cdot4p^3q+(2\cdot4+2)p^2q^2+2\cdot4pq^3\right)&=\frac14pq(8p^2+10pq+8q^2)\\ &=pq(1+p^2+q^2+\frac12pq)\\ &\ge1.625\cdot pq. \end{align*} However, it seems that the convergence is slower than for the original Peres function. For a fair comparison, we need to see how the original Peres function performs on $\{0,1\}^{4n}$. Consider, for $x\in\{0,1\}^4$, \[ \Psi^2(x)=\Psi_1(x)*\Psi_1(u(x))*\Psi_1(v(x))*\Psi^2(uu(x))*\Psi^2(vu(x))*\Psi^2(uv(x))*\Psi^2(vv(x)). \] The output rate of the base part of this recursion is \begin{align*} pq+pq(p^2+q^2)+\frac12p^2q^2/(p^2+q^2)&=pq(1+p^2+q^2+\frac12pq/(p^2+q^2))\\ &>pq(1+p^2+q^2+\frac12pq). \end{align*} So the comparison favors the original Peres function. The following plot compares this rate with that of $E_4$, the red curve being the rate of $\Psi^2$: \[ \includegraphics[scale=0.6]{4-bit-base-vs-double-Peres-base.pdf} \] \subsection{Dijkstra's roulette} Dijkstra's one-page paper~\cite{dijkstra90} describes an interesting algorithm that simulates a fair roulette with a biased coin: suppose $m$ is a prime; take $m$ flips of the coin, encoded as a binary string $x$ in $\{0,1\}^m$. If $x=0\dots0$ or $x=1\dots1$, then try again; otherwise, output $y$, the number of cyclic shifts needed to bring $x$ to its lexicographically smallest rotation (a short sketch of this rule is given at the end of this subsection). The virtues of this scheme are, as with the Peres algorithm, simplicity and efficiency, although its output rate is much lower than that of, for example, Elias's algorithm for simulating an $m$-faced die with a coin, which is again asymptotically optimal with output rate approaching $H_m(p)$, the Shannon entropy with base $m$. Using Dijkstra's scheme together with Peres-style recursion, we can attain both kinds of virtue. Consider the simple case $m=3$ with a biased coin as the source. Dijkstra's scheme enhanced with Peres-style recursion is described by the following binarization tree: \[ \includegraphics[scale=1]{figs-18.mps} \] \begin{center} \begin{tabular}{c|c|c|c|c|c} \hline $x$ & $\Pr(x)$ & $\Psi_1(x)$ & $u(x)$ & $v(x)$ & $w(x)$\\ \hline 000& $p^3$ & $\lambda$ & 0 & 0 & $\lambda$\\ 001& $p^2q$ & 0 & 1 & $\lambda$ & 0\\ 010& $p^2q$ & 1 & 1 & $\lambda$ & 0\\ 011& $pq^2$ & 0 & 1 & $\lambda$ & 1\\ 100& $p^2q$ & 2 & 1 & $\lambda$ & 0\\ 101& $pq^2$ & 2 & 1 & $\lambda$ & 1\\ 110& $pq^2$ & 1 & 1 & $\lambda$ & 1\\ 111& $q^3$ & $\lambda$ & 0 & 1 & $\lambda$\\ \hline \end{tabular} \end{center} Note here that the base function has three branches while the auxiliary functions are binary, because we use a coin as the source and the outputs are to be three-valued. The resulting recursion $\Psi(x)=\Psi_1(x)*\Psi(u(x))*\Psi(v(x))*\Psi(w(x))$ outputs uniform three-valued random numbers with an output rate that approaches $H_3(p)$, the Shannon entropy with base 3, as the input size tends to infinity.
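For concreteness, here is a small Python sketch (ours) of Dijkstra's rule for a prime $m$; the shift-counting convention is chosen to match the table above:
\begin{verbatim}
def dijkstra(x):   # x: a string of m coin flips, e.g. "010"
    m = len(x)
    rot = [x[m-k:] + x[:m-k] for k in range(m)]  # k cyclic right shifts
    if x in (m * "0", m * "1"):                  # constant string: retry
        return None
    return rot.index(min(rot))   # value in {0, ..., m-1}
\end{verbatim}
Each nonconstant string is equally likely within its rotation orbit, and for prime $m$ every orbit has exactly $m$ distinct rotations, so the output is uniform on $\{0,\dots,m-1\}$; for example, \texttt{dijkstra("010")} returns 1, matching the $\Psi_1$ column of the table.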
When $m=5$, consider the following binarization tree: \begin{equation}\label{eq:tree-m-five} \raisebox{-.5\height}{\includegraphics[scale=1]{figs-20.mps}} \end{equation} where $\Psi_{11}$,\dots,$\Psi_{16}$ are five-valued extracting functions, which give the base function $\Psi_1=\Psi_{11}\oplus\dots\oplus\Psi_{16}$. Now, as $m$ increases, as we can already see in the $m=5$ case, the corresponding binarization tree becomes so complicated that the advantage of simplicity disappears quickly. Note that, in \eqref{eq:tree-m-five}, the supports of the functions $\Psi_{11},\dots,\Psi_{16}$ are exactly the orbits of the group action by rotation on $\{0,1\}^5$~\cite{DBLP:books/daglib/0070572}. For a prime number $m$, there are $(2^m-2)/m$ such orbits, and Dijkstra's algorithm is based on this property. So the height of the binarization tree grows almost linearly, and the number of nodes exponentially. However, observe that the subtree rooted at $w$ in \eqref{eq:tree-m-five} can be regarded as a binary search tree whose search keys are $\Psi_{11},\dots,\Psi_{16}$. So we can make a compromise and keep only the nodes with a high probability of producing output. For example, for $m=11$, consider the following binarization tree: \[ \includegraphics[scale=1]{figs-21.mps} \] Here, we partition the orbits into four sets $S_1,\dots,S_4$ appropriately, for example, by the number of 1's. Then the auxiliary functions $w$, $w_1$ and $w_2$ are easily computed by counting the number of 1's in the input $x\in\{0,1\}^{11}$. The base function $\Psi_1$ is computed using the original Dijkstra algorithm. The corresponding Peres-style recursion \[ \Psi(x)=\Psi_1(x)*\Psi(u(x))*\Psi(v(x))*\Psi(w_1(x))*\Psi(w_2(x)) \] is still extracting but not asymptotically optimal. \section*{Appendix: The Proof of the Structure Lemma} Given a binarization tree, let $T$ be a subtree and $X_T$ the restriction of $X$ to the leaf set of $T$. The leaf entropy theorem is proved by induction using the following recursion: \begin{equation}\label{eq:leaf-entropy-rec} H(X_T)=\begin{cases} 0,& \text{if $T$ is a leaf,}\\ H(\pi)+\pi_0H(X_{T_0})+\dots+\pi_{d-1}H(X_{T_{d-1}}), &\text{otherwise,} \end{cases} \end{equation} where, for a non-leaf $T$ whose root $v$ has degree $d$, $T_0,\dots,T_{d-1}$ are the subtrees of $v$ and $\pi=\dist{\pi_0,\dots,\pi_{d-1}}$ is the branching distribution of $v$. The structure lemma holds for a similar reason. \begin{proof}[Proof of the Structure Lemma] For an equiprobable subset $S=S_{(n_0,\dots,n_{m-1})}$ and a subtree $T$ of the given binarization tree, let $S_T$ be the restriction of $S$ to the leaf set of $T$. Then we have a similar recursion \begin{equation}\label{eq:structure-rec} S_T\cong\begin{cases} \{0\}, & \text{if $T$ is a leaf,}\\ S_{(l_0,\dots,l_{d-1})}\times S_{T_0}\times\dots\times S_{T_{d-1}}, &\text{otherwise,} \end{cases} \end{equation} where, for a non-leaf $T$ with root $v$, $T_0,\dots,T_{d-1}$ are the subtrees of $v$ and \[ l_0=\sum_{\phi_v(i)=0}n_i,\,\dots,\, l_{d-1}=\sum_{\phi_v(i)=d-1}n_i, \] so that $\phi_v(S)=S_{(l_0,\dots,l_{d-1})}$. First, if $T$ is a leaf with label $i$, then $S_T$ is a singleton set consisting of the single string of $n_i$ $i$'s, hence the first part of \eqref{eq:structure-rec}. When $T$ is not a leaf, the correspondence $S_T\to S_{(l_0,\dots,l_{d-1})}\times S_{T_0}\times\dots\times S_{T_{d-1}}$ is given by $x\mapsto (\phi_v(x),x_{T_0},\dots,x_{T_{d-1}})$, where $x_{T_0},\dots,x_{T_{d-1}}$ are the restrictions of $x$.
This correspondence is one-to-one because $\phi_v(x)$ encodes the branching with which $x$ is recovered from $(x_{T_0},\dots,x_{T_{d-1}})$, giving an inverse mapping $S_{(l_0,\dots,l_{d-1})}\times S_{T_0}\times\dots\times S_{T_{d-1}}\to S_T$. For example, consider the tree \eqref{eq:example-tree} and suppose that $T$ is the subtree rooted at the node {\it 3}. For $x=207643590289787$, the following shows the restrictions of $x$ and the $\Phi_i(x)$'s. \[ \medskip \raisebox{-.5\height}{\includegraphics[scale=1]{figs-310.mps}} \] By taking symbols one by one from $x_{T_0}=0490898$, $x_{T_1}=777$, and $x_{T_2}=3$ according to $\Phi_3(x)=01020000101=(b_i)_{i=1}^{11}$ (when $b_i=j$, the next symbol is taken from $x_{T_j}$), we recover $x_T=07439890787$. Induction on subtrees proves the lemma. \end{proof}
\section{Introduction} Social media has exploded as a category of online discourse where people create content, share it, bookmark it and network at a prodigious rate. Examples include Facebook, MySpace, Digg, Twitter and, on the academic side, JISC listservs. Because of its ease of use, speed and reach, social media is fast changing the public discourse in society and setting trends and agendas in topics that range from the environment and politics to technology and the entertainment industry. Since social media can also be construed as a form of collective wisdom, we decided to investigate its power at predicting real-world outcomes. Surprisingly, we discovered that the chatter of a community can indeed be used to make quantitative predictions that outperform those of artificial markets. These information markets generally involve the trading of state-contingent securities, and if large enough and properly designed, they are usually more accurate than other techniques for extracting diffuse information, such as surveys and opinion polls. Specifically, the prices in these markets have been shown to have strong correlations with observed outcome frequencies, and thus are good indicators of future outcomes~\cite{Pennock2001, Chen2003}. In the case of social media, the enormity and high variance of the information that propagates through large user communities presents an interesting opportunity for harnessing that data into a form that allows for specific predictions about particular outcomes, without having to institute market mechanisms. One can also build models to aggregate the opinions of the collective population and gain useful insights into their behavior, while predicting future trends. Moreover, gathering information on how people converse regarding particular products can be helpful when designing marketing and advertising campaigns~\cite{Leskovec2006, Jansen2009}. This paper reports on such a study. Specifically, we consider the task of predicting box-office revenues for movies using the chatter from Twitter, one of the fastest growing social networks on the Internet. Twitter~\footnote{http://www.twitter.com}, a micro-blogging network, has experienced a burst of popularity in recent months, leading to a huge user base consisting of several tens of millions of users who actively participate in the creation and propagation of content. We have focused on movies in this study for two main reasons. \begin{itemize} \item The topic of movies is of considerable interest among the social media user community, characterized both by a large number of users discussing movies and by a substantial variance in their opinions. \item The real-world outcomes can be easily observed from the box-office revenue of the movies. \end{itemize} Our goals in this paper are as follows. First, we assess how buzz and attention are created for different movies and how that changes over time. Movie producers spend a lot of effort and money on publicizing their movies, and have also embraced the Twitter medium for this purpose. We then focus on the mechanism of viral marketing and pre-release hype on Twitter, and the role that attention plays in forecasting real-world box-office performance. Our hypothesis is that movies that are well talked about will be well watched. Next, we study how sentiments are created, how positive and negative opinions propagate, and how they influence people.
For a bad movie, the initial reviews might be enough to discourage others from watching it, while on the other hand, it is possible for interest to be generated by positive reviews and opinions over time. For this purpose, we perform sentiment analysis on the data, using text classifiers to distinguish positively oriented tweets from negative ones. Our chief conclusions are as follows: \begin{itemize} \item We show that social media feeds can be effective indicators of real-world performance. \item We discovered that the rate at which movie tweets are generated can be used to build a powerful model for predicting movie box-office revenue. Moreover our predictions are consistently better than those produced by an information market such as the Hollywood Stock Exchange, the gold standard in the industry~\cite{Pennock2001}. \item Our analysis of the sentiment content in the tweets shows that they can improve box-office revenue predictions based on tweet rates only after the movies are released. \end{itemize} This paper is organized as follows. Next, we survey recent related work. We then provide a short introduction to Twitter and the dataset that we collected. In Section 5, we study how attention and popularity are created and how they evolve. We then discuss our study on using tweets from Twitter for predicting movie performance. In Section 6, we present our analysis on sentiments and their effects. We conclude in Section 7. We describe our prediction model in a general context in the Appendix. \section{Related Work} Although Twitter has been very popular as a web service, there has not been considerable published research on it. Huberman and others~\cite{Huberman2008} studied the social interactions on Twitter to reveal that the driving process for usage is a sparse hidden network underlying the friends and followers, while most of the links represent meaningless interactions. Java et al.~\cite{Java2007} investigated community structure and isolated different types of user intentions on Twitter. Jansen and others~\cite{Jansen2009} have examined Twitter as a mechanism for word-of-mouth advertising, and considered particular brands and products while examining the structure of the postings and the change in sentiments. However the authors do not perform any analysis on the predictive aspect of Twitter. There has been some prior work on analyzing the correlation between blog and review mentions and performance. Gruhl and others~\cite{Gruhl2006} showed how to generate automated queries for mining blogs in order to predict spikes in book sales. And while there has been research on predicting movie sales, almost all of it has used meta-data information on the movies themselves to perform the forecasting, such as the movie's genre, MPAA rating, running time, release date, the number of screens on which the movie debuted, and the presence of particular actors or actresses in the cast. Joshi and others~\cite{Joshi2010} use linear regression from text and metadata features to predict earnings for movies. Sharda and Delen~\cite{Sharda2006} have treated the prediction problem as a classification problem and used neural networks to classify movies into categories ranging from `flop' to `blockbuster'. Apart from the fact that they are predicting ranges rather than actual numbers, the best accuracy that their model can achieve is fairly low. Zhang and Skiena~\cite{Zhang2009} have used a news aggregation model along with IMDB data to predict movie box-office numbers.
We have shown how our model can generate better results when compared to their method. \section{Twitter} Launched on July 13, 2006, Twitter~\footnote{http://www.twitter.com} is an extremely popular online microblogging service. It has a very large user base, consisting of several millions of users (23M unique users in January 2010~\footnote{http://blog.compete.com/2010/02/24/compete-ranks-top-sites-for-january-2010/}). It can be considered a directed social network, where each user has a set of subscribers known as followers. Each user submits periodic status updates, known as \emph{tweets}, that consist of short messages of at most 140 characters. These updates typically consist of personal information about the users, news or links to content such as images, video and articles. The posts made by a user are displayed on the user's profile page, as well as shown to his/her followers. It is also possible to send a direct message to another user. Such messages are preceded by \emph{@user\_id} indicating the intended destination. A \emph{retweet} is a post originally made by one user that is forwarded by another user. These retweets are a popular means of propagating interesting posts and links through the Twitter community. Twitter has attracted lots of attention from corporations for the immense potential it provides for viral marketing. Due to its huge reach, Twitter is increasingly used by news organizations to filter news updates through the community. A number of businesses and organizations are using Twitter or similar micro-blogging services to advertise products and disseminate information to stakeholders. \section{Dataset Characteristics} The dataset that we used was obtained by crawling hourly feed data from Twitter.com. To ensure that we obtained all tweets referring to a movie, we used keywords present in the movie title as search arguments. We extracted tweets over frequent intervals using the Twitter Search API~\footnote{http://search.twitter.com/api/}, thereby ensuring we had the timestamp, author and tweet text for our analysis. We extracted 2.89 million tweets referring to 24 different movies released over a period of three months. Movies are typically released on Fridays, with the exception of a few which are released on Wednesday. Since an average of 2 new movies are released each week, we collected data over a time period of 3 months from November to February to have sufficient data to measure predictive behavior. For consistency, we only considered the movies released on a Friday and only those in wide release. For movies that were initially in limited release, we began collecting data from the time they went into wide release. For each movie, we define the {\it critical period} as the time from the week before it is released, when the promotional campaigns are in full swing, to two weeks after release, when its initial popularity fades and opinions from people have been disseminated. Some details on the movies chosen and their release dates are provided in Table 1. Note that some movies that were released during the period considered were not used in this study, simply because it was difficult to correctly identify tweets that were relevant to those movies. For instance, for the movie {\it 2012}, it was impractical to segregate tweets talking about the movie from those referring to the year. We have taken care to ensure that the data we have used was disambiguated and clean by choosing appropriate keywords and performing sanity checks.
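Throughout the rest of the paper we work with hourly tweet counts per movie. A minimal sketch of this bucketing (the function name and the assumption of ISO-formatted timestamps are ours, for illustration):

\begin{verbatim}
from collections import Counter
from datetime import datetime

def hourly_tweet_counts(timestamps):
    # Bucket tweet timestamps by hour and return a chronologically
    # sorted (hour, count) time series.
    buckets = Counter(
        datetime.fromisoformat(t).replace(minute=0, second=0,
                                          microsecond=0)
        for t in timestamps
    )
    return sorted(buckets.items())
\end{verbatim}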
\begin{table} \label{movie_data} \begin{center} \begin{tabular}{|c|c|} \hline Movie & Release Date\\ \hline Armored & 2009-12-04\\ \hline Avatar & 2009-12-18\\ \hline The Blind Side & 2009-11-20\\ \hline The Book of Eli & 2010-01-15\\ \hline Daybreakers & 2010-01-08\\ \hline Dear John & 2010-02-05\\ \hline Did You Hear About The Morgans & 2009-12-18\\ \hline Edge Of Darkness & 2010-01-29\\ \hline Extraordinary Measures & 2010-01-22\\ \hline From Paris With Love & 2010-02-05\\ \hline The Imaginarium of Dr Parnassus & 2010-01-08\\ \hline Invictus & 2009-12-11\\ \hline Leap Year & 2010-01-08\\ \hline Legion & 2010-01-22\\ \hline Twilight : New Moon & 2009-11-20\\ \hline Pirate Radio & 2009-11-13\\ \hline Princess And The Frog & 2009-12-11\\ \hline Sherlock Holmes & 2009-12-25\\ \hline Spy Next Door & 2010-01-15\\ \hline The Crazies & 2010-02-26\\ \hline Tooth Fairy & 2010-01-22\\ \hline Transylmania & 2009-12-04\\ \hline When In Rome & 2010-01-29\\ \hline Youth In Revolt & 2010-01-08\\ \hline \end{tabular} \end{center} \caption{Names and release dates for the movies we considered in our analysis.} \end{table} \begin{figure}[h] \begin{center} \includegraphics[height=2in,width=3.5in]{twit_ts.eps} \caption{Time-series of tweets over the critical period for different movies.} \label{tweet_trends} \end{center} \end{figure} The total data over the critical period for the 24 movies we considered includes 2.89 million tweets from 1.2 million users. Fig~\ref{tweet_trends} shows the timeseries trend in the number of tweets for movies over the critical period. We can observe that the busiest time for a movie is around the time it is released, following which the chatter invariably fades. The box-office revenue follows a similar trend with the opening weekend generally providing the most revenue for a movie. \begin{figure} \begin{center} \includegraphics[height=2in,width=3.5in]{authavg.eps} \caption{Number of tweets per unique author for different movies.} \label{authts} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[height=2in,width=3.5in]{authdist2.eps} \caption{Log-log distribution of authors and tweets.} \label{authdist2} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[height=2in,width=3.5in]{auth_movdist.eps} \caption{Distribution of total authors and the movies they comment on.} \label{auth_movdist} \end{center} \end{figure} Fig~\ref{authts} shows how the number of tweets per unique author changes over time. We find that this ratio remains fairly consistent with a value between 1 and 1.5 across the critical period. Fig~\ref{authdist2} displays the distribution of tweets by different authors over the critical period. The X-axis shows the number of tweets on a log scale, while the Y-axis represents the corresponding frequency of authors on a log scale. We can observe that it is close to a Zipfian distribution, with a few authors generating a large number of tweets. This is consistent with observed behavior from other networks~\cite{Wu2009}. Next, we examine the distribution of authors over different movies. Fig~\ref{auth_movdist} shows the distribution of authors and the number of movies they comment on. Once again we find a power-law curve, with a majority of the authors talking about only a few movies. \section{Attention and Popularity} We are interested in studying how attention and popularity are generated for movies on Twitter, and the effects of this attention on the real-world performance of the movies considered.
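The heavy-tailed author statistics above are easy to check programmatically. As a rough diagnostic (not a rigorous power-law fit), one can estimate the exponent by a least-squares line on log-log axes; the sketch below assumes the per-author tweet counts have already been extracted from the dataset:

\begin{verbatim}
import numpy as np

def loglog_slope(counts):
    # counts: number of tweets per author.  Fit a line to the
    # log-log frequency plot and return its slope (the exponent).
    values, freqs = np.unique(np.asarray(counts), return_counts=True)
    slope, _ = np.polyfit(np.log(values), np.log(freqs), 1)
    return slope
\end{verbatim}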
\subsection{Pre-release Attention:} Prior to the release of a movie, media companies and producers generate promotional information in the form of trailer videos, news, blogs and photos. We expect the tweets for movies before the time of their release to consist primarily of such promotional campaigns, geared to promote word-of-mouth cascades. On Twitter, this can be characterized by tweets referring to particular urls (photos, trailers and other promotional material) as well as retweets, which involve users forwarding tweet posts to everyone in their friend-list. Both these forms of tweets are important for disseminating information regarding movies being released. First, we examine the distribution of such tweets for different movies, following which we examine their correlation with the performance of the movies. \begin{table} \label{urlrt} \begin{center} \begin{tabular}{|c|c|c|c|} \hline Features & Week 0 & Week 1 & Week 2\\ \hline url & 39.5 & 25.5 & 22.5\\ \hline retweet & 12.1 & 12.1 & 11.66\\ \hline \end{tabular} \end{center} \caption{Url and retweet percentages over the critical period} \end{table} \begin{figure}[h] \begin{center} \includegraphics[height=2in,width=3.5in]{urlperc.eps} \caption{Percentages of urls in tweets for different movies.} \label{authdist} \end{center} \end{figure} \begin{table} \label{urlrt2} \begin{center} \begin{tabular}{|c|c|c|} \hline Features & Correlation & $R^{2}$\\ \hline url & 0.64 & 0.39\\ \hline retweet & 0.5 & 0.20\\ \hline \end{tabular} \end{center} \caption{Correlation and $R^{2}$ values for urls and retweets before release.} \end{table} Table 2 shows the percentages of urls and retweets in the tweets over the critical period for movies. We can observe that there is a greater percentage of tweets containing urls in the week prior to release than afterwards. This is consistent with our expectation. In the case of retweets, we find the values to be similar across the 3 weeks considered. In all, we found the retweets to be a significant minority of the tweets on movies. One reason for this could be that people tend to describe their own expectations and experiences, which are not necessarily propaganda. We want to determine whether movies that have greater publicity, in terms of linked urls on Twitter, perform better in the box office. When we examined the correlation between the urls and retweets with the box-office performance, we found the correlation to be moderately positive, as shown in Table 3. However, the adjusted $R^{2}$ value is quite low in both cases, indicating that these features are not very predictive of the relative performance of movies. This result is quite surprising since we would expect promotional material to contribute significantly to a movie's box-office income. \subsection{Prediction of first weekend Box-office revenues} Next, we investigate the power of social media in predicting real-world outcomes. Our goal is to observe if the knowledge that can be extracted from the tweets can lead to reasonably accurate prediction of future outcomes in the real world. The problem that we wish to tackle can be framed as follows.
{\it Using the tweets referring to movies prior to their release, can we accurately predict the box-office revenue generated by the movie in its opening weekend?} \begin{table} \label{res} \begin{center} \begin{tabular}{|c|c|c|} \hline Features & Adjusted $R^{2}$ & p-value\\ \hline Avg Tweet-rate & 0.80 & 3.65e-09\\ \hline Tweet-rate timeseries & 0.93 & 5.279e-09\\ \hline Tweet-rate timeseries + thcnt & {\bf 0.973} & 9.14e-12\\ \hline HSX timeseries + thcnt & 0.965 & 1.030e-10\\ \hline \end{tabular} \end{center} \caption{Coefficient of Determination ($R^{2}$) values using different predictors for movie box-office revenue for the first weekend.} \end{table} \begin{figure}[h] \begin{center} \includegraphics[height=2in,width=3.5in]{pred_vs_act2.eps} \caption{Predicted vs Actual box office scores using tweet-rate and HSX predictors} \label{pred} \end{center} \end{figure} To use a quantifiable measure on the tweets, we define the {\bf tweet-rate} as the {\it number of tweets referring to a particular movie per hour}. \begin{equation} \text{Tweet-rate}(mov) = \frac{|\text{tweets}(mov)|}{|\text{Time (in hours)}|} \end{equation} Our initial analysis of the correlation of the average tweet-rate with the box-office gross for the 24 movies considered showed a strong positive correlation, with a correlation coefficient value of 0.90. This suggests a strong linear relationship among the variables considered. Accordingly, we constructed a least-squares linear regression model on the average tweet-rate for the 24 movies over the week {\it prior to their release}. We obtained an adjusted $R^{2}$ value of 0.80 with a p-value of $3.65e-09***$, where '***' denotes significance at the 0.001 level, indicating a very strong predictive relationship. Notice that this performance was achieved using only one variable (the average tweet-rate). To evaluate our predictions, we employed real box-office revenue information, extracted from the Box Office Mojo website~\footnote{http://boxofficemojo.com}. The movie {\it Transylmania}, which opened on Dec 4th, had easily the lowest tweet-rates of all movies considered. For the week prior to its release, it received on average 2.75 tweets per hour. As a result of this lack of attention, the movie captured the record for the lowest-grossing opening for a movie playing at over 1,000 sites, making only \$263,941 in its opening weekend, and was subsequently pulled from theaters at the end of the second week. On the other end of the spectrum, two movies that made big splashes in their opening weekends, {\it Twilight: New Moon} (making \$142M) and {\it Avatar} (making \$77M), had, for their pre-release week, averages of 1365.8 and 1212.8 tweets per hour respectively. This once again illustrates the importance of attention in social media. Next, we performed a linear regression of the time series values of the tweet-rate for the 7 days before the release. We used 7 variables, each corresponding to the tweet-rate for a particular day. An additional variable we used was the number of theaters the movies were released in, $thcnt$. The results of the regression experiments are shown in Table 4. Note that, in all cases, we are using only data available prior to the release to predict box-office for the opening weekend.\\ {\bf \noindent Comparison with HSX:}\\ To compare with our tweet-based model, we used the Hollywood Stock Exchange index.
The fact that artificial online markets such as the Foresight Exchange and the Hollywood Stock Exchange are good indicators of future outcomes has been shown previously~\cite{Pennock2001,Chen2003}. The prices in these markets have been shown to have strong correlations with observed outcome frequencies. In the case of movies, the Hollywood Stock Exchange (http://www.hsx.com/) is a popular play-money market, where the prices for movie stocks can accurately predict real box office results. Hence, to compare with our tweet-rate predictor, we considered regression on the movie stock prices from the Hollywood Stock Exchange, which can be considered the gold standard~\cite{Pennock2001}. From the results in Table 4, it can be seen that our regression model built from social media provides an accurate prediction of movie performances at the box office. Furthermore, the model built using the tweet rate timeseries {\it outperforms} the HSX-based model. \begin{table} \label{week} \begin{center} \begin{tabular}{|c|c|c|} \hline Predictor & AMAPE & Score\\ \hline $Reg_{nobudget}$+$nReg_{1w}$ & 3.82 & 96.81\\ \hline Avg Tweet-rate + thcnt & 1.22 & 98.77\\ \hline Tweet-rate Timeseries + thcnt & {\bf 0.56} & {\bf 99.43}\\ \hline \end{tabular} \end{center} \caption{AMAPE and Score value comparison with earlier work.} \end{table} The predicted and actual values from this model are also shown in Fig~\ref{pred}, illustrating the utility of harvesting social media.\\ {\bf \noindent Comparison with News-based Prediction:}\\ In earlier work, Zhang and others~\cite{Zhang2009} have developed a news-based model for predicting movie revenue. The best-performing method in the aforementioned work is the combined model obtained by using predictors from IMDB and news. The corresponding $R^{2}$ value for this combined model is 0.788, which is far lower than the ones obtained by our predictors. We computed the AMAPE (Adjusted Mean Absolute Percentage/Relative Error) measure that the authors use for our data. The comparative values are shown in Table 5. We can observe that our values are far better than the ones reported in the earlier work. Note however that, since historical information on tweets is not available, we were able to use data on only the movies we have collected, while the authors in the earlier paper have used a larger database of movies for their analysis. \subsection{Predicting HSX prices} Given that social media can accurately predict box office results, we also tested its efficacy at forecasting the stock prices of the HSX index. At the end of the first weekend, the Hollywood stock exchange adjusts the price for a movie stock to reflect the actual box office gross. If the movie does not perform well, the price goes down and vice versa. We conducted an experiment to see if we could predict the price of the HSX movie stock at the end of the opening weekend for the movies we have considered. We used the historical HSX prices as well as the tweet-rates, individually, for the week prior to the release as predictive variables. The response variable was the adjusted price of the stock. We also used the theater count as a predictor in both cases, as before. The results are summarized in Table 6. As is apparent, the tweet-rate proves to be {\it significantly better} at predicting the actual values than the historical HSX prices. This again illustrates the power of the buzz from social media.
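All regressions above are ordinary least squares. For concreteness, here is a self-contained sketch of the fit and of the adjusted $R^{2}$ computation we report, with a generic design matrix (e.g.\ the seven daily tweet-rates plus the theater count); the variable names are ours, for illustration:

\begin{verbatim}
import numpy as np

def fit_adjusted_r2(X, y):
    # Ordinary least squares with an intercept column; returns the
    # coefficients and the adjusted coefficient of determination.
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    n, p = X.shape
    adj_r2 = 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))
    return beta, adj_r2
\end{verbatim}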
\begin{table*} \label{hsx} \begin{center} \begin{tabular}{|c|c|c|} \hline Predictor & Adjusted $R^{2}$ & $p-value$ \\ \hline HSX timeseries + thcnt & 0.95 & 4.495e-10\\ \hline Tweet-rate timeseries + thcnt & {\bf 0.97} & 2.379e-11\\ \hline \end{tabular} \end{center} \caption{Prediction of HSX end of opening weekend price.} \end{table*} \subsection{Predicting revenues for all movies for a given weekend} Until now, we have considered the problem of predicting opening weekend revenue for movies. Given the success of the regression model, we now attempt to predict revenue for all movies over a particular weekend. The Hollywood Stock Exchange de-lists movie stocks after 4 weeks of release, which means that there is no timeseries available for movies after 4 weeks. In the case of tweets, people continue to discuss movies long after they are released. Hence, we attempt to use the timeseries of tweet-rate, over 7 days before the weekend, to predict the box-office revenue for that particular weekend. Table 7 shows the results for 3 weekends in January and 1 in February. Note that some of the movies under consideration for this experiment were two months old. Apart from the time series, we used two additional variables: the theater count and the number of weeks since the movie's release. We used the coefficient of determination (adjusted $R^2$) to evaluate the regression models. From Table 7, we find that the tweets continue to be good predictors even in this case, with an adjusted $R^{2}$ consistently greater than $0.90$. \begin{table} \label{weekend} \begin{center} \begin{tabular}{|c|c|} \hline Weekend & Adjusted $R^{2}$ \\ \hline Jan 15-17 & 0.92\\ \hline Jan 22-24 & 0.97\\ \hline Jan 29-31 & 0.92\\ \hline Feb 05-07 & 0.95\\ \hline \end{tabular} \end{center} \caption{Coefficient of Determination ($R^{2}$) values using tweet-rate timeseries for different weekends} \end{table} \begin{table*} \label{2ndweek} \begin{center} \begin{tabular}{|c|c|c|} \hline Predictor & Adjusted $R^{2}$ & $p-value$ \\ \hline Avg Tweet-rate & 0.79 & 8.39e-09\\ \hline Avg Tweet-rate + thcnt & 0.83 & 7.93e-09\\ \hline Avg Tweet-rate + PNratio & 0.92 & 4.31e-12\\ \hline \hline Tweet-rate timeseries & 0.84 & 4.18e-06\\ \hline Tweet-rate timeseries + thcnt & 0.863 & 3.64e-06\\ \hline Tweet-rate timeseries + PNratio & {\bf 0.94} & 1.84e-08\\ \hline \end{tabular} \end{center} \caption{Prediction of second weekend box-office gross} \end{table*} The results have shown that the buzz from social media can be an accurate indicator of future outcomes. The fact that a simple linear regression model considering only the rate of tweets on movies can perform better than artificial money markets illustrates the power of social media. \section{Sentiment Analysis} Next, we would like to investigate the importance of sentiments in predicting future outcomes. We have seen how effective attention can be in predicting opening weekend box-office values for movies. Hence we consider the problem of utilizing the sentiments prevalent in the discussion for forecasting. Sentiment analysis is a well-studied problem in linguistics and machine learning, with different classifiers and language models employed in earlier work~\cite{Pang2008,Godbole2007}. It is common to express this as a classification problem where a given text needs to be labeled as {\it Positive}, {\it Negative} or {\it Neutral}.
Here, we constructed a sentiment analysis classifier using the LingPipe linguistic analysis package~\footnote{http://www.alias-i.com/lingpipe} which provides a set of open-source Java libraries for natural language processing tasks. We used the DynamicLMClassifier, which is a language model classifier that accepts training events of categorized character sequences. Training is based on a multivariate estimator for the category distribution and dynamic language models for the per-category character sequence estimators. To obtain labeled training data for the classifier, we utilized workers from the Amazon Mechanical Turk~\footnote{https://www.mturk.com/}. It has been shown that manual labeling from Amazon Mechanical Turk can correlate well with experts~\cite{Snow2008}. We used thousands of workers to assign sentiments for a large random sample of tweets, ensuring that each tweet was labeled by three different people. We used only samples for which the vote was unanimous as training data. \begin{figure}[h] \begin{center} \includegraphics[height=2in,width=3.5in]{subj2.eps} \caption{Movie Subjectivity values} \label{subj} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[height=2in,width=3.5in]{pol2.eps} \caption{Movie Polarity values} \label{pol} \end{center} \end{figure} The samples were initially preprocessed in the following ways: \begin{itemize} \item Elimination of stop-words \item Elimination of all special characters, except exclamation marks and question marks, which were replaced by $<EX>$ and $<QM>$ respectively \item Removal of urls and user-ids \item Replacing the movie title with $<MOV>$ \end{itemize} We used the pre-processed samples to train the classifier using an $n$-gram model. We chose $n=8$ in our experiments. The classifier was trained to predict three classes: Positive, Negative and Neutral. When we tested on the training-set with cross-validation, we obtained an accuracy of 98\%. We then used the trained classifier to predict the sentiments for all the tweets in the critical period for all the movies considered. \subsection{Subjectivity} Our expectation is that there would be more value for sentiments after the movie has been released than before. We expect tweets prior to the release to be mostly anticipatory, with stronger positive and negative tweets disseminated later, following the release. Positive sentiments following the release can be considered as recommendations by people who have seen the movie, and are likely to influence others to watch the same movie. To capture the subjectivity, we defined a measure as follows. \begin{equation} \text{Subjectivity} = \frac{|\text{Positive and Negative Tweets}|}{|\text{Neutral Tweets}|} \end{equation} When we computed the subjectivity values for all the movies, we observed that our hypothesis held. There were more sentiments discovered in tweets for the weeks after release than in the pre-release week. Fig~\ref{subj} shows the ratio of subjective to objective tweets for all the movies over the three weeks. We can observe that for most of the movies, the subjectivity increases after release. \subsection{Polarity} To quantify the sentiments for a movie, we measured the ratio of positive to negative tweets. A movie that has far more positive than negative tweets is likely to be successful. \begin{equation} \text{PNratio} = \frac{|\text{Tweets with Positive Sentiment}|}{|\text{Tweets with Negative Sentiment}|} \end{equation} Fig~\ref{pol} shows the polarity values for the movies considered in the critical period.
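Both ratios are immediate to compute once each tweet carries a predicted label. A minimal sketch (the label strings are hypothetical placeholders for the three classes):

\begin{verbatim}
from collections import Counter

def subjectivity_and_pnratio(labels):
    # labels: one of "pos", "neg", "neu" per tweet.
    # Assumes at least one neutral and one negative tweet exist.
    c = Counter(labels)
    subjectivity = (c["pos"] + c["neg"]) / c["neu"]
    pnratio = c["pos"] / c["neg"]
    return subjectivity, pnratio
\end{verbatim}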
We find that there are more positive sentiments than negative in the tweets for almost all the movies. The movie with the largest increase in positive sentiment after release is {\it The Blind Side} (from 5.02 to 9.65). The movie had lukewarm opening weekend sales (\$34M) but then boomed in the next week (\$40.1M), owing largely to positive sentiment. The movie {\it New Moon} had the opposite effect. It was released in the same weekend as {\it The Blind Side} and had a great first weekend, but its polarity dropped (from 6.29 to 5), as did its box-office revenue (from \$142M to \$42M) in the following week. \begin{table} \begin{center} \begin{tabular}{|c|c|} \hline Variable & $p-value$ \\ \hline (Intercept) & 0.542 \\ \hline Avg Tweet-rate & 2.05e-11 (***)\\ \hline PNRatio & 9.43e-06 (***) \\ \hline \end{tabular} \end{center} \caption{Regression using the average tweet-rate and the polarity (PNRatio). The significance level (*:0.05, **: 0.01, ***: 0.001) is also shown.} \end{table} Considering that the polarity measure captured some variance in the revenues, we examine the utility of the sentiments in predicting box-office sales. In this case, we considered the second weekend revenue, since we have seen subjectivity increasing after release. We use linear regression on the revenue as before, using the tweet-rate together with the PNratio as an additional variable. The results of our regression experiments are shown in Table 8. We find that the sentiments do provide improvements, although they are not as important as the rate of tweets themselves. The tweet-rate has close to the same predictive power in the second week as the first. Adding the sentiments as an additional variable to the regression equation improved the prediction to 0.92 when used with the average tweet-rate, and to 0.94 with the tweet-rate timeseries. Table 9 shows the regression p-values using the average tweet rate and the sentiments. We can observe that the coefficients are highly significant in both cases. \section{Conclusion} In this article, we have shown how social media can be utilized to forecast future outcomes. Specifically, using the rate of chatter from almost 3 million tweets from the popular site Twitter, we constructed a linear regression model for predicting box-office revenues of movies in advance of their release. We then showed that the results outperformed in accuracy those of the Hollywood Stock Exchange and that there is a strong correlation between the amount of attention a given topic has (in this case a forthcoming movie) and its ranking in the future. We also analyzed the sentiments present in tweets and demonstrated their efficacy at improving predictions after a movie has been released. While in this study we focused on the problem of predicting box office revenues of movies for the sake of having a clear metric of comparison with other methods, this method can be extended to a large panoply of topics, ranging from the future rating of products to agenda setting and election outcomes. At a deeper level, this work shows how social media expresses a collective wisdom which, when properly tapped, can yield an extremely powerful and accurate indicator of future outcomes. \section{Appendix: General Prediction Model for Social Media} Although we focused on movie revenue prediction in this paper, the method that we advocate can be extended to other products of consumer interest. We can generalize our model for predicting the revenue of a product using social media as follows.
We begin with data collected regarding the product over time, in the form of reviews, user comments and blogs. Collecting the data over time is important as it can measure the rate of chatter effectively. The data can then be used to fit a linear regression model using least squares. The parameters of the model include: \begin{itemize} \item $A$ : rate of attention seeking \item $P$ : polarity of sentiments and reviews \item $D$ : distribution parameter \end{itemize} Let $y$ denote the revenue to be predicted and $\epsilon$ the error. The linear regression model can be expressed as: \begin{equation} y = \beta_{a}A + \beta_{p}P + \beta_{d}D + \epsilon \end{equation} where the $\beta$ values correspond to the regression coefficients. The attention parameter captures the buzz around the product in social media. In this article, we showed how the rate of tweets on Twitter can capture attention on movies accurately. We found this coefficient to be the most significant in our experiments. The polarity parameter relates to the opinions and views that are disseminated in social media. We observed that this gains importance after the movie has been released and adds to the accuracy of the predictions. In the case of movies, the distribution parameter is the number of theaters a particular movie is released in. In the case of other products, it can reflect their availability in the market. \section{Acknowledgement} This material is based upon work supported by the National Science Foundation under Grant $\#$ 0937060 to the Computing Research Association for the CIFellows Project. \bibliographystyle{IEEETran}
\section{Introduction} Infinite particle systems are the mathematical framework to describe complex systems of interacting individual objects or agents, like molecules in a liquid, stars in a galaxy, individuals in a population. The elementary states of the systems are all countable collections of points in $\mathbb{R}^d$ which have no accumulation point, i.e.~elements of $\Gamma$, the space of locally finite simple configurations in $\R^d$. As it is typical for any infinite dimensional system, there does not exist a unique natural reference measure on $\Gamma$ singled out by its symmetry properties with respect to the action of a group. In this paper, we consider reference measures from $\mathfrak G_{z\sigma}^{\beta\phi}$, the set of all (tempered grand canonical) Gibbs measures which describe a Hamiltonian system in the thermodynamic equilibrium. Gibbs measures are parametrized by the intensity $z\sigma$, pair-potential $\phi$ and inverse temperature $\beta$. Gibbs measures transform naturally under the standard action on $\Gamma$ of the group of local diffeomorphisms on $\R^d$. Integration by parts is an infinitesimal manifestation of this natural transformation, where a differential operator is the infinitesimal generator of the group action. In \cite{AKR98b} the authors build a calculus on configuration space based on the action of the local diffeomorphism group. Essentially by exploiting the local nature of the group action, they can derive an integration by parts formula for all tempered grand canonical Gibbs measures. This integration by parts formula is one of the essential ingredients in the construction, via Dirichlet form techniques, of a stochastic process formally associated to an infinite system of stochastic differential equations driven by white noise, called the stochastic gradient dynamics. This dynamics describes an infinite system of particles interacting via a pair potential $\phi$. The integration by parts formula yields the closability of the corresponding pre-Dirichlet form and is necessary for identifying the generator of the Dirichlet form on a set of local functions, i.e.~smooth cylinder functions. In this paper, instead, we consider the non-local action of the usual translation group, shifting all points of the configuration simultaneously and in the same manner. We study the corresponding integration by parts formula for Gibbs measures. This integration by parts formula arises naturally, for example, in the construction of the environment process of a tagged particle, that is, the movement of the particles of a system seen from a tagged one. This corresponds heuristically to choosing a coordinate system in which the origin moves with the tagged particle. Unfortunately, this intuition cannot be used to give a rigorous meaning to the environment process associated to the aforementioned gradient dynamics; one has to start the construction from scratch, for example, using Dirichlet form techniques, cf.~\cite{FG11}. Since the analytical objects (Dirichlet form, generator) used for this approach necessarily contain the information about a uniform component of the environment dynamics, the integration by parts formula for the generator of the translation group, which we derive here, is an important ingredient for this approach. Let us list the challenges one has to overcome. First, the non-local nature of the translation group leads inevitably to boundary terms in a direct calculation if one uses a finite volume approximation.
The derivation of the connection between generator and form in the aforementioned situation given in \cite{GP85} seems to neglect this difficulty, and we are not aware of an easy fix that would work in great generality. We circumvent the problem by not using a local approximation but developing a different technique, which has already been suggested in \cite{S09}. Second, even if the initial gradient dynamics corresponds to a translation invariant interaction, the associated tagged particle process has as invariant measure a Gibbs measure with a non-translation invariant intensity, which incorporates the interaction of the other particles with the tagged one. Moreover, it is physically reasonable that the mutual interaction contains a singular repulsion if particles get too close to each other. Thus we derive the integration by parts formula for general intensities $z\sigma_{\beta\psi}=ze^{-\beta\psi}\,dx$, where $\psi$ is integrable at infinity and $\psi(x)$ may grow at zero like $|x|^{-k}$ for some $k \in \mathbb{N}$, which covers physically relevant cases such as the Lennard-Jones potential. We include potentials which have singularities of an almost arbitrary nature and are not even everywhere weakly differentiable. Third, one can probably not expect the integration by parts formula to hold for every Gibbs measure, because e.g.~for constant intensities and translation invariant pair potentials there can exist non-translation invariant Gibbs measures due to a phase transition. For such a measure the integration by parts formula would, if it exists, look quite different from the intuitive one. In the case of non-constant intensities, this problem persists or may even become worse. Hence, one can expect the integration by parts formula to hold in general only for elements from $\mathfrak G_{z\sigma_{\beta\psi}}^{\beta\phi}$ which are absolutely continuous with respect to a translation invariant element from $\mathfrak G_{zm}^{\beta\phi}$. Indeed, we show that for a given potential, inverse temperature and intensity $\sigma_{\beta\psi}$, these are the only measures which obey the intuitive integration by parts formula. As a first application, in \cite{CFGK11} the characterization is used to identify (in a natural way) the class of measures $\mu$ for which an invariance principle can be derived for the tagged particle dynamics (mentioned above) constructed from the Dirichlet form corresponding to $\mu$. It shows, for example, that using uniform motion for proving ergodicity of the environment process gives a more general result than an adaptation of the approach from \cite{AKR98b} (where a situation without uniform motion is considered). While the proof of the integration by parts formula for these measures works under extremely weak assumptions on $\psi$, we can prove the characterization only under slightly stronger conditions, which a priori do not seem to be necessary. Nevertheless, they still allow nonintegrable singularities of $\nabla\psi$ at isolated points (or sets of sufficiently small dimension), covering in particular the usual non-hardcore pair interactions of statistical mechanics. \section{Preliminaries and statement of the results} By $\Gamma$ we denote the space of locally finite simple configurations in $\R^d$, i.e.~subsets $\gamma$ of $\R^d$ such that $\gamma\cap\Lambda$ is finite for all bounded $\Lambda\subset\R^d$.
For a function $f: \R^d\to\R^k$, $k\in\N$, and $\gamma\in\Gamma$ we define $$ \langle f,\gamma\rangle:=\sum_{x\in\gamma}f(x), $$ if for each component $f_i$, $i=1,\cdots,k$, of $f=(f_1,\cdots,f_k)$ at least $\sum_{x\in\gamma} f_i^+(x)<\infty$ or $\sum_{x\in\gamma} f_i^-(x)<\infty$. Here and below we denote the positive and negative part of a real-valued function $g$ by $g^+$ and $g^-$, respectively. We denote by $\mathcal FC_b^\infty(C_0^\infty(\R^d),\Gamma)$ the set of all functions of the form $F=g_F(\langle f,\cdot\rangle)$ with $k\in\N$, $f=(f_1,\cdots,f_k)\in C_0^\infty(\R^d\to\R^k)$ and $g_F: \R^k\to \R$ infinitely often differentiable, bounded and such that all derivatives are bounded. The formal generator $\nabla_\gamma^\Gamma$ of the uniform translations on $\Gamma$ is given by $\nabla_\gamma^\Gamma F:=\sum_{j=1}^k\partial_j g_F(\langle f,\cdot\rangle)\,\langle \nabla f_j,\cdot\rangle$, $F\in \mathcal FC_b^\infty(C_0^\infty(\R^d),\Gamma)$ as above. $\Gamma$ is equipped with the $\sigma$-field $\mathcal B$ generated by the mappings $\gamma\mapsto \langle 1_\Lambda,\gamma\rangle$, $\Lambda\subset\R^d$ measurable and bounded. The objects under consideration are tempered grand canonical Gibbs measures on $(\Gamma,\mathcal B)$ for superstable and lower regular pair potentials $\phi$ with intensity measure $\sigma$ having a bounded density w.r.t.~Lebesgue measure, activity $z>0$ and inverse temperature $\beta>0$. Gibbs measures can be defined in several different but completely equivalent ways, see e.g.~\cite{KK03}. The one used here is recalled in the appendix. The set of all tempered grand canonical Gibbs measures for $z$, $\beta$, $\phi$, $\sigma$ is denoted by $\mathfrak G_{z\sigma}^{\beta\phi}$. By $m$ we denote Lebesgue measure on $\R^d$. Note that if $C<\infty$, and $\psi:\R^d\to[-C,\infty]$ is measurable, then $e^{-\beta\psi}$ is weakly differentiable iff for all $n\in\N$ the function $\psi\wedge n$ is weakly differentiable and $\sup_{n\in\N}\Vert \nabla(\psi\wedge n) e^{-\beta\psi}\Vert_{L^1(K;m)}<\infty$ for all compact $K\subset\R^d$. In this case we define $\nabla\psi:=-(1_{[-C,\infty)}\circ\psi) e^{\beta\psi} \beta^{-1}\nabla e^{-\beta\psi}$ and observe that $\nabla(\psi\wedge n)=(1_{[-C,n]}\circ\psi)\nabla\psi$, $n\in\N$. Let us state the first main result of this note. \begin{theorem}\label{thm} Let $\phi:\R^d\to\R\cup\{\infty\}$ be a (measurable, even,) superstable, lower regular potential and let $\beta>0$ and $z>0$. Moreover, let $\psi: \R^d\to\R\cup\{\infty\}$ be measurable, bounded from below and such that $(1-e^{-\beta\psi})\in L^1(\R^d;m)$. Define the measure $\sigma_{\beta\psi}:=e^{-\beta\psi}m$. Then the following assertions hold: \begin{enumerate} \item For any $\mu_{zm}^{\beta\phi}\in \mathfrak G_{zm}^{\beta\phi}$ a measure $\mu_{{z\sigma_{\beta\psi}}}^{\beta\phi}\in\mathfrak G_{z\sigma_{\beta\psi}}^{\beta\phi}$ is given by \begin{equation}\label{eqn:einszueinsabbildung} \frac{d\mu_{z\sigma_{\beta\psi}}^{\beta\phi}}{d\mu_{zm}^{\beta\phi}}=\Xi_{\psi}^{-1} e^{-\beta\langle\psi,\cdot\rangle}, \end{equation} where $\Xi_{\psi}:=\int_\Gamma e^{-\beta\langle\psi,\gamma\rangle} d\mu_{zm}^{\beta\phi}(\gamma)$. If $\psi$ is Lebesgue-a.e.~finite, in a similar manner for any element from $\mathfrak G_{z\sigma_{\beta\psi}}^{\beta\phi}$ an element from $\mathfrak G_{zm}^{\beta\phi}$ is obtained and \eqref{eqn:einszueinsabbildung} gives a bijection between $\mathfrak G_{zm}^{\beta\phi}$ and $\mathfrak G_{z\sigma_{\beta\psi}}^{\beta\phi}$. 
\item If, in addition to the assumptions preceding (i), $e^{-\beta\psi}$ is weakly differentiable, $\nabla\psi$ (defined as above) is integrable w.r.t.~$\sigma_{\beta\psi}$, and $\mu_{zm}^{\beta\phi}\in\mathfrak G_{zm}^{\beta\phi}$ is translation invariant, then for $\mu_{z\sigma_{\beta\psi}}^{\beta\phi}$ as in (i) we obtain the following integration by parts formula: For every $F\in \mathcal FC_b^\infty(C_0^\infty(\R^d),\Gamma)$ it holds \begin{equation}\label{eqn:ibp} \int_\Gamma \nabla_\gamma^\Gamma F\,d\mu_{z\sigma_{\beta\psi}}^{\beta\phi}=\beta\int_\Gamma F\langle\nabla\psi,\cdot\rangle\,d\mu_{z\sigma_{\beta\psi}}^{\beta\phi}. \end{equation} \end{enumerate} \end{theorem} \begin{remark}\label{rem} Let $\phi$ be a potential fulfilling the assumptions of Theorem \ref{thm}. \begin{enumerate} \item It is well-known that if $\mu\in\mathfrak G_{z\sigma}^{\beta\phi}$ for $\sigma=m$, $\beta>0$ and $z> 0$, then $\mu$ fulfills the so-called Ruelle bound (see \cite[Eq.~(5.28)]{Ru70} for the meaning of this statement). By analyzing the proof of the last part of Corollary 5.3 in \cite{Ru70} it is not difficult to see that the Ruelle bound extends to all $\mu\in\mathfrak G_{z\sigma}^{\beta\phi}$ in the case when $\sigma$ has a bounded density w.r.t.~Lebesgue measure, i.e.~in particular when $\sigma=\sigma_{\beta\psi}$ as defined in Theorem \ref{thm}. \item If additionally $\int_{\R^d}\vert e^{-\beta\phi}-1\vert\,dm<\infty$, it is known (see \cite{Ru70}) that $\mathfrak G_{zm}^{\beta\phi}\neq\emptyset$ and there also exists a translation invariant element of $\mathfrak G_{zm}^{\beta\phi}$. Thus Theorem \ref{thm}(ii) implies the existence of an element of $\mathfrak G_{z\sigma_{\beta\psi}}^{\beta\phi}$ fulfilling \eqref{eqn:ibp}. \item Although in the situation from (ii) one could derive from Theorem \ref{thm}(i) also that $\mathfrak G_{z\sigma_{\beta\psi}}^{\beta\phi}\neq \emptyset$, it is more natural to derive this existence result from the construction in \cite{Ru70} or \cite{KK03}, since \cite[Proposition 2.6]{Ru70} is easily seen to extend to the case of intensity measures $\sigma$ having bounded density w.r.t.~Lebesgue measure. \end{enumerate} \end{remark} Let $\mu_{zm}^{\beta\phi}$ and $\mu_{z\sigma_{\beta\psi}}^{\beta\phi}$ be as in Theorem \ref{thm}(i) and let $\psi$ fulfill the additional assumptions from Theorem \ref{thm}(ii). The question arises whether \eqref{eqn:ibp} is not only implied by but even characterizes translation invariance of $\mu_{zm}^{\beta\phi}$. This is a natural conjecture, because translation invariance of $\mu_{zm}^{\beta\phi}$ is equivalent to $\mu_{z\sigma_{\beta\psi}}^{\beta\phi}$ being quasi-invariant w.r.t.~the translations $\theta_v: \gamma\mapsto \gamma+v$, $v\in\R^d$, with density $\frac{d\mu_{z\sigma_{\beta\psi}}^{\beta\phi}\circ \theta_v^{-1}}{d\mu_{z\sigma_{\beta\psi}}^{\beta\phi}}(\gamma)=e^{-\beta\langle \psi,\gamma-v\rangle+\beta\langle \psi,\gamma\rangle}$ and because \eqref{eqn:ibp} is just the differential version of the latter statement. Here we define $\gamma+v:=\{x+v\,|\,x\in\gamma\}$ for $\gamma\in\Gamma$, $v\in\R^d$. In the next theorem we verify this conjecture under some more conditions on $\psi$; the difficulties to treat the general case are explained in Remark \ref{rem:notgeneral} below. \begin{theorem}\label{thm2} Assume that $d\geq 2$. 
Let $\phi$, $\psi$ be as in Theorem \ref{thm}(ii) and assume additionally that $\psi$ is weakly differentiable in $\R^d\setminus\{0\}$ and $\nabla\psi\in L^1(\R^d\setminus B_1(0))$, where $B_1(0)$ denotes the ball around $0$ with radius $1$. Let $\mu_{zm}^{\beta\phi}$, $\mu_{z\sigma_{\beta\psi}}^{\beta\phi}$ be as in Theorem \ref{thm}(i). Then $\mu_{zm}^{\beta\phi}$ is translation invariant iff $\mu_{z\sigma_{\beta\psi}}^{\beta\phi}$ fulfills \eqref{eqn:ibp}. \end{theorem} \begin{remark} \begin{enumerate} \item For the applications mentioned in the introduction, the additional restrictions in the previous theorem are irrelevant. They still allow the usual non-hardcore pair potentials from statistical physics, e.g.~the Lennard-Jones potential. These potentials are usually bounded outside any neighborhood of the origin. \item For $d=1$ the conclusion of Theorem \ref{thm2} is in most cases trivial: In this case, $\mathfrak G_{zm}^{\beta\phi}$ usually consists of only one element for all $z,\beta>0$, which is automatically translation invariant. This was shown in \cite{Pap87} for superstable potentials $\phi$ which are bounded on the complement of any neighborhood of $0$ and have the property that there exists a decreasing function $\varphi$ such that $\vert\phi(x)\vert\leq \varphi(\vert x\vert)$, $\vert x\vert \geq R>0$, and $\int_R^\infty \varphi(x)\,dx<\infty$. In particular, these conditions cover the usual type of potentials from statistical mechanics. \item The proof of Theorem \ref{thm2} given below extends to the case when one only assumes that $\psi$ is (as in Theorem \ref{thm}(ii) and) weakly differentiable in $\R^d\setminus K$ for some compact $K\subset\R^d$ having Hausdorff dimension strictly less than $d-1$ (instead of choosing $K=\{0\}$) and $\nabla\psi\in L^1(\R^d\setminus B_R(0);m)$ with $R$ large enough such that $K\subset B_R(0)$. \item Theorem \ref{thm2} shows that the one-to-one correspondence from Theorem \ref{thm}(i) extends also to a one-to-one correspondence between the set of translation invariant elements from $\mathfrak G_{zm}^{\beta\phi}$ and the set of elements from $\mathfrak G_{z\sigma_{\beta\psi}}^{\beta\phi}$ fulfilling \eqref{eqn:ibp}. As one easily verifies, the correspondence preserves the structure of these sets in the sense that extremal elements in the former set (pure phases) correspond to extremal elements in the latter set. \end{enumerate} \end{remark} \section{Proofs} For a measure $\sigma$ on $\R^d$ and a measurable function $f: \R^d\to\R$ we define $C_{f,\sigma}:=\int_{\R^d}\vert e^f-1\vert\,d\sigma$. We need the following lemma, which is essentially contained in \cite{KK02}, \cite{KK03}. \begin{lemma}\label{lem} Let $\sigma$ be a measure on $\R^d$ having a bounded density w.r.t.~$m$ and let $\mu_{z\sigma}^{\beta\phi}\in\mathfrak G_{z\sigma}^{\beta\phi}$. \begin{enumerate} \item Let $N\subset\R^d$ with $\sigma(N)=0$. Then $\mu_{z\sigma}^{\beta\phi}$-a.s.~it holds $\gamma\subset\R^d\setminus N$. \item Let $f\in L^1(\R^d;\sigma)$. Then $\langle f,\cdot\rangle=\sum_{x\in\cdot}f(x)$ converges absolutely $\mu_{z\sigma}^{\beta\phi}$-a.s. Moreover, $\Vert \langle f,\cdot\rangle\Vert_{L^1(\Gamma;\mu_{z\sigma}^{\beta\phi})}\leq \xi_\sigma\Vert f\Vert_{L^1(\R^d;\sigma)}$ for some $\xi_\sigma<\infty$ which is independent of $f$. \item Let $\vert f\vert\wedge 1\in L^1(\R^d;\sigma)$ and let $f$ be finite $\sigma$-a.e. Then $\langle f,\cdot\rangle=\sum_{x\in\cdot}f(x)$ converges absolutely $\mu_{z\sigma}^{\beta\phi}$-a.s.
If only $f^+$ is $\sigma$-a.e.~finite then for $\mu_{z\sigma}^{\beta\phi}$-a.e.~$\gamma\in\Gamma$ there exists a finite configuration $\eta\subset\gamma$ such that $f(x)>-\infty$ for all $x\in \gamma\setminus\eta$ and $\sum_{x\in\gamma\setminus \eta}f(x)$ converges absolutely. Moreover one can choose $\eta=\emptyset$ with nonzero $\mu_{z\sigma}^{\beta\phi}$-probability. \item Let $f:\R^d\to\R\cup\{-\infty\}\cup\{\infty\}$ be measurable and such that $C_{f,\sigma}<\infty$. Then $e^{\langle f,\cdot\rangle}:\Gamma\to [0,\infty)$ is well-defined and integrable w.r.t.~$\mu_{z\sigma}^{\beta\phi}$. $e^{\langle f,\cdot\rangle}$ is positive with positive $\mu_{z\sigma}^{\beta\phi}$-probability, and if $f$ is $\sigma$-a.e.~finite, then $e^{\langle f,\cdot\rangle}$ is $\mu_{z\sigma}^{\beta\phi}$-a.s.~positive. \end{enumerate} \end{lemma} \begin{proof} For (i) and (ii) see e.g.~\cite[Theorem 4.1]{KK02} (note that $\mu_{z\sigma}^{\beta\phi}$ fulfills a Ruelle bound, see Remark \ref{rem}(i)). For proving (iii) let $A:=\{x\in\R^d\,|\,\vert f(x)\vert\geq 1\}$. Then $A$ has finite $\sigma$-measure and hence by (ii) it holds $\langle 1_A,\cdot\rangle\in L^1(\Gamma;\mu_{z\sigma}^{\beta\phi})$, and thus $\sharp(\gamma\cap A)<\infty$ $\mu_{z\sigma}^{\beta\phi}$-a.s.~and by the definition of grand canonical Gibbs measures $\sharp(\gamma\cap A)=0$ with positive $\mu_{z\sigma}^{\beta\phi}$-probability. Moreover, since $1_{\R^d\setminus A} f\in L^1(\R^d;\sigma)$, (ii) also implies that $\sum_{x\in \gamma\setminus A}f(x)$ converges absolutely $\mu_{z\sigma}^{\beta\phi}$-a.s. Together with (i) the assertions in (iii) follow. Since $\vert f\vert\wedge 1\leq e^1\vert e^f-1\vert$ and since moreover $C_{f,\sigma}<\infty$ implies that $f^+$ is $\sigma$-a.e.~finite, (iv) is a consequence of (iii), with the exception of the integrability statement, which is seen as follows: We may w.l.o.g.~assume that $f\geq 0$. Then $e^{\langle 1_{[-n,n]^d} f,\cdot\rangle}\uparrow e^{\langle f,\cdot\rangle}$ as $n\to \infty$. Since $C_{f^+,\sigma}\leq C_{f,\sigma}<\infty$, e.g.~the proof of \cite[Proposition 5.1]{KK03} implies that $\sup_{n\in \N}\Vert e^{\langle 1_{[-n,n]^d}f,\cdot\rangle}\Vert_{L^1(\Gamma;\mu_{z\sigma}^{\beta\phi})}<\infty$. Using the monotone convergence theorem, we therefore obtain $e^{\langle f,\cdot\rangle}\in L^1(\Gamma;\mu_{z\sigma}^{\beta\phi})$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm}] Part (i): We start with $\mu_{zm}^{\beta\phi}\in \mathfrak G_{zm}^{\beta\phi}$. It follows from Lemma \ref{lem}(iv) that $e^{-\beta\langle\psi,\cdot\rangle}$ is not $\mu_{zm}^{\beta\phi}$-a.s.~equal to zero and integrable w.r.t.~$\mu_{zm}^{\beta\phi}$ and hence $0< \Xi_{\psi}<\infty$. The Ruelle equation (R) as given in the appendix, putting $\mu=\mu_{zm}^{\beta\phi}$ and $\sigma=m$, implies that $e^{-\beta\langle\psi,\cdot\rangle}\mu_{zm}^{\beta\phi}$ fulfills (R) with $\sigma=\sigma_{\beta\psi}$; that can be seen by replacing $F$ by $Fe^{-\beta\langle\psi,\cdot\rangle}$ in (R). Hence, we proved that $\Xi_\psi^{-1} e^{-\beta\langle\psi,\cdot\rangle}\mu_{zm}^{\beta\phi}$ is a (tempered) grand canonical Gibbs measure for $\phi$ with intensity measure $\sigma_{\beta\psi}$. Conversely, since $C_{\beta\psi,\sigma_{\beta\psi}}=C_{-\beta\psi,m}<\infty$, when starting with $\mu_{z\sigma_{\beta\psi}}^{\beta\phi}\in \mathfrak G_{z\sigma_{\beta\psi}}^{\beta\phi}$, we obtain $e^{\beta\langle\psi,\cdot\rangle}\in L^1(\Gamma;\mu_{z\sigma_{\beta\psi}}^{\beta\phi})$ by Lemma \ref{lem}(iv). 
If $\psi$ is finite Lebesgue-a.e., it follows that $e^{\beta\psi}\sigma_{\beta\psi}=m$. Using this and the Ruelle equation (R), we can show as above that the normalized $e^{\beta\langle\psi,\cdot\rangle}\mu_{z\sigma_{\beta\psi}}^{\beta\phi}$ is in $\mathfrak G_{zm}^{\beta\phi}$. The last assertion of Theorem \ref{thm}(i) follows from the last assertion in Lemma \ref{lem}(iv). Part (ii): Let $\mu_{zm}^{\beta\phi}\in\mathfrak G_{zm}^{\beta\phi}$ be translation invariant. We first consider the case $\psi=0$, but for a more general class of $F$. As before we assume that $F$ is of the form $F=g_F(\langle f,\cdot\rangle)$, but for $g_F$ we only require (in addition to smoothness) that $g_F$ and $\nabla g_F$ are exponentially bounded (i.e.~bounded in absolute value by $Ce^{a\vert\cdot\vert}$ for some $C<\infty$ and $a\in\R$, where $\vert \cdot\vert$ is Euclidean norm). For such functions, $v\in\R^d$, $\gamma\in\Gamma$ and $t\in [0,1]$ it holds $\frac{d}{dt}F(\gamma+tv)=v\nabla_\gamma^\Gamma F(\gamma+tv)$, and thus by the mean value theorem \begin{equation}\label{eqn:dry} \vert F(\gamma+vt)-F(\gamma)\vert\leq t \sup_{t'\in [0,1]}\vert v\nabla_\gamma^\Gamma F(\gamma+t'v)\vert\leq t\tilde C e^{\tilde a\langle 1_\Lambda,\gamma\rangle} \end{equation} for some $\tilde C<\infty$ and $\tilde a\in\R$, both not depending on $\gamma$, and for some open bounded $\Lambda\subset\R^d$ containing all points having distance less than $\vert v\vert$ from the support of $f$. By Lemma \ref{lem}(iv), the right-hand side of \eqref{eqn:dry} is in $L^1(\Gamma;\mu_{zm}^{\beta\phi})$, hence by Lebesgue's dominated convergence theorem $\int_{\Gamma} v\nabla_\gamma^\Gamma F\,d\mu_{zm}^{\beta\phi}=\lim_{t\to 0} \frac{1}{t} \int_\Gamma F(\cdot+vt)-F\,d\mu_{zm}^{\beta\phi}$, and the latter is equal to $0$ by the assumed translation invariance of $\mu_{zm}^{\beta\phi}$. Thus \eqref{eqn:ibp} holds for $\psi=0$. Replacing $F$ by $F e^{-\beta\langle\psi,\cdot\rangle}$, we obtain directly \eqref{eqn:ibp} also for $\psi\in C_0^\infty(\R^d)$, using that $\mu_{z\sigma_{\beta\psi}}^{\beta\phi}=\frac{1}{\Xi_\psi} e^{-\beta\langle \psi,\cdot\rangle}\mu_{zm}^{\beta\phi}$. We derive the general result extending this by three approximation arguments. (From now on, we restrict again to $F$ as in the assertion.) First, let $\psi\in H^{1,1}(\R^d)$ be bounded and compactly supported. Using convolutions with a Dirac sequence, we obtain a sequence $(\psi_n)_{n\in\N}\subset C_0^\infty(\R^d)$ having the following properties: \begin{enumerate} \item[(a)] $\psi_n\to\psi$ in $H^{1,1}(\R^d)$ as $n\to\infty$. \item[(b)] There exists $0\leq \psi_0\in L^1(\R^d;m)\cap L^\infty(\R^d;m)$ such that $\vert \psi_n\vert \leq\psi_0$. \end{enumerate} For later use we emphasize that we extend \eqref{eqn:ibp} to $\psi$ using only (a), (b) and the fact that \eqref{eqn:ibp} holds for all $\psi_n$. By dropping to a subsequence we may assume that $\psi_n\to \psi$ holds pointwise Lebesgue-a.e. Lemma \ref{lem}(ii) implies that $\langle \psi_0,\cdot\rangle<\infty$ holds $\mu_{zm}^{\beta\phi}$-a.e. By (b), Lebesgue's dominated convergence theorem and Lemma \ref{lem}(i) we conclude that $\langle \psi_n,\cdot\rangle\to \langle \psi,\cdot\rangle$ as $n\to\infty$ pointwise $\mu_{zm}^{\beta\phi}$-a.e. 
Using Lebesgue's theorem, (b) and integrability of $\vert \nabla_\gamma^\Gamma F\vert e^{\beta\langle\psi_0,\cdot\rangle}$ w.r.t.~$\mu_{zm}^{\beta\phi}$ (which follows e.g.~using Lemma \ref{lem}(iv), since $\vert \nabla_\gamma^\Gamma F\vert$ can be estimated by $C\langle 1_\Lambda,\cdot\rangle\leq C e^{\langle 1_\Lambda,\cdot\rangle}$ for some $C<\infty$ and some open bounded $\Lambda\subset\R^d$) we thus find that for $F\in\mathcal FC_b^\infty(C_0^\infty(\R^d),\Gamma)$ it holds \begin{equation}\label{eq1} \int_\Gamma \nabla_\gamma^\Gamma F\,e^{-\beta\langle \psi_n,\cdot\rangle}\,d\mu_{zm}^{\beta\phi}\to \int_\Gamma \nabla_\gamma^\Gamma F\,e^{-\beta\langle \psi,\cdot\rangle}\,d\mu_{zm}^{\beta\phi} \end{equation} as $n\to\infty$, i.e.~we have convergence of the left-hand side of \eqref{eqn:ibp}. In order to prove convergence of the right-hand side, we show that \begin{equation}\label{eq2a} \int_\Gamma \left\vert F \langle \nabla\psi-\nabla\psi_n,\cdot\rangle\right\vert\,e^{-\beta\langle \psi,\cdot\rangle}\,d\mu_{zm}^{\beta\phi}\to 0 \end{equation} and \begin{equation}\label{eq2b} \int_\Gamma \left\vert F\,\langle \nabla\psi_n,\cdot\rangle\,\left(e^{-\beta\langle \psi,\cdot\rangle}-e^{-\beta\langle \psi_n,\cdot\rangle}\right)\right\vert\,d\mu_{zm}^{\beta\phi}\to 0 \end{equation} as $n\to\infty$. By (a) we have $\nabla\psi_n\to\nabla\psi$ in $L^1(\R^d;m)=L^1(\R^d;\sigma_{-\beta\psi_0})$, hence Lemma \ref{lem}(ii) implies that the sequence $(\langle\nabla\psi_n,\cdot\rangle)_{n\in\N}$ converges to $\langle\nabla\psi,\cdot\rangle$ in $L^1(\Gamma;\mu_{z\sigma_{-\beta\psi_0}}^{\beta\phi})$. This implies \eqref{eq2a}, since the left-hand side of \eqref{eq2a} can be estimated by $\Xi_{-\psi_0} \Vert F\Vert_\infty\int_\Gamma \vert \langle \nabla\psi-\nabla\psi_n,\cdot\rangle \vert d\mu_{z\sigma_{-\beta\psi_0}}^{\beta\phi}$. To prove \eqref{eq2b}, we use that convergence of $(\langle\nabla\psi_n,\cdot\rangle)_{n\in\N}$ in $L^1(\Gamma;\mu_{z\sigma_{-\beta\psi_0}}^{\beta\phi})$ implies uniform integrability of this sequence w.r.t.~$\mu_{z\sigma_{-\beta\psi_0}}^{\beta\phi}$. For any $a\in\R$ we have \begin{multline*} \int_\Gamma \left\vert F\,\langle \nabla\psi_n,\cdot\rangle\,\left(e^{-\beta\langle \psi,\cdot\rangle}-e^{-\beta\langle \psi_n,\cdot\rangle}\right)\right\vert\,d\mu_{zm}^{\beta\phi}\\ \leq 2\Xi_{-\psi_0}\Vert F\Vert_\infty\,\int_{\vert\langle \nabla\psi_n,\cdot\rangle\vert\geq a}\vert \langle\nabla\psi_n,\cdot\rangle\vert\,\,d\mu_{z\sigma_{-\beta\psi_0}}^{\beta\phi} +a\Vert F\Vert_\infty\int_\Gamma \left\vert e^{-\beta\langle\psi_n,\cdot\rangle}-e^{-\beta\langle \psi,\cdot\rangle}\right\vert d\mu_{zm}^{\beta\phi}. \end{multline*} The first summand on the right-hand side can be made arbitrarily small uniformly in $n$ by choosing $a$ large, the second converges to $0$ as $n\to\infty$ for any fixed $a\in\R$. From this \eqref{eq2b} follows. Hence, \eqref{eqn:ibp} is verified for bounded, compactly supported $\psi\in H^{1,1}(\R^d)$. We now give the second approximation argument in order to treat the case when {$\psi\in H^{1,1}(\R^d)$} is bounded, but not necessarily compactly supported: Choose a sequence $(\chi_n)_{n\in\N}\subset C_0^\infty(\R^d)$ such that $1_{[-n,n]^d}\leq \chi_n\leq 1_{[-2n,2n]^d}$ and $\Vert \nabla\chi_n\Vert_\infty\to 0$ as $n\to\infty$, and define $\psi_n:=\chi_n\,\psi$. By the above considerations we know that \eqref{eqn:ibp} holds for all $\psi_n$, $n\in\N$. 
In order to extend \eqref{eqn:ibp} to $\psi$, we can apply precisely the same arguments as above, since (a) and (b) are again valid with $\psi_0=\vert \psi\vert\in L^1(\R^d;m)\cap L^\infty(\R^d;m)$. The following (third) approximation argument extends \eqref{eqn:ibp} to general $\psi$ as in the assertion: Setting $\psi_n:=\psi\wedge n$, $n\in\N$, we again obtain an approximating sequence of functions fulfilling \eqref{eqn:ibp}. In order to prove \eqref{eq1} we use the following arguments, which are a slight modification of the above ones: Since for any $n\in\N$ it holds $\psi_n^-=\psi^-$, we have $\langle \psi_n^-,\cdot\rangle=\langle\psi^-,\cdot\rangle$, and this is finite $\mu_{zm}^{\beta\phi}$-a.s.~by Lemma \ref{lem}(ii). Moreover, it holds $\langle \psi^+_n,\gamma\rangle\to \langle \psi^+,\gamma\rangle\in [0,\infty]$ as $n\to\infty$ for any $\gamma\in\Gamma$ by the monotone convergence theorem. Thus we obtain $\langle \psi_n,\cdot\rangle\to\langle \psi,\cdot\rangle$ as $n\to\infty$ pointwise $\mu_{zm}^{\beta\phi}$-a.s. Since $\vert\nabla_\gamma^\Gamma F\vert e^{-\beta\langle \psi_n,\cdot\rangle}\leq \vert \nabla_\gamma^\Gamma F\vert e^{\beta\langle \psi^-,\cdot\rangle}\in L^1(\Gamma;\mu_{zm}^{\beta\phi})$, \eqref{eq1} follows by Lebesgue's dominated convergence theorem. Moreover, since $$ \int_\Gamma \left\vert F\,\langle \nabla\psi-\nabla\psi_n,\cdot\rangle\right\vert\,e^{-\beta\langle \psi,\cdot\rangle}\,d\mu_{zm}^{\beta\phi}\leq \Xi_{\psi}\cdot\Vert F\Vert_\infty \Vert \langle\nabla\psi-\nabla\psi_n,\cdot\rangle\Vert_{L^1(\Gamma;\mu_{z\sigma_{\beta\psi}}^{\beta\phi})}, $$ by Lemma \ref{lem}(ii) and the fact that $\Vert \nabla\psi_n-\nabla\psi\Vert_{L^1(\R^d;\sigma_{\beta\psi})}=\int_{\R^d} (1_{(n,\infty)}\circ\psi) \vert \nabla\psi \vert e^{-\beta\psi}\,dx\to 0$ as $n\to\infty$, we obtain \eqref{eq2a}. In order to show \eqref{eq2b} we define the measure $\mathcal B(\R^d\times\Gamma)\ni A\mapsto \mu_{*}(A):=\int_{\Gamma} \sum_{x\in\gamma} 1_A(x,\gamma)\,d\mu_{zm}^{\beta\phi}(\gamma)$ on the Borel $\sigma$-field $\mathcal B(\R^d\times\Gamma)$ of $\R^d\times\Gamma$. It holds $$ \int_{\Gamma} \left \vert F\langle\nabla\psi_n,\cdot\rangle\left(e^{-\beta\langle \psi,\cdot\rangle}-e^{-\beta\langle \psi_n,\cdot\rangle}\right)\right\vert\,d\mu_{zm}^{\beta\phi}\leq \Vert F\Vert_\infty \int_{\R^d\times\Gamma} \Theta_n(x,\gamma)\,d\mu_*(x,\gamma), $$ where $\Theta_n(x,\gamma):=\vert\nabla\psi_n(x)\vert \left\vert e^{-\beta\langle\psi,\gamma\rangle}-e^{-\beta\langle\psi_n,\gamma\rangle}\right\vert$ for $(x,\gamma)\in\R^d\times\Gamma$ such that $\langle \psi^-,\gamma\rangle<\infty$. Note that $\Theta_n$ is $\mu_*$-a.e.~defined. We have to prove convergence of $\Theta_n$ to $0$ in $L^1(\R^d\times\Gamma;\mu_*)$ as $n\to\infty$. To this end, we first note that for any $n\in\N$, $\gamma\in\Gamma$ (s.t.~$\langle \psi^-,\gamma\rangle<\infty$) and $x\in\gamma$ it holds \begin{align*} \Theta_n(x,\gamma)&=1_{(-\infty,n]}(\psi(x)) \vert\nabla\psi(x)\vert \left\vert e^{-\beta\langle\psi,\gamma\rangle}-e^{-\beta\langle\psi_n,\gamma\rangle}\right\vert\\ &=1_{(-\infty,n]}(\psi(x)) \vert\nabla\psi(x)\vert e^{-\beta\psi(x)}\left\vert e^{-\beta\langle\psi,\gamma\setminus \{x\}\rangle}-e^{-\beta\langle\psi_n,\gamma\setminus \{x\}\rangle}\right\vert. \end{align*} The right-hand side converges to $0$ as $n\to\infty$. This shows that the sequence $(\Theta_n)_{n\in\N}$ converges pointwise to $0$ $\mu_{*}$-a.e.
In order to obtain convergence to $0$ in $L^1(\R^d\times\Gamma;\mu_*)$ from Lebesgue's dominated convergence theorem, we note that for $\gamma\in\Gamma$ (s.t.~$\langle \psi^-,\gamma\rangle<\infty$) and $x\in\gamma$ the above equality implies \begin{align*} \Theta_n(x,\gamma)&\leq \vert\nabla\psi(x)\vert e^{-\beta\psi(x)} e^{\beta\langle \psi^-,\gamma\setminus\{x\}\rangle}\\ &= \vert\nabla\psi(x)\vert e^{-\beta\psi^+(x)} e^{\beta\langle\psi^-,\gamma\rangle}=:\Theta_0(x,\gamma) \end{align*} and \begin{align*} \int_{\R^d\times\Gamma} \Theta_0\,d\mu_*&=\int_{\Gamma}\langle\vert \nabla\psi\vert e^{-\beta\psi^+},\cdot\rangle e^{\beta\langle\psi^-,\cdot\rangle}\,d\mu_{zm}^{\beta\phi}=\Xi_{-\psi^-}\int_{\Gamma}\langle \vert \nabla\psi\vert e^{-\beta\psi^+},\cdot\rangle\,d\mu_{z\sigma_{-\beta\psi^-}}^{\beta\phi}\\&\leq \xi_{\sigma_{-\beta\psi^-}}\Xi_{-\psi^-} \int_{\R^d}\vert\nabla\psi\vert e^{-\beta\psi^+}e^{\beta\psi^-}\,dm \end{align*} by Lemma \ref{lem}(ii), and the right-hand side is finite by assumption. \end{proof} We now turn to the proof of Theorem \ref{thm2}. A first step toward the proof is contained in the following lemma. \begin{lemma}\label{lem2} Let $\phi$, $\psi$ be as in Theorem \ref{thm2}. Let $\mu_{zm}^{\beta\phi}$, $\mu_{z\sigma_{\beta\psi}}^{\beta\phi}$ be as in Theorem \ref{thm}(i) and assume that \eqref{eqn:ibp} holds. Let $\varphi: \R^d\to\R$ be weakly differentiable and such that $\varphi\leq \psi$, $\varphi^-\in L^1(\R^d;m)$, $\nabla\varphi\in L^1(\R^d;m)$ and $\sup_{y\in A}\varphi(y)<\infty$ for a neighborhood $A$ of $0$. Then, for all $F\in\mathcal FC_b^\infty(C_0^\infty(\R^d),\Gamma)$, \begin{equation}\label{eqn:pretinv} \int_{\Gamma} \nabla_\gamma^\Gamma F \cdot e^{\beta\langle\varphi,\cdot\rangle}\,d\mu_{z\sigma_{\beta\psi}}^{\beta\phi}=\beta\int_{\Gamma} F\,\langle \nabla\psi-\nabla\varphi,\cdot\rangle e^{\beta\langle\varphi,\cdot\rangle}\,d\mu_{z\sigma_{\beta\psi}}^{\beta\phi}. \end{equation} \end{lemma} \begin{proof} Observe that for any $K\in\N$ it holds $\varphi\wedge K\in H^{1,1}(\R^d)$. Choose a sequence $(\varphi_n)_{n\in\N}\subset C_0^\infty(\R^d)$ such that $\varphi_n\to \varphi\wedge K$ and $\nabla\varphi_n\to\nabla(\varphi\wedge K)$ in $L^1(\R^d)$. We apply \eqref{eqn:ibp} to obtain $$ \int_{\Gamma} \nabla_\gamma^\Gamma F \cdot h(\langle\varphi_n,\cdot\rangle)\,d\mu_{z\sigma_{\beta\psi}}^{\beta\phi}=-\int_{\Gamma} F\cdot h'(\langle\varphi_n,\cdot\rangle)\langle \nabla\varphi_n,\cdot\rangle\,d\mu_{z\sigma_{\beta\psi}}^{\beta\phi}+\beta\int_{\Gamma} F \cdot h(\langle\varphi_n,\cdot\rangle)\langle \nabla\psi,\cdot\rangle\,d\mu_{z\sigma_{\beta\psi}}^{\beta\phi} $$ for $h\in C_b^\infty(\R)$ and consider the limit as $n\to\infty$. We have $\langle\varphi_n,\cdot\rangle\to \langle\varphi\wedge K,\cdot\rangle$ in $L^1(\Gamma;\mu_{z\sigma_{\beta\psi}}^{\beta\phi})$ by Lemma \ref{lem}(ii), and passing to a subsequence we may w.l.o.g.~assume that this holds also pointwise $\mu_{z\sigma_{\beta\psi}}^{\beta\phi}$-a.s. Thus $h(\langle\varphi_n,\cdot\rangle)\to h(\langle\varphi\wedge K,\cdot\rangle)$ and $h'(\langle\varphi_n,\cdot\rangle)\to h'(\langle\varphi\wedge K,\cdot\rangle)$ pointwise $\mu_{z\sigma_{\beta\psi}}^{\beta\phi}$-a.s.~and by Lebesgue's theorem also in weak-$*$ sense in $L^\infty(\Gamma;\mu_{z\sigma_{\beta\psi}}^{\beta\phi})$.
Together with the convergence $\langle\nabla\varphi_n,\cdot\rangle\to \langle \nabla(\varphi\wedge K),\cdot\rangle$ in $L^1(\Gamma;\mu_{z\sigma_{\beta\psi}}^{\beta\phi})$ and integrability of $\nabla_\gamma^\Gamma F$ and $\langle\nabla\psi,\cdot\rangle$ w.r.t.~$\mu_{z\sigma_{\beta\psi}}^{\beta\phi}$, we obtain \begin{multline*} \int_{\Gamma} \nabla_\gamma^\Gamma F \cdot h(\langle\varphi\wedge K,\cdot\rangle)\,d\mu_{z\sigma_{\beta\psi}}^{\beta\phi}=\\ -\int_{\Gamma} F\cdot h'(\langle\varphi\wedge K,\cdot\rangle)\langle \nabla(\varphi\wedge K),\cdot\rangle\,d\mu_{z\sigma_{\beta\psi}}^{\beta\phi}+\beta\int_{\Gamma} F \cdot h(\langle\varphi\wedge K,\cdot\rangle)\langle \nabla\psi,\cdot\rangle\,d\mu_{z\sigma_{\beta\psi}}^{\beta\phi} \end{multline*} for any $h\in C_b^\infty(\R)$. Letting $K\to\infty$ and using similar arguments we obtain \begin{equation}\label{eqn:pretinvb} \int_{\Gamma} \nabla_\gamma^\Gamma F \cdot h(\langle\varphi,\cdot\rangle)\,d\mu_{z\sigma_{\beta\psi}}^{\beta\phi}=-\int_{\Gamma} F\cdot h'(\langle\varphi,\cdot\rangle)\langle \nabla\varphi,\cdot\rangle\,d\mu_{z\sigma_{\beta\psi}}^{\beta\phi}+\beta\int_{\Gamma} F \cdot h(\langle\varphi,\cdot\rangle)\langle \nabla\psi,\cdot\rangle\,d\mu_{z\sigma_{\beta\psi}}^{\beta\phi} \end{equation} for $h\in C_b^\infty(\R)$. Now choose a sequence $(h_k)_{k\in\N}\subset C_b^\infty(\R)$ such that $0\leq h_k\uparrow e^{\beta\cdot}$ and $0\leq h_k'\uparrow \beta e^{\beta\cdot}$ as $k\to\infty$. Taking $h=h_k$ in \eqref{eqn:pretinvb} and letting $k\to\infty$, we obtain \eqref{eqn:pretinv} from the monotone convergence theorem (when considering the positive and negative parts of all components of the integrands in \eqref{eqn:pretinvb} separately). For doing so, we only need to verify that $\nabla_\gamma^\Gamma F e^{\beta\langle\varphi,\cdot\rangle}$, $F e^{\beta\langle\varphi,\cdot\rangle} \langle\nabla\varphi,\cdot\rangle$ and $F e^{\beta\langle\varphi,\cdot\rangle} \langle\nabla\psi,\cdot\rangle$ are $\mu_{z\sigma_{\beta\psi}}^{\beta\phi}$-integrable. For the first two expressions this is clear by Lemma \ref{lem}(ii) and since $\varphi\leq \psi$ and $\nabla\varphi\in L^1(\R^d;m)$. For the last one, we compute using Lemma \ref{lem}(ii) and the assumptions on $\varphi$ and $\psi$ \begin{eqnarray*} \lefteqn{\int_{\Gamma}\big\vert F e^{\beta\langle\varphi,\cdot\rangle} \langle\nabla\psi,\cdot\rangle \big\vert d\mu_{z\sigma_{\beta\psi}}^{\beta\phi}}\\ & &\leq \frac{\Xi_{\psi-\varphi}}{\Xi_\psi}\Vert F\Vert_\infty \int_{\Gamma} \langle\vert\nabla\psi\vert,\cdot\rangle d\mu_{z\sigma_{\beta(\psi-\varphi)}}^{\beta\phi}\leq \xi_{\sigma_{\beta(\psi-\varphi)}}\frac{\Xi_{\psi-\varphi}}{\Xi_\psi}\,\Vert F\Vert_\infty \int_{\R^d} \vert\nabla\psi\vert e^{\beta(\varphi-\psi)}\,dx\\ & &\leq \xi_{\sigma_{\beta(\psi-\varphi)}} \frac{\Xi_{\psi-\varphi}}{\Xi_\psi}\,\Vert F\Vert_\infty \left(\int_{\R^d\setminus A} \vert \nabla\psi\vert \,dx+e^{\beta\sup_{y\in A}\varphi(y)}\int_{A} \vert \nabla\psi\vert e^{-\beta\psi}\,dx\right)<\infty. \end{eqnarray*} This completes the proof of the lemma. \end{proof} \begin{remark}\label{rem:notgeneral} Some comments should be given on the question why Theorem \ref{thm2} is not shown in the generality of Theorem \ref{thm}(ii). After deriving \eqref{eqn:pretinv} one might try an approximation $\varphi_n:=\psi\wedge n$ in order to extend that equation to $\varphi=\psi$, which coincides then with \eqref{eqn:ibp}.
However, this seems to lead to the necessity of proving that $$ \int_{\R^d} \vert \nabla e^{\beta(\psi\wedge n)-\beta\psi}\vert\,dx=\beta\int_{\R^d} 1_{\{\psi\geq n\}} \vert \nabla \psi\vert e^{\beta n-\beta\psi}\,dx\to 0, $$ which fails in general if $\psi$ is not weakly differentiable. (In contrast, for the third approximation in the proof of Theorem \ref{thm}(ii) we only needed $\int_{\R^d} 1_{\{\psi\geq n\}}\vert \nabla \psi\vert e^{-\beta\psi}\,dx\to 0$.) We avoid this problem by confining ourselves to treating the case where $\psi$ is weakly differentiable except on a very small set. \end{remark} \begin{proof}[Proof of Theorem \ref{thm2}] Necessity is stated in Theorem \ref{thm}(ii). We prove sufficiency: Let $F\in\mathcal FC_b^\infty(C_0^\infty(\R^d),\Gamma)$ and let $v\in\R^d$. We need to show $\int_{\Gamma} F(\gamma+v)-F(\gamma)\,d\mu_{zm}^{\beta\phi}=0$. For $\varepsilon>0$ let $U_\varepsilon$ consist of those points of $\R^d$ which have distance less than $\varepsilon$ from the line segment $\{sv\,|\,s\in [0,1]\}$ and choose a function $\chi_\varepsilon\in C^\infty(\R^d)$ such that $\chi_\varepsilon=1$ on $U_{\varepsilon}$ and $\chi_\varepsilon=0$ on $\R^d\setminus U_{2\varepsilon}$. Let $g: \R\to [0,1]$ be a smooth function fulfilling $g(0)=1$ and $g(s)=0$ for all $s\in [1,\infty)$. Choose a smooth function $h_\varepsilon: \R^d\to [0,1]$ such that $h_\varepsilon=1$ outside $B_\varepsilon(0)$ and $h_\varepsilon=0$ in $B_{\varepsilon/2}(0)$. Define $\varphi_\varepsilon:= \psi h_\varepsilon+(1-h_\varepsilon)\inf_{y\in\R^d}\psi(y)$. Then $\varphi_\varepsilon$ fulfills the conditions of Lemma \ref{lem2} and we obtain for all $s\in [0,1]$ \begin{multline}\label{eqn:gurk} \int_{\Gamma} \nabla_\gamma^\Gamma (F g(\langle \chi_\varepsilon,\cdot\rangle))(\gamma+sv) e^{\beta\langle \varphi_\varepsilon-\psi,\gamma\rangle}d\mu_{zm}^{\beta\phi}(\gamma)\\ =\beta \int_\Gamma F(\gamma+sv)\,g(\langle \chi_\varepsilon,\gamma+sv\rangle) \langle \nabla\psi-\nabla\varphi_\varepsilon,\gamma\rangle e^{\beta\langle \varphi_\varepsilon-\psi,\gamma\rangle}\,d\mu_{zm}^{\beta\phi}(\gamma). \end{multline} The choice of $\chi_\varepsilon$ and $g$ implies that for any $\gamma\in\Gamma$ fulfilling $\gamma\cap B_{\varepsilon}(0)\neq \emptyset$ it holds $F(\gamma+sv) g(\langle\chi_\varepsilon,\gamma+sv\rangle)=0$ and $\nabla_\gamma^\Gamma(F g(\langle \chi_\varepsilon,\cdot\rangle))(\gamma+sv)=0$, so the integrands in the above equation can only be nonzero for $\gamma\in\Gamma$ fulfilling $\gamma\cap B_\varepsilon(0)=\emptyset$. Since for all such $\gamma$ we have $\varphi_\varepsilon(x)=\psi(x)$ and $\nabla\varphi_\varepsilon(x)=\nabla\psi(x)$ for all $x\in\gamma$, it follows $$ \int_\Gamma \nabla_\gamma^\Gamma (Fg(\langle\chi_\varepsilon,\cdot\rangle))(\gamma+sv) d\mu_{zm}^{\beta\phi}(\gamma)=0. $$ Since $\frac{d}{ds} (F(\gamma+sv)g(\langle\chi_\varepsilon,\gamma+sv\rangle))=v\nabla_\gamma^\Gamma (Fg(\langle\chi_\varepsilon,\cdot\rangle))(\gamma+sv)$ for $\gamma\in\Gamma$, $s\in [0,1]$, it follows from the fundamental theorem of calculus that $$ \int_{\Gamma} F(\gamma+v) g(\langle \chi_\varepsilon,\gamma+v\rangle) d\mu_{zm}^{\beta\phi}(\gamma)=\int_{\Gamma} F(\gamma) g(\langle \chi_\varepsilon,\gamma\rangle) d\mu_{zm}^{\beta\phi}(\gamma). $$ Letting $\varepsilon\to 0$, we obtain $\chi_\varepsilon\to 0$ Lebesgue-a.e.; here we use that $d\geq 2$.
Hence by Lebesgue's theorem and Lemma \ref{lem}(i) it follows $$ \int_{\Gamma} F(\gamma+v)\,d\mu_{zm}^{\beta\phi}(\gamma)=\int_{\Gamma} F(\gamma)\,d\mu_{zm}^{\beta\phi}(\gamma), $$ which is what we needed to show. \end{proof} \section*{Appendix} Let us recall the definitions of superstability and lower regularity of a potential and some definitions from Gibbs measure theory. We call a function $\phi: \R^d\to\R\cup\{\infty\}$ a \emph{potential}, if it is measurable and even (i.e.~$\phi(x)=\phi(-x)$ for all $x\in\R^d$). $\phi$ is said to be \emph{superstable}, if there are $a>0$ and $b\geq 0$ such that for any finite configuration $\gamma$ it holds $$ \sum_{\{x,x'\}\subset\gamma}\phi(x-x')\geq a\sum_{r\in\Z^d}\sharp(\gamma\cap Q_r)^2-b\sharp \gamma, $$ where $Q_r:=\{(x_1,\cdots,x_d)\in\R^d\,|\,r_i-1/2<x_i\leq r_i+1/2, 1\leq i\leq d\}$ for $r=(r_1,\cdots,r_d)\in\Z^d$, and $\sharp M$ denotes the cardinality of a set $M$. It is called \emph{stable}, if the above estimate holds with $a=0$. $\phi$ is called \emph{lower regular}, if there exists a decreasing function $\theta: [0,\infty)\to [0,\infty)$ such that $\int_0^\infty r^{d-1}\theta(r)\,dr<\infty$ and $\phi(x)\geq -\theta(\vert x\vert)$, $x\in\R^d$. These conditions are fulfilled by a wide class of potentials including those of Lennard-Jones type. By $\Gamma_0$ we denote the set of finite elements of $\Gamma$, and equip it with the trace $\sigma$-field $\mathcal B_0$ of $\mathcal B$ corresponding to the inclusion $\Gamma_0\subset\Gamma$. If $\Lambda\subset\R^d$ is measurable, we set $\Gamma_{\Lambda}:=\{\gamma\in\Gamma\,|\,\gamma\subset\Lambda\}$. It can be considered as a subset of $\Gamma$ or, if $\Lambda$ is relatively compact, as a subset of $\Gamma_0$. Given a $\sigma$-finite measure $\sigma$ on $\R^d$ and an activity parameter $z>0$, one defines on $\Gamma_0$ the Lebesgue-Poisson measure $\lambda_{z\sigma}$ by $$ \lambda_{z\sigma}(A):=\sum_{n=0}^\infty \frac{z^n}{n!} \int_{(\R^d)^n} 1_A(\{x_1,\cdots,x_n\})d\sigma(x_1)\cdots d\sigma(x_n),\quad A\in\mathcal B_0. $$ A measure $\mu$ on $(\Gamma,\mathcal B)$ is said to be \emph{tempered} if it is supported on the set $$ \bigcup_{N\in\N}\bigg\{\gamma\in\Gamma\,\bigg|\,\sum_{r\in[l]}\sharp(\gamma\cap Q_r)^2\leq N^2 (2l+1)^d \mbox{ for all $l\in\N$}\bigg\}, $$ where $[l]:=\Z^d\cap [-l,l]^d$. Let $\phi$ be a stable potential, $\beta>0$ and $z>0$. If a tempered measure $\mu$ on $(\Gamma,\mathcal B)$ fulfills the following condition (the Ruelle equation): \begin{enumerate} \item[(R)] For any nonnegative $\mathcal B$-measurable $F: \Gamma\to\R$ and all measurable relatively compact $\Lambda\subset\R^d$ it holds $$ \quad\quad\int_{\Gamma}F\,d\mu=\int_{\Gamma_{\R^d\setminus \Lambda}}\int_{\Gamma_{\Lambda}} F(\gamma\cup\eta)e^{-\beta \sum_{x\in\eta,y\in\gamma}\phi(x-y)-\beta \sum_{\{x,x'\}\subset \eta} \phi(x-x')}\,d\lambda_{z\sigma}(\eta)d\mu(\gamma), $$ \end{enumerate} then it is said to be a tempered \emph{grand canonical Gibbs measure} for $\phi$ with intensity measure $\sigma$, inverse temperature $\beta$ and activity $z$. \vspace{2ex}\textbf{Acknowledgement}: The authors thank the CCM at the University of Madeira for their hospitality during the Madeira Math Encounters XXXVII in 2009, where this work was initiated. Financial support through FCT, POCTI-219, FEDER, the SFB 701 and DFG projects GR 1809/5-1 and GR 1809/8-1 is gratefully acknowledged.
\section{Introduction} Idiomatic expressions present a challenge to Large Language Models (LLMs) as their meaning cannot necessarily be derived from the composition of their component tokens---the strategy that LLMs typically exploit to create representations of multi-word expressions. The lack of compositionality leads to poor representations for idiomatic expressions and in turn poor performance in downstream tasks whose data includes them. SemEval-2022 task 2b \citep{tayyarmadabushi-etal-2022-semeval} encourages the creation of better representations of idiomatic expressions across multiple languages by presenting a \textbf{Semantic Text Similarity (STS)} task in which correct STS scores are required whether or not either sentence contains an idiomatic expression. The sub-task requires the creation of a self-consistent model in which a sentence including an idiomatic expression and one containing a literal paraphrase of its meaning ('\textit{swan song}' and '\textit{final performance}') are exactly similar to each other and equally similar to any other sentence. To achieve this goal, we investigate whether, given the similarity between idioms and rare words, Schick and Sch\"utze's BERT for Attentive Mimicking (BERTRAM) model \citep{schick-schutze-2020-bertram}, which was designed for use with rare words, can be used to explicitly learn high-quality embeddings for idiomatic expressions. We also investigate how many examples of each idiom are required to create embeddings that perform well on the task, as well as how the quality of contexts fed to the BERTRAM model affects the representations and performance on the task. Evaluating our model on the task shows that externally trained idiom embeddings significantly increase the performance on STS data containing idioms while maintaining high performance on general STS data. This improved performance earned an overall Spearman rank score of 0.6402 and first place (of six entries) in the pre-train setting, and an overall Spearman rank score of 0.6504 and second place (of five entries) in the fine-tune setting.\footnote{The code for creating the embeddings and the modified baseline system code can be found on GitHub: https://github.com/drsphelps/semeval-task-2.} \section{Background} \begin{table*}[h!] \small \centering \begin{tabular}{| p{0.2\linewidth} | p{0.7\linewidth} |} \hline \textbf{Usage} & \textbf{Example in Sentence} \\ \hline \hline Idiomatic & Blockchains, fundamentally, are banking because what they’re doing is allowing the transaction of value across networks … they’re doing it in an orthogonally different way," he said Wednesday in what may be his \textbf{swan song} in public office. \\ \hline Literal & Blockchains, fundamentally, are banking because what they’re doing is allowing the transaction of value across networks … they’re doing it in an orthogonally different way," he said Wednesday in what may be his \textbf{bird song} in public office. \\ \hline Semantically Similar & Blockchains, fundamentally, are banking because what they’re doing is allowing the transaction of value across networks … they’re doing it in an orthogonally different way," he said Wednesday in what may be his \textbf{final performance} in public office. \\ \hline \end{tabular} \caption{\label{data-example} Example sentences for the Idiomatic STS data. Idiomatic and Semantically Similar uses should be given an STS score of 1, and be given the same score when compared to the literal use.
} \end{table*} Adopting the idiom principle \citep{sincliar:1991} to produce a single token representation for MWEs has been used widely within static embedding distributional semantic models (\citealp{Mikolov2013DistributedRO}; \citealp{cordeiro-etal-2019-unsupervised}). Within contextualised representation models, \citet{hashempour-villavicencio-2020-leveraging} show that the contextualised representations produced by context2vec \citep{melamud-etal-2016-context2vec} and BERT \citep{devlin-etal-2019-bert} models can be used to differentiate between idiomatic and literal uses of MWEs. However, the MWEs are only represented by one token in the input before being broken into many tokens by BERT's WordPiece tokenizer. \citet{tayyar-madabushi-etal-2021-astitchinlanguagemodels-dataset} add a token to the BERT embedding matrix and show that this method improves representations through increased performance on their proposed STS task. The embeddings they add to BERT are randomly initialised, however, and only trained during the fine-tuning step on limited data. \subsection{BERTRAM} BERT for Attentive Mimicking (BERTRAM) \citep{schick-schutze-2020-bertram}, originally developed to improve representations of rare words, builds upon attentive mimicking \citep{schick-schutze-2019-attentive} to create embeddings, within existing embedding spaces, for tokens that incorporate both form and context information from a small number of example contexts. During training, the model attempts to recreate embeddings for common words, with the existing embedding in the model treated as the `gold embedding', a process known as mimicking. Form embeddings are then learnt using trained n-gram character embeddings, before being passed with a context into a BERT model. The output of the BERT model forms the embedding for that specific context. To incorporate knowledge from many contexts, an attention layer is applied over the outputs for each context to get the final embedding. There exist other models to produce effective embeddings from a small number of contexts \citep{zhao-etal-2018-generalizing, pinter-etal-2017-mimicking}; however, BERTRAM is the only model that is non-bag-of-words and incorporates both form and context information when creating the embedding. Rare words are, unsurprisingly, defined by how uncommon they are within datasets. This leads to problems when using LLMs on tasks involving rare words, as the word pieces they are broken down into have not been influenced enough during pre-training to accurately represent them. Similarly, idiomatic phrases represent a small proportion of the usage of their constituent words; the idioms in the development set for this task represent an average of 4.9\% of the usage of their constituent words. Therefore, the embeddings for constituent words are not significantly affected by the usage of idioms in the training data, leading to the model failing to understand the idiomatic expressions. Further similarities between idioms and rare words include the variance in compositionality: for example, \textit{unicycle} can be partially understood from its word pieces, whereas \textit{kumquat} cannot. \section{Methodology} \subsection{Embedding Creation} Due to the similarities between rare words and idioms, we use BERTRAM to create representations for idiomatic expressions. A separate BERTRAM model is used for each of the task's languages. For English, we use the pre-trained model provided with the original paper.
For Portuguese and Galician we train BERTRAM models with BERTimbau Base \citep{portugueseBERT} and Bertinho-Base \citep{galicianBERT} respectively used as the base transformers. The Portuguese and Galician BERTRAM models are trained using almost the same training regime as outlined for the English model in the original paper: 3 epochs of context-only training, 10 epochs of form-only training and 3 epochs of combined training. Due to time and compute restrictions, we do not use One-Token Approximation to expand the number of gold standard representations that can be used for attentive mimicking. The Portuguese and Galician splits of the cc100 dataset (\citealp{conneau-etal-2020-unsupervised}; \citealp{wenzek-etal-2020-ccnet}) are used to train the models, with the entire split being used for Galician, and a 10GB subset used for Portuguese. Embeddings for each of the idioms found in the task data can then be created using these models. Example contexts are retrieved from the relevant split in the cc100 dataset using a grep command\footnote{\textit{grep -i " \$val" -m250 en.txt > \$val.data}, where \$val is the idiom of interest} that retrieves the entire line that the instance of the idiom is found on. We investigate how changing the number of contexts used to create each embedding changes our performance on the task by creating embeddings for each idiom with between 1 and 250 examples, at intervals. \subsection{Model Architecture} \begin{figure} \centering \includegraphics[width=2.7in]{pretrain.png} \caption{Overall Spearman Rank performance on the development set for the English and Portuguese models at different epochs during pretraining} \label{fig:pretrain} \end{figure} \begin{figure} \centering \includegraphics[width=2.7in]{finetune} \caption{Overall and Idiom STS Only Spearman Rank on the development set whilst training on the Idiom STS data} \label{fig:finetune} \end{figure} For predicting the similarity scores, a separate model is used for each of the languages: BERT-Base \citep{devlin-etal-2019-bert} for English, BERTimbau for Portuguese, and Bertinho-Base for Galician. The created BERTRAM embeddings for each of the idioms found within the task are added into the embedding matrix of the relevant model. These models are used within a Sentence BERT \citep{reimers-gurevych-2019-sentence} setup, implemented using the SentenceTransformers library, which consists of a siamese network structure that uses mean squared error over the cosine similarities of the input sentences as its loss function. This allows us to use the contextualised embedding outputs of our BERT networks to find the cosine similarity between a given pair of sentences. \subsection{Data} This sub-task uses data in English, Portuguese and Galician. Data is also split into general STS data, which does not necessarily contain idioms, and idiom STS data, which specifically contains idioms and phrases that are semantically or literally similar. An example of idiom STS data taken from the task description can be seen in Table \ref{data-example}. English and Portuguese are the primary languages; general STS data, from STSBenchmark \citep{cer-etal-2017-semeval} and ASSIN2 \citep{Real2020TheA2} for English and Portuguese respectively, and idiom STS data for both languages are included in the train, dev, eval and test sets. A very small amount (50 examples) of Galician data, consisting of idiom STS data, is also included in the test set. The task is split into two settings, pre-train and fine-tune.
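Before turning to the two settings, we give a minimal, purely illustrative sketch of the setup described above: single-token idiom embeddings are injected into BERT's embedding matrix and the resulting model is trained in the Sentence BERT siamese setup. The toy idiom vector, the toy training pair, and all variable names below are placeholders rather than excerpts from our released code; in practice the vectors come from BERTRAM and the pairs from the STS data described below. \begin{verbatim}
# Illustrative sketch only: inject single-token idiom embeddings
# (here a random stand-in for real BERTRAM output) into BERT, then
# train in the Sentence-BERT siamese setup described above.
import torch
from torch.utils.data import DataLoader
from sentence_transformers import (SentenceTransformer, InputExample,
                                   losses, models)

word_model = models.Transformer('bert-base-uncased')
tokenizer = word_model.tokenizer

# Stand-in for real BERTRAM output: one vector per idiom.
idiom_vectors = {'swan song': torch.randn(768)}

# Register each idiom as a single token and copy in its vector.
tokenizer.add_tokens(list(idiom_vectors))
word_model.auto_model.resize_token_embeddings(len(tokenizer))
embeddings = word_model.auto_model.get_input_embeddings()
with torch.no_grad():
    for idiom, vector in idiom_vectors.items():
        embeddings.weight[tokenizer.convert_tokens_to_ids(idiom)] = vector

pooling = models.Pooling(word_model.get_word_embedding_dimension())
model = SentenceTransformer(modules=[word_model, pooling])

# Toy STS pair; real training uses the task's STS data.
train_examples = [InputExample(
    texts=['this may be his swan song in public office',
           'this may be his final performance in public office'],
    label=1.0)]
loader = DataLoader(train_examples, shuffle=True, batch_size=16)

# MSE over cosine similarities, as in our Sentence-BERT setup.
model.fit(train_objectives=[(loader, losses.CosineSimilarityLoss(model))],
          epochs=1)
\end{verbatim} The \texttt{CosineSimilarityLoss} of the SentenceTransformers library implements the mean squared error over cosine similarities used as our loss function.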
The pre-train setting does not allow for the use of STS score annotated data that includes idioms, whereas any data can be used in the fine-tune setting. The evaluation metric used in this task is the correlation between the predicted similarities and the gold standard ones, calculated using Spearman's Rank Correlation Coefficient. The Spearman's Rank is calculated for the general STS data and the idiom STS data separately; however, the Spearman's Rank for the entire dataset is used in the final evaluation. \subsection{Pre-train Setting} For the pre-train setting, we use the general STS data in English and Portuguese to train the respective models. Due to a lack of available STS data for Galician, the Galician model is trained on the Portuguese data, as the two languages are highly similar. Evaluating the models on the dev split, we investigate the optimal number of epochs for the English and Portuguese models. The results (shown in Figure \ref{fig:pretrain}) show that 45 epochs are optimal for Portuguese and 35 for English. Due to a lack of dev split data for Galician, we use the result from the Portuguese model, as the two are trained on the same data. \subsection{Fine-tune Setting} \begin{table*} \centering \begin{tabular}{llrrr} \hline Setting & Language(s) & SR ALL & SR Idiom & SR STS \\ \hline Pre-Train & EN & 0.7445 & 0.4422 & 0.8709 \\ Pre-Train & PT & 0.7087 & 0.4806 & 0.8010 \\ Pre-Train & GL & 0.2924 & 0.2924 & - \\ \textbf{Pre-Train} & \textbf{All} & \textbf{0.6402} & \textbf{0.4030} & \textbf{0.8641} \\ \textit{Pre-Train} & \textit{EN} & \textit{0.5958} & \textit{0.2488} & \textit{0.8300} \\ \textit{Pre-Train} & \textit{PT} & \textit{0.5584} & \textit{0.2761} & \textit{0.7745} \\ \textit{Pre-Train} & \textit{GL} & \textit{0.1976} & \textit{0.1976} & \textit{-} \\ \textit{Pre-Train} & \textit{All} & \textit{0.4810} & \textit{0.2263} & \textit{0.8311} \\ \hdashline Fine-Tune & EN & 0.7643 & 0.4861 & 0.8344 \\ Fine-Tune & PT & 0.7307 & 0.4643 & 0.7908 \\ Fine-Tune & GL & 0.2859 & 0.2859 & - \\ \textbf{Fine-Tune} & \textbf{All} & \textbf{0.6504} & \textbf{0.4124} & \textbf{0.8188} \\ \textit{Fine-Tune} & \textit{EN} & \textit{0.6684} & \textit{0.4109} & \textit{0.6210} \\ \textit{Fine-Tune} & \textit{PT} & \textit{0.6026} & \textit{0.4090} & \textit{0.5523} \\ \textit{Fine-Tune} & \textit{GL} & \textit{0.3842} & \textit{0.3842} & \textit{-} \\ \textit{Fine-Tune} & \textit{All} & \textit{0.5951} & \textit{0.3990} & \textit{0.5961} \\ \hline \end{tabular} \caption{\label{final-results} Final Spearman Rank (SR) scores of the system on the test set, split into idiom Semantic Text Similarity (STS), general STS, and all datasets. Aggregated results for all languages in bold. Results for the baseline system, also broken down into languages, are in italics. } \end{table*} For the fine-tune setting we start with the models from the pre-train setting, and further train them on the Idiom STS data provided as part of the task. Again we investigate the optimal number of epochs of training on this data (results shown in Figure \ref{fig:finetune}). We find that the overall Spearman rank is highest after just a single epoch of training, with further training considerably reducing the performance on the general STS data, and thus on the overall STS score. However, further training, up to 50 epochs, continues to increase the performance of the model on Idiom STS data.
Therefore, depending on the application and required trade-off, the model can be tuned to perform better on either general STS data or idiom STS data. \subsection{Number of Examples} \begin{figure} \centering \includegraphics[width=3in]{number_examples.png} \caption{Overall Spearman Rank correlation score on the development set with different numbers of examples used to create the idiom embeddings.} \label{number-examples} \end{figure} We also tune the number of examples given for each idiom on the development data. Using BERTRAM we train embeddings for each of the idioms using a range of different numbers of examples from 1 to 250. The performance of each set of embeddings is evaluated by training the whole system for 10 epochs followed by evaluation on the dev set. Figure \ref{number-examples} shows the results of this experiment. The performance increases quickly from 1 to 15 examples before flattening out. The absolute highest performance is achieved at 150 examples, and so this is the value we use going forward. \section{Results} The final results for our system on the test data can be seen in Table \ref{final-results}. These scores show significant improvement over the baseline system and led to our system being placed first for the pre-train setting, and second for the fine-tune setting. Fine-tuning has a much smaller effect on the performance of the system on the test set than on the dev and evaluation sets, with only a small, but significant, rise in overall correlation. Performance rises by only 0.0198 and 0.022 for English and Portuguese respectively, and unlike on the dev data, we do not see a uniform increase on the SR Idiom score. \subsection{Galician Performance} The performance we achieve on the Galician idiom data is much lower than what is seen on the English and Portuguese data. As we did not have access to any development data for Galician, further investigation will be needed to identify the causes of this discrepancy. Due to the smaller amount of Galician data in the cc100 corpus, some idioms did not have the full 150 examples that were used to create the embeddings for the English and Portuguese idioms. Additionally, there was no Galician STS data to train the final model on, and even though Portuguese and Galician are very similar, the remaining differences between the languages may lead to differences in performance. \subsection{Error Analysis and Data Issues} To perform analysis on the quality of the created representations, we calculate the Spearman's Rank Correlation for each of the idioms in the development set individually. Any idioms with fewer than 5 occurrences in the development data are removed, as significant correlation scores cannot be achieved with such a low sample size. When evaluating the performance of the idioms individually, we can see that some of the idiomatic expressions perform much worse than average. For example, the Spearman rank score for `fish story' is just 0.190 when the embedding is trained on 10 random examples. Analysis of these errors shows that the lower performance can, at least in part, be attributed to different phrase senses in the automatically collected examples. Taking our above example `\textit{fish story}', 3 different phrase senses can be observed in the original randomly selected examples: a tall tale, a literal story about fish, and a proper noun in the title of the film `A Fish Story'. This leads to a divergence between the contexts in the examples and the contexts of the idiomatic uses, and in turn to worse embeddings for the idiomatic phrases.
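The per-idiom breakdown underlying this analysis is straightforward to compute; the following is a minimal sketch, assuming a table of development-set predictions with one row per sentence pair (the file name and column names are hypothetical, not taken from our released code). \begin{verbatim}
# Sketch of the per-idiom error analysis described above; assumes a
# CSV (hypothetical name/columns) with one row per dev sentence pair:
# the idiom it contains, the gold STS score and the predicted score.
import pandas as pd
from scipy.stats import spearmanr

dev = pd.read_csv('dev_predictions.csv')  # columns: idiom, gold, pred

per_idiom = {}
for idiom, group in dev.groupby('idiom'):
    if len(group) < 5:  # too few pairs for a meaningful correlation
        continue
    rho, _ = spearmanr(group['gold'], group['pred'])
    per_idiom[idiom] = rho

# The lowest-scoring idioms are the candidates for manual inspection.
for idiom, rho in sorted(per_idiom.items(), key=lambda kv: kv[1])[:10]:
    print(f'{idiom}: {rho:.3f}')
\end{verbatim} Listing the lowest-scoring idioms in this way is what directs the manual inspection of contexts described next.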
We can explore this further by producing a manually collected gold standard example set for the English-language subset of the MWEs. Taking the original 250 examples for each idiom, we select 10 gold standard examples. To avoid overfitting our embeddings to this task, we only manually remove examples where the MWE is being used as a proper noun (e.g.~the film 'A Fish Story') or where the idiom is being misused, leaving in correct literal and idiomatic uses of the phrase. After removing the proper noun and misused cases, 10 random examples are selected to form our 'gold standard' example set. We then compare the Spearman scores achieved when the embeddings are trained on the gold standard examples to those achieved when the representations are produced from 10 random examples, with both models evaluated on the English split of the development set. The results for selected MWEs with the randomly selected (auto) and manually chosen (manual) contexts can be seen in Table \ref{tab:manual}. The manually selected examples lead to an increase in performance on the Idiom STS data split from 0.406 to 0.450. A small increase from 0.841 to 0.848 overall on the English split can also be observed; however, this is limited by the general STS score, which is unaffected by our manual selection. Particularly large improvements in Spearman rank coefficient can be seen on MWEs with multiple meanings (panda car, banana republic, fish story, etc.). Surprisingly, we actually see the performance on some MWEs fall; however, this can likely be attributed to the random selection of examples, and variance in the contexts used for each idiom, especially on the MWEs which did not have many usages removed as they are only used in the idiomatic form (eager beaver, chain reaction, etc.). \begin{table} \centering \begin{tabular}{cccc} \hline \textbf{MWE} & \textbf{Auto} & \textbf{Manual} & \textbf{Change}\\ \hline panda car & 0.399 & 0.851 & 0.452 \\ banana republic & 0.391 & 0.753 & 0.362 \\ ... & ... & ... & ... \\ fish story & 0.190 & 0.304 & 0.114 \\ ... & ... & ... & ... \\ chain reaction & 0.356 & 0.240 & -0.116 \\ eager beaver & 0.491 & 0.352 & -0.159 \\ \hline \end{tabular} \caption{Improvement in correlation, measured using Spearman's Rank Coefficient, when trained on manually chosen examples vs. automatically collected ones.} \label{tab:manual} \end{table} \section{Conclusion} We build our system by augmenting BERT models for each language with single token embeddings learnt using BERTRAM. BERTRAM is used due to its high performance on rare words, which share many properties with idioms, such as non-compositionality and rarity relative to the overall usage of their component pieces. Our results, and subsequent ranking at first place (of six entries) in the pre-train setting and second place (of five entries) in the fine-tune setting, show that BERTRAM can learn high-quality word embeddings for idioms and that this leads to better performance on downstream tasks. Our error analysis shows that BERTRAM is sensitive to the quality of examples it is shown, and that performance can be improved even further by manually selecting a gold set of contexts for each idiom. Future work could look at the differences in performance between the Portuguese and Galician models with the goal of increasing performance on Galician, and perform more analysis to further explore the discrepancy in performance between individual idioms.
\section*{Acknowledgements} This work was supported by the Healthy Lifespan Institute (HELSI) at The University of Sheffield and funded by the Engineering and Physical Sciences Research Council [grant number EP/T517835/1].
\section{Introduction} \begin{quote} ``You've got 500 codes first and then you've got notes everywhere all over them.'' (P05) \end{quote} Hours of interviews and observations, pages of transcripts and field notes, and the large number of codes, labels, and post-it notes that follow are a familiar sight to any qualitative researcher. Qualitative research, known for its ability to deeply understand rich in-situ practices and experiences, is widely used in CSCW and HCI for developing human-centered understandings of computing systems. However, qualitative research is also, by its nature, manual and laborious. While the scale of data generated by methods like interviews and ethnography can be viewed as ``small'' in our current world of ``big data,'' it nevertheless can be overwhelming to humans, and requires tremendous human effort to analyze. Other qualitative methods like content analysis, on the other hand, often directly tackle the realm of ``big data'' by using samples like millions of scraped online posts (e.g., \cite{brubaker_we_2011}), a scale even more clearly impossible for humans to handle. Therefore, qualitative researchers use methods such as subsampling \cite{robinson_sampling_2014} to reduce the sample size to something humans are capable of handling, forgoing the opportunity to take advantage of the richness offered by the full dataset. The challenge of qualitatively analyzing big data has led some to consider the potential for artificial intelligence (AI) to assist qualitative researchers. While one may argue that qualitative research and AI are incompatible due to their different epistemological traditions, recent work has discussed the conceptual similarities between machine learning (ML), a technique often used in AI, and grounded theory methods (GTM), and proposed hybrid models that take advantage of both types of methodology \cite{muller_machine_2016}. Putting this comparison to the test, one study has even shown empirically that GTM and ML can produce similar results on the same data \cite{baumer_comparing_2017}. There also have been various efforts to integrate ML into qualitative research (e.g., \cite{yan_optimizing_2014, marathe_semi-automated_2018}), aiming to automate deductive steps of the qualitative research process---applying established codebooks to the dataset---and reduce human effort. However, the inductive step of generating the codebook through discovering patterns and themes cannot be overlooked. Is it possible that AI can also support the inductive step of qualitative research that finds meaning in unstructured data instead of applying already-found meaning to a larger set of data? While human-AI collaboration seems promising for qualitative research, is it something that qualitative researchers themselves desire? And what abilities and qualities do people seek (or not) in AI that would make it a good collaborator? Answers to this question will not only shed light on how we can better design AI assistance for qualitative researchers, but also provide deeper insights into human-AI collaboration in fields typically seen as human-only. In this paper, we do not seek to examine the efficacy or promise of particular models or algorithms, but to uncover the characteristics and needs unique to qualitative research to which AI designers and developers should pay attention.
While qualitative research includes many subprocesses such as data collection and result reporting in addition to data analysis, in this paper, we focus on the analysis process in particular because it shows the most promise for potential AI assistance, as both prior research and our participants suggest. In this work, we examine the practices of qualitative researchers, and the capabilities that they do (or do not) want assistive systems to have, through an analysis of 17 semi-structured interviews with qualitative researchers. While the participants did not come to the interviews with AI or machine learning in mind, they imagined systems that were able to automatically offer intelligent inferences and suggestions when asked what their imaginary ``perfect tool'' would be. In this paper, we collectively refer to these systems as ``AI'', following the rationale in prior human-AI collaboration literature in CSCW \cite{cai_hello_2019}. We first describe the research methods that they use in their work, and the kinds of assistance they would like. Next, we describe the collaboration practices of participants whose methods involve collaboration, highlighting that reaching consensus is a nuanced social practice that extends beyond the actual research. We then discuss participants' love-hate relationship with qualitative analysis: While the vagueness of qualitative analysis induces confusion and self-doubt, this doubt is essential to their research practice, and AI should not remove that doubt from their analysis. Using a framework of task delegability, we close by discussing how qualitative research is a unique case that deviates from the typical tasks humans delegate to AI. While we do not (nor can we) prescribe exactly what algorithms or models should be used, we argue that honoring uncertainty and serendipity should be a central quality of AI that assists with qualitative research, where data labels are fluid and ``negatives'' are not well-defined or sometimes nonexistent. \section{Related Work} \subsection{Qualitative Research and Methods} Qualitative research accounts for a substantial proportion of CSCW and social computing scholarship---more than half of the CSCW 2019 papers use qualitative methods \cite{gilbert_cscw_2019}, with new opportunities for qualitative research being a constant topic in CSCW as well \cite{fiesler_qualitative_2019}. Qualitative researchers use a number of different analytic methods to generate insights, leveraging both inductive logic (to build patterns and themes from the bottom up by organizing data into increasingly abstract units of information) and deductive logic (to check existing patterns and theories against data in a top-down fashion). For example, thematic analysis \cite{braun_using_2006} and constant comparative method \cite{glaser_constant_1965} both describe processes of identifying, analyzing, and reporting emergent patterns within a set of data. Informed by the perspective of hermeneutics, interpretive textual analysis, a commonly used method in qualitative research, aims to understand cultures and subcultures through situated interpretation of texts \cite{allen_textual_2017}. In addition to these methods that focus on analysis, grounded theory \cite{glaser_discovery_1967, strauss_grounded_1997} alternates collecting and analyzing qualitative data in order to generate theories that are ``grounded'' in that data \cite{charmaz_constructing_2006}.
It provides a set of systematic guidelines for analysis while maintaining flexibility to fit the given set of data. Data sources in qualitative research are also diverse and overlapping, and any of the traditions above could draw from interview data, field notes, and textual or visual content \cite{creswell_qualitative_2013}. Many qualitative approaches make use of open qualitative coding \cite{strauss_grounded_1997} to inductively label and categorize emergent concepts while maintaining theoretical freedom, which is particularly useful for sifting through large amounts of unstructured data. Such inductively generated coding schemas can then be deductively applied to the full set of data. Strengths of qualitative work include the ability to examine a space in great detail and depth, the adaptability of the research framework, collection of data with subtleties and complexities not available in quantitative data, and the situating of data in the context of experience and community \cite{ryan_introduction_2018}. Compared to quantitative research, which often relies on positivism and its premise that there are observable, measurable, ``objective'' facts that hold true for everyone, qualitative research often favors interpretivism, which argues that knowledge and truth are subjective and situated, and dependent on people's experiences and understandings \cite{ryan_introduction_2018}. Instead of goals such as causal determination, prediction, and generalization sought by quantitative researchers, the goals of qualitative inquiries are ``illumination, understanding, and extrapolation to similar situations'' \cite{golafshani_understanding_2003}. However, there are two major challenges of qualitative work. First, qualitative analysis is largely manual and laborious. Qualitative researchers have long used ``cut and paste'' methods for separating chunks of information, either physically (e.g., index cards) or in databases or spreadsheets \cite{campbell_technology_1997}. There are now a number of qualitative analysis software tools (e.g., MaxQDA, ATLAS.ti, Dedoose) in which researchers can highlight and assign labels to portions of text, and these tools can produce some lightweight statistics (e.g., counts) of these labels, with the potential for more advanced features like topic modeling. While computational tools can relieve qualitative researchers of the burden of data management, their effects on data analysis are still highly debatable. On the one hand, qualitative researchers have argued that increasing the sample size in qualitative research designs is not an advantage, and researchers should focus on as many distinct cases as possible rather than as many cases as possible \cite{wiedemann_opening_2013}. Using software tools that generate coding schemes based on a large number of documents risks losing the opportunities for creativity and serendipity that qualitative research is uniquely able to provide \cite{wiedemann_opening_2013}. On the other hand, current qualitative methods such as content analysis \cite{krippendorff_content_2004} can require researchers to handle such an overwhelming amount of data that they may need to rely on subsampling or strategically focus on a subset of data \cite{robinson_sampling_2014}. However, it can be difficult, if not impossible, to extrapolate the findings from the analysis on a subset of the data to the original dataset.
Second, the rigor of qualitative analysis is more difficult to assess and demonstrate than that of research rooted in positivist traditions that rely on measurement. There has long been a wealth of literature from multiple disciplines focusing on the assessment of the reliability and validity of qualitative analysis, providing guidance for publication \cite{elliott_evolving_1999}, procedures for developing coding schema \cite{campbell_coding_2013}, and design research approaches to ensure rigor in qualitative analysis \cite{maher_ensuring_2018}. CSCW and HCI scholarship have also taken a keen interest in meta-studies of qualitative methods, such as comparing interview methodology through different media \cite{dimond_qualitative_2012} and characterizing localized standards of sample size at CHI \cite{caine_local_2016}. McDonald et al. \cite{mcdonald_reliability_2019} have shown that some CSCW and CHI papers chose to report interrater reliability (IRR) even if IRR might not be compatible with their research design, which highlights a need for demonstrating reliability in qualitative research. These two challenges suggest that qualitative research can benefit from new approaches that aim to help with handling larger amounts of data, and our study explores AI as an option to support qualitative scholars with their research. \subsection{Machine Learning and Qualitative Research} Machine learning (ML) techniques are known for their ability to extract patterns from small samples and generalize them to larger datasets, and thus show promising opportunities to apply traditionally ``small'' data-focused qualitative methods to ``big'' data. Muller et al. \cite{muller_machine_2016} discussed conceptual similarities between machine learning and qualitative grounded theory methods (GTM)---they both make major claims to be grounded in the data, and they both start with and return to the data. Through a discussion of how Straussian GTM and Glaserian GTM (the two major approaches to GTM) are similar to unsupervised and supervised learning respectively, Muller et al. proposed a number of hybrid iterative approaches that integrate machine learning and GTM. Baumer et al. \cite{baumer_comparing_2017}, through an empirical study, showed that ML and GTM could produce strikingly similar output from analyzing the same survey data, and later further showcased the promise of ML through their Topicalizer system \cite{baumer_topicalizer_2020}. They argue for the promise of combining ML and GTM, but also caution that the results generated from machine learning models should be a scaffold for human interpretation instead of definitive answers, a point we echo in our analysis. Another thread of research has explored the efficacy of using machine learning as a tool to support qualitative research, with an intention to automate parts of the research process. Yan et al. \cite{yan_optimizing_2014} experimented with a human-in-the-loop approach by having a machine learning model code the entire dataset from a pre-defined codebook and having human annotators later correct the labeling, and concluded that creating a one-size-fits-all ML model for all codes in a multi-dimensional coding scheme was impossible. Paredes et al.'s \cite{paredes_inquire_2017} Inquire tool aimed to help qualitative researchers by uncovering semantically-similar content in large-scale social media texts.
Marathe and Toyama \cite{marathe_semi-automated_2018} explored whether qualitative coding could be partially automated through a formative user study and user testing of a prototype. They found that repetitive coding with a well-developed codebook lends itself nicely to automation, but that having good IRR is crucial for automatability. Chen et al. \cite{chen_using_2018} pointed out that the goal of performance optimization in machine learning may be at odds with the goal of discovering patterns in qualitative research where the categories evolve over time, and suggested that a model that identifies points of ambiguity (i.e., disagreements in coding) may be more useful to qualitative researchers. It is important to note that the studies above focused on deductive aspects of qualitative research, typically with well-defined codebooks, and not inductive practices. Our exploration extends this prior research by providing insights into the inductive step of the initial discovery of patterns and themes. \subsection{Human-AI Collaboration} The research we have surveyed thus far shows the infeasibility of automating qualitative analysis entirely, but also points towards ways that AI could be a collaborator that works alongside humans, rather than a delegate that performs specific tasks. Human-AI collaboration itself is increasingly gaining importance in CSCW scholarship. Emerging work focuses on a variety of topics from information extraction \cite{mackeprang_discovering_2019} to image understanding \cite{zhang_dissonance_2019}, consistently highlighting the need for transparency and explainability of AI. Mackeprang et al. specifically pointed out that at high levels of automation, users were unlikely to challenge results provided by AI when unaccompanied by explanations \cite{mackeprang_discovering_2019}. This situation, to say the least, would be undesirable in the context of qualitative research. Based on interviews with data scientists about their perception of collaborating with AI, Wang et al. reiterated the importance of AI transparency, and further noted that AI should be designed to augment, rather than to automate, a point we echo in our findings \cite{wang_human-ai_2019}. Furthering this concept of augmentation, Yang et al. proposed the idea of ``unremarkable AI,'' whose interaction with humans should have the right level of unremarkableness but should significantly improve their work \cite{yang_unremarkable_2019}. Although Yang et al.'s study is situated in the context of high-stake clinical decision making processes, we see the notion of unremarkable AI as applicable to other contexts where AI should not be obtrusive. For human-AI collaborations to be successful, the issue of what work the AI should or should not perform needs to be addressed. Lubars and Tan proposed a framework for task delegability to AI \cite{lubars_ask_2019}. They enumerate four major factors to consider: the human's motivation in undertaking the task, difficulty of the task, risk associated with incorrectly completing the task, and trust in the AI's ability to accomplish the human's goal. These factors, especially risk and trust, resonate with existing concerns around AI fairness, accountability, and transparency that have been increasingly present in human-AI collaboration scholarship. In this study, we use this task delegability framework to look at the case of qualitative analysis, and show how qualitative analysis is a uniquely challenging case for human-AI collaboration. 
\section{Methods} We conducted interviews to investigate the qualitative research methods used across various fields, with specific focus given to the coding process and tools that assist in data analysis. After acquiring IRB approval from our institution, we focused recruitment efforts on participants who primarily use qualitative research methods in their work. We recruited participants via emails to mailing lists of departments that regularly conduct qualitative research (e.g., communication, anthropology) at our institution, as well as public posts on social media that invited CSCW 2019 attendees to participate. We also encouraged participants to share the call for participation with others, resulting in a snowball sample. The first two authors conducted 17 semi-structured interviews with qualitative researchers at our institution and among the broader CSCW qualitative research community. Participants were majority women, with 2 men, 1 non-binary, and 1 agender individual. Participants ranged in age from 25 to 45, from the fields of Information Science, Anthropology, HCI, Linguistics, and Journalism. A majority of the participants were from the US, in addition to one from the UK, one from Japan, and one from Switzerland. We do not map this specific demographic information to individual participants in order to ensure that they are not individually identifiable. Fourteen interviews were conducted in person, and three remotely. Interviews lasted from 20 to 70 minutes, depending on the depth of the responses given. Table \ref{tab:demographics} lists our participants along with their field of study and academic position. Participants were compensated with a \$30 Amazon gift card upon completion of the interview. \input{participants-table.tex} Our interview protocol focused on how qualitative researchers collect and analyze their data, taking a broad exploratory approach. Interview questions included the methods the participants use, any tools they use to assist in their work, their attitudes and opinions about qualitative coding as a process, and relationship dynamics between collaborators. During the interviews, we asked participants to describe a current or most recent qualitative research project they have conducted, including scope, scale, and timing, in as much detail as possible. We then asked participants to describe their collaboration practices if relevant. The next section of questions focused on the software and tools the participants use for their data collection and analysis. We also asked participants for their attitudes and opinions on the coding process, including what they enjoyed about it and where there might be pain points. Finally, we asked participants to talk about what kind of assistance they would like if they had a perfect tool, and areas of their research where they did not want assistance. As mentioned in the introduction, we did not specifically ask participants about ``AI assistance'' so as to prevent leading them to answers about the term ``AI'' without a shared, concrete idea of what ``AI'' means. Instead, we asked what kind of assistance they envisioned in a perfect tool, and their responses led us to consider AI as a potential solution, and ultimately human-AI collaboration as the focus of this paper. We performed a thematic analysis of the interview transcripts \cite{braun_using_2006}. Prior to analysis, all interviews were transcribed, anonymized, and assigned the participant IDs presented here.
The first author initially engaged in one round of open coding, using the software MaxQDA. Then the second author, after reading the transcripts, discussed preliminary emerging code groups such as ``help with the mess'' or ``feeling of doing it right'' with the first author. Two more rounds of iterative coding helped us combine similar code groups into higher order categories such as ``disciplinary differences.'' The first author used these categories to produce a set of descriptive theme memos \cite{saldana_coding_2009} that described each category with grounding in the interview data. All authors then discussed and updated the memos regularly to reveal the relationships between the categories and finally clarified the themes, which resulted in the five main findings we discuss below. \section{Findings} Our findings document qualitative researchers' current practices and the needs that AI should support, and discuss the unique, nuanced challenges where AI should not be applied. We begin by describing the methods participants used in their qualitative research, and the kinds of assistance that participants desired from software tools. Next, we discuss the complex collaboration dynamics in collaboration-heavy disciplines by highlighting that reaching consensus is a nuanced social practice that extends beyond the discussion of the research project itself. We then discuss participants' struggle with confidently conducting qualitative research, highlighting the central role of uncertainty in qualitative analysis. Finally, we discuss participants' connection with and need for agency over the analysis of their data despite their self-doubt. From here, we conclude by revealing the bottom line that AI assistance should not cross---it should not replace human researchers in doing the analysis. \subsection{Qualitative Research Practices} \input{methods-table-1.tex} \input{methods-table-2} Our participants used a variety of methods in their qualitative research. Among them, interviews were the most common data collection method, used by 15 participants across 6 fields. Others that were common included participant observation (7 participants), and note taking (4 participants). Participants also described data collection methods specific to their field or discipline, such as Jeffersonian transcribing in linguistics, oral history in journalism, and ethnography in anthropology. Compared to the varied data collection methods, participants' descriptions of their analysis methods, while having varied names and granular details, broadly converged to a process of inductively identifying interesting pieces of data and then finding patterns among them. While we acknowledge that qualitative methods entail a much broader range than what our participants reported, our analysis here only focuses on the methods our participants used. Table \ref{tab:methods} shows the complete list of research methods that participants used. Surveying the practices of qualitative researchers runs into some challenges due to terminology. While many of the participants shared similar research practices, they referred to these practices with different terms. These differences not only exist across disciplines, but within disciplines as well.
For example, P06 referred to the process of inductively building a codebook then deductively labeling according to the codebook as ``thematic analysis,'' while P01 and P12 called the same process ``content analysis.'' Many participants referred to the common grounded theory research method of first identifying data snippets of interest and assigning them descriptive labels, then generating deeper insights from these labels (sometimes with the explicit process of categorizing these labels). However, participants had different terms for both steps. Terms for the first step---identifying interesting units of data---included ``qualitative coding,'' ``open coding,'' ``inductive coding,'' and ``highlighting unusual things.'' Participants' terms for insight generation were even more varied, from ``memoing'' and ``affinity mapping'' to descriptive phrases without a proper term---``seeing themes,'' ``grouping into categories,'' and ``classification process to organize data into themes.'' Furthermore, while P04, a Ph.D. student in linguistic anthropology, also used the term ``coding,'' she was referring to transcribing the interviews into the International Phonetic Alphabet, instead of identifying interesting data snippets---something we only realized halfway into our interview with her. Overall, we saw participants use inconsistent vocabularies despite referring to similar methods. While we have not heard stories of participants running into communication problems, it is possible for researchers to accidentally miscommunicate research methods and for reviewers to have confusing expectations. Assistive tools can also create barriers by choosing to use language that favors one particular genre of academic training. \subsection{Assistance From Computational Tools} Qualitative research is a manual, labor-intensive process that would benefit greatly from computational tools, but for many participants, existing tools were not sufficient for their needs. In this section, we first describe the gaps in current tools, and then discuss the assistance that participants desired. \subsubsection{The Inadequacy of Existing Tools.} Participants used a variety of software to assist with qualitative research, and only three participants did not use any such tools (see Table \ref{tab:methods}). However, not only did many tools fail participants in helping them handle large amounts of qualitative data, participants were also unwilling to learn more sophisticated tools. According to P05, the amount of notes and labels quickly became overwhelming, and the software tool that she used was insufficient: \begin{quote} I do everything in Google Docs. You've got 500 codes first and then you've got notes everywhere all over them, which I'm currently trying to decipher and I'm kicking myself for not having a better method. (P05) \end{quote} Many participants used word processing software like Google Docs to code qualitative data, but such software was not able to handle the overwhelming amount of codes and notes, as was the case for P05. While some participants were aware of other software tools that have more sophisticated functionalities, they did not use or stopped using them for various reasons. For example, P10 stopped using ATLAS.ti because the software broke frequently on her computer, so she returned to her old method of using Excel and handwritten notes.
P09 only used software to organize his transcripts, and told us that learning how to use more complicated software was not worth the effort: \begin{quote} Using any tools, I think it gets in the way of the analysis...I think the focus then inevitably becomes on the tool and how I can manipulate and push data in order to make it appropriate for the tool... I need to figure out how I can fluster data in a certain tool, in a certain way so as for it to make it easier for me later on. And I was just thinking it's a waste of time. (P09) \end{quote} P09's quote is exemplary of participants who chose not to use software tools. Their anticipated cost was not only in learning how to work the software, but also in working out how to use the software in a way of their own that would be sustainable in the future, and these two layers of barriers precluded these participants from using it. P14, who kept using the qualitative analysis software NVivo, limited herself to the most basic functionalities of coding data and making code categories, then hand-copied these categories onto post-it notes that she later put on the wall. Therefore, participants wanted ``more intuitive ways to do qualitative analysis,'' and something that could help them ``sort through this mess'': \begin{quote} If I had a [perfect tool] I would like a little bit more input in putting together connections between the data. (P12) \end{quote} \subsubsection{Desired Assistance.} Participants had different ideas of how a tool could help them make sense of the connections within the data, and cross-sectional analysis and visualization features were common requests: \begin{quote} I really like when I can slice and dice the data. So I want to be able to say, okay, all the people I've talked to who identify as queer, how did they feel about capitalism? I want to be able to do a cross-sectional analysis on multiple codes and domains. (P12) \end{quote} \begin{quote} Just better visualizing the relationships in the data itself. So if I'm missing out codes because I have less time, or if my co-author isn't being participative enough or whatever, then I want to be assured of the fact that I was able to see all kinds of relationships. (P07) \end{quote} It is important to note that cross-sectional analysis and visualization features, though implemented to various extents, already widely exist in qualitative analysis software such as MaxQDA and NVivo. Prior research has noted that qualitative researchers usually confine themselves to the most basic features of qualitative analysis software \cite{wiedemann_opening_2013}, but here we see that the participants still wanted these seemingly basic features, which suggests that the participants never discovered these features in the first place due to the overall difficulty of using the software. Furthermore, P07 specifically pointed out that the tool should not be ``like the NVivo type, where I have to really learn a lot of it.'' In other words, P07 admitted that an existing tool may be able to achieve what she wanted, but the bar of learning the software was too high. While participants across disciplines wanted additional help to make sense of the data, some participants would appreciate features that are specific to their fields. For example, P05, an anthropology Ph.D. student, requested features designed for their own research tradition: \begin{quote} Maybe, I don't want to say a translation tool, but kind of almost a sort of node system to be like ...
``These were the literal words that were used, but this is kind of the underlying meaning to that.'' (P05) \end{quote} While anthropologists might benefit from features to help them note underlying meanings to spoken words, P15, a Linguistics student, told us that being able to get down to the most granular details of speech was important: \begin{quote} I'm thinking especially in my field where when I'm transcribing interviews, I'm actually not just looking at what is literally being said. There's a process of transcribing for tone of voice, gestures, and things. (P15) \end{quote} Unlike many of our other participants, for whom the purpose of transcribing is to transform audio data into textual data that is readily analyzable, the detailed transcription---called Jeffersonian transcription---\emph{is} the core of P15's analysis. Her analysis did not include categorizing snippets of labeled data. Her unique genre of analysis means that a tool designed only for the most common use case will be of no use to her at all. In addition to features that handle the mechanics and the presentation of qualitative data, participants also wanted features that could go one step further by providing suggestions and inferences, based on the way that the researcher analyzes the data: \begin{quote} If there was some sort of learning algorithm, for example, that would suggest ... other quotes that were similar to that one. (P08) \end{quote} \begin{quote} Or maybe even just the ability to be based on whatever magical software anyone can develop being, ``based on the fact that you tagged it this way, we think that these are the more important interviews or not important, but we think that maybe you want to pay more attention to these.'' Or be like, ``You completely neglected this one. Did you forget about it or is there actually nothing here for you?'' (P05) \end{quote} In sum, participants wanted various types of help from computational tools but often refrained from using sophisticated tools due to their steep learning curve. While their reluctance may indicate usability issues with such tools, their reluctance also suggests opportunities where AI assistance could provide them with the desired data representation and visualization, and only offer more sophisticated explanations and customizations if desired. The promise of AI assistance is further highlighted by participants' imagined intelligent suggestions---or, in P05's words, ``magical software.'' \subsection{Collaboration Dynamics} Qualitative research can often be collaborative, involving multiple researchers examining and then reaching consensus on the same data to increase reliability and reduce individual biases. Participants from Information Science and HCI told us that collaborative work with co-authors was common, so it is no surprise that, when asked about features they would like in their imaginary perfect tool, only Information Science and HCI participants mentioned collaboration features: \begin{quote} While I was maintaining this online notebook, I could hypothetically, if my co-author had enough time, the dream scenario would be that he or she would be reading those notes every day and commenting on them ... So the collaborator element's definitely important and could help a lot with the eventual coding and analysis process, even while you collect data. (P07) \end{quote} With collaboration being the norm, a natural follow-up question is how to reach consensus when there are multiple researchers on a single project.
``Consensus,'' however, takes on many different flavors and nuances, according to the participants. Inter-rater reliability (IRR) is often used to calculate consensus during a deductive phase of coding, but is often inappropriate for inductive analysis. Consistent with McDonald et al.'s findings \cite{mcdonald_reliability_2019}, reaching consensus through IRR was rare for our participants---only two used IRR to verify consensus, and two others mentioned IRR only to note that they did \emph{not} use it in their research, likely because our participants largely conducted interviews, the analysis of which is often incompatible with IRR. Our participants most commonly perceived consensus as some flavor of ``everyone agrees,'' though agreement can sometimes be a flexible and loose standard: \begin{quote} Once everyone is pretty much satisfied and doesn't have any complaints or questions or comments anymore that are significant, and by significant, I mean like ``we have to change this or it's going to ruin the study.'' (P02) \end{quote} Compared to P02, P14's standard for consensus was even more flexible---there is no definition for consensus; it is something that the researcher understands intuitively as they are doing the research: \begin{quote} [Consensus is] just like a kind of tacit knowledge thing where it's you know when you see it, but it's very hard to actually kind of define. (P14) \end{quote} P14's view of consensus as fluid highlights how it is the outcome of nuanced social practices that are not easily explained or defined. However, its flexible nature also means that achieving consensus might depend on the researchers' established, implicit norms generated from having worked together for an extended period of time, a facet often overlooked by prior conceptualizations of researcher consensus (e.g., \cite{macqueen_codebook_1998}). These social practices, as we learned from participants, often reduce ``consensus'' to the decision of a person with the ultimate decision-making power, rather than multiple researchers having reached some level of agreement. The decision maker, however, varies. Sometimes the decision maker is the project leader: \begin{quote} If I'm leading the paper, to me it's my decision [if we have reached consensus]. (P08) \end{quote} \begin{quote} If the stakes are high, if I am the primary investor in a particular project then yes, I think I should take the decision on this. But if it's maybe my advisor's project I think I would give her the final say in that. (P10) \end{quote} Sometimes the person with the final say is the most senior person, who may or may not be the project leader: \begin{quote} If [my collaborators] don't [agree], we have a little bit of a debate and I'm assuming the most senior person probably wins. (P03) \end{quote} While P03 did not personally have experience with the most senior person ``winning'' the debate, as a junior Ph.D. student this was their default assumption, which suggests a tendency for researchers (especially junior ones) to acquiesce to authoritative figures. P07 elaborated more on this tendency, pointing out that ``consensus'' is the result of a negotiation of collaboration dynamics and power: \begin{quote} I think it depends a lot on who the collaborators are and what the power relationships with those collaborators are ... These things are not objective ... It's not just about what codes appear or what themes appear the most salient and relevant.
So I think it's more about the collaboration dynamics, the power, all of those things, rather than what the codes are seeing ... It's literally about [power]. (P07) \end{quote} While it might seem ideal for every researcher in a team to have their opinion treated equally, P07 directly refuted this notion, claiming that it would only jeopardize the project: \begin{quote} [Collaborative qualitative analysis] would still definitely take on some hierarchy in what I imagine. There would still be someone who's written more papers. There will still be someone who's written fewer papers. Or still have an expert on the topic, versus someone who's not the expert on the topic. ... I can't imagine there being a flat hierarchy somewhere. And actually, that one time that it happened ... things went really down south. The paper was about everything. It got rejected. Then there was a fight. Then there were all these conversations about who's contributed the most, who's going to be the first author and so on. (P07) \end{quote} MacQueen et al. \cite{macqueen_codebook_1998} argued that in team-based qualitative analysis it is good practice to have one person maintain the team codebook. While MacQueen et al.'s recommendation largely came from a project management standpoint and maintained that the codebook development should still be done collaboratively, here we see a case where one person being in charge of the \emph{direction of analysis} can be more beneficial than a truly equal collaboration. Overall, we observe that in fields where collaboration is common, reaching consensus among researchers can often be a nuanced social practice beyond mere agreement on the research product. \subsection{The Existential Crisis of Qualitative Researchers} Despite the variety in discipline-specific research methods, desired help from software tools, and meanings of consensus, participants across disciplines consistently told us about one feeling: qualitative research is hard and messy. \begin{quote} Sometimes it can be a little bit frustrating when you're sort of in that process where you're like, ``Is this a theme? Is that a theme?'' Then sometimes when you're going through ... a couple of interviews in and you're like, ``Oh there's a theme,'' and then you're like, ``Oh fuck, that's probably a theme'' ... so then you have to go back to see if that [theme] is in [the other interviews]. (P13) \end{quote} Like P13, many participants described how qualitative analysis could be frustrating, and this frustration often came from researchers' self-doubt: \begin{quote} I'll just code, and maybe it'll be relevant and maybe it won't be, and I'm not really sure ... If someone just mentions it in passing, do I use that? ... If someone has a few sentences about it then do I code that? Like what is useful to actually code? (P03) \end{quote} \begin{quote} Am I drawing connections between things that aren't necessarily connected? ... which also goes back to not having time to [follow up with] people and be like, ``Hey, so you said this thing, I'm just wondering, can you expand that more?'' (P05) \end{quote} P05 doubted herself when drawing what she feared were non-existent connections in the data, and attributed it to having conducted imperfect interviews.
The guilt of imperfect research was common among participants, who felt that they could have asked better questions during interviews, could have used more data in the final research product, or could have done a deeper analysis if they had had more time: \begin{quote} I didn't really get to do maybe as much coding as I would've liked, or to look at as many complex issues as I would've liked. ... It was a restriction on time. (P03) \end{quote} In addition to imperfect research execution being a common source of guilt, others felt apologetic for not following an ideal methodological paradigm: \begin{quote} I think I would be lying if I said that I completely allow myself to be driven by the data ... especially this study, for example. I did it to inform the design of potential technology that I wanted to build, while I'm doing the questions, I was already looking for particular things. ... So I think that already in the questions you have implicit codes. (P11) \end{quote} \begin{quote} I feel like we're doing it wrong. ... Especially because [we are] saying these are the topics that we're interested in. That's not very grounded theory of me. But, I don't quite know how to do it otherwise, because obviously ... I'm going to look at this interview already from a different perspective than, say, a psychologist. (P15) \end{quote} Participants thought their own research execution was imperfect---for example, ``not very grounded theory.'' However, in participants' own unique research projects, the ``perfect'' methodology may not be the most desirable approach. While HCI (as well as CSCW) has distinguished itself as a ``discipline that has often proceeded with something of a mix-and-match approach'' \cite{dourish_reading_2014}, here we see evidence that such a hybrid paradigm---for example, being driven by the data but also having a topical focus---is also practiced across disciplines. Overall, self-doubt about all parts of the research process was pervasive among participants. This self-doubt was so strong that P07 told us the moment she had to approach the data she collected was what she ``dread[ed] the most.'' Participants' shared experience of self-doubt highlights the inductive and interpretive nature of qualitative research, and suggests that this vagueness may be inevitable and essential to the qualitative research process itself. While it might be tempting to design AI that can reduce qualitative scholars' anxiety by providing concrete suggestions from their data, we found ample reasons why researchers might resist such assistance, which we detail in the next section. \subsection{Don't Do It for Me} Even though participants consistently told us how qualitative research can be full of doubt and vagueness, they did not want AI assistance to eliminate that vagueness if it meant eliminating the work to make sense of their messy data. Participants told us that despite the uncertainties, the new discoveries they made through their research process were the most rewarding: \begin{quote} It's really satisfying. ... It's those kinds of exciting Eureka moments that make research kind of worth it. (P14) \end{quote} \begin{quote} My favorite style of coding is more where I'm able to bring out all of these different things. ... So that's what I really enjoy, the process of recollecting everything that happened, and then coming up with something.
(P07) \end{quote} Participants' comments on their unexpected new findings show that serendipity was a major reason why participants enjoyed qualitative research. Integrity was a major factor as well. P12, for example, described a sense of responsibility toward her project: \begin{quote} How do I feel when I'm coding? Overwhelmed. I feel such a great sense of stewardship in the project that we've been discussing. I want to get it right and that makes me feel a lot of pressure. (P12) \end{quote} The motivations and responsibilities that researchers feel around their data and analysis may present some hurdles for any tool meant to support their work. While some participants mentioned that they would benefit from AI-generated suggestions and inferences, these participants, with their sense of serendipity, responsibility, and intimacy toward their research, explicitly expressed that they needed agency over the analysis of their data, and AI should not do it for them. \begin{quote} I don't want anything to do the analysis for me. That doesn't make sense. This is my interpretation of things ... It's one of the most intimate things. ... I have these little like research crushes with some codes or participants. ... These [data] are my babies. (P11) \end{quote} P16, who is in Linguistics and does a completely different kind of analysis from P11's, shared P11's feelings: \begin{quote} As nice as it theoretically sounds to have something do automatic transcription for you, I actually would not want that even if it could do the Jeffersonian level of detail because it's so important for you as a researcher to understand what's happening in your data, doing it yourself. ... Your transcription is part of your analysis as well and that's the only way for you to get your hands dirty with the data is to get in and see what's happening with it. ... I don't know of anybody who uses [professional transcription services] because you take a lot of pride and ownership over your data and it's important to your participants and I would never want to farm that out when it's your research in your research process. (P16) \end{quote} To P11 and P16, outsourcing analysis not only impacts research quality, it also robs them of their emotional connection---the intimacy, pride, and ownership---with the data. Compared to P11 and P16, P06 was slightly more open to the idea of AI-assisted analysis. However, P06 also pointed out that the tool should be limited to the level of suggestions and that making decisions would cross a line: \begin{quote} I don't want it to take out extra information automatically. I don't want it to try to guess or try to ... well, it can try to guess, but I don't want it to decide. ... It can do a suggestion, but I'm not sure if I would trust the suggestion, or actually it can suggest the discussion points that we need, and then we can discuss rather than decide it for us. (P06) \end{quote} In contrast, P14 felt even suggestions could be harmful to her research process: \begin{quote} Maybe it could make suggestions, but even then I don't know if I want it because it doesn't know what my research questions are. ... I don't want it to pick the good quotes for me, that's for sure. ... I don't want the computer to do it. I also don't want the computer to influence how I'm thinking about it. ... I don't necessarily want to be primed or how that's going to affect my thinking ... if it's making this suggestion and now I can't unsee it.
(P14) \end{quote} While participants generally resisted complete automation of qualitative analysis, P06 and P14's comments show that even a low-level, suggestion-based automation (according to Parasuraman et al.'s model of autonomy \cite{parasuraman_model_2000}) can also be undesirable. During the interview, P14 asked, if there were a machine to interfere, then ``why bother?''---why bother having a human researcher at all? According to P14, even if the researcher does not necessarily have to use or trust the suggestions, seeing the suggestions alone can negatively impact the researcher's interpretation and analysis, or even worse, eliminate the meaning of human researchers. Overall, participants told us about their conflicted feelings about qualitative analysis, a process marked by anxiety but also intimacy and ownership. Driven by these feelings, they resisted having AI replace them in their analysis, despite seeing its potential to augment their work. These findings suggest that, unlike tasks typical for AI to accomplish, the objective of qualitative research is, often intentionally, ambiguous. Furthermore, qualitative researchers value this ambiguity because it offers them serendipity and opportunities for deep insights. In the discussion section, we detail how qualitative research is a unique case for human-AI collaboration, and provide design implications for AI assistance to honor ambiguity and serendipity. \section{Human-AI Collaboration in Qualitative Research} Our findings reveal complicated feelings participants had toward the practices involved in qualitative analysis. On the one hand, participants struggled with self-doubt in their research analysis and saw the appeal of AI assistance to help them see relationships behind their data. On the other hand, they also liked the analysis process as it is and felt connected with their data, and were therefore skeptical of AI driving the analysis. This tension between the desire for AI assistance and the need for human agency suggests opportunities for human-AI collaboration in qualitative research, but also draws a line that AI assistance should not cross. To unpack this tension, we turn to a framework proposed by Lubars and Tan to describe the delegability of a task to AI \cite{lubars_ask_2019}. While there were likely many small steps in the qualitative research process (e.g., open coding, axial coding, synthesizing), our participants' resistance to complete automation suggested that they viewed qualitative analysis as a singular task in the context of AI assistance, and we can reasonably speculate that AI designers (who are arguably less familiar with qualitative analysis) might do the same. In their framework, Lubars and Tan enumerate four factors that must be considered when determining the appropriateness of delegating a human task to AI: \textbf{motivation}, \textbf{difficulty}, \textbf{risk}, and \textbf{trust}. Lubars and Tan conceptualized motivation as having three components: the human's intrinsic motivation of carrying out the task, whether the human's goal itself is mastering the task, and the utility or importance of the task to the human. Our participants' enjoyment and pride in conducting qualitative research, as well as their wish to improve their research skill, show that qualitative research involves high motivation: participants enjoyed and were committed to conducting their analyses.
While one may assume that high motivation indicates low delegability, Lubars and Tan did not find a statistically significant correlation between the two variables. The lack of correlation coincides with the sentiments of our participants, who were invested in their qualitative work but also acknowledged the appeal of appropriate AI assistance. Difficulty also consists of three components: a person's perceived ability to complete the task, as well as the required human effort and expertise. Lubars and Tan found a general negative correlation between difficulty and task delegability: the more able a person perceives themselves to complete the task, and the more human effort and expertise required by the task, the less delegable the task is to AI. Qualitative research can be considered high difficulty, as it requires effort and expertise on behalf of the researcher, which suggests its low delegability to AI according to Lubars and Tan's findings. The difficulty of analysis is evidenced by the low confidence participants had in their analysis. While Lubars and Tan also use required social skills and creativity as dimensions to contextualize the previous three dimensions, they are not central to the conceptualization of difficulty, though one can reasonably argue that qualitative research requires high levels of both social skills and creativity. Additionally, in the survey that Lubars and Tan used to evaluate their framework, they observed that self-confidence and intrinsic motivations were positively correlated---people enjoy doing tasks that they are good at. However, the case of qualitative research serves as a unique counterexample: not only do people enjoy doing something they are not confident they are doing correctly, but low confidence itself is central to the outcome of qualitative research. Therefore, our study on qualitative research challenges the current task delegability framework, and also raises questions around the integration of AI into qualitative research: Should we consider qualitative research as a single, unified ``task'' for AI to assist with? If qualitative research as a single task should not be delegated, what parts of it are appropriate for AI to assist with? Qualitative research is harder to characterize in terms of the third factor: risk. Lubars and Tan's conceptualization of risk is threefold: personal accountability in case errors happen; the uncertainty, or the probability of errors; and the scope of impact, or cost or magnitude of those errors. These three components all focus on errors, but ``error'' is inherently difficult to define, if definable at all, in qualitative research, where subjective interpretations are central: What makes a researcher's interpretations \emph{wrong}? Is there such a thing as interpretation \emph{error}? Furthermore, the concept of risk itself may be more nuanced than currently characterized, given that Lubars and Tan did not find a significant correlation between risk and delegability, and the complexity of defining ``error'' in qualitative research may also be present in other kinds of tasks. Designers should carefully consider this flexibility of qualitative research when designing AI assistance, especially supervised models that are entirely dependent on what is ``positive'' or ``negative.'' The final factor, trust, is also nuanced in the context of qualitative research.
Trust, which Lubars and Tan also describe as containing three components, includes the perceived ability of AI to complete the task, the interpretability of AI's actions, and the perceived value alignment between AI and human. As our findings have shown, participants had low trust in AI's ability to reliably conduct qualitative analysis. While ``reliable'' is often defined as being consistently ``correct,'' our discussion above shows that correctness may not be the focus in qualitative research since error itself is hardly definable. Instead, our findings suggest that the reliability lies in the consistent alignment of values---that the value of qualitative research derives from creativity and serendipity \cite{wiedemann_opening_2013}---between AI and human researchers, which our participants did not trust AI to have. P14 pointed out that she did not want any assistance because she could not ``unsee'' the AI suggestions, which indicates an assumption of unwanted, negative influence from AI that values concrete, highest-probability answers, as opposed to generative and nuanced ones sought by human-based qualitative analysis. However, influence is inevitable in collaborative qualitative research---having human collaborators also influences researchers' perspectives, and it is not reasonable to have to self-isolate to achieve the desired research integrity. Therefore, the assumed ``badness'' of AI influence might be because the suggestions come \emph{from} AI, rather than that the suggestions themselves are of low quality. The distrust of AI shared across our participants may have to do with the differences between typical behavior of humans and AI. Humans usually make suggestions like ``This part of the data is interesting and maybe you should take a look at it'' (and more seasoned collaborators can also explain why it is interesting). However, AI suggestions, generated from classification results, often take the deterministic form of ``data X should be assigned code Y'' without explanations---even a human collaborator would be difficult to trust if they could only make such suggestions. While our participants did not explicitly mention AI's interpretability, the well-known opaqueness of AI's inner workings---the ``black box''---could only exacerbate the distrust. P14's distrust echoes Gach et al.'s finding that interpersonal trust---the willingness to accept risk based on the expectation of another person's behavior---often gets mistranslated when it is implemented as impersonal trust that a technical system will behave as expected \cite{gach_experiences_2020}. This mistranslation of trust resonates with Ackerman's notion of the sociotechnical gap \cite{ackerman_intellectual_2000}---how can we better technically support the kind of interpersonal trust that we know we must support socially in qualitative research? Given the challenges related to risk and trust, a key consideration in the design of AI collaborative tools for qualitative research should be: How can we promote more interpersonal trust between human researchers and AI collaborators in ambiguous qualitative research? One way for AI to build trust is to provide humans with the ability to make choices and corrections \cite{knowles_models_2015}. If such an ability were sufficient, one might think that an ideal collaboration would involve first having the AI automatically label the entire dataset, allowing humans to correct the AI as necessary afterwards.
However, such an approach would not only meet the objections of participants (as exemplified by P14); Yan et al. \cite{yan_optimizing_2014} also tried such an approach and concluded that creating a one-size-fits-all ML model for all codes in a multi-dimensional coding scheme was impossible. The low precision of their model suggests that humans may have to do \emph{more} work in correcting the AI than if they had coded on their own. While correction is also widely present in human collaboration, it is easier to discuss and resolve the differences with a human than with AI. Furthermore, it is also undesirable to have too much trust---if the data to be coded is ambiguous, our results suggest that researchers might defer to AI as the \textit{de facto} tie breaker even though the situation deserves human deliberation to uncover the rich qualitative insights underlying that ambiguity. Given the two extremes, it may seem that offering answers that provide precisely the right balance of trust is the natural solution for AI, but ``just the right balance'' may never be practically achievable---the design of AI assistance for qualitative research may need to forgo the traditional answer-provider paradigm and leave the deliberation to humans. \subsection{From Answers to Ambiguities and Disagreements} How, then, can AI preserve human researchers' agency in ambiguous situations? A good starting point is to consider how this can play out in machine learning algorithms, a core component in many AI systems. Chen et al. \cite{chen_using_2018} pointed out that the goal of performance optimization in ML may be at odds with the goal of discovering patterns in qualitative research where the categories constantly change: while well-defined data categories are essential anchors for many machine learning algorithms, qualitative analysis does exactly the opposite by iteratively revising, challenging, and sometimes even overhauling existing categories to reveal deeper insights buried in the data; categories are not the end, but only a means to an end. Therefore, a machine learning model that identifies points of \emph{ambiguity} and \emph{disagreement} may have more utility to qualitative researchers. This point is further supported by McDonald et al., who discussed why seeking agreement might not be appropriate when codes are the process instead of the product. This work also highlights the scenario where there is an expert researcher (or a researcher deemed as the ``expert'') in a team, in which case the other researchers defer to the expert researcher largely to maintain the social relationship. The expert-takes-all phenomenon was common among our participants, and we can speculate that it also broadly exists given how common advisor-student relationships and mentor-mentee relationships are in research training. McDonald et al. \cite{mcdonald_reliability_2019} also discussed how agreement can be harmful in research rooted in critical traditions (e.g., feminist HCI \cite{bardzell_feminist_2010}), and the type of ``forced'' agreement in this work may aggravate this harm. Taking the above considerations together, we can imagine an example assistive AI system that has the following features. First, to promote ambiguity and preserve human researchers' agency, the system could have the researcher choose the level of suggestions it provides.
For example, the system could default to leaving the researcher on their own (i.e., do nothing) at first, and only later suggest interesting data snippets that the researcher has overlooked based on the researcher's existing coding/labeling patterns (e.g., through text classification using long short-term memory (LSTM) networks; a minimal illustration is sketched at the end of this subsection). The system could also suggest upcoming, unexamined data snippets for the researcher to consider, but only if the researcher specifically requests this. As demonstrated in our analysis, unprompted suggestions of new data can lead researchers into uncertain (and possibly unwanted) interpretive directions, a concern expressed by our participants. While it is possible to provide \emph{code} suggestions (likely based on topic modeling), none of our participants were enthusiastic about such an idea; their reactions ranged from hesitant to resistant. Our participants' reactions show an interesting contrast with the promise of machine learning-based coding shown in prior research \cite{baumer_comparing_2017, marathe_semi-automated_2018, chen_using_2018}. Considering these findings together, we urge AI designers to be careful in making suggestions about the exact codes or labels that a piece of data might take. An AI system capable of making code suggestions should be clear about what this feature is and is not (i.e., that instead of data to be coded, it makes suggestions \emph{of} codes). It should also be available only upon the researcher's request, and only after it learns enough of the researcher's coding patterns in a particular project. Second, given that collaboration was common for many of our participants, the system could have features that further promote ambiguity and serendipity through collaboration. For example, it could suggest data snippets (examined or unexamined) that might be worthy of further discussion among collaborators. The system could also suggest a piece of data that a researcher has already coded to other collaborators who are likely to disagree with that researcher. Both examples aim to encourage discussions among researchers that may spark unexpected insights, and echo the point we made against careless coding suggestions---we envision the system to help set researchers up to do the research, instead of doing it for them. Our vision above is only an example of what an assistive AI might look like, and the concrete specifications will depend on a number of factors such as the audience for whom it is designed (e.g., AI designed for HCI researchers vs. linguists is likely to be different), and the underlying infrastructure that it needs (e.g., collaborator-based suggestions will need collaboration functionality in the first place). The features that we envision also illustrate that an assistive AI will necessarily be an interactive one with mixed initiatives, some of which are already in line with principles of mixed-initiative designs \cite{horvitz_principles_1999}. While we acknowledge that such a system still does not address the expert-takes-all phenomenon, we are also hesitant to recommend that AI address the nuances of the social relationship between researchers. By revealing the expert-takes-all phenomenon in the collaboration dynamics between junior and senior researchers, we are not calling for a technical solution to ``solve'' their relationship; we only hope that qualitative researchers can be aware of this phenomenon, and strive toward more productive collaborations with it in mind.
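To make the first feature concrete, the following Python sketch illustrates one possible way of ranking uncoded snippets by their similarity to snippets the researcher has already coded. It is deliberately simplistic---it uses off-the-shelf TF-IDF cosine similarity rather than the LSTM-based classification mentioned above, and every function and variable name is hypothetical rather than taken from an existing tool:

\begin{verbatim}
# Illustrative sketch only: rank uncoded snippets by similarity to
# snippets the researcher has already coded. A real system might use
# a trained classifier (e.g., an LSTM) instead of TF-IDF similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def suggest_overlooked(coded_snippets, uncoded_snippets, top_k=3):
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform(coded_snippets + uncoded_snippets)
    coded = vectors[:len(coded_snippets)]
    uncoded = vectors[len(coded_snippets):]
    # Score each uncoded snippet by its best match among coded snippets.
    scores = cosine_similarity(uncoded, coded).max(axis=1)
    ranked = sorted(zip(scores, uncoded_snippets), reverse=True)
    return [snippet for _, snippet in ranked[:top_k]]
\end{verbatim}

Crucially, and in line with our participants' wishes, such a function should only run when the researcher explicitly requests suggestions, leaving the interpretive judgment of what the returned snippets \emph{mean} entirely to the human.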
Uncertainty and ambiguity have long been viewed as a hindrance that costs efficiency and causes mistakes. However, uncertainty is a feature rather than a bug in qualitative research. As our findings indicate, while our participants wanted AI to help them have a better grasp of the uncertainty, they did not want it to be taken away. This very uncertainty allows qualitative researchers to dig deeply into their data, sometimes even to build personal connections with their data, and generate rich interpretations and insights. Our findings not only point to ways that AI can help qualitative researchers, but also reveal ways that qualitative researchers can help each other. For example, researchers who are not accustomed to collaboration may want to experiment with collaborating as a way to have various perspectives that promote serendipity. Senior researchers may also want to consider teaching junior researchers to embrace uncertainty as part of the qualitative research training, so that they can trust themselves in doing qualitative research at a high level while also maintaining a healthy dose of self-doubt. \section{Limitations and Future Work} Our research has several limitations. First, the majority of our participants were junior scholars, with arguably less experience in qualitative research than more senior ones. While we believe that a student majority is still able to provide valuable insights since they all actively conduct research, it may nevertheless have an impact on their reported experiences and tool use. Second, we would like to acknowledge that the qualitative research practices documented in this study (with the coding component in particular) are largely based on individuals, despite the ample descriptions of collaboration dynamics that we heard. We suspect the predominance of individual research practices is a product of the common student-advisor collaboration paradigm, and future work may find value in investigating more and other collaborative coding practices. Similarly, the kind of qualitative data that our participants dealt with was predominantly text, but the whole spectrum of qualitative data is much broader, such as image, audio, and video, as some of our participants already indicated. We encourage future work to investigate these alternative forms of qualitative data and their analysis. Finally, as we mentioned in the introduction, our work does not provide guidance for specific models or algorithms to be used. It is a first step toward successful human-AI collaboration in qualitative research by envisioning a world where AI assistance in qualitative research is possible, and laying out the promises and perils of that world. We hope that our findings will inform future in-depth studies of how specific AI systems can be adopted in qualitative research. \section{Conclusion} Qualitative inductive methods are widely used in CSCW and HCI research for their ability to generatively discover deep and contextualized insights, but these inherently manual and laborious processes are infeasible for analyzing large corpora, sacrificing the richness provided by these data. This study explores the potential of AI for assisting humans in conducting qualitative research by investigating current qualitative research practices and revealing where AI can come into play. We describe the methodological practices participants have, the insufficiencies of current tools to support these practices, and desired capabilities of their imaginary perfect tool.
We also show that participants, when they collaborate, have complex and nuanced collaboration dynamics, and that reaching consensus can be a negotiation of social relationships beyond the research project itself. Furthermore, while participants struggle with the messiness and uncertainty of qualitative analysis, they also want full agency over the process and insist that AI should not take it away from them. We argue that qualitative analysis is a unique case for human-AI collaboration because uncertainty is essential to qualitative analysis itself. The design of AI assistance should embrace this uncertainty and support serendipity by promoting human agency, instead of being the arbiter that aims to reduce uncertainty as AI has been traditionally conceptualized. \begin{acks} We thank the reviewers for their significant time and care in improving this paper. We also thank our participants, many of whom sacrificed their conference time to participate in our study. Finally, we thank Brianna Dym, Steven Frost, Katie Gach, Shamika Goddard, Jacob Paul, Anthony Pinter, and Morgan Klaus Scheuerman for their help and feedback on the early drafts of this work. This work was supported in part by the National Science Foundation Award \#1764089. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Smoothed particle hydrodynamics (SPH) is a Lagrangian method introduced by \citet{L1977} and \citet{GM1977} that is frequently used for simulating astrophysical fluid problems, for example for star and galaxy formation and supernovae. However, it has also been used outside of astrophysics, for example for the modelling of tsunami and volcanoes \citep[see][for a review]{M1992}. SPH is normally used to model hydrodynamics, with the inclusion of self-gravity in astrophysical contexts. The smoothing length, which may be variable both in space and in time, enables SPH to naturally adapt its resolution to reflect the local density distribution. However, in many astrophysical situations radiation transport is also an important element. Various attempts have been made to include radiation transport into SPH. \citet{L1977} made the first foray into this field, using the diffusion approximation to examine the fission of protostars. \citet{B1985,B1986} also used radiative transfer in the diffusion approximation, modelling stars with entropy gradients. \citet{OW2003} combined SPH with a Monte-Carlo radiation transport method. \citet{Viau2001} and \citet{BCV2004} presented an implicit scheme for the diffusion approximation. Finally, \citet{WB2004} developed an implicit scheme for two-temperature (gas and radiation) radiative transfer in the flux-limited diffusion approximation. In this paper we describe a significant improvement to the method of \citet{WB2004}, namely the implementation of an implicit algorithm which is many thousands of times faster, but still models the same physics. Section \ref{sec:method} begins by summarising the origins of the flux-limited diffusion approximation, and continues into a brief overview of the explicit SPH equations derived by \citet{WB2004}, before describing in detail the new method. Section \ref{sec:testcalc} describes the tests we performed to validate this code. These test results should be compared with those of \citet{TS2001} and \citet{WB2004}. \section{Method} \label{sec:method} In a frame co-moving with the fluid, and assuming local thermal equilibrium (LTE), the equations governing the time-evolution of radiation hydrodynamics (RHD) are \begin{equation} \label{rhd1} \frac{D\rho}{Dt} + \rho\mbox{\boldmath $\nabla\cdot v$} = 0~, \end{equation} \begin{equation} \label{rhd2} \rho \frac{D\mbox{\boldmath $v$}}{Dt} = -\nabla p + \frac{\mbox{${\chi_{}}_{\rm \scriptscriptstyle F}\rho$}}{c} \mbox{\boldmath $F$}~, \end{equation} \begin{equation} \label{rhd3} \rho \frac{D}{Dt}\left( \frac{E}{\rho}\right) = -\mbox{\boldmath $\nabla\cdot F$} - \mbox{\boldmath $\nabla v${\bf :P}} + 4\pi \kappa_{\rm \scriptscriptstyle P} \rho B - c \kappa_{\rm \scriptscriptstyle E} \rho E~, \end{equation} \begin{equation} \label{rhd4} \rho \frac{D}{Dt}\left( \frac{e}{\rho}\right) = -p \mbox{\boldmath $\nabla\cdot v$} - 4\pi \kappa_{\rm \scriptscriptstyle P} \rho B + c \kappa_{\rm \scriptscriptstyle E} \rho E~, \end{equation} \begin{equation} \label{rhd5} \frac{\rho}{c^2} \frac{D}{Dt}\left( \frac{\mbox{\boldmath $F$}}{\rho}\right) = -\mbox{\boldmath $\nabla\cdot${\bf P}} - \frac{\mbox{${\chi_{}}_{\rm \scriptscriptstyle F}\rho $}}{c} \mbox{\boldmath $F$}~ \end{equation} \citep{MM1984,TS2001}. In these equations, $D/Dt \equiv \partial/\partial t + \mbox{\boldmath $v \cdot \nabla$}$ is the convective derivative. The symbols $\rho$, $e$, {\boldmath $v$} and $p$ represent the material mass density, energy density, velocity, and scalar isotropic pressure respectively. 
The total frequency-integrated radiation energy density, momentum density (flux) and pressure tensor are represented by $E$, {\boldmath $F$}, and {\bf P}, respectively. A detailed explanation of the flux-limited diffusion approximation to the above equations is given by \citet{TS2001}. Here we simply summarise the main points. The assumption of LTE allows the rate of emission of radiation from the matter in equations \ref{rhd3} and \ref{rhd4} to be written as the Planck function, $B$. Equations \ref{rhd2} to \ref{rhd5} have been integrated over frequency, leading to the flux mean total opacity ${\chi_{}}_{\rm \scriptscriptstyle F}$, and the Planck mean and energy mean absorption opacities, $\kappa_{\rm \scriptscriptstyle P}$ and $\kappa_{\rm \scriptscriptstyle E}$. In this paper, the opacities are assumed to be independent of frequency so that $\kappa_{\rm \scriptscriptstyle P}=\kappa_{\rm \scriptscriptstyle E}$ and the subscripts may be omitted. The total opacity, $\chi$, is the sum of components due to the absorption $\kappa$ and the scattering $\sigma$. The equations of RHD may be closed by an equation of state, specifying the gas pressure, the addition of constitutive relations for the Planck function and opacities, and an assumption about the relationship between the angular moments of the radiation field. In this paper, we use an ideal equation of state for the gas pressure $p = (\gamma -1)u\rho$, where $u=e/\rho$ is the specific energy of the gas. Thus, the temperature of the gas is $T_{\rm g}=(\gamma -1)\mu u/R_{\rm g}=u/c_{\rm v}$, where $\mu$ is the dimensionless mean particle mass, $R_{\rm g}$ is the gas constant and $c_{\rm v}$ is the specific heat capacity of the gas. The Planck function is given by $B=(\sigma_{\rm \scriptscriptstyle B}/\pi)T_{\rm g}^4$, where $\sigma_{\rm \scriptscriptstyle B}$ is the Stefan-Boltzmann constant. The radiation energy density also has an associated temperature $T_{\rm r}$ from the equation $E=4 \sigma_{\rm \scriptscriptstyle B} T_{\rm r}^4/c$. For an isotropic radiation field ${\bf P} = \frac{1}{3} E \, \mbox{\bf I}$. The Eddington approximation assumes this relation holds everywhere and implies that, in a steady state, equation \ref{rhd5} becomes \begin{equation} \label{eddington} \mbox{\boldmath $F$} = -\frac{c}{3\chi\rho} \nabla E. \end{equation} This expression gives the correct flux in optically thick regions, where $\chi\rho$ is large. However, in optically thin regions where $\chi\rho \rightarrow 0$, the flux tends to infinity, whereas in reality $|\mbox{\boldmath $F$}| \le cE$. Flux-limited diffusion solves this problem by limiting the flux in optically thin environments to always obey this inequality. \citet{LP1981} wrote the radiation flux in the form of Fick's law of diffusion as \begin{equation} \label{fld1} \mbox{\boldmath $F$} = -D \nabla E, \end{equation} with a diffusion constant given by \begin{equation} \label{fld2} D = \frac{c\lambda}{\chi\rho}. \end{equation} The dimensionless function $\lambda(E)$ is called the flux limiter.
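To make the limiting behaviour concrete, the following illustrative Python sketch evaluates the diffusion coefficient of equation \ref{fld2}, anticipating the Levermore \& Pomraning flux limiter adopted below (the function names and the cgs value of $c$ are our own illustrative choices, not part of any production code):

\begin{verbatim}
def lp_limiter(R):
    # Levermore & Pomraning (1981): lambda(R) = (2 + R)/(6 + 3R + R^2)
    return (2.0 + R) / (6.0 + 3.0 * R + R * R)

def diffusion_coefficient(E, gradE_mag, chi, rho, c=2.998e10):
    # D = c * lambda / (chi * rho), in cgs units.
    # Optically thick (R -> 0): lambda -> 1/3, recovering the Eddington
    #   flux F = -c/(3 chi rho) grad E.
    # Optically thin (R -> infinity): lambda -> 1/R, so |F| -> c E.
    R = gradE_mag / (chi * rho * E)
    return c * lp_limiter(R) / (chi * rho)
\end{verbatim}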
The radiation pressure tensor may then be written in terms of the radiation energy density as \begin{equation} \label{fld3} \mbox{\rm \bf P} = \mbox{ \rm \bf f} E, \end{equation} where the components of the Eddington tensor, {\bf f}, are given by \begin{equation} \label{fld4} \mbox{\rm \bf f} = \frac{1}{2}(1-f)\mbox{\bf I} + \frac{1}{2}(3f-1)\mbox{\boldmath $\hat{n}\hat{n}$}, \end{equation} where $\mbox{\boldmath $\hat{n}$}=\nabla E/|\nabla E|$ is the unit vector in the direction of the radiation energy density gradient and the dimensionless scalar function $f(E)$ is called the Eddington factor. The flux limiter and the Eddington factor are related by \begin{equation} \label{fld5} f = \lambda + \lambda^2 R^2, \end{equation} where $R$ is the dimensionless quantity $R = |\nabla E|/(\chi\rho E)$. Equations \ref{fld1} to \ref{fld5} close the equations of RHD, eliminating the need to solve equation \ref{rhd5}. However, we must still choose an expression for the flux limiter, $\lambda$. In this paper, we choose \citet{LP1981}'s flux limiter \begin{equation} \lambda(R) = \frac{2+R}{6 + 3R + R^2}, \end{equation} to allow comparison of our results with those of \citet{TS2001} and \citet{WB2004}. \subsection{The explicit method} In \citet{WB2004}, we described a method by which equations \ref{rhd2} to \ref{rhd4} can be written in SPH formalism. Equation \ref{rhd1} does not need to be solved directly since the density of each particle is calculated using the standard SPH summation over the particle and its neighbours. We define the specific energy of the gas to be $u = e /\rho$, and that of the radiation to be $\xi = E / \rho$. The explicit equations in one dimension are then \begin{equation} {D \mbox{\boldmath $v_{i}$} \over D t} = - \sum_{j=1}^{N} m_{j} \left( {p_{i} \over \rho_{i}^2} + { p_{j} \over \rho_{j}^2} + \Pi_{ij} \right) \nabla W(r_{ij},h_{ij}) - \frac{\lambda_{i}}{\rho_{i}} \sum_{j=1}^N m_j \xi_{j} \nabla W(r_{ij},h_{ij})~, \end{equation} \begin{equation} \label{eqn:SPHRTE} {D \xi_i \over D t} = \sum_{j=1}^N { m_j \over \rho_i \rho_j} c \left[ {4 {\lambda_i \over \kappa_{i} \rho_i} {\lambda_j \over \kappa_{j} \rho_j} \over \left( {\lambda_i \over \kappa_{i} \rho_i} +{\lambda_j \over \kappa_{j} \rho_j} \right) } \right] \left( \rho_i \xi_i - \rho_j \xi_j \right) {\nabla W_{ij} \over r_{ij}} - \left( \mbox{\boldmath $\nabla \cdot v$} \right)_i f_i \xi_i - a c \kappa_{i} \left({\rho_i \xi_i \over a} - \left( {u_i \over c_{{\rm v},i} } \right)^4 \right), \end{equation} \begin{equation} \label{eqn:SPHRTU} {D u_i \over D t} = \frac{1}{2} \sum_{j=1}^N \left( {p_i \over \rho_i^2 } + {p_j \over \rho_j^2} + \Pi_{ij} \right) m_j \mbox{\boldmath $v$}_{ij} \cdot \nabla W_{ij} + a c \kappa_{i} \left( {\rho_i \xi_i \over a} - \left( {u_i \over c_{{\rm v},i} } \right)^4 \right), \end{equation} where $a=4 \sigma_{\rm B}/c$, $m_i$ is the mass of SPH particle $i$, $\mbox{\boldmath $r$}_{ij}=\mbox{\boldmath $r$}_{i}-\mbox{\boldmath $r$}_{j}$ is the difference in positions between particles $i$ and $j$, $\mbox{\boldmath $v$}_{ij}=\mbox{\boldmath $v$}_{i}-\mbox{\boldmath $v$}_{j}$, and $W_{ij}=W(r_{ij}, h_{ij})$, where $W$ is the standard cubic spline kernel and the mean smoothing length of particles $i$ and $j$ is $h_{ij}=(h_i + h_j)/2$. The smoothing lengths are defined in the same manner as those in \citet{WB2004}, so each particle has approximately eight neighbours unless otherwise stated. 
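As an illustration of how the flux-limited diffusion term (the first term on the right-hand side of equation \ref{eqn:SPHRTE}) might be evaluated in code, consider the following schematic Python fragment; the data layout and names are illustrative assumptions rather than those of an actual SPH implementation:

\begin{verbatim}
def radiation_diffusion_rate(i, neighbours, m, rho, xi, lam, kappa,
                             gradW, r, c=2.998e10):
    # Schematic pairwise diffusion term for particle i. gradW[j] and
    # r[j] are assumed to hold grad W(r_ij, h_ij) and |r_ij| for the
    # pair (i, j).
    dxidt = 0.0
    Di = lam[i] / (kappa[i] * rho[i])
    for j in neighbours:
        Dj = lam[j] / (kappa[j] * rho[j])
        b = 4.0 * Di * Dj / (Di + Dj)  # bracketed mean in the equation
        dxidt += (m[j] / (rho[i] * rho[j])) * c * b \
                 * (rho[i] * xi[i] - rho[j] * xi[j]) * gradW[j] / r[j]
    return dxidt
\end{verbatim}

Because the pairwise exchange is antisymmetric under $i \leftrightarrow j$, this form conserves the total radiation energy.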
We use the standard SPH artificial viscosity \[ \Pi_{ij} = \left\{ \begin{array}{ll} \left( - \alpha_{\rm v} c_{\rm s} \mu_{ij} + \beta_{\rm v} \mu_{ij}^2 \right) / \rho_{ij} & \mbox{if } \mbox{\boldmath $v$}_{ij} \cdot \mbox{\boldmath $r$}_{ij} \leq 0 \mbox{, and} \\ 0 & \mbox{if } \mbox{\boldmath $v$}_{ij} \cdot \mbox{\boldmath $r$}_{ij} > 0, \\ \end{array} \right. \] where $\mu_{ij} = h \left( \mbox{\boldmath $v$}_i - \mbox{\boldmath $v$}_j \right) \cdot \left( \mbox{\boldmath $r$}_i - \mbox{\boldmath $r$}_j \right) / \left( \left| \mbox{\boldmath $r$}_i - \mbox{\boldmath $r$}_j \right|^2 + \eta^2 \right)$, with $\eta^2 = 0.01 h^2$ to prevent numerical divergences if particles get too close together. We use $\alpha_{\rm v}=1$ and $\beta_{\rm v}=2$ unless stated otherwise. In equation \ref{eqn:SPHRTE} the first term on the right hand side is the diffusion term, the second is the work done on the radiation field (in one dimension), and the final term allows energy transfer between the radiation and the gas. In equation \ref{eqn:SPHRTU} the energy transfer term occurs with the opposite sign, while the remaining term is the symmetric SPH expression for the work and viscous dissipation done on the gas when the thermodynamic variable of integration is energy. We also tested a supercritical shock (see section \ref{sec:supershock}) with an implicit method derived from the asymmetric variant of the work term in equation \ref{eqn:SPHRTU} \begin{equation} \label{eqn:assym} {D u_i \over D t} = \sum_{j=1}^N \left( {p_i \over \rho_i^2} + \frac{1}{2} \Pi_{ij} \right) m_j \mbox{\boldmath $v$}_{ij} \cdot \nabla W_{ij} + a c \kappa_{i} \left( {\rho_i \xi_i \over a} - \left( {u_i \over c_{{\rm v},i} } \right)^4 \right). \end{equation} The two forms of the gas energy equation gave results that differed by two per cent or less. All results presented in this paper use the symmetric version. \subsection{Implicit method} The implicit method for solving the energy equations described by \citet{WB2004} calculated the gas work and viscous terms and the diffusion term as an interaction between pairs of particles, subtracting energy from one particle and adding it to another. The radiation pressure term was added to $\xi$, while the interaction term between the gas and the radiation was calculated by the solution of a quartic equation. The required timestep ${\rm d}t$ was split into $N$ substeps and the particles were swept over. This solution was then compared to that with $2N$ substeps. If the fractional error between the two solutions was not less than a specified tolerance, the number of substeps was doubled until the required tolerance was reached. The new formulation uses a Gauss-Seidel method to iterate towards the solution of the system of equations. We use a backwards Euler implicit method rather than the trapezoidal method used in \citet{WB2004}, because the former allows larger timesteps to be taken. To advance a time-dependent variable $A$ from time $t=n$ to $t=n+1$, the backwards Euler scheme gives \begin{equation} \label{eqn:backwardsE} A_i^{n+1} = A_i^n + {\rm d} t \left(\frac{ {\rm d} A_i}{{\rm d} t}\right)^{n+1}.
\end{equation} For a Gauss-Seidel method involving interactions between particles $i$ and $j$, the new value $A_i^{n+1}$ can be solved for by arranging the implicit equations into the form \begin{equation} \label{eqn:GS} A_i^{n+1} = \frac{ A_i^n - {\rm d} t \sum_j \sigma_{ij} \left( A_j^{n+1} \right) }{ 1 - {\rm d} t \sum_j \sigma_{ij}}, \end{equation} where $\sigma_{ij}$ contains quantities other than $A$, and $A_j^{n+1}$ begins as $A_j^n$ and is updated as soon as new values become available. This equation is iterated until convergence is achieved. The backwards Euler form of equation \ref{eqn:SPHRTE} is given by \begin{equation} \label{eqn:GSxi} \xi^{n+1}_i = \xi_i^n + {\rm d} t \sum_j \frac{m_j}{\rho_i \rho_j} b c \left( \rho_i \xi_i^{n+1} - \rho_j \xi_j^{n+1} \right) \frac{\nabla W_{ij}}{r_{ij}} - {\rm d} t \left( \mbox{\boldmath $\nabla \cdot v$} \right)_i f_i \xi_i^{n+1} - {\rm d} t a c \kappa_i \left[ \frac{ \rho_i \xi_i^{n+1}}{a} - \left( \frac{ u_i^{n+1}}{c_{{\rm v},i}} \right)^4 \right], \end{equation} where \begin{equation} b=\left[ {4 {\lambda_{i} \over \kappa_{i} \rho_{i}} {\lambda_{j} \over \kappa_{j} \rho_{j}} \over \left( {\lambda_{i} \over \kappa_{i} \rho_{i}} + {\lambda_{j} \over \kappa_{j} \rho_{j}} \right) } \right] \end{equation} for brevity, and that of equation \ref{eqn:SPHRTU} is \begin{equation} \label{eqn:GSu} u^{n+1}_i = u_i^n + {\rm d} t \sum_j \frac{1}{2} m_j \mbox{\boldmath $v_{ij}$} \cdot \nabla W_{ij} \left( \frac{ u_i^{n+1} \left( \gamma - 1 \right) }{\rho_i} + \frac{ u_j^{n+1} \left( \gamma - 1 \right) }{\rho_j} + \Pi_{ij} \right) + {\rm d} t a c \kappa_i \left[ \frac{ \rho_i \xi_i^{n+1}}{a} - \left( \frac{ u_i^{n+1}}{c_{{\rm v},i}} \right)^4 \right], \end{equation} substituting $(\gamma -1)u\rho$ for $p$ using our equation of state. These equations can be rearranged into the form of equation \ref{eqn:GS} to solve for $\xi^{n+1}_i$ and $u^{n+1}_i$, and Gauss-Seidel iteration performed with all other independent variables fixed. We investigated two different approaches to solving these equations. The first method was to iterate over the Gauss-Seidel forms of equations \ref{eqn:GSxi} and \ref{eqn:GSu} separately, but within the same iterations. This resulted in implicit integration that was many times faster than that of \citet{WB2004} in optically thin regimes, but in optically thick regions the performance of the code was similar to that of \citet{WB2004}. The method also failed to converge for large timesteps when the energy transfer term (the last terms in equations \ref{eqn:GSxi} and \ref{eqn:GSu}) became large (due to high $\kappa$ or large temperature differences between the gas and the radiation). By far the most effective method is to solve equations \ref{eqn:GSxi} and \ref{eqn:GSu} simultaneously for $u_i^{n+1}$, and to perform Gauss-Seidel iteration on the resulting expression. The resulting value of $u_i^{n+1}$ is then substituted into equation \ref{eqn:GSxi} to obtain $\xi_i^{n+1}$ during the same iteration.
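The structure of a single Gauss-Seidel solve of equation \ref{eqn:GS} can be sketched as follows (an illustrative fragment of ours, not the production code; sigma and neigh are assumed helpers):
\begin{verbatim}
def gauss_seidel(A, A_old, sigma, neigh, dt, tol=1.0e-3, max_iter=200):
    # A     : current iterate of A^{n+1}, updated in place
    # A_old : the fixed A^n values
    for _ in range(max_iter):
        err = 0.0
        for i in range(len(A)):
            s_num = sum(sigma(i, j) * A[j] for j in neigh[i])
            s_den = sum(sigma(i, j) for j in neigh[i])
            new = (A_old[i] - dt * s_num) / (1.0 - dt * s_den)
            err = max(err, abs(new - A[i]) / max(abs(new), 1.0e-30))
            A[i] = new        # immediately available to later particles
        if err < tol:
            return A
    raise RuntimeError("no convergence: halve the timestep and retry")
\end{verbatim}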
To simplify the subsequent equations we define the following quantities: \begin{eqnarray} \beta = & \displaystyle {\rm d} t ~c~ \kappa_i ~\rho_i ,\nonumber \\ \Gamma = & \displaystyle a~ c~ \kappa_i / c_{{\rm v}, i}^4, \nonumber \\ D_{{\rm d},i} = & \displaystyle \sum_j \frac{m_j}{\rho_j} c~b \frac{\nabla W_{ij}}{r_{ij}}, \nonumber \\ D_{{\rm n},i} = & \displaystyle - \sum_j \frac{m_j}{\rho_i~\rho_j} c~b \frac{\nabla W_{ij}}{r_{ij}} \rho_j \xi_j^{n+1}, \nonumber \\ P_{{\rm d},i} = & \displaystyle \sum_j \frac{1}{2} m_j \mbox{\boldmath $v_{ij}$} \cdot \nabla W_{ij} \frac{ \left( \gamma - 1 \right)}{\rho_i}, \nonumber \\ P_{{\rm n},i} = & \displaystyle \sum_j \frac{1}{2} m_j \mbox{\boldmath $v_{ij}$} \cdot \nabla W_{ij} \left[ \frac{ \left( \gamma - 1 \right) u_j^{n+1} }{\rho_j} + \Pi_{ij} \right], \nonumber \\ R_{{\rm p},i} = & \displaystyle \left( \mbox{\boldmath $\nabla \cdot v$} \right)_i f_i, \nonumber \\ \chi = & {\rm d} t D_{{\rm d},i} - {\rm d} t R_{{\rm p},i}, \nonumber \end{eqnarray} where this $\chi$ should not be confused with the total opacity used earlier. Using these new variables we can solve equation \ref{eqn:GSu} for $\xi_i^{n+1}$: \begin{equation} \label{eqn:xisubst} \xi_i^{n+1} = \frac{1}{\beta} \left( u_i^{n+1} - u_i^n - {\rm d} t~ P_{{\rm n},i} - {\rm d} t ~P_{{\rm d},i} u_i^{n+1} + {\rm d} t ~\Gamma \left[ u_i^{n+1} \right]^4 \right). \end{equation} The right hand side of equation \ref{eqn:xisubst} then replaces $\xi_i^{n+1}$ in equation \ref{eqn:GSxi}, forming a quartic equation in $u_i^{n+1}$. If the quartic equation is cast in the form $ a_4 x^4 + a_3 x^3 + a_2 x^2 +a_1 x + a_0 = 0$, then the coefficients are given by: \begin{eqnarray} a_4 = & \displaystyle \Gamma {\rm d} t ~\left( \chi - 1 \right) \nonumber \\ a_3 = & \displaystyle 0 \nonumber \\ a_2 = & \displaystyle 0 \nonumber \\ a_1 = & \displaystyle \left( \chi - \beta - 1 \right) \left( 1 - {\rm d} t~ P_{{\rm d},i} \right) \nonumber \\ a_0 = & \displaystyle \beta \xi^n_i + \left( \chi - \beta - 1 \right) \left( - u_i^n - {\rm d} t~ P_{{\rm n},i} \right) + {\rm d} t ~D_{{\rm n},i} \beta \nonumber \end{eqnarray} Solving this quartic equation yields a value for $u_i^{n+1}$, which may then be substituted into \begin{equation} \label{eqn:xiinredef} \xi_i^{n+1} = \frac{ \left( \xi_i^n + {\rm d} t~ D_{{\rm n},i} + {\rm d} t ~\Gamma \left[ u_i^{n+1} \right]^4 \right) }{ 1 - \chi + \beta}, \end{equation} using the quantities defined above \citep[for the analytic solution of a quartic equation, see Appendix A of][]{WB2004}. These solutions for $\xi_i^{n+1}$ and $u_i^{n+1}$ are iterated until they converge. \subsection{Prediction of position, density and smoothing length} \label{sec:prediction} We found that the accuracy when taking large implicit timesteps could be improved by predicting forward many of the quantities on the right-hand sides of equations \ref{eqn:GSxi} and \ref{eqn:GSu} to time $t=n+1$. The quantities $x$, $\rho$, and $h$ can be predicted forwards as \begin{eqnarray} x_i^{n+1} =& x_i^n + {\rm d} t\; \mbox{\boldmath $v^n_i$} \nonumber \\ \rho_i^{n+1} =& \rho_i^n - {\rm d} t\; \rho_i^n \left( \mbox{\boldmath $\nabla \cdot v$} \right)_i^n \nonumber \\ h_i^{n+1} =& h_i^n + {\rm d} t\; h_i^n \left( \mbox{\boldmath $\nabla \cdot v$} \right)_i^n. \end{eqnarray} The improvement in accuracy was especially apparent in the case of the supercritical shock (see section \ref{sec:supershock}).
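To summarise the implicit update described in this and the preceding subsection, the per-particle solve can be sketched as follows (ours, illustrative only; numpy.roots stands in for the analytic quartic solution of \citealp{WB2004}, and the selection of the smallest positive real root as the physical one is an assumption of this sketch, as are all names):
\begin{verbatim}
import numpy as np

def update_particle(u_n, xi_n, beta, Gamma, D_n, P_d, P_n, chi, dt):
    # Quartic a4*u^4 + a1*u + a0 = 0 for u_i^{n+1} (a3 = a2 = 0)
    a4 = Gamma * dt * (chi - 1.0)
    a1 = (chi - beta - 1.0) * (1.0 - dt * P_d)
    a0 = beta * xi_n + (chi - beta - 1.0) * (-u_n - dt * P_n) \
         + dt * D_n * beta
    roots = np.roots([a4, 0.0, 0.0, a1, a0])
    u_new = min(z.real for z in roots
                if abs(z.imag) < 1.0e-12 and z.real > 0.0)
    # Back-substitute into the closed form for xi_i^{n+1}
    xi_new = (xi_n + dt * D_n + dt * Gamma * u_new**4) / (1.0 - chi + beta)
    return u_new, xi_new
\end{verbatim}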
\subsection{Convergence criteria} We define convergence as being achieved when the values of $u_i^{n+1}$ and $\xi_i^{n+1}$ obtained from the $m$-th iteration satisfy equations \ref{eqn:GSxi} and \ref{eqn:GSu} to a given tolerance (with all occurrences of $u_i^{n+1}$ and $\xi_i^{n+1}$ on the right-hand sides of these equations being evaluated from iteration $m-1$). Thus, for example, we iterate until the fractional errors in $\xi$, given by \begin{equation} \frac{\xi^{n+1,m}_i - \left( \xi_i^n + {\rm d} t \sum_j \frac{m_j}{\rho_i \rho_j} b c \left( \rho_i \xi_i^{n+1,m-1} - \rho_j \xi_j^{n+1} \right) \frac{\nabla W_{ij}}{r_{ij}} - {\rm d} t \left( \mbox{\boldmath $\nabla \cdot v$} \right)_i f_i \xi_i^{n+1,m-1} - {\rm d} t~ a ~c~ \kappa_i \left[ \frac{ \rho_i \xi_i^{n+1,m-1}}{a} - \left( \frac{ u_i^{n+1,m-1}}{c_{{\rm v},i}} \right)^4 \right] \right)}{\xi^{n+1,m}_i}, \end{equation} are within a certain tolerance, for which we have typically used $10^{-3}$. In the event that the method fails to converge, we split the timestep into two halves and begin the iterations again, using the result of the first half timestep as the input to the second. If either half fails, we split the timesteps by another factor of two, and continue in this way until either the system converges, or the timestep reaches some excessively small fraction of the original timestep, at which stage it is no longer computationally efficient to continue the calculation. \subsection{Timestep criteria} The integration of the hydrodynamic variables requires that the timesteps obey the Courant condition for the hydrodynamic processes. The usual hydrodynamical SPH timestep criteria are \begin{equation} \label{tshydro1} {\rm d} t_{{\rm Courant},i} = { \zeta h_{i} \over c_{\rm s} + h_{i} \left| \nabla \cdot \mbox{\boldmath $v$} \right|_{i} + 1.2 \left( \alpha_{\rm v} c_{\rm s} + \beta_{\rm v} h_{i} \left| \nabla \cdot \mbox{\boldmath $v$} \right|_{i} \right) }, \end{equation} and \begin{equation} \label{tshydro2} {\rm d} t_{{\rm force},i} = \zeta \sqrt{{h_{i} \over \left| \mbox{\boldmath $a$}_{i} \right|}}, \end{equation} where we use a Courant number of $\zeta=0.3$, unless otherwise noted, and \mbox{\boldmath $a$}$_i$ is the acceleration of particle $i$. The lesser of these two quantities gives the hydrodynamical timestep. There is also an explicit timestep associated with the radiation hydrodynamics, described in detail in \citet{WB2004}. This timestep is typically much smaller than the hydrodynamical timestep, and the new implicit method enables us to forgo the use of smaller timesteps in favour of the large hydrodynamical timestep.
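The selection of the hydrodynamical timestep can be summarised by the following sketch (ours, illustrative only):
\begin{verbatim}
import math

def hydro_timestep(h, cs, divv, accel, alpha_v=1.0, beta_v=2.0, zeta=0.3):
    # Courant-type criterion (first timestep equation above)
    dt_courant = zeta * h / (cs + h*abs(divv)
                             + 1.2*(alpha_v*cs + beta_v*h*abs(divv)))
    # Force criterion (second timestep equation above)
    dt_force = zeta * math.sqrt(h / max(abs(accel), 1.0e-30))
    # The lesser of the two gives the hydrodynamical timestep.
    return min(dt_courant, dt_force)
\end{verbatim}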
\section{Test calculations} \label{sec:testcalc} We have once again duplicated the tests done by \citet{TS2001}, as we did in \citet{WB2004}, this time, however, using the new code. This code achieves the same or better accuracy than the code described in \citet{WB2004}. However, in the vast majority of cases (especially those involving moving fluids) the new code is significantly faster than the old one. \subsection{Heating and cooling terms} \label{sec:heatcool} We tested the interaction between the radiation and the gas to check that the temperatures of the gas and the radiation equalise at the correct rate when $T_{\rm g} \neq T_{\rm r}$ initially. A motionless gas in a domain 10 cm long was modelled with 100 particles, with a density $\rho = 10^{-7}$ g cm$^{-3}$, opacity $\kappa = 0.4$ cm$^2$ g$^{-1}$, $\gamma= \frac{5}{3}$ and $E = \xi \rho = 10^{12}$ erg cm$^{-3}$. Two tests were carried out, one where the gas heated until it reached the radiation temperature, and one where it cooled. The first test had $e=u \rho = 10^2$ erg cm$^{-3}$, and the second $e=u \rho = 10^{10}$ erg cm$^{-3}$. The boundaries of the calculation used reflective ghost particles. In the case where the energy in the radiation is much greater than that in the gas, this problem can be approximated by the differential equation \begin{equation} \frac{{\rm d} e}{{\rm d} t} = c \kappa \rho E - a c \kappa \rho \left( {e \over \rho c_{\rm v}} \right)^4, \end{equation} assuming $E$ is constant. In figure \ref{fig:ts51}, the solid line is this analytic solution, plotted both for the case where $T_{\rm g}$ increases and for the case where it decreases. The crosses are the results of the SPH code using an implicit timestep that is set to the greater of $10^{-14}$ s or five percent of the time elapsed, similar to the way \citet{WB2004} performed this test. The squares are similar, but with a timestep being the greater of $10^{-11}$ s or five percent of the time elapsed. As can be seen, the match between the analytic solution and the solutions given by the SPH code is once again excellent. On this test, the new method and that of \citet{WB2004} are of comparable speed and each takes a few minutes. \begin{figure} \centerline{\psfig{figure=fig1.eps,width=9.0truecm}} \caption{\label{fig:ts51} The evolution of the gas energy density $e$ as it equilibrates with a radiation energy density $E=10^{12}$ erg cm$^{-3}$. In the upper case, $e=10^{10}$ erg cm$^{-3}$ initially, while in the lower case $e=10^{2}$ erg cm$^{-3}$. The solid line is the analytic solution, the crosses are the results of the SPH code using implicit timesteps of the greater of $10^{-14}$~s or five percent of the elapsed time, and the squares with a timestep of the greater of $10^{-11}$~s or five percent of the elapsed time. The symbols are plotted every ten timesteps.} \end{figure} \subsection{Propagating radiation in optically thin media} \begin{figure} \centerline{\psfig{figure=fig2.eps,width=9.0truecm}} \caption{\label{fig:ts55} The propagation of a radiation pulse across a uniform medium. The time is $t=10^{-11}$~s. The vertical dashed line shows the expected position of the pulse based on the speed of light. The results are almost independent of the size of the implicit timestep used. Results are given for implicit timesteps equal to (solid line), ten times (dotted line), one hundred times (short-dashed line), and one thousand times (long-dashed line) the explicit timestep. The dot-dashed line gives the result using a single implicit step of $10^{-11}$~s. The initial conditions are also shown as a solid line.} \end{figure} In the standard diffusion approximation, radiation in optically thin regions propagates at near infinite speed. This is unphysical, and the flux limiter has been introduced to limit the diffusion of radiation to the speed of light. To examine how well our code limits the speed of the radiation, a one centimetre long one-dimensional box is filled with 100 equally spaced SPH particles, with $E= 10^{-2}$ erg cm$^{-3}$ ($\xi = 0.4$ erg g$^{-1}$), $\rho = 0.025$ g cm$^{-3}$, and $\kappa=0.4$ cm$^2$ g$^{-1}$. Initially, the radiation and gas are in thermal equilibrium. At the start of the simulation, the radiation energy density for the leftmost ten particles was changed to $E=0.1$ erg cm$^{-3}$ ($\xi = 4$ erg g$^{-1}$), causing a radiation front that was allowed to propagate across the region.
The ghost particles were reflective except in specific radiation energy $\xi$, which was fixed equal to $\xi=4$ erg g$^{-1}$ at the left hand boundary and $\xi=0.4$ erg g$^{-1}$ at the right hand boundary. The implicit code was used with various timesteps, ranging from an explicit timestep to a single implicit step lasting $10^{-11}$~s. The results are shown in figure \ref{fig:ts55}: the radiation pulse propagates at the correct speed, even using one single large timestep. The front is smoothed out in a manner similar to the results of \citet{TS2001} and \citet{WB2004}; both methods are quite diffusive in this situation. The new code is slower than that of \citet{WB2004} for an explicit timestep, but superior for all longer timesteps. \subsection{Optically-thick (adiabatic) and optically-thin (isothermal) shocks} A shock-tube test, identical to the one in \citet{WB2004}, was set up to investigate the way the code simulated optically-thin and optically-thick regimes and the transition between them. In the limit of high optical depth, the gas cannot cool because the radiation is trapped within the gas; thus the shock is adiabatic. An optically-thin shock, on the other hand, is able to radiate away its thermal energy efficiently and thus behaves as an isothermal shock. In these shock tests, the gas and radiation are highly coupled and, thus, their temperatures are equal. A domain $2 \times 10^{15}$ cm long, extending from $x = -1 \times 10^{15}$ to $1 \times 10^{15} $ cm, was set up with an initial density of $\rho = 10^{-10}$ g cm$^{-3}$, and the temperatures of the gas and radiation were initially $1500$~K. One hundred particles were equally spaced in the domain, with those with negative $x$ having a velocity equal to the adiabatic ($\gamma = 5/3$) sound speed $v_0=c_{\rm s}= 3.2\times 10^5$ cm s$^{-1}$, and those with positive $x$ travelling at the same speed in the opposite direction. The two flows impact at the origin, and a shock forms. Opacities of $\kappa = 40, 0.4, 4.0 \times 10^{-3} $ and $4.0 \times 10^{-5}$ cm$^2$ g$^{-1}$ were used to follow the transition from adiabatic to isothermal behaviour. Ghost particles were placed outside the boundaries and maintained the initial energies of their respective real particles. The boundaries moved inwards with the same velocity as the initial velocities of the two streams. These moving boundaries cause slight perturbations in the densities of those particles closest to the boundaries; however, this does not affect the solution in the vicinity of the shocks. \begin{figure} \centerline{\psfig{figure=fig3.eps,width=15.0truecm}} \caption{\label{fig:isoadia} A set of shocks with differing opacity at time $t=1.0\times 10^9$~s. Density is on the left, and gas temperature on the right. The crosses are the SPH results; the solid line gives the analytic solution for an adiabatic shock, and the dashed line for an isothermal shock. The opacities are (top) $\kappa = 40$, (upper middle) $\kappa = 0.4 $, (lower middle) $\kappa = 4.0 \times 10^{-3}$ and (bottom) $\kappa = 4.0 \times 10^{-5}$ cm$^2$ g$^{-1}$. As the opacity is decreased, the shocks transition from adiabatic to isothermal behaviour.} \end{figure} The optically thick and thin limits can be solved analytically (e.g. \citealp{Zeldo}).
The shock speed is given by \begin{equation} D = \frac{ ( \gamma_{\rm eff} - 3)\,v_0 + \sqrt{ (\gamma_{\rm eff} +1 )^2 v_0^2 + 16 \gamma_{\rm eff} }}{4}, \end{equation} where $\gamma_{\rm eff}=1$ for the isothermal case, $\gamma_{\rm eff}=5/3$ for the adiabatic case, and velocities are measured in units of the isothermal sound speed $\sqrt{p_0/\rho_0}$ of the unshocked gas, so that the expression is dimensionless. The ratio of the final to the initial density is given by \begin{equation} \frac{\rho_1}{\rho_0} = 1 + {v_0 \over D}, \end{equation} and, for the adiabatic shock, the ratio of the final to the initial temperature is given by \begin{equation} \frac{T_1}{T_0} = \frac{\rho_0}{\rho_1} + \frac{v_0 D \rho_0}{p_0}. \end{equation} These analytic solutions are shown by the solid and dashed lines in figure \ref{fig:isoadia}. In the figure, the opacity decreases from top to bottom, showing the transition from optically-thick (adiabatic) to optically-thin (isothermal) behaviour. The extremes are in good agreement with their respective analytic adiabatic and isothermal solutions. Note that the spike in thermal energy near the origin and the corresponding reduction in density for the optically-thick case (due to `wall-heating') is softened by the radiation transport that occurs in the intermediate opacity calculation with $\kappa = 0.4$. In comparison to the code described in \citet{WB2004}, these shock calculations ran many times faster, with the timestep limited by the hydrodynamical criteria only. The $\kappa = 40$ and $\kappa = 0.4$ shocks ran in less than a minute, compared to the old code, which took thirty-four hours and thirty-five minutes, respectively. The $\kappa = 4 \times 10^{-3}$ shock took just under one minute, compared to forty-five minutes for the old code. The $\kappa = 4 \times 10^{-5}$ calculation took twenty-three minutes, compared to the previous time of ten hours.
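The analytic relations above are easily evaluated; the short sketch below (ours, illustrative only) computes the two limiting solutions plotted in figure \ref{fig:isoadia}, with velocities in units of $\sqrt{p_0/\rho_0}$ so that $\rho_0/p_0=1$:
\begin{verbatim}
import math

def shock_relations(v0, gamma_eff):
    D = ((gamma_eff - 3.0)*v0
         + math.sqrt((gamma_eff + 1.0)**2 * v0*v0 + 16.0*gamma_eff)) / 4.0
    ratio_rho = 1.0 + v0/D                 # rho_1/rho_0
    ratio_T = 1.0/ratio_rho + v0*D         # T_1/T_0 (adiabatic case)
    return D, ratio_rho, ratio_T

v0 = math.sqrt(5.0/3.0)      # inflow at the adiabatic sound speed
print(shock_relations(v0, 1.0))        # isothermal limit
print(shock_relations(v0, 5.0/3.0))    # adiabatic limit
\end{verbatim}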
\subsection{Sub- and super-critical shocks} \label{sec:supershock} \begin{figure} \centerline{\psfig{figure=fig4.eps,width=16.0truecm}} \caption{\label{fig:subshock} The sub-critical shock with piston velocity $6 \times 10^5$ cm s$^{-1}$ and 100 particles. The large panels show the results using the explicit code. The top panels show radiation (solid line) and gas (dotted line) temperatures. The bottom left panel shows the normalised flux and the bottom right panel the Eddington factor. The long-dashed lines give the analytic solutions for the gas temperature and normalised flux. An Eddington factor of 1/3 is also indicated for reference (short-dashed line, lower right panel). The subpanels plot the logarithm of the difference between the results using the explicit code and the implicit code (see the main text). The subpanels show the results with (lower subpanel) and without (upper subpanel) predicting $x$, $h$ and $\rho$ forwards in time. The implicit code was run with timesteps equal to (solid line), one tenth of (long-dashed line) and one hundredth of (short-dashed line) the {\it hydrodynamic} timestep.} \end{figure} A supercritical shock occurs when the photons generated by a shock have sufficient energy to preheat the material upstream. In the characteristic temperature profile of a supercritical shock, the temperatures on either side of the shock front are similar, rather than the downstream temperature being much higher than that upstream, as occurs in a subcritical shock \citep[see][for more details]{Zeldo}. The initial conditions of this problem are those of \citet*{SGM1999} and \citet{TS2001}. A gas with opacity $\kappa = 0.4 $ cm$^2$ g$^{-1}$, uniform density $\rho=7.78 \times 10^{-10}$ g cm$^{-3}$, mean molecular weight $\mu = 0.5$ and $\gamma = \frac{5}{3}$ is set up with $\xi$ and $u$ in equilibrium, with a temperature gradient of $T = 10 + 75\,x/\left(7 \times 10^{10}\right)$ K. Initially the particles are equally spaced between $x=0$ and $x= 7 \times 10^{10} $ cm for the supercritical shock, and between $x=0$ and $x= 3.5 \times 10^{10} $ cm for the subcritical shock. At time $t=0$ a piston starts to move into the fluid from the left-hand boundary (simulated by moving the location of the boundary). For the subcritical shock the piston velocity is $v_{\rm p} = 6$ km s$^{-1}$, and for the supercritical shock $v_{\rm p} = 16$ km s$^{-1}$, as per \citet{SGM1999}. The ghost particles are reflective in the frame of reference of the boundary. Artificial viscosity parameters $\alpha_{\rm v} = 2$ and $\beta_{\rm v} = 4$ were used to smooth out oscillations. The results of calculations for a sub-critical shock (piston velocity $v_{\rm p} = 6$ km s$^{-1}$) are shown in figure \ref{fig:subshock}. The top left panel of each figure shows the temperature of the radiation field (solid line) and the gas (dotted line) against position, and the top right shows the same quantities against optical depth $\tau$, with $\tau=0$ set at the shock front (measured from the density distribution). The bottom left panel shows the normalised flux, and the bottom right the value of the Eddington factor. The analytic solutions discussed by \citet{SGM1999} and \citet{Zeldo} for the temperatures and fluxes of the shocks are shown with long-dashed lines. Figures \ref{fig:subshock} and \ref{fig:supershock} are plotted using the explicit code. In subpanels beneath the main panels, we compare the results from the implicit code with the explicit results. Calculations were performed using the implicit code with timesteps of one, one tenth and one hundredth the hydrodynamical timestep criteria. In the subpanels, we plot the differences of the implicit results with respect to the explicit results: we divide the difference between the implicit and explicit values by the explicit value to obtain a fractional error, and take the logarithm of the absolute value of this fraction. Thus, a difference of $-2$ on the subpanels corresponds to an error of 1 percent with respect to the explicit result. The subpanels in figure \ref{fig:subshock} show the log fractional error of the implicit code with (bottom subpanels) and without (top subpanels) the prediction of $x$, $h$ and $\rho$ discussed in section \ref{sec:prediction}. Similarly, figure \ref{fig:supershock} shows the super-critical shock in the same way, excluding (top) and including (bottom) the prediction mentioned in section \ref{sec:prediction}. The prediction of $x$, $h$ and $\rho$ makes the sub-critical shock significantly more accurate for hydrodynamical timesteps, whilst having a smaller benefit for the super-critical shock. Figure \ref{fig:supershockres} shows how increasing the resolution of the simulation enables us to resolve the spike in gas temperature at the shock front in a super-critical shock. \begin{figure} \centerline{\psfig{figure=fig5.eps,width=16.0truecm}} \caption{\label{fig:supershock} The super-critical shock, with piston velocity $1.6 \times 10^6$ cm s$^{-1}$ and 100 particles. The subpanels show the results with (bottom) and without (top) predicting $x$, $h$ and $\rho$ forwards in time.
This shock is strong enough for radiation from the shock to preheat the gas upstream. See figure \ref{fig:subshock} for details of the line meanings.} \end{figure} \begin{figure} \centerline{\psfig{figure=fig6.eps,width=16.0truecm}} \caption{\label{fig:supershockres} The super-critical shock, with piston velocity $16$ km s$^{-1}$, showing how changing the resolution affects the spike in gas temperature at the shock. All calculations use a hydrodynamical timestep and the implicit code. The solid line is with 100 particles, the dotted line with 200 particles, and the long-dashed with 500 particles. The short-dashed line shows the results with 500 particles and double the number of neighbours (16 instead of 8).} \end{figure} The new implicit method is many times faster than the old method. In \citet{WB2004}, the super-critical shock with the hydrodynamic timestep criteria took nearly ten days. The new code performed the same calculation on a comparable CPU in two minutes, making the new algorithm approximately $10^4$ times faster. A similar improvement in performance can be seen with the sub-critical shock -- the hydrodynamic timestep run took over twenty-three days for \citet{WB2004}, while the present method ran in four minutes, again yielding a speed increase of $\approx 10^4$ times. \subsection{Radiation-dominated shock} In material of high optical depth the radiation generated in a shock cannot diffuse away at a high rate, and so the radiation becomes confined in a thin region adjacent to the shock. \citet{TS2001} performed a calculation that tests whether the shock thickness is what one would expect in these circumstances. An extremely high Mach number shock (Mach number of 658) is set up, with the gas on the left having an initial density of $\rho = 0.01$ g cm$^{-3}$, opacity $\kappa=0.4$ cm$^2$ g$^{-1}$, temperature $T_{\rm r} = T_{\rm g} = 10^4$ K, and speed $10^9$ cm s$^{-1}$. The gas on the right has density $\rho = 0.0685847$ g cm$^{-3}$, opacity $\kappa=0.4$ cm$^2$ g$^{-1}$, temperature $T_{\rm r} = T_{\rm g} = 4.239 \times 10^7$ K, and speed $1.458 \times 10^8$ cm s$^{-1}$. The density contrast is set initially by using particles of different mass; however, as the simulation evolves, particles from the left enter the shock and, by the time the figure is plotted, all the particles shown have the same mass. The locations of the boundaries move with the same speed as their respective particles, and the properties of the ghost particles outside these boundaries are fixed at their initial values. We use the hydrodynamical timestep with a Courant number of $\zeta = 0.03$. Initially, 1500 particles are equally spaced over a domain extending from $x=-6 \times 10^5$ cm to $x=1.5 \times 10^5$ cm, with the discontinuity at $x = 0.5 \times 10^5$ cm. The location of the shock should be fixed in this frame, although individual particles flow through the shock. After a period where a transient feature forms at the shock front and drifts downstream with the flow, a stable shock is established. Its thickness is expected to be roughly equal to the distance $l = c \lambda / \left( \kappa \rho u_1 \right)$, where $u_1$ is the speed of material flowing into the shock front. \begin{figure} \centerline{\psfig{figure=fig7.eps,width=16.0truecm}} \caption{\label{fig:radthick} The radiation-dominated shock at time $t= 5 \times 10^{-4}$~s. We plot the velocity, radiation and gas energy densities and gas density versus position. The vertical dashed lines show the expected shock thickness. Gas flows into the shock from the left.
The after-effects of the transient that occurs at the start of the calculation can be seen at the far right of the plots.} \end{figure} Figure \ref{fig:radthick} shows the results at a time $t= 5 \times 10^{-4}$~s of the radiation energy density $E$, gas energy density $e$, velocity $v$, and density $\rho$ versus position. The vertical dashed lines indicate the expected shock thickness, and the SPH results are in good agreement. The after-effects of the transient moving downstream can again be seen on the right of the plots of density and gas energy density. \citet{WB2004} were unable to run this test case with the large timestep used here because it required an unacceptable amount of computational time. The calculation presented here required approximately a week. \section{Conclusions} We have presented a more efficient method for performing radiative transfer in the flux-limited diffusion approximation within the SPH formalism. This gives a speed increase of many thousands of times over the code of \citet{WB2004}. In every test, the new implicit code is much faster than the old code for large implicit timesteps, with no loss in accuracy. Whilst the method described here is presented in one dimension, the incorporation of the algorithm into a three-dimensional code is easily accomplished. The major difference between the one- and three-dimensional algorithms is the form of the radiation pressure, which involves a more complicated tensor equation. We are performing simulations in three dimensions using this algorithm, which will be published in due course. \section*{Acknowledgments} SCW acknowledges support from a PPARC postgraduate studentship. MRB is grateful for the support of a Philip Leverhulme Prize. \bibliographystyle{mn2e}
\section{Introduction} Let $(S,\Sigma,\mu)$ be a probability space and $P(x,A)$ a transition probability on $S\times\Sigma$ with Markov operator $Pf(x):= \int f(y)P(x,dy)$ for bounded measurable $f$. We call $P$ {\it bi-stochastic} on $(S,\Sigma,\mu)$ when $\mu$ is {\it invariant} for $P$, i.e. $\int P(x,A)d\mu(x) = \mu(A)$ for every $A \in \Sigma$. In that case, if $f$ is bounded and $f=0$ a.e. $\mu$, then $Pf=0$ a.e., so $P$ defines an operator (still denoted by $P$) on $L^\infty(\mu)$, and the invariance of $\mu$ yields $$ \|Pf\|_1:= \int Pf(x) d\mu(x) = \int f d\mu = \|f\|_1 \quad \text{for } 0 \le f\in L^\infty(\mu). $$ In this case $P$ clearly extends to a contraction of $L^1(\mu)$, and then $P$ is a contraction of each $L^p(\mu)$, $1\le p \le \infty$ \cite[p. 65]{Kr} (or see \cite[Corollary VI.10.12]{DS}), and for $1\le p <\infty$ the power averages $\{A_n(P) := \frac1n\sum_{k=0}^{n-1} P^k \}_{n\ge 1}$ converge in the strong operator topology of $L^p(\mu)$, and a.e. \cite[Theorem 1.7.2]{Kr}. We say that $P$ is {\it ergodic} if $Pf=f \in L^\infty(\mu)$ implies that $f$ is constant a.e. When $P$ is ergodic and bi-stochastic, $\lim A_n(P)f= \int f\,d\mu$ a.e. and in $L^p$-norm, for any $f \in L^p(\mu)$, $1 \le p < \infty$. \smallskip {\bf Definition.} Let $(S,\Sigma,\mu)$ be a probability space and $1 \le p < \infty$. A bounded operator $T$ on $L^p(S,\mu)$ is called {\it hyperbounded} if for some $q>p$ the operator $T$ maps $L^p(S,\mu)$ into $L^q(S,\mu)$. As observed in \cite{G2}, a hyperbounded $T$ maps $L^p$ to $L^q$ continuously, by the closed graph theorem. Note that if $T$ maps $L^p$ to $L^\infty$, then it maps $L^p$ to $L^q$ for any $p<q< \infty$, since $\|Tf\|_q \le \|Tf\|_\infty \le C\|f\|_p$. \smallskip Gl\"uck \cite[Theorem 1.1]{G2} proved the following. \begin{theo} \label{gluck} Let $1<p< \infty$ and let $T$ be a power-bounded positive operator on $L^p(S,\mu)$. If $T$ is hyperbounded, then the essential spectral radius of $T$ (as an operator on the complex $L^p$) is less than 1. \end{theo} A Markov operator $P$ on a probability space $(S,\Sigma,\mu)$ is called {\it hyperbounded} if $\mu$ is invariant (i.e. $P$ is bi-stochastic), and for some $1< p <q \le \infty$ the operator $P$ maps $L^p(\mu)$ to $L^q(\mu)$ (i.e. $P$ is hyperbounded on $L^p(\mu)$). Since all $L^p(S,\mu)$ spaces are invariant under $P$ and therefore under all its powers, it follows that {\it all the powers of a hyperbounded Markov operator are hyperbounded}. \medskip We deduce from Gl\"uck's result (Theorem \ref{gluck}) that a hyperbounded bi-stochastic Markov operator is uniformly ergodic in all $L^r(\mu)$ spaces, $1<r < \infty$. We prove, using a method similar to Foguel's \cite{F1}, \cite{F3}, that an ergodic hyperbounded Markov operator has a periodic behavior similar to that of ergodic Harris recurrent operators, and obtain conditions for aperiodicity. \smallskip A probability $\nu$ on the unit circle $\mathbb T$ defines a Markov operator $P_\nu$ by $P_\nu f =\nu*f$, for which the normalized Lebesgue measure $\mu$ on $\mathbb T$ is invariant. We prove that if $P_\nu$ is hyperbounded, then $\nu$ has no atoms. We show that there exists an absolutely continuous $\nu$ (so $P_\nu$ is uniformly ergodic in $L^2(\mathbb T)$ by \cite[Theorem 4.4]{DL}) which is not hyperbounded, and that there are singular $\nu$ such that $P_\nu$ is hyperbounded and not Harris recurrent.
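For orientation, we note the simplest instance of a hyperbounded convolution operator (a standard observation, included here as an illustration): if $\nu$ has a bounded density $w$ with respect to $\mu$, then $$ |P_\nu f(x)| = \Big| \int_{\mathbb T} f(x-t)w(t)\,d\mu(t) \Big| \le \|w\|_\infty \|f\|_1, $$ so $P_\nu$ maps $L^1(\mathbb T)$ into $L^\infty(\mathbb T)$ and, in particular, is hyperbounded; the interesting cases are therefore measures without a bounded density.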
\medskip \section{Uniformly ergodic positive operators on reflexive Banach lattices} A bounded linear operator $T$ on a Banach space $X$ is called {\it uniformly ergodic} if its power averages $\{A_n(T) \}_{n\ge 1}$ converge in the operator norm topology. An operator $T$ is uniformly ergodic if and only if $n^{-1}\|T^n\| \to 0$ and $(I-T)X$ is closed \cite{L1}. When $X$ is over $\mathbb C$, the powers $\{T^n\}$ converge in operator norm if and only if $T$ is uniformly ergodic and $\sigma(T)\cap \mathbb T \subset \{1\}$ \cite[Theorem 4]{Lu}; in that case, $T$ is power-bounded and $r(T_{|(I-T)X}) <1$. Whether $X$ is over $\mathbb R$ or over $\mathbb C$, operator norm convergence of $T^n$, say to some $E$, is exponential: there exist $C>0$ and $\rho<1$ such that $\|T^n-E\| \le C\rho^n$ (see \cite[Proposition 3.1]{DL}). \smallskip A bounded linear operator $T$ on $X$ is called {\it quasi-compact} if there exists a compact operator $K$ such that $\|T^n-K\|<1$ for some $n \ge 1$. If $T$ is quasi-compact and $\frac1n T^n \to 0$ in the weak operator topology, then $T$ is uniformly ergodic, $\sigma(T) \cap \mathbb T$ is finite, and each $\lambda \in \sigma(T) \cap \mathbb T$ is a simple pole of the resolvent (hence an eigenvalue) with finite-dimensional eigenspace \cite[p. 711]{DS}. It is known \cite[Lemma 2.2.4, p. 88]{Kr} that $T$ is quasi-compact if and only if there is a sequence $\{K_n\}$ of compact operators such that $\|T^n -K_n\| \to 0$; consequently the powers of a quasi-compact operator are quasi-compact. Conversely, if $T^m$ is quasi-compact for some $m>1$, then $T$ is quasi-compact. \begin{prop} \label{qc} Let $T$ be a {\rm positive} power-bounded operator on a complex Banach lattice $L$. If $T$ is quasi-compact, then there exists an integer $d \ge 1$ such that each (of the finitely many) $\lambda \in \sigma(T) \cap \mathbb T$ is a {\rm d}th root of unity, $\sigma(T^d) \cap \mathbb T \subset \{1\}$, and $(T^{nd})$ converges in operator norm as $n \to \infty$. Moreover, each $\lambda \in \sigma(T) \cap \mathbb T$ is an eigenvalue, with finite-dimensional eigenspace. \end{prop} \begin{proof} Power-boundedness implies $r(T) \le 1$, so we need only consider the case $r(T)=1$. \smallskip By \cite[Theorem VIII.8.3]{DS}, the peripheral spectrum $\sigma(T) \cap \mathbb T$ consists of finitely many points, which are simple poles, hence eigenvalues, with finite-dimensional corresponding eigenspaces. Since $T$ is power-bounded and positive, by a result of Lotz \cite[p. 327, Theorem 4.9]{Sc} the peripheral spectrum $\sigma(T) \cap \mathbb T$ is cyclic, i.e.\ $\lambda^n$ is in the peripheral spectrum when $\lambda$ is. Since $\sigma(T)\cap \mathbb T$ is finite, each $\lambda \in \sigma(T) \cap \mathbb T$ must be a root of unity. Thus $\sigma(T)\cap \mathbb T$ is a finite set of roots of unity; let $d$ be the least common multiple of their orders. By the spectral mapping theorem, we have $\sigma(T^d) \cap \mathbb T = \{1\}$. Since $T^d$ is also quasi-compact, it is uniformly ergodic, so $(T^{nd})$ converges in operator norm, as $n \to \infty$, by \cite[Theorem 4]{Lu}. \end{proof} {\bf Remarks.} 1. For the cases $L=C(S)$ for a compact Hausdorff space $S$ or (by duality) $L=L^1(\mu)$, the existence of $d$ is proved in \cite[Lemma VIII.8.5]{DS}. 2. Without positivity the proposition is false, even in finite-dimensional spaces. \begin{cor} \label{bartoszek} Let $T$ be a positive power-bounded operator on a complex Banach lattice $L$.
Then $T$ is quasi-compact if and only if for some integer $d\ge 1$ and a finite-dimensional projection $E$ we have $\|T^{nd}-E\| \to 0$. \end{cor} {\bf Remark.} For contractions the corollary was proved by Bartoszek \cite[Theorem 2]{Ba}, who gave a representation of the limit (see Section 4). \smallskip \begin{prop} \label{UE} Let $T$ be a positive power-bounded operator on a complex Banach lattice $L$. If $T$ is uniformly ergodic and $F(T):= \{ f \in L:\, Tf=f \}$ is finite-dimensional, then $T$ is quasi-compact, and has all the properties of Proposition \ref{qc}. \end{prop} \begin{proof} Since $T$ is positive and uniformly ergodic with $F(T)$ finite-dimensional, by \cite[Theorem 1]{L2} $T$ is quasi-compact. The existence of $d$ and the other properties follow from Proposition \ref{qc}. \end{proof} {\bf Remark.} Without positivity the proposition is false (take $-I$ on an infinite-dimensional $L$); moreover, $T^2$ need not be uniformly ergodic when $T$ is \cite{L1a}. \smallskip {\bf Definition.} Let $1 \le p< \infty$. A power-bounded linear operator $T$ on $L^p(S,\mu)$ is said to be {\it uniformly integrable} in $L^p$ (in short, p-UI) if $\{|Tf|^p:\, \|f\|_p \le 1\}$ is uniformly integrable. When $p=1$, uniform integrability of $T$ means that $T$ is weakly compact; hence $T^2$ is compact in $L^1$ \cite[Corollary VI.8.13]{DS}, and $F(T)\subset F(T^2)$ is finite-dimensional. However, $T$ may be weakly compact in $L^1$ without being compact \cite{YMK}. Wu \cite[Proposition 1.4]{Wu} proved that if $T$ is compact or hyperbounded in $L^p$, then $T$ is p-UI. Gl\"uck \cite[Corollary 2.3]{G2} proved that if $T$ is a {\it positive} power-bounded operator on $L^p$ such that $T^m$ is p-UI for some integer $m \ge 1$, then the essential spectral radius of $T$ is less than 1. \smallskip {\bf Remarks.} 1. An example in \cite[Remark 1.1(2)]{BWY} shows that we may have a bi-stochastic Markov operator $P$ with $P^n$ compact in $L^2$ (so $P^n$ is 2-UI) for every large $n$, but $P$ itself is not 2-UI (take $P=P_{t_1}$ for some $t_1\in (0,r_0)$ there; this $P$ is quasi-compact, hence uniformly ergodic, in $L^2$, but not 2-UI). Theorem 1.1 of \cite{BWY} yields that in that example $P_t$ is even hyperbounded in $L^2$ for large $t$, so $P$ as above is not hyperbounded in $L^2$, though $P^n$ is for large $n$. By Proposition \ref{all-Lp} below, for any $1<p< \infty$, $P$ is not hyperbounded in $L^p$, though $P^n$ is for large $n$. 2. El Machkouri, Jakubowski and Voln\'y \cite{MJV} have an example of a 2-UI bi-stochastic Markov operator which is not hyperbounded in $L^2$. \begin{prop} \label{fixed-space} Let $1 < p< \infty$ and let $T$ be a positive power-bounded operator on $L^p(S,\Sigma,\mu)$. Set $Ef = \lim_n \frac1n\sum_{k=1}^n T^kf$, and assume that $T$ is p-UI. (i) If $Tf \ge f \ge 0$ implies $Tf=f$ a.e., then $F(T)$ is finite-dimensional. (ii) If $Ef \not\equiv 0$ for $0 \le f \not\equiv 0$, then $F(T)$ is finite-dimensional. (iii) If $\|T\| \le 1$, then $F(T)$ is finite-dimensional. \end{prop} \begin{proof} If $E=0$ we have $F(T)=\{0\}$; we therefore assume $E \ne 0$. (i) If $g \in F(T)$, then $|g|=|Tg| \le T|g|$, so by the hypothesis of (i) $T|g|=|g|$, i.e. $|g| \in F(T)$; thus $F(T)$ is a sublattice of $L^p(\mu)$. Then by \cite[Corollary 4.3]{BL}, $F(T)$ is the range of a positive contractive projection. Hence by \cite[Theorem 4.1]{BL}, there is a measure space $(S',\Sigma',\nu)$ such that $F(T)$ is isometrically isomorphic to $L^p(S', \Sigma',\nu)$.
Since $T$ is p-UI and on $F(T)$ it is the identity, we obtain that the identity on $L^p(\nu)$ is p-UI, which means that the unit ball of $L^1(\nu)$ is weakly compact. This means that $L^1(\nu)$ is reflexive, so it is finite-dimensional. Hence $F(T)$ is finite-dimensional. (ii) The assumption implies (i), by the proof of \cite[Proposition III.8.4(i)]{Sc}. (iii) If $Tf \ge f \ge0$, then $\|f\| \le \|Tf\| \le \|f\|$, so $Tf=f$ a.e. and (i) applies. (Note that the ergodic limit $E$ is a positive contractive projection on $F(T)$. We can then apply directly \cite[Theorem 4.1]{BL}, and complete the proof as above.) \end{proof} {\bf Remarks.} 1. For bi-stochastic p-UI Markov operators, (iii) was proved by Wu \cite[Corollary 3.6(b)]{Wu}. 2. Gl\"uck \cite[Lemma 4.2]{G2} proved, without the assumption on $E$ in (i), that for $T$ hyperbounded as in Proposition \ref{fixed-space}, $F(T)$ is finite-dimensional. \begin{cor} \label{ue} Let $1 < p< \infty$ and let $T$ be a positive power-bounded operator on $L^p(S,\mu)$. If $T$ is p-UI, then it is uniformly ergodic. If in addition $F(T)$ is finite-dimensional, in particular if $\|T\| \le 1$ or $T$ is hyperbounded, then $T$ is quasi-compact, $\sigma(T) \cap \mathbb T$ is finite, all its points are eigenvalues with finite-dimensional eigenspaces, and for some integer $d \ge 1$ all these eigenvalues are {\rm d}th roots of unity. \end{cor} \begin{proof} If $1 \notin \sigma(T)$, then $I-T$ is invertible and $A_n= \frac1n (I-T)^{-1}T(I-T^n) \to 0$, with $F(T)=\{0\}$. Otherwise, by \cite[Corollary 2.3]{G2}, the essential spectral radius satisfies $r_{ess}(T)<1$, so if $1 \in \sigma(T)$, then it is a pole of the resolvent, and by Dunford's uniform ergodic theorem \cite[p. 648]{D} $T$ is uniformly ergodic. When $F(T)$ is finite-dimensional, which is the case if $T$ is hyperbounded by \cite{G2}, or when $\|T\| \le 1$ by Proposition \ref{fixed-space}, then $T$ is quasi-compact by \cite{L2}. The other assertions follow from Proposition \ref{UE}. \end{proof} {\bf Remarks.} 1. For additional information on quasi-compact positive {\it contractions} of $L^p$, see \cite[Theorem 2]{Sc2}. 2. Proposition \ref{no-hyper} below presents an aperiodic Harris recurrent symmetric bi-stochastic Markov operator which is uniformly ergodic in $L^2$, but not hyperbounded on $L^2$. Corollary \ref{not-hyper} and Proposition \ref{not-hyper2} below present ergodic bi-stochastic Markov operators, defined by convolutions on $\mathbb T$, which (by \cite{DL}) are uniformly ergodic on $L^2(\mathbb T)$ (hence quasi-compact by \cite{L2}), but are not hyperbounded on $L^2(\mathbb T)$. Note that uniform ergodicity in $L^2$ of an ergodic bi-stochastic Markov operator does not imply Harris recurrence \cite{DL}. 3. If $T$ and $S$ are {\it commuting} operators on $X$, with $T$ quasi-compact and $S$ a contraction, then $TS$ is quasi-compact. \begin{lem} \label{product} Let $T$ and $S$ be bounded operators on $L^p(S,\mu)$, $1 \le p< \infty$. If $T$ is hyperbounded, so is $TS$. \end{lem} \begin{proof} If $T$ maps $L^p$ into $L^q$ with $q>p$, then $TSf=T(Sf) \in L^q$ for every $f \in L^p$, since $Sf \in L^p$. \end{proof} \medskip \section{Limit theorems for $L^2$ quasi-compact Markov operators} Let $P$ be a bi-stochastic hyperbounded Markov operator on a probability space $(S,\Sigma,\mu)$. Recall that $P$ maps each $L^p(S,\mu)$ to itself, $1 \le p \le \infty$. If $P$ maps $L^1(\mu)$ to $L^q(\mu)$, then for any $1<p<q$ we have $\|Pf\|_q \le C\|f\|_1 \le C\|f\|_p$, so $P$ maps $L^p$ to $L^q$. Similarly, if $P$ maps $L^p$ to $L^\infty$, then for any $p<q<\infty$ it maps $L^p$ to $L^q$.
Thus, the standing assumption for hyperbounded Markov operators is, without loss of generality, $1<p<q<\infty$. \smallskip Let $P$ be a Markov operator with invariant probability $\mu$. The transition probability $P(x,A)$ is not used in the definition of hyperboundedness; only the facts that $P$ preserves positivity, is a contraction of $L^1(\mu)$ preserving integrals, and satisfies $P1=1$ are used. The dual operator $P^*$ satisfies the same properties (see \cite[p. 75]{F2} or \cite[p. 131]{Kr}), and will be called the {\it dual Markov} operator, though it need not be given by a transition probability (unless some regularity assumptions on the measure space are made). \begin{lem} \label{product-hyper} Let $P$ and $Q$ be bi-stochastic Markov operators on a probability space $(S,\Sigma,\mu)$. If $P$ is hyperbounded, so are $PQ$ and $QP$. \end{lem} \begin{proof} If $P$ maps $L^p$ to $L^q$ ($q>p$), clearly also $PQ$ and $QP$ map $L^p$ to $L^q$. \end{proof} {\bf Remark.} In Proposition \ref{product=hyper} we show two commuting bi-stochastic Markov operators which are not hyperbounded, but their product is. \medskip For the sake of completeness, we prove the following fact, observed in \cite[p. 1855]{MJV} (and in \cite{Ha} for convolution operators). \begin{prop} \label{all-Lp} Let $P$ be a hyperbounded bi-stochastic Markov operator on a probability space $(S,\Sigma,\mu)$. Then $P$ and $P^*$ are hyperbounded in each $L^r$, $1<r<\infty$. \end{prop} \begin{proof} Let $P$ map $L^p$ to $L^q$, $1<p<q<\infty$. Assume first that $1<r<p$. Since $P$ maps $L^1$ into itself and $L^p$ into $L^q$, we define $\theta \in(0,1)$ by $\frac1r=\theta +\frac{1-\theta}p$, and then $\frac1s=\theta +\frac{1-\theta}q$, and conclude from the Riesz-Thorin theorem \cite[Theorem VI.10.11]{DS} that $P$ maps $L^r$ to $L^s$, and $s>r$ since $q>p$. When $p< r< \infty$, we use for the interpolation the fact that $P$ maps $L^\infty$ into itself; then $P$ maps $L^r$ to $L^s$, where $s=rq/p$. We omit the details. Since $P$ maps $L^p$ to $L^q$, $P^*$ is hyperbounded, mapping $L^{q'}$ to $L^{p'}$ (with $p'$ and $q'$ the dual indices). Now apply the previous part to $P^*$. \end{proof} {\bf Remarks.} 1. By Corollary \ref{ue} a hyperbounded Markov operator which maps $L^p$ to $L^q$ with $1<p<q$ is quasi-compact, hence uniformly ergodic, in $L^p$. Hence, by Proposition \ref{all-Lp}, a hyperbounded Markov operator is quasi-compact and uniformly ergodic in all $L^r$ spaces, $1<r<\infty$. See also Corollary \ref{ros2}. 2. A hyperbounded $P$ as above need not be hyperbounded in $L^1$; see the remarks following Proposition \ref{Lr-deriv}. \begin{cor} Convex combinations of two hyperbounded Markov operators on $(S,\Sigma,\mu)$ are hyperbounded. \end{cor} \begin{cor} \label{p-2} Let $P$ be hyperbounded on $(S,\Sigma,\mu)$. Then there exists $1\le p<2$ such that $P$ maps $L^p$ to $L^2$. \end{cor} \begin{proof} By Proposition \ref{all-Lp}, $P^*$ is hyperbounded in $L^2$, so maps $L^2$ to $L^q$ for some $q>2$. Then by duality $P$ maps $L^p$, $p=q/(q-1)$, to $L^2$. \end{proof} \begin{cor} Let $P$ be bi-stochastic on $(S,\Sigma,\mu)$. The symmetrized operator $P_s:=\frac12(P+P^*)$ is hyperbounded if and only if $P$ is hyperbounded. \end{cor} \begin{proof} Let $P$ be hyperbounded. By Proposition \ref{all-Lp} both $P$ and $P^*$ are hyperbounded on $L^2$, hence so is $P_s$.
Conversely, if $P_s$ maps $L^p$ to $L^q$ with $q>p$, then $Pf\le 2P_s f$ for $f \ge 0$ yields $$ \|Pf\|_q \le \|P|f|\,\|_q \le 2\|P_s |f|\,\|_q \le 2 \|P_s\|_{L^p \to L^q}\|f\|_p \ , $$ so $P$ is hyperbounded. \end{proof} \begin{prop} \label{ui} Let $P$ be a bi-stochastic Markov operator on a probability space $(S,\Sigma,\mu)$. If $P$ is p-UI for some $1\le p< \infty$, then $P$ and $P^*$ are r-UI and quasi-compact in each $L^r$, $1<r<\infty$. \end{prop} \begin{proof} For $1<r< \infty$, we apply \cite[Proposition 1.2(e)]{Wu}, with $q=1$ when $1<r<p$, and with $q=\infty$ when $r>p$, to conclude that $P$ is r-UI. By Corollary \ref{ue}, $P$ is uniformly ergodic in $L^r$. Since by \cite[Corollary 3.6(b.iii)]{Wu} $F(P)$ is finite-dimensional, $P$ is quasi-compact by \cite{L2}. Fix $1<r<\infty$. By \cite[Proposition 1.2(c) and Remark 1.3(b)]{Wu}, $P^*$ is r-UI in $L^r$, since by the above $P$ is $r'$-UI for $r'=r/(r-1)$, and the previous part of the proof applies to $P^*$ on $L^r$. \end{proof} {\bf Remark.} Recall that a bi-stochastic $P$ may be quasi-compact in $L^2$ and not 2-UI \cite{BWY}, and it may be 2-UI and not hyperbounded \cite{MJV}. \begin{prop} \label{rosenblatt} Let $P$ be a bi-stochastic Markov operator on a probability space $(S,\Sigma,\mu)$. Let $Ef =\lim A_n(P)f$ for $f \in L^1(\mu)$ define the projection on the integrable invariant functions (convergence in $L^r$ for $f \in L^r$, $1 \le r<\infty$). If $\|P^n-E\|_p \to 0$ for some $1\le p <\infty$, then for every $1<r<\infty$ we have $\|P^n-E\|_r \to 0$. \end{prop} \begin{proof} Fix $1 \le r< \infty$. By the mean ergodic theorem, $E$ on $L^r$ projects onto the invariant functions in $L^r$, with null space $Z_r:=\overline{(I-P)L^r}$, which is $P$-invariant. Let $R$ be the restriction of $P$ to $Z_r$. It is easy to see that $\|P^n-E\|_r \to 0$ is equivalent to $\|R^n\|_r \to 0$. We clearly have $PE=EP=E=E^2$. Define $U=P-E$. Then $UE=EU=0$, and $U^2=PU=P(P-E)$. By induction $U^n=P^{n-1}(P-E)$, so $\|U^n\|_r \le 2$ and also $\|U^n\|_\infty \le 2$. For $f \in L^r$ with $Ef=0$ (i.e. $f \in Z_r$) we have $Uf=Pf =Rf$, so $U^nf =R^nf$. Hence $\|R^n\|_r \le \|U^n\|_r$. For any $f\in L^r$, $(P-E)f \in Z_r$, so $U^n f= P^{n-1}(P-E)f = R^{n-1}(P-E)f$; hence $\|U^n\|_r\le 2\|R^{n-1}\|_r$. Let $p<r< \infty$. We denote by $R_p$ the restriction of $P$ to $Z_p$ and by $R_r$ the restriction to $Z_r$. Since all $L^r$ spaces are invariant under $U$, and $\|U^n\|_p \le 2$, $\|U^n\|_\infty \le 2$, by the Riesz-Thorin theorem \cite[Theorem (1.11), formula (1.14)]{Z} (see also \cite[Theorem VI.10.11]{DS}), there exists $\theta \in (0,1)$ such that $$ \|U^n\|_r \le \|U^n\|_p^\theta \|U^n\|_\infty^{1-\theta} \le 2^\theta \|R_p^{n-1}\|_p^\theta\, 2^{1-\theta} = 2\|R_p^{n-1}\|_p^\theta. $$ This yields $\|R_r^n\|_r \le \|U^n\|_r \le 2\|R_p^{n-1}\|_p^\theta \to 0$ by the assumption, which is equivalent to $\|P^n-E\|_r \to 0$. For $1<r<p$ we do the interpolation between 1 and $p$ (using $\|U^n\|_1 \le 2$). \end{proof} {\bf Remarks.} 1. For $P$ ergodic the result was proved by M. Rosenblatt \cite[Theorem VII.4.1]{R}; our proof is an adaptation of his. Note that the proof does not use positivity of $P$, nor $P1=1$, and applies to any contraction of $L^1(\mu)$ which is also a contraction of $L^\infty(\mu)$. 2. An example of Rosenblatt \cite[p. 213]{R} shows that in general, even for $P$ ergodic, the convergence $\|P^n-E\|_r \to 0$ for every $1< r<\infty$ does not imply convergence in $L^1$ operator norm. 3. Rosenblatt \cite[p.
211]{R} proved also that for $P$ ergodic, $\|P^n-E\|_1 \to 0$ is equivalent to the dual $P^*$ satisfying Doeblin's condition (so $P$ is Harris recurrent). An example in \cite{DL} shows that $P$ ergodic with $\|P^n-E\|_2 \to 0$ need not be Harris recurrent. 4. Rosenblatt \cite[Lemma VII.4.1]{R} proved that for $P$ ergodic, the stationary Markov chain with transition probability $P(x,A)$ and initial distribution $\mu$ is asymptotically uncorrelated if and only if $\|P^n-E\|_2 \to 0$. \begin{cor} \label{ros2} Let $P$ be a bi-stochastic Markov operator on a probability space $(S,\Sigma,\mu)$. If $P$ is uniformly ergodic in $L^p$ for some $1 \le p<\infty$, then $P$ is uniformly ergodic in every $L^r$, $1<r<\infty$. \end{cor} \begin{proof} Put $Q=\frac12(I+P)$. By Lemma 2.1 of Foguel and Weiss \cite{FW}, $\|Q^n(I-Q)\|_p \to 0$; since $P$ and $Q$ have the same invariants, uniform ergodicity in $L^p$ yields $\|Q^n -E\|_p \to 0$. By Proposition \ref{rosenblatt}, $\|Q^n-E\|_r \to 0$ for any $1<r< \infty$. Then $(I-Q)L^r=(I-P)L^r$ is closed in $L^r$, so $P$ is uniformly ergodic in $L^r$. \end{proof} {\bf Remarks.} 1. When $P$ is also ergodic, the corollary was proved in \cite[Corollary 3.5]{DL}. 2. By Corollary \ref{ros2}, a bi-stochastic $P$ is uniformly ergodic in some $L^p(\mu)$, $1<p<\infty$, if and only if it is {\it $L^2$-uniformly ergodic}. \smallskip {\bf Definition.} For a bi-stochastic Markov operator quasi-compact in $L^p(\mu)$, the value $d$ in Proposition \ref{qc} is called the {\it period} of $P$ (in $L^p$); we call $P$ {\it periodic} if $d>1$, and {\it aperiodic} when $d=1$. \begin{prop} \label{qc-Lp} Let $P$ be a bi-stochastic Markov operator on a probability space $(S,\Sigma,\mu)$. Let $Ef =\lim A_n(P)f$ for $f \in L^1(\mu)$ define the projection on the integrable invariant functions. If $P$ is quasi-compact in $L^p$ for some $1<p< \infty$, then it is quasi-compact in every $L^r$, $1 < r<\infty$, and the period of $P$ in $L^r$ equals its period in $L^p$ (we call this common period the {\rm period} of $P$). \end{prop} \begin{proof} Since $\mu$ is an invariant probability, $P$ is conservative (e.g. \cite[p. 117]{Kr}), and all integrable invariant functions are measurable with respect to the $\sigma$-algebra of invariant sets $\Sigma_i:= \{A\in \Sigma: P1_A=1_A\}$, since the limit $Ef$ in the ergodic theorem is the conditional expectation $E(f|\Sigma_i)$ (e.g. \cite[p. 128]{Kr}, or \cite[p. 80]{F2}). By quasi-compactness in $L^p$, $F(P) \cap L^p$ is finite-dimensional, so $L^p(S,\Sigma_i,\mu)$ is finite-dimensional. This implies that $\Sigma_i$ is finite modulo $\mu$, so for $1<r< \infty$ also $F(P)\cap L^r$ is finite-dimensional. By Corollary \ref{ros2} $P$ is uniformly ergodic in $L^r$, so by \cite{L2}, $P$ is quasi-compact on $L^r$. \smallskip By the above, $P$ is quasi-compact in $L^2$; let $d$ be its period in $L^2$. Fix $1<r< 2$, and let $d_r$ be the period of $P$ in $L^r$. If $f \in L^2$ satisfies $Pf = \lambda f$ with $\lambda \in \mathbb T$, then $\lambda^d=1$, and since $f \in L^r$, also $\lambda^{d_r}=1$. Minimality of the period yields $d \le d_r$. The dual of a quasi-compact operator is quasi-compact in the dual space, hence the dual Markov operator $P^*$ is quasi-compact in $L^r$, $1<r<\infty$, with the period of $P^*$ in $L^r$ the same as the period of $P$ in the dual $L^{r'}$ ($r'=r/(r-1)$). Now fix $2<r<\infty$, with $d_r$ the period of $P$ in $L^r$.
The above argument yields $d_r \le d$. But $r'<2$, so $d \le d_{r'}=d_{r}$; hence $d=d_r$ for $r>2$, and by duality $d=d_r$ also for $1<r<2$. \end{proof} \begin{cor} \label{invariants} Let $P$ be a bi-stochastic Markov operator on $(S,\Sigma,\mu)$. If $P$ is quasi-compact in $L^2$, then every integrable eigenfunction corresponding to a unimodular eigenvalue, in particular every integrable invariant function, is bounded. \end{cor} \begin{proof} We saw that $\Sigma_i$ is finite, so generated by finitely many atoms. If $Pf=f \in L^1$, then $f$ is $\Sigma_i$-measurable, so bounded. If $Pf=\lambda f$ with $|\lambda|=1$ and $f \in L^1(\mu)$, then $P|f| \ge |Pf|= |f|$, and since $\int(P|f|-|f|)\,d\mu=0$ by the invariance of $\mu$, we have $P|f|=|f|$; hence $|f|$ is $\Sigma_i$-measurable, so $f$ is bounded. \end{proof} {\bf Remark.} In general, the dimension of the eigenspace of a unimodular eigenvalue of a mean ergodic bi-stochastic Markov operator $P$ is not more than the dimension of $F(P)$, by \cite[Theorem 2]{L2}. In particular, if $P$ is ergodic, then the eigenspaces of unimodular eigenvalues are one-dimensional, and the eigenfunctions have constant absolute value. \begin{cor} \label{all-r} Let $P$ be a hyperbounded bi-stochastic Markov operator on $(S,\Sigma,\mu)$. Then for every $1<r<\infty$, $P$ is quasi-compact in $L^r$. \end{cor} \begin{proof} Use Corollary \ref{ue} and Proposition \ref{qc-Lp} when $P$ is hyperbounded on $L^p$. \end{proof} \begin{prop} \label{exponential} Let $P$ be a bi-stochastic Markov operator on a probability space $(S,\Sigma,\mu)$, and assume that for some $1<p<\infty$, $P$ is quasi-compact in $L^p(\mu)$ with period $d$. Let $E_d := \lim A_n(P^d)$ be the projection on $F(P^d)$. Then: (i) for any $1<r< \infty$ there exist $C_r>0$ and $\rho_r <1$ such that $\|P^{nd}-E_d\|_r \le C_r\rho_r^n$. (ii) If $f \in L^r(\mu)$, $1<r< \infty$, then $\lim_{n\to\infty} P^{nd} f=E_d f$ a.e. and in $L^r$. In particular, if $P$ is aperiodic, $P^nf \to Ef$ a.e. (iii) If $f \in L^1(\mu)$, then $\|P^{nd}f -E_df\|_1 \to 0$. \end{prop} \begin{proof} By Proposition \ref{qc-Lp} $P$ is quasi-compact in any $L^r$, $1<r<\infty$, with the same period $d$. By Proposition \ref{qc}, $P^{nd}$ converges in $L^r$ operator norm, necessarily to $E_d$ by the mean ergodic theorem. We now apply Proposition 3.1 of \cite{DL} to obtain the exponential rate (i). Since $P^d E_d=E_d$, for $1<r< \infty$ and $f \in L^r(\mu)$ (i) yields $$\sum_{n=1}^\infty \|P^{nd}f -E_df\|_1 \le \sum_{n=1}^\infty \|P^{nd}f -E_d f\|_r < \infty,$$ so by Beppo Levi $\sum_{n=1}^\infty |P^{nd}f -E_d f| < \infty$ a.e., which implies (ii). Since $\|P^{nd}f -E_df\|_1 \to 0$ for $f \in L^2$, say, (iii) follows by continuity. \end{proof} \begin{cor} \label{doeblin} Let $P$ be an ergodic hyperbounded bi-stochastic Markov operator with period $d$. If $P$ maps $L^1(\mu)$ to $L^q(\mu)$ for some $q>1$, then: (i) There exist $C_1>0$ and $\rho_1 <1$ such that $\|P^{nd}-E_d\|_1 \le C_1 \rho_1^n$. (ii) $P$ is uniformly ergodic in $L^1$. (iii) For any $f \in L^1(\mu)$, $\ \lim_{n\to\infty} P^{nd}f =E_df$ a.e. and in $L^1$. In particular, if $d=1$, then $\lim_{n\to\infty} P^{n} f=\int f\,d\mu$ a.e. for every $f \in L^1$. \end{cor} \begin{proof} We can assume $q< \infty$. Fix some $p \in(1,q)$. Then $P$ maps also $L^p(\mu)$ into $L^q(\mu)$; also $P^d$ maps $L^1$ and $L^p$ into $L^q$, and put $C:=\|P^d\|_{L^1\to L^q}$. By Corollary \ref{ue}, $P$ is quasi-compact in $L^p(\mu)$. (i) Fix $f \in L^1$.
We use $E_dP^d=P^dE_d=E_d=E_d^2$ and obtain $$ \|P^{nd}f-E_df\|_1 \le \|P^{nd}f -E_df\|_q = \|(P^{(n-1)d} -E_d)P^d(f-E_df)\|_q \le $$ $$ \|P^{(n-1)d} -E_d\|_q\|P^d(f-E_df)\|_q \le C \|P^{(n-1)d} -E_d\|_q\|f-E_df\|_1 \le 2 C \|P^{(n-1)d} -E_d\|_q\|f\|_1. $$ Hence $\|P^{nd}-E_d\|_1 \le 2C \|P^{(n-1)d}-E_d\|_q \le 2C\cdot C_q\rho_q^{n-1}$, by Proposition \ref{exponential}(i). (ii) follows from (i). (iii) follows from (i) and Beppo Levi's theorem. \end{proof} {\bf Remark.} Theorem \ref{gluck} does not apply to hyperbounded operators in $L^1$, so does not yield directly (ii) of the corollary. \medskip {\bf Example 1.} {\it Hyperbounded Markov operators.} On $(S,\mu)$ we define a Markov operator by $Pf(x)= \int k(x,y)f(y)d\mu(y)$, with a kernel $k(x,y) \ge 0$ satisfying $\int k(x,y)d\mu(y)=1$ for a.e. $x$ and $\int k(x,y)d\mu(x)=1$ for a.e. $y$. Then $P$ is bi-stochastic, hence a contraction of every $L^p(\mu)$, $p\ge 1$. If $k(x,y)$ is bounded, then $PL^1(\mu) \subset L^\infty(\mu)$, so $PL^p(\mu) \subset L^q(\mu)$ for any $1<p<q< \infty$; by \cite{Wu} $P$ is 1-UI (i.e. weakly compact in $L^1$; see also \cite[Exercise VI.9.57]{DS}). In fact, if for some $q>1$ and $M<\infty$ we have $\int |k(x,y)|^q d\mu(y) \le M$ for a.e. $x$, then $P$ maps $L^1$ to $L^q$, with $\|P\|_{L^1\to L^q} \le M$ \cite[Exercise VI.9.59]{DS}; hence $P$ is 1-UI. If $\int \int |k(x,y)|^q d\mu(x)d\mu(y) < \infty$ for some $q>2$, then $PL^2(\mu) \subset L^q(\mu)$ \cite[p. 480]{CM}; in fact, for $q':= \frac q{q-1} < 2$ we have $PL^{q'} \subset L^q$. By Proposition \ref{all-Lp}, $P$ is hyperbounded in every $L^r$, $1<r<\infty$. \medskip {\bf Remarks.} 1. The Markov operator $P$ defined in Example 1 is Harris recurrent (by the analytic definition in \cite[p. 492]{F3}; for the probabilistic definition see \cite{MT}). 2. When $P$ is an ergodic hyperbounded Markov operator mapping $L^1(\mu)$ to $L^q(\mu)$, then $P$ is given by a kernel (see \cite[Exercise VI.9.59]{DS}); hence $P$ is Harris recurrent. If in addition $P$ is aperiodic, then $P^*$ satisfies Doeblin's condition, by Corollary \ref{doeblin}(ii) and \cite[p. 212]{R}. 3. Let $P(x,A)$ define an ergodic hyperbounded Markov operator $P$ on $(S,\Sigma,\mu)$, and let $\{\xi_n\}$ be the stationary Markov chain with transition probability $P(x,A)$ and initial distribution $\mu$. By Corollary \ref{all-r}, $P$ is uniformly ergodic in $L^2(\mu)$, so for every $f \in L^2(S,\mu)$ with $\int f\,d\mu=0$ the sequence $\{ \frac1{\sqrt n} \sum_{k=1}^n f(\xi_k)\}$ satisfies the central limit theorem (CLT), by a result of Gordin and Lifshitz, see \cite{DL}. Davydov \cite{Da} constructed an aperiodic positive recurrent Markov chain with state space $\mathbb Z$, for which the above CLT fails for some $f \in L^2(\mu)$ with integral zero. The operator given by Davydov's transition matrix is Harris and not uniformly ergodic in $L^2$, hence not hyperbounded; its powers $P^n$ converge in the $L^2$-strong operator topology, but not in $L^2$-operator norm. 4. Let $\{\xi_n\}$ be the stationary Markov chain defined by an {\it aperiodic} ergodic 2-UI (in particular hyperbounded) $P(x,A)$ on $(S,\Sigma,\mu)$. By Proposition \ref{exponential}, $\|P^n-E\|_2 \to 0$. In this case, El Machkouri et al. \cite[Theorem 3.1]{MJV} proved a limit theorem for the distribution of $\{\frac1{B_n}\sum_{k=1}^n f(\xi_k)\}$ for appropriate $B_n$, when $f$ on $S$ has no variance but has heavy tails. 5. A hyperbounded bi-stochastic Markov operator need not be Harris recurrent. An example is given in Theorem \ref{riesz}.
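\smallskip {\bf Numerical illustration.} A minimal Python sketch (assuming {\tt numpy}; the bounded kernel $k(x,y)=1+\cos(2\pi(x-y))$ is an arbitrary choice made only for illustration) of the kernel operators of Example 1: a discretisation on a uniform grid of $[0,1]$, checking bi-stochasticity and the bound $\|Pf\|_\infty \le (\max k)\,\|f\|_1$ valid for bounded kernels.
\begin{verbatim}
import numpy as np

m = 200
x = (np.arange(m) + 0.5) / m        # midpoints of a uniform grid on [0,1]

# A bounded kernel with k >= 0 whose "integrals" in each variable are 1;
# k(x,y) = 1 + cos(2*pi*(x-y)) is one arbitrary such choice.
K = 1.0 + np.cos(2 * np.pi * (x[:, None] - x[None, :]))
assert np.allclose(K.mean(axis=1), 1.0)   # int k(x,y) dmu(y) = 1
assert np.allclose(K.mean(axis=0), 1.0)   # int k(x,y) dmu(x) = 1

def P(f):
    # discretised Pf(x) = int k(x,y) f(y) dmu(y), with dmu = dx on [0,1]
    return K @ f / m

f = np.random.randn(m)
# bounded kernel: P maps L^1 into L^infty, with |Pf| <= max(k) * ||f||_1
assert np.abs(P(f)).max() <= K.max() * np.abs(f).mean() + 1e-12
\end{verbatim}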
\medskip Under the assumptions of Theorem \ref{gluck}, if $\{T^n\}$ converges weakly, then (by Corollary \ref{ue}) it converges in operator norm ($d=1$); however, in general this is not so. \medskip {\bf Example 2.} {\it A periodic symmetric ergodic hyperbounded Markov operator.} Let $S=[0,1]$ with $\mu$ the Lebesgue measure, and define a bounded kernel by $k(x,y)=2$ on $([0,\frac12] \times [\frac12,1]) \cup ([\frac12,1] \times [0,\frac12])$ and zero elsewhere on $[0,1]\times [0,1]$. Then the corresponding (ergodic and symmetric) Markov operator $P$ as defined in Example 1 has $Pf = -f$ for $f=1_{[0,\frac12]} -1_{[\frac12,1]}$. Note that ``spectral gap'' in the sense of Miclo \cite{M} is only ``spectral gap {\it near} 1'' (as in \cite{G2}). \begin{prop} \label{no-hyper} There exists an aperiodic Harris recurrent symmetric Markov operator $P$ with $P^n$ convergent in $L^2$-operator norm, which is not hyperbounded. \end{prop} \begin{proof} Denote by $Q$ the operator of Example 2, and define $P=\frac12(I+Q)$. Clearly $Q$ is ergodic, and since $I-P=\frac12(I-Q)$, also $P$ is ergodic; symmetry of $Q$ implies that of $P$. Since $Q$ is defined by a kernel, $P$ is Harris recurrent. By the construction of Example 1, $Q$ maps $L^p$ into $L^q$ for any $1 \le p<q\le \infty$, so it is uniformly ergodic in every $L^p(\mu)$, $1<p<\infty$, by Corollary \ref{ue}. By Foguel and Weiss \cite[Lemma 2.1]{FW}, $\|P^n(I-P)\|_p \to 0$, so with the uniform ergodicity $P^n$ converges in $L^p$ operator norm ($1<p<\infty$). If $P$ were hyperbounded, it would map $L^p(\mu)$ to $L^q(\mu)$ for some $1<p<q< \infty$. Take $f \in L^p$ which is not in $L^q$. Since $Q$ maps $L^p$ into $L^q$ (see Example 1), $\frac12(I+Q)f=Pf \in L^q$ implies $f \in L^q$, a contradiction. Hence $P$ is not hyperbounded. \end{proof} \medskip \section{Cyclic behavior of ergodic bi-stochastic Markov operators} In this section $P$ is an {\it ergodic} Markov operator on $(S,\Sigma,\mu)$ with $\mu$ an invariant {\it probability}. Then $P$ is conservative, and by Hopf's pointwise ergodic theorem \cite[Theorem VIII.6.6]{DS}, \cite[p. 80]{F2}, \cite[Theorem 1.7.2]{Kr} we have a.e. convergence of the averages $A_n(P)f(x)$ for any $f \in L^1(\mu)$. On the other hand, let $P$ be a transition probability and $\tilde \mu$ a probability on $(S,\Sigma)$ such that $Pf=0$ $\tilde\mu$-a.e. whenever $f=0$ $\tilde\mu$-a.e. Then $L^\infty(\tilde\mu)$ is invariant under $P$, and $P$ on $L^\infty(\tilde\mu)$ is the dual of a positive contraction on $L^1(\tilde\mu)$. If $P$ on $L^\infty(\tilde \mu)$ is conservative, then a.e. convergence of the averages $A_n(P)f(x)$ (or even convergence of their integrals with respect to $\tilde\mu$) for every $f \in L^\infty$ implies the existence of a probability $\mu \sim \tilde\mu$ which is invariant \cite{LS}. This justifies our assumption that $P$ is bi-stochastic on $(S,\Sigma,\mu)$ for studying the convergence of the iterates $P^n$. \smallskip When $P$ is ergodic and bi-stochastic, it is quasi-compact in $L^p(\mu)$ if and only if it is uniformly ergodic in $L^p(\mu)$, by \cite{L2}. When $P$ is quasi-compact in $L^1(\mu)$ and aperiodic, then $P^*$ satisfies Doeblin's condition \cite[p. 211]{R}, hence both $P^*$ and $P$ are Harris recurrent. However, $L^2$-quasi-compactness does not imply Harris recurrence \cite{DL}. Even hyperboundedness of $P$ does not imply Harris recurrence (Theorem \ref{riesz} below).
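\smallskip {\bf Numerical illustration.} A minimal finite-state sketch (assuming {\tt numpy}) of the cyclic behavior studied in this section; the $4$-state doubly stochastic matrix below is a hypothetical toy analogue of Example 2, with the two halves of the state space playing the roles of two cyclically moved sets: $P^n$ does not converge, since $1_{A_0}-1_{A_1}$ is an eigenfunction for the eigenvalue $-1$, but $P^{2n}$ converges.
\begin{verbatim}
import numpy as np

# A_0 = {0,1}, A_1 = {2,3}; each block is mapped uniformly onto the other.
P = np.array([[0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0]])

# bi-stochastic: the uniform measure is invariant for P and for P*
assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)

# f = 1_{A_0} - 1_{A_1} satisfies Pf = -f, so P^n f alternates in sign
f = np.array([1.0, 1.0, -1.0, -1.0])
assert np.allclose(P @ f, -f)

# but the even powers converge: here P^2 is already the limit projection
P2 = P @ P
assert np.allclose(np.linalg.matrix_power(P, 8), P2)
print("period 2 cycle verified")
\end{verbatim}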
\smallskip {\bf Remark.} For $P$ bi-stochastic on $(S,\Sigma,\mu)$ (without assuming ergodicity), quasi-compactness in $L^p(\mu)$ for some $1\le p < \infty$ implies that the $\sigma$-algebra of invariant sets $\Sigma_i$ is finite (see the proof of Proposition \ref{qc-Lp}); hence the reduction to each of the finitely many atoms of $\Sigma_i$ will be an {\it ergodic} bi-stochastic Markov operator as above. \smallskip {\bf Definition.} A power-bounded operator $T$ on a Banach space $L$ is called {\it constrictive} if there exists a compact set $\mathcal K \subset L$ such that \begin{equation} \label{constrict} \text{dist}(T^n x,\mathcal K) \underset{n\to \infty}\to 0 \quad \text{ for every } \ \|x\| \le 1. \end{equation} By the definition, for every $x \in L$ the orbit $(T^nx)_{n\ge 1}$ is precompact, so $T$ is strongly almost periodic, hence mean ergodic. \smallskip $L^1$-constrictive bi-stochastic Markov operators on a probability space $(S,\Sigma,\mu)$ were introduced and studied by Lasota, Li and Yorke \cite{LLY}; for $L=L^1(\mu)$ they proved \begin{equation} \label{lly} \lim_{n\to\infty} \|T^n(x-\sum_{j=1}^r \varphi_j(x)y_j)\| = 0 \quad \text{for every } \ x \in L, \end{equation} with $y_j$ non-negative unit vectors with disjoint supports, and $T$ permutes the $(y_j)_{1\le j \le r}$. It follows that for some $d \le r!$, $\ T^{nd}x$ converges strongly to $\sum_{j=1}^r \varphi_j(x)y_j$. Obviously, if $P$ is ergodic with $P^nf \to \int f\,d\mu$ in $L^1(\mu)$, then it is constrictive. Komorn\'\i k \cite{Kom} proved \eqref{lly} when the Markov operator is only {\it weakly} constrictive in $L^1$, i.e. \eqref{constrict} holds with $\mathcal K$ only weakly compact. Bartoszek \cite[Theorem 2]{Ba} proved that {\it if $T$ is a quasi-compact positive contraction on a Banach lattice $L$, then $T$ is constrictive}, with the convergence in \eqref{constrict} {\it uniform over the unit ball}. In \cite[Theorem 1]{Ba} he proved that if $T$ is a constrictive positive contraction on $L$, then there exist $r$ positive unit vectors $y_1,\dots,y_r$ and $r$ positive functionals $\varphi_1,\dots,\varphi_r$ in $L^*$ such that \eqref{lly} holds, and $T$ permutes the $(y_j)_{1\le j\le r}$. Sine \cite{Si} used the de Leeuw--Glicksberg decomposition (see, e.g. \cite{Kr}) to study general constrictive contractions in Banach spaces. \medskip The next results are inspired by the work of Foguel \cite{F1}, \cite{F2}, \cite{F5}, \cite{F3}, \cite{F6}. \medskip {\bf Definition.} The {\it deterministic $\sigma$-algebra} of a Markov operator $P$ is defined by $$ \Sigma_D=\Sigma_D(P):= \{A\in \Sigma: \text{ for every } n\ge 1, \quad P^n 1_A=1_{A_n} \text{ for some }\ A_n \in \Sigma\}. $$ \noindent The proof that $\Sigma_D$ is a $\sigma$-algebra is in \cite[p. 7]{F2}. In general, $\Sigma_D(P^*) \ne \Sigma_D(P)$ \cite[p. 78]{F2}; in that example, $P^*$ is constrictive and $P$ is not. If $P$ is Harris recurrent, then $\Sigma_D$ is atomic \cite[p. 58]{F2} (proof corrected in \cite{F4}). It is shown in \cite[p. 106]{R} (see also \cite[p. 87]{F2}) that if $P$ is bi-stochastic on $(S,\Sigma,\mu)$, then $$ L^2(S,\Sigma_D(P),\mu) =\{g \in L^2(\mu): \|P^n g\|_2=\|g\|_2 \text{ for every } n\in\mathbb N\}. $$ \noindent It follows from \cite[Theorem A, p. 85]{F2} that if $f \in L^2$ satisfies $E (f|\Sigma_D)=0$, then $P^nf \to 0$ weakly in $L^2$. Consequently, if $E(f|\Sigma_D)=0$ and for some subsequence $P^{n_k}f$ converges in $L^2$-norm, then $\|P^nf\|_2 \to 0$.
In particular, if $P$ is $L^1$-constrictive, then $\|P^nf\|_2 \to 0$ whenever $E(f|\Sigma_D)=0$. In general, $\|P^n f\|_2 \to 0$ implies $E(f|\Sigma_D)=0$ \cite[p. 108]{R}. However, {\it $\|P^n f\|_2 \to 0$ for every $f$ with $E(f|\Sigma_D)=0$ if and only if the strong limit $\lim_{k \to \infty} P^{*k}P^k$ (which always exists -- \cite[p. 108]{R}) is a projection} \cite[Lemma 3, p. 108]{R}, which is then the projection $f \mapsto E(f|\Sigma_D)$; an example by Rosenblatt \cite[p. 113]{R}, with $\Sigma_D$ trivial (hence all powers ergodic, by Lemma \ref{k-invariants} below), shows that the above strong convergence need not hold in general. \begin{lem} \label{k-invariants} Let $P$ be an ergodic bi-stochastic Markov operator on $(S,\Sigma,\mu)$. Then for every $k \ge 1$ we have $$ \Sigma_{i,k}:= \{A \in \Sigma:\, P^k 1_A=1_A\} \subset \Sigma_D(P). $$ \end{lem} {\bf Remarks.} 1. The lemma was first proved in \cite[Lemma 2.1.8]{F5}. An accessible proof is in \cite[Lemma 1.2]{F6}. 2. The example of $P$ induced by an irrational rotation of the unit circle $\mathbb T$ shows that in general we may have all powers of $P$ ergodic, i.e. $\Sigma_{i,k}$ trivial for every $k$, while $\Sigma_D=\Sigma$. \begin{prop} \label{foguel} Let $P$ be an ergodic bi-stochastic Markov operator on $(S,\Sigma,\mu)$. Then $\Sigma_{i,k}$ is finite for any $k > 1$, and has at most $k$ atoms. Moreover, for fixed $k>1$ there exist $d|k$ and atoms $A_0,\dots,A_{d-1}$ of $\Sigma_{i,k}$ which are disjoint, generate $\Sigma_{i,k}$, and $P1_{A_j}=1_{A_{j+1}}$ for $0\le j<d$ (with $A_d=A_0$). \end{prop} \begin{proof} Fix $k$. If $\Sigma_{i,k}$ is not trivial, let $A \in \Sigma_{i,k}$ with $0<\mu(A)<1$. By Lemma \ref{k-invariants} $A \in \Sigma_D$, so there are $B_j,\ j=0,1,\dots,k-1$ with $P^j 1_A=1_{B_j}$, and $\mu(B_j)=\mu(A)$. The function $\sum_{j=0}^{k-1}P^j 1_A$ is $P$-invariant (since $P^k1_A=1_A$), hence constant by ergodicity, equal to its integral $k\mu(A)$; since $1_A \le \sum_{j=0}^{k-1}P^j 1_A$, we obtain $1_A \le k\mu(A)$. Hence $\mu(A) \ge 1/k$ for any $A \in \Sigma_{i,k}$ with $\mu(A)>0$. This implies that $\Sigma_{i,k}$ is atomic, with at most $k$ different (hence disjoint) atoms. \smallskip Now fix $k>1$, and let $A$ be an atom of $\Sigma_{i,k}$. By Lemma \ref{k-invariants}, $P1_A=1_{A_1}$, and clearly $A_1 \in \Sigma_{i,k}$, with $\mu(A_1)=\mu(A)>0$. We show that $A_1$ is an atom of $\Sigma_{i,k}$. Let $B \subset A_1$ with $\mu(B)>0$ be in $\Sigma_{i,k}$. Then $P^{k-1}1_B=1_C$ for some $C \in \Sigma_{i,k}$, and $P^{k-1}1_B \le P^{k-1}1_{A_1}=1_A$. But then $C \subset A$ with $\mu(C)=\mu(B)>0$, so $C=A$, and $1_B=P1_C=1_{A_1}$. Hence $A_1$ is an atom of $\Sigma_{i,k}$. Let $d$ be the smallest integer with $P^d1_A=1_A$, and set $1_{A_j}=P^j1_A$ for $0\le j<d$. By the above, the $A_j$ are atoms, and disjoint by minimality of $d$. Since $P^k1_A=1_A$, the minimality of $d$ yields $d|k$. By definition, $P1_{A_j}=1_{A_{j+1}}$. The function $\sum_{j=0}^{d-1}1_{A_j}$ is $P$-invariant, hence constant by ergodicity; by disjointness it is an indicator, so $\sum_{j=0}^{d-1}1_{A_j}=1$, i.e. $\cup_{0\le j<d}A_j =S$. Finally, if $B \in \Sigma_{i,k}$ then $B\cap A_j$ is $A_j$ or a null set, so $A_0,\dots,A_{d-1}$ generate $\Sigma_{i,k}$. \end{proof} The disjoint sets $A_0,\dots,A_{d-1}$ obtained in Proposition \ref{foguel} are {\it cyclically moved} by $P$, with period $d$, and form a {\it cycle}. Non-trivial cycles exist when $P^k$ is not ergodic for some $k>1$. If $f \in L^1(\mu)$ is supported in $A_0$, then $P^jf$ is supported in $A_j$. Note that Harris recurrence is not assumed. Note that if $A_0,\dots,A_{k-1}$ are disjoint with $P1_{A_j}=1_{A_{j+1}}$ for $0\le j \le k-1$ (with $A_k=A_0$), then $P^k1_{A_j}=1_{A_j}$, so $A_j \in \Sigma_{i,k}$ for $0\le j \le k-1$.
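\smallskip {\bf Numerical illustration.} A small sketch of the cycle structure of Proposition \ref{foguel} (assuming {\tt numpy}; the $6$-state chain with three $2$-point blocks is a hypothetical example chosen only for illustration): the block indicators are cyclically moved by $P$, are fixed by $P^d$, and the blocks are disjoint atoms covering $S$ with $\mu(A_j)=1/d$.
\begin{verbatim}
import numpy as np

d, b = 3, 2                        # d blocks A_0, A_1, A_2, each of size b
n = d * b
blocks = [list(range(j * b, (j + 1) * b)) for j in range(d)]

P = np.zeros((n, n))
for j in range(d):
    for i in blocks[j]:
        # each state of A_j sends its mass uniformly onto A_{j-1}, so that
        # the action on functions satisfies P 1_{A_j} = 1_{A_{j+1}} (mod d)
        P[i, blocks[(j - 1) % d]] = 1.0 / b

assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)

ind = [np.isin(np.arange(n), blocks[j]).astype(float) for j in range(d)]
for j in range(d):
    assert np.allclose(P @ ind[j], ind[(j + 1) % d])             # the cycle
    assert np.allclose(np.linalg.matrix_power(P, d) @ ind[j], ind[j])

assert np.allclose(sum(ind), np.ones(n))   # disjoint atoms covering S
print([float(u.mean()) for u in ind])      # each has measure 1/d
\end{verbatim}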
\smallskip {\bf Remark.} Foguel \cite[Theorem 2.1.10]{F5} proved Proposition \ref{foguel} for $P$ conservative and ergodic, without the assumption of an invariant probability. Since \cite{F5} is not readily available, we have included the proof for our situation. \begin{cor} \label{Lly} Let $P$ be an ergodic bi-stochastic Markov operator on $(S,\Sigma,\mu)$. Then $P$ is $L^1$-constrictive if and only if for some $d \ge 1\ $ $P^{nd}$ converges in the strong operator topology of $L^2(\mu)$, as $n \to \infty$. \end{cor} \begin{proof} If $P$ is $L^1$-constrictive (or even $L^1$-weakly constrictive), $L^1$-strong convergence of $P^{nd}f$ follows from \eqref{lly}. Since $P$ contracts $L^\infty$-norms, for bounded $f$ we have $L^2$ convergence of $P^{nd}f$, hence $P^{nd}$ converges in the $L^2$ strong operator topology. Conversely, if $P^{nd}$ converges strongly in $L^2$, it also does in $L^1$. The limit is a projection on the integrable $P^d$-invariant functions, which are $\Sigma_{i,d}$-measurable. By Proposition \ref{foguel}, $L^1(\Sigma_{i,d},\mu)$ is finite-dimensional, so its unit ball is compact in $L^1(\mu)$; hence $P$ is $L^1$-constrictive. \end{proof} {\bf Remark.} The result \eqref{lly}, proved for $L^1$ (weakly) constrictive bi-stochastic Markov operators in \cite{LLY} and in \cite{Kom}, is used only in Corollary \ref{Lly}. \begin{prop} \label{deterministic} Let $P$ be an ergodic bi-stochastic Markov operator on $(S,\Sigma,\mu)$. If $k \ge 1\ $ is an integer such that $P^{nk}$ converges strongly in $L^2$ as $n \to \infty$, then $$ \Sigma_D(P) = \Sigma_{i,k}:= \{A \in \Sigma:\, P^k 1_A=1_A\}. $$ \end{prop} \begin{proof} Every $P^k$-invariant integrable function is $\Sigma_{i,k}$-measurable, and by the ergodic theorem $E_k f := \lim_n A_n(P^k)f = E(f|\Sigma_{i,k})$ for $f \in L^1$. By assumption, $P^{nk}f \to E_kf$ also in $L^1$. Let $A \in \Sigma_D$, with $P^n 1_A=1_{A_n}$. Then $1_{A_{nk}} = P^{nk}1_A \to E_k 1_A$ in $L^2$-norm. Now $$ \mu(A)= \int P^n1_A d\mu= \int 1_{A_n}d\mu= \|1_{A_n}\|_2^2 =\|P^n1_A\|_2^2 \le \|1_A\|_2^2=\mu(A). $$ Hence $\|E_k 1_A\|_2^2 =\lim_n \|P^{nk}1_A\|_2^2= \mu(A)= \|1_A\|_2^2.$ Since $E_k$ is an orthogonal projection, this equality implies $E_k 1_A=1_A$, i.e. $A \in \Sigma_{i,k}$. Thus $\Sigma_D \subset \Sigma_{i,k}$. The reverse inclusion holds by Lemma \ref{k-invariants}, so $\Sigma_D = \Sigma_{i,k}$. \end{proof} {\bf Remarks.} 1. When $k=1$, $\Sigma_{i,1}=\Sigma_i = \Sigma_D$, so in the ``complete mixing'' case $\Sigma_D(P)$ is trivial. In the general ergodic case, strong convergence in $L^2$ of $P^{nk}$ implies that $\Sigma_D(P)$ is atomic with at most $k$ atoms. For additional information see \cite{F5} and \cite{KL}. 2. The example in \cite[p. 113]{R} shows that $\Sigma_{i,k}=\Sigma_D$ for every $k$ does not imply strong convergence of $(P^{nd})_{n \ge 1}$ for any $d$. 3. The example in \cite[p. 78]{F2} shows that $P$ in the proposition need not be Harris. \begin{prop} \label{divisor} Let $P$ be an ergodic bi-stochastic Markov operator on $(S,\Sigma,\mu)$. Assume that for some integer $k_0 \ge 1\ $ $P^{nk_0}$ converges strongly in $L^2$ as $n \to \infty$, and let $d_0$ be the smallest such $k_0$. Then $(P^{nk})_{n \ge 1}$ converges strongly in $L^2$ if and only if $d_0|k$. \end{prop} \begin{proof} Obviously $(P^{nk})_{n\ge 1}$ converges strongly for $k=md_0$. We have to prove the converse only when $d_0 >1$.
Let $d \le d_0$ be the number of disjoint atoms in $\Sigma_{i,d_0}$, given by Proposition \ref{foguel}, which are cyclically moved by $P$, with $P^d1_{A_j}=1_{A_j}$ for the atoms $A_0,\dots,A_{d-1}$. Since $d|d_0$ and the atoms generate $\Sigma_{i,d_0}$, we have $\Sigma_{i,d}=\Sigma_{i,d_0}$. With our previous notations, $E_d=E_{d_0}$, and $P^d E_{d_0}=P^d E_d=E_d$. We now prove $d=d_0$. Since $d|d_0$, we write $d_0=md$. Fix $f \in L^2$ and $\varepsilon > 0$. For $n\ge N$ we have $\|P^{nd_0}f -E_{d_0}f\| < \varepsilon$. Hence for $k \ge 1$ we have $$ \|P^{(Nm+k)d}f -E_df\| \le \|P^{kd}\|\cdot \|P^{Nmd}f -E_{d_0}f\| \le \|P^{Nd_0}f -E_{d_0}f\| < \varepsilon. $$ This proves that $(P^{nd})_{n\ge 1}$ converges strongly, and minimality of $d_0$ yields $d=d_0$. Assume $(P^{nk})_{n\ge 1}$ converges strongly. By Proposition \ref{deterministic}, $\Sigma_{i,k}=\Sigma_D=\Sigma_{i,d_0}$, and $d= d_0 \le k$. Write $k\equiv\ell \pmod d$ with $0\le \ell <d$. For the atom $A_0$, which is in $\Sigma_D$, we have $P^\ell 1_{A_0}=P^k1_{A_0}=1_{A_0}$. By construction, $P^j1_{A_0} \ne 1_{A_0}$ for $1 \le j \le d-1$, so $\ell=0$, which means $d|k$. \end{proof} \begin{cor} Let $P$ be an ergodic bi-stochastic Markov operator on $(S,\Sigma,\mu)$. Assume that for some integer $k_0 \ge 1\ $ $P^{nk_0}$ converges strongly in $L^2$ as $n \to \infty$, and let $ k \in \mathbb N$. Then $(P^{nk})_{n\ge 1}$ converges strongly in $L^2$ if and only if $\Sigma_{i,k}=\Sigma_D(P)$. \end{cor} \begin{proof} Assume that $\Sigma_{i,k}=\Sigma_D(P)$, and let $d=d_0$ as defined in Proposition \ref{divisor}. The last three sentences of the proof of Proposition \ref{divisor} show that $d|k$; hence Proposition \ref{divisor} yields that $(P^{nk})_{n\ge 1}$ converges strongly in $L^2$. The converse is in Proposition \ref{deterministic}. \end{proof} {\bf Remark.} By Corollary \ref{Lly}, the assumption in Proposition \ref{divisor} and its corollary is that $P$ is ergodic, bi-stochastic and $L^1$-constrictive. \begin{theo} \label{periodic-convergence} Let $P$ be an ergodic bi-stochastic Markov operator on $(S,\Sigma,\mu)$. Assume that for some integer $k \ge 1\ $ $P^{nk}$ converges strongly in $L^2$ as $n \to \infty$, and let $d>1$ be the smallest such $k$. Then: (i) $\Sigma_{i,d}=\Sigma_D(P)$, $\Sigma_D$ is atomic, generated by $d$ disjoint atoms $A_0,\dots,A_{d-1}$, which satisfy $P 1_{A_j}=1_{A_{j+1}}$. (ii) The unimodular eigenvalues of $P$ in $L^1(\mu)$ are precisely all $d$th roots of unity, the corresponding eigenspaces are one-dimensional, and the corresponding eigenfunctions have constant absolute value. (iii) For every $f \in L^p(\mu)$, $1 \le p< \infty$, and $0 \le j \le d-1$ we have \begin{equation} \label{L1-limits} \lim_{n\to\infty}\big\|P^{nd+j}f - d\sum_{\ell=0}^{d-1} \big(\int_{A_\ell}f \,d\mu\big) 1_{A_\ell\dot{+}j} \big\|_p = 0, \end{equation} where $\ell\dot{+}j$ is addition modulo $d$. (iv) For every $f \in L^p(\mu)$, $1 \le p< \infty$ we have \begin{equation} \label{conditional} \lim_{n\to\infty}\big\|P^n \Big( f - d \sum_{\ell=0}^{d-1} \big( \int_{A_\ell}f \,d\mu\big) 1_{A_\ell}\Big) \big\|_p = 0. \end{equation} \end{theo} \begin{proof} (i) The equality $\Sigma_{i,d}=\Sigma_D$ and the existence and properties of the atoms of $\Sigma_{i,d}$ follow from the previous results. Invariance of $\mu$ yields $\mu(A_j)=\mu(A_0)$, so $\mu(A_j)=d^{-1}$. (ii) If $\lambda$ is a unimodular eigenvalue, then by assumption $\lambda^{nd}$ converges as $n \to \infty$, hence $\lambda^d=1$.
Conversely, for $\lambda^d=1$ define $f= \sum_{j=0}^{d-1} \bar\lambda^j 1_{A_j}$; then $Pf=\lambda f$, since $P1_{A_{d-1}}=1_{A_0}$. The last statement in (ii) follows by ergodicity. (iii) Note that for strong convergence of $P^n$ the limit is $\int f\,d\mu$, by ergodicity, so we prove \eqref{L1-limits} for $d >1$. The convergence \eqref{L1-limits} in $L^1$ for all integrable functions easily implies $L^p$-norm convergence for $L^p$ functions, so we prove \eqref{L1-limits} for $p=1$. For $j=0$ we have strong convergence of $P^{nd}f$ by assumption, and the limit is $E(f|\Sigma_{i,d}) = E(f|\Sigma_D)$. Since the atoms $A_0,\dots,A_{d-1}$ generate $\Sigma_D$, the $\Sigma_D$-measurable functions are of the form $\sum_{\ell=0}^{d-1} c_\ell 1_{A_\ell}$, and then $E(f|\Sigma_D)=\sum_{\ell=0}^{d-1} \big(\mu(A_\ell)^{-1} \int_{A_\ell} f \,d\mu\big) 1_{A_\ell}$ by disjointness of the atoms. This proves \eqref{L1-limits} for $j=0$, since $\mu(A_\ell) =d^{-1}$. For $j>0$ we apply $P^j$ and use $P^j 1_{A_\ell}=1_{A_{\ell \dot{+} j}}$. (iv) Since $P$ is a contraction, \eqref{conditional} follows from $\|P^{nd}\big(f-E(f|\Sigma_D)\big)\|_p \to 0$. \end{proof} {\bf Remarks.} 1. The convergence \eqref{conditional} makes \eqref{lly} precise, giving information on the cycle, showing that the permutation of the $y_j$ is cyclic, and making the functionals $\varphi_j$ explicit. The assumption of Theorem \ref{periodic-convergence} means that $P$ is $L^1$-constrictive, by Corollary \ref{Lly}. 2. Foguel's assumption in \cite[Theorem 3.3]{F6} yields that both $P^{nj}$ and $P^{*nj}$ converge strongly in $L^1$ as $n \to \infty$ (his $L^1$-zero-two theorem for $P^{*j}$ is equivalent to the $L^\infty$-zero-two theorem for $P^j$, and the above convergences both hold in the zero alternative). In \cite[Theorem 6.4]{F3} Foguel proved (i) and (iii) assuming Harris recurrence. Our result applies without any of these stronger assumptions, also when $P^{*nk}$ does not converge strongly for any $k$. 3. Wittmann's uniform $L^p$-zero-two theorem \cite[Theorem 1.7]{Wi} yields the strong convergence in $L^2$ of $(P^{nj})_{n\ge 1}$ and $(P^{*nj})_{n\ge 1}$ if $$ \lim_{n \to \infty} \|P^n(I-P^j)\|_2 < \sqrt{3}. $$ 4. A sufficient spectral condition for the strong convergence in $L^2$ of $(P^{nk})_{n\ge 1}$ and $(P^{*nk})_{n\ge 1}$ for {\it some} $k \ge 1$ is given by \cite[Proposition 2]{L3}: $\sigma(P) \cap \mathbb T \ne \mathbb T$. This condition is satisfied if $\lim_{n \to \infty} \|P^n(I-P)\|_2 < 2$ \cite[Theorem 3]{L3}. 5. Derriennic \cite[Th\'eor\`eme 2]{De} proved that $\|P^n f\|_2 \to 0$ if and only if $\int fg\,d\mu=0$ for every $g \in \bigcap_{n \ge 1} P^{*n}\{h \in L^2: \|h\| \le 1\}$. \begin{cor} \label{harris} Let $P$ be an ergodic Harris recurrent Markov operator with invariant probability $\mu$ and period $d$. Then the unimodular eigenvalues of $P$ in $L^1(\mu)$ are precisely all $d$th roots of unity, the corresponding eigenspaces are one-dimensional, and the corresponding eigenfunctions have constant absolute value. \end{cor} \begin{proof} By \cite{F3}, \eqref{L1-limits} holds, so we can apply Theorem \ref{periodic-convergence}(ii). \end{proof} {\bf Remarks.} 1. \v Sid\'ak \cite{S} proved the result for $P$ on a countable state space defined by a positive recurrent irreducible Markov matrix. We have not found a reference for the corollary in the general case. 2.
Gerlach \cite[Theorem 3.4]{Ge} proved that if $T$ is an irreducible power-bounded Harris-type positive operator on a Banach lattice with order continuous norm, then some power $T^n$ has no unimodular eigenvalues different from one. This abstract result yields that if $T$ has unimodular eigenvalues, they are all $n$th roots of unity, but does not yield the result (which depends on the cyclically moving sets) that {\it all} $n$th roots of unity are eigenvalues of $T$. This follows from Schaefer \cite{Sc2}, since the corresponding eigenfunctions are bounded by ergodicity. \medskip It is known that $P$ and $P^*$ have the same invariant sets, hence the same integrable invariant functions (see \cite[Chapter VII]{F2}, for example). In particular, $\Sigma_{i,k}(P)=\Sigma_{i,k}(P^*)$, and $P^*$ is ergodic when $P$ is. In general $\Sigma_D(P) \ne \Sigma_D(P^*)$, but $\Sigma_U:=\Sigma_D(P) \bigcap \Sigma_D(P^*)$ generates the unitary subspace $\mathcal K:=\{f \in L^2: \|P^nf\|_2=\|P^{*n}f\|_2=\|f\|_2 \text{ for every } n \in \mathbb N\}$, i.e. $\mathcal K=L^2(S,\Sigma_U,\mu)$ \cite[Chapter VIII]{F2}. \begin{cor} \label{unitary} Let $P$ be an ergodic bi-stochastic Markov operator as in Theorem \ref{periodic-convergence}, and let $A_0,\dots,A_{d-1}$ be the atoms generating $\Sigma_D(P)$. Then: (i) $\Sigma_U=\Sigma_D(P) =\Sigma_{i,d}$, and $P^* 1_{A_j}=1_{A_{j\dot{-}1}}$. (ii) $\ P^{*n} \Big(f - d\sum_{\ell=0}^{d-1} \big( \int_{A_\ell}f\,d\mu\big) 1_{A_\ell}\Big) \to 0$ weakly, for every $f \in L^p$, $1\le p< \infty$. \end{cor} \begin{proof} By Theorem \ref{periodic-convergence} and Lemma \ref{k-invariants}, $\Sigma_D(P)=\Sigma_{i,d}(P)=\Sigma_{i,d}(P^*) \subset \Sigma_D(P^*)$, so $\Sigma_U=\Sigma_D(P)$. For $f$ in the unitary space $\mathcal K=L^2(\Sigma_U,\mu)$ we have $P^*Pf=f$, hence $P^*1_{A_{j+1}}=P^*P1_{A_j}=1_{A_j}$. (ii) follows from \eqref{conditional}. \end{proof} \begin{theo} Let $P$ be an ergodic bi-stochastic Markov operator on $(S,\Sigma,\mu)$. Then $(P^{nd})_{n\ge 1}$ converges in the strong operator topology (SOT) of $L^2$ for some $d$ if and only if $P^{*k}P^k$ converges (SOT), as $k \to \infty$, to a projection onto a finite-dimensional subspace of $L^2$. \end{theo} \begin{proof} Assume that $P^{nd}$ converges strongly as $n \to \infty$. By Theorem \ref{periodic-convergence}, \eqref{conditional} yields that $\|P^nf\|_2 \to 0$ for every $f \in L^2$ with $E(f|\Sigma_D)=0$. Hence by \cite[Lemma 3, p. 108]{R}, $P^{*k}P^k$ converges strongly to the projection $f \mapsto E(f|\Sigma_D)$, which has finite-dimensional range, since $\Sigma_D$ is finite by Theorem \ref{periodic-convergence}(i). Assume now that $P^{*k}P^k$ converges strongly to a projection $E_0$ with finite-dimensional range. By \cite[Lemma 3, p. 108]{R}, the limit is $E_0f=E(f|\Sigma_D(P))$, and $\|P^nf\|_2 \to 0$ for every $f \in L^2$ with $E(f|\Sigma_D(P))=0$, i.e. for every $f \perp E_0L^2$. Since $E_0$ has finite-dimensional range, $\Sigma_D(P)$ is finite, therefore atomic. Hence the unitary $\sigma$-algebra $\Sigma_U=\Sigma_D(P) \cap \Sigma_D(P^*)$ is finite, with $d$ atoms. By Foguel's \cite[Corollary 2.9]{F6}, $P^{nd}f$ converges {\it weakly}. Since $E_0L^2=L^2(S,\Sigma_D(P),\mu)$ is finite-dimensional and $P$-invariant, for $g \in E_0L^2$ the weak convergence of $P^{nd}g$ is in fact strong. Together with the above convergence on $(E_0L^2)^{\perp}$, we conclude that $P^{nd}$ converges in the strong operator topology as $n \to \infty$.
\end{proof} {\bf Remark.} The necessary condition for convergence of $(P^{nk})_{n\ge 1}$ for some $k$, that $\Sigma_D$ be finite (see Propositions \ref{foguel} and \ref{deterministic}), implies, by \cite[Corollary 2.9]{F6}, that $P^{nd}$ converges in the {\it weak} operator topology (WOT) for some $d$. However, this condition is not sufficient for the SOT convergence (e.g. \cite[p. 113]{R}), and is not necessary for WOT convergence (e.g. an invertible mixing measure preserving transformation). \begin{theo} \label{cycle} Let $P$ be an ergodic $L^2$-quasi-compact bi-stochastic Markov operator on $(S,\Sigma,\mu)$ with period $d>1$. Then: (i) The $\sigma$-algebra $\Sigma_{i,d}$ of $P^d$-invariant sets is finite, has an atom $A_0$ such that for $1\le j\le d-1$ there are disjoint atoms $A_j \in \Sigma_{i,d}$ with $P^j1_{A_0}=1_{A_j}$, $P1_{A_j}=1_{A_{j+1}}$ ($A_d=A_0$), $P^*1_{A_{j+1}}=1_{A_j}$, and $\Sigma_{i,d}$ is generated by $\{A_0,A_1,\dots,A_{d-1}\}$. (ii) For every $f \in L^p(\mu)$, $1 \le p< \infty$, and $0 \le j \le d-1$ we have \begin{equation} \label{Lp-limits} \lim_{n\to\infty}\big\|P^{nd+j}f - d \sum_{\ell=0}^{d-1} \big( \int_{A_\ell} f \,d\mu\big) 1_{A_\ell\dot{+}j} \big\|_p =0, \end{equation} \begin{equation} \label{P*-limits} \lim_{n\to\infty}\big\|(P^*)^{nd+j}f - d \sum_{\ell=0}^{d-1} \big( \int_{A_\ell} f \,d\mu\big) 1_{A_\ell\dot{-}j} \big\|_p =0. \end{equation} (iii) The unimodular eigenvalues of $P$ on $L^p$ are precisely all $d$th roots of unity, and the corresponding eigenfunctions have constant absolute value. \end{theo} \begin{proof} By Proposition \ref{qc}, we have $L^2$ operator norm convergence of $(P^{nd})_{n\ge 1}$, and since $P^*$ is clearly $L^2$-quasi-compact with the same period, also $(P^{*nd})_{n\ge 1}$ converges in operator norm. Let $d_0$ be the smallest $k$ such that $(P^{nk})_{n \ge 1}$ converges strongly in $L^2$. By Proposition \ref{divisor}, $d_0|d$. We show $d_0=d$: for every unimodular eigenvalue $\lambda$, the sequence $\lambda^{nd_0}$ converges, hence $\lambda^{d_0}=1$; since every unimodular eigenvalue thus has order dividing $d_0$, the minimality of $d$ implies $d|d_0$. We now obtain (i) and \eqref{Lp-limits} from Theorem \ref{periodic-convergence} and Corollary \ref{unitary}; \eqref{P*-limits} follows by applying Theorem \ref{periodic-convergence} to $P^*$. By the construction of Corollary \ref{ue} and the definition of $d$, all unimodular eigenvalues of $P$ are $d$th roots of unity. (iii) follows from Theorem \ref{periodic-convergence}(ii). \end{proof} We now extend to the periodic case the limit theorem for $P^n$, proved in Proposition \ref{exponential}(ii) for the aperiodic case. \begin{theo} Let $P$ be an ergodic $L^2$-quasi-compact bi-stochastic Markov operator with period $d>1$, and let $A_0,\dots,A_{d-1}$ be the atoms generating $\Sigma_{i,d}$. (i) If $f \in L^r(\mu)$, $1 \le r < \infty$, with $E(f|\Sigma_{i,d})=0$ (i.e. $\int_{A_j}f\,d\mu=0$ for $0\le j\le d-1$), then $\|P^nf\|_r \to 0$ and $\|P^{*n}f\|_r \to 0$. (ii) If $r>1$ and $f \in L^r(\mu)$ with $E(f|\Sigma_{i,d})=0$, then $P^nf \to 0$ and $P^{*n}f \to 0$ a.e. (iii) Assume $P$ is hyperbounded and maps $L^1(\mu)$ to $L^q(\mu)$ for some $q>1$. If $f \in L^1(\mu)$ satisfies $E(f|\Sigma_{i,d})=0$, then $P^nf \to 0$ a.e. \end{theo} \begin{proof} Since $P^d$ and $P^{*d}$ have the same invariant sets, it is enough to prove the assertions for $P$, and then apply them to $P^*$ and obtain the convergence results for powers of $P^*$. \smallskip (i) First assume $r>1$.
The assumption on $f$ means $E_df=0$, so by Proposition \ref{exponential}(i) we have $\|P^{nd}f\|_r \to 0$. Since $P$ is a contraction, $\|P^{n}f\|_r \to 0$. For $f$ bounded with $E_df=0$, we have $\|P^nf\|_1 \le \|P^nf\|_r \to 0$. Standard approximations (using that $E_d$ is a contraction of $L^1$ and of $L^\infty$) yield the convergence for $f \in L^1$ with $E_df=0$. (ii) For $1<r< \infty$, Proposition \ref{exponential}(i) yields $\|P^{dn}f\|_r \le C_r\rho_r^n \|f\|_r$ when $E_df=0$, so $\sum_{n=1}^\infty \|P^{nd}f\|_1 < \infty$. Since $P$ is a contraction, $\sum_{n=1}^\infty \|P^{nd +j}f\|_1 < \infty$ for $0\le j \le d-1$. Hence $$ \sum_{n=0}^\infty \|P^{n}f\|_1 = \sum_{j=0}^{d-1}\sum_{n=0}^\infty \|P^{nd +j}f\|_1 < \infty. $$ By Beppo Levi $\sum_{n=0}^\infty |P^{n}f| < \infty$ a.e., so $P^{n}f \to 0$ a.e. (iii) The proof is similar to the proof of (ii), using Corollary \ref{doeblin}(i). \end{proof} {\bf Remarks.} 1. In case (iii), $P^*$ need not map $L^1$ to some $L^r$, although $P$ does, so for $L^1$ functions we obtain from the theorem only a.e. convergence of $P^nf$, but not of $P^{*n}f$. However, as noted in the remarks following Example 1, in case (iii) $P$ is Harris, so on each invariant set of $P^d$ the restrictions of $P^d$ and $P^{*d}$ are ergodic and Harris, and by Horowitz \cite{H}, with the assumptions $f \in L^1$ and $E_df=0$, we obtain $P^nf \to 0$ a.e. and $P^{*n}f \to 0$ a.e. 2. The theorem applies to $P$ ergodic hyperbounded. In general, a hyperbounded $P$ need not be Harris (see Theorem \ref{riesz} below), so only (i) and (ii) of the theorem apply. 3. Equivalence of finiteness of $\Sigma_D$ and part (i) of the theorem was proved in \cite{BK} for kernel operators as in Example 1 (which are clearly Harris recurrent), without any further assumptions on the kernel $k(x,y)$, i.e. without assuming hyperboundedness. Additional equivalent conditions are also given there. \medskip \noindent {\bf Problem 1.} {\it If $P$ is ergodic hyperbounded, does $P^nf \to 0$ a.e. for $f \in L^1$ with $E_df=0$?} For the aperiodic case, the problem is whether $P^nf$ converges a.e. for every $f \in L^1$. Note that for aperiodic ergodic Harris recurrent operators, Horowitz \cite{H} proved a.e. convergence of $P^n f$ for every $f \in L^1$. \medskip \section{Conditions for aperiodicity of hyperbounded Markov operators} In this section we look for conditions for aperiodicity of an ergodic hyperbounded (necessarily bi-stochastic) Markov operator. By the definition, when $d=1$ we have $\sigma(P) \cap \mathbb T=\{1\}$ (for $P$ on $L^p$ which maps $L^p$ to $L^q$), and with the uniform ergodicity in Corollary \ref{ue}, $\|P^n-E\|_p \to 0$ by \cite[Theorem 4]{Lu}. Obviously, if $P^n$ converges in operator norm, the hyperbounded operator is aperiodic. In the complex $L^p$, this convergence is equivalent to a (global) spectral gap: $r(P_{|(I-P)L^p})< 1$. For any ergodic Markov operator preserving $\mu$, $\overline{(I-P)L^p} = \{f \in L^p: \int f\,d\mu=0\}$ by the mean ergodic theorem. Recall that by Proposition \ref{all-Lp}, a hyperbounded Markov operator is hyperbounded in each $L^r$, $1<r< \infty$, and the period is the same in all $L^r$, by Proposition \ref{qc-Lp}. We therefore look at hyperboundedness in $L^2$. \begin{theo} \label{wang} Let $P$ be an ergodic hyperbounded Markov operator mapping $L^2(\mu)$ to $L^4(\mu)$. If $\|P\|_{L^2 \to L^4} < 2^{1/4}$, then $P$ is aperiodic.
\end{theo} \begin{proof} We note that for real functions it is easy to prove (since $P$ is real) that \begin{equation} \label{real-gap} \sup\{\|Pf\|_2: \|f\|_2=1,\ \int f\,d\mu=0\} = \sup\{\|P(f+ig)\|_2: \|f+ig\|_2=1,\ \int (f+ig)\, d\mu=0\}. \end{equation} Since eigenfunctions of unimodular eigenvalues different from 1 have integral zero, to prove aperiodicity it is enough to prove that the left-hand side of \eqref{real-gap} is less than 1. The proof is the same as Wang's proof of \cite[Theorem 1.1]{W1}. Wang assumes that $P$ is symmetric, but this is because he assumes $P$ to contract only $L^2(\mu)$, with $P1=1$; symmetry is used in his proof only for obtaining $P^*1=1$, which holds for any bi-stochastic Markov operator $P$ since $\mu$ is an invariant probability. \end{proof} \begin{prop} \label{optimal} There exists an ergodic hyperbounded Markov operator $P$ with period $d >1$, which for every $q>2$ maps $L^2(\mu)$ to $L^q(\mu)$, with $\|P\|_{L^2 \to L^q} = 2^{\frac12-\frac1q}$. \end{prop} \begin{proof} Let $P$ be the Markov operator of Example 2, which is ergodic and symmetric with period 2. The explicit definition of $P$, by Example 1, is $Pf = 2a1_{[0,1/2)} + 2b1_{[1/2,1]}$, where $a=\int_{1/2}^1 f(y)dy$ and $b= \int_0^{1/2} f(y)dy$. Then (for real $f$) $\|Pf\|_2^2=4a^2\cdot\frac12 +4b^2\cdot\frac12= 2(a^2+b^2)$. Hence for $2<q<\infty$ we have $$ \|Pf\|_q =\big(2^q |a|^q\cdot\frac12 +2^q |b|^q\cdot\frac12\big)^{1/q}= 2\cdot 2^{-1/q} (|a|^q +|b|^q)^{1/q} \le $$ $$ 2^{1-1/q} (a^2 + b^2)^{1/2} = 2^{1-1/q} 2^{-1/2} \|Pf\|_2 \le 2^{\frac12 -\frac1q} \|f\|_2. $$ Hence $\|P\|_{L^2 \to L^q} \le 2^{\frac12-\frac1q}$. Define $f=2\cdot 1_{[0,1/2)}$. Then $\|f\|_2= \sqrt 2$, and $\|Pf\|_q= 2^{1-\frac1q} = 2^{\frac12-\frac1q}\|f\|_2$. Thus, for $q< \infty$, $\|P\|_{L^2 \to L^q} = 2^{\frac12-\frac1q}$. Now let $q=\infty$. Then, for any $f \in L^2$, $$ \|Pf\|_\infty= 2\max\{|a|,|b|\} \le 2 \sqrt{a^2+b^2}=\sqrt 2 \|f\|_2, $$ which shows $\|P\|_{L^2 \to L^\infty} \le \sqrt 2$. For $f =2\cdot 1_{[0,1/2)}$ we have $\|Pf\|_\infty = 2 =\sqrt 2 \|f\|_2$; hence $\|P\|_{L^2 \to L^\infty}= \sqrt 2$. \end{proof} {\bf Remarks.} 1. For $q=4$ the proposition shows the optimality of the constant $2^{1/4}$ in Theorem \ref{wang} (and in \cite[Theorem 1.1]{W1}). Wang's simple example \cite[p. 2633]{W1} is not ergodic. Wang's definition of spectral gap, the norm on functions of integral 0 being less than 1, is valid only for the ergodic case. In general, the norm should be taken on the subspace orthogonal to the invariant functions, and in this sense Wang's example has a spectral gap ($P^n$ converges in norm). 2. Combining Theorem \ref{wang} with Proposition \ref{optimal}, we obtain that for any $q>4$ there exists $\Delta_q \in[2^{\frac14},2^{\frac12 -\frac1q}]$ such that if $P$ is an ergodic Markov operator mapping $L^2$ to $L^q$ with $\|P\|_{L^2 \to L^q} < \Delta_q$, then $P$ is aperiodic. 3. Let $q \in (2,4)$. If there exists $\Delta_q$, such that every ergodic $P$ mapping $L^2$ to $L^q$ with $\|P\|_{L^2 \to L^q} < \Delta_q$ is aperiodic, then $\Delta_q \le 2^{\frac12 -\frac1q}$, by Proposition \ref{optimal}. \begin{theo} \label{wang3} Let $P$ be an ergodic hyperbounded Markov operator mapping $L^2(\mu)$ to $L^3(\mu)$. If $\|P\|_{L^2 \to L^3} < 2^{1/6}$, then $P$ is aperiodic. The value $2^{1/6}$ is optimal. \end{theo} \begin{proof} We prove that the left-hand side of \eqref{real-gap}, denoted below by $\rho$, is less than 1. Thus, we consider real functions. Our proof is inspired by \cite{W1}.
Fix $f \in L^2(\mu)$ with $\|f\|_2=1$ and $\mu(f):= \int f d\mu=0$. We may assume $\mu((Pf)^3) \ge 0$ (otherwise we replace $f$ by $-f$). Let $\varepsilon \in (0,1)$, and define $g = \varepsilon^{1/2} +(1-\varepsilon)^{1/2} f$. Then $\|g\|_2^2= \varepsilon +(1-\varepsilon)\|f\|_2^2 = 1$, by orthogonality. Then $$ \delta:= \|P\|_{L^2 \to L^3}^3 \ge \|Pg\|_3^3 \ge \int (Pg)^3d\mu= $$ $$ \int \Big(\varepsilon^{3/2} +3 \varepsilon(1-\varepsilon)^{1/2}Pf + 3 \varepsilon^{1/2}(1-\varepsilon)(Pf)^2 +(1-\varepsilon)^{3/2}(Pf)^3 \Big) d\mu \ge $$ $$ \varepsilon^{3/2} +3\varepsilon^{1/2}(1-\varepsilon) \mu((Pf)^2) . $$ Hence $\displaystyle{\|Pf\|_2^2 \le \frac{\delta- \varepsilon^{3/2}}{3 \varepsilon^{1/2}(1-\varepsilon) } }$. Since $f$ with norm 1 and zero integral was arbitrary, we obtain $\displaystyle{\rho^2 \le \frac{\delta- \varepsilon^{3/2}}{3 \varepsilon^{1/2}(1-\varepsilon) } }$. Taking $\varepsilon=\frac12$ we find $\displaystyle{\rho^2 \le \frac{\delta- (\frac12)^{3/2}}{3(\frac12)^{1/2}\cdot\frac12} = \frac{\delta- (\frac12)^{3/2}}{3(\frac12)^{3/2}} }$. We have $$ \frac{\delta- (\frac12)^{3/2}}{3(\frac12)^{3/2} } <1 \ \text{if and only if } \delta < 4\cdot (\frac12)^{3/2} = \sqrt 2. $$ Hence $\|P\|_{L^2 \to L^3} < 2^{1/6}$ implies $\rho^2 <1$, which yields aperiodicity. \smallskip The optimality of the value $2^{1/6}$ follows from the example in Proposition \ref{optimal}, with $q=3$. \end{proof} {\bf Remark.} Theorem \ref{wang3} does not follow from Theorem \ref{wang}, since $P$ may map $L^2$ to $L^3$ and not to $L^4$. \medskip \noindent {\bf Problem 2.} {\it For $q \in (2,3)$, is there a constant $\Delta_q$ such that any ergodic hyperbounded $P$ mapping $L^2$ to $L^q$ with $\|P\|_{L^2 \to L^q} <\Delta_q$ is aperiodic?} Since for $P$ mapping $L^2$ to $L^q$ ($q>2$) we have $\|P\|_{L^2 \to L^q} \ge 1$, if $\Delta_q$ exists, then $\Delta_q >1$. If $\Delta_q$ does not exist, we may still ask: {\it Is any hyperbounded $P$ mapping $L^2$ to $L^q$ with $\|P\|_{L^2 \to L^q} = 1$ aperiodic?} \medskip \section{Hyperbounded Markov operators defined by convolutions} An important class of Markov operators with an invariant probability is given by convolution operators on compact groups. We deal in this section with the circle group $\mathbb T$, identified with $[-\pi,\pi)$, with normalized Haar (Lebesgue) measure $\mu$. We use the notation $L^p$ or $L^p(\mathbb T)$ for $L^p(\mathbb T,\mu)$. Let $\nu$ be a probability on $\mathbb T$, and define the convolution operator $P_\nu f=\nu*f$, with $\nu*f(x):= \int_\mathbb T f(x-y) d\nu(y)$. Then $P_\nu1=1$, and for $f \in L^p$, $1 \le p < \infty$ we have $$ \|P_\nu f\|_p^p =\int \Big|\int f(x-y)d\nu(y)\Big|^p d\mu(x) \le \int \int |f(x-y)|^p d\nu(y) d\mu(x) = $$ $$ =\int \Big(\int |f(x-y)|^p d\mu(x) \Big) d\nu(y) = \int \int |f(z)|^p d\mu(z) d\nu(y) = \|f\|_p^p. $$ Similarly $\int P_\nu f\, d\mu=\int f\,d\mu$, so $P_\nu$ is a Markov operator with $\mu$ invariant. We denote $e_n(x):= {\text{e}}^{inx}$, $n \in \mathbb Z$. The Fourier coefficients of $f \in L^1$ are $\hat f(n) := \frac1{2\pi} \int f(x)e_{-n}(x)dx$, and the Fourier-Stieltjes coefficients of $\nu$ are $\hat \nu(n) = \int e_{-n}(x)d\nu(x)$. Note that if $\nu << \mu$ with $\frac{d\nu}{d\mu}= \phi \in L^1$, then $\hat \nu =\hat \phi$. Using Fubini's theorem, we obtain $$ \widehat{P_\nu f }(n)=\int \int f(x-y)e_{-n}(x)d\mu(x)d\nu(y)= \int \int f(z)e_{-n}(z+y)d\mu(z)d\nu(y) =\hat\nu(n)\cdot \hat f(n).
$$ This yields that $\{\hat\nu(n)\}_{n \in \mathbb Z}$ is a {\it multiplier} sequence in any $L^p$, i.e., if $f =\sum_{n \in \mathbb Z} c_ne_n \in L^p$, $1<p< \infty$, then $\sum_{n \in \mathbb Z}\hat\nu(n)c_ne_n \in L^p$ (convergence in $L^p$). Hyperboundedness of $P_\nu$, mapping $L^p$ into $L^q$ with $p<q$, means that $\{\hat\nu(n)\}_{n \in \mathbb Z}$ is an $L^p$-$L^q$ multiplier sequence. In some harmonic analysis papers (e.g. \cite{Rit}, \cite{GHR}, \cite{DHR}), when $P_\nu$ is hyperbounded, $\nu$ is called {\it $L^p$-improving}. If $P_\nu$ is hyperbounded in $L^p(\mathbb T)$ for some $1\le p< \infty$, then it is hyperbounded in every $L^r$, $1 < r< \infty$ (Proposition \ref{all-Lp}), and quasi-compact in each $L^r$ by Corollary \ref{ue}. Obviously, for any probability $\nu$, the spectrum of $P_\nu$ in $L^2$ is $\overline{\{\hat\nu(n): n\in \mathbb Z\}}$. Graham, Hare and Ritter \cite[Theorem 4.1]{GHR} proved that {\it if $P_\nu$ is hyperbounded, then for any $1<r< \infty$ the spectrum of $P_\nu$ on $L^r$ is $\overline{\{\hat\nu(n): n\in \mathbb Z\}}$}. \medskip It was shown in \cite[Corollary 4.2(iv)]{DL} that $P_\nu$ is uniformly ergodic in $L^2(\mu)$ if and only if $\inf_{n \ne 0} |\hat\nu(n)-1| >0$. It was proved in \cite[Theorem 4.6]{DL} that if $\nu$ is an adapted discrete probability on $\mathbb T$, then $P_\nu$ is not uniformly ergodic in $L^2$. An example of $\nu$ continuous singular satisfying the above condition, with $P_\nu$ not Harris, was presented in \cite[Proposition 4.7]{DL}. The discussion in \cite[p. 92]{DL} shows the existence of a continuous probability with all its powers singular, with $P_\nu$ not uniformly ergodic in $L^2$. By \cite[Theorem 4.3]{DL}, if $P_\nu$ is Harris recurrent (some power $\nu^k$ is not singular), then it is uniformly ergodic in $L^2$. A Rajchman probability $\nu$ (i.e. $\hat\nu(n) \to 0$ as $|n| \to \infty$; necessarily continuous by a result of Wiener \cite[Theorem III.9.6]{Z}) defines $P_\nu$ uniformly ergodic on $L^2$, by \cite[Theorem 4.4]{DL}. The singular probability $\nu$ with $P_\nu$ uniformly ergodic and not Harris, presented in \cite{DL}, {\it is not} Rajchman. \begin{prop} \label{ritter} Let $\nu$ be a probability on $\mathbb T$. If $(P_\nu)^m$ is hyperbounded for some $m>1$, then $P_\nu$ is hyperbounded. \end{prop} \begin{proof} We first note that $(P_\nu)^m =P_{\nu^m}$. By Corollary \ref{p-2}, $P_{\nu^m}$ maps $L^p$ to $L^2$, for some $1<p<2$. By Ritter \cite[Lemma 2]{Rit}, $P_\nu$ is hyperbounded. \end{proof} {\bf Remark.} As noted in the remarks preceding Proposition \ref{fixed-space}, Proposition \ref{ritter} does not extend to general bi-stochastic Markov operators. \begin{theo} \label{riesz} There exists a singular Rajchman probability $\nu$ on $\mathbb T$ such that all its convolution powers are singular (so $P_\nu$ is not Harris) and $P_\nu$ is hyperbounded. \end{theo} \begin{proof} We use the construction of Riesz products (see \cite[Section V.7]{Z}), used by Zafran \cite[p. 619]{Za}. Let $-1\le a_k \le 1$ with $a_k\not=0$ for every $k\ge 1$. Let $(n_k)$ be a lacunary sequence, with $n_{k+1}/n_k\ge q>3$ for every $k\ge 1$. Expanding the partial Riesz products, with $m_n =\sum_{k=1}^n n_k$, we obtain $$ p_n(x)=\prod_{k=1}^n(1+a_k\cos(2\pi n_k x)) = 1+ \sum_{j=1}^{m_n} \gamma_j \cos(2\pi jx), $$ which is a partial sum of the Fourier-Stieltjes series $1+\sum_{j=1}^\infty \gamma_j\cos(2\pi jx)$ of a non-decreasing continuous function $F(x)$ \cite[Theorem V.7.5, p.
209]{Z}, given by the relation $$ F(x)-F(0)=\lim_{n\to\infty}\int_{0}^x p_n(t)dt. $$ Since $p_n(t) \ge 0$ and $\int_0^1p_n(t)\,dt=1$, we have $F(1)-F(0)=1$. The function $F$ has zero derivative Lebesgue a.e., i.e. $F$ is singular, if $\sum_{k=1}^\infty a_k^2=\infty$; otherwise, $F$ is absolutely continuous (\cite[Theorems V.7.6 and V.7.12]{Z}). By expanding the partial Riesz products, it is simple to see that $\gamma_m=0$ if $m$ is not of the form $\sum_{j=1}^k \epsilon_j n_j$, where $\epsilon_j \in \{-1,0,1\}$. Let $\nu$ be the positive measure defined by $F$. Since $F(1)-F(0)=1$, $\nu$ is a probability, and $\hat\nu(m) = \gamma_m$. By \cite{Za}, we have $$ \hat \nu\Big(\sum_{j=1}^k \epsilon_j n_j \Big)=\prod_{j=1}^k \Big(\frac{a_j}{2}\Big)^{|\epsilon_j|}. $$ We conclude that if $\nu$ is Rajchman, we must have $a_k =2 \hat\nu(n_k) \to 0$, while if $a_k\to 0$, then $\nu$ is Rajchman. Since $|a_k| \le 1$ we have $|\hat\nu(m)| \le \frac1{2}$ for $m \ne 0$, so by the theorem of Ritter for Riesz products \cite{Rit}, $P_\nu$ is hyperbounded. For $n \ge 1$, by \cite[p. 619]{Za} the $n$-fold convolution power $\nu^n$ is represented by the infinite Riesz product $$ \prod_{k=1}^\infty\Big(1+2\Big(\frac{a_k}2\Big)^n\cos(2\pi n_k x)\Big). $$ By \cite{Z}, $\nu^n$ is singular if and only if $\sum_{k=1}^\infty a_k^{2n} = \infty$. By choosing $(a_k)$ such that $\sum_{k=1}^\infty |a_k|^n =\infty$ for every $n \ge 1$, e.g. $a_k =1/\log(k+2)$ for $k \ge 1$, we make sure that all convolution powers of $\nu$ are singular, and that $\nu$ is Rajchman. \end{proof} {\bf Remark.} The existence of a singular Rajchman probability with all its powers singular is a special case of Theorem S of Varopoulos \cite{V}. Our proof for $\mathbb T$ is along classical lines, using Riesz products and Zygmund's criteria for singularity, and yields a concrete $\nu$; it is simpler than the proof for general LCA groups in \cite{V}, which shows only the {\it existence} of the desired measures. \begin{cor} \label{not-hyper} There exists a singular Rajchman probability $\nu$ such that all its convolution powers are singular and $P_\nu$ is not hyperbounded. \end{cor} \begin{proof} We denote by $\nu_0$ the Rajchman probability given by Theorem \ref{riesz}. Let $(a_n)_{n\ge 0}$ be a sequence of positive numbers with $\sup_n\, a_n<1$ and $a_n \to 0$. Badea and M\"uller \cite[Theorem 5]{BM} proved that there exists a probability measure $\nu << \nu_0$ with $|\hat\nu(n)| \ge a_{|n|}$ for every integer $n$. By a result of Rajchman (see references and a proof in \cite[(2.1)]{Ly2}), also $\nu$ is Rajchman. Obviously $\nu^k<<\nu_0^k$, so $\nu^k$ is singular. In order to show that we can obtain $P_\nu$ not hyperbounded, we choose $a_n \to 0$ very slowly, so that Edwards's necessary conditions \cite[formulas (16.4.9) and (16.4.10)]{Ed2} are violated; e.g. $a_n =1/\log\log(n+27)$ for $n \ge 0$. \end{proof} {\bf Remarks.} 1. It is noted in \cite[p. 487]{GHR} that it is possible to construct a Rajchman probability $\nu_0$ such that for {\it any} probability $\nu << \nu_0$ the operator $P_\nu$ is not hyperbounded. Our construction is different. 2. Since $\nu$ of the corollary is Rajchman, $P_\nu^nf= \nu^n*f \to \int f\,d\mu$ a.e. for every $f \in L^p$, $p>1$, \cite{DL}, although $P_\nu$ is not hyperbounded and not Harris recurrent. 3. Sarnak \cite[p. 309]{Sa} proved that {\it if $\nu$ is Rajchman, then for any $1<r< \infty$ the spectrum of $P_\nu$ on $L^r$ is $\overline{\{\hat\nu(n): n\in \mathbb Z\}}$}, even when, as in Corollary \ref{not-hyper}, \cite[Theorem 4.1]{GHR} does not apply.
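\smallskip {\bf Numerical illustration.} The following sketch (assuming {\tt numpy}; the concrete choices $n_k=4^k$, so that $n_{k+1}/n_k=4>3$, and $a_k=1/\log(k+2)$ follow the proof of Theorem \ref{riesz}) computes the Fourier-Stieltjes coefficients of a partial Riesz product by repeated multiplication, and checks that $\hat\nu(n_k)=a_k/2$ and $|\hat\nu(m)|\le\frac12$ for $m\ne 0$.
\begin{verbatim}
import numpy as np

Kmax = 5
nk = [4 ** k for k in range(1, Kmax + 1)]        # lacunary, ratio 4 > 3
ak = [1 / np.log(k + 2) for k in range(1, Kmax + 1)]

M = sum(nk)                     # degree of the partial product
c = np.zeros(2 * M + 1)         # c[M + m] holds the m-th coefficient
c[M] = 1.0                      # start from the constant function 1
for n, a in zip(nk, ak):
    # multiply by 1 + a*cos(2*pi*n*x) = 1 + (a/2)(e_n + e_{-n}):
    new = c.copy()
    new[n:] += (a / 2) * c[:len(c) - n]          # shift by +n
    new[:len(c) - n] += (a / 2) * c[n:]          # shift by -n
    c = new

# coefficients at the lacunary frequencies are exactly a_k/2
assert all(np.isclose(c[M + n], a / 2) for n, a in zip(nk, ak))
# and |hat(nu)(m)| <= 1/2 for all m != 0 (used for hyperboundedness)
assert np.abs(np.delete(c, M)).max() <= 0.5
print("Riesz product coefficients verified")
\end{verbatim}
Since here $n_{K+1}>2m_K$ (with $m_K=\sum_{k\le K}n_k$), later factors only create frequencies of absolute value larger than $m_K$, so the displayed coefficients agree with the actual $\hat\nu(m)$ for $|m|\le m_K$.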
\begin{prop} \label{Lr-deriv} Let $\nu$ be a probability on $\mathbb T$ and $1<q<\infty$. The operator $P_\nu$ maps $L^1$ into $L^q$ if and only if $\nu<<\mu$ with $\frac{d\nu}{d\mu} \in L^q$. \end{prop} \begin{proof} Let $\nu<<\mu$ with $\phi=\frac{d\nu}{d\mu} \in L^q$. Putting $p=1$ and $r=q$ in \cite[Theorem II.1.15]{Z}, we obtain that for $f \in L^1$ we have $\phi*f \in L^q$, and $$ \|P_\nu f\|_q =\|\phi*f\|_q \le \|\phi\|_q\cdot \|f\|_1. $$ Hence $P_\nu$ maps $L^1$ into $L^q$, with norm $\|P_\nu\|_{L^1\to L^q} \le \|\phi\|_q$. \smallskip The converse is Theorem 16.3.4 of \cite{Ed2}. \end{proof} {\bf Remarks.} 1. When $\nu<<\mu$, $P_\nu$ is an integral operator, defined by the kernel $k(x,y)=\phi(x-y)$. Similarly, if $\nu$ is not singular (with respect to $\mu$), then $P_\nu$ dominates a non-zero integral operator, and therefore is Harris recurrent. 2. The probability $\nu$ of Theorem \ref{riesz} yields $P_\nu$ which is hyperbounded in every $L^p$, $1<p< \infty$, but not in $L^1$, since $\nu$ is singular. \begin{prop} \label{ghr} Let $\nu$ be a probability on $\mathbb T$, such that $P_\nu$ is hyperbounded, mapping $L^p$ to $L^q$, $1<p< q$, and put $\alpha =q/p$. If $r>\alpha/(\alpha-1)$ and $\eta<<\nu$ is a probability with $\frac{d\eta}{d\nu} \in L^r(\nu)$, then $P_\eta$ is hyperbounded. \end{prop} \begin{proof} Let $s=r/(r-1)$. Then $1<s< \alpha$, and the proof of \cite[Theorem 1.1]{GHR} shows that $P_\eta$ maps $L^{ps}$ to $L^q$; since $ps<p\alpha=q$, $P_\eta$ is hyperbounded. \end{proof} \begin{prop} There exists a singular probability $\nu$ on $\mathbb T$ which is {\rm not} Rajchman, such that all its convolution powers are singular, and $P_\nu$ is hyperbounded. \end{prop} \begin{proof} Let $\nu$ be the usual Cantor-Lebesgue measure, as described in \cite[Proposition 4.7]{DL}. It is clearly not Rajchman, since $\hat\nu(n)$ is constant along the powers of 3, and it is shown there that all powers of $\nu$ are singular. Oberlin \cite{Ob} proved that $\nu$ is $L^p$-improving, i.e. $P_\nu$ is hyperbounded; see also \cite[Proposition 4.2]{DHR}. \end{proof} {\bf Remarks.} 1. It was proved by Graham et al. \cite[Corollary 3.2(iii)]{GHR} that if $P_\nu$ maps {\it every} $L^p$, $1<p<2$, into $L^2$, then $\nu$ is Rajchman. 2. If $P_\nu$ is hyperbounded, then $\limsup_{|n| \to \infty} |\hat\nu(n)| <1$: by Proposition \ref{all-Lp} and duality $P_\nu$ maps some $L^p$, $1<p<2$, to $L^2$, and then we apply \cite[Corollary 3.2(ii)]{GHR}. \begin{prop} \label{continuous} Let $\nu$ be a probability on $\mathbb T$. If $P_\nu$ is hyperbounded, then $\nu$ is continuous (atomless), and \begin{equation} \label{HT} \sum_{|n|\ne 0} \frac{|\hat\nu(n)|^2}{|n|} < \infty. \end{equation} \end{prop} \begin{proof} It suffices to prove \eqref{HT}, since then by Kronecker's lemma $\frac1{2N+1} \sum_{|n|\le N} |\hat\nu(n)|^2 \to 0$ as $N\to \infty$, so by Wiener's criterion \cite[Theorem III.9.6]{Z}, $\nu$ is atomless. \smallskip By Corollary \ref{p-2}, hyperboundedness of $P_\nu$ implies that $P_\nu$ maps some $L^p$, $1 < p<2$, into $L^2$. This then allows the following proof, based on an idea of Hare and Roginskaya \cite{HR}. We show that for any $\varepsilon >0$ we have \begin{equation} \label{hr} \sum_{|n|\ne 0}\frac { |\hat\nu(n)|^2}{|n|^{2(p-1)/p}\log^{1+\varepsilon}(1+|n|) }< \infty. \end{equation} Denote $s=2(p-1)/p$. Let $D_N(x)=\sum_{|n| \le N} {\text{e}}^{inx}$ be the complex Dirichlet kernel. By \cite[Lemma 2.1]{AAJRS} we have $\|D_N\|_p^p \le C_p N^{p-1}$ for $p>1$.
Then, since $\|\nu *D_{2^{k+1}}\|_2 \le \|P_\nu\|_{L^p \to L^2}\|D_{2^{k+1}}\|_p \le C_p^{1/p}\|P_\nu\|_{L^p \to L^2}\, 2^{(k+1)s/2}$, we obtain $$ \sum_{|n|\ne 0}\frac { |\hat\nu(n)|^2}{|n|^s \log^{1+\varepsilon}(1+|n|) } = \sum_{k=0}^\infty \sum_{2^k \le |n| < 2^{k+1}}\frac { |\hat\nu(n)|^2}{|n|^s \log^{1+\varepsilon}(1+|n|) } \le $$ $$ c_1 +c_2 \sum_{k=1}^\infty \frac1{2^{ks} k^{1+\varepsilon} }\|\nu *D_{2^{k+1}}\|_2^2 \le c_1 +c_3 \sum_{k=1}^\infty \frac{2^{(k+1)s}}{2^{ks} k^{1+\varepsilon} } = c_1 +2^s c_3 \sum_{k=1}^\infty \frac1{ k^{1+\varepsilon} }< \infty. $$ This proves \eqref{hr}, and since $s<1$, \eqref{hr} implies \eqref{HT}. \end{proof} {\bf Remarks.} 1. Equation \eqref{hr} improves the necessary condition of Edwards \cite[Remark (3), p. 302]{Ed2} for $q=2$. 2. For any positive sequence $(a_n)_{n \in \mathbb Z}$ which tends to $0$ as $|n| \to \infty$, there exists a {\it complex} measure $\eta$ satisfying $\sum_{n\in \mathbb Z} a_n|\hat\eta(n)|^2 <\infty$, with $P_\eta f =\eta*f$ not $L^p$-improving \cite[Proposition 2.7]{HR}. 3. Another proof of continuity (in a stronger form) is in \cite[Corollary 3.2(i)]{GHR}. \smallskip {\bf Problem 3.} {\it Let $(a_n)_{n \in \mathbb Z}$ be a positive sequence which tends to $0$ as $|n| \to \infty$. Does there exist a {\em probability} measure $\nu$ satisfying $\sum_{n\in \mathbb Z} a_n|\hat\nu(n)|^2 <\infty$, with $P_\nu$ not hyperbounded?} A positive answer for $a_n=1/|n|$ is given below. As mentioned above, a complex measure with the desired properties exists by \cite[Proposition 2.7]{HR} (and it is possible also to obtain from it a real {\it signed} measure). We conjecture that the answer to Problem 3 is always positive; if not, it means that {\it there exists a positive sequence $a_n \to 0$ as $|n| \to \infty$ (necessarily with $\sum_n a_n=\infty$), such that for every probability $\nu$ with $\sum_{n\in \mathbb Z} a_n|\hat\nu(n)|^2 <\infty$, the convolution operator $P_\nu$ is hyperbounded.} Based on the current knowledge, such a sufficient condition for hyperboundedness seems unlikely. \begin{prop} \label{not-hyper2} There exists on $\mathbb T$ a probability $\nu << \mu$ such that $P_\nu$ is not hyperbounded, but \eqref{HT} holds. \end{prop} \begin{proof} Let $b_n=1/\log(|n|+2)$ for $n \in \mathbb Z$. Then for $n>0$ we have $b_{n-1}+b_{n+1}-2b_n \ge 0$, so by \cite[Theorem I.4.1]{Ka} (see also \cite[Theorem V.1.5]{Z}), there is a {\it non-negative} function $f \in L^1(\mathbb T)$ such that $\hat f(n)=b_n$, $n \in \mathbb Z$. Let $g = f/\|f\|_1$ and define $d\nu=g\,d\mu$. Since $\hat\nu(n) =1/\big(\|f\|_1 \log(|n|+2)\big)$, the series in \eqref{HT} converges. However, for any $1<p<2$ we have $2(p-1)/p <1$, so the series in \eqref{hr} always diverges, which implies that $P_\nu$ is not hyperbounded. \end{proof} {\bf Remarks.} 1. Corollary \ref{not-hyper} gives an example of $\nu$ Rajchman singular with a non-Harris $P_\nu$ which is $L^2$ uniformly ergodic and not hyperbounded. Proposition \ref{not-hyper2} provides an example with $\nu$ absolutely continuous (so $P_\nu$ is Harris). 2. It is easy to find $\nu$ not continuous with $P_\nu$ uniformly ergodic in $L^2$ and not hyperbounded: take $\nu_1<<\mu$ with $d\nu_1/d\mu$ in $L^2$, and define $\nu=\frac12(\delta_0+\nu_1)$. Since $\delta_0 * f=f$ for any $f$, we have $P_\nu=\frac12(I+P_{\nu_1})$. By \cite{DL} $P_{\nu_1}$ is uniformly ergodic in $L^2$ (and also hyperbounded by Proposition \ref{Lr-deriv}). Also $P_\nu$ is uniformly ergodic (see \cite{DL}), but is not hyperbounded by Proposition \ref{continuous}. See also the proof of Proposition \ref{no-hyper}, with $Q=P_{\nu_1}$ and $P=P_\nu\,$. 3.
Since $P_\mu$ is hyperbounded, the first part of Proposition \ref{not-hyper2} follows from \cite[Theorem 1.4]{GHR}. Another construction is given in \cite[Theorem 2.3]{GHR}. Our proof is different. The construction of Corollary \ref{not-hyper} shows that for every Rajchman probability $\nu_0$ there exists $\nu<<\nu_0$ with $P_\nu$ not hyperbounded, since \eqref{HT} fails. \begin{theo} \label{sum-estimates} Let $\nu$ be a probability on $\mathbb T$, such that $P_\nu$ is hyperbounded, mapping $L^p$ to $L^q$, with $1 \le p<q < \infty$. (i) If $1 \le p<2$, then we may assume $q\le 2$ (since $L^q \subset L^2$ for $q>2$), and then there exists a constant $C_{p}$ such that for every $a \in \mathbb Z$ and $b \in \mathbb N^+$ we have \begin{equation} \label{sum-estimate1} \sum_{|n| \le N}|\hat\nu(a+bn)|^2 \le C_p^2 \|P_\nu\|_{L^p\to L^q}^2 N^{2(p-1)/p}(2N+1)^{(2-q)/q}. \end{equation} (ii) If $2 \le p$, then there exists a constant $C_{q}$ such that for every $a \in \mathbb Z$ and $b \in \mathbb N^+$ we have \begin{equation} \label{sum-estimate2} \sum_{|n| \le N}|\hat\nu(a+bn)|^2 \le C_q^2 \|P_\nu\|_{L^p\to L^q}^2 N^{2/q}(2N+1)^{(p-2)/p}. \end{equation} \end{theo} \begin{proof} (i) Let $r=q/(q-1)$ be the dual index of $q\le 2$. For $f \in L^p$ we have $P_\nu f \in L^q$, so by the Hausdorff-Young theorem \cite[Theorem XII.2.3]{Z} \begin{equation} \label{p-q-2} \Big(\sum |\hat\nu (n)|^r|\hat f(n)|^r\Big)^{1/r} \le \|P_\nu f\|_q \le \|P_\nu\|_{L^p \to L^q} \|f\|_p \ . \end{equation} Fix $a \in \mathbb Z$ and $b \in \mathbb N^+$ and put $g_N:= \sum_{n=-N}^N {\text{e}}^{i(a+nb)x} $. Applying \eqref{p-q-2} to $f=g_N$ we obtain \begin{equation} \label{r-powers} \Big(\sum_{|n|\le N} |\hat\nu (a+bn)|^r\Big)^{1/r} \le \|P_\nu g_N\|_q \le \|P_\nu\|_{L^p\to L^q} \|g_N\|_p \ . \end{equation} Since $r \ge 2$, we deduce that \begin{equation} \label{jensen} \Big(\frac1{(2N+1)}\sum_{|n| \le N} |\hat\nu(a+bn)|^2 \Big)^{1/2} \le \Big(\frac1{(2N+1)}\sum_{|n| \le N} |\hat\nu(a+bn)|^r \Big)^{1/r} \le \frac{\|P_\nu\|_{L^p\to L^q}}{(2N+1)^{1/r}} \|g_N\|_p. \end{equation} Now $g_N(x)={\text{e}}^{iax} D_N(bx)$, so by change of variable $y=bx$ and periodicity we obtain $\|g_N\|_p^p =\|D_N\|_p^p$. When $p>1$, we use the estimate $\|D_N\|_p^p \le C_p N^{p-1} $ \cite[Lemma 2.1]{AAJRS} in \eqref{jensen} and obtain $$ \sum_{|n| \le N}|\hat\nu(a+bn)|^2 \le \|P_\nu\|_{L^p\to L^q}^2 C_{p}^2 N^{2(p-1)/p}(2N+1)^{(r-2)/r}\ , $$ which is \eqref{sum-estimate1}, since $(r-2)/r=(2-q)/q$. The computations in \cite{AAJRS} with the estimate $\int_0^\infty \big|\frac{\sin x}x\big|^p dx \le 1+\frac{1}{p-1}$ yield the existence of an absolute constant $c$ such that $C_p \le \frac{c}{p-1}$ for $1<p<2$. \smallskip When $p=1$, \cite[Theorem 16.3.4]{Ed2} yields $\nu<<\mu$ with $\phi:=\frac{d\nu}{d\mu} \in L^q$. Then $\sum |\hat\nu(n)|^r$ converges, by Hausdorff-Young, and using \eqref{jensen} we obtain $$ \sum_{|n| \le N}|\hat\nu(a+bn)|^2 \le \big(\sum_{n \in \mathbb Z} |\hat\nu(n)|^r\big)^{2/r} (2N+1)^{(2-q)/q} \le \|\phi\|_q^2 (2N+1)^{(2-q)/q}. $$ Since $\|P_\nu\|_{L^1\to L^q} =\|\phi\|_q$ by \cite[Exercise VI.9.59]{DS}, \eqref{sum-estimate1} holds also for $p=1$, with $C_1=1$. \medskip (ii) Let $p\ge 2$. By assumption, $P_\nu^*$ maps $(L^q)^*$ to $(L^p)^*$. Denoting $r=q/(q-1)$ and $s=p/(p-1)$, we have that $P_\nu^*$ maps $L^r$ to $L^s$, with $1< r<s$, and $p\ge 2$ implies $s \le 2$. The dual $P_\nu^*$ is induced by $\check\nu$, the reflected probability defined by $\check\nu(A)=\nu(-A)$.
We then have $\widehat{\check\nu}(n)= \overline{\hat\nu(-n)}$, and applying \eqref{sum-estimate1} to $P_\nu^*$ yields \eqref{sum-estimate2}, with $C_q =C_r \le c(q-1)$. \end{proof} {\bf Remarks.} 1. When $P_\nu$ maps $L^p$ to $L^q$ with $1<p<2<q$, we can use either \eqref{sum-estimate1} with $r=2$ (when $q\ge p/(p-1)$), or \eqref{sum-estimate2} with $p=2$ (otherwise). With $t:= \max\{q,p/(p-1)\}$ (observe that $t>2$) we then obtain $$ \sum_{|n| \le N}|\hat\nu(n)|^2 \le K\cdot N^{2/t}. $$ Note that in this case, $\limsup_{|n| \to \infty} |\hat\nu(n)|^2 \le \frac2t,$ by Hare \cite[Corollaries 2 and 2']{Ha}. \smallskip 2. When $P_\nu$ maps $L^p$ to $L^q$ with $1<p<q \le 2$, \eqref{r-powers} yields, with $r=q/(q-1)$, \begin{equation} \label{r-powers1} \sum_{|n|\le N} |\hat\nu (n)|^r \le K N^{(p-1)r/p}. \end{equation} Using Abel summation by parts and the estimate $\frac1{n^\beta} -\frac1{(n+1)^\beta} < \frac{\beta}{n^{\beta+1}}$, \eqref{r-powers1} yields the (necessary) condition \cite[(16.4.9)]{Ed2}. Our proof is different, and seems simpler. Moreover, \eqref{r-powers1} is a better necessary condition, since its failure is enough for ruling out mapping $L^p$ to $L^q$. 3. The case $q=2$ of \eqref{sum-estimate1} is in the thesis of Bonami \cite[p. 345]{Bo}, using Fej\'er kernels. \smallskip \begin{prop} \label{coefficients} Let $\nu$ be a probability on $\mathbb T$. If for some $\alpha>0$ $$ \sum_{n \in \mathbb Z} |\hat\nu(n)|^\alpha < \infty, $$ then $P_\nu$ is hyperbounded. More precisely, $P_\nu$ maps $L^p$ to $L^2$, for $p=\max\{1,\frac{2\alpha}{\alpha+2}\}$. \end{prop} \begin{proof} If $\alpha<2$, we have also $ \sum_{n \in \mathbb Z} |\hat\nu(n)|^2< \infty$, and then there is $\phi \in L^2$ with $\hat\phi=\hat\nu$, so by uniqueness $\nu<<\mu$ with $\frac{d\nu}{d\mu}=\phi$, and by Proposition \ref{Lr-deriv} $P_\nu$ maps $L^1$ to $L^2$, so it is hyperbounded. Assume now $\alpha>2$, and put $p=\frac{2\alpha}{\alpha+2}$. Then $1<p<2$. Let $f \in L^p$. By the Hausdorff-Young theorem we have $\sum_{n\in\mathbb Z} |\hat f(n)|^q < \infty$, with $q=\frac p{p-1}=\frac{2\alpha}{\alpha -2}$ the dual index to $p$. We use H\"older's inequality with $s=\frac q2>1$ and $\frac1r=1 -\frac1s =1-\frac2q=\frac2{\alpha}$ and obtain $$ \sum_{n\in\mathbb Z}|\widehat{P_\nu f}(n)|^2= \sum_{n\in\mathbb Z}|\hat\nu(n)\hat f(n)|^2 \le \Big( \sum_{n\in\mathbb Z}|\hat\nu(n)|^{2r}\Big)^{1/r} \Big( \sum_{n\in\mathbb Z}|\hat f(n)|^{2s}\Big)^{1/s} < \infty, $$ since $2s=q$ and $2r=\alpha$. We conclude that $P_\nu$ maps $L^p$ to $L^2$, so $P_\nu$ is hyperbounded. \end{proof} {\bf Remarks.} 1. Proposition \ref{coefficients} was first proved by Hare \cite[Corollary 1]{Ha}. Our proof is different, and seems simpler. 2. Proposition \ref{coefficients} applies whenever $|\hat\nu(n)|= \mathcal O(|n|^{-c})$ for some $c>0$. This condition was shown to imply hyperboundedness of $P_\nu$ in \cite[Theorem XII.5.24]{Z}; for additional information see \cite[Theorem 16.4.6(3)]{Ed2}. Singular probabilities satisfying this latter condition were constructed by Littlewood and by Wiener and Wintner; see \cite[Theorem XII.10.12]{Z}, and the historical section in \cite{BH}. 3. If $\nu$ satisfies the assumptions of Proposition \ref{coefficients}, then $P_\nu$ is Harris recurrent, since for $k> \alpha$ we have $\sum_n|\widehat{\nu^k}(n)|=\sum_n|\hat\nu(n)|^k<\infty$, so $\nu^k << \mu$, and $P_\nu^k=P_{\nu^k}$ is an integral operator. 4. Let $1<p_0 <2$. A sufficient condition of Edwards \cite[p. 
303]{Ed2} for $P_\nu$ to map $L^{p_0}$ to $L^2$ is $|\hat\nu(n)| = \mathcal O(|n|^{-1/s})$ for $|n|>1$, where $\frac1s=\frac1{p_0} -\frac12$. For this estimate, Proposition \ref{coefficients} yields only that $P_\nu$ maps $L^p$ to $L^2$ for $p_0<p<2$ (by taking $\alpha>s$ close to $s$). \medskip {\bf Notations.} For a sequence of numbers $(a_n)_{n\ge N}$ we put $\Delta a_n:=a_n-a_{n+1}$, and $\Delta^2 a_n:=\Delta(\Delta a_n)=a_n+a_{n+2}- 2a_{n+1}$. We also denote the Dirichlet Kernel $D_n(x)=\frac12+\sum_{k=1}^n\cos(kx)=\frac12\sum_{0\le |k|\le n} {\text{e}}^{ikx}$ and the Fej\'er kernel $K_n(x)=\frac1{n+1}\sum_{k=0}^n D_k(x)$. \begin{lem} \label{convex} Let $(a_n)_{n\ge N}$ be a positive sequence with $a_n\to 0$, and $\Delta^2 a_n\ge 0$ for $n\ge N$; then there exists a constant $C=C((a_n), N)>0$, such that $ C+\sum_{n=N}^\infty a_n\cos(nx)$ is the Fourier series of a non-negative function function in $ L^1(-\pi,\pi)$. \end{lem} \begin{proof} Given a real sequence $(a_n)_{n\ge 0}$, consider the formal Fourier series $s(x)=\frac12{a_0}+\sum_{n=1}^\infty a_n\cos(nx)$. By twice summation by parts we obtain that, formally, when $a_n\to 0$, $$ s(x)=\sum_{n=0}^\infty (n+1)\Delta^2 a_n K_n(x) $$ (see \cite[formula V.(1.7), p.183]{Z}; convexity is not used, only $|D_n(x)| \le \pi/|x|$ for $|x|>0$ \cite[Section II.5]{Z}, and $0 \le nK_n(x) \le 1/2\sin^2(\frac12 x)$ \cite[formula III.(3.2)]{Z}). If we assume, in addition to $a_n\to 0$, that $(a_n)_{n\ge 0}$ is convex, i.e. $\Delta^2a_n \ge 0$ for $n \ge 0$, then $a_n \ge 0$ for $n\ge 0$ \cite[Theorem III.4.1]{Z}, $s(x)$ converges for every $x \ne 0$ to a non-negative function (still denoted $s$), and $\sum_{n=0}^\infty (n+1) \Delta^2 a_n<\infty$; hence $s(x)\in L^1(-\pi,\pi)$ \cite[Theorem V.1.5]{Z}. \smallskip Now, under the assumptions of the lemma, we define $a_0,a_1,\dots,a_{N-1}$ by $a_{N-1}=2a_N - a_{N+1}$, $a_{N-2}=2a_{N-1}- a_N$, etc., so that $\Delta^2 a_n \ge 0$ for $n \ge 0$. Positivity of $s(x)$ yields that $\sum_{n=N}^\infty a_n\cos(nx) + \sum_{n=0}^{N-1}a_n\cos(nx)$ is positive for $x \ne 0$, so adding the norm $$C :=\sup_{|x| \le \pi} \Big|\sum_{n=0}^{N-1}a_n \cos(nx)\Big|,$$ the series $C+\sum_{n=N}^\infty a_n\cos(nx)=C+\frac12 \sum_{|n| \ge N} a_n{\text{e}}^{inx}$ is positive for $x\ne 0$. Integrability clearly holds. \end{proof} \begin{prop} \label{product=hyper} There exist two probabilities $\nu_1$ and $\nu_2$ on $\mathbb T$ such that $P_{\nu_1}P_{\nu_2}$ is hyperbounded, but both $P_{\nu_1}$ and $P_{\nu_2}$ are not hyperbounded. \end{prop} \begin{proof} By Lemma \ref{convex}, the series $g(x)=\sum_{n=2}^\infty \frac{\cos(nx)}{\log n}$ and $f(x)=\sum_{n=1}^\infty \frac{\cos(nx)}{\log(2n)}$, are integrable, and for some positive constant $C_2>0$, $f_2(x)=f(2x)+C_2$ is positive integrable. Set $$h(x)=g(x) - f(2x) = \sum_{k=1}^\infty \frac{\cos((2k+1)x)}{\log(2k+1)}.$$ Then $h$ is integrable. We would like to show that for some positive constant $C_1$, $h+C_1$ is non-negative. We use Theorem V.2.17 of \cite[p. 189]{Z}. It yields directly that $g(x)\approx \frac{\pi}{2x\log^2(1/x)}$ for $x\to 0^+$. We then apply it with $b(u)=\frac1{\log (2u)}$, and obtain that $f(2x)\approx\frac{\pi}{4x\log^2(1/x)}$ for $x\to 0^+$. Thus $h(x)\approx \frac{\pi}{2x\log^2(1/x)}$ for $x\to 0^+$, so $h(x)>0$ for $x \in(0,\epsilon)$. By \cite[Theorem I.2.6]{Z}, the series $g(x)$ and $f(x)$ converge uniformly for $|x|\ge \epsilon$; hence $h(x)$ is bounded below on $|x| \ge \epsilon$. 
Finally we conclude that for some positive constant $C_1$, $h+C_1$ is non-negative integrable. Let $f_1(x):= h(x)+C_1$, and define $\nu_1$ and $\nu_2$ by $\frac{d\nu_j}{d\mu} =\frac{f_j}{\|f_j\|_1}$. Since $\hat\nu_1(n)=0$ for even $n \ne 0$ and $\hat\nu_2(n)=0$ for odd $n$, we have $\nu_1*\nu_2=\mu$, so $P_{\nu_1}P_{\nu_2}$ is hyperbounded. However, as in the proof of Proposition \ref{not-hyper2}, for any $1<p<2$, each $P_{\nu_j}$ does not satisfy \eqref{hr}, so is not hyperbounded. \end{proof} {\bf Remark.} Proposition \ref{product=hyper} shows that powers in Proposition \ref{ritter} cannot be replaced by products. \begin{prop} \label{monotone} Let $\nu$ be a probability on $\mathbb T$, such that for some $C>0$, \begin{equation} \label{korner} |\hat\nu(n)|^2 \le C \cdot \frac1n \sum_{k=n+1}^{2n} |\hat\nu(k)|^2 \qquad n=1,2,\dots \end{equation} If $P_\nu$ is hyperbounded, then $P_\nu$ is Harris recurrent. More precisely, some convolution power $\nu^k$ is absolutely continuous. \end{prop} \begin{proof} Since $P_\nu$ is hyperbounded, $\nu$ is continuous by Proposition \ref{continuous}. For any continuous $\nu$, Wiener's characterization of continuity with \eqref{korner} imply that $\nu$ is Rajchman. Let $P_\nu$ map $L^p$ into $L^q$, $1 \le p <q$. We use condition \eqref{korner} and the estimate of the Fourier-Stieltjes coefficients given by Theorem \ref{sum-estimates}. When $1\le p<q\le 2$, \eqref{sum-estimate1} yields \begin{equation} \label{lower} N|\hat\nu(N)|^2 \le C^2\sum_{n=N+1}^{2N} |\hat\nu(n)|^2 \le C^2\sum_{n=1}^{2N} |\hat\nu(n)|^2 \le K^2(\log N)^{2/p}N^{2(p-1)/p} N^{(r-2)/r}, \end{equation} where $r =q/(q-1)$. This yields \begin{equation} \label{smallest} |\hat\nu(N)|^2 \le K^2 (\log N)^{2/p} N^{^{1-\frac2p + 1 -\frac2r }}. \end{equation} Since $p<q \le 2$, we have $s:= \frac2p +\frac2r - 2=\frac2p +\frac{2(q-1)}q -2 = \frac2p- \frac2q >0$. Hence for $k =[\frac1s]+1 $ we obtain $$ \sum_{ N=1}^\infty |\widehat{\nu^k}(N)|^2 = \sum_{ N=1}^\infty |\hat{\nu}(N)|^{2k} < \infty, $$ so $\nu^k<<\mu$, with $\frac{d\nu}{d\mu} \in L^2$. In particular, if $P_\nu$ maps $L^1$ into $L^2$, then $\nu*\nu$ is absolutely continuous. \smallskip When $2 \le p<q$, \eqref{sum-estimate2} yields similarly $$ N|\hat\nu(N)|^2 \le C^2 \sum_{n=1}^{2N} |\hat\nu(n)|^2 \le K^2(\log N)^{2/r}N^{2(r-1)/r} N^{(p-2)/p}, $$ which yields $$ |\hat\nu(N)|^2 \le K^2 (\log N)^{2/r} N^{^{1-\frac2p + 1 -\frac2r }}. $$ As before, $\sum_{ N=1}^\infty |\widehat{\nu^k}(N)|^2 < \infty$ for $k$ and $s$ defined as above, so $\nu^k <<\mu$. \end{proof} {\bf Remarks.} 1. By Cauchy-Schwarz, if $|\hat\nu(n)| \le C\cdot \frac1n \sum_{k=n+1}^{2n} |\hat\nu(k)|$ for $n\ge 1$, then \eqref{korner} holds. 2. Assumption \eqref{korner} is inspired by \cite{Ko}. The averaging is done in order to allow some coefficients to be zero. 3. With a minor change in the proof, assumption \eqref{korner} may be replaced by \begin{equation} \label{korner2} |\hat\nu(n)|^2 \le C \cdot \frac1n \sum_{k=1}^{n} |\hat\nu(k)|^2 \qquad n=1,2,\dots\ , \end{equation} which includes the case of $(|\hat\nu(n)|)_{n\ge 1}$ non-increasing. If $|\hat\nu(n)| \le C \cdot \frac1n \sum_{k=1}^{n} |\hat\nu(k)|$ for $n \ge 1$, then \eqref{korner2} holds. \bigskip {\it Application to uniform distribution modulo 1} Recall \cite[p. 1]{KN} that a sequence of reals $(x_k)_{k\ge 1}$ is called {\it uniformly distributed (u.d.) 
modulo 1} if for any subinterval $0\le a <b \le 1$ its fractional parts $\{x_k\}:=x_k-[x_k]$ satisfy $$ \lim_{N\to \infty} \frac1N\sum_{k=1}^N1_{[a,b)}(\{x_k\}) = b-a. $$ Since we are concerned with $[0,1)$ (and not an interval of length $2\pi$), we denote now $e(x):= {\text{e}}^{2\pi ix}$. By change of variables, now $\hat\nu(n) = \int_0^1 e(-nx)d\nu(x)$ for a probability $\nu$ on $\mathbb T$. By Weyl's criterion \cite[p. 7]{KN}, $(x_k)_{k \ge 1}$ is u.d. if and only if for every integer $m\not=0$ $$ \frac1N\sum_{k=1}^N e(m x_k)\to 0. $$ Weyl proved that if $(n_k)_{k\ge 1}$ is a sequence of distinct integers, then for Lebesgue almost every $x$ the sequence $(n_k x)$ is u.d. mod 1 \cite[p. 32]{KN}. \begin{prop} \label{lyons} Let $\nu$ be a probability on $\mathbb T$. If $|\hat\nu(n)| =\mathcal O((\log|n|)^{-\gamma})$ for some $\gamma>1$, then for every strictly increasing sequence of positive integers $(n_k)_{k\ge 1}$, the sequence $(n_kx)$ is u.d. for $\nu$ a.e. $x$. \end{prop} This is a special case (with $\Phi(n)=(\log n)^{-\gamma}$) of Theorem 3 of Lyons \cite{Ly}. \smallskip \begin{proof} Fix a probability $\nu$ and a sequence sequence $(n_k)_{k\ge 1}$ of distinct integers. In order to prove that $(n_k x)$ is u.d. mod 1 for $\nu$-a.e. $x$, we have to prove that $\nu$-a.e. $x$ satisfies $$ S_N(x,m):=\frac1N\sum_{k=1}^N e(m n_kx)\to 0 \quad \text{for every } \ 0\ne m \in \mathbb Z $$ Applying to $\nu$ the method of Davenport, Erd\"os, LeVeque \cite{DEL} (see \cite[p. 33]{KN}), it suffices to check that for every $0 \ne m \in \mathbb Z$, \begin{equation} \label{del} \sum_{N=1}^\infty \frac1{N} \int |S_N(x,m)|^2d\nu(x) = \sum_{N=1}^\infty \frac1{N^3} \sum_{k,j=1}^N \hat\nu(m(n_k-n_j))<\infty. \end{equation} When $|\hat\nu(n)|=\mathcal O((\log|n|)^{-\gamma})$, convergence in \eqref{del} holds. \end{proof} {\bf Remarks.} 1. If $\nu$ satisfies all the assumptions of Proposition \ref{monotone}, then by \eqref{smallest} $|\hat\nu(n)| = \mathcal O(|n|^{-c})$ for some $c>0$. so we can apply Proposition \ref{lyons}. 2. Proposition \ref{lyons} applies to the above mentioned examples by Littlewood and by Wiener-Wintner, of $\nu$ singular with $|\hat\nu(n)|=\mathcal O(|n|^{-c})$ for some $c>0$. 3, Additional singular probabilities which satisfy the assumption of Proposition \ref{lyons} can be obtained from Theorem 1.2 of K\"orner \cite{Ko} (with $\phi(n) =(\log(n+1))^{-\gamma}$). 4. Let $\nu_0$ be the probability of Proposition \ref{not-hyper2}, and $\nu:=\nu_0*\nu_0$. Then Proposition \ref{lyons} applies, but $P_\nu$ is not hyperbounded, by Proposition \ref{ritter}. \begin{prop} \label{bded-gaps} Let $\nu$ be a probability on $\mathbb T$, such that $P_\nu$ is hyperbounded. Then for every sequence of distinct positive integers $(n_k)_{k\ge 1}$ with bounded gaps, $(n_kx)$ is u.d. for $\nu$ a.e. $x$. \end{prop} \begin{proof} Let $(n_k)$ have bounded gaps: $|n_{k+1}-n_{k}| \le d$ for any $k \ge 1$. By the Cauchy-Schwarz inequality \begin{equation} \label{double} \sum_{k,j=1}^N\hat\nu(m(n_k-n_j)) \le N|\hat\nu(0)|+ N \Big( \sum_{k\not=j}^N |\hat\nu(m(n_k-n_j))|^2 \Big)^{1/2}. \end{equation} {\it Claim: Each value of $n_k-n_j$ can not occur more than $N$ times.} \noindent {\it Proof of claim:} Denote $c_N(\ell) := |\{1\le k \le N: n_k=\ell\}|$. Since $(n_k)$ are distinct, $c_N(\ell)$ is 0 or 1, and is 1 for $N$ values of $\ell$. Put $n_N^*= \max_{1\le k \le N} n_k$ and $V_N(t):=|\{1 \le k<j \le N: n_k-n_j=t\}|$. Then $$ V_N(t)=\sum_{\ell=1}^{n_N^*} c_N(\ell+t)c_N(\ell) \le N. 
$$ This means that the value $t \ne 0$ occurs as a difference at most $N$ times. \smallskip The indices of the Fourier-Stieltjes coefficients in \eqref{double} are included in the interval with end points the maximal difference, which is at most $|m|(N-1)d$ (note that $m$ is fixed), so \begin{equation} \label{double-sum} \sum_{k,j=1}^N\hat\nu(m(n_k-n_j)) \le N|\hat\nu(0)|+N \Big(N \sum_{k=-|m|Nd}^{|m|Nd} |\hat\nu(k)|^2 \Big)^{1/2} . \end{equation} Since $P_\nu$ is hyperbounded, we apply Theorem \ref{sum-estimates}. We first assume that $P_\nu$ maps $L^p$ into $L^q$ with $1 \le p<2$, and $q \le 2$. Then \eqref{double-sum} yields $$ \sum_{k,j=1}^N\hat\nu(m(n_k-n_j)) \le CN +C'_m(\log N)^{1/p}N^{3/2}N^{(p-1)/p}N^{(r-2)/2r}, $$ by the estimate \eqref{sum-estimate1}, with $r=q/(q-1)$. But $$ \frac32 +\frac{p-1}p +\frac{r-2}{2r}=3 -\frac1p -\frac{q-1}q =2 -\frac1p+\frac1q <2 $$ since $\frac1p>\frac1q$, so substituting into \eqref{del} we obtain convergence of the series; hence $(n_kx)$ is uniformly distributed mod 1 for $\nu$-a.e. $x$. We now assume that $P_\nu$ maps $L^p$ to $L^q$ with $2 \le p<q$. Then \eqref{double-sum} yields $$ \sum_{k,j=1}^N\hat\nu(m(n_k-n_j)) \le CN +C'_m(\log N)^{1/r }N^{3/2}N^{(r-1)/r}N^{(p-2)/2p}, $$ by the estimate \eqref{sum-estimate2}, with $r=q/(q-1)$. But $$ \frac32 +\frac{r-1}r +\frac{p-2}{2p}=3 -\frac1r -\frac1p =2 -\frac1p+\frac1q <2, $$ and we obtain as before convergence of the series \eqref{del}, which proves that $(n_kx)$ is uniformly distributed mod 1 for $\nu$-a.e. $x$. \end{proof} {\bf Remark.} In the right-hand side of the estimate \eqref{double-sum}, we can replace the endpoints of the summation interval by $\pm |m|n_N^*$. When $P_\nu$ maps $L^p$ into $L^q$, there is an $\alpha= \alpha(p,q)$ such that $(n_k x)$ is u.d. for $\nu$ a.e. $x$ when $(n_k)$ are distinct with $n_N^*=\mathcal O(N^\alpha)$; in particular, this applies when $n_k=\mathcal O(k^\alpha)$. When $1 \le p<q \le 2$ or $2\le p<q$, Theorem \ref{sum-estimates} yields $\alpha(1-\frac2p+\frac2q) <1$ in either case. When $p=1$ and $q=2$, or $p=2$ and $q=\infty$, for any $\alpha \ge 1$ the sequence $(n_k x)$ is u.d. mod 1. Since $\frac1p > \frac1q$, in all other cases $1-\frac2p+\frac2q <1$, so the case of bounded gaps (for which $\alpha=1$) is always a special case. \medskip \bigskip {\bf Acknowledgement.} The second author expresses his deep gratitude to the late Professor Shaul Foguel (1931-2020), his Ph.D. advisor, for introducing him to research in the ergodic theory of Markov operators. \bigskip
2,869,038,155,338
arxiv
\section{Introduction} Many control systems in application domains as diverse as electrical engineering, mechanics, or thermodynamics can be written as port-Hamilto\-nian systems \cite{Brogliato07,Duindam2009,Jacob2012,van2014port}. In recent years, an implicit description of the energy properties \cite{Schaft2020_DiracLagrangeNonlinear,vdSchaft2018} has led to the following class of linear descriptor systems \cite{beattie2018linear,Mehl2018} \begin{subequations}\label{eq:phDAE} \begin{align} \tfrac{d}{dt}{Ex}&= (J-R)Qx + Bu\label{e:phDAE_dyn}, \qquad Ex(0) = w^0\in\operatorname{im} E,\\ y &= B^\top Qx,\label{e:phDAE_output} \end{align} \end{subequations} where the skew-symmetric {\em structure matrix} $J$ describes the interconnection structure of the systems, along which the energy flows are exchanged between its parts and preserving the total energy, whereas the symmetric positive semi-definite {\em dissipation matrix} $R$ specifies how the system dissipates energy. The matrix $E$ allows an implicit definition of the energy and when it is singular models algebraic constraints arising from symmetries of the energy. Its product $E^\top Q = Q^\top E\ge 0$ with the matrix $Q$ corresponds to the energy stored in the system. For physical system's models, $\mathcal E} \newcommand{\frakE}{\mathfrak E(u) = \int_0^Tu(t)^\top y(t)\,dt$ specifies the {\em energy supplied} to the system over a given time horizon $[0,T]$. In the case of electrical circuits, for instance, the pair of input $u$ and output $y$ variables are equal to the pair of voltage and current at a port of the circuit. For an overview of further physical examples of $u$ and $y$ for pH systems see \cite[Table B.1, p.~205]{van2014port}. Besides the supply rate $u^\top y$, also the energy Hamiltonian $H(x)\doteq \tfrac12 x^\top E^\top Qx$ is key in the analysis of port-Hamiltonian systems as the combination of both leads to the energy balance \begin{equation}\label{eq:EnergyBalance} H(x(T))-H(x(0)) = \int_0^T u(t)^\top y(t)\,\text{d}t - \int_0^T \|R^{\frac12}Qx(t)\|^2\,\text{d}t. \end{equation} From the above equality it can be immediately seen that the dynamics are dissipative with respect to the supply rate $u^\top y$---in the sense of Willems \cite{Willems1972a}---and hence they are passive. Thus stabilization and control can be approached by passivity-based techniques, cf.\ \cite{Ortega08,van2000l2}. Despite the widespread interest of pH system for modeling, simulation and control of dynamic systems, surprisingly little has been done in terms of optimal control. In \cite{Schaller2020a} we have shown in the ODE case, i.e. $E=I$ in \eqref{eq:phDAE}, that the inherent dissipativity can be exploited in Optimal Control Problems (OCPs) which require to transfer the system state from a prescribed initial condition at $t=0$ to some target set at $t=T$, while minimizing the supplied energy $\int_0^Tu(t)^\top y(t)\,dt$. Observe that this non-quadratic objective implies that the considered OCP is singular, i.e., the Hessian of the OCP Hamiltonian with respect to $u$ is $0$. Hence the analysis of existence of solutions is more difficult and Riccati theory cannot be applied. 
In spite of this unfavorable situation, we have shown in \cite{Schaller2020a} that in the special ODE-case with $E = I$ and $Q>0$ the OCP is strictly dissipative at least with respect to a subspace and that optimal controls are completely characterized by the state and its adjoint.\footnote{Specifically, we have proven a sufficient condition for any singular arc to be of order $2$.} First steps towards the infinite-dimensional case are taken in \cite{Philipp2021}. In the present paper, it is our main objective to study the OCP in the more general situation $\det E =0$ with regard to reachability properties of the dynamics, dissipativity, and turnpike behavior, cf.\ Figure~\ref{fig:sb}. Classically, the latter means that for varying initial conditions and horizons the optimal solutions reside close to the optimal steady state for the majority of the time. We refer to \cite{Faulwasser2021} for comments on the historical origins and for a recent overview of turnpike results. The turnpike phenomenon is usually analyzed in two different situations: \begin{enumerate} \item[(a)] supposing that the OCP is regular allows transcribing the first-order optimality conditions as a system ODEs, cf.\ \cite{Gruene2019, Heiland2020, Pighin2020a, Trelat2015}; \item[(b)] supposing the underlying system, respectively, the OCP as such is strictly dissipative with respect to a specific steady state, cf.\ \cite{Damm2014, Faulwasser2017, Gruene2016a, Trelat18}. \end{enumerate} The present paper refers to neither of these situations. While our approach is strongly based on dissipativity, we will, however, not have to assume this property of the OCP. In contrast, a specific subspace dissipativity property is inherent to any port-Hamiltonian system. Indeed our main result establishes a generalized turnpike behavior of the optimal solutions towards the conservative subspace induced by the nullspace of the dissipation matrix $RQ$, see Theorem \ref{thm:DAE_tp}. Our approach also has a strong relation to non-linear mechanical systems, where the dynamics and symmetries give rise to a manifold of optimal input-state pairs, cf.\ \cite{Faulwasser2021b, Faulwasser2020a}. As is well known, a linear differential algebraic equation is closely related to its corresponding matrix pencil. In \cite{Mehl2020} the pencils associated to DAEs of the form \eqref{eq:phDAE} are called ``dissipative Hamiltonian''. Here, we adopt this notion and characterize this class of pencils (see Theorem \ref{t:ph_pencil_charac}). We also characterize the regularity of such pencils (Proposition \ref{p:regular}) and when their index is at most one (Proposition~\ref{p:ind1}). We regard these results as interesting contributions in their own right. Yet, since they are somewhat tangential to the main goal of characterizing the optimal solutions for varying initial conditions and horizon, we outsourced them to Appendix \ref{a:matpencils}. \begin{figure}[!h] \centering \includegraphics[width=.9\linewidth]{plotting_folder/sb} \caption{Summary and outline of the main results in this work.} \label{fig:sb} \end{figure} The paper is arranged as follows. Preliminaries and the problem state are given in Section \ref{s:DAE1}. Therein we introduce the considered OCP for port-Hamiltonian descriptor systems and we show how the algebraic constraints in the DAE can be eliminated via a structure-preserving state transformation proposed in \cite{beattie2018linear}. 
Having thus reduced the DAE system to a port-Hamiltonian ODE system with feed-through term, Section \ref{s:ODE} studies the corresponding ODE-constrained OCP. In Subsection \ref{ss:ODE_tp} we prove our main result for the ODE case concerning subspace turnpikes. In addition, we prove that the adjoint state performs a turnpike towards zero. The section ends with a numerical example of a modified mass-spring damper system. In Section \ref{s:DAE2} we transfer the statements from Section \ref{s:ODE} to the original DAE case and illustrate our results by means of a numerical example from robotics. We summarize the paper in Section~\ref{s:conclusion} and give an outlook considering future work. The paper is supplemented with a comparatively extensive appendix. In Appendix \ref{a:dae_solutions} we provide a concise recap of the solution theory of regular DAEs. Appendix \ref{a:matpencils} contains a detailed treatise of regular dissipative Hamiltonian pencils, including the above-mentioned characterization. Finally, we prove in Appendix \ref{a:existence} that optimal solutions exist for the kind of OCPs considered in this paper. \ \\ \textbf{Notation. } In the sequel, $\|\cdot\|$ always denotes the Euclidean norm on $\R^n$. The notion $L^1(0,T;\mathbb U)$ stands for the set of all (equivalence classes of) integrable functions on the interval $[0,T]$ with values in $\mathbb U\subset\R^m$. The set of eigenvalues (spectrum) of a real square matrix $A$ is denoted by $\sigma(A)$. The solution of an initial value problem $\dot x = Ax + Bu$, $x(0) = x^0$, with given control $u$ is denoted by $x(\,\cdot\,;x^0,u)$. \section{Preliminaries}\label{s:DAE1} In this paper we consider port-Hamiltonian descriptor systems of the form \eqref{eq:phDAE} where $J,R,Q,E\in\R^{n\times n}$, $B\in\R^{n\times m}$, and \begin{equation}\label{e:matrix_conditions} J=-J^\top,\quad R=R^\top\geq 0,\quad Q^\top E = E^\top Q\geq 0, \end{equation} We say that the pair $(u,x)\in L^1_{\rm loc}([0,\infty),\R^m)\times L^1_{\rm loc}([0,\infty),\R^n)$ satisfies \eqref{e:phDAE_dyn}, if $Ex\in W^{1,1}_{\rm loc}([0,\infty),\R^n)$ and \eqref{e:phDAE_dyn} holds almost everywhere on $(0,\infty)$. In this case, we say that $x$ is a solution of \eqref{e:phDAE_dyn}. We assume throughout the paper that the DAE \eqref{e:phDAE_dyn} is regular and has differentiation index one (for the definition of both notions see Appendix \ref{a:dae_solutions}), since this guarantees unique solutions of \eqref{e:phDAE_dyn} (cf.\ Proposition \ref{p:vks}). \begin{remark} It is well known that DAEs of the form $\frac d{dt}Ex = Ax + b$ with $E,A\in\R^{n\times n}$ are closely related to the matrix pencil $P(s) = sE-A$. In Appendix \ref{a:diss-ham-pencil} we coin pencils of the form $P(s) = sE - (J-R)Q$ with matrices $J,R,Q,E\in\R^{n\times n}$ satisfying the conditions~\eqref{e:matrix_conditions} {\em dissipative Hamiltonian} and characterize the regular ones among the class of all regular pencils $P(s) = sE-A$ (see Theorem \ref{t:ph_pencil_charac}). We also characterize regularity of these pencils and the property of having index at most one (see Propositions \ref{p:regular} and \ref{p:ind1}). \end{remark} \subsection{Problem statement} As already indicated in the Introduction, the power supplied (or withdrawn) from the port-Hamiltonian system \eqref{eq:phDAE} via the ports at time~$t$ is represented by $ u(t)^\top y(t). $ Hence, the energy supplied over a time horizon $[0,T]$ is given by $ \int_0^T u(t)^\top y(t)\,dt. 
$ It is therefore natural to consider the following problem: {\em Given an initial datum $w^0\in\operatorname{im} E$, a target set $\Psi\subset\operatorname{im} E$, and a control set $\mathbb{U}\subset\R^m$, what is the minimal energy supply for steering $w^0$ into $\Psi$?} Hence, the considered OCP reads: \begin{align}\label{e:phDAE_OCP} \begin{split} \min_{u\in L^1(0,T;\mathbb{U})} &\int_0^T u(t)^\top y(t)\,\text{d}t\\ \text{s.t. }\tfrac{d}{dt}Ex(t) &= (J-R)Qx(t)+Bu(t)\\ y(t)&=B^\top Qx(t)\\ Ex(0)&=w^0\in\operatorname{im} E,\quad Ex(T)\in\Psi\subset\operatorname{im} E. \end{split} \end{align} We shall assume throughout that the target set $\Psi\subset\operatorname{im} E$ is closed and that the set $\mathbb U\subset\R^m$ of control constraints is convex and compact and contains the origin in its interior $\operatorname{int}\mathbb U$. The {\em energy Hamiltonian} of the pH-DAE system \eqref{eq:phDAE} is given by \begin{align*} {H}(x) \doteq \tfrac12\cdot x^\top E^\top Q x. \end{align*} \begin{lemma}\label{l:hamiltonian} If $(u,x,y)$ satisfies \eqref{eq:phDAE}, then $H\circ x\in W^{1,1}_{\rm loc}([0,\infty))$ and \begin{align} \label{eq:energybalance} \tfrac{\text{d}}{\text{d}t} H(x(t)) = u(t)^\top y(t) - \|R^{\frac12}Q x(t)\|^2. \end{align} \end{lemma} \begin{proof} Let $P$ denote the orthogonal projection onto $\operatorname{im} E^\top$. Since $I-P$ maps onto $(\operatorname{im} E^\top)^\perp = \ker E$, we have $E = EP + E(I-P) = EP$. Let $E^\dagger$ denote the Moore-Penrose inverse of $E$. Then $P = E^\dagger E$ and therefore $$ Ex\in W^{1,1}_{\rm loc}([0,\infty),\R^n) \quad\Longleftrightarrow\quad Px\in W^{1,1}_{\rm loc}([0,\infty),\R^n). $$ Since $H(z) = \frac 12 z^\top PE^\top Qz = \frac 12 (Pz)^\top Q^\top E(Pz)$, it follows that $H\circ x\in W^{1,1}_{\rm loc}([0,\infty))$ with generalized derivative \begin{align*} \tfrac d{dt}(H\circ x) &= x^\top Q^\top E\frac d{dt}Px = x^\top Q^\top\frac d{dt}Ex = x^\top Q^\top\big[(J-R)Qx + Bu\big]\\ &= y^\top u - x^\top Q^\top RQx, \end{align*} as $J$ is skew-symmetric and $y = B^\top Qx$. \end{proof} If $(u,x,y)$ satisfies the port-Hamiltonian DAE-system \eqref{eq:phDAE}, then Lemma \ref{l:hamiltonian} immediately implies the following energy balance \eqref{eq:EnergyBalance} which we rewrite as \begin{align}\label{e:DAE_diss} \int_0^T u(t)^\top y(t)\,\text{d}t = H(x(T))-H(x^0) + \int_0^T \|R^{\frac12}Qx(t)\|^2\,\text{d}t. \end{align} Put differently, the minimization of the supplied energy is equivalent to minimizing the sum of the overall energy $H(x(T))$ at time $T$ and the internal dissipation given by the last term in \eqref{e:DAE_diss}. \subsection{Decomposition into differential and algebraic part} Next, we briefly summarize the transformation proposed in \cite{beattie2018linear} that allows to reformulate the pH-DAE-constrained OCP \eqref{e:phDAE_OCP} as a pH-ODE constrained one. \begin{theorem}[\cite{beattie2018linear}]\label{t:beattie} Suppose that the pH-DAE \eqref{eq:phDAE} is regular with index one and let $n_1 \doteq \dim\operatorname{im} E$. 
Then there exist invertible matrices $U,V\in\mathbb{R}^{n\times n}$ such that \eqref{eq:phDAE} transforms into \begin{subequations} \label{e:beattie} \begin{align} \label{e:beattie_dyn} \frac{d}{dt} \begin{pmatrix} I_{n_1}&0\\ 0&0 \end{pmatrix} \begin{pmatrix} z_1\\z_2 \end{pmatrix} &= \begin{pmatrix} \left(J_{11}-R_{11}\right)Q_{11}&0\\ \left(J_{21}-R_{21}\right)Q_{11}&\left(J_{22}-R_{22}\right)Q_{22} \end{pmatrix} \begin{pmatrix} z_1\\ z_2 \end{pmatrix} + \begin{pmatrix} B_1\\B_2 \end{pmatrix}u,\\ \label{e:beattie_obs} y&=\begin{pmatrix}B_1^\top,B_2^\top\end{pmatrix}\begin{pmatrix} Q_{11}&0\\ 0&Q_{22} \end{pmatrix} \begin{pmatrix} z_1\\z_2 \end{pmatrix}, \end{align} \end{subequations} where \begin{align} \begin{split}\label{e:matrix_transforms} U^\top EV = \begin{pmatrix} I_{n_1}&0\\ 0&0 \end{pmatrix},\quad U^{-1}QV = \begin{pmatrix} Q_{11}&0\\ 0&Q_{22} \end{pmatrix}\\ U^\top JU = \begin{pmatrix} {J}_{11}&{J}_{12}\\ -{J}_{12}^\top&{J}_{22} \end{pmatrix},\quad U^\top RU = \begin{pmatrix} {R}_{11}&{R}_{12}\\ {R}_{12}^\top&{R}_{22} \end{pmatrix}, \end{split} \end{align} with $R_{12} = J_{12}$ and \begin{align*} U^\top B = \begin{pmatrix} B_1\\B_2 \end{pmatrix},\quad V^{-1}x = \begin{pmatrix}z_1\\z_2\end{pmatrix}. \end{align*} The matrices $J_{22}-R_{22}$ and $Q_{22}$ are invertible and $Q_{11}=Q_{11}^\top\ge 0$. \end{theorem} \begin{proof} The proof follows as a particular case of the argumentation in \cite[Section 5]{beattie2018linear} and an additional state transformation $z_1 = E_{11}x_1$, where $E_{11}$ and $x_1$ are as in \cite[Section 5]{beattie2018linear}. \end{proof} A direct consequence of this decomposition and the invertibility of $(J_{22}-R_{22})Q_{22}$ is the following result, where the state $z_2$ is eliminated in the output $y$ and the system \eqref{e:beattie} is reformulated as a generalized pH-ODE system in the state $z_1$. \begin{corollary}[\cite{beattie2018linear}]\label{c:beattie} Suppose that the ph-DAE \eqref{eq:phDAE} is regular and has index one. Then the system \eqref{e:beattie} can be written as \begin{align} \begin{split}\label{e:phDAE_ODE} \dot z_1 &= (J_{11}-R_{11})Q_{11} z_1 +(\hat{B}-\hat{P})u,\\ y &= \left(\hat{B}+\hat{P}\right)^\top Q_{11}z_1 + \left(\hat{S}+\hat{N}\right)u, \end{split} \end{align} where, abbreviating $L_{ij}\doteq J_{ij}-R_{ij}$, $i,j=1,2$, \begin{alignat*}{4} &\hat{B}&&= B_1 - \frac12 L_{21}^\top L_{22}^{-\top}B_2,\qquad &&\hat{P}&&= -\frac12 L_{21}^\top L_{22}^{-\top}B_2\\ &\hat{S}&&= - \frac12 B_2^\top\left(L_{22}^{-1} + L_{22}^{-\top}\right)B_2,\qquad &&\hat{N}&&= - \frac12 B_2^\top\left(L_{22}^{-1} - L_{22}^{-\top}\right)B_2 \end{alignat*} For the state $z_2$ we have $$ z_2 = -Q^{-1}_{22}L_{22}^{-1}\big(L_{21}Q_{11}z_1 + B_2u\big). $$ Moreover, setting $\hat{H}(z_1)\doteq \frac 12\cdot z_1^\top Q_{11}z_1$, we have $H(x) = \hat H(z_1)$ and \begin{align} \label{eq:costfunceq_reform} \frac{d}{dt} \hat{H}(z_1(t)) = y(t)^\top u(t) - \begin{pmatrix} z_1(t)\\u(t) \end{pmatrix}^\top \underbrace{\begin{pmatrix} Q_{11} R_{11}Q_{11} & Q_{11} \hat{P}\\ \hat{P}^\top Q_{11}& \hat{S} \end{pmatrix}}_{=:\hat W} \begin{pmatrix} z_1(t)\\u(t) \end{pmatrix}. 
\end{align} \end{corollary} \begin{remark}\label{r:xz} It follows from Corollary \ref{c:beattie} and Lemma \ref{l:hamiltonian} that, given a control \\$u~\in~L^1_{\rm loc}([0,\infty),\R^m)$, we have $$ x(t)^\top Q^\top RQx(t) = \vek{z_1(t)}{u(t)}^\top\hat W\vek{z_1(t)}{u(t)} $$ for all solutions $x$ of the DAE in \eqref{e:phDAE_OCP} and the corresponding solutions $z_1$ of the ODE in \eqref{e:phDAE_ODE}. But in fact, an easy computation shows that \begin{align}\label{e:RW} \bar x^\top Q^\top RQ\bar x = \vek{\bar z_1}{\bar u}^\top\hat W\vek{\bar z_1}{\bar u} \end{align} holds for $\bar x,\bar z\in\R^n$ and $\bar u\in\R^m$ whenever $\bar z=V^{-1}\bar x$ and $\bar z_1,\bar z_2,\bar u$ are related via \begin{align}\label{e:z2} \bar z_2 = -Q^{-1}_{22}L_{22}^{-1}\big(L_{21}Q_{11}\bar z_1 + B_2\bar u\big). \end{align} In particular, $\hat W$ is positive semi-definite. \end{remark} Corollary \ref{c:beattie} allows to reformulate OCP~\eqref{e:phDAE_OCP} as follows: \begin{align}\label{e:phDAE_OCPODE} \begin{split} \min_{u\in L^1(0,T;\mathbb{U})} &\int_0^T u(t)^\top y(t)\,\text{d}t\\ \text{s.t. }\dot z_1 &= (J_{11}-R_{11})Q_{11}z_1 +(\hat{B}-\hat{P})u,\\ y &= \left(\hat{B}+\hat{P}\right)^\top Q_{11}z_1 + \left(\hat{S}+\hat{N}\right)u.\\ z_1(0)&=z_1^0,\quad z_1(T)\in\Phi_1, \end{split} \end{align} where $$ z_1^0 \doteq U^\top w^0 \qquad\text{and}\qquad \Phi_1 \doteq U^\top\Psi. $$ Note that \eqref{e:matrix_transforms} implies $U^\top\operatorname{im} E = \R^{n_1}\times\{0\}\cong\R^{n_1}$. Hence, $z_1^0\in\R^{n_1}$ and $\Phi_1\subset\R^{n_1}$. \begin{remark}\label{r:prelim_DAE} {\bf (a)} The starting point of our analysis was an OCP with DAE-con\-straints. We made use of the construction from \cite{beattie2018linear} to reduce it to an ODE-constrained OCP, thereby {\em preserving the port-Hamiltonian structure}. However, the price to pay is the arising feed-through term $(\hat S+\hat N)u$ in the output $y$. {\bf (b)} Once an OCP is feasible (i.e., the admissible set is non-empty), it is advisable to show the existence of optimal solutions. Since the OCPs \eqref{e:phDAE_OCP} and \eqref{e:phDAE_OCPODE} are equivalent, it suffices to consider the ODE-constrained OCP \eqref{e:phDAE_OCPODE}. And in fact, we prove in Corollary \ref{c:ODE_optsol} below that an optimal solution of OCP \eqref{e:phDAE_OCPODE} exists, whenever the state $z_1^0$ can be steered to a point in $\Phi_1$ at time $T$ under the given dynamics. {\bf (c)} In view of the energy balance \eqref{e:DAE_diss} and $H(x) = \frac 12(Ex)^\top QE^\dagger(Ex)$, in the case $RQ=0$ the OCP \eqref{e:phDAE_OCP} is equivalent to minimizing the function $f(w) = w^\top QE^\dagger w$ on the set of states $w\in\Psi$ that are reachable from $w^0$ (inside $\operatorname{im} E$) at time $T$. We are not going to detail this case here. That is, we always assume that $RQ\neq 0$, which means that the system dissipates energy at certain states. Note that this is equivalent to $\hat W\neq 0$ by \eqref{e:RW}. \end{remark} \section{Optimal control of pH ODE systems with feed-through}\label{s:ODE} Next, we analyze the ODE-constrained OCP \eqref{e:phDAE_OCPODE} with regard to algebraic properties, reachability aspects, and the turnpike phenomenon. These properties will then be translated back to the corresponding properties of the original DAE-constrained OCP \eqref{e:phDAE_OCP} in Section~\ref{s:DAE2}. 
Throughout this section, we consider OCPs with dynamic constraints given by port-Hamilto\-ni\-an ODE systems with feed-through \begin{subequations}\label{e:phODE} \begin{align} \dot x &= (J-R)Qx + (B-P)u\label{e:phODE_dyn}, \qquad x(0) = x^0,\\ y &= (B+P)^\top Qx + Du\label{e:phODE_out}. \end{align} \end{subequations} Moreover, $J,R,Q\in\R^{n\times n}$ and $D\in\R^{m\times m}$ are constant matrices satisfying \begin{equation}\label{e:phODE_matrix_conditions} J=-J^\top,\quad R=R^\top\ge 0,\quad Q=Q^\top\ge 0,\quad S\doteq\tfrac 12(D+D^\top)\ge 0 \end{equation} and $B,P\in\R^{n\times m}$ such that \begin{align}\label{e:W} W \doteq \begin{pmatrix}QRQ & QP\\ P^\top Q & S\end{pmatrix} \ge 0. \end{align} The latter condition is obviously trivially satisfied if $P=0$. In \cite{Schaller2020a} we made first observations concerning the special case with $P=0$, $D=0$, and $Q>0$. Subsequently, we go beyond~\cite{Schaller2020a}, i.e. we explore the reachability properties of \eqref{e:phODE} and prove novel results concerning input-state subspace turnpikes as well as an adjoint turnpike for the ODE-constrained OCP~\eqref{e:phDAE_OCPODE}. \begin{remark}[Dissipative Hamiltonian matrices] In Appendix \ref{a:diss-ham-mat} we characterize the matrices $A$ which can be written in the form $A = (J-R)Q$ (see Theorem \ref{t:ph-charac}). It turns out that this property is purely spectral, i.e., it can be read off the Jordan canonical form of $A$. \end{remark} We say that a linear map $T : \mathcal L} \newcommand{\frakL}{\mathfrak L\to\mathcal L} \newcommand{\frakL}{\mathfrak L$, mapping a subspace $\mathcal L} \newcommand{\frakL}{\mathfrak L\subset\R^n$ (or $\C^n$) to itself is {\em $Q$-symmetric} ({\em $Q$-skew-symmetric}, {\em $Q$-positive semi-definite}), if it has the respective property with respect to the positive semi-definite inner product $$ [\cdot,\cdot] \doteq \<Q\cdot,\cdot\>, $$ restricted to $\mathcal L} \newcommand{\frakL}{\mathfrak L$. For example, $T$ is $Q$-skew-symmetric if $[Tx,y] = -[x,Ty]$ for all $x,y\in\mathcal L} \newcommand{\frakL}{\mathfrak L$. Let $A \doteq (J-R)Q$, where $J,R,Q$ are as in \eqref{e:phODE_matrix_conditions}. By Theorem \ref{t:ph-charac} none of the eigenvalues of $A$ has a positive real part. It follows from the real Jordan form that there exists a (spectral) decomposition \begin{equation}\label{e:decompN1} \R^n = N_1\oplus N_2 \end{equation} such that both subspaces $N_1$ and $N_2$ are $A$-invariant, $\sigma(A|_{N_1})\subset\rm{i}\R$, and $A|_{N_2}$ is Hurwitz. In particular, we have $A = A_1\oplus A_2$ with respect to the decomposition \eqref{e:decompN1}. This decomposition will frequently be referred to in the sequel. Although the decomposition is not needed to prove our main results on the turnpike behavior of optimal solutions in Subsection \ref{ss:ODE_tp}, the next proposition provides some interesting geometrical insight which also explains the behavior of optimal solutions, see Subsection \ref{ss:example}. \begin{proposition}\label{p:decomp} Let $J,R,Q$ be as in \eqref{e:phODE_matrix_conditions}. Then the decomposition \eqref{e:decompN1} is $Q$-orthogonal, i.e., $\R^n = N_1\oplus_Q N_2$, and $$ \ker Q\,\subset\,N_1\,\subset\,\ker(RQ). 
$$ Moreover, the representation of $(J-R)Q$ with respect to the decomposition \eqref{e:decompN1} has the form \begin{equation}\label{e:cons_diss_dec} (J-R)Q = \mat{J_1}00{J_2-R_2}, \end{equation} where $J_1$ and $J_2$ are $Q$-skew-symmetric in $N_1$ and $N_2$, respectively, $R_2$ is $Q$-positive semi-definite, and $J_2-R_2$ is Hurwitz. The eigenvalues of both $J_1$ and $J_2$ are purely imaginary. \end{proposition} \begin{proof} Let $A \doteq (J-R)Q$. By definition of $N_1$, we have $\ker Q\subset\ker A\subset N_1$. Denote by $\mathcal L} \newcommand{\frakL}{\mathfrak L_\lambda(A)$ the complex algebraic eigenspace of $A$ at $\lambda\in\C$, i.e., \begin{equation}\label{e:hauptvektoren} \mathcal L} \newcommand{\frakL}{\mathfrak L_\lambda(A) = \bigcup_{k=0}^n\ker\big((A-\lambda)^k\big)\,\subset\,\C^n. \end{equation} For a set $\Delta\subset\C$ we will also use the notation $\mathcal L} \newcommand{\frakL}{\mathfrak L_\Delta(A) \doteq \operatorname{span}\{\mathcal L} \newcommand{\frakL}{\mathfrak L_\lambda(A) : \lambda\in\Delta\}\,\subset\,\C^n$. Denote by ${[\perp]}$ the orthogonality relation with respect to $[\cdot\,,\cdot]$. We shall now prove that \begin{equation}\label{e:orthogonal} \lambda\neq\mu,\;\mathcal L} \newcommand{\frakL}{\mathfrak L_\mu(A)\subset\ker(RQ)\quad\Longrightarrow\quad\mathcal L} \newcommand{\frakL}{\mathfrak L_\lambda(A)\;[\perp]\;\mathcal L} \newcommand{\frakL}{\mathfrak L_\mu(A). \end{equation} To see this, let us assume that we have already shown that $\ker((A-\lambda)^k)\,[\perp]\,\mathcal L} \newcommand{\frakL}{\mathfrak L_\mu(A)$ for some $k\in\N_0$. Let $(A-\lambda)^{k+1}y=0$ and set $x = (A-\lambda)y$. Then, by assumption, $x\in(\mathcal L} \newcommand{\frakL}{\mathfrak L_\mu(A))^{[\perp]}$. Let us furthermore assume that we have already proved that $y\,{[\perp]}\,\ker(A-\mu)^j$ for some $j\in\N_0$. Let $z\in\ker(A-\mu)^{j+1}$ and set $w \doteq (A-\mu)z$. Then $[y,w]=0$ and thus (as $RQz=0$), \begin{align*} \overline\lambda[z,y] &= [z,\lambda y] = [z,Ay-x] = [z,Ay] = \<Qz,(J-R)Qy\> = -\<(J+R)Qz,Qy\>\\ &= -\<(J-R)Qz,Qy\> = -[Az,y] = -[\mu z+w,y] = -\mu[z,y], \end{align*} hence $(\overline\lambda+\mu)[z,y] = 0$. Note that $\overline\lambda+\mu = 0$ implies that $(\Re\lambda)(\Re\mu)<0$ (in which case one of $\mathcal L} \newcommand{\frakL}{\mathfrak L_\lambda(A)$ and $\mathcal L} \newcommand{\frakL}{\mathfrak L_\mu(A)$ is trivial) or $\lambda=\mu$, which we excluded. Hence, $[z,y]=0$ and \eqref{e:orthogonal} is proved. We set $N_1' \doteq \mathcal L} \newcommand{\frakL}{\mathfrak L_{\rm{i}\R}(A)$ and $N_2' \doteq \mathcal L} \newcommand{\frakL}{\mathfrak L_{\C^-}(A)$, where $\C^-\doteq \{z\in\C : \Re z < 0\}$. Obviously, we have $N_1 = N_1'\cap\R^n$ and $N_2= N_2'\cap\R^n$. Since $N_1'\subset\ker(RQ)$ (see \eqref{e:ialpha} and \eqref{e:atzero}), Equation~\eqref{e:orthogonal} shows that $N_1'\,{[\perp]}\,N_2'$ and thus the $Q$-orthogonality of $N_1$ and $N_2$. Hence, $A$ decomposes as in \eqref{e:cons_diss_dec}, where $J_1 = JQ|_{N_1}$, $J_2 = P_2JQ|_{N_2}$, and $R_2 = P_2RQ|_{N_2}$ with $P_2$ denoting the projection onto $N_2$ along $N_1$. It is easy to see that $J_1$ is $Q$-skew-symmetric and that $P_2$ is $Q$-symmetric. The latter implies that $J_2$ is $Q$-skew-symmetric and $R_2$ is $Q$-symmetric and $Q$-non-negative. By construction, we have $\sigma(J_1)\subset\rm{i}\R$. Finally, it follows from $\ker Q\subset N_1$ that $[\cdot\,,\cdot]$ is positive definite on $N_2$, and hence also $\sigma(J_2)\subset\rm{i}\R$. 
\end{proof} \begin{remark} Note that $\ker R_2$ might still be non-trivial. A simple example is given by $Q = I_2$, $J = \smat 01{-1}0$, and $R = \smat 2000$, in which case $N_1=\{0\}$ and therefore $\ker R_2 =\ker R\neq\{0\}$. \end{remark} \subsection{Reachability properties}\label{ss:reachable} As already mentioned in Section \ref{s:DAE1}, we consider control constraints $u\in\mathbb U$ where $\mathbb U$ is a convex and compact set with $0\in\operatorname{int}\mathbb U$. Let $t>0$ and $x\in\R^n$. We say that $z\in\R^n$ is {\em reachable} from $x$ at time $t$ under the dynamics in \eqref{e:phODE_dyn}, if there exists a feasible $u\in L^1(0,t;\mathbb U)$ such that the corresponding state response satisfies $x(t;x,u) = z$. By $\RF_t(x)$ we denote the set of all states that are reachable from $x$ at time $t$. Similarly, we denote by $\RT_t(x)$ the set of states from which $x$ is reachable (i.e., which can be controlled/steered to $x$) at time $t$. Clearly, $\RT_t(x)$ equals $\RF_t(x)$ with respect to the time-reverse dynamics $\dot x = -(J-R)Qx - (B-P)u$. Moreover, we set $$ \RF(x) \doteq \bigcup_{t\ge 0}\RF_t(x) \qquad\text{and}\qquad \RT(x) \doteq \bigcup_{t\ge 0}\RT_t(x), $$ where $\RF_0(x) \doteq \RT_0(x) \doteq \{x\}$. For sets $\Phi\subset\R^n$ we furthermore define $$ \RF(\Phi) \doteq \bigcup_{x\in\Phi}\RF(x) \qquad\text{and}\qquad \RT(\Phi) \doteq \bigcup_{x\in\Phi}\RT(x). $$ Hence, $\RT(\Phi)$ is the set of states that can be steered into $\Phi$. It is well known that the sets $\RF_t(x)$ and $\RT_t(x)$ are compact and convex for each $t\ge 0$ and each $x\in\R^n$, see, e.g., \cite[Chapter 2, Theorem 1]{leemarkus}. \subsubsection{Reachability of steady states} Recall the Kalman controllability matrix of a linear time-invariant control system $(A,B)$ in $\R^n$ $$ K(A,B) \doteq (B,AB,\ldots,A^{n-1}B). $$ The linear control system $(A,B)$ is {\em controllable} if $\operatorname{rank} K(A,B) = n$. \begin{lemma}[Reachable sets of input-constrained pH systems]\label{l:reachable_zero} Consider the pH-system \eqref{e:phODE} with convex and compact input constraint set $\mathbb U$, $0\in\operatorname{int}\mathbb U$. Let $\mathcal X} \newcommand{\frakX}{\mathfrak X \doteq \operatorname{im} K((J-R)Q,B-P)$. Then the following hold: \begin{enumerate} \item[{\rm (i)}] $\RT(0) = \mathcal X} \newcommand{\frakX}{\mathfrak X$. \item[{\rm (ii)}] $\RF(0)\subset\mathcal X} \newcommand{\frakX}{\mathfrak X$ is convex and relatively open in $\mathcal X} \newcommand{\frakX}{\mathfrak X$. \item[{\rm (iii)}] If $0 < t_1 < t_2$, then $\RT_{t_1}(0)\subset\operatorname{int}_{\mathcal X} \newcommand{\frakX}{\mathfrak X}\RT_{t_2}(0)$, \end{enumerate} where $\operatorname{int}_\mathcal X} \newcommand{\frakX}{\mathfrak X$ denotes the interior with respect to the subspace topology of $\mathcal X} \newcommand{\frakX}{\mathfrak X$. \end{lemma} \begin{proof} Set $A \doteq (J-R)Q$ and $\widetilde B \doteq B-P$. From the variation of constants formula it easily follows that $\RF(0)\subset\mathcal X} \newcommand{\frakX}{\mathfrak X$ and $\RT(0)\subset\mathcal X} \newcommand{\frakX}{\mathfrak X$. Note that $\mathcal X} \newcommand{\frakX}{\mathfrak X$ is invariant under $A$ and that $\widetilde B\bar u\in\mathcal X} \newcommand{\frakX}{\mathfrak X$ for each $\bar u\in\R^m$. Let $\widetilde A \doteq A|_{\mathcal X} \newcommand{\frakX}{\mathfrak X}$. 
We consider the system $\dot x = \widetilde Ax + \widetilde Bu$ in $\mathcal X} \newcommand{\frakX}{\mathfrak X$ (hence, also with initial values $x(0)\in\mathcal X} \newcommand{\frakX}{\mathfrak X$). Then $(\widetilde A,\widetilde B)$ is controllable and the same is true for the time-reversed system $(-\widetilde A,-\widetilde B)$. Also note that $\Re\lambda\le 0$ for each eigenvalue $\lambda$ of $\widetilde A$ (cf.\ Theorem \ref{t:ph-charac}). The properties (i) and (ii) now follow immediately from \cite[p.\ 45, Theorems 1,2,3,5]{Macki2012}, and (iii) is a consequence of \cite[Corollary 17.1]{hermeslasalle}. \end{proof} Let $x_T\in\R^n$ and $x^0\in\RT(x_T)$. Then there exists a minimal time $T(x^0;x_T)$ at which $x_T$ is reachable from $x^0$ (see\footnote{The theorem in \cite{Macki2012} is formulated for $x_T=0$ but the proof also works for $x_T\neq 0$.} \cite[p.\ 60, Theorem 1]{Macki2012}). This defines a {\em minimal time function} $T(\,\cdot\,;x_T) : \RT(x_T)\to [0,\infty)$. By $B_\varepsilon(S)$ we denote the open $\varepsilon$-neighborhood of a set $S\subset\R^k$. We also write $B_\varepsilon(x) \doteq B_\varepsilon(\{x\})$. \begin{corollary}\label{c:mintime} The minimal time function $T(\,\cdot\,;0) : \RT(0)\to [0,\infty)$ is continuous. \end{corollary} \begin{proof} We adopt the notation from Lemma \ref{l:reachable_zero} and its proof. Recall that $(\widetilde A,\widetilde B)$ is controllable (in $\mathcal X} \newcommand{\frakX}{\mathfrak X = \RT(0)$). Let $x^0\in\mathcal X} \newcommand{\frakX}{\mathfrak X$, $\varepsilon > 0$, and set $t = T(x^0;0)$. Then $x^0\in\partial_\mathcal X} \newcommand{\frakX}{\mathfrak X\RT_t(0)$ (see \cite[Lemma 13.1]{hermeslasalle}). By Lemma \ref{l:reachable_zero} (iii) we have $$ \RT_{t-\varepsilon}(0)\subset\operatorname{int}_\mathcal X} \newcommand{\frakX}{\mathfrak X\RT_t(0) \qquad\text{and}\qquad \RT_t(0)\subset\operatorname{int}_\mathcal X} \newcommand{\frakX}{\mathfrak X\RT_{t+\varepsilon}(0). $$ Hence, there exists $\delta_1 > 0$ such that $B^\mathcal X} \newcommand{\frakX}{\mathfrak X_{\delta_1}(x^0)\subset\RT_{t+\varepsilon}(0)$, where $B^\mathcal X} \newcommand{\frakX}{\mathfrak X_r(x) \doteq B_r(x)\cap\mathcal X} \newcommand{\frakX}{\mathfrak X$. On the other hand, there exists $\delta_2>0$ such that $B^\mathcal X} \newcommand{\frakX}{\mathfrak X_{\delta_2}(x^0)\cap\RT_{t-\varepsilon}(0) = \emptyset$. Indeed, otherwise we had $x^0\in\RT_{t-\varepsilon}(0)$ and thus $x^0\in\operatorname{int}_\mathcal X} \newcommand{\frakX}{\mathfrak X\RT_t(0)$, which contradicts $x^0\in\partial_\mathcal X} \newcommand{\frakX}{\mathfrak X\RT_t(0)$. Hence, choosing $\delta = \min\{\delta_1,\delta_2\}$, we have $B^\mathcal X} \newcommand{\frakX}{\mathfrak X_\delta(x^0)\subset\RT_{t+\varepsilon}(0)\backslash\RT_{t-\varepsilon}(0)$. For $x\in B^\mathcal X} \newcommand{\frakX}{\mathfrak X_\delta(x^0)$ this implies $t-\varepsilon\le T(x;0)\le t+\varepsilon$, or, equivalently, $|T(x;0)-T(x^0;0)|\le\varepsilon$. \end{proof} A pair $(\bar x,\bar u)\in\R^n\times\mathbb U$ is called a {\em steady state} (or {\em controlled equilibrium}) of the dynamics in \eqref{e:phODE_dyn} if $$ (J-R)Q\bar x + (B-P)\bar u = 0. $$ In the following, by $\RT_t{}^{\!\!\VV}(x)$ and $\RF_t{}^{\!\!\VV}(x)$ we denote the reachable sets for \eqref{e:phODE_dyn} with $L^1$-controls taking their values in $\mathbb V\subset\R^m$. 
\begin{lemma}\label{l:karfreitag} Let $(\bar x_e,\bar u_e)\in\R^n\times\mathbb U$ be a steady state of \eqref{e:phODE_dyn} and set $\mathbb V \doteq \mathbb U - \bar u_e$. Then for $t\ge 0$ we have $$ \RT_t(\bar x_e) = \bar x_e + \RT_t{}^{\!\!\VV}(0) \qquad\text{and}\qquad \RF_t(\bar x_e) = \bar x_e + \RF_t{}^{\!\!\VV}(0). $$ \end{lemma} \begin{proof} We have $x\in\RT_t(\bar x_e)$ if and only if $x = e^{-tA}\bar x_e - \int_0^te^{-sA}Bu(s)\,ds$ with $u\in L^1(0,t;\mathbb U)$. Since $\tfrac d{ds}e^{-sA}\bar x_e = e^{-sA}(-A\bar x_e) = e^{-sA}B\bar u_e$, and therefore $e^{-tA}\bar x_e - \bar x_e = \int_0^te^{-sA}B\bar u_e\,ds$, it follows that $x\in\RT_t(\bar x_e)$ if and only if $x = \bar x_e - \int_0^te^{-sA}Bv(s)\,ds$ with $v\in L^1(0,t;\mathbb V)$. This proves the claim for $\RT_t(\bar x_e)$. The claim for $\RF_t(\bar x_e)$ is proved similarly. \end{proof} If $\bar u_e\in\operatorname{int}\mathbb U$, then also $\mathbb V = \mathbb U-\bar u_e$ is compact, convex, and contains $0$ in its interior. Hence, the results obtained so far immediately imply the next corollary. \begin{corollary}\label{c:mintime_ss} Let $(\bar x_e,\bar u_e)\in\R^n\times\operatorname{int}\mathbb U$ be a steady state of \eqref{e:phODE_dyn}. Then the following hold: \begin{enumerate} \item[{\rm (i)}] $\RT(\bar x_e) = \bar x_e + \operatorname{im} K((J-R)Q,B-P)$. \item[{\rm (ii)}] The minimal time function $T(\,\cdot\,;\bar x_e) : \RT(\bar x_e)\to [0,\infty)$ is continuous. \end{enumerate} \end{corollary} \begin{remark} Let $A \doteq (J-R)Q$, $\widetilde B \doteq B-P$, and $\mathcal X} \newcommand{\frakX}{\mathfrak X \doteq \operatorname{im} K(A,\widetilde B)$. Statement (i) in Corollary \ref{c:mintime_ss} raises the question as to whether $\bar x_e\in\mathcal X} \newcommand{\frakX}{\mathfrak X$ for all steady states $(\bar x_e,\bar u_e)$. This is not necessarily the case. Just consider $\widetilde B=0$ (in which case $\mathcal X} \newcommand{\frakX}{\mathfrak X = \{0\}$) and some $\bar x_e\in\ker A\backslash\{0\}$ as a simple counterexample. However, if $(\bar x_e,\bar u_e)$ is a steady state, then since $A\bar x_e = -\widetilde B\bar u_e\in\mathcal X} \newcommand{\frakX}{\mathfrak X$ and $A\mathcal X} \newcommand{\frakX}{\mathfrak X\subset\mathcal X} \newcommand{\frakX}{\mathfrak X$, we have $\bar x_e\in\mathcal X} \newcommand{\frakX}{\mathfrak X$ if $A$ is invertible. It can be shown that $\mathcal L} \newcommand{\frakL}{\mathfrak L_0(A)\subset\mathcal X} \newcommand{\frakX}{\mathfrak X$ (cf.\ \eqref{e:hauptvektoren}) is a necessary and sufficient condition. \end{remark} \subsubsection{Reachability in view of the decomposition \eqref{e:cons_diss_dec}} With respect to the decomposition $\R^n = N_1\oplus_Q N_2$ from Proposition \ref{p:decomp} the control system \eqref{e:phODE_dyn} takes the form \begin{subequations}\label{e:dec} \begin{alignat}{4} &\dot x_1 &&= \phantom{R_2--}J_1x_1 + B_1u,\qquad &&x_1(0) &&= x_1^0,\label{e:dec1}\\ &\dot x_2 &&= (J_2-R_2)x_2 + B_2u,\qquad &&x_2(0) &&= x_2^0,\label{e:dec2} \end{alignat} \end{subequations} where $B_j = P_j(B-P)$ with $P_j$ denoting the projection onto $N_j$ with respect to the decomposition $\R^n = N_1\oplus N_2$, $j=1,2$. \begin{lemma}[{\cite[Cor.\ 3.6.7]{sontag}}]\label{l:sontag} If $((J-R)Q,B-P)$ is controllable, then $$ \RF(0) = N_1 \oplus_Q (\RF(0)\cap N_2). $$ \end{lemma} The next corollary is a simple consequence of the previous results and shows in particular that $N_1$ is reachable from anywhere. 
\begin{corollary}\label{t:N1reachable} If $((J-R)Q,B-P)$ is controllable, then for every $x^0\in\R^n$ there is a bounded set $K_{x^0}\subset N_2$ such that $$ N_1\oplus_Q (\RF(0)\cap N_2)\,\subset\,\RF(x^0)\,\subset\,N_1\oplus_Q K_{x^0}. $$ \end{corollary} \begin{proof} If $x\in\R^n$ can be reached from zero, then it can be reached from any $x^0\in\R^n$ by Lemma \ref{l:reachable_zero} (i). This and Lemma \ref{l:sontag} prove the first inclusion. For the second inclusion, let $x\in\RF(x^0)$, $x = x_1 + x_2$ with $x_j\in N_j$, $j=1,2$. Then there exist a time $t>0$ and a control $u\in L^1(0,t;\mathbb{U})$ such that $$ x_2 = e^{tA_2}x_2^0 + \int_0^t e^{(t-s)A_2}B_2u(s)\,ds, $$ where $A_2 = J_2-R_2$. As $A_2$ is Hurwitz, there exist $\omega >0$, $M\geq 1$ such that $\|e^{tA_2}\|\le Me^{-\omega t}$. Let $R \doteq \frac{M}{\omega}\|B_2\|\Big(\max_{v\in \mathbb{U}}\|v\|\Big)$. Then \begin{align*} \|x_2\| &\le Me^{-\omega t}\|x_2^0\| + \int_0^t Me^{-\omega(t-s)}\|B_2\|\|u(s)\|\,ds\le M\|x_2^0\| + R, \end{align*} which proves the second inclusion with $K_{x_0} = B_{M\|x_2^0\| + R}(0)\cap N_2$. \end{proof} \subsection{Turnpike properties of minimum energy supply ph-ODE-OCPs}\label{ss:ODE_tp} In Section \ref{s:DAE1} we formulated the OCP corresponding to the minimization of the energy supply of port-Hamiltonian descriptor systems and reduced this DAE-constrained problem to an ODE-constrained OCP of the following form: \begin{align}\label{e:phODE_OCP} \begin{split} \min_{u\in L^1(0,T;\mathbb{U})} &\int_0^T u(t)^\top y(t)\,dt\\ \dot x &= (J-R)Qx+(B-P)u\\ y&=(B+P)^\top Qx + Du\\ x(0)&=x^0,\quad x(T)\in\Phi, \end{split} \end{align} where $B,P\in\R^{n\times m}$, $D\in\R^{m\times m}$, and $J,R,Q\in\R^{n\times n}$ are as in \eqref{e:phODE_matrix_conditions}. Next, we analyze the turnpike phenomenon of optimal solutions of this OCP. Classically, this means that optimal trajectories reside close to certain states for the majority of the time~\cite{Faulwasser2021}. Here, we will show that this phenomenon occurs in a more general way, i.e., optimal pairs $(x^\star, u^\star$) reside close to a subspace for the majority of the time. Despite the problem being linear quadratic, the presented approach to show the turnpike for the primal variables, i.e., the input-state pair $(x^*,u^*)$, does not utilize the optimality conditions due to the possible occurrence of singular arcs. However, we prove in addition that a combination of the first-order optimality conditions and the subspace turnpike for the primal variables induces a turnpike for the adjoint state towards the steady state zero. The Hamiltonian function, representing the energy of the system \eqref{e:phODE} is given by $$ H(x) = \tfrac 12\cdot x^\top Qx. $$ For a control $u\in L^1(0,T;\mathbb U)$ and $x^0\in\R^n$ the solution $x = x(\,\cdot\,,x^0,u)$ of \eqref{e:phODE_dyn} with $x(0) = x^0$ obviously satisfies (note that $z^\top Jz=0$ for $z\in\R^n$) \begin{align*} \tfrac d{dt}H(x(\cdot)) &= x^\top Q\dot x = x^\top Q(J-R)Qx + x^\top Q(B+P)u - 2x^\top QPu\\ &= -x^\top QRQx + u^\top y - u^\top Du - 2x^\top QPu = u^\top y - \left\|W^{1/2}\vek xu\right\|^2. \end{align*} Thus, we obtain the well known energy balance \begin{equation}\label{e:ODE_diss} \int_{t_0}^{t_1}u^\top y\,dt = H(x(t_1)) - H(x(t_0)) + \int_{t_0}^{t_1}\big\|W^{1/2}\smallvek xu\big\|^2\,dt. \end{equation} In particular, this shows that we may replace the cost functional in \eqref{e:phODE_OCP} by $$ J(u) = H(x(T)) + \int_0^T\big\|W^{1/2}\smallvek{x(t)}{u(t)}\big\|^2\,dt. 
$$ The next corollary thus follows immediately from Theorem \ref{t:optsol_exists}. \begin{corollary}\label{c:ODE_optsol} If $x^0\in\RT(\Phi)$, then the OCP \eqref{e:phODE_OCP} has an optimal solution. \end{corollary} As already discussed in Remark \ref{r:prelim_DAE}, in the case $W=0$ the OCP \eqref{e:phODE_OCP} is equivalent to minimizing $H(x) = \frac 12\cdot x^\top Qx$ on $\Phi\cap\RF_T(x^0)$, which is the (compact) set of states $x\in\Phi$ that are reachable from $x^0$ at time $T$. This case will not be discussed here. Hence, we will assume throughout that $W\neq 0$. \subsubsection{Input-state subspace turnpikes} The following definition of an integral turnpike with respect to a subspace is a natural extension of the classical concept of integral turnpike with respect to a steady state, cf.\ \cite[Definition 2.1]{Gruene2019a} and \cite[Section 1.2]{epfl:faulwasser15h}. The notion of integral turnpike was also discussed in \cite{Trelat18}. \begin{definition}[Input-state Subspace Integral Turnpike Property]\label{def:turnpike_state_control} Let $\ell\in C^1(\R^{n+m})$, $\varphi\in\C^1(\R^n)$, and let $\Phi\subset\R^n$ be closed. We say that a general OCP with linear dynamics of the form \begin{align} \begin{split}\label{e:lin_OCP} \min_{u\in L^1(0,T;\mathbb U)}\,&\varphi(x(T)) + \int_0^T \ell(x(t),u(t))\,dt\\ &\dot x = Ax + Bu\\ &x(0)=x^0,\quad x(T)\in\Phi \end{split} \end{align} has the {\em input-state integral turnpike property} on a set $S_{\rm tp}\subset\RT(\Phi)$ with respect to a subspace $\mathcal{V}\subset\R^n\times\R^m$, if there exist continuous functions $F,T : S_{\rm tp}\to [0,\infty)$ such that for all $x^0\in S_{\rm tp}$ each optimal pair $(x^\star ,u^\star )$ of the OCP \eqref{e:lin_OCP} with initial datum $x^\star (0)=x^0$ and $T > T(x^0)$ satisfies \begin{align}\label{e:integral_tp} \int_0^T\operatorname{dist}^2\big((x^\star (t),u^\star (t)),\mathcal V} \newcommand{\frakV}{\mathfrak V\big)\,dt\le F(x^0). \end{align} \end{definition} \begin{remark} The main feature of this definition is the implication that for $T$ large enough any optimal input-state pair is close to the subspace $\mathcal V} \newcommand{\frakV}{\mathfrak V$ for the majority of the time. Indeed, if $x^0\in S_{\rm tp}$ and $\varepsilon>0$, for $T > T(x^0)$ we have $$ \mu\big(\{t\in [0,T] : \operatorname{dist}((x^\star (t),u^\star (t)),\mathcal V} \newcommand{\frakV}{\mathfrak V) > \varepsilon\}\big)\le\tfrac 1{\varepsilon^2}\!\int_0^T\operatorname{dist}^2((x^\star (t),u^\star (t)),\mathcal V} \newcommand{\frakV}{\mathfrak V)\,dt\le\tfrac{F(x^0)}{\varepsilon^2}, $$ where $\mu$ denotes the standard Lebesgue measure. This behavior of optimal trajectories is called {\em measure turnpike}, cf.\ e.g.\ \cite[Definition 2]{Faulwasser2021}. Here, compared to the usual definition in the literature, the measure turnpike property is with respect to a subspace and the dependence of the upper bound on $\varepsilon$ can explicitly be specified. \end{remark} In this section we shall show that under suitable conditions the input-state integral turnpike property holds for the OCP \eqref{e:phODE_OCP} with respect to the subspace $\ker W$ (see \eqref{e:W}). Recall that \begin{equation}\label{e:Wagain} W = \mat{QRQ}{QP}{P^\top Q}{S}. \end{equation} Before arriving at the main result of this subsection, Theorem \ref{t:turnpike_reachability}, we introduce two technical lemmas. 
A steady state $(\bar x,\bar u)$ of \eqref{e:phODE_OCP} is called {\em optimal} if it is a solution of the following minimization problem: \begin{align}\label{e:phODE_ssOCP} \begin{split} &\min_{(\bar{x},\bar{u})\in \mathbb{R}^n\times\mathbb{U}} \bar{u}^\top\bar{y} \\ \text{s.t. } 0&= (J-R)Q\bar{x} + (B - P)\bar{u}\\ \bar{y}&= (B + P)^\top Q\bar{x} + D\bar{u}. \end{split} \end{align} \begin{lemma}\label{l:ssimkern} A steady state $(\bar x,\bar u)$ of \eqref{e:phODE_OCP} is optimal if and only if $$ \vek{\bar x}{\bar u}\,\in\,\ker W. $$ \end{lemma} \begin{proof} Let $\bar z = \smallvek{\bar x}{\bar u}$ be a steady state of \eqref{e:phODE_OCP} and set $\bar{y} = (B + P)^\top Q\bar{x} + D\bar{u}$. Since $\bar x^\top Q^\top JQ\bar x = 0$, we obtain \begin{align*} \bar y^\top\bar u &= \bar x^\top Q(B-P)\bar u + 2\bar x^\top QP\bar u + \bar u^\top S\bar u = \bar x^\top QRQ\bar x + 2\bar x^\top QP\bar u + \bar u^\top S\bar u = \bar z^\top W\bar z. \end{align*} In particular, on the set of constraints in \eqref{e:phODE_ssOCP} the objective function $\bar u^\top\bar y$ is non-negative. Since $(0,0)$ obviously is a steady state, the optimal value of \eqref{e:phODE_ssOCP} is zero. Hence, a steady state $\bar z$ is optimal if and only if $\bar z^\top W\bar z = 0$, i.e., $W\bar z = 0$. \end{proof} \begin{remark} Lemma \ref{l:ssimkern} shows that the optimal steady states of \eqref{e:phODE_OCP} are exactly those pairs $(\bar x,\bar u)\in\R^n\times\mathbb U$, for which $\smallvek{\bar x}{\bar u}$ lies in the kernel of the $(2n+m)\times (n+m)$-matrix $$ \begin{pmatrix} (J-R)Q & B-P\\ QRQ & QP\\ P^\top Q & S \end{pmatrix}. $$ In particular, the vectors in $\ker Q\times\{0\}$ are optimal steady states. \end{remark} \begin{lemma}\label{l:easystuff} Let $A\in\R^{k\times k}$, $0\neq A = A^\top\ge 0$. Then for all $x\in\R^k$ we have $$ \lambda_{\min}\cdot\operatorname{dist}^2(x,\ker A)\,\le\,x^\top Ax\,\le\,\lambda_{\max}\cdot\operatorname{dist}^2(x,\ker A), $$ where $\lambda_{\min}$ \braces{$\lambda_{\max}$} is the smallest \braces{resp.\ largest} positive eigenvalue of $A$. \end{lemma} \begin{proof} We have $A = U^\top DU$ with $D = \operatorname{diag}(\lambda_i)_{i=1}^k\ge 0$ and $U\in\R^{k\times k}$ orthogonal. Note that $\max_i\lambda_i = \|A\|$ and that $P \doteq U^\top\operatorname{diag}(\delta_{\lambda_i>0})_{i=1}^kU$ is the orthogonal projection onto $\operatorname{im} A$. Let $x\in\R^k$ and $v\doteq Ux$. Then $$ x^\top Ax = \sum_i\lambda_iv_i^2 = \sum_{i:\lambda_i>0}\lambda_iv_i^2\le\|A\|\sum_{i:\lambda_i>0}v_i^2 = \lambda_{\max}\cdot\|Px\|^2. $$ Similarly, $x^\top Ax\ge\lambda_{\min}\|Px\|^2$. The claim now follows from $\operatorname{dist}(x,\ker A) = \|Px\|$. \end{proof} The next theorem is the main result of this subsection. \begin{theorem}[Input-state subspace integral turnpike]\label{t:turnpike_reachability} Let $(\bar x_e,\bar u_e)\in\R^n\times\operatorname{int}\mathbb U$ be an optimal steady state such that $\bar x_e\in\RT(\Phi)$. Then the OCP \eqref{e:phODE_OCP} has the input-state integral turnpike property on $\RT(\bar x_e)$ with respect to $\ker W$. \end{theorem} \begin{proof} Set $A \doteq (J-R)Q$ and $\widetilde B \doteq B-P$. First of all, we shall define some constants. Due to the spectral properties of $A$ (see Theorem \ref{t:ph-charac}), there is $M>0$ such that $\|e^{tA}\|\le 1+Mt$ for all $t\ge 0$. Moreover, we set $$ u_{\max} \doteq \max\{\|u\| : u\in\mathbb U\}.
$$ The condition $\bar x_e\in\RT(\Phi)$ means that there exist a time $T_1>0$ and a control $u_1\in L^1(0,T_1;\mathbb U)$ such that $x(T_1,\bar x_e,u_1)\in\Phi$. Now, let $x^0\in\RT(\bar x_e)$. By Corollary \ref{c:mintime_ss} the minimal time $T_0(x^0)\doteq T(x^0;\bar x_e)$ at which $\bar x_e$ can be reached from $x^0$ depends continuously on $x^0$. Define $$ T(x^0) \doteq T_0(x^0) + T_1 \qquad\text{and}\qquad F(x^0) \doteq \lambda_{\min}^{-1}\cdot(G_1 + G_2 + G_0(x^0)), $$ where $\lambda_{\min}$ denotes the smallest positive eigenvalue of the matrix $W$ and $G_0 : \RT(\bar x_e)\to [0,\infty)$ as well as the constants $G_1,G_2\ge 0$ are defined by \begin{align*} G_0(x^0) &\doteq \|W\|T_0(x^0)\cdot\left[ (1+MT_0(x^0))^2\big(\|x^0\| + \|\widetilde B\| T_0(x^0)u_{\max}\big)^2 + u_{\max}^2 \right]\\ G_1 &\doteq \|W\|T_1\cdot\left[ (1+MT_1)^2\big(\|\bar x_e\| + \|\widetilde B\| T_1u_{\max}\big)^2 + u_{\max}^2 \right]\\ G_2 &\doteq \tfrac 12\|Q\|(1+MT_1)^2\cdot\big(\|\bar x_e\| + \|\widetilde B\| T_1u_{\max}\big)^2. \end{align*} We will show that $T$ and $F$ are as in Definition \ref{def:turnpike_state_control}. To this end, let $u_0$ be the time-optimal control that steers $x^0$ to $\bar x_e$ at time $T_0 \doteq T(x^0;\bar x_e)$ and let $T > T(x^0) = T_0 + T_1$. Define a control $u$ by \begin{align*} u(t) \doteq \begin{cases} u_0(t), &t\in [0,T_0]\\ \bar u_e, &t\in [T_0,T-T_1]\\ u_1(t-(T-T_1)), &t\in [T-T_1,T] \end{cases} \end{align*} and denote the state response trajectory by $x$, i.e., $$ x(t) = \begin{cases} e^{tA}x^0 + \int_0^te^{(t-s)A}\widetilde Bu_0(s)\,ds, &t\in [0,T_0]\\ \bar x_e, &t\in [T_0,T-T_1]\\ e^{(t-(T-T_1))A}\bar x_e + \int_{T-T_1}^te^{(t-s)A}\widetilde Bu_1(s-(T-T_1))\,ds, &t\in [T-T_1,T]. \end{cases} $$ The constant value $\bar x_e$ on $[T_0,T-T_1]$ is due to the fact that $(\bar x_e,\bar u_e)$ is a steady state of \eqref{e:phODE_OCP}. Hence, $x$ is a trajectory from $x(0) = x^0$ to a point $x(T)\in\Phi$ and therefore admissible for the OCP \eqref{e:phODE_OCP}. The output is given by $y \doteq (B+P)^\top Qx + Du$. Let $(x^\star ,u^\star )$ be an optimal solution of \eqref{e:phODE_OCP} with $x^\star (0) = x^0$ and denote the corresponding output by $y^\star $. By optimality and the energy balance \eqref{e:ODE_diss}, we obtain that \begin{align*} H(x^\star (T))&-H(x^\star (0)) +\int_0^T\left\|W^{\frac12}\begin{pmatrix}x^\star (t)\\u^\star (t) \end{pmatrix}\right\|^2\,dt = \int_0^T u^\star (t)^\top y^\star (t)\,dt\\ &\le\int_0^T u(t)^\top y(t)\,dt = H(x(T))-H(x(0)) + \int_0^T\left\|W^{\frac12}\begin{pmatrix}x(t)\\u(t) \end{pmatrix}\right\|^2\,dt. \end{align*} Since $H(x)=\tfrac 12\cdot x^\top Qx\ge 0$ for all $x\in\R^n$ and $x^\star (0)=x(0) = x^0$, we obtain \begin{align}\label{eq:ineq1} \int_0^T\left\|W^{\frac12}\begin{pmatrix}x^\star (t)\\u^\star (t) \end{pmatrix}\right\|^2\,dt \leq H(x(T)) + \int_0^T\left\|W^{\frac12}\begin{pmatrix}x(t)\\u(t) \end{pmatrix}\right\|^2\,dt. \end{align} For $t\in [0,T_0]$ we have \begin{align*} \|x(t)\| &\le (1+MT_0)\big(\|x^0\| + \|\widetilde B\| T_0u_{\max}\big), \end{align*} which implies \begin{align*} \int_0^{T_0}\left\|W^{\frac12}\begin{pmatrix}x(t)\\u(t) \end{pmatrix}\right\|^2\,dt \le\|W\|\int_0^{T_0}\big(\|x(t)\|^2 + \|u(t)\|^2\big)\,dt \le G_0(x^0).
\end{align*} Similarly, for $t\in [T-T_1,T]$, from \begin{align*} \|x(t)\| &\le (1+MT_1)\big(\|\bar x_e\| + \|\widetilde B\| T_1u_{\max}\big) \end{align*} we obtain \begin{align*} \int_{T-T_1}^{T}\left\|W^{\frac12}\begin{pmatrix}x(t)\\u(t) \end{pmatrix}\right\|^2\,dt \le \|W\|\int_{T-T_1}^{T}\big(\|x(t)\|^2 + \|u(t)\|^2\big)\,dt \le G_1 \end{align*} and $$ H(x(T)) = \tfrac 12\cdot x(T)^\top Qx(T)\le \tfrac 12\|Q\|\|x(T)\|^2 \le G_2. $$ Since $\smallvek{\bar x_e}{\bar u_e}\in\ker W$, cf.\ Lemma \ref{l:ssimkern}, we also have $$ \int_{T_0}^{T-T_1}\left\|W^{\frac12}\begin{pmatrix}x(t)\\u(t) \end{pmatrix}\right\|^2\,dt = 0. $$ Therefore, $$ H(x(T)) + \int_0^T\left\|W^{\frac12}\begin{pmatrix}x^\star (t)\\u^\star (t) \end{pmatrix}\right\|^2\,dt\,\le\,G_1 + G_2 + G_0(x^0) = \lambda_{\min}\cdot F(x^0), $$ and the claim follows from Lemma \ref{l:easystuff}. \end{proof} Note that the set of initial values $S_{\rm tp} = \RT(\bar x_e)$ in Theorem \ref{t:turnpike_reachability} coincides with the affine subspace $\bar x_e + \operatorname{im} K((J-R)Q,B-P)$, see Corollary \ref{c:mintime_ss} (i). Therefore, the system exhibits a global turnpike behavior under the assumption that it is controllable. \begin{corollary}\label{c:global_turnpike} Assume that $((J-R)Q,B-P)$ is controllable. If there exists an optimal steady state $(\bar x_e,\bar u_e)\in\R^n\times\operatorname{int}\mathbb U$ of \eqref{e:phODE_OCP} such that $\bar x_e\in\RT(\Phi)$, then the OCP \eqref{e:phODE_OCP} has the input-state integral turnpike property on $\R^n$ with respect to $\ker W$. \end{corollary} \begin{remark} For $\bar x_e = \bar u_e = 0$ in Corollary \ref{c:global_turnpike} the assumption $0\in\RT(\Phi)$ can be replaced by the (seemingly) weaker condition $(N_1\oplus(\RF(0)\cap N_2))\cap\Phi\ne\emptyset$, where $N_1$ is the subspace from Proposition \ref{p:decomp}. This follows directly from Lemma \ref{l:sontag}. \end{remark} \begin{remark}\label{r:only_state} If $P=0$ and $S=0$, we have $\ker W = \ker(R^{\frac 12}Q)\times\R^m$ so that the above-proven turnpike property only provides information about the state. The relation \eqref{e:integral_tp} then reads $$ \int_0^T\operatorname{dist}^2\big(x^\star (t),\ker(RQ)\big)\,dt\le F(x^0). $$ \end{remark} \subsubsection{Classical turnpike for the adjoint state} In this part, we will show that although the input-state pair only enjoys a subspace turnpike, the adjoint variable exhibits a turnpike towards the steady state zero whenever control constraints are not active. A central tool is the dissipativity equation \eqref{e:ODE_diss} which allows us to reformulate the OCP \eqref{e:phODE_OCP} in equivalent form as follows: \begin{align} \label{eq:phOCP_ode_reform} \begin{split} \min_{u\in L^1(0,T;\mathbb{U})} & H(x(T)) + \int_0^T \left\|W^{\frac12}\begin{pmatrix} x(t)\\u(t) \end{pmatrix}\right\|^2\,\text{d}t\\ \dot x &= (J-R)Q x +(B-P)u,\\ x(0)&=x^0,\quad x(T)\in \Phi, \end{split} \end{align} where $W$ is as in \eqref{e:Wagain}. In order to conclude a result for the adjoint, we shall utilize the optimality conditions which we derive for the OCP \eqref{e:phODE_OCP} following \cite[Section 4.1.2]{Liberzon2012}. First, we define the (optimal control) Hamiltonian \begin{align*} \mathcal{H}(x,u,\lambda,\lambda_0)\doteq \lambda^\top\left((J-R)Qx + (B-P)u\right) + \lambda_0\left\|W^{\frac12}\smallvek{x}{u}\right\|^2. \end{align*} Let $(x^\star ,u^\star )\in W^{1,1}(0,T;\mathbb{R}^n)\times L^1(0,T;\mathbb{U})$ be an optimal input-state pair for \eqref{eq:phOCP_ode_reform}.
Then there is a function $\lambda^\star \in W^{1,1}(0,T;\mathbb{R}^n)$ and a constant $\lambda_0^\star \leq 0$ satisfying $(\lambda_0^\star ,\lambda^\star (t))\neq 0$ for all $t\in [0,T]$ such that \begin{subequations} \begin{align} \label{eq:state} \dot{x}^\star (t)&=\phantom{-}\mathcal{H}_\lambda(x^\star (t),u^\star (t),\lambda^\star (t),\lambda_0^\star )\\ \label{eq:adj} \dot{\lambda}^\star (t) &= -\mathcal{H}_x(x^\star (t),u^\star (t),\lambda^\star (t),\lambda_0^\star )\\ \label{eq:grad} u^\star (t)&\in \arg \max_{u\in \mathbb{U}} \mathcal{H}(x^\star (t),u,\lambda^\star (t),\lambda_0^\star ) \end{align} \end{subequations} for a.e.\ $t\in [0,T]$. Here, \eqref{eq:adj} and \eqref{eq:grad} read as \begin{subequations} \begin{align} \label{eq:adj2} \dot{\lambda}^\star &= -\left((J-R)Q\right)^\top \lambda^\star - 2\lambda_0^\star \left(Q RQx^\star + Q Pu^\star \right)\\ u^\star (t)&\in \arg\max_{u\in \mathbb{U}} \lambda^\star (t)^\top(B-P)u + \lambda_0^\star (2x^\star (t)^\top QPu + u^\top Su).\label{e:argmax} \end{align} \end{subequations} The proof of the following lemma is inspired by \cite[Proof of Rem.\ 2.1]{Porretta2013}. A similar argument was also pursued in \cite[Theorem 3.5]{Faulwasser2020} in the context of infinite dimensional nonlinear systems. \begin{lemma}\label{lem:lambdaestimate} Assume that $((J-R)Q,B-P)$ is controllable and let $(x^\star ,u^\star ,\lambda_0^\star ,\lambda^\star )$ satisfy the necessary optimality conditions listed above. Then for each $t_c\in (0,T)$ there exists a constant $C(t_c)>0$ such that whenever $u^\star (s)\in\operatorname{int}\mathbb U$ for a.e.\ $s\in [t-t_c,t]$ for some $t\in [t_c,T]$, then \begin{align}\label{e:lambda} \|\lambda^\star (t)\|^2 \le C(t_c)\cdot\int_{t-t_c}^t\left\|W\!\smallvek{x^\star (s)}{u^\star (s)}\right\|^2\,ds. \end{align} In particular, if $t_c < T/4$ and $u^\star (t)\in\operatorname{int}\mathbb U$ for a.e.\ $t\in [t_c,T-t_c]$, then \begin{align}\label{e:lambda_int} \int_{2t_c}^{T-t_c}\|\lambda^\star (t)\|^2\,dt \,\le\,t_cC(t_c)\cdot\int_{t_c}^{T-t_c}\left\|W\!\smallvek{x^\star (t)}{u^\star (t)}\right\|^2\,dt. \end{align} \end{lemma} \begin{proof} Set $A \doteq (J-R)Q$ and $\widetilde B = B-P$. Since $(A,\widetilde B)$ is controllable, for each $t>0$ there is $\alpha_t > 0$ such that \begin{align*} \int_0^t\|\widetilde B^\top e^{sA^\top}x\|^2\,ds \ge \alpha_t\|x\|^2 \qquad \forall\, x\in \mathbb{R}^n, \end{align*} see \cite[Thm.\ 4.1.7]{Curtain1995}. Let $t\in [t_c,T]$. After a change of variables, this estimate is equivalent to \begin{align}\label{eq:obs_bounded} \int_{t-t_c}^{t}\|\widetilde B^\top e^{(t-s)A^\top}x\|^2\,ds \geq \alpha_{t_c} \|x\|^2 \quad \forall\,x\in\R^n. \end{align} Using linearity of the dynamics, we decompose the solution of \eqref{eq:adj2} as $\lambda^\star = \lambda_1 + \lambda_2$, where \begin{align*} && \lambda_1'(s) &= -A^\top\lambda_1(s), && \lambda_1(t)=\lambda^\star (t),&&\\ &&\lambda_2'(s) &= -A^\top\lambda_2(s) - 2\lambda_0^\star \left(Q RQx^\star (s) + Q Pu^\star (s)\right),&& \lambda_2(t)=0,&& \end{align*} and apply the observability estimate \eqref{eq:obs_bounded} to $ \lambda_1(s) = e^{(t-s)A^\top} \lambda^\star (t)$. Hence, \begin{align*} \alpha_{t_c}\|\lambda^\star (t)\|^2 \le \int_{t-t_c}^t \|\widetilde B^\top\lambda_1(s)\|^2 \,ds \le 2\int_{t-t_c}^t \big(\|\widetilde B^\top\lambda^\star (s)\|^2 +\|\widetilde B^\top \lambda_2(s)\|^2\big)\,ds.
\end{align*} Now, since $u^\star (s)\in\operatorname{int}\mathbb U$ for a.e.\ $s\in [t-t_c,t]$, it follows from \eqref{e:argmax} that $$ \widetilde B^\top \lambda^\star (s) + 2\lambda_0^\star \left(P^\top Qx^\star (s)+ Su^\star (s)\right) = 0 \qquad\text{for a.e.\ } s\in [t-t_c,t]. $$ Hence, we obtain $$ \int_{t-t_c}^t\|\widetilde B^\top\lambda^\star (s)\|^2\,ds = 4(\lambda_0^\star )^2\int_{t-t_c}^t\|P^\top Qx^\star (s) + Su^\star (s)\|^2\,ds. $$ Now, setting $F(\tau) = \lambda_0^\star \left(QRQx^\star (\tau) + QPu^\star (\tau)\right)$, we have \begin{align*} \|\widetilde B^\top\lambda_2(s)\|^2 &\le 4\|\widetilde B\|^2\cdot\left\|\int_s^t e^{(\tau - s)A^\top}F(\tau)\,d\tau\right\|^2\\ &\le 4\|\widetilde B\|^2\left(\int_s^t\big\|e^{(\tau-s)A^\top}\big\|^2\,d\tau\right)\left(\int_s^t\|F(\tau)\|^2\,d\tau\right). \end{align*} Due to the spectral properties of $A$ (cf.\ Theorem \ref{t:ph-charac}), we have $\|e^{tA}\|\le 1+Mt$ for some $M>0$ and all $t\ge 0$. Hence, also $\|e^{tA^\top}\| = \|(e^{tA})^\top\| = \|e^{tA}\|\le 1+Mt$. The middle term can thus be estimated as \begin{align*} \int_s^t\big\|e^{(\tau-s)A^\top}\big\|^2\,d\tau &\le\int_s^t(1+M(\tau-s))^2\,d\tau \le (1+Mt_c)^2t_c. \end{align*} Finally, integrating the last term and using Fubini's theorem yields \begin{align*} \int_{t-t_c}^t\int_s^t\|F(\tau)\|^2\,d\tau\,ds &= \int_{t-t_c}^t\|F(\tau)\|^2\int_{t-t_c}^\tau\,ds\,d\tau\\ &= \int_{t-t_c}^t(\tau-t+t_c)\|F(\tau)\|^2\,d\tau\le t_c\int_{t-t_c}^t\|F(s)\|^2\,ds, \end{align*} and \eqref{e:lambda} follows with $$ C(t_c) = \frac{8(\lambda_0^\star )^2}{\alpha_{t_c}}\cdot\max\left\{1,\|B-P\|^2(1+Mt_c)^2t_c^2\right\}. $$ Now, let $u^\star (t)\in\operatorname{int}\mathbb U$ for a.e.\ $t\in [t_c,T-t_c]$. Then we again apply Fubini's theorem to get \begin{align*} \int_{2t_c}^{T-t_c}\|\lambda^\star (t)\|^2\,dt &\le C(t_c)\int_{2t_c}^{T-t_c}\int_{t-t_c}^t\left\|W\!\smallvek{x^\star (s)}{u^\star (s)}\right\|^2\,ds\,dt\\ &\le t_cC(t_c)\int_{t_c}^{T-t_c}\left\|W\!\smallvek{x^\star (s)}{u^\star (s)}\right\|^2ds, \end{align*} which is \eqref{e:lambda_int}. \end{proof} The following corollary is a consequence of the preceding lemma, Corollary \ref{c:global_turnpike}, and the estimates for the integral $\int_0^T\left\|W^{\frac12}\!\smallvek{x^\star (t)}{u^\star (t)}\right\|^2dt$ in the proof of Theorem \ref{t:turnpike_reachability}. \begin{corollary}\label{cor:tp_adj} Assume that $((J-R)Q,B-P)$ is controllable, $0\in\RT(\Phi)$, and let $T>T(x^0)$, where $T(\cdot)$ is the function from Theorem \ref{t:turnpike_reachability}. Let $(x^\star ,u^\star ,\lambda^\star )$ satisfy the necessary optimality conditions for the OCP \eqref{e:phODE_OCP} with $x^\star (0) = x^0$ and assume that $t_c\in (0,T/4)$ is such that $u^\star (t)\in\operatorname{int}\mathbb U$ for a.e.\ $t\in [t_c,T-t_c]$. Then the adjoint state $\lambda^\star$ exhibits a turnpike with respect to zero, i.e., there is a continuous function $G:\mathbb{R}^n \to [0,\infty)$ such that \begin{align*} \int_{2 t_c}^{T-t_c} \|\lambda^\star (t)\|^2\,dt \leq G(x^0). \end{align*} \end{corollary} \subsection{Numerical Example: Modified mass-spring damper}\label{ss:example} We briefly illustrate the findings of this section by a numerical example of a mass-spring system, with homogeneous damping given by the dissipation matrix $R$, cf.\ \cite[Section V]{Schaller2020a}. Going beyond \cite{Schaller2020a}, we also illustrate the adjoint turnpike.
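Before specifying the example data, we remark that the constant $\alpha_{t_c}$ from the proof of Lemma \ref{lem:lambdaestimate} is the smallest eigenvalue of a finite-time Gramian and can be evaluated numerically. The following Python sketch is an illustration only; the pair $(A,\widetilde B)$ used here is placeholder data, not the example data below.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def alpha(A, Bt, t, steps=2000):
    # alpha_t = smallest eigenvalue of the finite-time Gramian
    #   G_t = int_0^t e^{sA} Bt Bt^T e^{sA^T} ds,
    # so that  int_0^t ||Bt^T e^{sA^T} x||^2 ds >= alpha_t ||x||^2.
    G = np.zeros((A.shape[0], A.shape[0]))
    ds = t / steps
    for k in range(steps):                  # midpoint quadrature rule
        E = expm(A * (k + 0.5) * ds)
        G += E @ Bt @ Bt.T @ E.T * ds
    return np.linalg.eigvalsh(G).min()

# Placeholder pair (A, Bt); alpha_t > 0 iff (A, Bt) is controllable.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
Bt = np.array([[0.0], [1.0]])
print(alpha(A, Bt, t=1.0))
\end{verbatim}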
We consider \begin{align*} J\doteq \begin{pmatrix} \phantom{-}0 & 0 & \phantom{-}1\\ \phantom{-}0 & 0 & -1\\ -1 & 1 & \phantom{-}0 \end{pmatrix}, \quad R\doteq \begin{pmatrix} 1&1&0\\1&1&0\\0&0&0 \end{pmatrix} \end{align*} and $Q=I$, $P=0$, $D=S=0$. The input matrix, initial value and terminal region are given by \begin{align*} B=\begin{pmatrix} 1,0,0\end{pmatrix}^\top,\quad x^0=\begin{pmatrix} 1,1,1\end{pmatrix}^\top,\quad \Phi = \{x_T\}=\left\{\begin{pmatrix} -1.2,-0.7,-1 \end{pmatrix}^\top\right\}. \end{align*} We solve the corresponding OCP \eqref{eq:phOCP_ode_reform} with horizon $T\in \{10,15,20\}$ where we discretize the ODE with an RK4 method and $N\in \{100,150,200\}$ time discretization points. The corresponding optimization problem is then solved by the \textit{fmincon} function in \textit{MATLAB}. The subspace turnpike phenomenon proved in Theorem~\ref{t:turnpike_reachability} can be observed in Figure~\ref{fig:orbits} where the optimal state approaches the subspace \begin{align} \label{e:kern2} \ker(RQ) = \ker R = \{x\in \mathbb{R}^3\,|\,x_1+x_2=0\}. \end{align} The spiraling state trajectory (see Figure \ref{fig:orbits}) can be explained as follows: first, the state quickly approaches $\ker(RQ)$, as predicted by Theorem \ref{t:turnpike_reachability} and Remark \ref{r:only_state}. Note that $\ker(RQ) = N_1$ in this example, where $N_1$ is as in decomposition \eqref{e:decompN1}. Hence, the state $x_2$ in \eqref{e:dec} approaches zero. In addition, we observe in Figure \ref{fig:ode} that the optimal control $u$ also approaches zero, which implies that $x_1$ locally evolves approximately according to the free dynamics $\dot x_1 = J_1x_1$. The spiraling effect now results from the skew-symmetry of $J_1$. Further, as depicted in Figure~\ref{fig:odeadj}, we observe the turnpike towards zero of the adjoint state as proven in Corollary~\ref{cor:tp_adj}. \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{plotting_folder/ode_dreid} \caption{Optimal state trajectory in $\R^3$ for $T = 20$.\label{fig:orbits}} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{plotting_folder/ode_overtime} \caption{Optimal state and control of OCP \eqref{eq:phOCP_ode_reform} for time horizons $T=10$ \braces{\textcolor{black}{\makebox[0.5cm]{\xleaders\hbox to 1.0em{$- \cdot$}\hfill }}}, $T=15$ \braces{\textcolor{red}{\makebox[0.5cm]{\xleaders\hbox to 0.7em{$-$}\hfill }}}, and $T=20$ \braces{\textcolor{blue}{\makebox[0.4cm]{\xleaders\hbox to 1em{---}\hfill }}}.\label{fig:ode}} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{plotting_folder/ode_adjoint} \caption{Adjoint states of OCP \eqref{eq:phOCP_ode_reform} for time horizons $T=10$ \braces{\textcolor{black}{\makebox[0.5cm]{\xleaders\hbox to 1.0em{$- \cdot$}\hfill }}}, $T=15$ \braces{\textcolor{red}{\makebox[0.5cm]{\xleaders\hbox to 0.7em{$-$}\hfill }}}, and $T=20$ \braces{\textcolor{blue}{\makebox[0.4cm]{\xleaders\hbox to 1em{---}\hfill }}}.\label{fig:odeadj}} \end{figure} \newpage \section{Port-Hamiltonian DAE-OCPs}\label{s:DAE2} Subsequently, we leverage the results from Section \ref{s:ODE} to analyze the pH-DAE OCP \eqref{e:phDAE_OCP}. To this end, let us first discuss reachability properties of the DAE control system~\eqref{e:phDAE_dyn} of index at most one. Let $t>0$ and $w,w^0\in\operatorname{im} E$.
We say that $w$ is reachable from $w^0$ at time $t$ under the dynamics in \eqref{e:phDAE_dyn} if there exists a control $u\in L^1(0,t;\mathbb U)$ such that the (possibly non-smooth) solution $x_u$ of the DAE in \eqref{e:phDAE_dyn} with $Ex_u(0) = w^0$ satisfies $Ex_u(t) = w$. By $\RF_t(w)$ we denote the set of all vectors in $\operatorname{im} E$ that are reachable from $w\in\operatorname{im} E$ at time $t$. Similarly, we denote by $\RT_t(w)$ the set of vectors in $\operatorname{im} E$ from which $w$ is reachable at time $t$. The sets $\RF(w)$ and $\RT(w)$ are defined analogously to their ODE-counterparts in Subsection \ref{ss:reachable} and so are $\RF(\Psi)$ and $\RT(\Psi)$ for sets $\Psi\subset\operatorname{im} E$. Using the quasi-Weierstra\ss\ form (cf.\ Section \ref{a:dae_solutions}), it is easy to see that the properties of the reach\-able sets for ODEs carry over to the DAE case---with the exception that these sets are always contained in $\operatorname{im} E$ and topological properties have to be regarded with respect to the subspace topology of $\operatorname{im} E$. \subsection{Turnpike properties of minimum energy supply ph-DAE-OCPs} We shall now define the subspace turnpike property for DAE-OCPs with respect to the state variable, which is the DAE-counterpart to Definition~\ref{def:turnpike_state_control} for ODE problems. Due to the absence of a feed-through term in the ph-DAE-OCP \eqref{e:phDAE_OCP}, we obtain a subspace turnpike purely in the state, as opposed to the input-state turnpike in Definition~\ref{def:turnpike_state_control}. \begin{definition}[State Subspace Integral Turnpike Property]\label{def:turnpike_state} We say that a general DAE-OCP of the form \begin{align} \begin{split}\label{e:DAE_OCP} \min_{u\in L^1(0,T;\mathbb U)} &\varphi(x(T)) + \int_0^T \ell(x(t),u(t))\,dt \\ \text{s.t. }\tfrac d{dt}Ex &= Ax + Bu\\ Ex(0)&=w^0,\quad Ex(T)\in\Psi, \end{split} \end{align} with $C^1$-functions $\ell : \R^{n+m}\to\R$, $\varphi : \R^n\to\R$ and a closed set $\Psi\subset\operatorname{im} E$ has the {\em state integral turnpike property} on a set $S_{\rm tp}\subset\RT(\Psi)$ with respect to a subspace $\mathcal{V}\subset\R^n$, if there exist continuous functions $F,T : S_{\rm tp}\to [0,\infty)$ such that for all $w^0\in S_{\rm tp}$ each optimal pair $(x^\star ,u^\star )$ of the OCP \eqref{e:DAE_OCP} with initial datum $Ex^\star (0)=w^0$ and $T > T(w^0)$ satisfies \begin{align*} \int_0^T\operatorname{dist}^2(x^\star (t),\mathcal{V})\,dt\,\le\,F(w^0). \end{align*} \end{definition} Let us also define the (optimal) steady states for the DAE-constrained OCP \eqref{e:phDAE_OCP}. \begin{definition} A pair of vectors $(\bar w,\bar u)\in\operatorname{im} E\times\mathbb U$ is called a {\em steady state} of \eqref{e:phDAE_OCP} if there exists $\bar x\in\R^n$ such that $E\bar x = \bar w$ and $(J-R)Q\bar x + B\bar u = 0$. The steady state $(\bar w,\bar u)$ is called {\em optimal} if it is a solution of the following minimization problem: \begin{align}\label{e:phDAE_ssOCP} \begin{split} &\min_{(w,u)\in\operatorname{im} E\times\mathbb U} u^\top y\\ \text{s.t. } 0&= (J-R)Qx + Bu\\ y&= B^\top Qx\\ w &= Ex. \end{split} \end{align} \end{definition} A vector $\bar x\in\R^n$ with $E\bar x = \bar w$ and $(J-R)Q\bar x + B\bar u = 0$ is unique.
This follows directly from the regularity of the pencil $P(s) = sE-(J-R)Q$. \begin{lemma} $(\bar w,\bar u)\in\operatorname{im} E\times\mathbb U$ is an \braces{optimal\,} steady state of \eqref{e:phDAE_OCP} if and only if $(U^\top\bar w,\bar u)$ is an \braces{optimal\,} steady state of \eqref{e:phDAE_OCPODE}, where $U$ is as in Theorem \rmref{t:beattie}. In particular, a steady state $(\bar w,\bar u)$ of \eqref{e:phDAE_OCP} is optimal if and only if $\bar x\in\ker(RQ)$. \end{lemma} \begin{proof} We use the notations from Theorem \ref{t:beattie}. Setting $\bar z = V^{-1}\bar x$, the equation $(J-R)Q\bar x + B\bar u=0$ is equivalent to \begin{equation}\label{e:malwieder} (J_{11}-R_{11})Q_{11}\bar z_1 + B_1\bar u = 0 \qquad\text{and}\qquad \bar z_2 = -Q_{22}^{-1}L_{22}^{-1}(L_{21}Q_{11}\bar z_1 + B_2\bar u). \end{equation} Now, if $(\bar w,\bar u)$ is a steady state of \eqref{e:phDAE_OCP} and $\bar w = E\bar x$, then $U^\top\bar w = U^\top EV\bar z = \bar z_1$ (see \eqref{e:matrix_transforms}), so that $(U^\top\bar w,\bar u)$ is a steady state of \eqref{e:phDAE_OCPODE}. Conversely, if $(U^\top\bar w,\bar u)$ is a steady state of \eqref{e:phDAE_OCPODE} and we set $\bar z_1 \doteq U^\top\bar w$, $\bar z_2$ as in \eqref{e:malwieder}, and $\bar x \doteq V\bar z$, then $E\bar x = U^{-\top}(U^\top EV)\bar z = U^{-\top}\bar z_1 = \bar w$, which shows that $(\bar w,\bar u)$ is a steady state of \eqref{e:phDAE_OCP}. The equivalence of optimal steady states follows from the fact that the transformation in Theorem \ref{t:beattie} does not change the input $u$ and the output $y$. The ``in particular''-part is a consequence of Remark \ref{r:xz} and Lemma \ref{l:ssimkern}. \end{proof} The following theorem is our main result concerning the turnpike behavior of optimal solutions of the pH-DAE OCP \eqref{e:phDAE_OCP}. \begin{theorem}[Integral state subspace turnpikes]\label{thm:DAE_tp} Let $(\bar w,\bar u)\in\operatorname{im} E\times\operatorname{int}\mathbb U$ be an optimal steady state of \eqref{e:phDAE_OCP} such that $\bar w\in\RT(\Psi)$. Then the OCP \eqref{e:phDAE_OCP} has the state integral turnpike property on $\RT(\bar w)$ with respect to $\ker(RQ)$. \end{theorem} \begin{proof} Let $\bar z_1 \doteq U^\top\bar w$. Then $(\bar z_1,\bar u)$ is an optimal steady state of OCP \eqref{e:phDAE_OCPODE}. It is easily seen that $\RT(\Psi) = U^{-\top}\RT_{\rm ODE}(\Phi_1)$, where $\RT_{\rm ODE}$ denotes the corresponding set with respect to the ODE system in \eqref{e:phDAE_OCPODE}. Therefore, $\bar w\in\RT(\Psi)$ is equivalent to $\bar z_1\in\RT_{\rm ODE}(\Phi_1)$. Hence, by Theorem \ref{t:turnpike_reachability} there exist continuous functions $\widetilde F,\widetilde T : \RT_{\rm ODE}(\bar z_1)\to [0,\infty)$ such that for all $z_1^0\in\RT_{\rm ODE}(\bar z_1)$ each optimal pair $(z_1^\star ,u^\star )$ of the OCP \eqref{e:phDAE_OCPODE} with initial datum $z_1^\star (0)=z_1^0$ and $T > \widetilde T(z_1^0)$ satisfies \begin{align*} \int_0^T\operatorname{dist}^2\big((z_1^\star (t),u^\star (t)),\ker\hat W\big)\,dt\,\le\,\widetilde F(z_1^0). \end{align*} Define $F,T : \RT(\bar w)\to [0,\infty)$ by $F(w) \doteq \|\hat W\|\lambda_{\min}^{-1}\cdot\widetilde F(U^\top w)$ and $T(w) \doteq \widetilde T(U^\top w)$, $w\in\RT(\bar w)$, where $\lambda_{\min}$ is the smallest positive eigenvalue of $Q^\top RQ$. Let $w^0\in\RT(\bar w)$ and let $(x^\star ,u^\star )$ be an optimal pair of \eqref{e:phDAE_OCP} with initial datum $Ex(0) = w^0$. Set $z_1^\star \doteq U^\top Ex^\star $ and $z_1^0 \doteq U^\top w^0$.
Then $(z_1^\star ,u^\star )$ is an optimal pair of \eqref{e:phDAE_OCPODE} with $z_1^\star (0) = z_1^0$ and for $T > T(w^0)$ we have $T > \widetilde T(z_1^0)$ and thus (see Lemma \ref{l:easystuff} and Remark \ref{r:xz}) \begin{align*} \int_0^T\!\!\operatorname{dist}^2(x^\star (t),\ker(RQ))\,dt &\le \lambda_{\min}^{-1}\int_0^T\|R^{\frac 12}Qx^\star (t)\|^2\,dt = \lambda_{\min}^{-1}\int_0^T\left\|\hat W^{\frac 12}\smallvek{z_1^\star (t)}{u^\star (t)}\right\|^2\,dt\\ &\le \frac{\|\hat W\|}{\lambda_{\min}}\int_0^T\operatorname{dist}^2\big((z_1^\star (t),u^\star (t)),\ker\hat W\big)\,dt\,\le\,F(w^0), \end{align*} which proves the theorem. \end{proof} A regular DAE control system $\tfrac d{dt}Ex = Ax + Bu$ in $\R^n$ (or simply $(E,A,B)$) is called {\em R-controllable} if for all $\lambda\in\C$ we have $\operatorname{rank}[\lambda E - A\quad B] = n$. This is obviously a generalization of the Hautus test. Using the quasi-Weierstra\ss\ form (cf.\ Section \ref{a:dae_solutions}), it is easy to see that $(E,A,B)$ is R-controllable if and only if the corresponding ODE-system in \eqref{e:ivp_sys1} is controllable. In our case, we have \begin{align*} \operatorname{rank}[\lambda E - (J-R)Q\quad B] &= \operatorname{rank}\big[\lambda U^\top E - U^\top(J-R)UU^{-1}Q\quad U^\top B\big]\\ &= \operatorname{rank}\big[\lambda U^\top EV - U^\top(J-R)UU^{-1}QV\quad U^\top B\big]\\ &= \operatorname{rank}\left[\begin{pmatrix}\lambda I - L_{11}Q_{11} & 0 & B_1\\-L_{21}Q_{11} & -L_{22}Q_{22} & B_2\end{pmatrix}\right]\\ &= \operatorname{rank}\big[\lambda I - L_{11}Q_{11}\quad B_1\big] + n_2, \end{align*} where $n_2 \doteq n - n_1$ and $L_{ij} = J_{ij} - R_{ij}$, $i,j=1,2$. The last equality holds since $L_{22}Q_{22}$ is invertible. Hence, $(E,(J-R)Q,B)$ is R-controllable if and only if the ODE control system in \eqref{e:phDAE_OCPODE} is controllable. The following corollary thus follows immediately from Corollary \ref{c:global_turnpike}. \begin{corollary} Assume that $(E,(J-R)Q,B)$ is R-controllable. If there exists an optimal steady state $(\bar w,\bar u)\in\operatorname{im} E\times\operatorname{int}\mathbb U$ of \eqref{e:phDAE_OCP} such that $\bar w\in\RT(\Psi)$, then the OCP \eqref{e:phDAE_OCP} has the state integral turnpike property on $\operatorname{im} E$ with respect to $\ker(RQ)$. \end{corollary} \begin{remark} The notion of R-controllability for DAE control systems introduced above was first defined in \cite{yip1981}. In \cite{berger2013controllability} the authors show that this property is equivalent to the so-called {\em controllability in the behavioral sense}. However, both \cite{yip1981} and \cite{berger2013controllability} work with DAEs of the type $E\dot x = Ax+Bu$ (instead of $\frac d{dt}Ex = Ax+Bu$) which are more restrictive due to the regularity requirement on $x$. \end{remark} \subsection{Numerical example: Force control of a robot in vertical translation of the end-effector} Let us consider the force control of a robot manipulator as described in \cite{Volpe1994}. The robot is of the type CMU DD II and its end-effector is endowed with a force sensor. We slightly adapt the parameters from \cite{Volpe1994} and set the masses to $m_A = 1.1$, $m_B = 0.1$, and the stiffness parameters to $k_1=0$, $k_2 =5$, and $k_3=\infty$. The choice of the stiffness coefficient $k_3$ clearly induces a constraint: the elongation of spring $3$ is fixed to $0$, hence yielding a constrained mechanical system. The damping parameters are set to $c_1 = 10$, $c_2=10$, and $c_3=17$.
The structure and dissipation matrix are given by \begin{align*} R&=\left(\begin{array}{cc} 0_{3} & 0_{3\times2}\\ 0_{2\times3} & \left(\begin{array}{cc} \left(c_{1}+c_{2}\right) & -c_{2}\\ -c_{2} & \left(c_{2}+c_{3}\right) \end{array}\right) \end{array}\right), \\ \text{and }\qquad J&=\left(\begin{array}{cc} 0_{3} & \Gamma\\ -\Gamma^{\top} & 0_{2} \end{array}\right), \quad \text{where}\quad \Gamma=\left(\begin{array}{cc} 1 & 0\\ -1 & 1\\ 0 & -1 \end{array}\right). \end{align*} Let further \begin{align*} E=\textrm{diag}\left(1,\,1,\,\frac{1}{k_{3}},\,m_{A},\,m_{B}\right) \quad\text{and}\quad Q=\textrm{diag}\left(k_{1},\,k_{2},\,1,\,1,1\right) \end{align*} (with the convention $\tfrac{1}{\infty}=0$) and \begin{align*} B=\begin{pmatrix}0,0,0,1,0\end{pmatrix}^{\top},\quad w^0=\begin{pmatrix} 1,1,0,1,0\end{pmatrix}^\top,\quad \Psi = \{w_T\}=\left\{\begin{pmatrix} 1,1,0,2,0 \end{pmatrix}^\top\right\}. \end{align*} We eliminate the algebraic constraints and discretize the corresponding three-dimensional ODE for time horizons $T\in \{5,10,15\}$ with an RK4 method with $N\in \{1000,2000,3000\}$ time steps. The resulting OCP is then solved by \textit{CasADi} \cite{Andersson2019}. \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{plotting_folder/dae_overtime} \caption{Optimal state and control of the DAE-OCP \eqref{e:phDAE_OCP} for time horizons $T=5$ (\textcolor{black}{\makebox[0.5cm]{\xleaders\hbox to 1.0em{$- \cdot$}\hfill }}), $T=10$ (\textcolor{red}{\makebox[0.5cm]{\xleaders\hbox to 0.7em{$-$}\hfill }}), $T=15$ (\textcolor{blue}{\makebox[0.4cm]{\xleaders\hbox to 1em{---}\hfill }}).\label{fig:dae}} \end{figure} It is clear that in this example $\ker(RQ) = \{x\in \mathbb{R}^5\,|\, x_4=x_5=0\}.$ In Figure~\ref{fig:dae} we observe the subspace turnpike behavior proven in \Cref{thm:DAE_tp} for the corresponding DAE OCP \eqref{e:phDAE_OCP}, i.e., the optimal solution is close to $\ker(RQ)$ for the majority of the time. \FloatBarrier \section{Conclusion} \label{s:conclusion} This paper has investigated a class of optimal control problems for linear port-Hamilto\-nian descriptor systems. We have shown that, considering the supplied energy as the objective to be minimized, the optimal solutions transferring initial data to prescribed target sets exhibit the turnpike phenomenon. Specifically, we have presented results on input-state subspace turnpikes in the ODE-constrained reduction of the original DAE-constrained problem. We have shown that the input-state subspace ODE turnpike corresponds to a state subspace DAE turnpike in the original problem. Importantly, we generalized the classical notion of steady-state turnpikes to subspace turnpikes. In the context of pH systems this turnpike subspace, which can be regarded as the attractor of infinite-horizon optimal solutions, is the nullspace of the dissipation matrix $RQ$. Future work will consider the extension towards infinite-dimensional systems; we refer to~\cite{Philipp2021} for first steps in this direction. \bibliographystyle{abbrv}
\section{Introduction} In the field of neuroscience, it is generally acknowledged that neurons are the basic units of information processing in the brain. They process and transmit information by generating characteristic electric action potentials of very short duration and high peak amplitude, or more simply spikes, in their cell bodies [see, for example, Dayan and Abbott (2001)]. These spikes can travel along nerve fibers that extend over relatively long distances to other cells. The temporal pattern of these spikes depends dynamically on the stimuli received by the neuron or on the biochemicals induced by the spikes of other neurons. The collection of such spikes generated by a neuron over a time period is called a spike train. In this way, information is transmitted via spike trains. Because the spikes are of very short duration and are highly peaked, point processes or counting processes are the most commonly used probability models for neural spike trains, with points on the time axis representing the temporal location of the spikes [see, for example, Brillinger (1992)]. Let $N(t)$ denote the number of spikes on the interval $[0, t)$. $N(.)$ counts the number of spikes and hence is a counting process. Let $w_1< w_2 <\cdots < w_{N(t)}$ be all the spike times occurring in $[0, t)$. We assume that the following limit exists: \begin{displaymath} \lambda (t | w_1,\cdots, w_{N(t)} ) = \lim_{\delta \downarrow 0} \frac{1}{\delta} E[ N( t+\delta) - N(t) | w_1,\cdots, w_{N(t)} ], \hspace{0.5cm} \mbox{a.s.} \end{displaymath} $\lambda(.| .)$ is known as the conditional intensity of $N(.)$. In the neuroscience literature, a number of probability models for $\lambda(.| .)$ have been proposed. One of the simplest is when $\lambda(t| w_1,\cdots, w_{N(t)} )$ depends only on $t$. This leads to an inhomogeneous Poisson process [see, for example, Ventura {\em et al.} (2002)]. It is well known that for a short period of time after a spike has been discharged, it is more difficult or even impossible for a neuron to fire another spike [see, for example, Dayan and Abbott (2001), page 4]. Such a time interval is called the refractory period. The main drawback with the inhomogeneous Poisson process model is that it does not incorporate the refractory period of the neuron. To account for this, a number of researchers [for example, Johnson and Swami (1983) and Kass and Ventura (2001)] have proposed modeling $\lambda(.|.)$ by \begin{displaymath} \lambda_0 (t| w_1,\cdots, w_{N(t)} ) = f( t, t- w_{N(t)}), \end{displaymath} where $f$ is a nonnegative function. This model is Markovian in that it only depends on the present time $t$ and the duration $t-w_{N(t)}$ since the last spike. A simpler alternative model for the conditional intensity of $N(.)$ that has been proposed in the literature [see, for example, Johnson and Swami (1983), Miller (1985) and Berry and Meister (1998)] is \begin{equation} \lambda_1 (t| w_1,\cdots, w_{N(t)} ) = \left\{ \begin{array}{ll} s(t) & \mbox{if $N(t)=0$,} \\ s(t) r( t- w_{N(t)}) & \mbox{if $N(t)\geq 1$}, \end{array} \right. \label{eq:1.1} \end{equation} where $s, r$ are nonnegative functions. $s$ and $r$ are known as the free firing rate function and the recovery function respectively. This model has the added attractiveness of easy interpretability. This article consists of two parts. The first part considers sieve maximum likelihood estimation of $s$ and $r$ in (\ref{eq:1.1}) based on $n$ independent realizations of $N(t), t\in [0, T),$ where $0< T <\infty$ and $N(.)$ is a counting process with conditional intensity $\lambda_1 (.|.)$.
Here we assume that the true free firing rate function $s$ and recovery function $r$ both lie in the class of $q$-smooth functions $\Theta_{\tilde{\kappa}, q}$ where $\Theta_{\tilde{\kappa}, q}$ is defined as in (\ref{eq:2.62}). Section 2 computes upper bounds on the metric entropy of $\Theta_{\tilde{\kappa}, q}$ as well as other function spaces induced by $\Theta_{\tilde{\kappa}, q}$. These results are needed in Section 3. Section 3 focuses on sieve maximum likelihood estimators $\hat{s}_n$ and $\hat{r}_n$ for $s$ and $r$ respectively. Assuming that there exists an absolute refractory period (that is, there exists a constant $\theta>0$ such that $r(u) = 0$, for all $u\in [0, \theta]$), it is proved in Theorem \ref{tm:3.2} that for $q>1/2$, \begin{displaymath} E_{s,r} [ \int_0^T | \hat{s}_n (t) - s(t) | dt] = O(n^{-q/(2q+1)} \log^{1/2} n), \hspace{0.5cm}\mbox{as $n\rightarrow\infty$.} \end{displaymath} If, in addition, $s(t)>0$ for $t\in [0, T]$, then Theorem \ref{tm:3.2} shows that \begin{displaymath} E_{s,r} [ \int_0^{T^*} | \hat{r}_n (u) - r(u) | du] = O(n^{-q/(2q+1)} \log^{1/2} n), \hspace{0.5cm}\mbox{as $n\rightarrow\infty$}, \end{displaymath} where $T^*$ is an arbitrary but fixed constant satisfying $0< T^* < T$. In Section 4, corresponding lower bounds for the convergence rate are established. In particular, under the assumptions of Section 3, Theorems \ref{tm:4.1} and \ref{tm:4.2} prove that it is not possible to achieve a faster convergence rate than $n^{-q/(2q+1)}$. Thus we conclude that sieve maximum likelihood estimators for $s$ and $r$ achieve essentially the optimal convergence rate (except for a logarithmic factor). The second part of the article deals with the detection of multiple spike train patterns. Let ${\bf w}^{(i)} = \{ w^{(i)}_1,\ldots,w^{(i)}_{N_i(T)} \}$ be the spike times of the $i$th neuron for $1 \leq i \leq d$, and let ${\bf w} = ({\bf w}^{(1)},\ldots,{\bf w}^{(d)})$. Loosely speaking, the pattern or template ${\bf w}$ is said to have occurred at time $t$ in the spike trains ${\bf y} = ({\bf y}^{(1)},\ldots,{\bf y}^{(d)})$ if for most $y \in {\bf y}^{(i)} \cap [t,t+T)$, $1 \leq i \leq d$, there exists $w \in {\bf w}^{(i)}$ such that $y-w-t$ is close to 0. A more rigorous definition of a match, via a user-chosen score function, is given in Section 5. When the number of matches is significantly large, we can identify the onset of the patterns ${\bf w}$ in ${\bf y}$ with the stimulus provided to the subjects when ${\bf w}$ is recorded. For example, ${\bf w}$ can be the spike times of an assembly of neurons of a zebra finch when its own song is played while awake and ${\bf y}$ is the spike trains of the same assembly when it is sleeping. The replaying of these patterns during sleep has been hypothesized to play an important role in bird song learning [cf. Dave and Margoliash (2000) and Mooney (2000)]. In Brown, Kass and Mitra (2004), it was stated that ``research in statistics and signal processing on multivariate point process models has not been nearly as extensive as research on models of multivariate continuous-valued processes'' in the section titled ``future challenges for multiple spike train data analysis''. We develop in Sections 5 and 6 a theory for computing the distribution of scan statistics in multivariate point processes and apply it to obtain $p$-values for template matching. The accuracy of these computations is then verified independently via computer experiments.
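To make the model (\ref{eq:1.1}) concrete, a spike train with conditional intensity $s(t) r(t- w_{N(t)})$ can be simulated by thinning a dominating homogeneous Poisson process, in the spirit of Ogata's thinning method. The Python sketch below is an illustration only; the particular choices of $s$ and $r$ (a sinusoidal free firing rate and a recovery function with an absolute refractory period) are placeholders and are not assumptions used elsewhere in this article.
\begin{verbatim}
import numpy as np

def simulate(s, r, T, lam_max, rng=np.random.default_rng(0)):
    # Thinning: dominate lambda_1(t | history) = s(t) * r(t - w_last)
    # by the constant lam_max (requires s(t) * r(u) <= lam_max), then
    # accept each candidate point with probability lambda_1 / lam_max.
    spikes, t, w_last = [], 0.0, None
    while True:
        t += rng.exponential(1.0 / lam_max)      # next candidate point
        if t >= T:
            return np.array(spikes)
        lam = s(t) if w_last is None else s(t) * r(t - w_last)
        if rng.uniform() < lam / lam_max:        # thinning step
            spikes.append(t)
            w_last = t

# Illustrative placeholder choices: sinusoidal free firing rate and a
# recovery function with an absolute refractory period of length 0.02.
s = lambda t: 20.0 * (1.0 + 0.5 * np.sin(2.0 * np.pi * t))
r = lambda u: 0.0 if u <= 0.02 else 1.0 - np.exp(-(u - 0.02) / 0.05)
print(simulate(s, r, T=1.0, lam_max=40.0))
\end{verbatim}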
\section{Metric entropy} In this section, suppose $N(.)$ is a counting process with conditional intensity $\lambda_1 (.|.)$ as given by (\ref{eq:1.1}). We assume that a realization of $N$ is observed on the interval $[0, T)$, $0<T<\infty$, and that the spike times are $0 < w_1 < w_2 <\cdots < w_{N(T)} < T$. It is convenient to let $\{w_1,\cdots, w_{N(T)}\}$ denote the point process corresponding to $N(t), t\in [0, T)$, and ${\cal N}$ be the set of all possible realizations of $\{w_1,\cdots, w_{N(T)}\}$. It follows from Chapter 7 of Daley and Vere-Jones (2002) that the likelihood is the local Janossy density given by \begin{equation} p_{s,r} ( \{w_1,\cdots, w_{N(T)}\}) = e^{-\int_0^T s(t) r(t-w_{N(t)}) dt} \prod_{j=1}^{N(T)} s(w_j) r(w_j- w_{j-1} ), \label{eq:4.1} \end{equation} and hence its log-likelihood function is \begin{eqnarray*} l( s, r| \{w_1,\cdots, w_{N(T)}\}) &=& - \int_0^T s(t) r( t- w_{ N(t)} ) dt + \sum_{j=1}^{ N (T) } \log[ s( w_j ) r (w_j - w_{ j-1} ) ] \nonumber \\ &=& - \int_0^{w_1 \wedge T} s(t) dt - \int_{w_1 \wedge T}^T s(t) r( t- w_{ N(t)} ) dt \nonumber \\ && + \sum_{j=1}^{ N (T) } \log[ s( w_j ) r (w_j - w_{ j-1} ) ], \end{eqnarray*} where $r( t - w_0)= 1$ for all $t\in [0, T)$. Next let $q, q_0, q_1$ be constants satisfying $q>0$, $q = q_0+q_1$, $q_0$ a nonnegative integer and $0< q_1\leq 1$. Furthermore we write $\tilde{\kappa} = (\kappa_0,\cdots, \kappa_{q_0+1} )$ to be a vector of strictly positive constants. In this section, we assume that the true free firing rate function $s$ and the recovery function $r$ lie in the $q$-smooth function class $\Theta_{\tilde{\kappa}, q}$ where \begin{eqnarray} \Theta_{\tilde{\kappa}, q} &=& \Big\{ f=g^2: g\in {\cal C}^{q_0} [0, T), \min_{t\in [0, T)} g(t) \geq 0, \max_{t\in [0, T)} |\frac{ d^j}{dt^j} g(t) | < \kappa_j, j=0,\cdots, q_0, \nonumber \\ &&\hspace{0.5cm} |\frac{d^{q_0}}{ dt^{q_0}} g (t_1) - \frac{d^{q_0} }{dt^{q_0}} g (t_2)| < \kappa_{q_0+1} |t_1-t_2|^{q_1}, \forall t_1, t_2 \in [0, T) \Big\}. \label{eq:2.62} \end{eqnarray} Let $\{0< \delta_n\leq 1: n= 1, 2, \cdots\}$ be a sequence of constants (to be suitably chosen later; $\delta_n$ depends only on $n$) such that $\delta_n \rightarrow 0$ as $n\rightarrow \infty$. We define a sieve for the parameter space of $\Theta_{\tilde{\kappa}, q}$ by \begin{eqnarray*} \Theta_{\tilde{\kappa}, q, n} &=& \Big\{ f=g^2: g\in {\cal C}^{q_0} [0, T), \min_{t\in [0, T)} g(t) \geq \delta_n, \max_{t\in [0, T)} |\frac{d^j }{ dt^j} g (t)| < \kappa_j, j=0,\cdots, q_0, \nonumber \\ &&\hspace{0.5cm} |\frac{ d^{q_0} }{ dt^{q_0}} g (t_1) - \frac{ d^{q_0} }{ dt^{q_0}} g (t_2)| < \kappa_{q_0+1} |t_1-t_2|^{q_1}, \forall t_1, t_2 \in [0, T) \Big\}. \end{eqnarray*} Let $\Theta_{\tilde{\kappa}, q}$ and $\Theta_{\tilde{\kappa}, q, n}$ be endowed with the metrics $\rho_{\Theta_{\tilde{\kappa}, q}}$ and $\rho_{\Theta_{\tilde{\kappa}, q, n}}$ respectively where \begin{eqnarray*} \rho_{\Theta_{\tilde{\kappa}, q}} (f_1, f_2) &=& \sup_{t\in [0, T)} | f_1^{1/2}(t) - f_2^{1/2}(t) |, \hspace{0.5cm}\forall f_1, f_2 \in \Theta_{\tilde{\kappa}, q}, \nonumber \\ \rho_{\Theta_{\tilde{\kappa}, q, n}} (f_1, f_2) &=& \sup_{t\in [0, T)} | f_1^{1/2}(t) - f_2^{1/2}(t) |, \hspace{0.5cm}\forall f_1, f_2 \in \Theta_{\tilde{\kappa}, q, n}. \end{eqnarray*} We observe that any $f\in \Theta_{\tilde{\kappa}, q}$ can be approximated arbitrarily closely by $ (f^{1/2}+\delta_n)^2 \in \Theta_{\tilde{\kappa}, q, n}$ by choosing $n$ sufficiently large, since $\sup_{t\in [0, T)} | (f^{1/2}(t) + \delta_n) - f^{1/2}(t)| = \delta_n \rightarrow 0$ as $n\rightarrow\infty$.
Consequently a sieve for the parameter space of $(s, r)$ can now be expressed as $\Theta_{\tilde{\kappa}, q, n}^2 = \Theta_{\tilde{\kappa}, q, n} \times \Theta_{\tilde{\kappa}, q, n}$ with metric $\rho_{\Theta_{\tilde{\kappa}, q, n}^2}$ where \begin{displaymath} \rho_{\Theta_{\tilde{\kappa}, q, n}^2} ((f_1, g_1), (f_2, g_2)) = \rho_{\Theta_{\tilde{\kappa}, q, n}}( f_1, f_2) + \rho_{\Theta_{\tilde{\kappa}, q, n}} (g_1, g_2), \hspace{0.5cm} \forall (f_1,g_1 ), (f_2, g_2) \in \Theta_{\tilde{\kappa}, q, n}^2. \end{displaymath} Next let \begin{displaymath} {\cal F}_{\tilde{\kappa}, q, n} = \mbox{ \{$p_{s_1,r_1}$ as in (\ref{eq:4.1}): $(s_1,r_1) \in \Theta_{\tilde{\kappa}, q, n}^2$\}} \end{displaymath} be endowed with the Hellinger metric $\rho_{{\cal F}_{\tilde{\kappa}, q, n}}$ where \begin{eqnarray*} && \rho_{{\cal F}_{\tilde{\kappa}, q, n}} (p_{s_1, r_1}, p_{s_2, r_2}) \nonumber \\ &=& \| p_{s_1, r_1}^{1/2} - p_{s_2,r_2}^{1/2} \|_2 \nonumber \\ &=& \Big\{ \sum_{j=0}^\infty \int_{0<w_1<\cdots < w_j<T} [ p_{s_1, r_1}^{1/2} ( \{w_1,\cdots, w_j\}) - p_{s_2,r_2}^{1/2} ( \{w_1,\cdots, w_j\}) ]^2 dw_1\cdots dw_j \Big\}^{1/2}. \end{eqnarray*} For $\varepsilon>0$, let $\Theta_{\tilde{\kappa}, q, n}^2 (\varepsilon ) \subseteq \Theta_{\tilde{\kappa}, q, n}^2$ denote a finite $\varepsilon$-net for $\Theta_{\tilde{\kappa}, q, n}^2$ with respect to the metric $\rho_{\Theta_{\tilde{\kappa}, q, n}^2}$. This implies that for each $(s_1, r_1) \in \Theta_{\tilde{\kappa}, q, n}^2$, there exists a $(s_2, r_2) \in \Theta_{\tilde{\kappa}, q, n}^2 (\varepsilon)$ such that $\rho_{\Theta_{\tilde{\kappa}, q, n}^2} ((s_1,r_1), (s_2, r_2) ) \leq \varepsilon$. Now suppose that for each $\varepsilon >0$, there exist measurable nonnegative functions $f_{l,\varepsilon}$ and $f_{u,\varepsilon}$ on $\Theta_{\tilde{\kappa}, q, n}^2 (\varepsilon) \times {\cal N}$ such that for each $(s_1,r_1) \in \Theta_{\tilde{\kappa}, q, n}^2$, there is some $(s_2, r_2) \in \Theta_{\tilde{\kappa}, q, n}^2 (\varepsilon)$ satisfying \begin{equation} \rho_{\Theta_{\tilde{\kappa}, q, n}^2 } ((s_1,r_1), (s_2, r_2) ) \leq \varepsilon, \label{eq:4.5} \end{equation} with \begin{eqnarray} f_{l,\varepsilon} ((s_2, r_2), \{w_1,\cdots, w_{N(T)}\}) &\leq & p_{s_1, r_1}( \{w_1,\cdots, w_{N(T)}\}) \nonumber \\ &\leq & f_{u,\varepsilon} ((s_2, r_2), \{w_1,\cdots, w_{N(T)}\}), \hspace{0.5cm}\mbox{a.s.,} \label{eq:4.6} \end{eqnarray} and \begin{eqnarray} && \Big\{ \sum_{j=0}^\infty \int_{0<w_1<\cdots < w_j<T} [ f_{u,\varepsilon}^{1/2} ((s_2, r_2), \{w_1,\cdots, w_j\}) \nonumber \\ &&\hspace{0.5cm} - f_{l,\varepsilon}^{1/2} ((s_2,r_2), \{w_1,\cdots, w_j\}) ]^2 dw_1 \cdots dw_j \Big\}^{1/2} \leq \varepsilon. \label{eq:4.7} \end{eqnarray} {\sc Definition.} For $\varepsilon>0$, the $\varepsilon$-entropy of $\Theta_{\tilde{\kappa}, q, n}^2$ with respect to the metric $\rho_{\Theta_{\tilde{\kappa}, q, n}^2}$ is defined to be \begin{eqnarray*} H (\varepsilon, \Theta_{\tilde{\kappa}, q, n}^2, \rho_{\Theta_{\tilde{\kappa}, q, n}^2} ) &=& \log [ \min\{ \mbox{card $\Theta_{\tilde{\kappa}, q, n}^2 (\varepsilon)$: $\Theta_{\tilde{\kappa}, q, n}^2 (\varepsilon)$ is a $\varepsilon$-net for $\Theta_{\tilde{\kappa}, q, n}^2$} \nonumber \\ && \hspace{0.5cm}\mbox{with respect to the metric $\rho_{\Theta_{\tilde{\kappa}, q, n}^2}$} \} ].
\end{eqnarray*} The $\varepsilon$-entropies of $\Theta_{\tilde{\kappa}, q}$ with respect to $\rho_{\Theta_{\tilde{\kappa}, q}}$ and $\Theta_{\tilde{\kappa}, q, n}$ with respect to $\rho_{\Theta_{\tilde{\kappa}, q, n}}$ are defined in a similar manner. {\sc Definition.} The $\varepsilon$-entropy of ${\cal F}_{\tilde{\kappa}, q, n}$ with bracketing with respect to the metric $\rho_{{\cal F}_{\tilde{\kappa}, q, n}}$ is defined to be \begin{eqnarray*} H^B (\varepsilon, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n}}) &=& \log [ \min\{ \mbox{card $\Theta_{\tilde{\kappa}, q, n}^2 (\varepsilon)$: (\ref{eq:4.5}), (\ref{eq:4.6}) and (\ref{eq:4.7}) are satisfied} \} ]. \end{eqnarray*} We observe from Kolmogorov and Tihomirov (1961), page 308, and Dudley (1999), page 11, that the $\varepsilon$-entropy of $\Theta_{\tilde{\kappa}, q, n}$ satisfies \begin{displaymath} H( \varepsilon, \Theta_{\tilde{\kappa}, q, n}, \rho_{\Theta_{\tilde{\kappa}, q, n}} ) \leq H( \varepsilon, \Theta_{\tilde{\kappa}, q}, \rho_{\Theta_{\tilde{\kappa}, q}} ) \leq \frac{C_{\tilde{\kappa}, q} }{\varepsilon^{1/q} }, \end{displaymath} and hence \begin{equation} H (\varepsilon, \Theta_{\tilde{\kappa}, q, n}^2, \rho_{\Theta_{\tilde{\kappa}, q, n}^2} ) \leq 2 H(\frac{\varepsilon}{2}, \Theta_{\tilde{\kappa}, q, n}, \rho_{\Theta_{\tilde{\kappa}, q, n}}) \leq \frac{ 2^{(q+1)/q} C_{\tilde{\kappa}, q} }{\varepsilon^{1/q} }, \label{eq:4.20} \end{equation} where $C_{\tilde{\kappa}, q}$ is a constant depending only on $\tilde{\kappa}$ and $q$. Thus we conclude from Lemma \ref{la:a.3} in Appendix A that \begin{displaymath} H^B (\varepsilon, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n}} ) \leq \frac{ 2^{(q+ 2)/q} C_{\tilde{\kappa}}^{1/q} C_{\tilde{\kappa}, q} }{ \varepsilon^{1/q}}, \end{displaymath} where $C_{\tilde{\kappa}}$ is a constant that depends only on $\tilde{\kappa}$. Next let $f: {\cal N} \rightarrow {\mathbb R}$ be a nonnegative function such that \begin{displaymath} \sum_{j=0}^\infty \int_{0\leq w_1<\cdots < w_j <T} f(\{w_1,\cdots, w_j\}) dw_1 \cdots dw_j < \infty. \end{displaymath} We follow Wong and Shen (1995) by defining \begin{displaymath} Z_f (\{w_1,\cdots, w_{N(T)} \}) = \log [ \frac{ f (\{w_1,\cdots, w_{N(T)} \}) }{ p_{s,r} (\{w_1,\cdots, w_{N(T)} \}) } ], \end{displaymath} where $s$ is the true free firing rate function and $r$ the true recovery function. For $\tau>0$, we write \begin{eqnarray*} && \tilde{f} (\{w_1,\cdots, w_{N(T)} \}) \nonumber \\ &=& \left\{ \begin{array}{ll} f (\{ w_1,\cdots, w_{N(T)} \}), & \mbox{if $f (\{ w_1,\cdots, w_{N(T)} \}) \geq e^{-\tau} p_{s,r}(\{w_1,\cdots, w_{N(T)}\} )$}, \\ e^{-\tau} p_{s,r}(\{ w_1,\cdots, w_{N(T)} \}), & \mbox{if $f (\{ w_1,\cdots, w_{N(T)} \}) < e^{-\tau} p_{s,r}(\{w_1,\cdots, w_{N(T)}\} )$}, \\ \end{array} \right. \nonumber \end{eqnarray*} and \begin{equation} \tilde{Z}_f = Z_{\tilde{f}} = \left\{ \begin{array}{ll} Z_f, & \mbox{if $Z_f \geq -\tau$,} \\ -\tau, & \mbox{if $Z_f < -\tau$}. \end{array} \right. \label{eq:4.13} \end{equation} Let $\tilde{\cal Z}_{\tilde{\kappa}, q, n} = \{ \tilde{Z}_{p_{s_1, r_1}}: p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} \}$ be the space of truncated log-likelihood ratios (based on one observation).
Define $H^B (\varepsilon, \tilde{\cal Z}_{\tilde{\kappa}, q, n}, \rho_{\tilde{\cal Z}_{\tilde{\kappa}, q, n}})$ to be the $\varepsilon$-entropy of $\tilde{\cal Z}_{\tilde{\kappa}, q, n}$ with bracketing with respect to the metric \begin{eqnarray*} && \rho_{\tilde{\cal Z}_{\tilde{\kappa}, q, n} } (\tilde{Z}_{p_{s_1, r_1}}, \tilde{Z}_{p_{s_2, r_2}} ) \nonumber \\ &=& \Big\{ E_{s,r} [ \tilde{Z}_{p_{s_1, r_1}} (\{ w_1,\cdots, w_{N(T)} \}) - \tilde{Z}_{p_{s_2, r_2}} (\{w_1,\cdots, w_{N(T)} \}) ]^2 \Big\}^{1/2} \nonumber \\ &=& \Big\{ \sum_{j=0}^\infty \int_{0<w_1<\cdots < w_j<T} [ \tilde{Z}_{p_{s_1, r_1}} (\{w_1,\cdots, w_j\}) - \tilde{Z}_{p_{s_2, r_2}} (\{ w_1,\cdots, w_j\}) ]^2 \nonumber \\ &&\hspace{0.5cm} \times e^{-\int_0^T s(t) r(t- w_{\zeta(t)}) dt} [ \prod_{i=1}^j s(w_i) r(w_i - w_{i-1}) ] dw_1\cdots dw_j \Big\}^{1/2}, \end{eqnarray*} where $\zeta(t) = \max\{ k\geq 0: w_k <t\}$ and $E_{s,r}$ denotes expectation when the true free firing rate function is $s$ and the recovery function is $r$. We observe from Lemma \ref{la:a.6} in Appendix A that \begin{equation} H^B (\varepsilon, \tilde{\cal Z}_{\tilde{\kappa}, q, n}, \rho_{\tilde{\cal Z}_{\tilde{\kappa}, q, n}} ) \leq 2^{(q+ 2)/q} C_{\tilde{\kappa}}^{1/q} C_{\tilde{\kappa}, q} (\frac{ 2 e^{\tau/2} }{ \varepsilon})^{1/q}. \label{eq:2.10} \end{equation} \section{Sieve maximum likelihood estimation} In this section, we assume that we have $n$ independent identically distributed copies of $N(t), t\in [0, T)$, with conditional intensity as given by (\ref{eq:1.1}). Let these $n$ copies be denoted by $N_i(t), t\in [0, T)$, and the spike times be written as $0< w_{i,1} <\cdots < w_{i, N_i (T)}< T$, $i=1,\cdots, n$. Inspired by Wong and Shen (1995), we shall first establish a number of likelihood ratio probability inequalities. \begin{pn} \label{pn:4.1} Let $0< \varepsilon <1$ and $C_{\tilde{\kappa}}, C_{\tilde{\kappa}, q}$ be as in (\ref{eq:2.10}). Suppose that \begin{equation} \int_{\varepsilon^2/2^8}^{\sqrt{2} \varepsilon} [ 2^{(q+2)/q} C_{\tilde{\kappa}}^{1/q} C_{\tilde{\kappa}, q} (\frac{ 10}{ x})^{1/q} ]^{1/2} dx \leq \frac{ n^{1/2} \varepsilon^2 }{ 2^{13} \sqrt{2} }. \label{eq:4.90} \end{equation} Then \begin{displaymath} P_{s,r}^* \{ \sup_{\| p_{s_1,r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \geq \varepsilon, p_{s_1, r_1}\in {\cal F}_{\tilde{\kappa}, q, n} } \prod_{i=1}^n \frac{p_{s_1, r_1} (\{ w_{i,1},\cdots, w_{i, N_i (T)} \}) }{ p_{s, r} (\{ w_{i,1},\cdots, w_{i, N_i(T)} \} )} \geq e^{- n \varepsilon^2/8} \} \leq 4 \exp[ -\frac{ n\varepsilon^2}{ 2^7 (250)}]. \end{displaymath} $P^*_{s,r}$ is the outer measure corresponding to the density $p_{s,r}$. \end{pn} We refer the reader to Appendix A for a proof of Proposition \ref{pn:4.1}. Next we define nonnegative functions $s^\dag_n$ and $r^\dag_n$ on $[0, T)$ by \begin{equation} \sqrt{ s^\dag_n (t)} = \sqrt{ s (t)} + \delta_n, \hspace{0.5cm} \sqrt{ r^\dag_n (t)} = \sqrt{ r (t)} + \delta_n, \label{eq:3.23} \end{equation} where $\{0< \delta_n \leq 1: n=1, 2, \cdots\}$ is as in Section 2. Since $s, r\in \Theta_{\tilde{\kappa}, q}$, we have $s^\dag_n, r^\dag_n \in \Theta_{\tilde{\kappa}, q, n}$ and $p_{s^\dag_n, r^\dag_n} \in {\cal F}_{\tilde{\kappa}, q, n}$ for sufficiently large $n$.
We further observe from Lemma 8 of Wong and Shen (1995) and Lemma \ref{la:a.43} in Appendix A that \begin{displaymath} 0 \leq \delta^\dag_n := E_{s,r} (\frac{ p_{s,r} }{p_{s^\dag_n, r^\dag_n } } -1) \leq C_{\tilde{\kappa}, 1} \delta_n, \end{displaymath} where $C_{\tilde{\kappa}, 1}$ is a constant depending only on $\tilde{\kappa}$. \begin{pn} \label{pn:4.2} Let $0< \varepsilon <1$, $\delta^\dag_n \leq 1$, and suppose that (\ref{eq:4.90}) holds. Then \begin{eqnarray*} && P_{s,r}^* \{ \sup_{ \|p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \geq \varepsilon, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \prod_{i=1}^n \frac{ p_{s_1, r_1} (\{ w_{i,1},\cdots, w_{i, N_i(T)} \} ) }{ p_{s^\dag_n, r^\dag_n } (\{ w_{i,1},\cdots, w_{i, N_i(T)} \}) } \geq e^{ -n \varepsilon^2/16} \} \nonumber \\ &\leq & 4 \exp[ - \frac{ n \varepsilon^2 }{ 2^7 (250) } ] + \exp[ - n( \frac{ \varepsilon^2 }{16} - \delta^\dag_n )]. \end{eqnarray*} \end{pn} {\sc Proof.} First we observe from Proposition \ref{pn:4.1} that \begin{eqnarray*} && P_{s,r}^* \{ \sup_{ \|p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \geq \varepsilon, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \prod_{i=1}^n \frac{ p_{s_1, r_1} (\{ w_{i,1},\cdots, w_{i, N_i(T)} \} ) }{ p_{s^\dag_n, r^\dag_n } (\{ w_{i,1},\cdots, w_{i, N_i(T)} \}) } \geq e^{ - n \varepsilon^2/16} \} \nonumber \\ &\leq & P_{s,r}^* \{ \sup_{ \|p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \geq \varepsilon, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \prod_{i=1}^n \frac{ p_{s_1, r_1} (\{ w_{i,1},\cdots, w_{i, N_i(T)} \} ) }{ p_{s, r } (\{ w_{i,1},\cdots, w_{i, N_i(T)} \}) } \geq e^{ - n \varepsilon^2/8} \} \nonumber \\ && + P_{s,r} \{ \prod_{i=1}^n \frac{ p_{s, r} (\{ w_{i,1},\cdots, w_{i, N_i(T)} \} ) }{ p_{s^\dag_n, r^\dag_n } (\{ w_{i,1},\cdots, w_{i, N_i(T)} \}) } \geq e^{ n \varepsilon^2/16} \} \nonumber \\ &\leq & 4 \exp[ - \frac{ n \varepsilon^2 }{ 2^7 (250) } ] + P_{s,r} \{ \prod_{i=1}^n \frac{ p_{s, r} (\{ w_{i,1},\cdots, w_{i, N_i(T)} \} ) }{ p_{s^\dag_n, r^\dag_n } (\{ w_{i,1},\cdots, w_{i, N_i(T)} \}) } \geq e^{ n \varepsilon^2/16} \}. \end{eqnarray*} Now using Markov's inequality, \begin{eqnarray*} && P_{s,r} \{ \prod_{i=1}^n \frac{ p_{s, r} (\{ w_{i,1},\cdots, w_{i, N_i(T)} \} ) }{ p_{s^\dag_n, r^\dag_n } (\{ w_{i,1},\cdots, w_{i, N_i(T)} \}) } \geq e^{ n \varepsilon^2/16} \} \nonumber \\ &\leq & e^{ - n \varepsilon^2/16} \prod_{i=1}^n E_{s,r} [ \frac{ p_{s, r} (\{ w_{i,1},\cdots, w_{i, N_i(T)} \} ) }{ p_{s^\dag_n, r^\dag_n } (\{ w_{i,1},\cdots, w_{i, N_i(T)} \}) }] \nonumber \\ &=& (1 + \delta^\dag_n )^n e^{ - n \varepsilon^2/16} \nonumber \\ &\leq & \exp( -\frac{ n \varepsilon^2 }{16} + n \delta^\dag_n ). \end{eqnarray*} This proves Proposition \ref{pn:4.2}. \hfill $\Box$ {\sc Definition.} Let $\eta_n$ be a sequence of positive numbers converging to 0. We call an estimator $p_{\hat{s}_n, \hat{r}_n}: {\cal N}^n \rightarrow {\mathbb{R}}^+$ a $\eta_n$-sieve MLE of $p_{s,r}$ if $(\hat{s}_n, \hat{r}_n) \in \Theta_{\tilde{\kappa}, q, n}^2$ and \begin{displaymath} \frac{1}{n} \sum_{i=1}^n \log [ p_{\hat{s}_n, \hat{r}_n} (\{ w_{i,1},\cdots, w_{i, N_i(T)} \}) ] \geq \sup_{p_{s_1, r_1}\in {\cal F}_{\tilde{\kappa}, q, n} } \frac{1}{n} \sum_{i=1}^n \log[ p_{s_1, r_1} (\{ w_{i,1},\cdots, w_{i, N_i(T)} \}) ] - \eta_n. \end{displaymath} The corresponding $\hat{s}_n$ and $\hat{r}_n$ are called $\eta_n$-sieve MLEs of $s$ and $r$ respectively.
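Although our interest in the sequel is theoretical, an approximate sieve MLE can in principle be computed by maximizing the log-likelihood $l(s_1, r_1|\cdot)$ summed over the $n$ realizations within a finite-dimensional family. The Python sketch below is a heuristic illustration only: it uses piecewise constant functions $s_1 = e^a$, $r_1 = e^b$ on a grid (the exponential parametrization enforces positivity, in the same spirit as the substitution $f=g^2$, but this family is not the sieve $\Theta_{\tilde{\kappa}, q, n}$ itself), and the spike trains are toy placeholder data.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

T, K = 1.0, 8                     # horizon and number of bins per function
grid = np.linspace(0.0, T, 400)   # uniform quadrature grid on [0, T)
dt = grid[1] - grid[0]

def neg_loglik(theta, trains):
    a, b = theta[:K], theta[K:]   # log-values of s and r on the K bins
    bins = lambda t: np.minimum((t / T * K).astype(int), K - 1)
    s = lambda t: np.exp(a[bins(t)])
    r = lambda u: np.exp(b[bins(u)])
    val = 0.0
    for w in trains:                                  # i.i.d. realizations
        idx = np.searchsorted(w, grid, side='right')  # N(t) on the grid
        rfac = np.ones_like(grid)                     # r(t - w_0) := 1
        rfac[idx > 0] = r(grid[idx > 0] - w[idx[idx > 0] - 1])
        val -= np.sum(s(grid) * rfac) * dt   # -int s(t) r(t - w_{N(t)}) dt
        val += np.sum(np.log(s(w)))          # + sum_j log s(w_j)
        if len(w) > 1:
            val += np.sum(np.log(r(np.diff(w))))  # + log r(w_j - w_{j-1})
    return -val

trains = [np.array([0.12, 0.35, 0.71]), np.array([0.28, 0.64])]  # toy data
fit = minimize(neg_loglik, np.zeros(2 * K), args=(trains,),
               method='L-BFGS-B')
s_hat, r_hat = np.exp(fit.x[:K]), np.exp(fit.x[K:])
\end{verbatim}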
{\sc Definition.} If $\delta_n$ in Section 2 satisfies $\delta_n =0$ for $n=1,2, \cdots$, then an $\eta_n$-sieve MLE $p_{\hat{s}_n, \hat{r}_n}$ is more simply called an $\eta_n$-MLE of $p_{s,r}$. \begin{pn} \label{pn:4.50} Let $\varepsilon_n>0$ be the smallest value of $\varepsilon$ satisfying (\ref{eq:4.90}) and $0< \eta_n < \varepsilon_n^2/16$. If $p_{\hat{s}_n,\hat{r}_n}$ is an $\eta_n$-sieve MLE of $p_{s,r}$, then \begin{displaymath} P_{s, r} ( \| p_{\hat{s}_n, \hat{r}_n}^{1/2} - p_{s,r}^{1/2} \|_2 \geq \varepsilon_n) \leq 4 \exp[ - \frac{ n \varepsilon_n^2 }{ 2^7 (250) } ] + \exp[ - n( \frac{ \varepsilon_n^2 }{16} - \delta^\dag_n )], \end{displaymath} where $P_{s,r}$ denotes probability when the true free firing rate function is $s$ and the recovery function is $r$. \end{pn} {\sc Proof.} We observe from Proposition \ref{pn:4.2} that \begin{eqnarray*} && P_{s,r} ( \| p_{\hat{s}_n, \hat{r}_n}^{1/2} - p_{s,r}^{1/2} \|_2 \geq \varepsilon_n) \nonumber \\ &\leq & P_{s,r}^* \{ \sup_{ \| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \geq \varepsilon_n, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \prod_{i=1}^n \frac{ p_{s_1, r_1} (\{w_{i,1},\cdots, w_{i, N_i(T)} \}) }{ p_{s^\dag_n, r^\dag_n } (\{w_{i,1},\cdots, w_{i, N_i(T)} \}) } \geq e^{-n \eta_n} \} \nonumber \\ &\leq & P^*_{s,r} \{ \sup_{ \| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \geq \varepsilon_n, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \prod_{i=1}^n \frac{ p_{s_1, r_1} (\{w_{i,1},\cdots, w_{i, N_i(T)} \}) }{ p_{s^\dag_n, r^\dag_n } (\{w_{i,1},\cdots, w_{i, N_i(T)} \}) } \geq e^{-n \varepsilon_n^2/16 } \} \nonumber \\ &\leq & 4 \exp[ - \frac{ n \varepsilon_n^2 }{ 2^7 (250) } ] + \exp[ - n( \frac{ \varepsilon_n^2 }{16} - \delta^\dag_n )]. \end{eqnarray*} This proves Proposition \ref{pn:4.50}. \hfill $\Box$ \begin{tm} \label{tm:3.1} Let $\varepsilon_n>0$ be the smallest value of $\varepsilon$ satisfying (\ref{eq:4.90}), $q>1/2$ and $0< \eta_n < \varepsilon_n^2/16$. If $p_{\hat{s}_n,\hat{r}_n}$ is an $\eta_n$-MLE of $p_{s,r}$, then \begin{displaymath} E_{s,r} \| p_{\hat{s}_n, \hat{r}_n}^{1/2} - p_{s,r}^{1/2} \|_2 = O(n^{-q/(2q+1)}), \hspace{0.5cm} \mbox{as $n\rightarrow\infty$.} \end{displaymath} \end{tm} {\sc Proof.} We observe that $\delta^\dag_n =0$ for $n=1,2,\cdots$ (from the definition of an $\eta_n$-MLE) and $\varepsilon_n$ is exactly of order $n^{-q/(2q+1)}$ as $n\rightarrow \infty$. Hence we observe from Proposition \ref{pn:4.50} that \begin{eqnarray*} E_{s,r} \| p_{\hat{s}_n, \hat{r}_n}^{1/2} - p_{s,r}^{1/2} \|_2 &\leq & \varepsilon_n + 8 \exp[ - \frac{ n \varepsilon_n^2 }{ 2^7 (250) } ] +2 \exp[ - n( \frac{ \varepsilon_n^2 }{16} - \delta^\dag_n )] \nonumber \\ &=& O(n^{-q/(2q+1)}), \end{eqnarray*} as $n\rightarrow\infty$. \hfill $\Box$ Now we assume that there exists a refractory period in which the neuron cannot discharge another spike after a spike has been fired [see, for example, Brillinger (1992) and Johnson and Swami (1983)]. More precisely, we suppose that there exists a constant $\theta>0$ such that \begin{equation} r( u) = 0, \hspace{0.5cm}\forall u\in [0, \theta]. \label{eq:4.86} \end{equation} Then the number of spikes on the interval $[0, T)$ can be at most $n_\theta = \lceil T/\theta \rceil$. \begin{pn} \label{pn:4.6} Let $\varepsilon_n>0$ be the smallest value of $\varepsilon$ satisfying (\ref{eq:4.90}) and $0< \eta_n < \varepsilon_n^2/16 \leq (1 - e^{-1})^2/32$. 
If (\ref{eq:4.86}) holds and $p_{\hat{s}_n,\hat{r}_n}$ is an $\eta_n$-sieve MLE of $p_{s,r}$, then \begin{eqnarray*} && P_{s,r} \Big\{ \sum_{j=0}^{n_\theta } \int_{0<w_1<\cdots < w_j <T} p_{s,r} (\{w_1,\cdots, w_j\}) \log [\frac{ p_{s,r} (\{w_1,\cdots, w_j\}) }{ p_{\hat{s}_n, \hat{r}_n} (\{w_1,\cdots, w_j\}) } ] dw_1\cdots dw_j \nonumber \\ && \hspace{0.5cm} > [ 6 + \frac{ 2 \log (2) }{(1- e^{-1})^2} + 8 \max\{1, \log( \frac{ e^{ \bar{\kappa}^4 (\bar{\kappa}^4 +1) T/2} }{ \varepsilon_n \delta_n^{ 2 n_\theta} } ) \} ] \varepsilon_n^2 \Big\} \nonumber \\ &\leq & 4 \exp[ - \frac{ n \varepsilon_n^2 }{ 2^7 (250) } ] + \exp[ - n( \frac{ \varepsilon_n^2 }{16} - \delta^\dag_n )], \end{eqnarray*} where $\bar{\kappa} = \kappa_0 \vee 1$. \end{pn} {\sc Proof.} First we observe that \begin{eqnarray*} && \sum_{j=0}^{n_\theta } \int_{0<w_1<\cdots < w_j<T} \frac{ p_{s,r}^2 (\{w_1,\cdots, w_j\}) }{ p_{\hat{s}_n, \hat{r}_n}(\{w_1,\cdots, w_j\}) } dw_1 \cdots dw_j \nonumber \\ &= & \sum_{j=0}^{n_\theta } \int_{0<w_1<\cdots < w_j<T} \frac{ e^{-2 \int_0^T s(t) r(t- w_{\zeta(t)}) dt} \prod_{i=1}^j s^2 (w_i) r^2 (w_i-w_{i-1}) }{ e^{- \int_0^T \hat{s}_n (t) \hat{r}_n (t- w_{\zeta(t)}) dt} \prod_{i=1}^j \hat{s}_n (w_i) \hat{r}_n (w_i-w_{i-1}) } dw_1 \cdots dw_j \nonumber \\ &\leq & \frac{ e^{\bar{\kappa}^4 (\bar{\kappa}^4 +1) T} }{\delta_n^{4 n_\theta }}. \end{eqnarray*} Now we observe from Theorem 5 of Wong and Shen (1995) that \begin{displaymath} \| p_{\hat{s}_n, \hat{r}_n}^{1/2} - p_{s,r}^{1/2} \|_2^2 \leq \varepsilon_n^2 \leq (1-e^{-1})^2/2 \end{displaymath} implies that \begin{eqnarray*} && \sum_{j=0}^{n_\theta } \int_{0<w_1<\cdots < w_j<T} p_{s,r} (\{w_1,\cdots, w_j\}) \log[ \frac{ p_{s,r} (\{w_1,\cdots, w_j\}) }{ p_{\hat{s}_n, \hat{r}_n}(\{w_1,\cdots, w_j\}) }] dw_1 \cdots dw_j \nonumber \\ &\leq & [ 6 + \frac{ 2 \log (2) }{(1- e^{-1})^2} + 8 \max\{1, \log( \frac{ e^{\bar{\kappa}^4 (\bar{\kappa}^4 +1) T/2} }{ \varepsilon_n \delta_n^{ 2 n_\theta} } ) \} ] \varepsilon_n^2. \end{eqnarray*} Hence it follows from Proposition \ref{pn:4.50} that \begin{eqnarray*} && P_{s,r} \Big\{ \sum_{j=0}^{n_\theta } \int_{0<w_1<\cdots < w_j<T} p_{s,r} (\{w_1,\cdots, w_j\}) \log[ \frac{ p_{s,r} (\{w_1,\cdots, w_j\}) }{ p_{\hat{s}_n, \hat{r}_n}(\{w_1,\cdots, w_j\}) }] dw_1 \cdots dw_j \nonumber \\ && \hspace{0.5cm} > [ 6 + \frac{ 2 \log (2) }{(1- e^{-1})^2} + 8 \max\{1, \log( \frac{ e^{ \bar{\kappa}^4 (\bar{\kappa}^4 +1) T/2} }{ \varepsilon_n \delta_n^{2 n_\theta } } ) \} ] \varepsilon_n^2 \Big\} \nonumber \\ &\leq & 4 \exp[ - \frac{ n \varepsilon_n^2 }{ 2^7 (250) } ] + \exp[ - n( \frac{ \varepsilon_n^2 }{16} - \delta^\dag_n )]. \end{eqnarray*} This proves Proposition \ref{pn:4.6}. \hfill $\Box$ The following theorem is the main result of this section. \begin{tm} \label{tm:3.2} Let $\varepsilon_n>0$ be the smallest value of $\varepsilon$ satisfying (\ref{eq:4.90}), $q>1/2$, $0< \eta_n < \varepsilon_n^2/16$ and $\delta_n = n^{-\alpha}$ for some constant $\alpha\in (2q/(2q+1), 1)$. Suppose that (\ref{eq:4.86}) holds and $\hat{s}_n,\hat{r}_n$ are $\eta_n$-sieve MLEs of $s,r$ respectively. Then \begin{displaymath} E_{s,r} [ \int_0^T |\hat{s}_n (t) - s(t)| dt ] = O( n^{-q/(2q +1)} \log^{1/2} n ), \hspace{0.5cm}\mbox{as $n\rightarrow\infty$.} \end{displaymath} If, in addition, $s(t) >0$ for all $t\in [0, T]$, then \begin{displaymath} E_{s,r} [ \int_0^{T^*} |\hat{r}_n (u) - r(u)| du ] = O( n^{-q/(2q+1)} \log^{1/2} n), \hspace{0.5cm}\mbox{as $n\rightarrow\infty,$} \end{displaymath} where $T^*$ is any constant satisfying $0<T^*< T$. 
\end{tm} {\sc Proof.} First we observe from (\ref{eq:4.90}) that $\varepsilon_n$ is exactly of order $n^{-q/(2q+1)}$ as $n\rightarrow\infty$. We observe from Proposition \ref{pn:4.6} and Lemma \ref{la:a.1} (in Appendix A) that \begin{eqnarray*} && P_{s, r} \Big\{ \min\{ \frac{1}{20 \int_0^T s(t) e^{-\int_0^t s(u) du} dt} , \frac{1}{200} \} [ \int_0^T | \hat{s}_n (t) - s(t) | e^{-\int_0^t s(u) du } dt ]^2 \nonumber \\ && \hspace{0.5cm} \leq [ 6 + \frac{ 2 \log (2) }{(1- e^{-1})^2} + 8 \max\{1, \log( \frac{ e^{ \bar{\kappa}^4 (\bar{\kappa}^4 +1) T/2} }{ \varepsilon_n \delta_n^{2 n_\theta } } ) \} ] \varepsilon_n^2 \Big\} \nonumber \\ &\geq & 1 - 4 \exp[ - \frac{ n \varepsilon_n^2 }{ 2^7 (250) } ] - \exp[ - n( \frac{ \varepsilon_n^2 }{16} - \delta^\dag_n )], \end{eqnarray*} or equivalently, \begin{eqnarray*} && P_{s,r} \Big\{ \int_0^T | \hat{s}_n (t) - s(t) | e^{-\int_0^t s(u) du } dt \leq \Big\{ \max\{ 20 \int_0^T s(t) e^{-\int_0^t s(u) du} dt, 200 \} \nonumber \\ &&\hspace{0.5cm}\times [ 6 + \frac{ 2 \log (2) }{(1- e^{-1})^2} + 8 \max\{1, \log( \frac{ e^{\bar{\kappa}^4 (\bar{\kappa}^4 +1) T/2} }{ \varepsilon_n \delta_n^{2 n_\theta } } ) \} ] \varepsilon_n^2 \Big\}^{1/2} \Big\} \nonumber \\ &\geq & 1 - 4 \exp[ - \frac{ n \varepsilon_n^2 }{ 2^7 (250) } ] - \exp[ - n( \frac{ \varepsilon_n^2 }{16} - \delta^\dag_n )]. \end{eqnarray*} This implies that \begin{eqnarray*} && E_{s,r} [ \int_0^T | \hat{s}_n (t) - s(t) | e^{-\int_0^t s(u) du } dt ] \nonumber \\ &\leq & \Big\{ \max\{ 20 \int_0^T s(t) e^{-\int_0^t s(u) du} dt, 200 \} \nonumber \\ &&\hspace{0.5cm}\times [ 6 + \frac{ 2 \log (2) }{(1- e^{-1})^2} + 8 \max\{1, \log( \frac{ e^{\bar{\kappa}^4 (\bar{\kappa}^4 +1) T/2} }{ \varepsilon_n \delta_n^{2 n_\theta } } ) \} ] \varepsilon_n^2 \Big\}^{1/2} \nonumber \\ && + \kappa_0^2 T \{ 4 \exp[ - \frac{ n \varepsilon_n^2 }{ 2^7 (250) } ] + \exp[ - n( \frac{ \varepsilon_n^2 }{16} - \delta^\dag_n )] \}, \end{eqnarray*} and consequently \begin{eqnarray} && E_{s,r} [ \int_0^T | \hat{s}_n (t) - s(t) | dt ] \nonumber \\ &\leq & \Big\{ \max\{ 20 \kappa_0^2 T e^{2 \kappa_0^2 T}, 200 e^{2 \kappa_0^2 T} \} [ 6 + \frac{ 2 \log (2) }{(1- e^{-1})^2} + 8 \max\{1, \log[ \frac{ e^{ \bar{\kappa}^4 (\bar{\kappa}^4 +1) T/2} }{ \varepsilon_n \delta_n^{2 n_\theta } } ] \} ] \varepsilon_n^2 \Big\}^{1/2} \nonumber \\ && + \kappa_0^2 T e^{\kappa_0^2 T} \{ 4 \exp[ - \frac{ n \varepsilon_n^2 }{ 2^7 (250) } ] + \exp[ - n( \frac{ \varepsilon_n^2 }{16} - \delta^\dag_n )] \} \label{eq:3.77} \\ &=& O( n^{-q/(2q+1)} \log^{1/2} n ), \nonumber \end{eqnarray} as $n\rightarrow \infty$. Next we assume, in addition, that $s(t) >0$ for all $t\in [0, T]$. Let $\xi(t), t\in [0, T)$ be as in (\ref{eq:a.63}). Since $s\in \Theta_{\tilde{\kappa}, q}$ and \begin{displaymath} s(t) e^{-\int_0^t s(u) du} \leq \xi(t) \leq \max\{ s(t), s(t) r(u): u \in [0, T)\}, \end{displaymath} we have $0 < \min_{0\leq t< T} \xi(t) \leq \max_{0\leq t< T} \xi(t) \leq \bar{\kappa}^4$. 
Thus as in the previous case, \begin{eqnarray*} && P_{s,r} \Big\{ \min\{ \frac{1}{20 \int_0^T \int_0^t \xi( t-u) s(t) r(u) e^{-\int_{t-u}^t s(v) r( v-t+u) dv} du dt} , \frac{1}{200} \} \nonumber \\ && \hspace{0.5cm}\times [ \int_0^T \int_0^t | \hat{s}_n (t) \hat{r}_n (u) - s(t) r(u) | \xi( t-u) e^{- \int_{t-u }^t s(v) r(v-t+u ) dv } du dt ]^2 \nonumber \\ && \hspace{0.5cm} \leq [ 6 + \frac{ 2 \log (2) }{(1- e^{-1})^2} + 8 \max\{1, \log( \frac{ e^{ \bar{\kappa}^4 (\bar{\kappa}^4 +1) T/2} }{ \varepsilon_n \delta_n^{2 n_\theta } } ) \} ] \varepsilon_n^2 \Big\} \nonumber \\ &\geq & 1 - 4 \exp[ - \frac{ n \varepsilon_n^2 }{ 2^7 (250) } ] - \exp[ - n( \frac{ \varepsilon_n^2 }{16} - \delta^\dag_n )], \end{eqnarray*} or equivalently, \begin{eqnarray*} && P_{s, r} \Big\{ \int_0^T \int_0^t | \hat{s}_n (t) \hat{r}_n (u) - s(t) r(u) | \xi( t-u) e^{- \int_{t-u }^t s(v) r(v-t+u ) dv } du dt \nonumber \\ && \hspace{0.5cm} \leq \Big\{ \max \{ 20 \int_0^T \int_0^t \xi( t-u) s(t) r(u) e^{-\int_{t-u}^t s(v) r( v-t+u) dv} du dt, 200 \} \nonumber \\ &&\hspace{0.5cm}\times [ 6 + \frac{ 2 \log (2) }{(1- e^{-1})^2} + 8 \max\{1, \log( \frac{ e^{ \bar{\kappa}^4 (\bar{\kappa}^4 +1) T/2} }{ \varepsilon_n \delta_n^{2 n_\theta } } ) \} ] \varepsilon_n^2 \Big\}^{1/2} \Big\} \nonumber \\ &\geq & 1 - 4 \exp[ - \frac{ n \varepsilon_n^2 }{ 2^7 (250) } ] - \exp[ - n( \frac{ \varepsilon_n^2 }{16} - \delta^\dag_n )]. \end{eqnarray*} This implies that \begin{eqnarray*} && E_{s,r} [ \int_0^T \int_0^t | \hat{s}_n (t) \hat{r}_n (u) - s(t) r(u) | \xi( t-u) e^{- \int_{t-u }^t s(v) r(v-t+u ) dv } du dt ] \nonumber \\ &\leq & \Big\{ \max \{ 20 \int_0^T \int_0^t \xi( t-u) s(t) r(u) e^{-\int_{t-u}^t s(v) r( v-t+u) dv} du dt, 200 \} \nonumber \\ &&\hspace{0.5cm}\times [ 6 + \frac{ 2 \log (2) }{(1- e^{-1})^2} + 8 \max\{1, \log( \frac{ e^{ \bar{\kappa}^4 (\bar{\kappa}^4 +1) T/2} }{ \varepsilon_n \delta_n^{2 n_\theta } } ) \} ] \varepsilon_n^2 \Big\}^{1/2} \nonumber \\ && + \bar{\kappa}^8 T^2 \{ 4 \exp[ - \frac{ n \varepsilon_n^2 }{ 2^7 (250) } ] + \exp[ - n( \frac{ \varepsilon_n^2 }{16} - \delta^\dag_n )] \} \nonumber \\ &\leq & \Big\{ \max \{ 20 \bar{\kappa}^8 T^2, 200 \} [ 6 + \frac{ 2 \log (2) }{(1- e^{-1})^2} + 8 \max\{1, \log( \frac{ e^{ \bar{\kappa}^4 (\bar{\kappa}^4 +1) T/2} }{ \varepsilon_n \delta_n^{2 n_\theta } } ) \} ] \varepsilon_n^2 \Big\}^{1/2} \nonumber \\ && + \bar{\kappa}^8 T^2 \{ 4 \exp[ - \frac{ n \varepsilon_n^2 }{ 2^7 (250) } ] + \exp[ - n( \frac{ \varepsilon_n^2 }{16} - \delta^\dag_n )] \}, \end{eqnarray*} and \begin{eqnarray} && E_{s,r} [ \int_0^T \int_0^t | \hat{s}_n (t) \hat{r}_n (u) - s(t) r(u) | du dt ] \nonumber \\ &\leq & \Big\{ \frac{ \max \{ 20 \bar{\kappa}^8 T^2 e^{ 2 \bar{\kappa}^4 T}, 200 e^{ 2 \bar{\kappa}^4 T} \} }{ \min_{0\leq t< T} \xi^2 (t) } [ 6 + \frac{ 2 \log (2) }{(1- e^{-1})^2} + 8 \max\{1, \log( \frac{ e^{ \bar{\kappa}^4 (\bar{\kappa}^4 +1) T/2} }{ \varepsilon_n \delta_n^{2 n_\theta } } ) \} ] \varepsilon_n^2 \Big\}^{1/2} \nonumber \\ && +\frac{ \bar{\kappa}^8 T^2 e^{ \bar{\kappa}^4 T} }{ \min_{0\leq t <T} \xi(t) } \{ 4 \exp[ - \frac{ n \varepsilon_n^2 }{ 2^7 (250) } ] + \exp[ - n( \frac{ \varepsilon_n^2 }{16} - \delta^\dag_n )] \} \label{eq:3.78} \\ &=& O( n^{-q/(2q+1)} \log^{1/2} n), \nonumber \end{eqnarray} as $n\rightarrow\infty$. 
Hence \begin{eqnarray*} && [\min_{0\leq t< T} s(t) ] E_{s,r} [ \int_0^T |\hat{r}_n (u) - r(u)| (T-u) du] \nonumber \\ &\leq & E_{s,r} [ \int_0^T \int_u^T s(t) |\hat{r}_n (u) - r(u) | dt du ] \nonumber \\ &\leq & E_{s,r} [ \int_0^T |\hat{s}_n (t) - s(t) | \int_0^t \hat{r}_n (u) du dt ] + E_{s,r} [ \int_0^T \int_0^t | \hat{s}_n (t) \hat{r}_n (u) - s(t) r(u) | du dt ] \nonumber \\ &\leq & \bar{\kappa}^2 T E_{s,r} [ \int_0^T |\hat{s}_n (t) - s(t) | dt ] + E_{s,r} [ \int_0^T \int_0^t | \hat{s}_n (t) \hat{r}_n (u) - s(t) r(u) | du dt ]. \end{eqnarray*} Thus we conclude from (\ref{eq:3.77}) and (\ref{eq:3.78}) that \begin{eqnarray*} && E_{s,r} [\int_0^{T^*} | \hat{r}_n (u) - r(u) | du ] \nonumber \\ &\leq & \frac{ \bar{\kappa}^2 T}{ (T- T^*) \min_{0\leq t< T} s(t) } E_{s,r} [ \int_0^T |\hat{s}_n (t) - s(t) | dt ] \nonumber \\ && + \frac{1}{(T-T^*) \min_{0\leq t< T} s(t) } E_{s,r} [ \int_0^T \int_0^t | \hat{s}_n (t) \hat{r}_n (u) - s(t) r(u) | du dt ] \nonumber \\ &=& O( n^{-q/(2q +1)} \log^{1/2} n), \end{eqnarray*} as $n\rightarrow\infty$. This proves the theorem.\hfill $\Box$ \section{Lower bounds} Suppose $N(.)$ is a counting process with conditional intensity $\lambda_1 (.|.)$ as given by (\ref{eq:1.1}). Let $N_1(t), \cdots, N_n(t), t\in [0, T),$ be independent identically distributed copies of $N(t), t\in [0, T)$. In this section we shall compute lower bounds on the rates of convergence of estimators for $s$ and for $r$ based on $N_1(t), \cdots, N_n(t), t\in [0, T)$. Let $0< \theta < T$ be as in (\ref{eq:4.86}) and $\Theta_{\tilde{\kappa}, q}$ be as in Section 2. Define \begin{eqnarray*} \Theta_{\theta, \tilde{\kappa}, q } &=& \Big\{ f= g^2: g\in {\cal C}^{q_0} [0, T), \min_{t\in [0, T)} g(t) \geq 0, \max_{t\in [0, T)} |\frac{ d^j}{dt^j} g (t)| < \kappa_j, j=0,\cdots, q_0, \nonumber \\ &&\hspace{0.5cm} |\frac{ d^{q_0}}{ dt^{q_0}} g (t_1) - \frac{ d^{q_0}}{ dt^{q_0}} g (t_2) | \leq \kappa_{q_0+1} |t_1-t_2|^{q_1}, \forall t_1,t_2\in [0, T), \mbox{$g(t) = 0$ if $t\in [0, \theta]$} \Big\}. \end{eqnarray*} \begin{la} \label{la:3.1} Let $\tilde{\Theta}_{\tilde{\kappa}, q, n}\subseteq \Theta_{\tilde{\kappa}, q}$ such that {\rm card}$(\tilde{\Theta}_{\tilde{\kappa}, q, n}) < \infty$. Suppose that $\tilde{s}_n$ is an estimator for $s$ based on $N_1(t), \cdots, N_n(t), t\in [0, T)$. Then \begin{eqnarray} && \sup \{ E_{s, r} [ \int_0^T | \tilde{s}_n (t) - s(t) | dt]: s \in \Theta_{\tilde{\kappa}, q}, r \in \Theta_{\theta, \tilde{\kappa}, q} \} \nonumber \\ &\geq & \frac{1}{2} \inf \{ \int_0^T |s_1(t) - s_2(t) | dt: s_1\neq s_2, s_1, s_2 \in \tilde{\Theta}_{ \tilde{\kappa}, q, n} \} \Big\{ 1 - \frac{1}{ \log[ {\rm card} (\tilde{\Theta}_{\tilde{\kappa}, q, n} )-1 ] } \Big[ \log 2 \nonumber \\ &&\hspace{1.0cm} + \frac{1}{[{\rm card} (\tilde{\Theta}_{\tilde{\kappa}, q, n} ) ]^2 } \sum_{s_1, s_2 \in \tilde{\Theta}_{\tilde{\kappa}, q, n} } \sum_{i=1}^n E_{s_1, r_1} \log \frac{ p_{s_1, r_1} (\{ w_{i,1},\cdots, w_{i, N_i(T)} \}) }{ p_{s_2, r_1} (\{ w_{i,1},\cdots, w_{i, N_i(T)} \}) } \Big] \Big\}, \label{eq:3.5} \end{eqnarray} for any $r_1\in \Theta_{\theta, \tilde{\kappa}, q}$. Next suppose that $\tilde{r}_n$ is an estimator for $r$ based on $N_1(t), \cdots, N_n(t), t\in [0, T)$. 
Let $T^*$ be a constant satisfying $\theta< T^* < T$ and $\tilde{\Theta}_{\theta, T^*, \tilde{\kappa}, q, n} \subset \Theta_{\theta, \tilde{\kappa}, q}$ such that {\rm card}$(\tilde{\Theta}_{\theta, T^*, \tilde{\kappa}, q, n}) <\infty$ and $r_1(u) = r_2(u), u\in [T^*, T)$ $\forall r_1, r_2 \in \tilde{\Theta}_{\theta, T^*, \tilde{\kappa}, q, n}$. Then \begin{eqnarray} && \sup \{ E_{s, r} [ \int_0^{T^*} | \tilde{r}_n (t) - r(t) | dt]: s \in \Theta_{\tilde{\kappa}, q}, r \in \Theta_{\theta, \tilde{\kappa}, q} \} \nonumber \\ &\geq & \frac{1}{2} \inf \{ \int_0^{T^*} |r_1(t) - r_2(t) | dt: r_1\neq r_2, r_1, r_2 \in \tilde{\Theta}_{\theta, T^*, \tilde{\kappa}, q, n} \} \nonumber \\ &&\hspace{0.5cm}\times \Big\{ 1 - \frac{1}{ \log [ {\rm card} (\tilde{\Theta}_{\theta, T^*, \tilde{\kappa}, q, n} )-1 ] } \Big[ \log 2 \nonumber \\ &&\hspace{1.0cm} + \frac{1}{[ {\rm card} (\tilde{\Theta}_{\theta, T^*, \tilde{\kappa}, q, n} ) ]^2 } \sum_{r_1, r_2 \in \tilde{\Theta}_{\theta, T^*, \tilde{\kappa}, q, n} } \sum_{i=1}^n E_{s_1, r_1} \log \frac{ p_{s_1, r_1} (\{ w_{i,1},\cdots, w_{i, N_i(T)} \}) }{ p_{s_1, r_2} (\{ w_{i,1},\cdots, w_{i, N_i(T)} \}) } \Big] \Big\}, \label{eq:3.6} \end{eqnarray} for any $s_1\in \Theta_{\tilde{\kappa}, q}$. \end{la} We refer the reader to Appendix A for a proof of Lemma \ref{la:3.1}. Theorems \ref{tm:4.1} and \ref{tm:4.2} (below) are the main results of this section. They are motivated by the lower bound results in Yatracos (1988). \begin{tm} \label{tm:4.1} Let $q>0$. Suppose that $\tilde{s}_n$ is an estimator for $s$ based on $N_1(t), \cdots, N_n(t), t\in [0, T)$. Then there exists a constant $C_{\tilde{\kappa},q}>0$ (depending only on $\tilde{\kappa}$ and $q$) such that \begin{displaymath} \sup \{ E_{s, r} [ \int_0^T | \tilde{s}_n (t) - s(t) | dt]: s \in \Theta_{\tilde{\kappa}, q}, r \in \Theta_{\theta, \tilde{\kappa}, q} \} \geq C_{\tilde{\kappa}, q} n^{-q/(2q+1)}. \end{displaymath} \end{tm} {\sc Proof.} Let $\{b_n >0: n=1, 2, \cdots\}$ be a sequence of constants that tends to 0 as $n\rightarrow \infty$ and is such that $b_n^{-1}$ is an integer. For $i=1,\cdots, b_n^{-1}$, define $\phi_{i,n}: [0, T) \rightarrow \mathbb{R}$ by \begin{displaymath} \phi_{i,n} (t) = \left\{ \begin{array}{ll} (b_n T)^q [ 1 - (\frac{ 2 t - (2i-1) b_n T}{ b_n T} )^2 ]^q, & \mbox{if $(i-1) b_n T \leq t< i b_n T$,} \\ 0, & \mbox{otherwise.} \end{array} \right. \end{displaymath} Writing $q=q_0+q_1$, $q_0$ a nonnegative integer and $0<q_1\leq 1$, we have \begin{eqnarray*} \lim_{n\rightarrow\infty} \max_{t\in [0, T)} |\frac{ d^j }{dt^j} \phi_{i,n} (t) | &<& \infty, \hspace{0.5cm} \forall j=0, \cdots, q_0, \nonumber \\ \lim_{n\rightarrow\infty} \max_{t_1\neq t_2\in [0, T)} |\frac{ d^{q_0} }{dt^{q_0}} \phi_{i,n} (t_1) - \frac{ d^{q_0} }{dt^{q_0}} \phi_{i,n} (t_2)|/|t_1-t_2|^{q_1} &<& \infty. \end{eqnarray*} Let $\Xi_{a,n}$ denote the set of functions of the form \begin{displaymath} a [ 1 + \sum_{i=1}^{b_n^{-1}} \gamma_i \phi_{i,n} (t) ]^2, \hspace{0.5cm}\forall t\in [0, T), \end{displaymath} where $\gamma_i = 0$ or $1$ and $a>0$ is a suitably small constant such that $\Xi_{a,n} \subset \Theta_{\tilde{\kappa}, q}$. 
If $s_1, s_2 \in \Xi_{a,n}$ where $s_1 \neq s_2$, then writing \begin{equation} s_1 (t) = a [ 1 + \sum_{i=1}^{b_n^{-1}} \gamma_{1,i} \phi_{i,n} (t) ]^2, \hspace{0.5cm} s_2 (t) = a [ 1 + \sum_{i=1}^{b_n^{-1}} \gamma_{2,i} \phi_{i,n} (t) ]^2, \label{eq:3.1} \end{equation} with $\gamma_{1,i}, \gamma_{2,i}$ taking values $0$ or $1$, we have \begin{eqnarray*} \int_0^T | s_1 (t) - s_2 (t) | dt &=& a \int_0^T | 2 \sum_{i=1}^{b_n^{-1}} (\gamma_{1,i} -\gamma_{2, i} ) \phi_{i,n}(t) + \sum_{i=1}^{b_n^{-1}} (\gamma_{1,i} -\gamma_{2, i}) \phi^2_{i,n} (t) | dt \nonumber \\ &\geq & a \int_0^{b_n T} [ 2 \phi_{1,n}(t) + \phi^2_{1,n} (t) ] dt \nonumber \\ &=& a (b_n T)^{q+1} J_q + \frac{a (b_n T)^{2q+1} J_{2q} }{2}, \end{eqnarray*} where \begin{equation} J_l = \int_{-1}^1 (1- y^2)^l dy>0, \hspace{0.5cm}\forall l>0. \label{eq:4.77} \end{equation} Also it follows from (\ref{eq:3.1}) that \begin{eqnarray*} |\frac{ s_1 (t) - s_2(t)}{s_1(t)} | &\leq & 2 a \phi_{i,n}(t) + a \phi^2_{i,n} (t) \nonumber \\ &\leq & 2 a (b_n T)^q + a (b_n T)^{2 q}, \hspace{0.5cm}\forall t\in [0, T). \end{eqnarray*} Let $r_1 \in \Theta_{\theta, \tilde{\kappa}, q}$. Now using Lemma \ref{la:a.1} in Appendix A, \begin{eqnarray*} && E_{s_1, r_1} \log [ \frac{ p_{s_1, r_1} (\{ w_{i,1},\cdots, w_{i, N_i(T)}\} ) }{ p_{s_2, r_1} (\{ w_{i,1},\cdots, w_{i, N_i(T)}\} ) } ] \nonumber \\ &=& \int_0^T \{ \frac{s_2 (t) }{s_1 (t)} -1 - \log[ \frac{s_2 (t) }{ s_1 (t)} ] \} s_1 (t) e^{-\int_0^t s_1 (u) du } dt \nonumber \\ && + \int_0^T \int_0^t \{ \frac{ s_2 (t) }{ s_1 (t) } -1 - \log[ \frac{s_2 (t) }{ s_1 (t) } ] \} \xi( t-u) s_1 (t) r_1 (u) e^{- \int_{t-u }^t s_1 (v) r_1 (v-t+u ) dv } du dt \nonumber \\ &\leq & \frac{1}{2} \int_0^T (\frac{ s_1(t)-s_2(t) }{s_1(t)} )^2 s_1 (t) e^{-\int_0^t s_1 (u) du } dt \nonumber \\ && + \frac{1}{2} \int_0^T \int_0^t (\frac{ s_1(t) - s_2 (t) }{ s_1 (t) })^2 \xi( t-u) s_1 (t) r_1 (u) e^{- \int_{t-u }^t s_1 (v) r_1 (v-t+u ) dv } du dt \nonumber \\ &\leq & \frac{ a^2 (b_n T)^{2 q} }{2} [ 2 + (b_n T)^q ]^2 (1 + \bar{\kappa}^8 T^2), \end{eqnarray*} where $\bar{\kappa}= \kappa_0 \vee 1$. Finally we observe from Proposition 3.8 of Birg\'{e} (1983) that there exists a subset $\tilde{\Theta}_{\tilde{\kappa}, q, n}$ of $\Xi_{a, n}$ such that \begin{displaymath} \int_0^T |s_1(t) -s_2(t)| dt \geq \frac{ 1}{8 b_n} [ a (b_n T)^{q+1} J_q + \frac{a (b_n T)^{2q+1} J_{2q} }{2} ], \hspace{0.5cm} \forall s_1\neq s_2 \in \tilde{\Theta}_{\tilde{\kappa}, q, n}, \end{displaymath} and $\log[ {\rm card}(\tilde{\Theta}_{\tilde{\kappa}, q, n}) -1] > 0.316/b_n$. Consequently we conclude from (\ref{eq:3.5}) that \begin{eqnarray*} && \sup \{ E_{s, r} [ \int_0^T | \tilde{s}_n (t) - s(t) | dt]: s \in \Theta_{\tilde{\kappa}, q}, r \in \Theta_{\theta, \tilde{\kappa}, q } \} \nonumber \\ &\geq & \frac{ 1}{16 b_n} [ a (b_n T)^{q+1} J_q + \frac{a (b_n T)^{2q+1} J_{2q} }{2} ] \Big\{ 1 \nonumber \\ &&\hspace{0.5cm} - \frac{b_n}{0.316} \Big[ \log 2 + \frac{ a^2 n (b_n T)^{2 q} }{2} [ 2 + (b_n T)^q ]^2 (1 + \bar{\kappa}^8 T^2) \Big] \Big\} \nonumber \\ &=& \frac{ a b_n^q T^{q+1} }{16 } [ J_q + \frac{ (b_n T)^q J_{2q} }{2} ] \Big\{ 1 - \frac{b_n}{0.316} \Big[ \log 2 + \frac{ a^2 n (b_n T)^{2 q} }{2} [ 2 + (b_n T)^q ]^2 (1 + \bar{\kappa}^8 T^2) \Big] \Big\}. 
\end{eqnarray*} Thus we conclude that there exist strictly positive constants $C_0$ and $C_{ \tilde{\kappa}, q}$ (depending only on $\tilde{\kappa}$ and $q$) such that by taking $b_n = 1/\lceil C_0 n^{1/(2 q+1)} \rceil$, we have \begin{displaymath} \sup \{ E_{s, r} [ \int_0^T | \tilde{s}_n (t) - s(t) | dt]: s \in \Theta_{\tilde{\kappa}, q}, r \in \Theta_{\theta, \tilde{\kappa}, q} \} \geq C_{\tilde{\kappa}, q} n^{-q/(2q+1)}. \end{displaymath} This proves Theorem \ref{tm:4.1}.\hfill $\Box$ \begin{tm} \label{tm:4.2} Let $q>0$ and $\theta, T^*$ be constants satisfying $0< \theta< T^* < T$. Suppose that $\tilde{r}_n$ is an estimator for $r$ based on $N_1(t), \cdots, N_n(t), t\in [0, T)$. Then there exists a constant $C_{\theta, \tilde{\kappa}, q}>0$ (depending only on $\theta, \tilde{\kappa}$ and $q$) such that \begin{displaymath} \sup \{ E_{s, r} [ \int_0^{T^*} | \tilde{r}_n (u) - r(u) | du]: s \in \Theta_{\tilde{\kappa}, q}, r \in \Theta_{\theta, \tilde{\kappa}, q } \} \geq C_{\theta, \tilde{\kappa}, q} n^{-q/(2q +1)}. \end{displaymath} \end{tm} {\sc Proof.} Let $f\in \Theta_{\theta, \tilde{\kappa}, q}$ be such that $f(u) >0$ if $u\in [\theta^*, T^*]$ for some constant $\theta^*$ satisfying $0< \theta < \theta^* < T^*$. Then $0< \underline{f} := \min_{u\in [\theta^*, T^*] } f(u) \leq \bar{f} := \max_{u\in [\theta^*, T^* ]} f(u) <\infty$. Let $\{ b_n >0: n=1, 2, \cdots\}$ be a sequence of constants that tends to 0 as $n\rightarrow \infty$ and is such that $b_n^{-1}$ is an integer. Then for $i=1,\cdots, b_n^{-1}$, define $\phi_{i,n}: [0, T) \rightarrow \mathbb{R}$ by \begin{eqnarray*} \phi_{i,n} (u) &= & \left\{ \begin{array}{ll} [ b_n (T^* - \theta^*) ]^q [ 1 - (\frac{ 2 u - 2 \theta^* - (2i-1) b_n (T^* - \theta^*) }{ b_n (T^* - \theta^*) } )^2 ]^q, & \\ \hspace{0.5cm} \mbox{if $\theta^* + (i-1) b_n (T^* -\theta^*) \leq u < \theta^* + i b_n (T^* - \theta^*)$,} \\ 0,\hspace{0.5cm} \mbox{elsewhere}. & \end{array} \right. \end{eqnarray*} Let $\Xi_{a,n}$ denote the set of functions of the form \begin{displaymath} a [ f^{1/2} (u) + \sum_{i=1}^{b_n^{-1}} \gamma_i \phi_{i,n} (u) ]^2, \hspace{0.5cm}\forall u\in [0, T), \end{displaymath} where $\gamma_i = 0$ or $1$ and $a>0$ is a sufficiently small constant such that $\Xi_{a, n} \subset \Theta_{\theta, \tilde{\kappa}, q }$. If $r_1, r_2 \in \Xi_{a, n}$ where $r_1 \neq r_2$, then writing \begin{eqnarray} r_1 (u)&=& a [ f^{1/2} (u) + \sum_{i=1}^{b_n^{-1}} \gamma_{1,i} \phi_{i,n} (u) ]^2, \nonumber \\ r_2 (u)&=& a [ f^{1/2} (u) + \sum_{i=1}^{b_n^{-1}} \gamma_{2,i} \phi_{i,n} (u) ]^2, \label{eq:3.11} \end{eqnarray} with $\gamma_{1,i}, \gamma_{2,i}$ taking values $0$ or $1$, we have \begin{eqnarray*} && \int_0^{T^*} | r_1 (u) - r_2 (u) | du \nonumber \\ &=& a \int_{\theta^*}^{T^*} | 2 f^{1/2} (u) \sum_{i=1}^{b_n^{-1}} (\gamma_{1,i} -\gamma_{2,i}) \phi_{i,n} (u) + \sum_{i=1}^{b_n^{-1}} (\gamma_{1,i} -\gamma_{2,i}) \phi_{i,n}^2 (u) | du \nonumber \\ &\geq & a \int_{\theta^*}^{\theta^*+ b_n (T^* -\theta^*)} [ 2 \underline{f}^{1/2} \phi_{1,n} (u) + \phi_{1,n}^2 (u) ] du \nonumber \\ &= & a \underline{f}^{1/2} [b_n (T^* -\theta^*)]^{q+1} J_q + \frac{ a [ b_n (T^* - \theta^*)]^{2q+1} J_{2q} }{ 2}, \end{eqnarray*} where $J_q$ is as in (\ref{eq:4.77}). 
Also it follows from (\ref{eq:3.11}) that \begin{eqnarray*} |\frac{ r_1 (u) - r_2(u)}{r_1(u)} | &\leq & \frac{ 2 a \bar{f}^{1/2} \phi_{i,n}(u) + a \phi^2_{i,n} (u) }{ \underline{f}^{1/2} } \nonumber \\ &\leq & \frac{ 2 a \bar{f}^{1/2} [b_n (T^* - \theta^*)]^q }{\underline{f}^{1/2} } + \frac{ a [ b_n (T^* - \theta^*)]^{2 q} }{\underline{f}^{1/2} }, \hspace{0.5cm}\forall u\in [\theta^*, T^*]. \end{eqnarray*} Let $s_1 \in \Theta_{\tilde{\kappa}, q}$. Now using Lemma \ref{la:a.1}, \begin{eqnarray*} && E_{s_1, r_1} \log [ \frac{ p_{s_1, r_1} (\{ w_{i,1},\cdots, w_{i, N_i(T)}\} ) }{ p_{s_1, r_2} (\{ w_{i,1},\cdots, w_{i, N_i(T)}\} ) } ] \nonumber \\ &=& \int_0^T \int_0^t \{ \frac{ r_2 (u) }{ r_1 (u) } -1 - \log[ \frac{r_2 (u) }{ r_1 (u) } ] \} \xi( t-u) s_1 (t) r_1 (u) e^{- \int_{t-u }^t s_1 (v) r_1 (v-t+u ) dv } du dt \nonumber \\ &= & \int_0^T \{ \frac{ r_2 (u) }{ r_1 (u) } -1 - \log[ \frac{r_2 (u) }{ r_1 (u) } ] \} \int_u^T \xi( t-u) s_1 (t) r_1 (u) e^{- \int_{t-u }^t s_1 (v) r_1 (v-t+u ) dv } dt du \nonumber \\ &\leq & \frac{1}{2} \int_{\theta^*}^{T^*} (\frac{ r_1(u) - r_2(u) }{ r_1(u)})^2 \int_u^T \xi( t-u) s_1 (t) r_1 (u) e^{- \int_{t-u }^t s_1 (v) r_1 (v-t+u ) dv } dt du \nonumber \\ &\leq & \frac{\bar{\kappa}^8 b_n^{2 q} (T - \theta^*)^2 [ 2 a \bar{f}^{1/2} (T^* - \theta^*)^q + a b_n^q (T^* - \theta^*)^{2 q}]^2 }{2 \underline{f} }, \end{eqnarray*} where $\bar{\kappa} = \kappa_0 \vee 1$. Finally we observe from Proposition 3.8 of Birg\'{e} (1983) that there exists a subset $\tilde{\Theta}_{\theta, T^*, \tilde{\kappa}, q, n}$ of $\Xi_{a, n}$ such that \begin{eqnarray*} && \int_0^{T^*} |r_1(u) -r_2(u)| du \nonumber \\ & \geq & \frac{ 1}{8 b_n} \Big[ a \underline{f}^{1/2} [b_n (T^* -\theta^*)]^{q+1} J_q + \frac{ a [ b_n (T^* - \theta^*)]^{2q+1} J_{2q} }{ 2} \Big], \hspace{0.5cm} \forall r_1\neq r_2 \in \tilde{\Theta}_{\theta, T^*, \tilde{\kappa}, q, n}, \end{eqnarray*} and $\log[ {\rm card}(\tilde{\Theta}_{\theta, T^*, \tilde{\kappa}, q, n}) -1] > 0.316/b_n$. Consequently we conclude from (\ref{eq:3.6}) that \begin{eqnarray*} && \sup \{ E_{s, r} [ \int_0^{T^*} | \tilde{r}_n (u) - r(u) | du]: s \in \Theta_{\tilde{\kappa}, q}, r \in \Theta_{\theta, \tilde{\kappa}, q } \} \nonumber \\ &\geq & \frac{ a b_n^q }{16 } [ \underline{f}^{1/2} (T^* -\theta^*)^{q+1} J_q + \frac{ b_n^q (T^* - \theta^*)^{2q+1} J_{2q} }{ 2} ] \Big\{ 1 \nonumber \\ &&\hspace{0.5cm} - \frac{b_n}{0.316} \Big[ \log 2 + \frac{ a \bar{\kappa}^8 n b_n^{2 q} (T - \theta^*)^2 [ 2 \bar{f}^{1/2} (T^* - \theta^*)^q + b_n^q (T^* - \theta^*)^{2 q} ]^2 }{2 \underline{f} } \Big] \Big\}. \end{eqnarray*} Hence there exist strictly positive constants $C_1$ and $C_{\theta, \tilde{\kappa}, q}$ (depending only on $\theta, \tilde{\kappa}$ and $q$) such that by taking $b_n = 1/ \lceil C_1 n^{1/(2q+1)} \rceil$, we have \begin{displaymath} \sup \{ E_{s, r} [ \int_0^{T^*} | \tilde{r}_n (u) - r(u) | du]: s \in \Theta_{\tilde{\kappa}, q}, r \in \Theta_{\theta, \tilde{\kappa}, q} \} \geq C_{\theta, \tilde{\kappa}, q} n^{-q/(2q+1)}. \end{displaymath} This proves Theorem \ref{tm:4.2}.\hfill $\Box$ \section{Template matching with continuous kernels} In the second part of this article, let ${\bf w} =({\bf w}^{(1)},\ldots,{\bf w}^{(d)})$ be the spike train pattern of an assembly of $d$ neurons recorded when an experimental stimulus is provided to a subject, where ${\bf w}^{(i)} = \{ w^{(i)}_1,\ldots,w^{(i)}_{N_i(T)} \}$ are the spike times of the $i$th neuron over the period $[0,T)$. 
The same neurons of the subject are subsequently observed for a longer period of time when the subject is engaged in other activities, and the corresponding spike trains ${\bf y} = ({\bf y}^{(1)},\ldots,{\bf y}^{(d)})$ are checked for occurrences of the template ${\bf w}$. For $t \geq 0$, let ${\bf y}_t = ({\bf y}_t^{(1)}, \ldots,{\bf y}_t^{(d)})$, where ${\bf y}_t^{(i)} = \{ y-t: y \in {\bf y}^{(i)} \cap [t,t+T) \}$. There are various algorithms in the neuroscience literature that have been used to determine whether there is a close match between ${\bf y}_t$ and ${\bf w}$. In Gr\"{u}n, Diesmann and Aertsen (2002), $T$ is chosen small and a match is declared if $$\{ 1\leq i\leq d : {\bf w}^{(i)} = \emptyset \} = \{ 1\leq i \leq d : {\bf y}_t^{(i)} =\emptyset \}. $$ In the sliding sweeps algorithm [cf.\ Dayhoff and Gerstein (1983) and N\'{a}dasdy {\em et al.} (1999)], a match is declared if $$\sup_{1 \leq i \leq d} \sup_{w \in {\bf w}^{(i)}} \inf_{y-t \in {\bf y}_t^{(i)}} |y-t-w| \leq \Delta, $$ where $\Delta > 0$ is a pre-determined constant. We shall study in this section the pattern filtering algorithm [cf. Chi, Rauske and Margoliash (2003)], which uses a scoring system to measure the proximity between ${\bf w}$ and ${\bf y}_t$. Let $f$ be a non-increasing and non-constant function on $[0,\infty)$ with $f(0) > 0$. The score between ${\bf w}$ and ${\bf y}_t$ is given by \begin{equation} S_t = \sum_{i=1}^d S_t^{(i)}, \quad {\rm where} \enskip S_t^{(i)} = T^{-1} \sum_{y-t \in {\bf y}_t^{(i)}} \max_{w \in {\bf w}^{(i)}} f(|y-t-w|). \label{1} \end{equation} For a given template ${\bf w}$, define the kernel functions \begin{equation} g_{\bf w}^{(i)}(u) = \Big[ \max_{w \in {\bf w}^{(i)}} f(|u-w|) \Big] {\bf 1}_{\{ 0 \leq u < T \}}, \hspace{0.5cm} \forall i=1,\cdots, d. \label{2} \end{equation} Then we can also express $$ S_t^{(i)} = T^{-1} \sum_{y \in {\bf y}^{(i)}} g_{\bf w}^{(i)}(y-t). $$ The graph of $S_t^{(i)}$ against $t$ is thus a normalized sum of the kernels $g_{\bf w}^{(i)}(y - \cdot)$ over all $y \in {\bf y}^{(i)}$. We declare a match between ${\bf y}_t$ and ${\bf w}$ to be present when the proximity score $S_t$ exceeds a pre-determined threshold level $c$. To prevent overcounting, a match at time $t$ is declared to be new only if the overlap between the time interval $[t,t+T)$ and the time interval of the previous new match is less than $\alpha T$ for some constant $0 < \alpha < 1$. More specifically, let $\sigma_1 = \inf \{ t: S_t \geq c \}$ and $\sigma_{j+1} = \inf \{ t> \sigma_j+(1-\alpha)T: S_t \geq c \}$ for $j \geq 1$. Then the number of new matches between the spike trains ${\bf y}$ over the time interval $[0,a+T)$ and the template ${\bf w}$ is $U_a := \sup \{ j: \sigma_j \leq a \}$; with the convention $U_a=0$ if $\sigma_1 > a$. To prevent the occurrence of too many (false) matches when ${\bf y}$ is pure noise, the threshold level $c$ has to be chosen reasonably large. For $a$ large, there can be on the average more than one new (false) match between ${\bf y}$ and ${\bf w}$. The Poisson distribution is often used for modeling $U_a$ to compute the $p$-value under such circumstances. For small $a$, the occurrence of a single match would itself be rare and we can use the probability of having at least one match as the $p$-value. 
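The score in (\ref{1})--(\ref{2}) is straightforward to evaluate numerically. The following sketch is ours and only illustrative: it assumes that $f$ accepts {\tt numpy} arrays and that every template train ${\bf w}^{(i)}$ is nonempty.
\begin{verbatim}
import numpy as np

def kernel_g(u, template_i, f, T):
    # g_w^{(i)}(u) = [max_{w in template_i} f(|u - w|)] 1{0 <= u < T}, cf. (2).
    u = np.atleast_1d(np.asarray(u, dtype=float))
    w = np.asarray(template_i, dtype=float)
    vals = f(np.abs(u[:, None] - w[None, :])).max(axis=1)
    return np.where((u >= 0.0) & (u < T), vals, 0.0)

def score(t, trains, template, f, T):
    # S_t = sum_i T^{-1} sum_{y in y^(i)} g_w^{(i)}(y - t), cf. (1).
    return sum(kernel_g(np.asarray(y) - t, w_i, f, T).sum() / T
               for y, w_i in zip(trains, template))
\end{verbatim}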
For this purpose, we study the scan statistic \begin{displaymath} M_a := \sup_{0 \leq t \leq a} S_t, \end{displaymath} and its dual, the time to detection \begin{displaymath} V_c := \inf \{ t: S_t \geq c \}, \end{displaymath} which we shall show to have asymptotic Gumbel and exponential distributions respectively. In this section we take $f$ to be continuous on $[0,\infty)$, while in Section 6 we consider the case in which $f$ is not continuous. \subsection{Main results} Let ${\bf y}^{(i)}$, $i=1,\cdots,d,$ be independent Poisson processes with constant intensity $\lambda_i > 0$. Consider the following regularity conditions on ${\bf w}$ and $f$. \smallskip \noindent (A1) Let ${\bf w}_*^{(1)},\cdots,{\bf w}_*^{(d)}$ be point processes on $[0,\infty)$ with each ${\bf w}_*^{(i)}$ ergodic, stationary and having non-constant inter-arrival times and let ${\bf w}^{(i)} = {\bf w}_*^{(i)} \cap [0,T)$. \smallskip \noindent (A2) Let $f$ be continuous and let there be a possibly empty finite set $H$ such that the second derivative of $f$ exists and is uniformly continuous and bounded over any interval inside ${\mathbb{R}}^+ \setminus H$. Moreover, \begin{equation} 0 < \sup_{x \in {\mathbb{R}}^+ \setminus H} \Big| \frac{d}{dx} f(x) \Big| < \infty \quad {\rm and} \quad \lim_{x \rightarrow \infty} f(x) > -\infty. \label{4} \end{equation} Let $\mu_{\bf w} = T^{-1} \sum_{i=1}^d \lambda_i \int_0^T g_{\bf w}^{(i)}(u) \; du$ be the expected value of $S_t$ conditional on the template ${\bf w}$. Let the large deviation rate function of $S_t$ be \begin{equation} \phi_{\bf w}(c) = \sup_{\theta > 0} \Big[ \theta c- T^{-1} \sum_{i=1}^d \lambda_i \int_0^T (e^{\theta g_{\bf w}^{(i)}(u)} -1) \; du \Big] \quad {\rm for} \ c > \mu_{\bf w}. \label{5} \end{equation} We shall denote by $\theta_{\bf w}$ ($=\theta_{{\bf w},c}$) the unique value of $\theta > 0$ that attains the supremum on the right hand side of (\ref{5}). By the stationarity of ${\bf w}_*^{(i)}$ in (A1), for all $y \in {\mathbb{R}}$, the distribution of $\max_{w \in {\bf w}_*^{(i)}} f(|y-w|)$ is equal to the distribution of $Z_i := \max_{w \in {\bf w}_*^{(i)}} f(|w|)$. Hence by the ergodicity of ${\bf w}^{(i)}$ in (A1) and the boundedness of $f$ in (A2), \begin{eqnarray} & & T^{-1} \int_0^T e^{\theta g_{\bf w}^{(i)}(u)} du \rightarrow Ee^{\theta Z_i} \hspace{0.5cm} \mbox{a.s.\ $\forall \theta > 0$} \cr & {\rm and} & T^{-1} \int_0^T g_{\bf w}^{(i)} (u) \ du \rightarrow E Z_i \hspace{0.5cm} \mbox{a.s.\ as $T \rightarrow \infty$}. \label{6} \end{eqnarray} Let $\mu = \sum_{i=1}^d \lambda_i EZ_i$. Define the limiting large deviation rate function \begin{equation} \phi(c) = \sup_{\theta>0} \Big[ \theta c - \sum_{i=1}^d \lambda_i (Ee^{\theta Z_i}-1) \Big] \quad {\rm for} \ c > \mu. \label{7} \end{equation} Let $\theta_*$ ($=\theta_{*,c}$) be the unique value of $\theta > 0$ attaining the supremum on the right hand side of (\ref{7}). Then by (\ref{5}), (\ref{6}) and (\ref{7}), $\mu_{\bf w} \rightarrow \mu, \phi_{\bf w} \rightarrow \phi$ pointwise on $(\mu,\infty)$ and $\theta_{\bf w} \rightarrow \theta_*$ a.s.\ as $T \rightarrow \infty$. Similarly, \begin{eqnarray} v_{\bf w} &:=& T^{-1} \sum_{i=1}^d \lambda_i \int_0^T [g_{\bf w}^{(i)}(u)]^2 e^{\theta_{\bf w} g_{\bf w}^{(i)}(u)} du, \nonumber \\ \tau_{\bf w} &:=& T^{-1} \sum_{i=1}^d \lambda_i \int_0^T \Big[ \frac{d}{du} g_{\bf w}^{(i)}(u) \Big]^2 e^{\theta_{\bf w} g_{\bf w}^{(i)}(u)} du \label{8} \end{eqnarray} both converge almost surely to positive constants as $T \rightarrow \infty$. 
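Since $\theta_{\bf w}$ is defined only implicitly, we note how it can be computed: the supremum in (\ref{5}) is attained where the derivative in $\theta$ vanishes, that is, where $T^{-1} \sum_{i=1}^d \lambda_i \int_0^T g_{\bf w}^{(i)}(u) e^{\theta g_{\bf w}^{(i)}(u)} \, du = c$. A numerical sketch (ours; the grid integration and the assumption that the root lies in $(0, \theta_{\rm hi})$ are simplifications) is as follows.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def theta_and_phi(c, kernels, lams, T, n_grid=4000, theta_hi=50.0):
    # Solve T^{-1} sum_i lam_i int g_i e^{theta g_i} du = c for theta_w,
    # then evaluate phi_w(c) of (5); theta_hi is assumed to bracket the root.
    u = np.linspace(0.0, T, n_grid, endpoint=False)
    gs = [g(u) for g in kernels]          # g_w^{(i)} evaluated on a grid
    tilted_mean = lambda th: sum(
        lam * np.trapz(g * np.exp(th * g), u)
        for lam, g in zip(lams, gs)) / T
    theta_w = brentq(lambda th: tilted_mean(th) - c, 1e-8, theta_hi)
    phi_w = theta_w * c - sum(
        lam * np.trapz(np.exp(theta_w * g) - 1.0, u)
        for lam, g in zip(lams, gs)) / T
    return theta_w, phi_w
\end{verbatim}
The same grid evaluations give $v_{\bf w}$ and $\tau_{\bf w}$ of (\ref{8}) directly.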
Let $P_{\bf w}$ denote the probability measure conditioned on a known ${\bf w}$. \begin{pn} \label{l1} Assume (A1)-(A2). Then for any $t \geq 0$, $\Delta > 0$ and $c > \mu$, \begin{equation} P_{\bf w} \Big\{ \sup_{t < u \leq t+\Delta} S_u \geq c \Big\} \sim \Delta \zeta_{\bf w} e^{-T \phi_{\bf w}(c)} \hspace{0.5cm} \mbox{a.s.\ as $T \rightarrow \infty$}, \label{9} \end{equation} where $\zeta_{\bf w} = (2 \pi)^{-1} (\tau_{\bf w}/v_{\bf w})^{1/2}$. \end{pn} By piecing together the local boundary crossing probabilities in (\ref{9}), we are able to obtain the following results. \begin{tm} \label{t1} Assume (A1)-(A2). {\rm (a)} Let $c > \mu$. Then the distribution (conditional on ${\bf w}$) of $\zeta_{\bf w} e^{-T \phi_{\bf w}(c)} V_c$ converges to the exponential distribution with mean 1 almost surely as $T \rightarrow \infty$. {\rm (b)} Let $a \rightarrow \infty$ as $T \rightarrow \infty$ such that $(\log a)/T$ converges to a positive constant. Let $c_{\bf w} > \mu_{\bf w}$ satisfy $\phi_{\bf w}(c_{\bf w}) = (\log a)/T$. Then for any $z \in {\mathbb{R}}$, $$ P_{\bf w} \{ \theta_{\bf w} T (M_a -c_{\bf w}) - \log \zeta_{\bf w} \geq z \} \rightarrow 1-\exp(-e^{-z}) \hspace{0.5cm} \mbox{a.s.\ as $T \rightarrow \infty$}. $$ {\rm (c)} Let $a \rightarrow \infty$ as $T \rightarrow \infty$ such that $(\log a)/T$ converges to a positive constant. Let $c$ ($=c_T$) be such that $\eta_{\bf w} := a \zeta_{\bf w} e^{-T \phi_{\bf w}(c)}$ converges to a constant $\eta > 0$ almost surely. Then \begin{equation} P_{\bf w} \{ U_a = k \} - e^{-\eta_{\bf w}} \frac{\eta_{\bf w}^k}{k!} \rightarrow 0 \hspace{0.5cm} \mbox{a.s. $\forall k=0,1,\cdots.$} \label{11} \end{equation} \end{tm} {\sc Remark.} Theorem \ref{t1} can be extended to deal with the situation in which $m > 1$ trials are conducted, giving rise to $m$ spike train vectors. If we hypothesize that the times of recurrence of the template are the same for the $m$ trials, then the pattern filtering algorithm is most effectively applied by comparing ${\bf w}$ against a union of the $m$ spike train vectors. If we hypothesize that the times of recurrence of the template are different for each trial, then we can compare ${\bf w}$ against each spike train vector separately and sum up the number of new matches for the $m$ trials. The Poisson distribution can again be used to compute the $p$-values. Note that we do not require the vectors ${\bf y}$ to have the same intensity for the $m$ trials. This has implications when one has a spike train ${\bf y}$ that is nonstationary but can be broken up into $m$ segments such that each segment has almost constant intensity, since we can then apply Theorem \ref{t1}(c) separately to each of the $m$ segments. {\sc Remark.} In Theorem 1 of Chi (2004), it was shown [without the regularity condition (A2)] that $$ \lim_{T \rightarrow \infty} T^{-1} \log V_c = \phi(c) \quad {\rm a.s. \ for \ all} \ c > \mu. $$ The question of whether $\log V_c = T \phi_{\bf w}(c) + o(T^{1/2})$ was also raised in a remark on page 157. Theorem \ref{t1}(a) provides a more precise answer: $\log V_c = T \phi_{\bf w}(c)+O_P(1)$. \subsection{Implementation} We conduct a small-scale simulation study in this subsection to test the finite sample accuracy of the analytic approximations in Theorem \ref{t1}. An alternative to analytic approximations is to compute the $p$-values $p_{\bf w} := P_{\bf w} \{ M_a \geq c \}$ via direct Monte Carlo. 
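A direct Monte Carlo estimate can be sketched as follows (our code; the supremum over $t$ is discretized to a grid of mesh {\tt dt}, and {\tt score} is the routine sketched after (\ref{1})--(\ref{2})).
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def hom_poisson(lam, length):
    # Spike times of a homogeneous Poisson process of rate lam on [0, length).
    return np.sort(rng.uniform(0.0, length, rng.poisson(lam * length)))

def direct_mc_pvalue(template, f, T, lams, a, c, n_runs=2000, dt=0.2):
    # Estimate p_w = P_w{ M_a >= c }; sup_{0 <= t <= a} S_t is approximated
    # by the maximum of S_t over the grid of mesh dt.
    ts = np.arange(0.0, a + dt, dt)
    hits = 0
    for _ in range(n_runs):
        y = [hom_poisson(lam, a + T) for lam in lams]
        if max(score(t, y, template, f, T) for t in ts) >= c:
            hits += 1
    return hits / n_runs
\end{verbatim}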
However, as $p$-values of interest are often small, a large number of simulation runs is required for these estimates to be accurate. The computational cost is compounded when the time period $[0,a+T)$ of ${\bf y}^{(i)}$ is large. We introduce here an importance sampling alternative for the simulation of $p$-values. We use a change of measure argument, also used in the proof of Proposition \ref{l1}, generating ${\bf y}^{(i)}$ from an inhomogeneous Poisson process. Analogous changes of measure for computing $p$-values have been used in sequential analysis [cf.\ Siegmund (1976)], change-point detection [cf.\ Lai and Shan (1999)] and DNA sequence alignments [cf.\ Chan (2003)]. Let $P_{\theta,t}$ denote the probability measure under which ${\bf y}^{(i)}$ is generated as a Poisson point process with intensity $\eta_i(v) = \lambda_i e^{\theta g_{\bf w}^{(i)}(v-t)}$ for each $1 \leq i \leq d$. Note that $g_{\bf w}^{(i)}(v-t)=0$ for $v \not\in [t,t+T)$ and hence the change of measure occurs only for the generation of spikes in the interval $[t,t+T)$. The likelihood of ${\bf y}_t^{(i)}$ under $P_{\theta,t}$ is given by $$ L_{\theta,t}({\bf y}_t^{(i)}) = \exp \Big( - \lambda_i \int_0^T e^{\theta g_{\bf w}^{(i)}(u)} du \Big) \prod_{y \in {\bf y}_t^{(i)}} \lambda_i e^{\theta g_{\bf w}^{(i)}(y-t)}. $$ Hence the likelihood ratio \begin{eqnarray} \frac{dP_{\theta,t}}{dP_{\bf w}}({\bf y}) & = & \prod_{i=1}^d \frac{L_{\theta,t}({\bf y}_t^{(i)})}{L_{0,t}({\bf y}_t^{(i)})} = \prod_{i=1}^d \exp \Big[ \theta T S_t^{(i)} - \lambda_i \int_0^T (e^{\theta g_{\bf w}^{(i)}(u)}-1) \; du \Big] \cr & = & \exp \Big[ \theta T S_t -\sum_{i=1}^d \lambda_i \int_0^T (e^{\theta g_{\bf w}^{(i)}(u)}-1) \ du \Big]. \label{14} \end{eqnarray} In our importance sampling algorithm, we first select a small $\Delta > 0$ such that $J:=a/\Delta$ is a positive integer. For each simulation run, we generate $j$ uniformly at random from $\{ 0,\ldots, J \}$ followed by ${\bf y}$ from $P_{\theta_{\bf w},j \Delta}$. The estimate \begin{eqnarray} \widehat p & = & (J+1) \Big[ \sum_{j=0}^J \frac{dP_{\theta_{\bf w},j \Delta}}{dP_{\bf w}}({\bf y}) \Big]^{-1} {\bf 1}_{\{ M_a \geq c \}} \cr & = & (J+1) \exp \Big[ \sum_{i=1}^d \lambda_i \int_0^T (e^{\theta g_{\bf w}^{(i)}(u)}-1) \; du \Big] \Big( \sum_{j=0}^J e^{\theta T S_{j \Delta}} \Big)^{-1} {\bf 1}_{\{ M_a \geq c \}} \label{15} \end{eqnarray} is then unbiased for $p_{\bf w}$. The average of (\ref{15}) over all the simulation runs is then the importance sampling estimate of $p_{\bf w}$. {\sc Example 1.} Consider the Hamming window function \begin{equation} f(t) = \left\{ \begin{array}{cl} {1 \over 2} (1-\beta)+{1 \over 2} (1+\beta) \cos \left( \frac{\pi t}{\varepsilon} \right) & {\rm if} \ 0 \leq t < \varepsilon, \cr - \beta & {\rm if} \ t \geq \varepsilon, \cr \end{array} \right. \label{16} \end{equation} with $\varepsilon = 5$ ms and $\beta=0.4$ [see, for example, Chi, Rauske and Margoliash (2003)]. We generate a template ${\bf w}$ over the time interval from 0 to $T=500$ ms on $d=4$ spike trains, with interarrival distance $X$ ms between two spikes on each spike train satisfying \begin{equation} P \{ X \leq x \} = 1- e^{-(x-1)^+/24}. \label{17} \end{equation} This corresponds to an absolute refractory period or ``dead time'' of $1$ ms after each spike in ${\bf w}^{(i)}$ before the next spike can be generated. In our computer experiment, a total of 80 spikes were generated on the four spike trains using (\ref{17}). 
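For concreteness, one run of the estimate (\ref{15}) may be sketched as follows. This is our code, not the authors' implementation: it reuses the routines sketched earlier, generates the inhomogeneous spikes on $[j\Delta, j\Delta+T)$ by the thinning method described next, and evaluates $M_a$ and the $S_{j\Delta}$ on the grid $\{0, \Delta,\cdots, J\Delta\}$.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def thinned_spikes(lam, theta, g, f0, T, offset):
    # Spikes on [offset, offset+T) with intensity lam*exp(theta*g(v-offset)):
    # thin a homogeneous process of rate lam*exp(theta*f0), where f0 = f(0)
    # dominates g (this is the thinning method described in the text).
    cand = hom_poisson(lam * np.exp(theta * f0), T)
    keep = rng.uniform(size=cand.size) <= np.exp(theta * (g(cand) - f0))
    return cand[keep] + offset

def importance_sampling_run(template, f, T, lams, a, c, theta, Delta):
    # One unbiased realization of (15): draw j uniformly from {0,...,J},
    # generate y from P_{theta, j*Delta}, and weight by the ratio (14).
    J = int(round(a / Delta))
    j = rng.integers(0, J + 1)
    u = np.linspace(0.0, T, 4000, endpoint=False)
    y, const = [], 0.0
    for lam, w_i in zip(lams, template):
        g = lambda v, w_i=w_i: kernel_g(v, w_i, f, T)
        tilted = thinned_spikes(lam, theta, g, f(0.0), T, j * Delta)
        rest = hom_poisson(lam, a + T)
        rest = rest[(rest < j * Delta) | (rest >= j * Delta + T)]
        y.append(np.sort(np.concatenate([rest, tilted])))
        const += lam * np.trapz(np.exp(theta * g(u)) - 1.0, u)
    S = np.array([score(t, y, template, f, T)
                  for t in np.arange(0.0, a + Delta / 2.0, Delta)])
    return (J + 1) * np.exp(const) * float(S.max() >= c) \
           / np.exp(theta * T * S).sum()
\end{verbatim}
Averaging the returned values over independent runs gives the importance sampling estimate of $p_{\bf w}$.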
To compute the $p$-values using direct Monte Carlo, we generated 2000 realizations of ${\bf y}$ by using Poisson point processes with constant intensity $\lambda_i = 0.04$ ms$^{-1}$ on the interval from 0 to $a+T=20$ s. The proportion of times that $\{ M_a \geq c \}$ occurs is taken as the estimate of $p_{\bf w}$. For importance sampling, 2000 simulation runs were also executed using the algorithm described earlier by choosing $\Delta=0.2$ ms. The following thinning method is used to generate the spike times in the interval $[j \Delta, j \Delta+T)$, where the intensity of ${\bf y}^{(i)}$ is not constant under $P_{\theta_{\bf w},j \Delta}$: \smallskip 1. Let $\widetilde {\bf y}_t^{(i)} = \{ u_1,\ldots,u_N \}$ be generated on $[0,T)$ as a Poisson process with constant intensity $\lambda_i e^{\theta_{\bf w} f(0)}$. \smallskip 2. Generate independent uniform random variables $R_1,\cdots,R_N$ on $[0,1]$ and let $$ {\bf y}_t^{(i)} (=\{ y-t: y \in {\bf y}^{(i)} \cap [t,t+T) \})=\{ u_k \in \widetilde {\bf y}_t^{(i)}: R_k \leq e^{\theta_{\bf w}[g_{\bf w}^{(i)}(u_k)-f(0)]} \}. $$ For the analytic approximation, we apply Theorem \ref{t1}(a), which gives us \begin{equation} P_{\bf w} \{M_a \geq c \} = P_{\bf w} \{ V_c \leq a \} \doteq 1-\exp(-a \zeta_{\bf w} e^{-T \phi_{\bf w}(c)}). \label{19} \end{equation} \begin{table} \begin{center} {\sc Table 1.} Estimates of $P_{\bf w} \{ M_a \geq c \} \pm$ standard error. \begin{tabular}{c|c|c|c} \hline $c$ & Direct MC & Imp. Sampling & Anal. Approx. (\ref{19}) \cr \hline 0.017 & 0.037$\pm$0.004 & 0.0387$\pm$0.0019 & 0.0383 \cr 0.018 & 0.024$\pm$0.003 & 0.0237$\pm$0.0012 & 0.0241 \cr 0.019 & 0.016$\pm$0.003 & 0.0158$\pm$0.0008 & 0.0149 \cr 0.020 & 0.009$\pm$0.002 & 0.0095$\pm$0.0005 & 0.0091 \cr 0.021 & 0.005$\pm$0.002 & 0.0054$\pm$0.0003 & 0.0055 \cr 0.022 & 0.003$\pm$0.001 & 0.0033$\pm$0.0002 & 0.0033 \cr \hline \end{tabular} \end{center} \end{table} We see from the results summarized in Table 1 that there is substantial variance reduction when importance sampling is used. The analytic approximations are also quite accurate, lying within two standard errors of the importance sampling estimate in all the cases considered. \subsection{Proofs} We preface the proofs of Proposition \ref{l1} and Theorem \ref{t1} with the following preliminary lemmas. We shall let $\lfloor \cdot \rfloor$ denote the greatest integer function. Let $P_{\theta_{\bf w},t}$ be the change of measure defined at the beginning of Section 5.2 and let $P_{\theta_{\bf w}} = P_{\theta_{\bf w},0}$. \begin{la} \label{l2} Let $t \geq 0$ and $c >\mu_{\bf w}$. Then $$ P_{\bf w} \{ S_t \geq c \} \sim (2 \pi v_{\bf w})^{-1/2} \theta_{\bf w}^{-1} T^{-1/2} e^{-T \phi_{\bf w}(c)} \hspace{0.5cm} \mbox{a.s.\ as $T \rightarrow \infty$}. $$ \end{la} {\sc Proof.} Let $E_{\theta,t}$ denote expectation with respect to the probability measure $P_{\theta,t}$. Let $I_T=[z_T,z_T+\varepsilon_T)$ with $\varepsilon_T=o(T^{-1/2})$. Then by (\ref{5}) and (\ref{14}), \begin{eqnarray} P_{\bf w} \{ T^{1/2}(S_t-c) \in I_T \} & = & E_{\theta_{\bf w},t} \Big[ \frac{dP_{{\bf w}}}{dP_{\theta_{\bf w},t}} {\bf 1}_{\{ T^{1/2}(S_t-c) \in I_T \}} \Big] \cr & = & e^{-T \phi_{\bf w}(c)} E_{\theta_{\bf w},t} \Big[ e^{T \theta_{\bf w} (c-S_t)} {\bf 1}_{\{ T^{1/2}(S_t-c) \in I_T \}} \Big] \cr & \sim & e^{-T \phi_{\bf w}(c) -T^{1/2} \theta_{\bf w} z_T} P_{\theta_{\bf w},t} \{ T^{1/2}(S_t-c) \in I_T \}. 
\label{21} \end{eqnarray} By similar computations, for any $y \in \mathbb{R}$, \begin{eqnarray} P_{\bf w} \{ S_t \geq c+y \} = e^{-T \phi_{\bf w}(c)} E_{\theta_{\bf w},t} \Big[ e^{T \theta_{\bf w}(c-S_t)} {\bf 1}_{\{ S_t \geq c+y \}} \Big] \leq e^{-T \phi_{\bf w}(c)- T \theta_{\bf w} y}. \label{21a} \end{eqnarray} Under $P_{\theta_{\bf w}, t}$, $T S_t^{(i)}$ is compound Poisson with Poisson mean $\eta_i= \lambda_i \int_0^T e^{\theta_{\bf w} g_{\bf w}^{(i)}(u)} du$ and each summand is identically distributed as $g_{\bf w}^{(i)}(U_i)$, where $U_i$ is a random variable on $[0,T)$ with density $(\lambda_i/\eta_i) e^{\theta_{\bf w} g_{\bf w}^{(i)}(u)}$. We note that $$ E_{\theta_{\bf w}, t} \Big[ g_{\bf w}^{(i)}(U_i) \Big] = (\lambda_i/\eta_i) \int_0^T g_{\bf w}^{(i)}(u) e^{\theta_{\bf w} g_{\bf w}^{(i)}(u)} \ du = \frac{d}{d \theta} \int_0^T \lambda_i e^{\theta g_{\bf w}^{(i)}(u)} \ du \Big|_{\theta=\theta_{\bf w}} \Big/ \eta_i. $$ Since $\theta_{\bf w}$ maximizes the right hand side of (\ref{5}), it follows that \begin{equation} E_{\theta_{\bf w}, t} [S_t] = T^{-1} \sum_{i=1}^d \eta_i E_{\theta_{\bf w}} [ g_{\bf w}^{(i)}(U_i)] =T^{-1} \frac{d}{d \theta} \sum_{i=1}^d \lambda_i \int_0^T (e^{\theta g_{\bf w}^{(i)}(u)}-1) \ du \Big|_{\theta=\theta_{\bf w}} = c. \label{24} \end{equation} Since a compound Poisson $Y =\sum_{j=1}^N Y_j$ has variance ${\rm Var}(Y) = (EN)(EY_1^2)$, it follows from (\ref{8}) that \begin{equation} {\rm Var}_{\theta_{\bf w}, t} (S_t) = T^{-2} \sum_{i=1}^d \eta_i \int_0^T [g_{\bf w}^{(i)}(u)]^2 (\lambda_i/\eta_i) e^{\theta_{\bf w} g_{\bf w}^{(i)}(u)} du = T^{-1} v_{\bf w}. \label{24a} \end{equation} By (\ref{24}) and (\ref{24a}), $T^{1/2}(S_t-c)$ is asymptotically normal with mean 0 and variance $v_{\bf w}$. Hence by equation (5) of Stone (1965), \begin{equation} P_{\theta_{\bf w},t} \{ T^{1/2}(S_t-c) \in I_T \} = (2 \pi v_{\bf w})^{-1/2} \int_{I_T} e^{-z^2/(2v_{\bf w})} dz + o_T(1)(\varepsilon_T+T^{-1/2}) \hspace{0.5cm} {\rm a.s.} \label{24b} \end{equation} as $T \rightarrow \infty$, where $o_T(1)$ is a term not depending on $\varepsilon_T$ and $z_T$. Let $\varepsilon_T T^{1/2}$ tend to 0 slowly enough such that $o_T (1)/(\varepsilon_T T^{1/2}) \rightarrow 0$. Then by (\ref{24b}), \begin{equation} P_{\theta_{\bf w},t} \{ T^{1/2}(S_t-c) \in I_T \} \sim (2 \pi v_{\bf w})^{-1/2} \int_{I_T} e^{-z^2/(2v_{\bf w})} dz \hspace{0.5cm} \mbox{a.s.\ as $T \rightarrow \infty$}, \label{24c} \end{equation} if $I_T$ is uniformly bounded. Then by (\ref{21}), (\ref{21a}) and (\ref{24c}), \begin{eqnarray*} P_{\bf w} \{ S_t \geq c \} & = & \sum_{k=0}^{\lfloor \varepsilon_T^{-1} \rfloor} P_{{\bf w}} \{ k \varepsilon_T \leq T^{1/2}(S_t-c) < (k+1) \varepsilon_T \} + P_{{\bf w}} \{ S_t \geq c+T^{-1/2} \} \cr & \sim & (2 \pi v_{\bf w})^{-1/2} e^{-T \phi_{\bf w}(c)} \int_0^\infty e^{-T^{1/2} \theta_{\bf w} z-z^2/(2 v_{\bf w})} dz \hspace{0.5cm}\mbox{a.s.\ as $T\rightarrow\infty$,} \end{eqnarray*} and Lemma \ref{l2} holds. \hfill $\Box$ \begin{la} \label{l3} Assume (A1)-(A2). 
There exists $\varepsilon_T =o(T^{-1/2})$ such that for all uniformly bounded intervals $I_{1,T},I_{2,T}$ of length $\varepsilon_T$, \begin{eqnarray} & & P_{\theta_{\bf w},t} \Big\{ T^{1/2} \Big( S_t-c,\frac{d}{dx} S_x \Big|_{x=t} \Big) \in I_{1,T} \times I_{2,T} \Big\} \cr &\sim & (2 \pi)^{-1} (v_{\bf w} \tau_{\bf w})^{-1/2} \Big(\int_{z_1 \in I_{1,T}} e^{-z_1^2/( 2 v_{\bf w}) } \; dz_1 \Big) \Big( \int_{z_2 \in I_{2,T}} e^{-z_2^2/(2 \tau_{\bf w})} \ dz_2 \Big), \label{22} \end{eqnarray} almost surely as $T\rightarrow \infty$. \end{la} {\sc Proof.} By stationarity, we may assume without loss of generality that $t=0$. Under $P_{\theta_{\bf w}}$, the vector $(T S_0^{(i)}, T \frac{d}{dx} S_x^{(i)} |_{x=0})'$ is bivariate compound Poisson with Poisson mean $\eta_i = \lambda_i \int_0^T e^{\theta_{\bf w} g_{\bf w}^{(i)}(u)} \ du$ and with each summand identically distributed as \begin{displaymath} (g_{\bf w}^{(i)}(U_i),-\frac{d}{du} g_{\bf w}^{(i)}(u) |_{u=U_i})', \end{displaymath} where $U_i$ is a random variable on $[0,T)$ with density $(\lambda_i/\eta_i) e^{\theta_{\bf w} g_{\bf w}^{(i)}(u)}$. By (A2), $\frac{d}{dx} S_x^{(i)} |_{x=0}$ exists almost surely. We shall now compute the means and covariances of \begin{displaymath} (S_0,\frac{d}{dx} S_x |_{x=0})' = \sum_{i=1}^d (S_0^{(i)}, \frac{d}{dx} S_x^{(i)} |_{x=0})' \end{displaymath} under $P_{\theta_{\bf w}}$. Since \begin{eqnarray*} E_{\theta_{\bf w}} \Big[ - \frac{d}{du} g_{\bf w}^{(i)}(u) \Big|_{u=U_i} \Big] & = & -(\lambda_i/\eta_i) \int_0^T \Big[ \frac{d}{du} g_{\bf w}^{(i)}(u) \Big] e^{\theta_{\bf w} g_{\bf w}^{(i)}(u)} \ du \cr & = & -(\lambda_i/\eta_i) \theta_{\bf w}^{-1} (e^{\theta_{\bf w} g_{\bf w}^{(i)}(T)}- e^{\theta_{\bf w} g_{\bf w}^{(i)}(0)}), \end{eqnarray*} and $g_{\bf w}^{(i)}$ is bounded, it follows that \begin{equation} E_{\theta_{\bf w}} \Big[ \frac{d}{dx} S_x \Big|_{x=0} \Big] = T^{-1} \sum_{i=1}^d \eta_i E_{\theta_{\bf w}} \Big[ - \frac{d}{du} g_{\bf w}^{(i)}(u) \Big|_{u=U_i} \Big] =O(T^{-1}). \label{26} \end{equation} The bivariate compound Poisson $(Y,Z)' = \sum_{j=1}^N (Y_j,Z_j)'$, where $(Y_1,Z_1)', \cdots, (Y_N,Z_N)'$ are independent identically distributed summands conditioned on an independent Poisson random variable $N$, has covariance matrix $$ {\rm Cov} \pmatrix{Y \cr Z} = E(N) \pmatrix{ E(Y^2) & E(YZ) \cr E(YZ) & E(Z^2)}. $$ It follows from the relation \begin{eqnarray*} E_{\theta_{\bf w}} \Big[ g_{\bf w}^{(i)}(U_i) \frac{d}{du} g_{\bf w}^{(i)}(u) \Big|_{u=U_i} \Big] & = & (\lambda_i/\eta_i) \int_0^T \Big[ \frac{d}{du} g_{\bf w}^{(i)}(u) \Big] g_{\bf w}^{(i)}(u) e^{\theta_{\bf w} g_{\bf w}^{(i)}(u)} du \cr & = & (\lambda_i/\eta_i) [\theta_{\bf w}^{-1} g_{\bf w}^{(i)}(u)- \theta_{\bf w}^{-2}] e^{\theta_{\bf w} g_{\bf w}^{(i)}(u)} \Big|_{u=0}^{u=T} = O(T^{-1}) \hspace{0.5cm} {\rm a.s.,} \end{eqnarray*} that $$ {\rm Cov}_{\theta_{\bf w}}(S_0, \frac{d}{dx} S_x|_{x=0}) = -T^{-2} \sum_{i=1}^d \eta_i E_{\theta_{\bf w}} [g_{\bf w}^{(i)}(U_i) \frac{d}{du} g_{\bf w}^{(i)}(u)|_{u=U_i}] = O(T^{-2}) \hspace{0.5cm} {\rm a.s.} $$ and hence by (\ref{8}), \begin{equation} {\rm Cov}_{\theta_{\bf w}} \pmatrix{ S_0 \cr \frac{d}{dx} S_x \big|_{x=0} } \sim T^{-1} \pmatrix{ v_{\bf w} & 0 \cr 0 & \tau_{\bf w} } \hspace{0.5cm} \mbox{a.s.\ as $T \rightarrow \infty$}. 
\label{28} \end{equation} It then follows from equation (5) of Stone (1965), (\ref{24}), (\ref{26}) and (\ref{28}) that \begin{eqnarray*} & & P_{\theta_{\bf w},t} \{ T^{1/2} \Big( S_t-c,\frac{d}{dx} S_x|_{x=t} \Big) \in I_{1,T} \times I_{2,T} \} \cr & = & (2 \pi)^{-1} (v_{\bf w} \tau_{\bf w})^{-1/2} \Big(\int_{z_1 \in I_{1,T}} e^{-z_1^2/( 2 v_{\bf w}) } \; dz_1 \Big) \Big( \int_{z_2 \in I_{2,T}} e^{-z_2^2/(2 \tau_{\bf w})} \ dz_2 \Big) \cr & & \quad + o_T(1)(\varepsilon_T^2+T^{-1}), \end{eqnarray*} where $o_T(1)$ does not depend on $I_{j,T}$, $j=1,2$. Then Lemma \ref{l3} follows by selecting $\varepsilon_T$ such that $\varepsilon_T T^{1/2} \rightarrow 0$ and $o_T(1)/\varepsilon_T^2 T \rightarrow 0$. \hfill $\Box$ \begin{la} \label{l4} Let $\kappa$, $T$, $K$ and $c$ be positive constants. Let $$ s(u) = c+ z_1 T^{-1/2} + u z_2 T^{-1/2} -\frac{u^2}{2} K. $$ Then $\sup_{0 < u < \kappa T^{-1/2}} s(u) \geq \max \{ c,s(0),s(\kappa T^{-1/2}) \}$ if and only if $\kappa \geq z_2/K \geq 0$ and $z_1 \geq -z_2^2/(2K T^{1/2})$. \end{la} {\sc Proof.} Since the quadratic $s$ has a unique maximum at $u=z_2/(KT^{1/2})$, it follows that if $\kappa \geq z_2/K \geq 0$, then $$ \sup_{0 < u < \kappa T^{-1/2}} s(u) = s \Big( \frac{z_2}{ KT^{1/2} } \Big) = c+ \frac{z_1}{ T^{1/2}} + \frac{ z_2^2 }{ 2KT} $$ and Lemma \ref{l4} easily holds. \hfill $\Box$ \begin{la} \label{l5} Assume (A1)-(A2). Then for any $\kappa > 0$, $t \geq 0$ and $c >\mu$, \begin{equation} P_{\bf w} \Big\{ \sup_{t < u \leq t+\kappa T^{-1/2}} S_u \geq \max (c, S_t, S_{t+\kappa T^{-1/2}}) \Big\} \sim \kappa T^{-1/2} \zeta_{\bf w} e^{-T \phi_{\bf w}(c)} \hspace{0.5cm} {\rm a.s.}, \label{31} \end{equation} where $\zeta_{\bf w} = (2 \pi)^{-1} (\tau_{\bf w}/v_{\bf w})^{1/2}$. \end{la} {\sc Proof.} Assume without loss of generality $t=0$. Let $H_i$ $(=H_{i,{\bf w}})$ be the set of all $v$ at which the second derivative of $g_{\bf w}^{(i)}$ does not exist. Note that by (A1)-(A2), the number of elements in $H_i$ is $O(T)$ a.s. for all $i$. Let $0 < u < \kappa T^{-1/2}$ and let $y \in {\bf y}^{(i)}$ be such that $y-h \not\in (0,u)$ for all $h \in H_i$. Then by (A2) and the mean value theorem, \begin{equation} g_{\bf w}^{(i)}(y-u)-g_{\bf w}^{(i)}(y)+u \frac{d}{dv} g_{\bf w}^{(i)}(v) \Big|_{v=y} = \frac{u^2}{2} \frac{d^2}{dv^2} g_{\bf w}^{(i)}(v) \Big|_{v=\xi} \label{32} \end{equation} for some $y-u \leq \xi \leq y$. If $y \in {\bf y}^{(i)}$ is such that $y-h \in (0,u)$ for some $h \in H_i \setminus \{ 0,T \}$, then \begin{eqnarray} \label{33} && g_{\bf w}^{(i)}(y-u)-g_{\bf w}^{(i)}(y)+u \frac{d}{dv} g_{\bf w}^{(i)}(v) \Big|_{v=y} \nonumber \\ & = & \int_{y-u}^y \Big( \frac{d}{dv} g_{\bf w}^{(i)}(v) \Big|_{v=y} - \frac{d}{d \xi} g_{\bf w}^{(i)}(\xi) \Big) \ d \xi \cr & = & (h+u-y) \Big( \frac{d}{dv} g_{\bf w}^{(i)}(v) \Big|_{v \downarrow h} - \frac{d}{dv} g_{\bf w}^{(i)}(v) \Big|_{v \uparrow h} \Big)+o(u^2). \end{eqnarray} Finally, for completeness, we consider $y \in {\bf y}^{(i)}$ such that either $y-T \in (0,u)$ or $y \in (0,u)$. Then we write formally \begin{equation} g_{\bf w}^{(i)}(y-u)-g_{\bf w}^{(i)}(y)+u \frac{d}{dv} g_{\bf w}^{(i)}(v) \Big|_{v=y} = g_{\bf w}^{(i)}(y-u)-g_{\bf w}^{(i)}(y)+u \frac{d}{dv} g_{\bf w}^{(i)}(v) \Big|_{v=y}. 
\label{34} \end{equation} By adding up (\ref{32})-(\ref{34}) over all $y \in {\bf y}^{(i)}$ for $i=1,\cdots,d$ and dividing by $T$, we obtain \begin{equation} S_u - S_0 - u \frac{d}{dv} S_v \Big|_{v=0} = - \frac{C_{{\bf w},u} u^2}{2}, \label{35} \end{equation} where $C_{{\bf w},u}$ is an expression derived from the right hand sides of (\ref{32})-(\ref{34}). It will be shown in Appendix B that \begin{equation} \lim_{T \rightarrow \infty} \sup_{0 < u < \kappa T^{-1/2}} u^2 T \Big| C_{{\bf w},u}-\theta_{\bf w} \tau_{\bf w} \Big| = 0 \quad {\rm a.s. \ under} \ P_{\theta_{\bf w}}. \label{36} \end{equation} Then by Lemma \ref{l3}, the change of variables $$ z_1 = T^{1/2}(S_0-c) \quad {\rm and} \enskip z_2 = T^{1/2} \frac{d}{dx} S_x \Big|_{x=0}, $$ and the substitution $K = \theta_{\bf w} \tau_{\bf w}$ in Lemma \ref{l4}, together with (\ref{35}), (\ref{5}) and (\ref{14}), \begin{eqnarray*} & & P_{\bf w} \Big\{ \sup_{0 < u < \kappa T^{-1/2}} S_u \geq \max(c,S_0,S_{\kappa T^{-1/2}}) \Big\} \nonumber \\ &=& \quad E_{\theta_{\bf w}} \Big[ \frac{dP_{\bf w}}{dP_{\theta_{\bf w}}}({\bf y}) {\bf 1}_{\{ \sup_{0 < u < \kappa T^{-1/2}} S_u \geq \max(c,S_0,S_{\kappa T^{-1/2}}) \}} \Big] \cr &= & e^{-T \phi_{\bf w}(c)} E_{\theta_{\bf w}} \Big[ e^{T \theta_{\bf w} (c-S_0)} {\bf 1}_{\{ \sup_{0 < u < \kappa T^{-1/2}} S_u \geq \max(c,S_0,S_{\kappa T^{-1/2}}) \}} \Big] \cr &\sim & e^{-T \phi_{\bf w}(c)} (2 \pi)^{-1} (v_{\bf w} \tau_{\bf w} )^{-1/2} \nonumber \\ &&\hspace{0.5cm}\times \int_0^{\kappa \theta_{\bf w} \tau_{\bf w}} \int^{\infty}_{-z_2^2/(2 \theta_{\bf w} \tau_{\bf w} T^{1/2})} e^{-T^{1/2} \theta_{\bf w} z_1-z_1^2/(2 v_{\bf w})-z_2^2/(2 \tau_{\bf w})} dz_1 \ dz_2 \cr &\sim & e^{-T \phi_{\bf w}(c)} (2 \pi)^{-1} (v_{\bf w} \tau_{\bf w})^{-1/2} \nonumber \\ &&\hspace{0.5cm}\times \int_0^{\kappa \theta_{\bf w} \tau_{\bf w}} e^{-z_2^2/(2 \tau_{\bf w})} (-T^{-1/2} \theta_{\bf w}^{-1} e^{-T^{1/2} \theta_{\bf w} z_1}) \Big|_{z_1=-z_2^2/(2 \theta_{\bf w} \tau_{\bf w} T^{1/2})}^{z_1=\infty} dz_2, \end{eqnarray*} and indeed Lemma \ref{l5} holds. \hfill $\Box$ The proof of the next lemma is also given in Appendix B. \begin{la} \label{l6} Assume (A1)-(A2). Let $$ A_t = \Big\{ \sup_{t < u < t+\kappa T^{-1/2}} S_u \geq \max(c,S_t,S_{t+\kappa T^{-1/2}}) \Big\}. $$ Then there exists $r_\kappa =o(\kappa)$ as $\kappa \rightarrow \infty$ such that for all $t \geq 0$, with probability 1, \begin{equation} \sum_{1 \leq \ell \leq T^{3/2}/\kappa+1} P_{\bf w}(A_t \cap A_{t+\ell \kappa T^{-1/2}}) \leq r_\kappa T^{-1/2} e^{-T \phi_{\bf w}(c)} \label{46} \end{equation} for all large $T$. \end{la} {\sc Proof of Proposition {\rm \ref{l1}}.} By stationarity, we may assume without loss of generality $t=0$. 
By Lemmas \ref{l2}, \ref{l5}, \ref{l6} and the inequalities \begin{eqnarray*} & & \sum_{q=0}^{\lfloor \Delta/(\kappa T^{-1/2}) \rfloor-1} \Big[ P_{\bf w}(A_{q \kappa T^{-1/2}}) - \sum_{\ell=1}^{\lfloor \Delta/(\kappa T^{-1/2}) \rfloor-1-q} P_{\bf w} (A_{q \kappa T^{-1/2}} \cap A_{(q+\ell) \kappa T^{-1/2}}) \Big] \\ &\leq & P_{\bf w} \Big\{ \sup_{0 < u \leq \Delta} S_u \geq c \Big\} \\ &\leq & \sum_{q=0}^{\lfloor \Delta/(\kappa T^{-1/2}) \rfloor} P_{\bf w}(A_{q \kappa T^{-1/2}})+\sum_{q=0}^{\lfloor \Delta/(\kappa T^{-1/2}) \rfloor+1} P_{\bf w} \{ S_{q\kappa T^{-1/2}} \geq c \}, \end{eqnarray*} it follows that for any $0 < \varepsilon < 1$, there exist arbitrarily large $\kappa$ such that \begin{eqnarray} & & (\lfloor \Delta/(\kappa T^{-1/2}) \rfloor-1) \Big[ (1-\varepsilon) \kappa T^{-1/2} \zeta_{\bf w} e^{-T \phi_{\bf w}(c)} - r_\kappa T^{-1/2} e^{-T \phi_{\bf w}(c)} \Big] \cr & \leq & P_{\bf w} \Big\{ \sup_{0 < u \leq \Delta} S_u \geq c \Big\} \cr & \leq & (\lfloor \Delta/(\kappa T^{-1/2}) \rfloor+1)(1+\varepsilon) \Big[ \kappa T^{-1/2} \zeta_{\bf w} e^{-T \phi_{\bf w}(c)} + (2 \pi v_{\bf w})^{-1/2} \theta_{\bf w}^{-1} T^{-1/2} e^{-T \phi_{\bf w}(c)} \Big] \label{46b} \end{eqnarray} holds for all large $T$ with probability 1. Select $\kappa$ large enough such that (\ref{46b}) and the inequalities $$ r_\kappa \leq \varepsilon \kappa \zeta_{\bf w}, \hspace{0.5cm} (2 \pi v_{\bf w})^{-1/2} \theta_{\bf w}^{-1} \leq \varepsilon \kappa \zeta_{\bf w}, $$ hold for all large $T$ with probability 1. Then by (\ref{46b}), the inequality \begin{equation} \Big| \frac{P_{\bf w} \{ \sup_{0 < u \leq \Delta} S_u \geq c \}}{\Delta \zeta_{\bf w} e^{-T \phi_{\bf w}(c)}}-1 \Big| \leq 3 \varepsilon \label{46c} \end{equation} holds for all large $T$ with probability 1 and (\ref{9}) follows from (\ref{46c}) by selecting $\varepsilon > 0$ arbitrarily small. \hfill $\Box$ {\sc Proof of Theorem {\rm \ref{t1}}.} Let $z \in {\mathbb{R}}$ and let $\xi$ ($=\xi_{\bf w}$) be such that $\xi/T \rightarrow \infty$ and $k$ $(=k_{\bf w}):= z e^{T \phi_{\bf w}(c)}/(\zeta_{\bf w} \xi)$ is a positive integer tending to infinity almost surely. Define $B_j = \{ \sup_{(j-1) \xi \leq t < j \xi-T} S_t \geq c \}$ and $C_j = \{ \sup_{j \xi-T \leq t \leq j \xi} S_t \geq c \}$. Then \begin{equation} P_{\bf w} \Big( \bigcup_{j=1}^k B_j \Big) \leq P_{\bf w} \{ \zeta_{\bf w} e^{-T \phi_{\bf w}(c)} V_c \leq z \} \leq P_{\bf w} \Big( \bigcup_{j=1}^k B_j \Big) + \sum_{j=1}^k P_{\bf w}(C_j). \label{57} \end{equation} Conditional on ${\bf w}$, the event $B_j$ depends only on the spike times of ${\bf y}^{(i)}$ lying inside $[(j-1)\xi,j \xi)$. Since these intervals are disjoint for different $j$, it follows that $B_1,\cdots,B_k$ are independent conditional on ${\bf w}$. By Lemmas \ref{l5} and \ref{l6}, it follows that with probability 1, \begin{equation} P_{\bf w}(B_j) \sim (\xi-T) \zeta_{\bf w} e^{-T \phi_{\bf w}(c)} \sim z/k, \quad P_{\bf w}(C_j) \sim T \zeta_{\bf w} e^{-T \phi_{\bf w}(c)} \quad {\rm for \ all} \ 1 \leq j \leq k. \label{58} \end{equation} Since $k \rightarrow \infty$ a.s. as $T \rightarrow \infty$, with probability 1, \begin{equation} P_{\bf w} \Big( \bigcup_{j=1}^k B_j \Big) = 1 - \prod_{j=1}^k P_{\bf w}(B_j^c) = 1-(1-z/k)^k+o(1) \rightarrow 1-e^{-z}. \label{59} \end{equation} Moreover, because $\xi/T \rightarrow \infty$, it follows from (\ref{58}) that with probability 1, \begin{equation} \sum_{j=1}^k P_{\bf w}(C_j) \sim T k \zeta_{\bf w} e^{-T \phi_{\bf w}(c)} = o(1).
\label{60} \end{equation} Theorem \ref{t1}(a) follows from (\ref{57}), (\ref{59}) and (\ref{60}). To show Theorem \ref{t1}(b), we use the Taylor expansion \begin{eqnarray} \phi_{\bf w} \Big( c_{\bf w} + \frac{z+\log \zeta_{\bf w}}{\theta_{\bf w} T} \Big) & = & \phi_{\bf w}(c_{\bf w}) + \theta_{\bf w} \Big( \frac{z+\log \zeta_{\bf w}}{\theta_{\bf w} T} \Big) + O(T^{-2}) \cr & = & T^{-1} [z+\log(a \zeta_{\bf w})]+O(T^{-2}). \label{61} \end{eqnarray} By the computations in (\ref{57})-(\ref{60}), it follows that with probability 1, \begin{eqnarray} P_{\bf w} \Big\{ M_a \geq c_{\bf w} + \frac{z+\log \zeta_{\bf w}}{\theta_{\bf w} T} \Big\} & = & P_{\bf w} \{ V_{c_{\bf w} +[z+\log \zeta_{\bf w}]/(\theta_{\bf w} T)} \leq a \} \cr & = & 1 - \exp \Big[-a \zeta_{\bf w} e^{-T \phi_{\bf w}(c_{\bf w}+ [z+\log \zeta_{\bf w}]/(\theta_{\bf w} T) )}+o(1) \Big], \label{62} \end{eqnarray} and Theorem \ref{t1}(b) follows by substituting (\ref{61}) into (\ref{62}). It remains to show (c). Let $\widetilde U_a = \sum_{j=1}^{\lfloor a/\xi \rfloor} {\bf 1}_{B_j}$. Then by (\ref{58}), $E_{\bf w}[\widetilde U_a]-\eta_{\bf w} \rightarrow 0$ a.s. and (\ref{11}) holds with $U_a$ replaced by $\widetilde U_a$, since $B_j$, $1 \leq j \leq \lfloor a/\xi \rfloor$, are independent events and a sum of independent indicator variables with uniformly small success probabilities is asymptotically Poisson. By (\ref{58}) and (\ref{60}), \begin{equation} \sum_{j=1}^{\lfloor a/\xi \rfloor} P_{\bf w}(C_j) = o(1) \quad {\rm and} \ \sum_{j=1}^{\lfloor a/\xi \rfloor-1} P_{\bf w}(B_j \cap B_{j+1}) = o(1) \hspace{0.5cm} \mbox{a.s.\ as $T \rightarrow \infty$}. \label{63} \end{equation} Moreover, by Lemma \ref{l6}, with probability 1, \begin{equation} \sum_{q=0}^{\lfloor a/(\kappa T^{-1/2}) \rfloor} \Big[ \sum_{\ell=\lfloor (1-\alpha) T^{3/2}/\kappa \rfloor}^{\lfloor T^{3/2}/\kappa+1 \rfloor} P_{\bf w} (A_{q \kappa T^{-1/2}} \cap A_{(q+\ell)\kappa T^{-1/2}}) \Big] \leq \frac{ \eta_{\bf w} r_\kappa }{ \kappa \zeta_{\bf w} } \label{64} \end{equation} for all large $T$, where $\alpha T$ is the maximal permitted overlap between two matches. By (\ref{63}) and (\ref{64}) with $\kappa$ arbitrarily large, we can conclude that $\widetilde U_a -U_a \rightarrow 0$ in probability and hence (\ref{11}) holds. \hfill $\Box$ \section{Template matching with kernels containing discontinuities} In this section, we obtain analogues of Proposition \ref{l1} and Theorem \ref{t1} when the score function $f$ contains discontinuities. A typical example is the box kernel \begin{equation} f(x) = \cases{1 & if $x < \varepsilon$, \cr -\beta & if $x \geq \varepsilon$, } \label{65} \end{equation} where $\beta,\varepsilon$ are positive real numbers. Instead of (A2), we have the following regularity condition on $f$. \smallskip \noindent (A2)$'$ Let $f$ be a discontinuous function and suppose there is a finite set $H$ such that the first derivative of $f$ exists and is uniformly continuous on any interval within ${\mathbb{R}}^+ \setminus H$. Moreover, (\ref{4}) holds. \smallskip Under (A2)$'$, the values of $f$ may be concentrated on $0, \pm q, \pm 2q, \cdots$ for some $q > 0$. {\sc Definition.} Let $L(f) = \{ f(x): x \geq 0 \}$ be the range of $f$. We say that $f$ is arithmetic if \begin{equation} L(f) \subseteq q {\mathbb{Z}} \hspace{0.5cm} \mbox{for some $q > 0$}. \label{68} \end{equation} Moreover, if $q$ is the largest number satisfying (\ref{68}), then we say that $f$ is arithmetic with span $q$. If (\ref{68}) is not satisfied for any $q > 0$, we say that $f$ is nonarithmetic.
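When $f$ takes only finitely many rational values, the span in the above definition can be determined mechanically. The following Python sketch is a purely illustrative aside (the function name \texttt{span} is ours): it computes the largest $q$ satisfying (\ref{68}) by reducing the values of $L(f)$ to exact fractions and taking greatest common divisors, and it assumes exact rational inputs.
\begin{verbatim}
from fractions import Fraction
from math import gcd

def span(values):
    # Largest q > 0 such that every nonzero value is an integer
    # multiple of q, i.e. the span q in (68); None if all values are 0.
    fracs = [Fraction(v).limit_denominator() for v in values if v != 0]
    if not fracs:
        return None
    num, den = 0, 1
    for f in fracs:
        num = gcd(num, abs(f.numerator))                      # gcd of numerators
        den = den * f.denominator // gcd(den, f.denominator)  # lcm of denominators
    return Fraction(num, den)

# Box kernel (65) with beta = 0.3: L(f) = {1, -0.3}
print(span([1, -0.3]))    # 1/10, so f is arithmetic with span q = 0.1
# Jump sizes {1.3, -1.3}, as in Example 2 below: span 13/10
print(span([1.3, -1.3]))
\end{verbatim}
An irrational value admits no such $q$, so the kernel is then nonarithmetic; floating-point inputs should be interpreted accordingly.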
For example, if $\beta$ in (\ref{65}) is irrational, then $f$ is nonarithmetic, while if $\beta=s/r$ for coprime positive integers $r$ and $s$, then $f$ is arithmetic with span $q=r^{-1}$. We write for $i=1,\cdots, d$, \begin{eqnarray*} g_{\bf w}^{(i)}(u+) &=& \lim_{v \downarrow u} g_{\bf w}^{(i)}(v), \nonumber \\ g_{\bf w}^{(i)}(u-) &=& \lim_{v \uparrow u} g_{\bf w}^{(i)}(v), \nonumber \\ \delta_i(u) &=& g_{\bf w}^{(i)}(u-)-g_{\bf w}^{(i)}(u+), \cr D_i &= &\{ u \in (0,T): \delta_i(u) \neq 0 \}, \end{eqnarray*} where $g_{\bf w}^{(i)}$ is as in (\ref{2}). Let $\phi_{\bf w}$, $\theta_{\bf w}$, $v_{\bf w}$ and $\mu$ be as in Section 5.1. If $D_i\neq \emptyset$ for some $i$, we can define $h_{\bf w}^*$ to be the probability mass function supported on $\{ \delta_i(u) \}_{u \in D_i, 1 \leq i \leq d}$ with \begin{equation} h_{\bf w}^*(x) = \sum_{i=1}^d \lambda_i \sum_{u \in D_i} e^{\theta_{\bf w} g_{\bf w}^{(i)}(u-)} {\bf 1}_{\{ \delta_i(u)=x \}} \Big/ \sum_{i=1}^d \lambda_i \sum_{u \in D_i} e^{\theta_{\bf w} g_{\bf w}^{(i)}(u-)}. \label{69} \end{equation} Let $E_*$ denote expectation when $X_1,X_2,\cdots$ are independent identically distributed random variables with probability mass function $h_{\bf w}^*$. Define \begin{equation} \omega_b = \inf \{ n: X_1+\cdots+X_n \geq b \} \quad {\rm and} \ R_b = X_1 + \cdots + X_{\omega_b}. \label{70} \end{equation} Then the overshoot constant is defined by \begin{equation} \nu_{\bf w} := \lim_{b \rightarrow \infty} E_* e^{-\theta_{\bf w}(R_b-b)}, \label{71} \end{equation} where $b$ is taken to be a multiple of $\chi$ if $h_{\bf w}^*$ is arithmetic with span $\chi$. Note that the statement ``$h_{\bf w}^*$ is arithmetic with span $\chi$'' implies that $\{ \delta_i(u) \}_{u \in D_i, 1 \leq i \leq d} \subset \chi \mathbb{Z}$. The constants $\nu_{\bf w}$ have been well studied in sequential analysis; see, for example, Siegmund (1985) for the existence of the limit in (\ref{71}). Let us define the asymptotic constant $\zeta_{\bf w}'$ (not depending on ${\bf y}$) by \begin{equation} \zeta_{\bf w}' = (2 \pi T v_{\bf w})^{-1/2} \nu_{\bf w} K_{\bf w} \sum_{i=1}^d \lambda_i \sum_{u \in D_i} \delta_i(u) e^{\theta_{\bf w} g_{\bf w}^{(i)}(u-)}, \label{72} \end{equation} where $$ K_{\bf w} = \cases{ 1 & if $h_{\bf w}^*$ is nonarithmetic, \cr \frac{1}{\theta_{\bf w} \chi} (1-e^{-\theta_{\bf w} \chi}) & if $h_{\bf w}^*$ is arithmetic with span $\chi$, $f$ is nonarithmetic, \cr \frac{q}{\chi} \left( \frac{1-e^{-\theta_{\bf w} \chi}}{1-e^{-\theta_{\bf w} q}} \right) & if $h_{\bf w}^*$ is arithmetic with span $\chi$, $f$ is arithmetic with span $q$.} $$ Since we can express each $\delta_i(u)$, $u \in D_i$, in the form $g_1-g_2$ for $g_1,g_2 \in L(f)$, it follows that if $f$ is arithmetic, then $h_{\bf w}^*$ is arithmetic and $\chi/q$ is a positive integer. Analogous to Proposition \ref{l1} and Theorem \ref{t1}, we have the following asymptotic results. \begin{pn} \label{l7} Assume (A1), (A2)$'$ and let $\Delta > 0$, $t \geq 0$. If $f$ is nonarithmetic and $c > \mu$, then \begin{equation} P_{\bf w} \Big\{ \sup_{t < u \leq t+\Delta} S_u \geq c \Big\} \sim \Delta \zeta_{\bf w}' e^{-T \phi_{\bf w}(c)} \hspace{0.5cm} \mbox{a.s.\ as $T \rightarrow \infty$}. \label{74} \end{equation} If $f$ is arithmetic with span $q$, then (\ref{74}) also holds under the convention that \begin{equation} Tc\ (=Tc_T) \in q {\mathbb{Z}} \hspace{0.5cm} \mbox{with $c \rightarrow c'$ as $T \rightarrow \infty$ for some $c' > \mu$}. \label{75} \end{equation} \end{pn} \begin{tm} \label{t2} Assume (A1) and (A2)$'$ and let $f$ be nonarithmetic.
{\rm (a)} Let $c > \mu$. Then the distribution (conditional on ${\bf w}$) of $\zeta_{\bf w}' e^{-T \phi_{\bf w}(c)} V_c$ converges to the exponential distribution with mean 1 almost surely as $T\rightarrow \infty$. {\rm (b)} Let $a \rightarrow \infty$ as $T \rightarrow \infty$ such that $(\log a)/T$ converges to a positive constant. Let $c_{\bf w} > \mu_{\bf w}$ satisfy $\phi_{\bf w}(c_{\bf w}) = (\log a)/T$. Then for any $z \in \mathbb{R}$, $$ P_{\bf w} \{ \theta_{\bf w} T (M_a -c_{\bf w}) - \log \zeta_{\bf w}' \geq z \} \rightarrow 1-\exp(-e^{-z}) \hspace{0.5cm} \mbox{a.s.\ as $T \rightarrow \infty$}. $$ {\rm (c)} Let $a \rightarrow \infty$ as $T \rightarrow \infty$ such that $(\log a)/T$ converges to a positive constant. Let $c$ ($=c_T$) be such that $\eta_{\bf w} := a \zeta_{\bf w}' e^{-T \phi_{\bf w}(c)}$ converges to a constant $\eta > 0$ almost surely. Then \begin{equation} P_{\bf w} \{ U_a = k \} - e^{-\eta_{\bf w}} \frac{\eta_{\bf w}^k}{k!} \rightarrow 0 \hspace{0.5cm} \mbox{a.s. $\forall \ k=0,1,\cdots$}. \label{10} \end{equation} If $f$ is arithmetic with span $q$, then (a) and (c) also hold under the convention (\ref{75}). \end{tm} {\sc Example 2.} We shall conduct here a simulation study similar to Example 1. The generation of ${\bf w}$ and ${\bf y}$ is similar to that in Example 1, but the box kernel (\ref{65}) is used instead of the Hamming window function (\ref{16}) when computing $g_{\bf w}^{(i)}$. We chose parameters $\varepsilon=4$ms and $\beta=0.3$ in (\ref{65}). Hence $f$ is arithmetic with span $q=0.1$ and $h_{\bf w}^*$ is arithmetic with span $\chi=1.3$. In fact, $h_{\bf w}^*$ is positive only on the values $-1.3$ and $1.3$ and hence $\nu_{\bf w}=1$. In the template ${\bf w}$ that was generated, there were a total of 2 $\times$ 59 elements in $\bigcup_i D_i$, with half of all $u \in D_i$ satisfying $\delta_i(u)=1.3$ and the other half satisfying $\delta_i(u)=-1.3$ for each $i$. Hence $$ h_{\bf w}^* (-1.3) = e^{-0.3 \theta_{\bf w}}/(e^{\theta_{\bf w}}+ e^{-0.3 \theta_{\bf w}}) \quad {\rm and} \quad h_{\bf w}^* (1.3) = e^{\theta_{\bf w}}/ (e^{\theta_{\bf w}}+e^{-0.3 \theta_{\bf w}}). $$ This information is then used to compute the constant $\zeta_{\bf w}'$ in the analytical approximation \begin{equation} P_{\bf w} \{ M_a \geq c \} = P_{\bf w} \{ V_c \leq a \} \doteq 1 - \exp(-a \zeta_{\bf w}' e^{-T \phi_{\bf w}(c)}), \label{76} \end{equation} an analogue of (\ref{19}) that follows from Theorem \ref{t2}(a). \begin{table} \begin{center} {\sc Table 2.} Estimates of $P_{\bf w} \{ M_a \geq c \} \pm$ standard error with $a+T=20$s. \begin{tabular}{c|c|c|c} \hline $c$ & Direct MC & Imp. Sampling & Anal. Approx. (\ref{76}) \cr \hline 0.065 & 0.029$\pm$0.004 & 0.0300$\pm$0.0016 & 0.0289 \cr 0.066 & 0.019$\pm$0.003 & 0.0218$\pm$0.0012 & 0.0207 \cr 0.067 & 0.012$\pm$0.002 & 0.0140$\pm$0.0008 & 0.0144 \cr 0.068 & 0.008$\pm$0.002 & 0.0103$\pm$0.0006 & 0.0101 \cr 0.069 & 0.005$\pm$0.002 & 0.0067$\pm$0.0004 & 0.0070 \cr 0.070 & 0.003$\pm$0.001 & 0.0051$\pm$0.0003 & 0.0047 \cr \hline \end{tabular} \end{center} \end{table} In Table 2, we compare the analytical approximation (\ref{76}) against both direct Monte Carlo and importance sampling, with 2000 simulation runs used to obtain each entry. The variance reduction when importance sampling is used is similar to that seen in Example 1, and the technique is indeed an effective time-saving device for computing $p$-values, especially when the $p$-values are small.
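To make the bookkeeping behind (\ref{76}) concrete, the following Python sketch assembles the constant $K_{\bf w}$ of (\ref{72}) and the resulting approximate tail probability. The function names are ours, and the inputs $\theta_{\bf w}$, $\zeta_{\bf w}'$ and $\phi_{\bf w}(c)$ are assumed to have already been computed from the template as in Section 5.1, so this is a sketch of how the constants combine rather than a full implementation.
\begin{verbatim}
import math

def K_w(theta, chi=None, q=None):
    # The three cases of K_w in (72); chi is the span of h_w^*,
    # q is the span of f, and None encodes "nonarithmetic".
    if chi is None:
        return 1.0
    if q is None:
        return (1.0 - math.exp(-theta * chi)) / (theta * chi)
    return (q / chi) * (1.0 - math.exp(-theta * chi)) / (
        1.0 - math.exp(-theta * q))

def approx_tail(a, T, zeta_prime, phi_c):
    # Analytical approximation (76):
    # P{M_a >= c} ~ 1 - exp(-a * zeta' * exp(-T * phi(c))).
    return 1.0 - math.exp(-a * zeta_prime * math.exp(-T * phi_c))

# Example 2 setting: f arithmetic with q = 0.1, h_w^* with chi = 1.3;
# theta = 1.0 below is a placeholder, not the theta_w of the paper.
print(K_w(theta=1.0, chi=1.3, q=0.1))
\end{verbatim}
The Poisson probabilities in Table 3 below are then simply $e^{-\eta_{\bf w}} \eta_{\bf w}^k/k!$ with $\eta_{\bf w} = a \zeta_{\bf w}' e^{-T \phi_{\bf w}(c)}$, as in (\ref{10}).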
The analytical approximations in Table 2 are also accurate and agree well with the simulation results. In addition to the above simulation study, we also conducted a similar exercise to check the accuracy of the Poisson approximation of $U_a$ in (\ref{10}), this time with $a+T=200$s and threshold level $c=0.0614$. The maximal proportion of overlap between two matches is chosen to be $\alpha=0.8$. The analytical approximations are compared against 2000 direct Monte Carlo simulation runs and the results are recorded in Table 3. Again the analytical approximations are quite accurate, which indicates the usefulness of the asymptotic results in Theorem \ref{t2} for estimating $p$-values. \begin{table} \begin{center} {\sc Table 3.} Estimates of $P_{\bf w} \{ U_a = k \}$ and $\eta_{\bf w} = E_{\bf w} (U_a)$. Standard errors in parentheses. \begin{tabular}{c|c|c|c|c|c|c|c||c} \hline $k$ & 0 & 1 & 2 & 3 & 4 & 5 & $\geq$ 6 & $\eta_{\bf w}$ \cr \hline Approx. (\ref{10}) & 0.336 & 0.366 & 0.200 & 0.073 & 0.020 & 0.004 & 0.001 & 1.09 \cr \hline Direct Monte & 0.328 & 0.363 & 0.195 & 0.084 & 0.024 & 0.005 & 0.001 & 1.13 \cr Carlo & (0.011) & (0.011) & (0.009) & (0.006) & (0.003) & (0.002) & (0.001) & (0.02) \cr \hline \end{tabular} \end{center} \end{table} \medskip We shall now prove Proposition \ref{l7} and Theorem \ref{t2} via the following preliminary lemmas. Let \begin{equation} h_{\bf w}(x) = \sum_{i=1}^d \lambda_i \sum_{u \in D_i} e^{\theta_{\bf w} g_{\bf w}^{(i)}(u+)} {\bf 1}_{\{ \delta_i(u)=x \}} \Big/ \sum_{i=1}^d \lambda_i \sum_{u \in D_i} e^{\theta_{\bf w} g_{\bf w}^{(i)}(u+)}. \label{79} \end{equation} Then $h_{\bf w}$ and $h_{\bf w}^*$ [see (\ref{69})] are conjugate probability mass functions in the following sense. \begin{la} \label{l8} Let $D_i\neq \emptyset$ for some $i$. Then there exists $\gamma_{\bf w} = 1+O(T^{-1})$ a.s. such that $$ h_{\bf w}^*(x) = \gamma_{\bf w} e^{\theta_{\bf w} x} h_{\bf w}(x) \hspace{0.5cm} \forall x. $$ \end{la} {\sc Proof.} Let $u \in D_i$ with $w_j^{(i)} < u < w_{j+1}^{(i)}$ for adjacent spikes $w_j^{(i)}, w_{j+1}^{(i)} \in {\bf w}^{(i)}$. Then by the symmetry of $g_{\bf w}^{(i)}$ in the interval $(w_j^{(i)},w_{j+1}^{(i)})$ about its mid-point $(w_j^{(i)}+w_{j+1}^{(i)})/2$, it follows that $v:=w_{j+1}^{(i)}-(u-w_j^{(i)}) \in D_i$ and that $g_{\bf w}^{(i)}(v-) = g_{\bf w}^{(i)}(u+)$. Hence $\gamma_{\bf w}$, which we define here to be the ratio of the denominators on the right-hand sides of (\ref{69}) and (\ref{79}), is $1+O(T^{-1})$ a.s., with the $O(T^{-1})$ coming from $u \in D_i$ occurring before the first spike or after the last spike in ${\bf w}^{(i)}$. Since $$ e^{\theta_{\bf w} g_{\bf w}^{(i)}(u-)} {\bf 1}_{\{ \delta_i(u)=x \}} = e^{\theta_{\bf w}[x+g_{\bf w}^{(i)}(u+)]} {\bf 1}_{\{ \delta_i(u)=x \}}, $$ Lemma \ref{l8} holds. \hfill $\Box$ \begin{la} \label{l9} Assume (A1) and (A2)$'$. Then for all $\varepsilon > 0$, there exists $\kappa$ large enough such that for any $t \geq 0$, the inequality $$ \Big| \frac{ P_{\bf w} \{ S_t < c, \sup_{t < u \leq t+\kappa T^{-1}} S_u \geq c \} }{ \kappa T^{-1} \zeta_{\bf w}' e^{-T \phi_{\bf w}(c)} } - 1 \Big| \leq \varepsilon $$ holds for all large $T$ with probability 1, where the constant $\zeta_{\bf w}'$ is defined in (\ref{72}) and $c > \mu$ if $f$ is nonarithmetic; $c$ satisfies (\ref{75}) if $f$ is arithmetic with span $q$. \end{la} {\sc Proof.} Assume without loss of generality $t=0$ and let $G_i = \bigcup_{v \in D_i} (v,v+\kappa T^{-1}]$.
We can write $TS_0 = TS_0' + J_0$, where $$ S_0' = T^{-1} \sum_{i=1}^d \sum_{y \in {\bf y}^{(i)}, y \not\in G_i} g_{\bf w}^{(i)}(y) \ {\rm and} \ J_0 = \sum_{i=1}^d \sum_{y \in {\bf y}^{(i)} \cap G_i} g_{\bf w}^{(i)}(y). $$ The random variables $S_0'$ and $J_0$ are independent because they are functions of the Poisson processes ${\bf y}^{(i)}$ over disjoint subsets of the real line. Let us first consider $f$ arithmetic with span $q$. Then $TS_0'$ and $J_0$ are both integral multiples of $q$. Since $f$ is constant between jumps, we can express $TS_u=TS_0' + J_u$ [see (\ref{1})] where \begin{equation} J_u = \sum_{i=1}^d \sum_{y \in {\bf y}^{(i)} \cap G_i} g_{\bf w}^{(i)}(y-u) \qquad \forall u \in (0,\kappa T^{-1}). \label{84} \end{equation} Hence both $S_0 < c$ and $\sup_{0 < u \leq \kappa T^{-1}} S_u \geq c$ occur if and only if \begin{equation} \sup_{0 < u \leq \kappa T^{-1}} (J_u-J_0) \geq \ell q \ {\rm and} \ S_0' =c-T^{-1} kq \ {\rm for \ some \ integer} \ \ell \geq 1 \ {\rm and} \ k=J_0/q+\ell. \label{85} \end{equation} Since $E_{\theta_{\bf w}}[S_0']=c+O(T^{-1})$ a.s., by the local limit theorem for lattice random variables [see, for example, Theorem 15.5.3 of Feller (1971)], \begin{equation} P_{\theta_{\bf w}} \{ S_0' = c-T^{-1}qk \} \sim \frac{q}{(2 \pi T v_{\bf w})^{1/2}} \ {\rm a.s.} \label{86} \end{equation} for any integer $k$. Since $S_0'$ and $( J_u )_{0 < u \leq \kappa T^{-1}}$ are independent, it follows from (\ref{85}), (\ref{86}), (\ref{5}) and the change of measure (\ref{14}) that \begin{eqnarray} & & P_{\bf w} \Big\{ S_0 < c, \sup_{0 < u \leq \kappa T^{-1}} S_u \geq c \Big\} \nonumber \\ &=& E_{\theta_{\bf w}} \Big[ \frac{dP_{\bf w}}{dP_{\theta_{\bf w}}}({\bf y}) {\bf 1}_{\{ S_0 < c, \sup_{0 < u \leq \kappa T^{-1}} S_u \geq c \}} \Big] \nonumber \\ &\sim & \frac{q}{(2 \pi T v_{\bf w})^{1/2}} e^{-T \phi_{\bf w}(c)} \sum_{\ell=1}^\infty e^{\theta_{\bf w} \ell q} P_{\theta_{\bf w}} \Big\{ \sup_{0 < u \leq \kappa T^{-1}} (J_u-J_0) \geq \ell q \Big\}. \label{87} \end{eqnarray} Since $g_{\bf w}^{(i)}$ is constant between jumps, the graph of $(J_u-J_0)$ against $u$ is also piecewise constant, with jumps at all $u$ for which $y-u \in D_i$ for some $y \in {\bf y}^{(i)}$, $1 \leq i \leq d$ [see (\ref{84})]. Let $N_*$ be the total number of spikes in $\bigcup_{1 \leq i \leq d} ({\bf y}^{(i)} \cap G_i)$. Then there are $N_*$ such jumps and $$ \sup_{0 < u \leq \kappa T^{-1}} (J_u-J_0) = \sup_{1 \leq j \leq N_*} (X_1+\cdots+X_j) $$ where $X_j$ is the $j$th jump and has probability mass function $h_{\bf w}$. Moreover, $X_1,X_2,\cdots$ are independent conditioned on $N_*$, a Poisson random variable independent of the $X_i$'s with mean \begin{equation} EN_* = \kappa T^{-1} \sum_{i=1}^d \lambda_i \sum_{u \in D_i} e^{\theta_{\bf w} g_{\bf w}^{(i)}(u+)}. \label{88} \end{equation} If $r \in \{ 0,\cdots,\chi/q-1 \}$ and $s \in {\mathbb{Z}}^+$, then $R_{s \chi-rq} = R_{s \chi}$ [see (\ref{70})]. Let $E_*$ and $P_*$ denote the expectation and probability measure respectively when $X_1,X_2,\cdots$ are independent identically distributed with probability mass function $h_{\bf w}^*$.
Then it follows from a change of measure to $P_*$ and Lemma \ref{l8} that \begin{eqnarray} & & \sum_{\ell=1}^\infty e^{\theta_{\bf w} \ell q} P_{\theta_{\bf w}} \Big\{ \sup_{0 < u \leq \kappa T^{-1}} (J_u-J_0) \geq \ell q \Big\} \cr &= & \gamma_{\bf w}^{-1} \sum_{\ell=1}^\infty E_* \Big[ e^{-\theta_{\bf w}(R_{\ell q}-\ell q)} {\bf 1}_{\{ \sup_{1 \leq j \leq N_*} (X_1+\cdots+X_j) \geq \ell q \}} \Big] \cr &= & \gamma_{\bf w}^{-1} E_* \Big[ \sum_{r=0}^{\chi/q-1} \sum_{s=1}^\infty e^{-\theta_{\bf w}[R_{s \chi}-(s \chi-rq)]} {\bf 1}_{\{ \sup_{1 \leq j \leq N_*} (X_1+\cdots+X_j) \geq s \chi \}} \Big] \cr &\sim &\chi^{-1} \Big( \sum_{r=0}^{\chi/q-1} e^{-\theta_{\bf w} rq} \Big) \nu_{\bf w} E_* \Big[ \sup_{1 \leq j \leq N_*} (X_1+\cdots+X_j) \Big]. \label{89} \end{eqnarray} Since $X_i$ has positive mean under $P_*$ for all large $T$ and the almost sure limit of $EN_*$ [see (\ref{88})] is proportional to $\kappa$, it follows that there exists $\kappa$ large enough such that \begin{equation} \Big| \frac{ E_* [ \sup_{1 \leq j \leq N_*} (X_1+\cdots+X_j) ] }{ (EN_*) (E_* X_1)} -1 \Big| < \frac{\varepsilon }{2} \label{90} \end{equation} for all large $T$. Since $(q/\chi) \sum_{r=0}^{\chi/q-1} e^{-\theta_{\bf w} rq} = K_{\bf w}$, Lemma \ref{l9} then follows from (\ref{69}) and (\ref{87}) to (\ref{90}). When $f$ is nonarithmetic, the local limit result (\ref{22}) with $I_{2,T} = \mathbb{R}$ and $t=0$ is used in place of (\ref{86}). \hfill $\Box$ The next lemma, needed for the proofs of both Proposition \ref{l7} and Theorem \ref{t2}, will be proved in Appendix B. \begin{la} \label{l10} Assume (A1) and (A2)$'$. Let $$ A_t = \Big\{ S_t < c, \ \sup_{t < u \leq t+\kappa T^{-1}} S_u \geq c \Big\}. $$ Then there exists $r_\kappa =o(\kappa)$ as $\kappa \rightarrow \infty$ such that for all $t \geq 0$, with probability 1, $$ \sum_{\ell=1}^{\lfloor T^2/\kappa+1 \rfloor} P_{\bf w}(A_t \cap A_{t+\ell \kappa T^{-1}}) \leq r_\kappa T^{-1/2} e^{-T \phi_{\bf w}(c)} $$ for all large $T$. \end{la} {\sc Proof of Proposition {\rm \ref{l7}}.} By stationarity, we may assume without loss of generality $t=0$. Then (\ref{74}) follows from Lemmas \ref{l2}, \ref{l9}, \ref{l10} and the inequalities \begin{eqnarray*} & & \sum_{q=0}^{\lfloor \Delta/(\kappa T^{-1}) \rfloor-1} \Big[ P_{\bf w}(A_{q \kappa T^{-1}}) - \sum_{\ell=1}^{\lfloor \Delta/(\kappa T^{-1}) \rfloor-1-q} P_{\bf w} (A_{q \kappa T^{-1}} \cap A_{(q+\ell) \kappa T^{-1}}) \Big] \cr & \leq & P_{\bf w} \Big\{ \sup_{0 < u \leq \Delta} S_u \geq c \Big\} \leq \sum_{q=0}^{\lfloor \Delta/( \kappa T^{-1}) \rfloor} P_{\bf w}(A_{q \kappa T^{-1}})+P_{\bf w} \{ S_0 \geq c \}, \end{eqnarray*} with $\kappa$ arbitrarily large; see, for example, the proof of Proposition \ref{l1}. \hfill $\Box$ {\sc Proof of Theorem {\rm \ref{t2}}.} The proof of Theorem \ref{t2} proceeds as in the proof of Theorem \ref{t1}. The only modification needed is the replacement of $\zeta_{\bf w}$ by $\zeta_{\bf w}'$. For the proof of Theorem \ref{t2}(c), we will also need to replace (\ref{64}) by $$ \sum_{q=0}^{\lfloor a/(\kappa T^{-1}) \rfloor} \Big[ \sum_{\ell=\lfloor (1-\alpha) T^2/\kappa \rfloor}^{\lfloor T^2/\kappa+1 \rfloor} P_{\bf w} (A_{q \kappa T^{-1}} \cap A_{(q+\ell)\kappa T^{-1}}) \Big] \leq \frac{ \eta_{\bf w} r_\kappa T^{1/2}}{ \kappa \zeta_{\bf w}'}.
$$ \hfill $\Box$ \section{Acknowledgments} Wei-Liem Loh would like to thank Professor Yannis Yatracos for the many discussions on metric entropy and minimum distance estimation, and Professor Zhiyi Chi for introducing him to the field of neuroscience when he visited the University of Chicago in Spring 2003. \section{Appendix A} \begin{la} \label{la:a.3} Let $\Theta_{\tilde{\kappa}, q, n}^2$ and $\rho_{\Theta_{\tilde{\kappa}, q, n}^2}$ be as in Section 2. Then for each $(s_1, r_1)\in \Theta_{\tilde{\kappa}, q, n}^2$, \begin{eqnarray*} && \Big\{ \sum_{j=0}^\infty \int^*_{0<w_1<\cdots < w_j <T} \sup_{ \rho_{\Theta_{\tilde{\kappa}, q, n}^2} ((s_1, r_1), (s_2, r_2))\leq \varepsilon, (s_2,r_2)\in \Theta_{\tilde{\kappa}, q, n}^2 } [ p_{s_1,r_1}^{1/2} (\{w_1,\cdots, w_j\}) \nonumber \\ && \hspace{0.5cm} - p_{s_2, r_2}^{1/2} (\{ w_1,\cdots, w_j\}) ]^2 dw_1 \cdots dw_j \Big\}^{1/2} \leq \varepsilon C_{\tilde{\kappa}}, \end{eqnarray*} where $C_{\tilde{\kappa}} \geq 1/2$ is a constant depending only on $\tilde{\kappa}$. Here $\int^*$ denotes the upper integral [see, for example, Dudley (1999), page 94]. Consequently, \begin{equation} H^B (\varepsilon, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n}} ) \leq H( \frac{ \varepsilon}{ 2 C_{\tilde{\kappa}} }, \Theta_{\tilde{\kappa}, q, n}^2, \rho_{\Theta_{\tilde{\kappa}, q, n}^2}) \leq \frac{ 2^{(q+2)/q} C_{\tilde{\kappa}}^{1/q} C_{\tilde{\kappa}, q } }{\varepsilon^{1/q} }. \label{eq:4.9} \end{equation} \end{la} {\sc Proof.} Let $\bar{\kappa} = \kappa_0\vee 1$. We observe from (\ref{eq:4.1}) that \begin{eqnarray*} && | p_{s_1,r_1}^{1/2} (\{w_1,\cdots, w_j\}) - p_{s_2, r_2}^{1/2} (\{ w_1,\cdots, w_j\}) | \nonumber \\ &=& | e^{-\int_0^T s_1(t) r_1(t-w_{\zeta(t)}) dt/2} \prod_{i=1}^j s_1^{1/2} (w_i) r_1^{1/2} (w_i- w_{i-1}) \nonumber \\ &&\hspace{0.5cm} - e^{-\int_0^T s_2 (t) r_2 (t-w_{\zeta(t)}) dt/2} \prod_{i=1}^j s_2^{1/2} (w_i) r_2^{1/2} (w_i- w_{i-1}) |, \end{eqnarray*} where $\zeta(t) = \max\{ k\geq 0: w_k <t\}$. Since $\rho_{\Theta_{\tilde{\kappa}, q, n}^2} ( (s_1, r_1), (s_2, r_2)) \leq \varepsilon$, we have \begin{eqnarray*} | s_1^{1/2} ( w_i) r_1^{1/2} (w_i -w_{i-1}) - s_2^{1/2} (w_i) r_2^{1/2} (w_i-w_{i-1}) | &\leq & 2 \varepsilon \bar{\kappa}, \hspace{0.5cm}\forall i\geq 1, \end{eqnarray*} and \begin{eqnarray*} && | e^{-\int_0^T s_1(t) r_1(t- w_{\zeta(t)} ) dt/2} - e^{-\int_0^T s_2(t) r_2(t - w_{\zeta(t)} ) dt/2} | \nonumber \\ &\leq & | 1 - e^{\int_0^T [ s_1(t) r_1(t- w_{\zeta(t)}) - s_2 (t) r_2 (t-w_{\zeta(t)}) ] dt/2} | \nonumber \\ &\leq & 2 \varepsilon \bar{\kappa}^3 T \sum_{i=0}^\infty \frac{ ( 2 \varepsilon \bar{\kappa}^3 T )^i}{ (i+1)!}.
\end{eqnarray*} Consequently for $j=1, 2, \cdots,$ \begin{eqnarray*} && \int^*_{0<w_1<\cdots <w_j <T} \sup_{\rho_{\Theta_{\tilde{\kappa}, q, n}^2}((s_1, r_1), (s_2, r_2)) \leq \varepsilon, (s_2, r_2)\in \Theta_{\tilde{\kappa}, q, n}^2 } [ e^{-\int_0^T s_1(t) r_1(t-w_{\zeta(t)}) dt/2} \nonumber \\ &&\hspace{0.5cm}\times \prod_{i=1}^j s_1^{1/2} (w_i) r_1^{1/2} (w_i- w_{i-1}) \nonumber \\ &&\hspace{0.5cm} - e^{-\int_0^T s_2 (t) r_2 (t-w_{\zeta(t)}) dt/2} \prod_{i=1}^j s_2^{1/2} (w_i) r_2^{1/2} (w_i- w_{i-1}) ]^2 dw_1 \cdots dw_j \nonumber \\ &\leq & [ 2 \varepsilon \bar{\kappa}^{2 j+3} T \sum_{i=0}^\infty \frac{ (2 \varepsilon \bar{\kappa}^3 T )^i}{ (i+1)!} + 2 j \varepsilon \bar{\kappa}^{2 j-1} ]^2 \frac{T^j }{ j!}, \end{eqnarray*} and for each $(s_1, r_1)\in \Theta_{\tilde{\kappa}, q, n}^2$, \begin{eqnarray*} && \Big\{ \sum_{j=0}^\infty \int^*_{0<w_1<\cdots < w_j <T} \sup_{ \rho_{\Theta_{\tilde{\kappa}, q, n}^2}((s_1, r_1), (s_2, r_2)) \leq \varepsilon, (s_2, r_2)\in \Theta_{\tilde{\kappa}, q, n}^2 } [ p_{s_1,r_1}^{1/2} (\{w_1,\cdots, w_j\}) \nonumber \\ && \hspace{0.5cm} - p_{s_2, r_2}^{1/2} (\{ w_1,\cdots, w_j\}) ]^2 dw_1 \cdots dw_j \Big\}^{1/2} \nonumber \\ &\leq & \varepsilon \Big\{ \sum_{j= 0}^\infty [ 2 \bar{\kappa}^{2 j+3} T \sum_{i=0}^\infty \frac{ (2 \varepsilon \bar{\kappa}^3 T )^i}{ (i+1)!} + 2 j \bar{\kappa}^{2 j-1} ]^2 \frac{T^j }{ j!} \Big\}^{1/2} \nonumber \\ &\leq & \varepsilon C_{\tilde{\kappa} }, \end{eqnarray*} where $C_{\tilde{\kappa} }$ is a constant depending only on $\tilde{\kappa}$. (\ref{eq:4.9}) now follows from Lemma 2.1 of Ossiander (1987) and (\ref{eq:4.20}) since $C_{\tilde{\kappa}} \geq 1/2$. \hfill $\Box$ \begin{la} \label{la:a.6} With the notation of Section 2, we have \begin{eqnarray*} H^B (\varepsilon, \tilde{\cal Z}_{\tilde{\kappa}, q, n}, \rho_{\tilde{\cal Z}_{\tilde{\kappa}, q, n}}) & \leq & H^B ( \frac{\varepsilon }{ 2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n}}) \nonumber \\ &\leq & 2^{(q+2)/q} C_{\tilde{\kappa}}^{1/q} C_{\tilde{\kappa}, q} (\frac{ 2 e^{\tau/2} }{\varepsilon})^{1/q}, \hspace{0.5cm}\forall \varepsilon>0. \end{eqnarray*} \end{la} {\sc Proof.} For $i=1,2$, let $f_i: {\cal N}\rightarrow {\mathbb{R}}$ be a nonnegative function such that \begin{displaymath} \sum_{j=0}^\infty \int_{0<w_1<\cdots< w_j<T} f_i (\{w_1,\cdots, w_j\}) dw_1 \cdots dw_j <\infty. \end{displaymath} Define for $j=0,1, \cdots,$ \begin{displaymath} A_{i,j} = \left\{ \{w_1,\cdots, w_j \}: f_i(\{w_1,\cdots, w_j \}) < e^{-\tau} p_{s,r}(\{w_1,\cdots, w_j \}) \right\}, \end{displaymath} and \begin{displaymath} \tilde{f}_i (\{w_1,\cdots, w_j\}) = \left\{ \begin{array}{ll} f_i (\{ w_1,\cdots, w_j\}), & \mbox{on $A_{i, j}^c$,} \\ e^{-\tau} p_{s,r}(\{w_1,\cdots, w_j\}), & \mbox{on $A_{i, j}$.} \end{array} \right.
\end{displaymath} With $\tilde{Z}_{f_i}$ as in (\ref{eq:4.13}), we have \begin{eqnarray*} && \sum_{j=0}^\infty \int_{0<w_1<\cdots<w_j<T} [\tilde{Z}_{f_1} (\{w_1,\cdots, w_j\} ) -\tilde{Z}_{f_2} (\{w_1,\cdots, w_j\} ) ]^2 \nonumber \\ &&\hspace{0.5cm}\times p_{s,r} (\{w_1,\cdots, w_j\}) dw_1\cdots dw_j \nonumber \\ &=& 4 \sum_{j=0}^\infty \int_{0<w_1<\cdots<w_j<T} \Big[ \log (\frac{ \tilde{f}_1^{1/2}(\{w_1,\cdots, w_j\} ) }{ p_{s,r}^{1/2} (\{w_1,\cdots, w_j\}) } ) -\log(\frac{ \tilde{f}_2^{1/2} (\{w_1,\cdots, w_j\}) }{ p_{s,r}^{1/2} (\{w_1,\cdots, w_j\}) }) \Big]^2 \nonumber \\ &&\hspace{0.5cm}\times p_{s,r} (\{w_1,\cdots, w_j\}) dw_1\cdots dw_j \nonumber \\ &\leq & 4 e^\tau \sum_{j=0}^\infty \int_{0<w_1<\cdots<w_j<T} [ \tilde{f}_1^{1/2}(\{w_1,\cdots, w_j\} ) - \tilde{f}_2^{1/2} (\{w_1,\cdots, w_j\}) ]^2 dw_1\cdots dw_j. \nonumber \\ \end{eqnarray*} By dividing the integral into four parts, namely $A_{1,j}\cap A_{2, j}, A_{1,j}^c \cap A_{2,j}, A_{1,j}\cap A_{2,j}^c$ and $A_{1,j}^c \cap A_{2,j}^c$, we observe that \begin{eqnarray*} && \sum_{j=0}^\infty \int_{0<w_1<\cdots<w_j<T} [ \tilde{f}_1^{1/2}(\{w_1,\cdots, w_j\} ) - \tilde{f}_2^{1/2} (\{w_1,\cdots, w_j\}) ]^2 dw_1\cdots dw_j \nonumber \\ &\leq & \sum_{j=0}^\infty \int_{0<w_1<\cdots<w_j<T} [ f_1^{1/2}(\{w_1,\cdots, w_j\} ) - f_2^{1/2} (\{w_1,\cdots, w_j\}) ]^2 dw_1\cdots dw_j. \end{eqnarray*} Now we conclude that \begin{eqnarray*} && \Big\{ \sum_{j=0}^\infty \int_{0<w_1<\cdots<w_j<T} [ \tilde{Z}_{f_1} (\{w_1,\cdots, w_j\} ) - \tilde{Z}_{f_2} (\{w_1,\cdots, w_j\} ) ]^2 \nonumber \\ &&\hspace{0.5cm}\times p_{s,r} (\{w_1,\cdots, w_j\}) dw_1\cdots dw_j \Big\}^{1/2} \nonumber \\ &\leq & 2 e^{\tau/2} \Big\{ \sum_{j=0}^\infty \int_{0<w_1<\cdots<w_j<T} [ f_1^{1/2}(\{w_1,\cdots, w_j\} ) - f_2^{1/2} (\{w_1,\cdots, w_j\}) ]^2 dw_1\cdots dw_j \Big\}^{1/2}. \end{eqnarray*} Lemma \ref{la:a.6} now follows from Lemma \ref{la:a.3}. \hfill $\Box$ The statement of the next lemma can be found in Wong and Shen (1995), page 346, but its proof is not provided there. \begin{la}[A Bernstein-type inequality] \label{la:4.6} Let $Z_1, Z_2, \cdots$ be independent identically distributed random variables satisfying \begin{displaymath} E (|Z_1|^j) \leq \frac{ j! b^{j-2} \gamma}{2}, \hspace{0.5cm}\forall j\geq 2. \end{displaymath} Then \begin{displaymath} P[ \frac{1}{n^{1/2}} \sum_{i=1}^n ( Z_i - EZ_i) \geq t ] \leq \exp [ - \frac{ t^2}{ 4 (2 \gamma + b t n^{-1/2} ) } ], \hspace{0.5cm}\forall t>0. \end{displaymath} \end{la} {\sc Proof.} The following proof is an adaptation of the proof given in Bennett (1962), pages 36 to 38. Let $ c>0$ be a suitably chosen constant and ${\rm Var}(Z_i) = \sigma^2$. Then \begin{displaymath} E ( e^{c ( Z_i - EZ_i) } ) = 1 + c^2 \sum_{j=2}^\infty \frac{ c^{j-2} E[ (Z_i - EZ_i)^j] }{ j!} = 1 + c^2 F, \hspace{0.5cm} \mbox{say}. \end{displaymath} Since $1+ c^2 F\leq e^{ c^2 F}$, we have \begin{displaymath} E \{ \exp[ c\sum_{i=1}^n (Z_i - EZ_i) ] \} \leq e^{ n c^2 F}. \end{displaymath} Hence it follows from Markov's inequality that \begin{eqnarray*} P[ \sum_{i=1}^n (Z_i - EZ_i) \geq t \sqrt{n} ] &\leq & e^{-c t\sqrt{n} } E e^{ c \sum_{i=1}^n (Z_i - EZ_i) } \nonumber \\ &\leq & e^{ c^2 n F - c t \sqrt{n}} \nonumber \\ &\leq & e^{- t^2/(4 F)}, \end{eqnarray*} by choosing $c$ such that $F = t/(2 c n^{1/2})$. Now for $j\geq 2$, \begin{eqnarray*} | E( Z_1 - EZ_1)^j| &\leq & E( |Z_1 -EZ_1|^j ) \nonumber \\ &\leq & \sum_{i=0}^j \frac{ j!}{i! (j-i)!} ( E|Z_1|^i) ( E|Z_1|^{j-i} ) \nonumber \\ &\leq & 2^j E(|Z_1|^j ) \nonumber \\ &\leq & j!
2^{j-1} b^{j-2} \gamma. \end{eqnarray*} Hence if $2 b c < 1$, \begin{displaymath} F \leq 2 \gamma \sum_{j=2}^\infty (2 bc)^{j-2} = \frac{ 2 \gamma }{ 1 - 2 bc}. \end{displaymath} This implies that \begin{displaymath} \frac{ t}{2 c n^{1/2} } \leq \frac{ 2 \gamma }{ 1 - 2 b c}, \end{displaymath} and consequently \begin{displaymath} c \geq \frac{ t}{ 4 \gamma n^{1/2} + 2 b t}. \end{displaymath} By taking \begin{displaymath} c = \frac{ t}{ 4 \gamma n^{1/2} + 2 b t}, \end{displaymath} we observe that $2 bc <1$ and \begin{displaymath} P[ \frac{1}{n^{1/2}} \sum_{i=1}^n ( Z_i - EZ_i) \geq t ] \leq \exp [ - \frac{ t^2}{ 4 (2 \gamma + b t n^{-1/2} ) } ]. \end{displaymath} This proves Lemma \ref{la:4.6}.\hfill $\Box$ For $g: {\cal N}\rightarrow {\mathbb{R}}$, let \begin{equation} \nu_n (g) = \frac{1}{n^{1/2}} \sum_{i=1}^n [ g(\{w_{i,1},\cdots, w_{i, N_i(T)} \}) - E_{s,r} g(\{w_{i,1},\cdots, w_{i, N_i(T)} \}) ], \label{eq:3.22} \end{equation} assuming that the right-hand side exists and $\{ w_{i,1}, \cdots, w_{i, N_i(T)} \}$ is as in Section 3. We observe from Lemma 6 of Wong and Shen (1995) that \begin{equation} P_{s,r} [ \nu_n( \tilde{Z}_f ) \geq t ] \leq \exp[ - \frac{ t^2}{ 8 (8 c_0 \| f^{1/2} - p_{s,r}^{1/2} \|_2^2 + 2 t n^{-1/2} ) } ], \hspace{0.5cm}\forall t>0, \label{eq:3.21} \end{equation} where $\tilde{Z}_f$ is as in (\ref{eq:4.13}) and \begin{displaymath} c_0= (e^{\tau/2} - 1 -\frac{\tau}{2} )/(1 - e^{-\tau/2} )^2. \end{displaymath} The next lemma is motivated by Theorem 3 of Shen and Wong (1994) and Lemma 7 of Wong and Shen (1995). As the proof of the latter lemma is only briefly sketched in Wong and Shen (1995), page 348, a detailed proof of Lemma \ref{la:4.10} is given below. \begin{la} \label{la:4.10} For any $t>0, 0<\gamma<1$ and $M>0$, let \begin{displaymath} \psi (M, t^2, n)= \frac{ M^2}{ 16 ( 8c_0 t^2 + M n^{-1/2}) }. \end{displaymath} Assume that \begin{eqnarray} H^B (\frac{t}{10}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n}}) &\leq & \frac{\gamma}{4} \psi (M, t^2, n), \label{eq:4.88} \\ \int_{\gamma M/(32 n^{1/2})}^{e^{\tau/2}t/5} [ H^B (\frac{ x}{2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n}} ) ]^{1/2} dx & \leq & \frac{ M \gamma^{3/2} }{ 2^{10} }. \label{eq:4.89} \end{eqnarray} Then \begin{displaymath} P^*_{s,r} [ \sup_{\| p^{1/2}_{s_1, r_1} - p^{1/2}_{s, r} \|_2 \leq t, p_{s_1,r_1} \in {\cal F}_{\tilde{\kappa}, q, n}} \nu_n( \tilde{Z}_{p_{s_1, r_1}} ) \geq M] \leq 3 e^{-(1-\gamma ) \psi( M, t^2, n)}, \end{displaymath} where $\nu_n(.)$ is as in (\ref{eq:3.22}). \end{la} {\sc Proof.} Without loss of generality, we can assume that \begin{equation} 3 e^{-(1-\gamma ) \psi( M, t^2, n)} \leq 1, \label{eq:4.25} \end{equation} for otherwise Lemma \ref{la:4.10} trivially holds. We observe from Lemma \ref{la:a.6} that \begin{displaymath} H^B (\gamma, \tilde{\cal Z}_{\tilde{\kappa}, q, n}, \rho_{\tilde{\cal Z}_{\tilde{\kappa}, q, n} }) \leq H^B (\frac{\gamma }{ 2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n}}) <\infty.
\end{displaymath} For any $\delta_{n,0} > \delta_{n,1} > \cdots > \delta_{n,n_0} > 0$, there exist ${\cal F}_{n,j}, j=0,\cdots, n_0$, with \begin{displaymath} | {\cal F}_{n,j} | = \exp[ H^B (\frac{ \delta_{n,j} }{2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n}} ) ], \end{displaymath} such that for each $p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n}$ one can find $f_j^L$, $f_j^U \in {\cal F}_{n,j}$ with \begin{displaymath} f_j^L(\{w_1,\cdots, w_{N(T)} \} ) \leq p_{s_1, r_1}(\{w_1,\cdots, w_{N(T)} \} ) \leq f_j^U(\{w_1,\cdots, w_{N(T)} \} ), \hspace{0.5cm} \mbox{a.s.}, \end{displaymath} and \begin{eqnarray*} && \Big\{ \sum_{j_1=0}^\infty \int_{0<w_1<\cdots < w_{j_1} <T} [ \tilde{Z}_{f_j^U}( \{w_1,\cdots, w_{j_1}\}) - \tilde{Z}_{f_j^L}( \{w_1,\cdots, w_{j_1} \}) ]^2 \nonumber \\ && \hspace{0.5cm}\times p_{s,r} (\{w_1,\cdots, w_{j_1} \}) dw_1\cdots dw_{j_1} \Big\}^{1/2} \nonumber \\ &\leq & 2 e^{\tau/2} \Big\{ \sum_{j_1=0}^\infty \int_{0<w_1<\cdots < w_{j_1}<T} [f^U_j ( \{w_1,\cdots, w_{j_1}\})^{1/2} \nonumber \\ &&\hspace{0.5cm} - f_j^L (\{w_1,\cdots, w_{j_1}\})^{1/2} ]^2 dw_1 \cdots dw_{j_1} \Big\}^{1/2} \nonumber \\ &\leq & \delta_{n,j}. \end{eqnarray*} Here $n_0$ is a nonnegative integer to be suitably chosen later. Define for $k=0,\cdots, n_0$, \begin{eqnarray*} u_k (\{ w_1,\cdots, w_{N(T)}\}) &=& \min_{0\leq j\leq k} f_j^U (\{w_1,\cdots, w_{N(T)}\}), \nonumber \\ l_k(\{ w_1,\cdots, w_{N(T)}\}) &=& \max_{0\leq j\leq k} f_j^L (\{w_1,\cdots, w_{N(T)}\}). \end{eqnarray*} Then $\tilde{Z}_{l_k} \leq \tilde{Z}_{p_{s_1, r_1}} \leq \tilde{Z}_{u_k}$, $0 \leq \tilde{Z}_{u_{k+1} } - \tilde{Z}_{ l_{k+1} } \leq \tilde{Z}_{ u_k} - \tilde{Z}_{ l_k}$ a.s., and \begin{displaymath} [ E_{s,r} (\tilde{Z}_{u_k} - \tilde{Z}_{l_k} )^2 ]^{1/2} \leq [ E_{s,r} (\tilde{Z}_{f_k^U} - \tilde{Z}_{f_k^L} )^2 ]^{1/2} \leq \delta_{n,k}, \hspace{0.5cm}\forall k=0,\cdots, n_0. \end{displaymath} If $n_0=0$, define $B_0= {\cal N}$. If $n_0\geq 1$, let $a_1> a_2> \cdots > 0$ be a sequence of constants and define \begin{eqnarray*} B_0 &=& \{ \tilde{Z}_{u_0} - \tilde{Z}_{l_0} \geq a_1\}, \nonumber \\ B_k &=& \{ \tilde{Z}_{u_k} - \tilde{Z}_{l_k} \geq a_{k+1}, \tilde{Z}_{u_j} - \tilde{Z}_{l_j} < a_{j+1}, j=0,\cdots, k-1\}, \hspace{0.5cm}\forall k=1,\cdots, n_0-1, \nonumber \\ B_{n_0} &=& ( \cup_{k=0}^{n_0-1} B_k )^c. \end{eqnarray*} Note that $\{ B_k: k=0,\cdots, n_0\}$ forms a partition of ${\cal N}$. Consequently, writing ${\bf 1}_{B_k}$ to denote the indicator function of $B_k$, we have \begin{eqnarray*} \tilde{Z}_{p_{s_1, r_1}} &=& \tilde{Z}_{u_0} + \sum_{k=0}^{n_0} ( \tilde{Z}_{u_k} {\bf 1}_{B_k} - \tilde{Z}_{u_0} {\bf 1}_{B_k} ) + \tilde{Z}_{p_{s_1, r_1}} - \sum_{k=0}^{n_0} \tilde{Z}_{u_k} {\bf 1}_{B_k} \nonumber \\ &=& \tilde{Z}_{u_0} +\sum_{j=1}^{n_0} ( \tilde{Z}_{u_j} - \tilde{Z}_{u_{j-1}} ) {\bf 1}_{\cup_{j\leq k\leq n_0} B_k} + \sum_{k=0}^{n_0} ( \tilde{Z}_{p_{s_1, r_1}} - \tilde{Z}_{u_k} ) {\bf 1}_{B_k}. \end{eqnarray*} Let $\eta_1, \cdots, \eta_{n_0+1}$ be strictly positive constants such that $2 \eta_1 + \cdots + 2 \eta_{n_0} +\eta_{n_0+1} \leq \gamma M/8$.
Then \begin{eqnarray} && P^*_{s,r} [ \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n (\tilde{Z}_{p_{s_1, r_1}} ) \geq M ] \nonumber \\ &\leq & P_{s,r}^* [ \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n (\tilde{Z}_{u_0} ) \geq M - \frac{\gamma M}{4} ] \nonumber \\ && + P_{s,r}^* [ \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n (\sum_{j=1}^{n_0} ( \tilde{Z}_{u_j } - \tilde{Z}_{u_{j-1}} ) {\bf 1}_{\cup_{j\leq k\leq n_0} B_k} ) \geq \sum_{j=1}^{n_0} \eta_j ] \nonumber \\ && + P_{s,r}^* [ \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n ( \sum_{k=0}^{n_0} ( \tilde{Z}_{p_{s_1, r_1}} - \tilde{Z}_{u_k} ) {\bf 1}_{B_k} ) \geq \frac{\gamma M}{8} +\sum_{k=1}^{n_0+1} \eta_k ] \nonumber \\ &\leq & |{\cal F}_{n,0} | \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } P_{s,r} [\nu_n (\tilde{Z}_{u_0} ) \geq M - \frac{\gamma M}{4} ] \nonumber \\ && + \sum_{j=1}^{n_0} P_{s,r}^* [ \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n ( ( \tilde{Z}_{u_j } - \tilde{Z}_{ u_{j-1}} ) {\bf 1}_{\cup_{j\leq k\leq n_0} B_k} ) \geq \eta_j ] \nonumber \\ && + \sum_{k=0}^{n_0-1} P_{s,r}^* [ \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n ( ( \tilde{Z}_{p_{s_1, r_1}} - \tilde{Z}_{u_k} ) {\bf 1}_{B_k} ) \geq \eta_{k+1} ] \nonumber \\ && + P_{s,r}^* [ \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n ( ( \tilde{Z}_{p_{s_1, r_1}} - \tilde{Z}_{u_{n_0}} ) {\bf 1}_{B_{n_0} } ) \geq \frac{\gamma M}{8} + \eta_{n_0+1} ] \nonumber \\ &\leq & |{\cal F}_{n,0}| \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } P_{s,r} [\nu_n (\tilde{Z}_{u_0} ) \geq M - \frac{\gamma M}{4} ] \nonumber \\ && + \sum_{j=1}^{n_0} (\prod_{l=0}^j | {\cal F}_{n,l} | ) \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } P_{s,r} [ \nu_n ( ( \tilde{Z}_{u_j} - \tilde{Z}_{u_{j-1}} ) {\bf 1}_{\cup_{j\leq k\leq n_0} B_k} ) \geq \eta_j ] \nonumber \\ && + \sum_{k=0}^{n_0-1} P_{s,r}^* [ \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n ( ( \tilde{Z}_{p_{s_1, r_1}} - \tilde{Z}_{u_k} ) {\bf 1}_{B_k} ) \geq \eta_{k+1} ] \nonumber \\ && + P_{s,r}^* [ \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n ( ( \tilde{Z}_{p_{s_1, r_1}} - \tilde{Z}_{u_{n_0}} ) {\bf 1}_{B_{n_0} } ) \geq \frac{\gamma M}{8} + \eta_{n_0+1} ]. \label{eq:4.66} \end{eqnarray} Since $H^B$ may be replaced by a larger continuous function at the expense of an arbitrarily small increase in these values, we shall follow Alexander (1984), page 1045, and assume for the rest of this proof that $H^B(., {\cal F}_{\tilde{\kappa}, q, n},\rho_{{\cal F}_{\tilde{\kappa}, q, n} } )$ is continuous and strictly decreasing from $\infty$ to $0$ on $(0, a]$ for some $a$. We define \begin{displaymath} \delta_{n,0} = \inf \{ x: H^B( \frac{x }{ 2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } ) \leq \frac{\gamma }{4} \psi( M, t^2, n) \}. 
\end{displaymath} {\sc Case 1.} Suppose that $\delta_{n,0} > \gamma M/( 8 n^{1/2})$. Then we further define \begin{eqnarray*} \delta_{n, j} &=& \max \{ \frac{\gamma M}{ 8 n^{1/2} }, \sup\{ x\leq \frac{\delta_{n,j-1} }{2}: H^B (\frac{x }{2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } ) \geq \nonumber \\ && \hspace{0.5cm} 4 H^B (\frac{ \delta_{n,j-1} }{ 2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } ) \} \}, \hspace{0.5cm} \forall j= 1, 2,\cdots, \nonumber \\ n_0 &=& \min\{ j: \delta_{n,j} = \frac{ \gamma M}{ 8 n^{1/2}} \}, \nonumber \\ \eta_j &=& 4 \delta_{n, j-1} [ \frac{ \sum_{i=0}^j H^B (\delta_{n,i} e^{-\tau/2}/2, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } ) }{ \gamma } ]^{1/2}, \hspace{0.5cm}\forall j=1,\cdots, n_0+1, \nonumber \\ a_j &=& \frac{ 8 n^{1/2} \delta_{n, j-1}^2 }{ \eta_j}, \hspace{0.5cm}\forall j=1,\cdots, n_0. \end{eqnarray*} Thus $\delta_{n, n_0} = \delta_{n, n_0+1} = \gamma M/(8 n^{1/2})$, \begin{eqnarray*} H^B ( \frac{\delta_{n,0} }{ 2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } ) &=& \frac{ \gamma \psi( M, t^2, n) }{4}, \nonumber \\ \frac{ \delta_{n,0} }{ 2 e^{\tau/2} } & \leq & \frac{ t}{10}. \end{eqnarray*} We observe from Lemma 3.1 of Alexander (1984) and (\ref{eq:4.89}) that \begin{eqnarray*} 2 \sum_{j=1}^{n_0+1} \eta_j &= & \frac{ 8 }{\gamma^{1/2} } \sum_{j=1}^{n_0+1} \delta_{n, j-1} [ \sum_{i=0}^j H^B (\frac{ \delta_{n,i} }{ 2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } ) ]^{1/2} \nonumber \\ &\leq & \frac{ 8 }{\gamma^{1/2} } \sum_{j=1}^{n_0-1} \delta_{n,j-1} [ \frac{4}{3} H^B (\frac{\delta_{n,j} }{ 2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } ) ]^{1/2} \nonumber \\ & & + \frac{ 8 }{\gamma^{1/2} } \sum_{j=n_0}^{n_0+1} \delta_{n, j-1} [ 4 H^B (\frac{ \delta_{n,j} }{ 2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n}} ) ]^{1/2} \nonumber \\ &\leq & \frac{ 16 }{\gamma^{1/2} } \sum_{j=0}^{n_0} \delta_{n,j} [ H^B (\frac{ \delta_{n,j+1} }{2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } ) ]^{1/2} \nonumber \\ &\leq & \frac{ 2^7 }{\gamma^{1/2} } \int_{\gamma M/(32 n^{1/2})}^{\delta_{n,0} } [ H^B (\frac{x }{ 2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } ) ]^{1/2} dx \nonumber \\ &\leq & \frac{ \gamma M}{8}. 
\end{eqnarray*} Now we observe from (\ref{eq:3.21}) that \begin{eqnarray*} && |{\cal F}_{n,0} | \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } P_{s,r} [\nu_n (\tilde{Z}_{u_0} ) \geq (1 - 2^{-2} \gamma ) M ] \nonumber \\ &\leq & e^{ H^B (\delta_{n,0} e^{-\tau/2}/2, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } )} \exp\{ - \frac{ (1 - 2^{-2} \gamma )^2 M^2 }{ 8 [ 8 c_0 ( 2^{-1} e^{-\tau/2} \delta_{n,0} + t)^2 + 2 n^{-1/2} (1- 2^{-2} \gamma ) M ] } \} \nonumber \\ &\leq & e^{ H^B (\delta_{n,0} e^{-\tau/2}/2, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } )} \exp\{ - \frac{ (1 - 2^{-2} \gamma )^2 M^2 }{ 16 [ 8 c_0 t^2 + n^{-1/2} (1- 2^{-2} \gamma ) M ] } \} \nonumber \\ &\leq & \exp\{ [\frac{\gamma }{4} - (1 - \frac{ \gamma }{4} )^2 ] \psi (M, t^2, n) \} \nonumber \\ &\leq & \exp[ - (1 - \frac{ 3 \gamma }{4} ) \psi (M, t^2, n) ], \end{eqnarray*} since \begin{eqnarray*} \| u_0^{1/2} - p_{s,r}^{1/2} \|_2 & \leq & \| u_0^{1/2} - p_{s_1, r_1}^{1/2} \|_2 + \| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \nonumber \\ &\leq & \frac{ \delta_{n,0} }{ 2 e^{\tau/2} } + t \nonumber \\ &\leq & \frac{ 11 t}{ 10}. \end{eqnarray*} Next we have \begin{eqnarray*} {\rm Var}_{s,r} [ (\tilde{Z}_{u_j} - \tilde{Z}_{u_{j-1}} ) {\bf 1}_{\cup_{j\leq k \leq n_0} B_k} ] &\leq & E_{s,r} [( \tilde{Z}_{u_{j-1}} -\tilde{Z}_{l_{j-1}} )^2 ] \nonumber\\ &\leq & \delta_{n, j-1}^2, \hspace{0.5cm}\forall j=1,\cdots, n_0, \end{eqnarray*} and $-a_j \leq \tilde{Z}_{l_{j-1}} - \tilde{Z}_{u_{j-1}} \leq \tilde{Z}_{u_j} - \tilde{Z}_{u_{j-1}} \leq 0$ on $\cup_{j\leq k\leq n_0} B_k$. Hence by the one-sided version of Bernstein's inequality [see Bennett (1962), page 38], \begin{displaymath} P_{s,r} (\nu_n ( (\tilde{Z}_{u_j} - \tilde{Z}_{u_{j-1}} ) {\bf 1}_{\cup_{j\leq k\leq n_0} B_k} ) \geq \eta_j) \leq \exp [ -\frac{ \eta_j^2 }{ 2( \delta_{n, j-1}^2 + a_j \eta_j/(3 n^{1/2}) )} ], \hspace{0.5cm}\forall j=1,\cdots, n_0.
\end{displaymath} Since \begin{displaymath} \frac{ \eta_j^2 }{ 2( \delta_{n, j-1}^2 + a_j \eta_j/(3 n^{1/2}) )} = \frac{ 3 \eta_j^2 }{ 22 \delta_{n, j-1}^2} \geq \frac{ 2}{ \gamma} \sum_{i=0}^j H^B (\frac{ \delta_{n,i}}{ 2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } ), \end{displaymath} we have \begin{eqnarray*} && \sum_{j=1}^{n_0} (\prod_{l=0}^j | {\cal F}_{n,l} | ) \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } P_{s,r} [ \nu_n ( ( \tilde{Z}_{u_j} - \tilde{Z}_{u_{j-1}} ) {\bf 1}_{\cup_{j\leq k\leq n_0} B_k} ) \geq \eta_j ] \nonumber \\ &\leq & \sum_{j=1}^{n_0} \exp [ \sum_{i =0}^j H^B (\frac{ \delta_{n,i} }{ 2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } ) -\frac{ \eta_j^2 }{ 2( \delta_{n, j-1}^2 + a_j \eta_j/(3 n^{1/2}) )} ] \nonumber \\ &\leq & \sum_{j=1}^{n_0-1} \exp [ -\frac{ 2(1 - \gamma)}{\gamma } \sum_{i=0}^j H^B (\frac{ \delta_{n, i} }{ 2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } ) ] \nonumber \\ &&\hspace{0.5cm} + \exp[-\frac{2(1 -\gamma) }{\gamma } \sum_{i=0}^{n_0} H^B (\frac{ \delta_{n,i}}{ 2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } )] \nonumber \\ &\leq & \sum_{j=1}^{n_0-1} \exp [ -\frac{ 2 (1 - \gamma )( \sum_{i=0}^j 4^i) }{\gamma } H^B (\frac{ \delta_{n,0}}{2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n }} ) ] \nonumber \\ &&\hspace{0.5cm} + \exp[-\frac{ 2 ( 1 -\gamma ) (4^{n_0-1} +\sum_{i=0}^{n_0-1} 4^i ) }{\gamma } H^B (\frac{ \delta_{n,0}}{2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } )] \nonumber \\ &= & \sum_{j=1}^{n_0-1} \exp [ -\frac{ (1 - \gamma )( \sum_{i=0}^j 4^i) }{2 } \psi (M, t^2, n) ] \nonumber \\ &&\hspace{0.5cm} + \exp[-\frac{ ( 1 -\gamma ) (4^{n_0-1} +\sum_{i=0}^{n_0-1} 4^i ) }{2 } \psi( M, t^2, n) ]. \end{eqnarray*} Thus it follows from (\ref{eq:4.25}) that \begin{eqnarray*} && \sum_{j=1}^{n_0} (\prod_{l=0}^j | {\cal F}_{n, l} | ) \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } P_{s,r} [ \nu_n ( ( \tilde{Z}_{u_j} - \tilde{Z}_{u_{j-1}} ) {\bf 1}_{\cup_{j\leq k\leq n_0} B_k} ) \geq \eta_j ] \nonumber \\ &\leq & 2 \exp [ - (1 - \gamma ) \psi (M, t^2, n) ]. \end{eqnarray*} Next using Markov's inequality, we observe that for $0\leq k\leq n_0-1$, \begin{displaymath} P_{s,r} ( B_k) \leq P_{s,r} ( \tilde{Z}_{u_k} - \tilde{Z}_{l_k} \geq a_{k+1} ) \leq \frac{ E_{s,r} (\tilde{Z}_{u_k} - \tilde{Z}_{l_k} )^2 }{ a_{k+1}^2 } \leq \frac{ \delta_{n,k}^2}{ a^2_{k+1}}. \end{displaymath} Hence \begin{eqnarray*} && \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n ((\tilde{Z}_{p_{s_1, r_1} } - \tilde{Z}_{u_k} ) {\bf 1}_{B_k} ) \nonumber \\ & \leq & n^{1/2} \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } E_{s,r} [ ( \tilde{Z}_{u_k} - \tilde{Z}_{l_k} ) {\bf 1}_{B_k} ] \nonumber \\ &\leq & n^{1/2} \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } [ E_{s,r} ( \tilde{Z}_{u_k} - \tilde{Z}_{l_k} )^2 P_{s,r}( B_k ) ]^{1/2} \nonumber \\ & \leq & \frac{ n^{1/2} \delta^2_{n,k} }{ a_{k+1} } \nonumber \\ &= & \frac{ \eta_{k+1}}{ 8}, \hspace{0.5cm}\forall k=0,\cdots, n_0-1. 
\end{eqnarray*} This implies that \begin{displaymath} \sum_{k=0}^{n_0-1} P_{s,r}^* [ \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n ( ( \tilde{Z}_{p_{s_1, r_1}} - \tilde{Z}_{u_k} ) {\bf 1}_{B_k} ) \geq \eta_{k+1} ] = 0. \end{displaymath} Finally, \begin{eqnarray*} && \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n ( ( \tilde{Z}_{p_{s_1, r_1}} - \tilde{Z}_{u_{n_0}} ) {\bf 1}_{B_{n_0} } ) \nonumber \\ &\leq & \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } n^{1/2} E_{s, r} | \tilde{Z}_{p_{s_1, r_1}} - \tilde{Z}_{u_{n_0}} | \nonumber \\ &\leq & n^{1/2} \delta_{n, n_0} \nonumber \\ &= & \frac{\gamma M}{ 8 }, \end{eqnarray*} and consequently \begin{displaymath} P_{s,r}^* [ \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n ( ( \tilde{Z}_{p_{s_1, r_1}} - \tilde{Z}_{u_{n_0}} ) {\bf 1}_{B_{n_0} } ) \geq \frac{\gamma M}{8} +\eta_{n_0+1} ] = 0. \end{displaymath} Thus we conclude from (\ref{eq:4.66}) that \begin{displaymath} P_{s,r}^* [ \sup_{\| p^{1/2}_{s_1, r_1} - p^{1/2}_{s, r} \|_2 \leq t, p_{s_1,r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n( \tilde{Z}_{p_{s_1, r_1}} ) \geq M] \leq 3 e^{-(1-\gamma ) \psi( M, t^2, n)}. \end{displaymath} {\sc Case 2.} Suppose that $\delta_{n,0} \leq \gamma M/( 8 n^{1/2})$. Then define $n_0=0$ and $\eta_1 = \gamma M/16$. As in Case 1, we have \begin{displaymath} |{\cal F}_{n,0}| \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } P_{s,r} [\nu_n (\tilde{Z}_{u_0} ) \geq (1 - \frac{\gamma}{4} ) M ] \leq \exp[ - (1 - \frac{ 3\gamma}{4} ) \psi (M, t^2, n) ], \end{displaymath} and \begin{displaymath} P_{s,r}^* [ \sup_{\| p_{s_1, r_1}^{1/2} - p_{s,r}^{1/2} \|_2 \leq t, p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n ( ( \tilde{Z}_{p_{s_1, r_1}} - \tilde{Z}_{u_{n_0}} ) {\bf 1}_{B_{n_0} } ) \geq \frac{\gamma M}{8} + \eta_{n_0+1} ] = 0. \end{displaymath} This completes the proof of Lemma \ref{la:4.10}.\hfill $\Box$ {\sc Proof of Proposition \ref{pn:4.1}.} Without loss of generality, we can assume that \begin{equation} 4 \exp [ - \frac{ n\varepsilon^2 }{ 2^7 (250) } ] \leq 1. \label{eq:a.56} \end{equation} We observe that (\ref{eq:4.90}) holds with $\varepsilon$ replaced by any $s$ such that $\varepsilon \leq s \leq 1$. Let $\gamma = 1/2$, $e^{\tau/2} = 5$, $t = \sqrt{2} s$ and $M= \gamma n^{1/2} s^2/2$. Then it follows from Lemma \ref{la:a.6} that \begin{eqnarray*} \int_{ \gamma M/(32 n^{1/2}) }^{e^{\tau/2} t/5} [H^B (\frac{x}{2 e^{\tau/2}}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} }) ]^{1/2} dx &\leq & \int_{s^2/2^8}^{\sqrt{2} s} [ 2^{(q+2)/q} C_{\tilde{\kappa}}^{1/q} C_{\tilde{\kappa}, q} (\frac{2 e^{\tau/2} }{ x} )^{1/q}]^{1/2} dx \nonumber \\ & \leq & \frac{ n^{1/2} s^2 }{ 2^{13} \sqrt{2} } \nonumber \\ &= & \frac{ M \gamma^{3/2} }{ 2^{10} }, \end{eqnarray*} and since $q>1/2$, \begin{eqnarray*} \sqrt{ 2^{(q+2)/q} C_{\tilde{\kappa}}^{1/q} C_{\tilde{\kappa}, q} (2 e^{\tau/2} )^{1/q} } &\leq & \frac{ n^{1/2} s^2 }{ 2^{13} \sqrt{2} } (\frac{ 2 q-1}{2 q}) \frac{1}{ (\sqrt{2} s)^{( 2q-1)/( 2q)} - (2^{-3} s^2)^{(2q-1)/(2q)} } \nonumber \\ &=& \frac{ M }{ 2^{11} \sqrt{2} } (\frac{ 2 q-1}{2 q}) \frac{1}{ (\sqrt{2} s)^{( 2q-1)/( 2q)} - (2^{-3} s^2)^{(2q-1)/(2q)} }. 
\end{eqnarray*} Also with $\psi(M, t^2, n)$ as in Lemma \ref{la:4.10}, we have \begin{eqnarray*} H^B ( \frac{ t}{10}, {\cal F}_{\tilde{\kappa}, q, n}, \rho_{{\cal F}_{\tilde{\kappa}, q, n} } ) &\leq & \frac{ 2^{(q+2)/q} C_{\tilde{\kappa}}^{1/q} C_{\tilde{\kappa}, q} (2 e^{\tau/2} )^{1/q} }{ (\sqrt{2} s)^{1/q } } \nonumber \\ &\leq & \frac{ M^2 }{ 2^{23} s^{1/q} } (\frac{ 2q-1}{2q})^2 \frac{1}{ [ (\sqrt{2} s)^{( 2q-1)/( 2q)} - (2^{-3} s^2)^{(2q-1)/(2q)} ]^2 } \nonumber \\ &\leq & \frac{ M^2}{ 2^{23} s^2} \nonumber \\ &< & \frac{ M^2}{ 4 s^2 ( 64 c_0 + 1)} \nonumber \\ &=& \frac{\gamma}{4} \psi( M, t^2, n). \end{eqnarray*} Consequently, we observe from Lemma \ref{la:4.10} that \begin{displaymath} P_{s,r}^* [ \sup_{ \| p_{s_1, r_1}^{1/2}- p_{s, r}^{1/2} \|_2^2 \leq 2 s^2, p_{s_1,r_1} \in {\cal F}_{\tilde{\kappa}, q, n} } \nu_n( \tilde{Z}_{p_{s_1, r_1}} ) \geq \frac{ n^{1/2} s^2 }{4}] \leq 3 \exp[ - \frac{ n s^2 }{ 2^7 (2^6 c_0 +1) }]. \end{displaymath} Let $A = \{ p_{s_1, r_1} \in {\cal F}_{\tilde{\kappa}, q, n}: s^2 \leq \| p_{s_1,r_1}^{1/2}- p_{s,r}^{1/2} \|_2^2 \leq 2 s^2\}$. By Lemma 4 of Wong and Shen (1995), \begin{displaymath} \sup_A E( \tilde{Z}_{p_{s_1, r_1}} ) \leq -(1-\delta) s^2, \end{displaymath} where $\delta = 2 e^{-\tau/2} ( 1- e^{-\tau/2} )^{-2} = 5/8$. Now \begin{eqnarray*} && P_{s,r}^* \{ \sup_A \prod_{i=1}^n \frac{ p_{s_1, r_1} (\{w_{i,1},\cdots, w_{i, N_i (T)} \}) }{ p_{s, r} (\{w_{i,1},\cdots, w_{i, N_i (T)} \}) } \geq \exp[ - n s^2 (1 - \delta - \frac{1}{4})] \} \nonumber \\ &\leq & P_{s,r}^* \{ \sup_A \nu_n (\tilde{Z}_{p_{s_1, r_1} } ) \geq \frac{ n^{1/2} s^2}{ 4} \}, \end{eqnarray*} and hence \begin{eqnarray*} && P_{s,r}^* \{ \sup_{ s^2 \leq \|p_{s_1, r_1}^{1/2} - p_{s, r}^{1/2} \|_2^2 \leq 2 s^2, p_{s_1, r_1}\in {\cal F}_{\tilde{\kappa}, q, n} } \prod_{i=1}^n \frac{ p_{s_1, r_1} (\{w_{i,1},\cdots, w_{i, N_i (T)} \}) }{ p_{s, r} (\{w_{i,1},\cdots, w_{i, N_i (T)} \}) } \geq e^{ - n s^2/8} \} \nonumber \\ &\leq & 3 \exp[ - \frac{ n s^2 }{ 2^7 (2^6 c_0 +1) }]. \end{eqnarray*} Let $L$ be the smallest integer such that $2^L \varepsilon^2 \geq 4 \geq \max \{ \| p_{s_1, r_1}^{1/2} - p_{s, r}^{1/2} \|_2^2: p_{s_1, r_1}\in {\cal F}_{\tilde{\kappa}, q, n} \}$. Then \begin{eqnarray*} && P_{s,r}^* \{ \sup_{ \| p_{s_1, r_1}^{1/2} - p_{s, r}^{1/2}\|_2 \geq \varepsilon, p_{s_1, r_1}\in {\cal F}_{\tilde{\kappa}, q, n} } \prod_{i=1}^n \frac{ p_{s_1, r_1} (\{w_{i,1},\cdots, w_{i, N_i (T)} \}) }{ p_{s, r} (\{w_{i,1},\cdots, w_{i, N_i (T)} \}) } \geq e^{ - n \varepsilon^2/8} \} \nonumber \\ &\leq & \sum_{j=0}^L P_{s,r}^* \{ \sup_{2^j \varepsilon^2 \leq \| p_{s_1, r_1}^{1/2}- p_{s, r}^{1/2} \|_2^2 < 2^{j+1} \varepsilon^2, p_{s_1, r_1}\in {\cal F}_{\tilde{\kappa}, q, n} } \prod_{i=1}^n \frac{ p_{s_1, r_1} (\{w_{i,1},\cdots, w_{i, N_i (T)} \}) }{ p_{s, r} (\{w_{i,1},\cdots, w_{i, N_i (T)} \}) } \geq e^{ - n \varepsilon^2/8} \} \nonumber \\ &\leq & 3 \sum_{j=0}^L \exp[ - \frac{ 2^j n \varepsilon^2 }{ 2^7 (2^6 c_0 +1) }]. \end{eqnarray*} Hence we conclude from (\ref{eq:a.56}) that \begin{eqnarray*} && P_{s,r}^* \{ \sup_{ \| p_{s_1, r_1}^{1/2}- p_{s, r}^{1/2} \|_2 \geq \varepsilon, p_{s_1, r_1}\in {\cal F}_{\tilde{\kappa}, q, n} } \prod_{i=1}^n \frac{ p_{s_1, r_1} (\{w_{i,1},\cdots, w_{i, N_i (T)} \}) }{ p_{s, r} (\{w_{i,1},\cdots, w_{i, N_i (T)} \}) } \geq e^{ - n \varepsilon^2/8} \} \nonumber \\ &\leq & 4 \exp[ - \frac{ n \varepsilon^2 }{ 2^7 (2^6 c_0 +1) }]. \end{eqnarray*} This proves Proposition \ref{pn:4.1}. 
\hfill $\Box$ \begin{la} \label{la:a.43} Let $s^\dag_n (t)$ and $r^\dag_n (t)$ be as in (\ref{eq:3.23}). Then \begin{displaymath} E_{s,r} ( \frac{ p_{s,r} }{p_{s^\dag_n, r^\dag_n } } -1 ) \leq C_{\tilde{\kappa}, 1} \delta_n, \end{displaymath} where $C_{\tilde{\kappa}, 1}$ is a constant depending only on $\tilde{\kappa}$. \end{la} {\sc Proof.} Let $\bar{\kappa} = \kappa_0\vee 1$. Then \begin{eqnarray*} && E_{s,r} (\frac{ p_{s,r} }{p_{s^\dag_n, r^\dag_n }} -1) \nonumber \\ &=& \sum_{j=0}^\infty \int_{0<w_1<\cdots < w_j<T} \frac{ e^{-\int_0^T s(t) r(t- w_{\zeta(t)} ) dt } \prod_{i=1}^j s(w_i) r(w_i - w_{i-1}) }{ e^{-\int_0^T s^\dag_n(t) r^\dag_n (t- w_{\zeta(t)} ) dt } \prod_{i=1}^j s^\dag_n (w_i) r^\dag_n (w_i - w_{i-1}) } \nonumber \\ &&\hspace{0.5cm}\times [ e^{-\int_0^T s(t) r(t- w_{\zeta(t)} ) dt } \prod_{i=1}^j s(w_i) r(w_i - w_{i-1}) - \nonumber \\ &&\hspace{1cm} e^{-\int_0^T s^\dag_n(t) r^\dag_n (t- w_{\zeta(t)} ) dt } \prod_{i=1}^j s^\dag_n (w_i) r^\dag_n (w_i - w_{i-1}) ] dw_1 \cdots dw_j \nonumber \\ &\leq & e^{ 2 \bar{\kappa}^2 \delta_n (2 \bar{\kappa} +\delta_n) T} \sum_{j=0}^\infty \int_{0<w_1<\cdots < w_j<T} | e^{-\int_0^T s(t) r(t- w_{\zeta(t)} ) dt } \prod_{i=1}^j s(w_i) r(w_i - w_{i-1}) - \nonumber \\ &&\hspace{1cm} e^{-\int_0^T s^\dag_n(t) r^\dag_n (t- w_{\zeta(t)} ) dt } \prod_{i=1}^j s^\dag_n (w_i) r^\dag_n (w_i - w_{i-1}) | dw_1 \cdots dw_j. \end{eqnarray*} Now we observe that \begin{displaymath} | s^\dag_n (w_i) r^\dag_n (w_i - w_{i-1}) - s(w_i) r(w_i - w_{i-1}) | \leq 2 \bar{\kappa}^2 \delta_n ( 2 \bar{\kappa} + \delta_n), \end{displaymath} and \begin{eqnarray*} && | e^{ -\int_0^T s^\dag_n(t) r^\dag_n (t- w_{\zeta(t)} ) dt } - e^{ -\int_0^T s(t) r(t- w_{\zeta(t)} ) dt } | \nonumber \\ &\leq & |1 - e^{ \int_0^T [ s^\dag_n(t) r^\dag_n (t- w_{\zeta(t)} ) - s(t) r(t- w_{\zeta(t)} ) ] dt } | \nonumber \\ &\leq & 2 \bar{\kappa}^2 \delta_n (2 \bar{\kappa} + \delta_n) T \sum_{i=0}^\infty \frac{ [ 2 \bar{\kappa}^2 \delta_n (2 \bar{\kappa} + \delta_n) T ]^i }{(i+1)!}. \end{eqnarray*} This implies that for $j=1,2, \cdots,$ \begin{eqnarray*} && \int_{0<w_1<\cdots < w_j<T} | e^{-\int_0^T s(t) r(t- w_{\zeta(t)} ) dt } \prod_{i=1}^j s(w_i) r(w_i - w_{i-1}) - \nonumber \\ &&\hspace{1cm} e^{-\int_0^T s^\dag_n(t) r^\dag_n (t- w_{\zeta(t)} ) dt } \prod_{i=1}^j s^\dag_n (w_i) r^\dag_n (w_i - w_{i-1}) | dw_1 \cdots dw_j \nonumber \\ &\leq & \delta_n \Big\{ 2 \bar{\kappa}^{4 j + 2} (2 \bar{\kappa}+ \delta_n) T \sum_{i=0}^\infty \frac{ [2 \bar{\kappa}^2 (2 \bar{\kappa} + \delta_n) T ]^i }{(i+1)!} + 2 j \bar{\kappa}^{4 j -2} (2 \bar{\kappa} + \delta_n) \Big\} \frac{ T^j}{j!}. \end{eqnarray*} This proves Lemma \ref{la:a.43}. \hfill $\Box$ \begin{la} \label{la:a.1} Let $N(t), t\in [0, T),$ be a counting process with conditional intensity $\lambda_1(.|.)$ as in (\ref{eq:1.1}). Suppose that (\ref{eq:4.86}) holds and \begin{equation} \xi (t) := \lim_{\delta \downarrow 0} \frac{1}{\delta} P_{s, r} [ N( t+\delta ) - N(t ) =1 ], \hspace{0.5cm} \forall t \in [0, T). 
\label{eq:a.63} \end{equation} Then for $s_1, r_1 \in \Theta_{\tilde{\kappa}, q, n}$, \begin{eqnarray*} && \sum_{j=0}^{n_\theta } \int_{0<w_1<\cdots < w_j<T} p_{s,r} (\{w_1,\cdots, w_j\}) \log [\frac{p_{s,r} (\{w_1,\cdots, w_j\}) }{ p_{s_1, r_1} (\{w_1,\cdots, w_j\}) } ] dw_1 \cdots dw_j \nonumber \\ &=& \int_0^T \{ \frac{s_1 (t) }{s(t)} -1 - \log[ \frac{s_1 (t) }{ s(t)} ] \} s(t) e^{-\int_0^t s(u) du } dt \nonumber \\ && + \int_0^T \int_0^t \{ \frac{ s_1 (t) r_1 (u) }{ s(t) r(u) } -1 - \log[ \frac{s_1 (t) r_1 (u)}{ s(t) r(u)} ] \} \xi( t-u) s(t) r(u) e^{- \int_{t-u }^t s(v) r(v-t+u ) dv } du dt. \end{eqnarray*} Also if \begin{displaymath} \sum_{j=0}^{n_\theta } \int_{0<w_1<\cdots < w_j<T} p_{s,r} (\{w_1,\cdots, w_j\}) \log [\frac{p_{s,r} (\{w_1,\cdots, w_j\}) }{ p_{s_1, r_1} (\{w_1,\cdots, w_j\}) } ] dw_1 \cdots dw_j \leq 1, \end{displaymath} then \begin{eqnarray*} && \sum_{j=0}^{n_\theta } \int_{0<w_1<\cdots < w_j<T} p_{s,r} (\{w_1,\cdots, w_j\}) \log [\frac{p_{s,r} (\{w_1,\cdots, w_j\}) }{ p_{s_1, r_1} (\{w_1,\cdots, w_j\}) } ] dw_1 \cdots dw_j \nonumber \\ &\geq & \min\{ \frac{1}{20 \int_0^T s(t) e^{-\int_0^t s(u) du} dt} , \frac{1}{200} \} [ \int_0^T | s_1 (t) - s(t) | e^{-\int_0^t s(u) du } dt ]^2, \end{eqnarray*} and \begin{eqnarray*} && \sum_{j=0}^{n_\theta } \int_{0<w_1<\cdots < w_j<T} p_{s,r} (\{w_1,\cdots, w_j\}) \log [\frac{p_{s,r} (\{w_1,\cdots, w_j\}) }{ p_{s_1, r_1} (\{w_1,\cdots, w_j\}) } ] dw_1 \cdots dw_j \nonumber \\ &\geq & \min\{ \frac{1}{20 \int_0^T \int_0^t \xi( t-u) s(t) r(u) e^{-\int_{t-u}^t s(v) r( v-t+u) dv} du dt} , \frac{1}{200} \} \nonumber \\ && \hspace{0.5cm}\times [ \int_0^T \int_0^t | s_1 (t) r_1 (u) - s(t) r(u) | \xi( t-u) e^{- \int_{t-u }^t s(v) r(v-t+u ) dv } du dt ]^2. \end{eqnarray*} \end{la} {\sc Proof.} Writing the expectation as a Lebesgue-Stieltjes integral [cf.\ Aalen (1978) and Karr (1987); see also Miller (1985), page 1455, for a different approach], we observe that \begin{eqnarray*} && \sum_{j=0}^{n_\theta } \int_{0<w_1<\cdots < w_j<T} p_{s,r} (\{w_1,\cdots, w_j\}) \log [\frac{p_{s,r} (\{w_1,\cdots, w_j\}) }{ p_{s_1, r_1} (\{w_1,\cdots, w_j\}) } ] dw_1 \cdots dw_j \nonumber \\ &=& E_{s,r} \log [\frac{p_{s,r} (\{w_1,\cdots, w_{N(T)} \}) }{ p_{s_1, r_1} (\{w_1,\cdots, w_{N(T)} \}) } ] \nonumber \\ &=& E_{s,r} \{ - \int_0^T s(t) r(t- w_{N(t)} ) dt + \sum_{j=1}^{N(T)} \log [s(w_j) r(w_j - w_{j-1} ) ] \nonumber \\ && + \int_0^T s_1(t) r_1(t- w_{N(t)} ) dt - \sum_{j=1}^{N(T)} \log [s_1 (w_j) r_1 (w_j - w_{j-1} ) ] \} \nonumber \\ &=& -\int_0^T s (t ) e^{-\int_0^t s(u) du } dt + \int_0^T \log [s (t) ] e^{-\int_0^t s(u) du} s(t) dt \nonumber \\ && - \int_0^T \int_0^t s (t) r (u) \xi( t-u) e^{- \int_{t-u }^t s(v) r(v-t+u ) dv } du dt \nonumber \\ && + \int_0^T \int_0^t \log[ s (t) r( u) ] \xi( t-u) e^{- \int_{t-u }^t s(v) r(v-t+u ) dv } s(t) r( u) du dt \nonumber \\ && +\int_0^T s_1 (t ) e^{-\int_0^t s(u) du } dt - \int_0^T \log [s_1 (t) ] e^{-\int_0^t s(u) du} s(t) dt \nonumber \\ && + \int_0^T \int_0^t s_1 (t) r_1 (u) \xi( t-u) e^{- \int_{t-u }^t s(v) r(v-t+u ) dv } du dt \nonumber \\ && - \int_0^T \int_0^t \log[ s_1 (t) r_1 ( u) ] \xi( t-u) e^{- \int_{t-u }^t s(v) r(v-t+u ) dv } s(t) r( u) du dt \nonumber \\ &=& \int_0^T \{ \frac{s_1 (t) }{s(t)} -1 - \log[ \frac{s_1 (t) }{ s(t)} ] \} s(t) e^{-\int_0^t s(u) du } dt \nonumber \\ && + \int_0^T \int_0^t \{ \frac{ s_1 (t) r_1 (u) }{ s(t) r(u) } -1 - \log[ \frac{s_1 (t) r_1 (u)}{ s(t) r(u)} ] \} \xi( t-u) s(t) r(u) e^{- \int_{t-u }^t s(v) r(v-t+u ) dv } du dt. 
\end{eqnarray*} Next suppose that \begin{displaymath} \sum_{j=0}^{n_\theta } \int_{0<w_1<\cdots < w_j<T} p_{s,r} (\{w_1,\cdots, w_j\}) \log [\frac{p_{s,r} (\{w_1,\cdots, w_j\}) }{ p_{s_1, r_1} (\{w_1,\cdots, w_j\}) } ] dw_1 \cdots dw_j \leq 1. \end{displaymath} Since $y-1-\log(y) \geq 0$ for all $y \in [0, \infty)$ with equality only if $y=1$, we conclude that \begin{equation} \int_0^T \{ \frac{s_1 (t) }{s(t)} -1 - \log[ \frac{s_1 (t) }{ s(t)} ] \} s(t) e^{-\int_0^t s(u) du } dt \leq 1, \label{eq:a.31} \end{equation} and \begin{displaymath} \int_0^T \int_0^t \{ \frac{ s_1 (t) r_1 (u) }{ s(t) r(u) } -1 - \log[ \frac{s_1 (t) r_1 (u)}{ s(t) r(u)} ] \} \xi( t-u) s(t) r(u) e^{- \int_{t-u }^t s(v) r(v-t+u ) dv } du dt \leq 1. \end{displaymath} Observing that \begin{displaymath} y-1 -\log (y) \geq \frac{ (y-1)^2}{10} {\bf 1}\{y\in (0, 6)\} +\frac{ y-1}{10} {\bf 1}\{y\geq 6\}, \end{displaymath} it follows from (\ref{eq:a.31}) that \begin{eqnarray*} && \int_0^T \{ \frac{s_1 (t) }{s(t)} -1 - \log[ \frac{s_1 (t) }{ s(t)} ] \} s(t) e^{-\int_0^t s(u) du } dt \nonumber \\ &\geq & \frac{1}{10} \int_0^T ( \frac{s_1 (t) }{s(t)} -1 )^2 {\bf 1}\{ \frac{ s_1(t) }{ s(t)} \in (0, 6)\} s(t) e^{-\int_0^t s(u) du } dt \nonumber \\ && + \frac{1}{10} \int_0^T ( \frac{s_1 (t) }{s(t)} -1 ) {\bf 1}\{ \frac{ s_1(t) }{ s(t)} \geq 6\} s(t) e^{-\int_0^t s(u) du } dt \nonumber \\ &\geq & \frac{1}{10 \int_0^T s(t) e^{-\int_0^t s(u) du} dt} [ \int_0^T | \frac{s_1 (t) }{s(t)} -1 | {\bf 1}\{ \frac{ s_1(t) }{ s(t)} \in (0, 6)\} s(t) e^{-\int_0^t s(u) du } dt ]^2 \nonumber \\ && + \frac{1}{100} [ \int_0^T | \frac{s_1 (t) }{s(t)} -1 | {\bf 1}\{ \frac{ s_1(t) }{ s(t)} \geq 6\} s(t) e^{-\int_0^t s(u) du } dt ]^2 \nonumber \\ &\geq & \min\{ \frac{1}{20 \int_0^T s(t) e^{-\int_0^t s(u) du} dt} , \frac{1}{200} \} [ \int_0^T | \frac{s_1 (t) }{s(t)} -1 | s(t) e^{-\int_0^t s(u) du } dt ]^2. \end{eqnarray*} In a similar manner, we have \begin{eqnarray*} && \int_0^T \int_0^t \{ \frac{ s_1 (t) r_1 (u) }{ s(t) r(u) } -1 - \log[ \frac{s_1 (t) r_1 (u)}{ s(t) r(u)} ] \} \xi( t-u) s(t) r(u) e^{- \int_{t-u }^t s(v) r(v-t+u ) dv } du dt \nonumber \\ &\geq & \min\{ \frac{1}{20 \int_0^T \int_0^t \xi( t-u) s(t) r(u) e^{-\int_{t-u}^t s(v) r( v-t+u) dv} du dt} , \frac{1}{200} \} \nonumber \\ && \hspace{0.5cm}\times [ \int_0^T \int_0^t | \frac{ s_1 (t) r_1 (u) }{ s(t) r(u) } -1 | \xi( t-u) s(t) r(u) e^{- \int_{t-u }^t s(v) r(v-t+u ) dv } du dt ]^2. \end{eqnarray*} This proves Lemma \ref{la:a.1}. \hfill $\Box$ {\sc Proof of Lemma \ref{la:3.1}.} Following Yatracos (1988), page 1183, we observe that \begin{eqnarray} && \sup \{ E_{s, r} [ \int_0^T | \tilde{s}_n (t) - s(t) | dt]: s \in \Theta_{\tilde{\kappa}, q}, r \in \Theta_{\theta, \tilde{\kappa}, q} \} \nonumber \\ &\geq & \sup \{ E_{s_1, r_1} [ \int_0^T | \tilde{s}_n (t) - s_1 (t) | dt]: s_1 \in \tilde{\Theta}_{\tilde{\kappa}, q, n} \} \nonumber \\ &\geq & \frac{1}{{\rm card}(\tilde{\Theta}_{\tilde{\kappa}, q, n} ) } \sum_{ s_1 \in \tilde{\Theta}_{\tilde{\kappa}, q, n} } E_{s_1, r_1} [ \int_0^T | \tilde{s}_n (t) - s_1 (t) | dt], \label{eq:3.2} \end{eqnarray} for any $r_1\in \Theta_{\theta, \tilde{\kappa}, q}$. Define $\tilde{s}_n^* \in \tilde{\Theta}_{\tilde{\kappa}, q, n}$ such that \begin{displaymath} \int_0^T | \tilde{s}_n (t) - \tilde{s}_n^* (t) | dt = \inf \{ \int_0^T | \tilde{s}_n (t) - s_1(t) | dt: s_1 \in \tilde{\Theta}_{\tilde{\kappa}, q, n} \}. 
\end{displaymath} Then we have for $s_1 \in \tilde{\Theta}_{\tilde{\kappa}, q, n}$, \begin{eqnarray*} \int_0^T |\tilde{s}_n^* (t) - s_1(t) | dt &\leq & \int_0^T | \tilde{s}_n^* (t) - \tilde{s}_n (t) | dt + \int_0^T | \tilde{s}_n (t) - s_1(t)| dt \nonumber \\ &\leq & 2 \int_0^T | \tilde{s}_n (t) - s_1(t) | dt. \end{eqnarray*} So \begin{eqnarray} && \frac{1}{{\rm card}(\tilde{\Theta}_{\tilde{\kappa}, q, n} ) } \sum_{ s_1 \in \tilde{\Theta}_{\tilde{\kappa}, q, n} } E_{s_1, r_1} [ \int_0^T | \tilde{s}_n (t) - s_1 (t) | dt] \nonumber \\ &\geq & \frac{1}{2 {\rm card}(\tilde{\Theta}_{\tilde{\kappa}, q, n} ) } \sum_{ s_1 \in \tilde{\Theta}_{\tilde{\kappa}, q, n} } E_{s_1, r_1} [ \int_0^T | \tilde{s}_n^* (t) - s_1 (t) | dt] \nonumber \\ &\geq & \inf \{ \int_0^T |s_1(t) - s_2(t) | dt: s_1\neq s_2, s_1, s_2 \in \tilde{\Theta}_{\tilde{\kappa}, q, n} \} \nonumber \\ &&\hspace{0.5cm}\times \frac{1}{ 2 {\rm card}(\tilde{\Theta}_{\tilde{\kappa}, q, n} ) } \sum_{ s_1 \in \tilde{\Theta}_{\tilde{\kappa}, q, n} } P_{s_1, r_1} ( \tilde{s}_n^* \neq s_1). \label{eq:3.3} \end{eqnarray} We observe from Fano's lemma [cf.\ Ibragimov and Has'minskii (1981), pages 323 to 325, or Yatracos (1988), page 1182] that \begin{eqnarray} && \frac{1}{ {\rm card}(\tilde{\Theta}_{\tilde{\kappa}, q, n} ) } \sum_{ s_1 \in \tilde{\Theta}_{\tilde{\kappa}, q, n} } P_{s_1, r_1} ( \tilde{s}_n^* \neq s_1 ) \nonumber \\ &\geq & 1 - \frac{1}{ \log [{\rm card}(\tilde{\Theta}_{\tilde{\kappa}, q, n}) -1 ] } \Big\{ \log 2 \nonumber \\ &&\hspace{0.5cm} + \frac{1}{ [{\rm card} (\tilde{\Theta}_{\tilde{\kappa}, q, n})]^2 } \sum_{s_1, s_2 \in \tilde{\Theta}_{\tilde{\kappa}, q, n}} E_{s_1, r_1} \log [ \prod_{i=1}^n \frac{ p_{s_1, r_1} (\{ w_{i,1},\cdots, w_{i, N_i(T)}\} ) }{ p_{s_2, r_1} (\{ w_{i,1},\cdots, w_{i, N_i(T)}\} ) } ] \Big\}. \label{eq:3.4} \end{eqnarray} (\ref{eq:3.5}) now follows from (\ref{eq:3.2}), (\ref{eq:3.3}) and (\ref{eq:3.4}). (\ref{eq:3.6}) is proved in a similar manner. \hfill $\Box$ \section{Appendix B} {\sc Proof of {\rm (\ref{36})}.} Let $u=\delta T^{-1/2}$ for some $0 < \delta < \kappa$. The contribution to $C_{{\bf w},u}$ from (\ref{32}) is equal to (under $P_{\theta_{\bf w}}$) \begin{eqnarray} & & -T^{-1} \sum_{i=1}^d \sum_{y \in {\bf y}^{(i)}} \frac{d^2}{dv^2} g_{\bf w}^{(i)}(v) \Big|_{v=y} + o(1) \cr &= & -T^{-1} \sum_{i=1}^d \lambda_i \int_0^T \Big[ \frac{d^2}{dv^2} g_{\bf w}^{(i)}(v) \Big] e^{\theta_{\bf w} g_{\bf w}^{(i)}(v)} dv + o(1) \hspace{0.5cm} {\rm a.s. \ as } \ T \rightarrow \infty. \label{39} \end{eqnarray} If $\frac{d^2}{dv^2} g_{\bf w}^{(i)}(v)$ exists for all $v$ in an interval $(v_0,v_1)$, then by integration by parts, \begin{eqnarray} & & -\int_{v_0}^{v_1} \Big[ \frac{d^2}{dv^2} g_{\bf w}^{(i)}(v) \Big] e^{\theta_{\bf w} g_{\bf w}^{(i)}(v)} dv \cr &= & - \frac{d}{dv} g_{\bf w}^{(i)}(v) e^{\theta_{\bf w} g_{\bf w}^{(i)}(v)} \Big|_{v=v_0}^{v=v_1} + \theta_{\bf w} \int_{v_0}^{v_1} \Big[ \frac{d}{dv} g_{\bf w}^{(i)}(v) \Big]^2 e^{\theta_{\bf w} g_{\bf w}^{(i)}(v)} dv. \label{40} \end{eqnarray} By letting $v_1 \uparrow h_{j+1}$ and $v_0 \downarrow h_j$, where $h_j$ and $h_{j+1}$ are adjacent points in $H_i$, it follows from (\ref{40}) that (\ref{39}) is equal to \begin{equation} T^{-1} \sum_{i=1}^d \lambda_i \sum_{h \in H_i} \Big( \frac{d}{dv} g_{\bf w}^{(i)} (v) \Big|_{v \downarrow h} - \frac{d}{dv} g_{\bf w}^{(i)} (v) \Big|_{v \uparrow h} \Big) e^{\theta_{\bf w} g_{\bf w}^{(i)}(h)} + \theta_{\bf w} \tau_{\bf w} +o(1). 
\label{41} \end{equation} By the law of large numbers, the contribution to $C_{{\bf w},u}$ from (\ref{33}) is equal to \begin{eqnarray} && \frac{2}{u^2 T} \sum_{i=1}^d \lambda_i \sum_{h \in H_i} \Big[ \int_h^{h+u} (h+u-y) \; dy \Big] \Big( \frac{d}{dv} g_{\bf w}^{(i)}(v) \Big|_{v \uparrow h} - \frac{d}{dv} g_{\bf w}^{(i)} (v) \Big|_{v \downarrow h} \Big) e^{\theta_{\bf w} g_{\bf w}^{(i)}(h)}+o(1) \label{42} \\ &=& T^{-1} \sum_{i=1}^d \lambda_i \sum_{h \in H_i} \Big( \frac{d}{dv} g_{\bf w}^{(i)}(v) \Big|_{v \uparrow h} - \frac{d}{dv} g_{\bf w}^{(i)} (v) \Big|_{v \downarrow h} \Big) e^{\theta_{\bf w} g_{\bf w}^{(i)}(h)}+o(1) \nonumber \end{eqnarray} almost surely as $T \rightarrow \infty$. Since the contribution from (\ref{34}) to $C_{{\bf w},u}$ is asymptotically negligible, it follows from adding up (\ref{41}) and (\ref{42}) that \begin{equation} \lim_{T \rightarrow \infty} [C_{{\bf w},u}-\theta_{\bf w} \tau_{\bf w}] = 0 \hspace{0.5cm} \mbox{a.s.\ under $P_{\theta_{\bf w}}$}. \label{43} \end{equation} Since the first and second derivatives of $g_{\bf w}^{(i)}$ are bounded and continuous by (A2), it follows from (\ref{35}) that there exists $\beta_s \rightarrow 0$ as $s \rightarrow 0$ such that \begin{equation} \sup_{u \leq x \leq u+sT^{-1/2}} T | u^2C_{{\bf w},u}-x^2 C_{{\bf w},x}| \leq \beta_s \label{44} \end{equation} for all large $T$ with probability 1. We can conclude (\ref{36}) from (\ref{43}) and (\ref{44}). \hfill $\Box$ {\sc Proof of Lemma {\rm \ref{l6}}.} By stationarity, we may assume without loss of generality that $t=0$. Let $\ell \geq 1 $ and let us denote by $Q_\theta$ $(=Q_{\theta,\ell})$ the probability measure under which ${\bf y}^{(i)}$ is generated as a Poisson point process on $[0,T + \ell \kappa T^{-1/2})$ with intensity $$ \eta_i(u) = \lambda_i \exp[\theta \widetilde g_{\bf w}^{(i)}(u)] \ {\rm for \ all} \ 1 \leq i \leq d, $$ where $\widetilde g_{\bf w}^{(i)}(u) = g_{\bf w}^{(i)}(u)+g_{\bf w}^{(i)}(u-\ell \kappa T^{-1/2})$. Let \begin{equation} \widetilde \phi_{\bf w}(c) = \sup_{\theta > 0} \Big[ 2 \theta c - T^{-1} \sum_{i=1}^d \lambda_i \int_0^{T+\ell \kappa T^{-1/2}} (e^{\theta \tilde g_{\bf w}^{(i)}(u)}-1) \ du \Big] \label{48} \end{equation} and let $\widetilde \theta_{\bf w} > 0$ attain the supremum on the right-hand side of (\ref{48}). Define $\widetilde S_x = S_x + S_{x+\ell \kappa T^{-1/2}}$. It follows from the arguments in (\ref{24}), (\ref{26}) and (\ref{28}) that $$ E_{\tilde \theta_{\bf w}} [\widetilde S_0] = 2c, \quad E_{\tilde \theta_{\bf w}} \Big[ \frac{d}{dx} \widetilde S_x \Big|_{x=0} \Big] = O(T^{-1}), $$ where $E_{\tilde \theta_{\bf w}}$ denotes expectation with respect to $Q_{\tilde \theta_{\bf w}}$, and $$ {\rm Cov}_{\tilde \theta_{\bf w}} \pmatrix{ \widetilde S_0 \cr \frac{d}{dx} \widetilde S_x \big|_{x=0} } \sim T^{-1} \pmatrix{ \widetilde v_{\bf w} & 0 \cr 0 & \widetilde \tau_{\bf w} }, $$ where $\widetilde v_{\bf w}$ and $\widetilde \tau_{\bf w}$ are defined as in (\ref{8}) but with $\int_0^{T+\ell \kappa T^{-1/2}}$ replacing $\int_0^T$, $\widetilde g_{\bf w}^{(i)}$ replacing $g_{\bf w}^{(i)}$ and $\widetilde \theta_{\bf w}$ replacing $\theta_{\bf w}$. 
Hence for intervals $I_{1,T}$, $I_{2,T}$ satisfying the conditions of Lemma 3, \begin{eqnarray*} & & Q_{\tilde \theta_{\bf w}} \Big\{ T^{1/2} \Big( \widetilde S_0-2c, \frac{d}{dx} \widetilde S_x \Big|_{x=0} \Big) \in I_{1,T} \times I_{2,T} \Big\} \nonumber \\ &\sim & (2 \pi)^{-1} (\widetilde v_{\bf w} \widetilde \tau_{\bf w} )^{-1/2} \Big( \int_{z_1 \in I_{1,T}} e^{-z_1^2/(2 \tilde v_{\bf w})} \ dz_1 \Big) \Big( \int_{z_2 \in I_{2,T}} e^{-z_2^2/(2 \tilde \tau_{\bf w})} \ dz_2 \Big). \end{eqnarray*} By the arguments in the proof of Lemma \ref{l5}, it follows that analogous to (\ref{31}), \begin{eqnarray} P_{\bf w}(A_0 \cap A_{\ell \kappa T^{-1/2}}) & \leq & P_{\bf w} \Big\{ \sup_{0 < u < \kappa T^{-1/2}} \widetilde S_u \geq \max(2c,\widetilde S_0,\widetilde S_{\kappa T^{-1/2}}) \Big\} \cr & \sim & \kappa T^{-1/2} \widetilde \zeta_{\bf w} e^{-T \tilde \phi_{\bf w} (c)}, \label{52} \end{eqnarray} for all $\ell$ and large $T$, where $\widetilde \zeta_{\bf w} = (2 \pi)^{-1} (\widetilde \tau_{\bf w}/ \widetilde v_{\bf w})^{1/2}$. It remains for us to show that there exists a constant $\gamma > 0$ such that with probability 1, \begin{eqnarray} \widetilde \phi_{\bf w}(c) & \geq & \theta_{\bf w} c - T^{-1} \sum_{i=1}^d \lambda_i \int_0^{\ell \kappa T^{-1/2}+T} (e^{\theta_{\bf w} \tilde g_{\bf w}^{(i)}(u)/2}-1) \ du \cr & \geq & \phi_{\bf w}(c) + \gamma \min \{ \ell^2 \kappa^2 T^{-1},1 \} \label{53} \end{eqnarray} for all $1 \leq \ell \leq T^{3/2}/\kappa+1$ and $T$ large, so that (\ref{46}) follows by adding up (\ref{52}) over $1 \leq \ell \leq T^{3/2}/\kappa+1$. The first inequality in (\ref{53}) follows directly from letting $\theta = \theta_{\bf w}/2$ in the right hand side of (\ref{48}). By the identity \begin{eqnarray*} & & 2 e^{\theta_{\bf w}[g_{\bf w}^{(i)}(u) + g_{\bf w}^{(i)}(u-\ell \kappa T^{-1/2} )]/2} \cr &= & e^{\theta_{\bf w} g_{\bf w}^{(i)}(u)} +e^{\theta_{\bf w} g_{\bf w}^{(i)}(u-\ell \kappa T^{-1/2})} - (e^{\theta_{\bf w} g_{\bf w}^{(i)}(u)/2} -e^{\theta_{\bf w} g_{\bf w}^{(i)}(u-\ell \kappa T^{-1/2})/2})^2, \end{eqnarray*} it follows from (\ref{5}) that \begin{eqnarray} & & \theta_{\bf w} c- \frac{1}{T} \sum_{i=1}^d \lambda_i \int_0^{\ell \kappa T^{-1/2}+T} (e^{\theta_{\bf w} \tilde g_{\bf w}^{(i)}(u)/2}-1) \ du \cr &= & \phi_{\bf w}(c) + \frac{1}{ 2 T} \sum_{i=1}^d \lambda_i \int_0^{\ell \kappa T^{-1/2}+T} (e^{\theta_{\bf w} g_{\bf w}^{(i)}(u)/2} -e^{\theta_{\bf w} g_{\bf w}^{(i)}(u-\ell \kappa T^{-1/2})/2})^2 \ du \label{55} \end{eqnarray} and the second inequality of (\ref{53}) indeed holds for all large $T$ with probability 1. \hfill $\Box$ {\sc Proof of Lemma {\rm \ref{l10}}.} The proof of Lemma \ref{l10} uses arguments similar to the proof of Lemma \ref{l6}. The main changes are in replacing $\kappa T^{-1/2}$ by $\kappa T^{-1}$. Analogous to (\ref{52}), there exists a constant $C > 0$ such that with probability 1, \begin{eqnarray} P_{\bf w}(A_t \cap A_{t+\ell \kappa T^{-1}}) & \leq & P_{\bf w} \Big\{ S_t+ S_{t+\ell \kappa T^{-1}} < 2c, \sup_{t < u \leq t+\kappa T^{-1}} (S_u+S_{u+\ell \kappa T^{-1}}) \geq 2c \Big\} \cr & \leq & C \kappa T^{-1/2} e^{-T \tilde \phi_{\bf w}(c)} \label{93} \end{eqnarray} for all large $T$, if $\kappa$ is chosen large enough. 
By (\ref{55}) (with $\kappa T^{-1/2}$ replaced by $\kappa T^{-1}$), the following analogue of (\ref{53}), \begin{equation} \widetilde \phi_{\bf w}(c) \geq \phi_{\bf w}(c) + \gamma \min \{ \ell \kappa T^{-1},1 \} \label{94} \end{equation} holds with probability 1 for some $\gamma > 0$, and hence Lemma \ref{l10} follows from (\ref{93}) and (\ref{94}). \hfill $\Box$
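As a numerical sanity check (not part of the proofs above), the elementary inequality \begin{displaymath} y-1 -\log (y) \geq \frac{ (y-1)^2}{10} {\bf 1}\{y\in (0, 6)\} +\frac{ y-1}{10} {\bf 1}\{y\geq 6\} \end{displaymath} invoked in the proof of Lemma \ref{la:a.1} can be confirmed on a fine grid; a minimal Python sketch (the grid and the tolerance are our own arbitrary choices):
\begin{verbatim}
import numpy as np

# check: y - 1 - log(y) >= (y-1)^2/10 on (0,6) and >= (y-1)/10 for y >= 6
y = np.linspace(1e-6, 100.0, 1_000_000)
lhs = y - 1 - np.log(y)
rhs = np.where(y < 6, (y - 1) ** 2 / 10, (y - 1) / 10)
assert np.all(lhs >= rhs - 1e-12)
print("inequality holds on the grid")
\end{verbatim}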
\section{Introduction} A magnetic geodesic flow of the Riemannian metric $ds^2=g_{ij}dx^idx^j$ on a 2-surface is given by the Hamiltonian system $$ \dot{x}^j = \{x^j,H\}_{mg}, \quad \dot{p}_j = \{p_j,H\}_{mg}, \quad H = \frac{1}{2} g^{ij} p_ip_j, \quad i,j=1,2, \eqno(1.1) $$ where the magnetic Poisson bracket has the form $$ \{F,H\}_{mg} = \sum_{i=1}^2 \left ( \frac{\partial F}{\partial x^i} \frac{\partial H}{\partial p_i} - \frac{\partial F}{\partial p_i} \frac{\partial H}{\partial x^i} \right ) + \Omega (x^1,x^2) \left ( \frac{\partial F}{\partial p_1} \frac{\partial H}{\partial p_2} - \frac{\partial F}{\partial p_2} \frac{\partial H}{\partial p_1} \right ), $$ and $\omega= \Omega (x^1,x^2) dx^1\wedge dx^2$ is a closed 2-form which defines the magnetic field (\cite{1}). A first integral of the magnetic geodesic flow (1.1) is a function $F(x,p)$ such that $\dot{F} = \{F,H\}_{mg} \equiv 0.$ If in addition $F, H$ are functionally independent a.e., then the flow (1.1) is completely integrable. Magnetic geodesic flows on various configuration spaces have been studied in many papers (e.g., see \cite{2}---\cite{8}). Let us briefly mention the results related to the 2-torus. There are only two known examples of integrability at all energy levels. \begin{example} Let the Riemannian metric and the magnetic field have the form $$ds^{2} = dx^{2} + dy^{2}, \qquad \omega = B dx\wedge dy, \qquad B = const \neq 0.$$ Then there exists the first integral $F = \cos \left( \frac{p_1}{B} - y \right).$ \end{example} \begin{example} Let the Riemannian metric and the magnetic field have the form $$ds^{2} = \Lambda(y)(dx^{2} + dy^{2}), \qquad \omega = -u'(y)dx\wedge dy.$$ Then there exists the first integral $F_1 = p_1 + u(y),$ which is linear in momenta. \end{example} It is shown in~\cite{9} that if the magnetic field is non-zero, then the existence of a quadratic integral (with analytic periodic coefficients) of the flow (1.1) on the 2-torus at two different energy levels implies the existence of a linear integral at all energy levels. This result was generalized to the case of polynomial integrals of an arbitrary degree in~\cite{10},~\cite{11}. In general, in the presence of a non-zero magnetic field it seems more natural to search for first integrals of the flow (1.1) which are conserved only at a fixed energy level. In the case of polynomial integrals, as shown in~\cite{12}, this problem reduces to the search for solutions to certain quasi-linear systems of PDEs, which turn out to be semi-Hamiltonian. This means that the generalized hodograph method (\cite{13}) can be applied to these systems. Notice that the implementation of this method is usually associated with significant difficulties. In application to geodesic flows, as far as we know, the only example in which explicit solutions were constructed via this method (they correspond to an integral of the fourth degree) is presented in~\cite{14}. Let us also mention the series of papers~\cite{15}---\cite{19}, where many examples of polynomial integrals of the third and the fourth degrees of various mechanical systems were constructed. On the other hand, non-polynomial (e.g., rational) integrals of the magnetic geodesic flow (1.1), as far as we know, remain almost unexplored. At the same time, a large number of papers are devoted to the study of rational integrals of general mechanical systems (including standard geodesic flows without magnetic fields), e.g., see~\cite{20}---\cite{35}. This paper is organized as follows. 
In Section 2 we briefly present the known results on linear integrals of geodesic flows and magnetic geodesic ones. In Section 3 we recall how the generalized hodograph method works. In Section 4 we use this method to construct solutions which correspond to quadratic integrals of the magnetic flow (1.1) at a fixed energy level. In Section 5 we study rational integrals of the magnetic flow (1.1) with a linear numerator and denominator and reduce the problem of constructing such integrals to the search for solutions to a certain second-order PDE. Finally, in Section 6 we construct exact solutions to this equation. \section{Linear integrals} The problem of local polynomial integrals of the first and the second degrees of 2-dimensional geodesic flows was essentially solved by G.D. Birkhoff in~\cite{36}. Later his results were generalized. For instance, local integrals of higher degrees were studied in~\cite{37},~\cite{38}. A classification of linear and quadratic integrals of geodesic flows on the 2-sphere and the 2-torus was obtained in~\cite{39}. In the case of linear integrals, all these results can be easily generalized to the case of magnetic geodesic flows. Let us briefly recall this construction (in slightly different terms compared with~\cite{36}). Choose the conformal coordinates $ds^2=\Lambda(x,y) (dx^2+dy^2)$ on a 2-surface; then the Hamiltonian takes the form $H = \frac{p_1^{2} + p_2^{2}}{2 \Lambda (x,y)}.$ Suppose that the magnetic geodesic flow (1.1) admits a linear integral $F = a(x,y) p_1 + b(x,y) p_2 + c(x,y)$ at a fixed energy level $\{H = C_1\}$ or, equivalently (e.g., see~\cite{33}), at all energy levels. The condition $\{F,H\}_{mg} \equiv 0$ implies: $$ a_x = b_y, \qquad a_y = - b_x, \qquad (a \Lambda)_x + (b \Lambda)_y = 0, \eqno(2.1) $$ $$ c_x = b \Omega, \qquad c_y = - a \Omega. \eqno(2.2) $$ Thus the system splits into two subsystems (2.1), (2.2). The first one relates to the standard geodesic flow, and the second one describes the magnetic field. Notice that cross-differentiation of (2.2) gives $(a \Omega)_x + (b \Omega)_y = 0,$ which implies (due to (2.1)) that $\Lambda(x,y)$ and $\Omega(x,y)$ are functionally dependent. Let us consider separately the subsystem (2.1). As shown in~\cite{36}, one can introduce local coordinates $u(x,y), v(x,y)$ such that the metric takes the form $ds^2=\Lambda(x,y) (dx^2+dy^2) = \lambda(v) (du^2+dv^2).$ Therefore, the coordinate $u$ is cyclic and the additional integral (in the absence of a magnetic field) has the form $F_1=p_u.$ These observations allow one to construct solutions to the system (2.1), (2.2). Passing to the coordinates $u(x,y), v(x,y)$, we obtain that the linear integral of the geodesic flow (1.1) of the metric $ds^2 = \lambda(v) (du^2+dv^2)$ in the magnetic field $\omega = \Omega(u,v) du \wedge dv$ has the form $F = p_u + f(u,v)$ for a certain function $f(u,v).$ The condition $\{F,H\}_{mg} \equiv 0$ implies immediately that $f(u,v) = \widetilde{f}(v)$ and $\Omega(u,v) = - \widetilde{f}'(v)$ (compare with Example 2 in the Introduction). So, if the magnetic geodesic flow admits a linear integral, then up to a change of coordinates the metric and the magnetic field have the forms described above. \section{The generalized hodograph method} Before studying the quadratic integrals of magnetic geodesic flows, let us briefly recall how the generalized hodograph method works (\cite{13}). 
The quasi-linear system (written in Riemann invariants) $$r^{i}_t = v_i(r)r^{i}_x, \qquad i = 1,...,n, \qquad v_i \neq v_j \eqno(3.1)$$ is called semi-Hamiltonian if the following relations hold true: $$\partial_i\left(\frac{\partial_j v_k}{v_j - v_k}\right) = \partial_j\left(\frac{\partial_i v_k}{v_i - v_k}\right),\quad i \neq j, \quad j \neq k.$$ Any semi-Hamiltonian system admits infinitely many symmetries (commuting flows) of the form $r^{i}_{\tau} = w_i(r)r^{i}_x,$ $i = 1,...,n,$ where $w_i$ and $v_i$ satisfy the relations $$\frac{\partial_k v_i}{v_k - v_i} = \frac{\partial_k w_i}{w_k - w_i},\quad i \neq k.$$ For any commuting flow, the functions $r^i(x,t)$ determined implicitly by the system of algebraic equations $$w_i(r) = v_i(r) t + x, \quad i = 1,...,n,$$ form a solution to the initial semi-Hamiltonian system (3.1). For semi-Hamiltonian systems which are not in the diagonal form, $$u^{i}_t = \sum_{j=1}^{n} v^{i}_j(u)u^{j}_x,\quad i = 1,..., n, \eqno(3.2)$$ one can search for commuting flows in the form $$u^{i}_\tau = \sum_{j=1}^{n} w^{i}_j(u)u^{j}_x,\quad i = 1,..., n,$$ taking into account the following condition: $$\partial_\tau(u^{i}_t) = \partial_\tau \left(\sum_{j = 1}^{n} v^{i}_j(u)u^{j}_x\right) = \partial_t(u^{i}_\tau) = \partial_t \left(\sum_{j = 1}^{n} w^{i}_j(u)u^{j}_x\right). \eqno(3.3)$$ In this case one can construct a solution to the semi-Hamiltonian system (3.2) by solving the following system of algebraic equations: $$x \delta^{i}_k + t v^{i}_k = w^{i}_k. \eqno(3.4)$$ \section{Quadratic integrals} Suppose that the magnetic geodesic flow (1.1) admits a quadratic in momenta integral, independent of the Hamiltonian, at a fixed energy level. Choose the conformal coordinates $ds^2=\Lambda(x,y) (dx^2+dy^2)$ on a surface; then the Hamiltonian takes the form $H = \frac{p_1^{2} + p_2^{2}}{2 \Lambda (x,y)}.$ Let us fix the energy level $\{H = \frac{1}{2}\}$ and search for a quadratic integral $F$ at this level in the following form: $$ F = \sum_{k=-2}^{2} a_k(x,y) e^{i k \phi}, $$ where $a_k(x,y) = u_k(x,y) + i v_k(x,y), \ a_{-k} = \overline{a_k}.$ Following~\cite{12}, we assume that $a_2 = a_{-2} = \Lambda.$ Also denote $f = \frac{u_1}{\sqrt{\Lambda}}, \ g= \frac{v_1}{\sqrt{\Lambda}}.$ Then, as shown in~\cite{12}, the existence of the integral $F$ is equivalent to the existence of solutions to the following semi-Hamiltonian system \begin{equation}\tag{4.1} A(U)U_x + B(U)U_y = 0, \end{equation} where\\ \[ A= \left( {\begin{array}{cccc} 0 & 0 & 1 & 0\\ f & 0 & \Lambda & 0\\ 2 & 1 & 0 & \frac{g}{2}\\ 0 & 0 & 0 & -\frac{f}{2} \end{array} } \right), \quad B= \left( {\begin{array}{cccc} 0 & 0 & 0 & 1\\ -g & 0 & 0 & -\Lambda\\ 0 & 0 & -\frac{g}{2} & 0\\ 2 & -1 & \frac{f}{2} & 0 \end{array} } \right), \] \\ \noindent and $U = (\Lambda, u_0, f, g)^{T}.$ The magnetic field has the form $\Omega = \frac{1}{4} \left( g_x-f_y \right).$ In this section we construct solutions to the system (4.1) (generally speaking, local ones) via the generalized hodograph method. Notice that this system also admits global analytic solutions on the 2-torus (see~\cite{40},~\cite{41}). To apply the generalized hodograph method to the system (4.1), we construct its symmetries of the form $$U_{\tau} = A_1(U)U_x + B_1(U)U_y,$$ where $A_1, B_1$ are certain unknown matrices. Having constructed the symmetries and having solved the corresponding algebraic system (3.4), we find the solutions to the initial system (4.1). 
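Since the formulas arising below are rather bulky, it is convenient to have a mechanical way of checking whether a given quadruple $(\Lambda, u_0, f, g)$ solves (4.1). The following minimal sympy sketch (a verification aid only; the names are ours) builds the residual of (4.1), which must vanish identically on a solution:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
Lam, u0, f, g = [sp.Function(n)(x, y) for n in ('Lam', 'u0', 'f', 'g')]
U = sp.Matrix([Lam, u0, f, g])

A = sp.Matrix([[0, 0, 1, 0],
               [f, 0, Lam, 0],
               [2, 1, 0, g/2],
               [0, 0, 0, -f/2]])
B = sp.Matrix([[0, 0, 0, 1],
               [-g, 0, 0, -Lam],
               [0, 0, -g/2, 0],
               [2, -1, f/2, 0]])

# residual of (4.1): for a candidate solution, replace Lam, u0, f, g
# by explicit expressions in x, y and simplify; every entry must vanish
res = A * U.diff(x) + B * U.diff(y)
\end{verbatim}
For instance, the exact solutions presented later in this section can be substituted into this residual to confirm that they indeed solve (4.1).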
We shall search for the components of the unknown matrices $A_1, B_1$ in the form of non-homogeneous polynomials of degree $N$ in the unknown functions $\Lambda, u_0, f, g.$ By direct (though very bulky) calculations one may check that for $N=1$ the generalized hodograph method yields only trivial (i.e. constant) solutions, and for $N=2$ it yields solutions which correspond to a Liouville metric and a zero magnetic field. For $N = 3$ we shall search for the components of the matrices $A_1 = (a_{ij})_{1 \leq i, j \leq 4}$ and $B_1 = (b_{ij})_{1 \leq i, j \leq 4}$ in the following form: \\ \\ $a_{ij} = c_{ij1} f^3+c_{ij2} f^2 g+c_{ij3} f^2 \Lambda +c_{ij4} f^2 u_0+c_{ij5} f^2+c_{ij6} f g^2+c_{ij7} f g \Lambda +c_{ij8} f g u_0+c_{ij9} f g+c_{ij10} f \Lambda ^2+c_{ij12} f \Lambda +c_{ij11} f \Lambda u_0+c_{ij13} f u_0^2+c_{ij14} f u_0+c_{ij15} f+c_{ij17} g^2 \Lambda +c_{ij18} g^2 u_0+c_{ij16} g^3+c_{ij19} g^2+c_{ij20} g \Lambda ^2+c_{ij22} g \Lambda +c_{ij21} g \Lambda u_0+c_{ij23} g u_0^2+c_{ij24} g u_0+c_{ij25} g+c_{ij26} \Lambda ^3+c_{ij28} \Lambda ^2+c_{ij31} \Lambda +c_{ij27} \Lambda ^2 u_0+c_{ij29} \Lambda u_0^2+c_{ij30} \Lambda u_0+c_{ij32} u_0^3+c_{ij33} u_0^2+c_{ij34} u_0+c_{ij35},$ \\ \\ $b_{ij} = d_{ij1} f^3+d_{ij2} f^2 g+d_{ij3} f^2 \Lambda +d_{ij4} f^2 u_0+d_{ij5} f^2+d_{ij6} f g^2+d_{ij7} f g \Lambda +d_{ij8} f g u_0+d_{ij9} f g+d_{ij10} f \Lambda ^2+d_{ij12} f \Lambda +d_{ij11} f \Lambda u_0+d_{ij13} f u_0^2+d_{ij14} f u_0+d_{ij15} f+d_{ij17} g^2 \Lambda +d_{ij18} g^2 u_0+d_{ij16} g^3+d_{ij19} g^2+d_{ij20} g \Lambda ^2+d_{ij22} g \Lambda +d_{ij21} g \Lambda u_0+d_{ij23} g u_0^2+d_{ij24} g u_0+d_{ij25} g+d_{ij26} \Lambda ^3+d_{ij28} \Lambda ^2+d_{ij31} \Lambda +d_{ij27} \Lambda ^2 u_0+d_{ij29} \Lambda u_0^2+d_{ij30} \Lambda u_0+d_{ij32} u_0^3+d_{ij33} u_0^2+d_{ij34} u_0+d_{ij35},$ \\ \\ where $c_{ijk}, d_{ijk}$, $k = 1,\dots,35,$ are certain constants. One can find these constants taking into account the relations (3.3). The final form of the matrices $A_1,$ $B_1$ is very bulky, so we shall not write them out explicitly. 
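Note that $35$ is precisely the number of monomials of total degree at most three in the four unknowns, so the ansatz can be generated mechanically; a minimal sympy sketch (the names are ours):
\begin{verbatim}
import sympy as sp

f, g, Lam, u0 = sp.symbols('f g Lam u0')

# all monomials of total degree <= 3 in (f, g, Lam, u0)
monoms = sorted(sp.itermonomials((f, g, Lam, u0), 3),
                key=sp.default_sort_key)
coeffs = sp.symbols(f'c1:{len(monoms) + 1}')
ansatz = sum(c * m for c, m in zip(coeffs, monoms))
print(len(monoms))  # 35, matching c_{ij1}, ..., c_{ij35} above
\end{verbatim}
The relations (3.3) then yield a system of equations on these constants, which is what the bulky direct calculations amount to.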
Introduce the new notations: $$\alpha = 4 c_{141} + d_{115},\quad \beta = c_{1119} - 2 c_{1219},\quad \gamma = -8 c_{1415} - 2 d_{1135},$$ $$\delta = 4 c_{1235} - 2 c_{1135},\quad \epsilon = c_{1125} - 2 c_{1225},\quad \zeta = c_{1122} - 2 c_{1222}.$$ The relations (3.4) are equivalent to the following system of 11 algebraic equations in the unknown functions $\Lambda(x,y), u_0(x,y), f(x,y), g(x,y)$ (notice that due to~\cite{13} this system is compatible): \begin{multline*} -4 \alpha f^3+f^2 (6 \zeta \Lambda -4 \beta g+\zeta u_0-4 \epsilon )+2 f \left(\gamma -2 \alpha \left(g^2+4 (6 \Lambda +u_0)\right)+2 y\right)-4 \beta g^3\\+g^2 (-2 \zeta \Lambda +\zeta u_0-4 \epsilon )+2 g (\delta +8 \beta (u_0-2 \Lambda )+2 x)+4 \Lambda (\zeta (\Lambda +u_0)-4 \epsilon ) = 0, \end{multline*} $$\zeta f^2-32 \alpha f+\zeta g^2+2 \zeta (\Lambda +u_0)-8 \epsilon = 0,$$ $$2 \gamma -4 \alpha f^2+f (4 \zeta \Lambda +\zeta u_0-4 \epsilon )-4 \alpha \left(g^2+8 \Lambda +4 u_0\right)+4 y = 0,$$ $$\zeta u_0-4 (2 \alpha f+2 \beta g+\epsilon ) = 0,$$ \begin{multline*} -48 \alpha f^3+\zeta f^4+f^2 \left(2 \zeta \left(g^2+11 \Lambda +3 u_0\right)-24 \epsilon \right)+8 f \left(\gamma -2 \alpha \left(g^2+4 (4 \Lambda +u_0)\right)+2 y\right)\\+32 \beta g^3+\zeta g^4+g^2 (6 \zeta \Lambda -2 \zeta u_0+8 \epsilon )+8 \Lambda (\zeta (\Lambda +u_0)-4 \epsilon ) = 0, \end{multline*} $$-40 \alpha f^2+\zeta f^3+f \left(\zeta \left(g^2+10 \Lambda +4 u_0\right)-16 \epsilon \right)+4 \left(\gamma -2 \alpha \left(g^2+8 \Lambda +4 u_0\right)+2 y\right) = 0,$$ $$\zeta f^2-16 \alpha f+2 \zeta \Lambda +\zeta g^2+16 \beta g = 0,$$ $$\frac{\delta }{2}+\frac{\Lambda \left(\zeta f^2-32 \alpha f+2 \zeta (\Lambda +u_0)-8 \epsilon \right)}{g}-\beta \left(f^2+8 \Lambda -4 u_0\right)+2 \alpha f g+\beta g^2+x = 0,$$ $$\frac{\gamma }{2}+\alpha f^2+f (\zeta \Lambda +2 \beta g)-\alpha \left(g^2+8 \Lambda +4 u_0\right)+y = 0,$$ $$4 \alpha f+4 \beta g-\frac{\zeta u_0}{2}+2 \epsilon = 0,$$ $$-8 \beta \Lambda +\frac{\delta }{2}-\beta f^2+2 \alpha f g+\beta g^2-\zeta g \Lambda +4 \beta u_0+x = 0.$$ It is easy to check that in the case $\zeta = 0$ this system admits only trivial solutions; therefore, in what follows we shall assume that $\zeta \neq 0.$ Simplifying these equations, we finally obtain $$ \Lambda = \frac{- \zeta f^2-\zeta g^2+16 \alpha f-16 \beta g}{2 \zeta}, \qquad u_0 = \frac{4 (2 \alpha f+2 \beta g+\epsilon)}{\zeta}, \eqno(4.2) $$ where $f(x,y), g(x,y)$ satisfy the following two relations: $$ -\zeta^2 f g^2-12 \beta \zeta f g-\zeta^2 f^3+26 \alpha \zeta f^2-192 \alpha ^2 f+6 \alpha \zeta g^2+64 \alpha \beta g-32 \alpha \epsilon +\gamma \zeta+2 \zeta y = 0, \eqno(4.3) $$ $$ \zeta^2 f^2 g-12 \alpha \zeta f g+\zeta^2 g^3+26 \beta \zeta g^2 + 192 \beta ^2 g+6 \beta \zeta f^2-64 \alpha \beta f+32 \beta \epsilon +\delta \zeta+2 \zeta x = 0. \eqno(4.4) $$ The following theorem holds true. \begin{theorem} In a small neighborhood of certain points $(x_0, y_0)$ the system (4.3), (4.4) admits smooth solutions $f(x,y),$ $g(x,y)$. Moreover, by choosing appropriate constants $\alpha,$ $\beta$ one can obtain solutions which correspond to a positive conformal factor $\Lambda(x,y)$ of the metric (see (4.2)), a non-zero curvature and a non-zero magnetic field (at least in a small neighborhood of these points). The functions $f(x,y),$ $g(x,y),$ $\Lambda(x,y),$ $u_0(x,y)$ constructed in such a way satisfy the initial semi-Hamiltonian system (4.1). 
\end{theorem} \begin{proof} Consider the mapping $S : \mathbb{R}^{2} \times \mathbb{R}^{2} \rightarrow \mathbb{R}^{2},$ $S=S(f,g,x,y),$ which is given by the relations (4.3), (4.4). The proof of Theorem 1 is based on applying the implicit function theorem to the mapping $S.$ We skip the details. \end{proof} Notice that in the particular case $\alpha = \beta = 0$ the system (4.2)---(4.4) has the following exact solutions: $$\Lambda(x,y) =-\frac{1}{2 \zeta} \sqrt[3]{\zeta (2 x + \delta)^2+\zeta (2 y + \gamma)^2}, \qquad u_0(x,y) = \frac{4 \epsilon}{\zeta},$$ $$f(x,y) = \frac{2 y + \gamma}{\sqrt[3]{\zeta (2 y + \gamma)^2 + \zeta (2 x + \delta)^2}}, \qquad g(x,y) =-\frac{2 x + \delta}{\sqrt[3]{\zeta (2 y + \gamma)^2 + \zeta (2 x + \delta)^2}},$$ where $\gamma, \delta, \epsilon, \zeta$ are arbitrary constants with $\zeta \neq 0.$ In this case the metric is flat, and the magnetic field has the form $$\Omega(x,y) = -\frac{2}{3 \sqrt[3]{\zeta (2 x + \delta)^2+\zeta (2 y + \gamma)^2}}.$$ To construct the exact solutions to the initial problem in the general case, let us make the change of variables $(x,y) \rightarrow (f,g)=(X,Y).$ The corresponding formulas are given by the relations (4.3), (4.4). For simplicity assume that $\gamma = \delta = \epsilon = 0,$ $\zeta = 2.$ Extending this transformation to a canonical one, we obtain the following relations between the new momenta $P_1 = P_f,$ $P_2=P_g$ and the old ones $p_1,$ $p_2:$ $$ P_1 = -2p_1(XY-3Y\alpha+3X\beta-8\alpha\beta)+p_2((3X-8\alpha)(X-6\alpha)+Y(Y+6\beta)), $$ $$ P_2 = -X^2p_1-(3Y+8\beta)(Yp_1+2p_2\alpha+6p_1\beta)+2X(Yp_2+3p_1\alpha+3p_2\beta). $$ This allows us to construct the following local integrable example. \begin{example} Let $\alpha, \beta$ be arbitrary constants. Denote $$ R(X,Y) = (X^2-8 \alpha X +Y^2+8Y \beta), $$ $$ S(X,Y) = 3 X^4-44 X^3 \alpha +6 X^2 \left(Y^2+34 \alpha ^2+10 Y \beta +18 \beta ^2\right) $$ $$ -12 X \alpha \left(5 Y^2+24 \alpha ^2+48 Y \beta +88 \beta ^2\right)+(3 Y+8 \beta ) \left(Y^3+12 Y^2 \beta +256 \alpha ^2 \beta +36 Y \left(\alpha ^2+\beta ^2\right)\right). $$ Then the geodesic flow of the Riemannian metric $ds^2=g_{11}dX^2+2g_{12}dX dY+g_{22}dY^2,$ where $$ g_{11} = -\frac{R}{2} \{9X^4+10X^2Y^2+Y^4-156\alpha X^3-76\alpha XY^2+964X^2 \alpha^2 +132 Y^2 \alpha^2 $$ $$ -2496X\alpha^3+2304\alpha^4+4Y(3Y^2+(5X-24\alpha)(3X-8\alpha))\beta+4(9Y^2+(3X-8\alpha)^2)\beta^2\}, $$ $$ g_{12} = -4 R (XY-3\alpha Y+3 X \beta-8\alpha \beta) ((X-6\alpha) (X-2 \alpha) + (Y+2 \beta) (Y+6 \beta)), $$ $$ g_{22} = -\frac{R}{2} \{ X^4-12 X^3 \alpha - 4X \alpha (3Y+8\beta) (5Y+24 \beta) $$ $$ +2X^2 (5Y^2+18 \alpha^2+38 Y \beta+66 \beta^2) + (3Y+8 \beta)^2 (4\alpha^2 + (Y+6 \beta)^2) \}, $$ in the magnetic field $$ \omega = - ((X-2\alpha) (X-6 \alpha)+(Y+2 \beta)(Y+6 \beta)) dX \wedge dY $$ at the fixed energy level $\{H=\frac{1}{2}g^{ij}P_iP_j=\frac{1}{2}\}$ admits the quadratic integral $$ F = \frac{1}{S^2} \left( a_{11} P_1^2 + a_{12} P_1 P_2 + a_{22} P_2^2 + b_1 P_1 + b_2 P_2 + c \right), $$ where $$ a_{11} = 16 (X Y-3 Y \alpha +3 X \beta -8 \alpha \beta )^2, \quad a_{22} = 4 ((3 X-8 \alpha ) (X-6 \alpha )+Y (Y+6 \beta ))^2, $$ $$ a_{12} = -16 (X Y-3 Y \alpha +3 X \beta -8 \alpha \beta ) ((3 X-8 \alpha ) (X-6 \alpha )+Y (Y+6 \beta )), $$ $$ b_1 = 2 S \left(-16 X \alpha \beta +X^2 (Y+6 \beta )-Y (Y+6 \beta ) (3 Y+8 \beta )\right), $$ $$ b_2 = S (-2 (X-6 \alpha ) \left(3 X^2-Y^2-8 X \alpha \right)-32 Y \alpha \beta ), \quad c= S^2 (X^2+Y^2-4 X \alpha +12 Y \beta). 
$$ \end{example} \section{Rational integrals} The remaining part of the paper is devoted to rational integrals. Suppose that the magnetic geodesic flow (1.1) admits a rational integral with a linear numerator and denominator at a fixed energy level. Choose the conformal coordinates $ds^2 = \Lambda(x,y) (dx^2+dy^2)$ on a surface and fix the energy level $H = \frac{p_1^2+p_2^2}{2 \Lambda(x,y)} = \frac{C}{2}.$ We shall search for the rational integral in the form $$ F = \frac{a_0(x,y)p_1+a_1(x,y)p_2+f(x,y)}{b_0(x,y)p_1+b_1(x,y)p_2+g(x,y)}. \eqno(5.1) $$ Let us parameterize the momenta in the following way: $p_1 = \sqrt{C \Lambda} \cos \phi, p_2 = \sqrt{C \Lambda} \sin \phi.$ The condition $\frac{dF}{dt}=0$ is equivalent to the following relation (see~\cite{12},~\cite{10}): $$ F_x \cos \phi + F_y \sin \phi +F_{\phi} \left( \frac{\Lambda_y}{2 \Lambda} \cos \phi - \frac{\Lambda_x}{2 \Lambda} \sin \phi - \frac{\Omega}{\sqrt{C \Lambda}} \right) = 0. \eqno(5.2) $$ Substituting (5.1) into (5.2), one obtains that the left-hand side is a polynomial in $e^{i \phi}$, and all its coefficients must vanish. Vanishing of the coefficient of $e^{3 i \phi}$ is equivalent to the following relation (e.g., see~\cite{33}): $$ \left( \frac{a_0-i a_1}{b_0-ib_1} \right)_x - i \left( \frac{a_0-i a_1}{b_0-ib_1} \right)_y = 0. $$ Introducing the notations $$ u = \frac{a_0b_0+a_1b_1}{b_0^2+b_1^2}, \qquad v = \frac{a_0b_1-a_1b_0}{b_0^2+b_1^2}, $$ we may rewrite the previous equality in the following form: $\left( u + i v \right)_x - i \left( u + i v \right)_y = 0.$ Consequently, $u(x,y), v(x,y)$ are two conjugate harmonic functions: $u_x = - v_y, u_y = v_x.$ We shall consider the simplest case when $$ u(x,y) \equiv c_1, \qquad v(x,y) \equiv c_2, $$ where $c_1, c_2$ are constants, and one may assume that $c_2 \neq 0$ (otherwise there exists an integral linear in momenta). Consequently, $$ a_0(x,y) = c_1 b_0(x,y) + c_2 b_1(x,y), \qquad a_1(x,y) = -c_2 b_0(x,y) + c_1 b_1(x,y). $$ Vanishing of the coefficient of $e^{2 i \phi}$ implies $$ \left( \frac{f - (c_1 + i c_2) g}{b_0 - i b_1} \right)_x - i \left( \frac{f - (c_1 + i c_2) g}{b_0 - i b_1} \right)_y = 0. $$ Similarly we obtain $$ f(x,y) = \frac{(c_2 \gamma_1 - c_1 \gamma_2) b_0(x,y) + (c_1 \gamma_1 + c_2 \gamma_2) b_1(x,y)}{c_2}, \ g(x,y) = \frac{-\gamma_2 b_0(x,y) + \gamma_1 b_1(x,y)}{c_2}, $$ where $\gamma_1, \gamma_2$ are conjugate harmonic functions. Again we shall consider only the simplest case when $\gamma_1, \gamma_2$ are arbitrary constants. Substituting the expressions for $a_0$, $a_1$, $f$, $g$ into $F$, we obtain $$ F = c_1 + c_2\frac{c_2(b_1(x,y) p_1 - b_0(x,y) p_2) + \gamma_1 b_0(x,y) + \gamma_2 b_1(x,y)}{ c_2(b_0(x,y) p_1 + b_1(x,y) p_2) - \gamma_2 b_0(x,y) + \gamma_1 b_1(x,y)}. $$ Consequently, one can further assume that the first integral has the form $$ F = \frac{c_2(b_1(x,y) p_1 - b_0(x,y) p_2) + \gamma_1 b_0(x,y) + \gamma_2 b_1(x,y)}{ c_2(b_0(x,y) p_1 + b_1(x,y) p_2) - \gamma_2 b_0(x,y) + \gamma_1 b_1(x,y)}. $$ Let us make an appropriate rotation of the plane $x$, $y$ and divide the numerator and denominator by $c_2$. After that one can assume without loss of generality that $c_2 = 1$, $\gamma_1 = \gamma$, $\gamma_2 = 0$, where $\gamma$~is a constant. 
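For a quick consistency check, substituting the expressions for $a_0$, $a_1$ obtained above back into the definitions of $u$ and $v$ indeed returns the constants $c_1$, $c_2$; a minimal sympy sketch:
\begin{verbatim}
import sympy as sp

b0, b1, c1, c2 = sp.symbols('b0 b1 c1 c2')
a0 = c1*b0 + c2*b1
a1 = -c2*b0 + c1*b1
u = (a0*b0 + a1*b1) / (b0**2 + b1**2)
v = (a0*b1 - a1*b0) / (b0**2 + b1**2)
assert sp.simplify(u - c1) == 0 and sp.simplify(v - c2) == 0
\end{verbatim}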
Since the coefficients of the integral (5.1) are defined non-uniquely, one can assume without loss of generality that $b_0(x,y)^2 + b_1(x,y)^2 \equiv 1,$ that is $$ b_0(x,y) = \sin \frac{\psi(x,y)}{2}, \qquad b_1(x,y) = \cos \frac{\psi(x,y)}{2} $$ for a certain function $\psi(x,y).$ Then the remaining three equations (which are equivalent to vanishing of the coefficients of $e^{i k \phi},$ $k=0,1$) take the form: $$ 2 \gamma \Omega \sin \psi - C \Lambda_y + (\gamma^2 - C\Lambda) \psi_x = 0, \eqno(5.3) $$ $$ 2 \gamma \Omega \cos \psi + C \Lambda_x + (\gamma^2 - C\Lambda) \psi_y = 0, \eqno(5.4) $$ $$ 2 \Omega \Lambda - \gamma \Lambda_y \sin \psi + \gamma \Lambda_x \cos \psi = 0. \eqno(5.5) $$ Multiply (5.3) by $(\gamma \sin \psi),$ (5.4) by $(\gamma \cos \psi),$ (5.5) by $(-C),$ and add them up. We obtain $$ 2 \Omega + \gamma \psi_x \sin \psi + \gamma \psi_y \cos \psi = 0. $$ We can express the magnetic field: $$ \Omega (x,y) = \frac{\gamma}{2}((\cos \psi)_x - (\sin \psi)_y). $$ Multiply (5.3) by $(\gamma \cos \psi),$ (5.4) by $(\gamma \sin \psi),$ and subtract one from the other. We obtain: $$ \left((C\Lambda - \gamma^2) \sin \psi \right)_x + \left( (C\Lambda - \gamma^2) \cos \psi \right)_y = 0. \eqno(5.6) $$ Due to the expression for the magnetic field the relation (5.5) is equivalent to $$ (\Lambda \cos \psi)_x - (\Lambda \sin \psi)_y = 0. \eqno(5.7) $$ Introduce the notation $\rho(x, y) = \frac{C}{\gamma^2}\Lambda(x,y) - 1$. Then due to (5.6), (5.7) the functions $\rho(x,y), \psi(x,y)$ satisfy the relations $$ \left( \rho \sin \psi \right)_x + \left( \rho \cos \psi \right)_y = 0, \qquad \left( (\rho + 1) \sin \psi \right)_y - \left( (\rho + 1) \cos \psi \right)_x = 0. \eqno(5.8) $$ The system (5.8) is semi-Hamiltonian. In the hyperbolic domain (i.e. where $\rho (\rho + 1) < 0$) it admits the Riemann invariants $r_1(x,y), r_2(x,y):$ $$ \psi = \frac{1}{2} \left(r_1+r_2 \right), \qquad \rho = - \sin^2 \left( \frac{1}{4} \left(r_1 - r_2 \right) \right), $$ and can be diagonalized: $$ \frac{\partial r_1}{\partial y} = - \tan \left( \frac{1}{4} \left(3 r_1 + r_2 \right) \right) \frac{\partial r_1}{\partial x}, \qquad \frac{\partial r_2}{\partial y} = - \tan \left( \frac{1}{4} \left(r_1 + 3 r_2 \right) \right) \frac{\partial r_2}{\partial x}. $$ This system has an interesting property. It has the form $(r_j)_y+\lambda_j (r_j)_x=0,$ $j=1, 2,$ and one can easily check that $\frac{\partial \lambda_j}{\partial r_j} > 0$ everywhere. Due to this fact, apparently, this system does not admit smooth global non-constant solutions. For details we refer the reader to~\cite{412} where this observation was used to rigorously prove the non-existence of smooth global solutions for another system. The obtained semi-Hamiltonian system also has infinitely many commuting flows (see~\cite{13}) of the form $$ \frac{\partial r_1}{\partial t} = w_1 (r_1, r_2) \frac{\partial r_1}{\partial x}, \qquad \frac{\partial r_2}{\partial t} = w_2 (r_1, r_2) \frac{\partial r_2}{\partial x}, $$ where $w_1(r_1, r_2), w_2(r_1, r_2)$ are arbitrary functions satisfying the following relations: $$ \left( \tan \left(\frac{3r_1+r_2}{4} \right) - \tan \left(\frac{r_1+3r_2}{4} \right) \right) \frac{\partial w_2}{\partial r_1} + \frac{w_2 - w_1}{4 \cos^2 (\frac{r_1+3r_2}{4})} = 0, $$ $$ \left( \tan \left(\frac{3r_1+r_2}{4} \right) - \tan \left(\frac{r_1+3r_2}{4} \right) \right) \frac{\partial w_1}{\partial r_2} + \frac{w_2 - w_1}{4 \cos^2 (\frac{3r_1+r_2}{4})} = 0. 
$$ According to the generalized hodograph method (see~\cite{13}) any two such functions $w_1, w_2$ allow one to construct a solution to the initial semi-Hamiltonian system (5.8) (see the details in Section 3). Unfortunately, we have failed to construct non-trivial solutions by this method, so we shall proceed in a different way. The first equation in (5.8) means that there exists a function $\Phi(x,y)$ such that $$ \Phi_y = \rho \sin\psi, \qquad \Phi_x = -\rho\cos\psi. $$ Consequently, $\sin\psi = \frac{\Phi_y}{\sqrt{\Phi_x^2 + \Phi_y^2}},$ $\cos\psi = -\frac{\Phi_x}{\sqrt{\Phi_x^2 + \Phi_y^2}}$, and the function $\Phi(x,y)$ satisfies the equation $$ \triangle \Phi + \left( \frac{\Phi_x}{\sqrt{\Phi_x^2+\Phi_y^2}} \right)_x + \left( \frac{\Phi_y}{\sqrt{\Phi_x^2+\Phi_y^2}} \right)_y = 0 $$ or, equivalently, $$ (\Phi_x^2+\Phi_y^2)^{3/2} \triangle \Phi + \Phi_x^2 \Phi_{yy} - 2 \Phi_x \Phi_y \Phi_{xy} + \Phi_y^2 \Phi_{xx} =0. \eqno(5.9) $$ Let us make the Legendre transform (see~\cite{42}) of the equation (5.9), namely, assume $P = \Phi_x$, $Q = \Phi_y$ to be new independent variables and $Z = x P + y Q - \Phi$ to be the new unknown function. We obtain $$ ((P^2 + Q^2)^{3/2}+ P^2)Z_{PP} + 2 P Q Z_{PQ} + ((P^2 + Q^2)^{3/2} + Q^2)Z_{QQ} = 0. $$ This transformation allows us to obtain solutions such that $\Phi_{xx}\Phi_{yy} - \Phi_{xy}^2 \neq 0$. Since $Z_{PP}Z_{QQ} - Z_{PQ}^2 = (\Phi_{xx}\Phi_{yy} - \Phi_{xy}^2)^{-1}$, we shall search for solutions of the transformed equation such that $$ Z_{PP}Z_{QQ} - Z_{PQ}^2 \neq 0.\eqno{(5.10)} $$ Let us make the polar change of variables in the equation. Actually, we already have the equalities $P = -\rho\cos \psi$, $Q = \rho\sin\psi$, so it will be convenient to use the same notation for the new independent variables. Dividing the result by $\rho$, we obtain $$ \rho(\rho+1)Z_{\rho\rho} + \rho Z_\rho + Z_{\psi\psi}=0.\eqno{(5.11)} $$ In the new coordinates the condition (5.10) has the form $$ Z_{\rho\rho}Z_{\psi\psi} + \rho Z_\rho Z_{\rho\rho} - (Z_{\psi\rho}-\frac{1}{\rho}Z_\psi)^2\neq 0. $$ Since this condition is imposed on solutions to the equation (5.11), it is equivalent to the following condition: $$ \rho(\rho+1)Z_{\rho\rho}^2 + (Z_{\psi\rho}-\frac{1}{\rho}Z_\psi)^2\neq 0.\eqno{(5.12)} $$ Due to the inverse Legendre transform (\cite{42}) we have $x = Z_P$, $y = Z_Q$. By the inverse function theorem one can find $P$ and $Q$ in terms of $x$ and $y$ if the condition (5.10) holds true. If (5.12) holds true, then one may find $\rho$, $\psi$ in terms of $x$ and $y$ from the relations $$ Z_\rho = - Z_P \cos\psi + Z_Q \sin\psi = -x\cos\psi + y\sin\psi, $$ $$ Z_\psi = Z_P \rho\sin\psi + Z_Q \rho \cos\psi = x\rho\sin\psi + y\rho\cos\psi. $$ Based on these considerations, let us formulate the following \begin{theorem} Let the function $Z(\rho, \psi)$ be a solution to the equation (5.11). Assume that the functions $\rho(x,y)$, $\psi(x,y)$ satisfy the relations $$ Z_\rho(\rho,\psi) = -x\cos\psi + y\sin\psi,\quad Z_\psi(\rho,\psi) = x\,\rho\sin\psi + y\,\rho\cos\psi.\eqno{(5.13)} $$ Also assume that for any values $\rho(x,y)$, $\psi(x,y)$ the condition (5.12) holds true. Then the functions $\rho(x,y)$, $\psi(x,y)$ are solutions to the system (5.8). \end{theorem} \begin{proof} Some details were omitted in the considerations made above. In particular, it was implicitly assumed that $\rho>0$. However, the formulated statement can be proved by direct calculations. Let us differentiate (5.13) with respect to $x$ and $y$. 
Then it is easy to find the derivatives $\rho_x$, $\rho_y$, $\psi_x$, $\psi_y$ in terms of the functions $\rho$, $\psi$ from the obtained relations. Namely, the following relations hold true: $$ \rho_x = \frac{1}{D} (\rho Z_{\rho\psi}\sin\psi + Z_{\psi\psi}\cos\psi + \rho Z_\rho\cos\psi - Z_\psi\sin\psi), $$ $$ \rho_y = \frac{1}{D} (\rho Z_{\rho\psi}\cos\psi - Z_{\psi\psi}\sin\psi - \rho Z_\rho\sin\psi - Z_\psi\cos\psi), \eqno(5.14) $$ $$ \psi_x = \frac{1}{D} (-\rho Z_{\rho\rho}\sin\psi - Z_{\rho\psi}\cos\psi + \frac{1}{\rho} Z_\psi\cos\psi), $$ $$ \psi_y = \frac{1}{D} (-\rho Z_{\rho\rho}\cos\psi + Z_{\rho\psi}\sin\psi - \frac{1}{\rho} Z_\psi\sin\psi), $$ where $D=\rho(\rho + 1)Z_{\rho\rho}^2 + (Z_{\psi\rho}-\frac{1}{\rho}Z_\psi)^2$. Notice that the condition (5.12) has exactly the form $D\neq 0$. Substituting the derivatives (5.14) into the system (5.8), we obtain identities due to the fact that $Z$ satisfies the equation (5.11). Theorem 2 is proved. \end{proof} The first integral can be expressed in terms of $\psi(x,y)$ in the following way: $$ F = \frac{\cos\frac{\psi(x,y)}{2} p_1 - \sin\frac{\psi(x,y)}{2} p_2 + \gamma \sin\frac{\psi(x,y)}{2}}{ \sin\frac{\psi(x,y)}{2} p_1 + \cos\frac{\psi(x,y)}{2} p_2 + \gamma\cos\frac{\psi(x,y)}{2}}.\eqno(5.15) $$ Theorem 2 demonstrates that if for some solution to (5.11) one can solve the equations (5.13) for $\rho$, $\psi,$ then one can construct an example of a metric and a magnetic field such that the corresponding magnetic geodesic flow admits a rational integral at least at a fixed energy level. As will be shown in the next section, it is not difficult to obtain explicit solutions to the equation (5.11). The main obstacle to constructing explicit examples of metrics and magnetic fields is that it is difficult to solve the equations (5.13) for $\rho$, $\psi$ explicitly. However, one can overcome these difficulties by making the change of variables $(x,y)\to(\rho,\psi).$ Let $Z(\rho, \psi)$ be a solution to the equation (5.11). Let us express $x$, $y$ in terms of $\rho$, $\psi$ from (5.13): $$ x = - Z_\rho(\rho, \psi)\cos\psi + \frac{1}{\rho}Z_\psi(\rho,\psi)\sin\psi,\quad y = Z_\rho(\rho, \psi)\sin\psi + \frac{1}{\rho}Z_\psi(\rho,\psi)\cos\psi.\eqno{(5.16)} $$ If the condition (5.12) holds true at some point $\rho_0$, $\psi_0,$ then this map has a non-degenerate Jacobian matrix. Then one can make the change of variables in the initial metric, in the magnetic field and in the first integral by choosing $\rho$, $\psi$ as the new independent variables. The corresponding canonical change of momenta has the form $$ p_1 = \frac{y_\psi p_\rho - y_\rho p_\psi}{x_\rho y_\psi - x_\psi y_\rho} = \rho_x p_\rho + \psi_x p_\psi,\quad p_2 = \frac{-x_\psi p_\rho + x_\rho p_\psi}{x_\rho y_\psi - x_\psi y_\rho} = \rho_y p_\rho + \psi_y p_\psi.\eqno{(5.17)} $$ We shall obtain the explicit formulae in these variables for the metric, the magnetic field and the coefficients of the first integral. The following theorem holds true. \begin{theorem} Let $Z(\rho,\psi)$ be a solution to the equation (5.11) in a certain domain $A\subset \{-1 < \rho < 0\}\cup\{\rho > 0\}$, and let the condition (5.12) hold true everywhere in this domain. 
Then in the domain $A$ the geodesic flow of the metric \begin{multline*} ds^2 = \frac{\gamma^2(\rho + 1)}{C\rho^4}\big((\rho^4 Z_{\rho\rho}^2 + (\rho Z_{\rho\psi} - Z_\psi)^2)d\rho^2 -2\rho^2 Z_{\rho\rho}(\rho Z_{\rho\psi}-Z_\psi)d\rho d\psi\\ +\rho^2(\rho^2(\rho+1)^2 Z_{\rho\rho}^2+(\rho Z_{\rho\psi} - Z_\psi)^2)d\psi^2\big)\tag{5.18} \end{multline*} in the magnetic field $$ \omega = \frac{\gamma}{2}Z_{\rho\rho}d\rho\wedge d\psi\eqno{(5.19)} $$ admits the rational in momenta integral $$ F = \frac{a_0(\rho, \psi) p_\rho + a_1(\rho, \psi)p_\psi + \gamma D \sin\frac{\psi}{2}}{ b_0(\rho, \psi) p_\rho + b_1(\rho, \psi) p_\psi + \gamma D \cos\frac{\psi}{2}},\eqno{(5.20)} $$ where $$ a_0(\rho, \psi) = \rho Z_{\rho\psi}\sin\frac{\psi}{2} + Z_{\psi\psi}\cos\frac{\psi}{2} + \rho Z_\rho\cos\frac{\psi}{2} - Z_\psi\sin\frac{\psi}{2}, $$ $$ b_0(\rho, \psi) = \rho Z_{\rho\psi}\cos\frac{\psi}{2} - Z_{\psi\psi}\sin\frac{\psi}{2} - \rho Z_\rho\sin\frac{\psi}{2} - Z_\psi\cos\frac{\psi}{2}, $$ $$ a_1(\rho, \psi) = -\rho Z_{\rho\rho}\sin\frac{\psi}{2} - Z_{\rho\psi}\cos\frac{\psi}{2} + \frac{1}{\rho} Z_\psi\cos\frac{\psi}{2}, $$ $$ b_1(\rho, \psi) = -\rho Z_{\rho\rho}\cos\frac{\psi}{2} + Z_{\rho\psi}\sin\frac{\psi}{2} - \frac{1}{\rho} Z_\psi\sin\frac{\psi}{2}, $$ $$ D = \rho(\rho+1)Z_{\rho\rho}^2+\left(\frac{Z_\psi}{\rho}- Z_{\rho\psi}\right)^2 $$ at the fixed energy level $\{H=\frac{C}{2}\}.$ \end{theorem} \begin{proof} We have to show that the magnetic Poisson bracket $$ \{F,H\}_{mg} = \frac{\partial F}{\partial \rho} \frac{\partial H}{\partial p_\rho} - \frac{\partial F}{\partial p_\rho} \frac{\partial H}{\partial \rho} + \frac{\partial F}{\partial \psi} \frac{\partial H}{\partial p_\psi} - \frac{\partial F}{\partial p_\psi} \frac{\partial H}{\partial \psi} + \frac{\gamma}{2} Z_{\rho\rho} \left ( \frac{\partial F}{\partial p_\rho} \frac{\partial H}{\partial p_\psi} - \frac{\partial F}{\partial p_\psi} \frac{\partial H}{\partial p_\rho} \right )\eqno{(5.21)} $$ vanishes for all $(\rho,\psi, p_\rho, p_\psi)$ belonging to the energy level $\{H=\frac{C}{2}\}$ such that $(\rho,\psi)\in A.$ Since the condition (5.12) holds true in the domain $A,$ for any point $(\rho_0, \psi_0)\in A$ the map given by (5.16) is a diffeomorphism of a certain neighborhood of this point onto a certain neighborhood of its image, namely the point $(x_0, y_0)$ (by the inverse function theorem). This diffeomorphism allows us to change the coordinates to $x$, $y$ in this neighborhood; the new momenta are defined by the formulae (5.17). This change transforms the metric (5.18), the magnetic field (5.19) and the function (5.20) to the metric $\frac{\gamma^2}{C}(\rho(x,y)+1)(dx^2+dy^2)$, the magnetic field $\frac{\gamma}{2}\big((\cos\psi(x,y))_x-(\sin\psi(x,y))_y\big)dx\wedge dy$ and the function (5.15) respectively. Due to Theorem 2 the functions $\rho(x, y)$, $\psi(x, y)$ defining the inverse map to (5.16) are solutions to the system (5.8). Consequently, in the coordinates $(x, y)$ the magnetic Poisson bracket vanishes on the set $\{H=\frac{C}{2}\}.$ Since the transform is canonical, the magnetic Poisson bracket (5.21) vanishes at the point $(\rho_0,\psi_0)$. Due to the arbitrariness of this point we obtain that (5.21) vanishes everywhere in the domain $A$ at the energy level $\{H=\frac{C}{2}\}.$ Theorem 3 is proved. \end{proof} \begin{remark} The condition (5.12) holds true if $\rho > 0.$ So if a solution to (5.11) is defined everywhere for $\rho > 0$, then one can assume $A$ to be equal to $\{\rho > 0\}$ in Theorem~3. 
\end{remark} \section{Solutions to the key equation and examples} Let us construct certain particular solutions to the equation (5.11) which yield integrable examples via Theorem 3. We shall use the method of separation of variables, i.e. we shall search for a solution to (5.11) in the form $Z(\rho,\psi) = Z_1(\rho)Z_2(\psi)$. Let us substitute it into the equation and divide it by $Z$. One can transform the obtained equality in such a way that the left-hand side depends only on $\rho$, and the right-hand side depends only on $\psi$, i.e. both sides are equal to a certain constant $\mu$. Namely, the following relations hold true: $$ -\rho(\rho + 1)\frac{Z_1''}{Z_1} - \rho\frac{Z_1'}{Z_1} = \frac{Z_2''}{Z_2} = \mu.\eqno{(6.1)} $$ The general solution to the second equation of (6.1) has the form $Z_2(\psi) = C_1 e^{\sqrt{\mu}\psi} + C_2 e^{-\sqrt{\mu}\psi}$. The first equation of (6.1) is equivalent to $$ \rho(\rho + 1)Z_1'' + \rho Z_1' + \mu Z_1 = 0. $$ Following~\cite{43}, let us write out the general solution to this equation. In the case $\mu\neq -k^2$, $k\in\mathbb{Z},$ we have $$ Z_1 = C_1 \rho \,_2F_1(1-i\sqrt{\mu},1+i\sqrt{\mu}; 2; -\rho) + C_2 \,_2F_1(-i\sqrt{\mu}, i\sqrt{\mu}; 1; \rho + 1). $$ In the case $\mu = -k^2$, $k\in\mathbb{Z},$ we have $$ Z_1 = C_1 \rho \,_2F_1(1-k,1+k; 2; -\rho) + C_2 \frac{1}{\rho^{|k|}} \,_2F_1\left(|k|+1,|k|; 2|k|+1; -\frac{1}{\rho}\right). $$ In these equalities $\,_2 F_1$ is the hypergeometric function. Now let us formulate the general statement. \begin{lemma} For any $\nu\in\mathbb{C}$ the functions $$ \rho \,_2F_1(1-\nu,1+\nu; 2; -\rho)e^{i\nu\psi},\eqno{(6.2)} $$ $$ \,_2F_1(-\nu, \nu; 1; \rho + 1)e^{i\nu\psi}\eqno{(6.3)} $$ are solutions to the equation (5.11). For any $\nu\in\mathbb{Z}\setminus\{0\}$ the function $$ \frac{1}{\rho^{|\nu|}}\left.\frac{d^{|\nu|-1}}{d\zeta^{|\nu|-1}} \left(\frac{1}{\zeta^{|\nu|+1}}\int_0^\zeta\frac{(\zeta-\xi)^{|\nu|-1}\xi}{1-\xi}d\xi\right)\right|_{\zeta=-\frac{1}{\rho}} e^{i\nu\psi}\eqno{(6.4)} $$ is also a solution to (5.11). Any linear combinations of real and imaginary parts of these solutions for different values of $\nu$ are also solutions. \end{lemma} \begin{proof} It follows directly from the previous discussion that (6.2), (6.3) are solutions. Let us prove the equality \begin{multline*} \frac{1}{\rho^{|\nu|}} \,_2F_1\left(|\nu|+1,|\nu|; 2|\nu|+1; -\frac{1}{\rho}\right)=\\ =\frac{(2|\nu|)!}{|\nu|!(|\nu|-1)!^2}\frac{1}{\rho^{|\nu|}}\left.\frac{d^{|\nu|-1}}{d\zeta^{|\nu|-1}} \left(\frac{1}{\zeta^{|\nu|+1}}\int_0^\zeta\frac{(\zeta-\xi)^{|\nu|-1}\xi}{1-\xi}d\xi\right)\right|_{\zeta=-\frac{1}{\rho}}, \tag{6.5} \end{multline*} which will imply that the function (6.4) is also a solution. We shall use the following well-known equality for hypergeometric functions (\cite{43}): $$ \frac{d}{d\zeta}\,_2F_1(a,b;c;\zeta) = \frac{ab}{c}\,_2F_1(a+1,b+1;c+1;\zeta). $$ Applying it successively, we obtain the equality $$ \,_2F_1(m+1,m; 2m+1; \zeta) = \frac{(2m)!}{m!(m-1)!(m+1)!}\frac{d^{m-1}}{d\zeta^{m-1}}\,_2F_1(2,1;m+2;\zeta), $$ which holds true for any $m\in\mathbb{N}$. Finally, the following sequence of equalities holds true \begin{multline*} \,_2F_1(2,1;m+2;\zeta)=\sum_{l=0}^\infty \frac{(l+1)!(m+1)!}{(m+l+1)!}\zeta^l =\frac{m(m+1)}{\zeta^{m+1}}\sum_{l=0}^\infty\frac{(m-1)!(l+1)!}{(m+l+1)!}\zeta^{m+l+1}=\\ =\frac{m(m+1)}{\zeta^{m+1}}\sum_{l=0}^\infty \int_0^\zeta (\zeta - \xi)^{m-1}\xi^l d\xi= \frac{m(m+1)}{\zeta^{m+1}}\int_0^\zeta\frac{(\zeta - \xi)^{m-1}\xi}{1-\xi}d\xi, \end{multline*} which implies (6.5). Lemma 1 is proved. 
\begin{remark} In the case $\nu\in\mathbb{Z}\setminus\{0\}$ the first two solutions given in Lemma 1 are proportional to each other. Moreover, in this case they are polynomials due to the fact that the hypergeometric series has only a finite number of non-zero terms. Namely, for any $k\in\mathbb{Z}\setminus\{0\}$ the following relation holds true: $$ \rho \,_2F_1(1-k,1+k; 2; -\rho)=\sum_{j=1}^k \frac{(k+j-1)!}{k(k-j)!(j-1)!j!}\rho^j. $$ \end{remark} Let us show an example when the equations (5.13) can be explicitly solved with respect to $\rho$, $\psi$. \begin{example} Consider a solution to the equation (5.11) which is obtained from Lemma 1 for $\nu = 0$: $Z(\rho, \psi) = \mathrm{ln}(1 + \rho)$. The equations (5.13) have the form $$ \frac{1}{1 + \rho} = -x\cos\psi + y\sin\psi, \qquad 0 = x\sin\psi + y\cos\psi. $$ Consequently $\rho = \frac{1}{\sqrt{x^2+y^2}}-1$, $\cos\psi = \frac{-x}{\sqrt{x^2+y^2}}$, $\sin\psi = \frac{y}{\sqrt{x^2+y^2}}$. We obtain $$ \Lambda(x,y) = \frac{\gamma^2}{C \sqrt{x^2+y^2}},\quad \Omega(x,y)=-\frac{\gamma}{2\sqrt{x^2+y^2}}. $$ So the geodesic flow of the metric $ds^2 = \Lambda(x,y)(dx^2+dy^2)$ in the magnetic field $\Omega(x,y)dx\wedge dy$ admits the rational integral $$ F = \frac{(\sqrt{x^2+y^2}-x) p_1 - y p_2 + \gamma y}{ y p_1 + (\sqrt{x^2+y^2}-x) p_2 + \gamma\sqrt{x^2+y^2} - \gamma x}. $$ Here $C > 0, \gamma$ are arbitrary constants. It is easy to check that $F$ is a first integral at all energy levels. Besides, the metric is flat, and the change $x = \frac{C}{4\gamma^2} (u^2 - v^2)$, $y = \frac{C}{2\gamma^2} u v$ transforms it into the Euclidean one, which implies that there exists a linear integral. To write it out, it is convenient to make the change of variables $x = e^{\xi}\cos\eta$, $y = e^{\xi}\sin\eta$. After this change the metric takes the form $ds^2 = \frac{\gamma^2}{C}e^{\xi}(d\xi^2+d\eta^2)$, and the magnetic field becomes $-\frac{\gamma}{2}e^{\xi}d\xi\wedge d\eta$. Consequently, due to Example 2 there exists a linear integral, which has the form $$ F_1 = p_\eta - \frac{\gamma}{2}e^{\xi} = -y p_1 + x p_2 - \frac{\gamma}{2}\sqrt{x^2 + y^2}. $$ One can check directly that $H$, $F$ and $F_1$ are functionally independent, i.e. this is an example of a superintegrable magnetic geodesic flow at all energy levels. The constructed example is defined on the set $x^2 + y^2 \neq 0$. \end{example}
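The linear integral of Example 4 can also be verified symbolically: the sketch below checks that $F_1$ commutes with the kinetic Hamiltonian $H=\frac{p_1^2+p_2^2}{2\Lambda}$ under the Cartesian form of the magnetic Poisson bracket (5.21), with magnetic term $\Omega\,(F_{p_1}H_{p_2}-F_{p_2}H_{p_1})$. The bracket vanishes identically, i.e. at all energy levels, in agreement with the discussion above.
\begin{verbatim}
import sympy as sp

x, y, p1, p2 = sp.symbols('x y p1 p2', real=True)
C, gamma = sp.symbols('C gamma', positive=True)
r = sp.sqrt(x**2 + y**2)

Lam = gamma**2 / (C * r)                  # conformal factor Lambda(x, y)
Omega = -gamma / (2 * r)                  # magnetic field Omega(x, y)
H = (p1**2 + p2**2) / (2 * Lam)           # kinetic Hamiltonian of the metric
F1 = -y * p1 + x * p2 - gamma * r / 2     # the linear integral of Example 4

bracket = (sp.diff(F1, x) * sp.diff(H, p1) - sp.diff(F1, p1) * sp.diff(H, x)
         + sp.diff(F1, y) * sp.diff(H, p2) - sp.diff(F1, p2) * sp.diff(H, y)
         + Omega * (sp.diff(F1, p1) * sp.diff(H, p2)
                    - sp.diff(F1, p2) * sp.diff(H, p1)))
assert sp.simplify(bracket) == 0          # vanishes at all energy levels
\end{verbatim}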
Let us show examples obtained via Theorem 3. \begin{example} Assume that $\nu = 2$ in Lemma 1. Due to Remark 2 it is not difficult to find a particular solution to the equation (5.11) of the form $Z(\rho, \psi)=\left(\frac{2}{3}\rho + \rho^2\right)\cos(2\psi)$. We obtain the metric $$ ds^2 = \frac{2\gamma^2(\rho+1)}{C}(2d\rho^2 + 2\sin (4\psi) d\rho d\psi + (1 + 2\rho + 2\rho^2 + (1 + 2\rho)\cos (4\psi))d\psi^2), $$ the magnetic field $$ \omega = \gamma\cos(2\psi)d\rho\wedge d\psi, $$ and the rational integral $$ F = \frac{\cos\left(\frac{\psi}{2}\right)(\rho - 2\rho\cos\psi - \cos (2\psi))p_\rho + \sin\left(\frac{3\psi}{2}\right)p_\psi +\gamma(1+2\rho+\cos (4\psi))\sin\left(\frac{\psi}{2}\right)}{ -\sin\left(\frac{\psi}{2}\right)(\rho+2\rho\cos\psi-\cos (2\psi))p_\rho - \cos\left(\frac{3\psi}{2}\right)p_\psi +\gamma(1+2\rho+\cos (4\psi))\cos\left(\frac{\psi}{2}\right)} $$ at the energy level $\{H=\frac{C}{2}\}.$ Here $C > 0, \gamma$ are arbitrary constants; the parametrization of the momenta at this energy level has the form: $$ p_\rho (\phi) = -2 \sqrt{\gamma^2 (1+\rho)} \cos (\phi - \psi), \ p_\psi (\phi) = -\sqrt{\gamma^2 (1+\rho)} (\sin (\phi + 3\psi) + (1+2 \rho) \sin (\phi - \psi)). $$ One can check that this metric is well-defined for $\rho > -1$ and degenerates on the set $\rho = -\cos^2 2\psi$. Besides, the metric, the magnetic field and the first integral are $2\pi$-periodic with respect to $\psi$. Thus this example is well-defined if $\rho > -\cos^2 2\psi$, $0 \leqslant \psi < 2\pi$. \end{example} \begin{example} Consider the real part of the solution (6.4) for $\nu = 1,$ $\rho > 0:$ $$ Z = \left(\rho \mathrm{ln}\left(1+\frac{1}{\rho}\right)-1\right)\cos\psi. $$ We obtain the metric \begin{multline*} ds^2 = \frac{\gamma^2}{2 C \rho^4(\rho+1)^3}\big((1 + 2\rho(\rho+1) - (1+2\rho)\cos (2\psi))d\rho^2-\\ -2\rho(\rho + 1)\sin (2\psi) d\rho d\psi+2\rho^2(\rho+1)^2d\psi^2\big), \end{multline*} the magnetic field $$ \omega = -\frac{\gamma\cos\psi}{2\rho(\rho+1)^2}d\rho\wedge d\psi $$ and the rational integral \begin{multline*} F = \Big[2\rho(\rho+1)\left(p_\rho\rho(\rho+1)\cos \left(\frac{3\psi}{2}\right)+ p_\psi(1+\rho+(1+2\rho)\cos\psi)\sin\left(\frac{\psi}{2}\right)\right)+\\ +\gamma(1+2\rho-\cos (2\psi))\sin\left(\frac{\psi}{2}\right)\Big]/\\ \Big[2\rho(\rho+1)\left(-p_\rho\rho(\rho+1)\sin\left(\frac{3\psi}{2}\right)-p_\psi(1+\rho-(1+2\rho)\cos\psi)\cos\left(\frac{\psi}{2}\right)\right)+\\ +\gamma(1+2\rho-\cos (2\psi))\cos\left(\frac{\psi}{2}\right)\Big] \end{multline*} at the energy level $\{H=\frac{C}{2}\}.$ Here $C > 0, \gamma$ are arbitrary constants; the parametrization of the momenta at this energy level has the form: $$ p_\rho (\phi) = \frac{\gamma (-\cos \phi + (1+2 \rho) \cos (\phi + 2\psi))}{2 \rho^2 (1+\rho)^{3/2}} , \quad p_\psi (\phi) = \frac{\gamma \sin (\phi + 2\psi)}{\rho \sqrt{1+\rho}}. $$ This metric is well-defined if $\rho > 0.$ It is $2\pi$-periodic with respect to $\psi$, as are the magnetic field and the first integral. Besides, it is well-defined in some domains in the strip $-1 < \rho < 0$ bounded by the curve $\rho = -\sin^2\psi$ and the straight lines $\rho = 0$, $\rho = -1$. \end{example} In addition we note that the curvatures of the metrics constructed in Examples 5 and 6 are non-zero in general. In conclusion let us make a couple of general remarks related to other integrable examples which can be obtained from our construction and their possible forms. \begin{remark} Let $k\in\mathbb{N}$. Then certain solutions to the equation (5.11) can be represented in the form $$ Z(\rho,\psi) = P_k(\rho)\cos (k(\psi+\psi_0)), $$ where $P_k$ is a polynomial of degree $k$ (it is uniquely defined by the value of $k$ up to a constant multiplier); moreover $P_k(0)=0,$ and $\psi_0$ is an arbitrary constant. 
This solution is obtained from Lemma 1 as a linear combination of real parts of solutions of the form (6.2), due to Remark 2. It is easy to notice that in this example the metric, the magnetic field and the first integral have coefficients which are polynomial in $\rho$ and trigonometric polynomials in $\psi$. \end{remark} \begin{remark} Solutions of the form (6.4) (for $\nu=k$) can be represented in the form $$ Z(\rho,\psi)=\left(P_{k-1}(\rho)+P_k(\rho)\mathrm{ln}\left(1+\frac{1}{\rho}\right)\right)e^{ik\psi},\eqno{(6.6)} $$ where $k\in\mathbb{N}$, and $P_{k-1}$, $P_k$ are certain polynomials of degrees $k-1$ and $k$ respectively. Indeed, consider one of the factors of (6.4). It follows from the Leibniz formula that $$ \frac{d^{k-1}}{d\zeta^{k-1}}\left(\frac{1}{\zeta^{k+1}}\int_0^\zeta\frac{(\zeta - \xi)^{k-1}\xi}{1-\xi}d\xi\right)=\sum_{j=0}^{k-1} a_{j,k}\zeta^{-2k+j}\int_0^\zeta \frac{(\zeta-\xi)^{k-j-1}\xi}{1-\xi}d\xi,\eqno{(6.7)} $$ where $a_{j,k}$ are certain constants. One can check that $$ \int_0^\zeta \frac{(\zeta-\xi)^{k-j-1}\xi}{1-\xi}d\xi = \tilde{P}_{1,k-j}(\zeta)+\tilde{P}_{2,k-j-1}(\zeta)\mathrm{ln}(1-\zeta), $$ where $\tilde{P}_{1,k-j}(\zeta)$, $\tilde{P}_{2,k-j-1}(\zeta)$ are polynomials of degrees $k-j$ and $k-j-1$ respectively; moreover $\tilde{P}_{1,k-j}(0)=0$. Substituting this expression into (6.7), we obtain $$ \frac{d^{k-1}}{d\zeta^{k-1}}\left(\frac{1}{\zeta^{k+1}}\int_0^\zeta\frac{(\zeta - \xi)^{k-1}\xi}{1-\xi}d\xi\right)= \sum_{l=-2k+1}^{-k}b_l \zeta^l + \sum_{l=-2k}^{-k-1} c_l \zeta^l\ln(1-\zeta). $$ Here $b_l$, $c_l$ are certain constants. Substituting $\zeta = -\frac{1}{\rho}$ into the obtained relation and multiplying by $\frac{1}{\rho^k}$, by a certain constant and by $e^{ik\psi}$, we obtain (6.6). The real part for $\rho > 0$ has the form $$ Z(\rho,\psi)=\left(P_{k-1}(\rho)+P_k(\rho)\mathrm{ln}\left(1+\frac{1}{\rho}\right)\right)\cos(k\psi). $$ Thus in the corresponding examples the coefficients of the metric, the magnetic field and the first integral are trigonometric polynomials in $\psi$ and polynomials in $\rho$ and in $\ln\left(1 + \frac{1}{\rho}\right)$; moreover, they have degree at most two as polynomials in $\ln\left(1 + \frac{1}{\rho}\right)$. \end{remark} \begin{remark} In other particular cases the solutions given in Lemma 1 are functions which can be expressed in terms of complete elliptic integrals of the first and the second kind. The following equalities hold true (see~\cite{44}) $$ K(m) = \frac{1}{2}\pi\,_2F_1\left(\frac{1}{2},\frac{1}{2};1;m\right),\quad E(m) = \frac{1}{2}\pi\,_2F_1\left(-\frac{1}{2},\frac{1}{2};1;m\right), $$ where $K$, $E$ are complete elliptic integrals of the first and the second kind correspondingly: $$ K(m) = \int_0^{\frac{\pi}{2}}\frac{d\theta}{\sqrt{1-m\sin^2\theta}},\quad E(m) = \int_0^{\frac{\pi}{2}}\sqrt{1-m\sin^2\theta}d\theta. $$ Due to the properties of the hypergeometric function (see~\cite{43}, formulae (31)--(45) in Section 2.8), for half-integer $\nu$ the solutions (6.2), (6.3) can be expressed in terms of elementary functions and the functions $K$, $E$. For instance, assume that $\nu=\frac{1}{2}$. Then the real part of the solution (6.2) takes the form (for $\rho > -1$) $$ Z(\rho, \psi) = \frac{4}{\pi}\big(E(-\rho)-K(-\rho)\big)\cos\frac{\psi}{2}. $$ The corresponding metric, the magnetic field and the first integral can be expressed in terms of complete elliptic integrals. 
\end{remark} \begin{remark} Solutions to the equation (5.11) of a more general form can be obtained via the Fourier transform of this equation with respect to $\psi$. In particular, in this way one can obtain solutions which have the form of integrals with respect to $\nu$ of the solutions (6.2), (6.3) multiplied by certain compactly supported (or sufficiently rapidly decreasing) functions of $\nu$. Even more general solutions can be obtained if one considers the Fourier transform in the sense of generalized functions (see~\cite{45}). We do not formulate any concrete statements here and do not give any solutions obtained via this approach, since the corresponding examples of metrics and magnetic fields turn out to be very cumbersome. \end{remark} \section{Conclusion} In this paper we study magnetic geodesic flows on 2-surfaces admitting an additional first integral at a fixed energy level. Depending on the form of the integral, the question of its existence reduces to searching for solutions to certain semi-Hamiltonian systems of PDEs. In the case of a quadratic in momenta integral we construct exact solutions via the generalized hodograph method. These solutions correspond to Riemannian metrics of non-zero curvature and to non-zero magnetic fields. It would be interesting to investigate the possibility of applying this method to construct integrals of higher degrees. In the case of a rational in momenta integral with a linear numerator and denominator we manage to reduce the corresponding semi-Hamiltonian system to a linear PDE of the second order. We show that one can construct a rich family of explicit solutions to this equation via the method of separation of variables. It would be interesting to construct examples of rational integrals with numerators and denominators of higher degrees. Summing up, in this paper we explicitly construct new examples of Riemannian metrics and magnetic fields on 2-surfaces such that the corresponding geodesic flows admit an additional first integral, either polynomial or rational in momenta, at a fixed energy level. \vspace{0.2cm} {\bf Acknowledgments.} The authors thank Professor I.A.~Taimanov and Professor A.E.~Mironov, and also S.G.~Basalaev and N.A.~Evseev for useful discussions.
\section{Introduction} Though the standard model (SM) has been verified time and again, many new physics models beyond the SM have been proposed to solve both experimental and aesthetic problems, such as neutrino masses, the muon anomalous magnetic moment, or the hierarchy problem. Many new models introduce vector-like particles (VLP) \cite{VLP-rev}, whose right-handed and left-handed components transform in the same way under the weak SU(2)$\times$U(1) gauge group. The extension is acceptable because the anomalies generated by the VLPs cancel automatically, and vector-like quarks can naturally be heavy. VLPs also arise in some grand unified theories. For example, in order to explain the little hierarchy problem between the traditional GUT scale and the string scale, a testable flipped $SU(5)\times U(1)_X$ model is proposed in Ref. \cite{Jiang:2009zza}, in which TeV-scale VLPs were introduced~\cite{Jiang:2006hf}. Such models can be constructed from the free fermionic string constructions at Kac-Moody level one~\cite{Antoniadis:1988tt, Lopez:1992kg} and from local F-theory models~\cite{Beasley:2008dc, Jiang:2009zza}. However, when we study flavor physics with doublet VLPs in these models~\cite{Li:2012xz, Li:2015nya}, a problem always appears in dealing with the mass spectrum of quarks and leptons. Let us start with the SM, in which all fermion masses come from the Yukawa couplings. After spontaneous gauge symmetry breaking, we get two separate mass matrices $M_U,~M_D$ for the up- and down-type fermions. The mass eigenstates are obtained after the diagonalization \begin{eqnarray} \label{eq:1} Z_U^\dagger M_U U_U=M_U^D,~~ Z_D^\dagger M_D U_D=M_D^D, \end{eqnarray} where $M_U^D={\rm diag.}[m_u,m_c,m_t]$, $M_D^D={\rm diag.}[m_d,m_s,m_b]$. The physically measurable parameters are the $m_i$ and the so-called CKM matrix \begin{equation} \label{eq:defckm} V_{\rm CKM} = U_U^\dagger U_D. \end{equation} Since $M_U,~M_D$ come from separate Yukawa couplings, we can always set one of the matrices diagonal, for example $M_U$, and use the CKM matrix to get the Yukawa couplings \begin{eqnarray} \label{eq:11} Z_D\left( \begin{array}{ccc} m_d & 0 & 0\\ 0 & m_s & 0\\ 0 & 0 & m_b \end{array} \right)V_{\rm CKM}^\dagger= \left( \begin{array}{ccc} Y^D_{11}v & Y^D_{12}v & Y^D_{13}v\\ Y^D_{21}v & Y^D_{22}v & Y^D_{23}v\\ Y^D_{31}v & Y^D_{32}v & Y^D_{33}v \end{array} \right) \end{eqnarray} for calculations in flavor physics. Note that $v$ is the vacuum expectation value (VEV) of the Higgs, and $Z_D$ is a random unitary matrix. Such a trick cannot be used when a vector-like doublet participates, namely $Q$ with gauge charges $\bf 3,~2,~\frac{1}{6}$ and $\bar Q$ with gauge charges $\bf \bar 3,~2,~-\frac{1}{6}$, which results in a bilinear term in the Lagrangian $$M^VQ\cdot\bar Q.$$ It is clear that in such a model the matrices $M_U,~M_D$ share input parameters: \begin{eqnarray} \label{eq:mud} M_U &=& \left( \begin{array}{cccc} Y^U_{11}v & Y^U_{12}v & Y^U_{13}v & \cdots\\ Y^U_{21}v & Y^U_{22}v & Y^U_{23}v & \cdots\\ Y^U_{31}v & Y^U_{32}v & Y^U_{33}v & \cdots\\ M^V_{41} & M^V_{42} & M^V_{43} & \cdots \end{array}\right),~ M_D = \left( \begin{array}{cccc} Y^D_{11}v & Y^D_{12}v & Y^D_{13}v & \cdots\\ Y^D_{21}v & Y^D_{22}v & Y^D_{23}v & \cdots\\ Y^D_{31}v & Y^D_{32}v & Y^D_{33}v & \cdots\\ -M^V_{41} & -M^V_{42} & -M^V_{43} & \cdots \end{array}\right). \end{eqnarray} The mass matrices for up- and down-type quarks are thus related to each other. 
Therefore, we cannot set one of the matrices diagonal, and the CKM matrix cannot be obtained easily. The shooting method is usually used to treat such an obstacle: random $M_U$ and $M_D$ are generated to meet the requirements after diagonalization, namely the masses of the eigenstates and the measured elements of the CKM matrix. However, this is very time-consuming, and a precise solution of the diagonalization is almost unavailable. Although this is just a numerical problem, when one treats the VLP contributions to flavor physics seriously, the diagonalization of the quark mass matrices is the first and most important step. In this paper, we first propose a general method to overcome this obstacle in models with vector-like quark doublets. As an application, we study the rare B decay $B\to X_s\gamma$ in the SM with one vector-like quark doublet. The paper is organized as follows. We show the details of the trick in Section~2. The application to the $B\to X_s\gamma$ process, including the quark mass spectrum, Feynman rules and Wilson coefficients, as well as the numerical analysis of $B\to X_s\gamma$, is given in Section~3. A summary is given in Section~4. \section{The Trick of diagonalization of vector quark doublet}\label{sec2} Firstly, we state the problem clearly: how to deal with the diagonalization of the $N\times N$ matrices $M_U$ and $M_D$, \begin{eqnarray} \label{eq:mud2} Z_U^\dagger M_U U_U=M_U^D ,~ Z_D^\dagger M_D U_D=M_D^D , \end{eqnarray} in which $M_U^D, M_D^D$ are the diagonal mass matrices for up- and down-type quarks, respectively. Note that $N$ should be greater than 3; the first three entries in the matrices correspond to the three generations of quark multiplets in the SM, while the other entries with $N>3$ come from the new multiplets introduced in new physics beyond the SM. Then we have \begin{eqnarray} \label{eq:mvud} M_U = \left( \begin{array}{ccccc} Y^U_{11}v & Y^U_{12}v & Y^U_{13}v &\cdots & M_{U1N}\\ Y^U_{21}v & Y^U_{22}v & Y^U_{23}v &\cdots & M_{U2N}\\ Y^U_{31}v & Y^U_{32}v & Y^U_{33}v &\cdots & M_{U3N}\\ \cdots & \cdots & \cdots &\cdots & \cdots \\ M^V_{N1} & M^V_{N2} & M^V_{N3} & \cdots & M_{UNN} \end{array}\right),~ M_D = \left( \begin{array}{ccccc} Y^D_{11}v & Y^D_{12}v & Y^D_{13}v &\cdots & M_{D1N}\\ Y^D_{21}v & Y^D_{22}v & Y^D_{23}v &\cdots & M_{D2N}\\ Y^D_{31}v & Y^D_{32}v & Y^D_{33}v &\cdots & M_{D3N}\\ \cdots & \cdots & \cdots &\cdots & \cdots \\ -M^V_{N1} & -M^V_{N2} & -M^V_{N3} & \cdots & M_{DNN} \end{array}\right). \end{eqnarray} The last rows of the two matrices contain the same parameters, up to a sign, except for the last elements. Since there are common parameters in $M_U$ and $M_D$, a very simple way forward is to add the two matrices in Eq. (\ref{eq:mvud}): \begin{eqnarray} M_U+M_D =\left(Z_U M^D_U U_{\rm CKMN}+Z_D M^D_D \right) U^\dagger_D. \label{eqvud} \end{eqnarray} The left-hand side of the equation is \begin{eqnarray} \label{eq:pvud} M_U +M_D= \left( \begin{array}{ccccc} Y^U_{11}v+Y^D_{11}v & Y^U_{12}v+Y^D_{12}v & Y^U_{13}v+Y^D_{13}v &\cdots & M_{U1N}+M_{D1N}\\ Y^U_{21}v+Y^D_{21}v & Y^U_{22}v+Y^D_{22}v & Y^U_{23}v+Y^D_{23}v &\cdots & M_{U2N}+M_{D2N}\\ Y^U_{31}v+Y^D_{31}v & Y^U_{32}v+Y^D_{32}v & Y^U_{33}v+Y^D_{33}v &\cdots & M_{U3N}+M_{D3N}\\ \cdots & \cdots & \cdots &\cdots & \cdots \\ 0 & 0 & 0 & \cdots & M_{UNN}+M_{DNN} \end{array}\right). \end{eqnarray} Obviously, the mass inputs from the bilinear terms vanish. 
We can write the matrix in block form as \begin{eqnarray} M_U+M_D =M_{UD}= \left( \begin{array}{cc} {\bf M_{A}} & {\bf M_{B}} \\ {\bf M_{0}}& M_C \end{array}\right), \end{eqnarray} in which ${\bf M_{A}}$, ${\bf M_{B}}$, ${\bf M_{0}}$ are $(N-1)\times(N-1)$, $(N-1)\times1$ and $1\times(N-1)$ matrices, respectively. To prepare for the diagonalization, we choose the diagonal mass matrix elements of the quarks $(m_u,m_c,m_t,\cdots m_X)$, $(m_d,m_s,m_b,\cdots, m_Y)$ and a matrix $U_{\rm CKMN}$, which are determined partly by experimental measurements, as input parameters: \begin{eqnarray} \label{eq:ukmall} U_{\rm CKMN} &=& U^\dagger_U U_D = \left( \begin{array}{cc} \left(U_{\rm CKM}\right)_{3\times 3} & \cdots \\ \cdots & U_{NN} \end{array}\right) = \left( \begin{array}{cc} \left( \begin{array}{ccc} U_{ud} & U_{us} & U_{ub} \\ U_{cd} & U_{cs} & U_{cb} \\ U_{td} & U_{ts} & U_{tb} \end{array}\right) & \cdots \\ \cdots & U_{NN} \end{array}\right). \label{CKM4} \end{eqnarray} Note that $Z_U,~Z_D,~U_U,~U_D$ above are unitary matrices, but $\left(U_{\rm CKM}\right)_{3\times 3}$ is not the ordinary CKM matrix $V_{\rm CKM}$, which is non-unitary in this case. A detailed discussion will be given in the following section. The next step is to generate a unitary matrix $U_D$. In a similar way we write $U_D$ in block form, \begin{eqnarray} U_D = \left( \begin{array}{cc} {\bf U_{DA}} & {\bf U_{DB}} \\ {\bf U_{D0}}& U_{DNN} \end{array}\right). \end{eqnarray} Multiplying both sides of Eq. (\ref{eqvud}) by the matrix $U_D$, we get \begin{eqnarray} M_{UD}U_D &=& \left( \begin{array}{cc} {\bf M_{A}}{\bf U_{DA}}+{\bf M_{B}}{\bf U_{D0}} & {\bf M_{A}}{\bf U_{DB}} + {\bf M_{B}}U_{DNN} \\ M_C{\bf U_{D0}} & M_C U_{DNN} \end{array}\right)\nn\\ &=& \left(Z_U M^D_U U_{\rm CKMN}+Z_D M^D_D \right). \end{eqnarray} From the above equation, we can get the last row of $U_D$ simply by inputting $M^D_U,~M^D_D, ~U_{\rm CKMN}$ and random $Z_U$, $Z_D$: \begin{eqnarray} \left(Z_U M^D_U U_{\rm CKMN}+Z_D M^D_D \right)_{\mbox{last row}}&=& \left( \begin{array}{cc} M_C{\bf U_{D0}} & M_C U_{DNN} \end{array}\right)\nn\\ &=& M_C {\bf U_{D}}_N , \end{eqnarray} where \begin{eqnarray} {\bf U_{D}}_N&=& \left( \begin{array}{cccc} U_{DN1} & U_{DN2}& \cdots & U_{DNN} \end{array}\right) \end{eqnarray} is a unit vector in $N$ dimensions. Next we use this unit vector to generate the full $U_D$. Since ${\bf M_{A}}$ and ${\bf M_{B}}$ are random matrices, $U_D$ can be random too. A unit vector ${\bf U_{D}}_{N-1}$ orthogonal to ${\bf U_{D}}_N$ can be chosen as \begin{eqnarray} {\bf U_{D}}_{N-1}&=& \left( \begin{array}{ccccc} -\frac{U_{DN2}^\ast}{\sqrt{|U_{DN1}|^2+|U_{DN2}|^2}} & \frac{U_{DN1}^\ast}{\sqrt{|U_{DN1}|^2+|U_{DN2}|^2}} & 0 &\cdots & 0 \end{array}\right). \end{eqnarray} It is clear that this vector is orthogonal to ${\bf U_{D}}_N$ and normalized to 1. Then we use the first three elements of ${\bf U_{D}}_N$ and ${\bf U_{D}}_{N-1}$ to generate ${\bf U_{D}}_{N-2}$: we normalize the algebraic complements (cofactors) of the first row of the corresponding $3\times 3$ matrix. 
Step by step, we can finally get (${\bf U_{D}}_1$, ${\bf U_{D}}_2$, $\cdots$, ${\bf U_{D}}_{N-1}$) and form a special $U_D^S$ \begin{eqnarray} \label{eq:mvud2} U_D^S &=&\left( \begin{array}{c} {\bf U_{D}}_1\\ \cdots\\ {\bf U_{D}}_{N-2}\\ {\bf U_{D}}_{N-1}\\ {\bf U_{D}}_N \end{array} \right) =\left( \begin{array}{ccccc} U_{D11} & U_{D12} & U_{D13} &\cdots & U_{D1N} \\ \cdots & \cdots & \cdots &\cdots & 0\\ U_{D(N-2)1} & U_{D(N-2)2} & U_{D(N-2)3} &\cdots & 0\\ U_{D(N-1)1} & U_{D(N-1)2} & 0 & \cdots & 0 \\ U_{DN1} & U_{DN2} & U_{DN3} & \cdots & U_{DNN} \end{array}\right). \end{eqnarray} From the above steps, we can see that (${\bf U_{D}}_1$, ${\bf U_{D}}_2$, $\cdots$, ${\bf U_{D}}_{N-1}$) can be rotated into any other $N-1$ orthonormal vectors to construct random matrices ${\bf M_{A}}$ and ${\bf M_{B}}$; only ${\bf U_{D}}_N$ must be kept unchanged. Therefore, a general unitary matrix can be realized by multiplying by a unitary $N\times N$ matrix $U_R$, \begin{eqnarray} U_D = U_R U^S_D =\left(\begin{array}{cc} {\bf U_R}_{N-1} & \bf 0 \\ \bf 0 & 1 \end{array}\right) U^S_D , \end{eqnarray} in which ${\bf U_R}_{N-1}$ is an $(N-1)\times (N-1)$ unitary matrix. We finish the work by \begin{eqnarray} U^\dagger_U &=& U_{\rm CKMN}U^\dagger_D,\nonumber\\ M_U &=& Z_U M^D_U U^\dagger_U,\nonumber\\ M_D &=& Z_D M^D_D U^\dagger_D. \end{eqnarray} At this stage, we would like to summarize our method: \begin{itemize} \item Step 1: Choose ($m_u,m_c,m_t,\cdots, m_X,m_d,m_s,m_b,\cdots, m_Y$) and $U_{\rm CKMN}$, and generate random unitary matrices $Z_U$ and $Z_D$ as the inputs for the model; \item Step 2: Determine the last row of the matrix $Z_U M^D_U U_{\rm CKMN}+Z_D M^D_D$ as \begin{eqnarray} M_C\left( \begin{array}{cccc} U_{DN1} & U_{DN2}& \cdots & U_{DNN} \end{array}\right) \end{eqnarray} and normalize it into a unit vector ${\bf U_D}_N$. \item Step 3: Use the unit vector ${\bf U_D}_N$ to generate $N-1$ further orthonormal vectors (${\bf U_{D}}_1$, ${\bf U_{D}}_2$, $\cdots$, ${\bf U_{D}}_{N-1}$), and form a special $U^S_D$ \begin{eqnarray} \label{eq:mvud3} U_D^S &=&\left( \begin{array}{ccccc} {\bf U_{D}}_1 & \cdots & {\bf U_{D}}_{N-2} & {\bf U_{D}}_{N-1} & {\bf U_{D}}_N \end{array} \right)^T. \end{eqnarray} \item Step 4: Generate an $(N-1)\times(N-1)$ unitary matrix ${\bf U_R}_{N-1}$ to form a unitary matrix $U_R$, \begin{eqnarray} U_R =\left(\begin{array}{cc} {\bf U_R}_{N-1} & \bf 0 \\ \bf 0 & 1 \end{array}\right); \end{eqnarray} then a general $U_D$ is obtained by \begin{eqnarray} U_D = U_R U^S_D. \end{eqnarray} \item Step 5: Use the equations \begin{eqnarray} U^\dagger_U &=& U_{\rm CKMN}U^\dagger_D,\nonumber\\ M_U &=& Z_U M^D_U U^\dagger_U,\nonumber\\ M_D &=& Z_D M^D_D U^\dagger_D, \end{eqnarray} to get the inputs for the flavor physics. \end{itemize} We can see that with this trick we can skip the inputs of the bilinear mass terms $M^V_{Ni}$. In the physical analysis, the masses of the eigenstates $m_{X,~Y}$ in the VLP models are free inputs. $Z_{U}$ and $Z_{D}$ can be generated randomly, and $U_{U}$ and $U_D$ can also be scanned in full generality if we vary $U_R$ randomly. Thus the method allows the most general scan of the parameter space of mass matrices in models with VLPs for the numerical studies, which will be shown in the following section. 
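As a compact numerical illustration, the five steps can be coded directly; the following Python/NumPy sketch does so for $N=4$, the case studied in the next section. The mass values and the matrix $U_{\rm CKMN}$ are placeholder inputs chosen only for demonstration, not fitted ones.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    """Haar-like random unitary from the QR decomposition of a complex
    Gaussian matrix, with the column phases fixed."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

N = 4
# Step 1: diagonal masses (placeholder values, in GeV) and a random U_CKMN.
MU = np.diag([0.002, 1.27, 173.0, 1200.0])    # (m_u, m_c, m_t, m_X)
MD = np.diag([0.005, 0.10, 4.2, 800.0])       # (m_d, m_s, m_b, m_Y)
U_CKMN = random_unitary(N)
Z_U, Z_D = random_unitary(N), random_unitary(N)

# Step 2: last row of Z_U M^D_U U_CKMN + Z_D M^D_D, normalized to a unit
# vector U_D_N (its norm is the block M_C).
T = Z_U @ MU @ U_CKMN + Z_D @ MD
u_N = T[-1] / np.linalg.norm(T[-1])

# Step 3: complete u_N to an orthonormal set of rows, keeping u_N last.
comp = rng.normal(size=(N, N - 1)) + 1j * rng.normal(size=(N, N - 1))
Q, _ = np.linalg.qr(np.column_stack([u_N.conj(), comp]))
U_D_S = np.vstack([Q[:, 1:].T.conj(), u_N])   # unitary, last row = u_N

# Step 4: rotate the first N-1 rows by a random (N-1)x(N-1) unitary.
U_R = np.block([[random_unitary(N - 1), np.zeros((N - 1, 1))],
                [np.zeros((1, N - 1)), np.eye(1)]])
U_D = U_R @ U_D_S

# Step 5: reconstruct U_U and the mass matrices.
U_U = U_D @ U_CKMN.conj().T                   # from U_CKMN = U_U^dag U_D
M_U = Z_U @ MU @ U_U.conj().T
M_D = Z_D @ MD @ U_D.conj().T

# Checks: correct diagonalization, and the vector-like structure: the
# first N-1 entries of the last rows are opposite (the +-M^V terms).
assert np.allclose(Z_U.conj().T @ M_U @ U_U, MU)
assert np.allclose(Z_D.conj().T @ M_D @ U_D, MD)
assert np.allclose(M_U[-1, :N - 1], -M_D[-1, :N - 1])
\end{verbatim}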
\section{$B\to X_s\gamma$ process in extension of the SM with one vector like quark doublet} \label{sec3} \subsection{The standard model with vector like quarks} \begin{table} \centering \caption{A simple extension of the standard model with one vector-like quark doublet}\label{tab1} \begin{tabular}{cc} \begin{tabular}{c|c} & $\bf SU(3),~SU(2),~U(1)$\\ \hline $ Q=\left(\begin{array}{c} U\\ D\end{array}\right)_L$ & $\bf 3,~2,~\frac{1}{6}$\\ $u_{R}$ & $\bf 3,~1,~\frac{2}{3}$ \\ $d_{R}$ & $\bf 3,~1,~-\frac{1}{3}$ \end{tabular} & \begin{tabular}{c|c} & $\bf SU(3),~SU(2),~U(1)$\\ \hline $ V_{Q}=\left(\begin{array}{c} \bar{V}_{d}\\ \bar{V}_{u}\end{array}\right)_{R}$ & $\bf \bar{3},~2,~-\frac{1}{6}$\\ $\bar{V}_{uL}$ & $\bf \bar{3},~1,~-\frac{2}{3}$ \\ $\bar{V}_{dL}$ & $\bf \bar{3},~1,~\frac{1}{3}$ \end{tabular} \end{tabular} \end{table} As an application of the method, in this section we study the VLP contribution to $B\to X_s\gamma$ in a very simple VLP extension of the SM for demonstration. In Tab. \ref{tab1}, we list the gauge charges of the matter multiplets; the left column shows the quarks in the SM, and the right column shows the VLPs in the conjugate gauge representations. Note that we omit the partners of the vector-like fields, whose gauge charges are exactly the same as those of the SM quarks in the left column. As discussed in the introduction, these VLPs can naturally be heavy. Since the gauge charges of the Higgs $H=(h^+,~h^0)^{\rm T}$ are ($\bf 1,~2,~1/2$), the quark Lagrangian of the model is written as: \begin{eqnarray} \mathcal{L}=&&Y_{d}\bar{Q}Hd_{R}+Y_{u}\bar{Q}\cdot\bar{H}u_{R}+ Y_{Vu}\bar{V_{Q}}H\bar{V}_{uL}+Y_{Vd}\bar{V_{Q}}\cdot\bar{H}\bar{V}_{dL}\nn\\ &&+M_{Q}V_{Q}\cdot Q+M_{u}\bar{V}_{uL}u_{R}+M_{d}\bar{V}_{dL}d_{R}+h.c., \end{eqnarray} in which $A\cdot B=\epsilon^{ij}A_i B_j$. The first line of the Lagrangian contains the Yukawa terms; the second line contains the bilinear terms. Note that $Y_u$, $Y_d$ are $3\times 3$ matrices; without the bilinear terms, the model would be almost the same as the fourth-generation standard model (SM4). After electroweak symmetry breaking, we get the mass matrices of the up and down quarks in the bases $(u,~c,~t,~V_u)$ and $(d,~s,~b,~V_d)$: \begin{eqnarray} M_U= \left(\begin{array}{cccc} Y_{u}^{11}v & Y_{u}^{12}v & Y_{u}^{13}v & M_{u}^{1}\\ Y_{u}^{21}v & Y_{u}^{22}v & Y_{u}^{23}v & M_{u}^{2}\\ Y_{u}^{31}v & Y_{u}^{32}v & Y_{u}^{33}v & M_{u}^{3}\\ -M_{Q}^{1} & -M_{Q}^{2} & -M_{Q}^{3} & Y_{V_u}v\end{array}\right),~ M_D=\left(\begin{array}{cccc} Y_{d}^{11}v & Y_{d}^{12}v & Y_{d}^{13}v & M_{d}^{1}\\ Y_{d}^{21}v & Y_{d}^{22}v & Y_{d}^{23}v & M_{d}^{2}\\ Y_{d}^{31}v & Y_{d}^{32}v & Y_{d}^{33}v & M_{d}^{3}\\ M_{Q}^{1} & M_{Q}^{2} & M_{Q}^{3} & Y_{V_d}v\end{array}\right), \end{eqnarray} where $v$ is the VEV of $H$. The first three elements of the last rows of the matrices share the same parameters (up to a sign), making a scan of the parameter space very difficult. These two matrices can be diagonalized by unitary matrices $U$ and $Z$, \begin{eqnarray} Z_u^\dagger M_UU_u={\rm diag.}[m_{u},m_{c},m_{t},m_{X}],\nn\\ Z_d^\dagger M_DU_d={\rm diag.}[m_{d},m_{s},m_{b},m_{Y}]. \label{massdiag} \end{eqnarray} The product of the two matrices is denoted as \begin{equation} \label{eq:ukm} U_{\rm CKM4}=U_u^\dagger U_d, \end{equation} which is a unitary $4\times 4$ matrix. We stress that the trick introduced in the above section may seem to be just a numerical tool for quark masses and quark mixing matrices, but it is important in studying the flavor physics of such models. 
For studying VLP contributions to $B\to X_s\gamma$, we now present the Feynman rules for the interactions $\bar{u}_id_j\chi^+$ ($\chi=W,~G$) and $\bar{d}_id_jZ$ in the Feynman gauge, which read: \begin{eqnarray} && {\rm i}\frac{g}{\sqrt{2}}\gamma^\mu \left[g^\chi_L(i,j)P_L +g^\chi_R(i,j)P_R\right],~~(\chi=W,Z),\label{gud}\\ && {\rm i}\frac{g}{\sqrt{2}m_W}\left[g^\chi_L(i,j)P_L +g^\chi_R(i,j)P_R\right]~ (\chi=G) \label{hud} \end{eqnarray} where \begin{eqnarray} g^W_L(i,j) &=& \sum_{m=1}^3U_u^{*mi} U_d^{mj},\ \ \ g^W_R(i,j) =Z_u^{*4 i}Z_d^{4j}, \label{ud_w}\\ g^G_L(i,j) &=& \sum_{k,m=1}^3 Y_{u}^{km}vZ_u^{*ki} U_d^{mj} + Y_{Vd}vZ_u^{*4i} U_d^{4j},\label{ud_yl}\\ g^G_R(i,j) &=& -\sum_{k,m=1}^3 Y_{d}^{\ast mk}vZ_d^{*k j} U_u^{mi} - Y_{Vu}^\ast vZ_d^{*4j} U_d^{4i}, \label{ud_yr}\\ g^Z_L(i,j) &=& -\frac{1}{\sqrt{2}\cos\theta_{W}}\left[\left(1-\frac{2}{3}\sin^{2} \theta_{W}\right)\delta^{ij}-U_{d}^{*4i}U_{d}^{4j}\right],\label{ud_z1}\\ g^Z_R(i,j) &=& -\frac{1}{\sqrt{2}\cos\theta_{W}}\left[-\frac{2}{3}\sin^{2} \theta_{W}\delta^{ij}+Z_{d}^{*4i}Z_{d}^{4j}\right] .\label{ud_z2} \end{eqnarray} Note that the $U(1)_{EM}$ interaction is not changed by the VLPs; thus the photon-quark vertices are still the same as in the SM. From the above mass matrices and Feynman rules, we can see that the model has two points to be explored: \begin{itemize} \item The CKM matrix is obtained from the $W^+\bar{u}_id_j$ vertex in Eq. (\ref{ud_w}), \begin{equation} \label{eq:vkm} V^{ij}_{\rm CKM4} = \sum_{m=1}^3U_u^{*mi} U_d^{mj} =U_{\rm CKM4}^{ij}-U_u^{*4i} U_d^{4j}, \end{equation} which is non-unitary since the indices $i,j$ range from 1 to 4, while the summation over the index $m$ runs only from 1 to 3. $V^{ij}_{\rm CKM4}$ is also a $4\times 4$ matrix, whose upper-left elements ($i,j\ne4$) are the physically measurable values of the CKM matrix $V$, as in the SM. This is the key difference between VLP models and the SM4. Nevertheless, loop-level flavor-changing neutral currents (FCNC) will be modified by the Yukawa interactions, so the prediction for the process $B\to X_s\gamma$ may change significantly. \item The last terms in Eqs.(\ref{ud_w})-(\ref{ud_z2}), which we call the ``tail terms'', violate the gauge universality of the fermions and induce tree-level FCNC processes such as $b \to s\ell^+\ell^-$; therefore the constraints on the parameter space need to be explored. \end{itemize} \subsection{Enhancement in $b\to s$ transition} \begin{figure}[hbtp] \begin{center} \scalebox{0.7}{\epsfig{file=fig1.ps}} \caption{Leading order Feynman diagram of $B\to X_s\gamma$ process.} \label{fig1} \end{center} \end{figure} In this subsection we focus on the VLP contributions to the rare B decay $B\to X_s\gamma$. The starting point for rare B decays is the determination of the low-energy effective Hamiltonian obtained by integrating out the heavy degrees of freedom in the theory. For the $b \to s $ transition, this can be written as \begin{equation} {\cal H}_{\rm eff} = - \frac{G_F}{\sqrt{2}} V_{tb} V_{ts}^\ast \sum_{i=1}^{10} [C_i(\mu) O_i(\mu)+C^{'}_i(\mu) O^{'}_i(\mu)]~,~\, \label{eq:HeffBXsgamma} \end{equation} where the effective operators $O_i$ are the same as those in the SM defined in Ref.~\cite{BLOSM}. The chirality-flipped operators $O'_i$ are obtained from $O_i$ by the replacement $\gamma_5\to -\gamma_5$ in the quark currents \cite{Li:2012xz}. We calculate the Wilson coefficient $C_7$ at the matching scale $m_W$. The leading order Feynman diagrams are shown in FIG. 
\ref{fig1} and $C_7$ reads \begin{eqnarray} C_{7}(m_W)&=&\frac{1}{V_{tb}V_{ts}^{\ast}}\sum_{i=1}^{4}\biggl[ g_{L}^{W\ast}(i,2)g_{L}^{W}(i,3)A(x_i) + \frac{g_{L}^{G\ast}(i,2)g_{L}^{G}(i,3)}{m_{u_i}^2} x_i B(x_i)\nn\\ & & + \frac{g_{L}^{G\ast}(i,2)g_{R}^{G}(i,3)}{m_{u_i}m_b} x_i C(x_i) + \frac{g_{L}^{W\ast}(i,2)g_{R}^{G}(i,3)}{m_b}D(x_i)\nn\\ & & + \frac{m_{u_i}}{m_b}g_{L}^{W\ast}(i,2)g_{R}^{W}(i,3)E(x_i) + \frac{g_{L}^{G\ast}(i,2)g_{R}^{W}(i,3)}{m_b}D(x_i)\biggl] \label{fc7} \end{eqnarray} where $x_i=m_{u_i}^2/m_W^2$ and the loop functions $A(x),~B(x),~C(x),~D(x),~E(x)$ are listed in the appendix. The first two lines are contributions similar to those in the SM, while the remaining terms come from the tail terms. Note that the contribution of the right diagram in the second line of FIG. \ref{fig1} is zero in the SM. The terms with $1/m_b$ in the above equation are extracted to compose the operator ${\cal O}_7$. There are two differences in the calculation of the $B\to X_s\gamma$ process compared with the SM. One is the tail terms of the gauge and Yukawa interactions; the other is the new type of Yukawa interactions listed in Eqs. (\ref{ud_yl}, \ref{ud_yr}), which cannot be written in the simple SM form such as \begin{equation} \label{eq:smf} g_L^{G,\rm SM}(i,2)=m_{u_i}V_{is},~g_R^{G,\rm SM}(i,3)=-m_{b}V_{ib}. \end{equation} \begin{table}[htb] \caption[]{The CKM matrix elements constrained by tree-level processes.} \label{tab:c1B} \begin{center} \begin{tabular}{c||c|c} \hline & absolute value & direct measurement from \\ \hline $V_{ud}$ & $0.97425 \pm 0.00022$ & nuclear beta decay \\\hline $V_{us}$ & $0.2252 \pm 0.0009$ & semi-leptonic K-decay\\\hline $V_{ub}$ & $0.00415 \pm 0.00049$ & semi-leptonic B-decay\\\hline $V_{cd}$ & $0.230 \pm 0.011$ & semi-leptonic D-decay\\\hline $V_{cs}$ & $1.006 \pm 0.023$ & (semi-)leptonic D-decay\\\hline $V_{cb}$ & $0.0409 \pm 0.0011$ & semi-leptonic B-decay\\\hline $V_{tb}$ & $0.89 \pm 0.07$ & (single) top-production\\\hline \end{tabular} \end{center} \end{table} In the model with three quark generations, CKM unitarity is already used in the calculations of loop-level FCNC-induced rare B decays. For consistency, in the numerical analysis the constraints on the CKM matrix elements are taken not from processes occurring at loop level, such as rare B decays, but from the tree-level processes shown in Table~\ref{tab:c1B}~\cite{Beringer:1900zz, Eberhardt:2010bm}. Since there are currently no tree-level measurements of $V_{td}$ and $V_{ts}$, we first use the above inputs and unitarity to get a $3\times3$ unitary matrix. The method is that we scan $(V_{ud}, V_{us}, V_{ub})$ randomly (keeping $|V_{ud}|^2+ |V_{us}|^2+ |V_{ub}|^2=1$ ) in the ranges listed in Table~\ref{tab:c1B}; then we define two parameters $\alpha,~\beta$ and solve for them via the equations \begin{eqnarray} &&V_{ud}^\ast (V_{cd}+\alpha) + V_{us}^\ast(V_{cs}+\beta) + V_{ub}^\ast V_{cb} =0,\nn\\ &&\left|V_{cd}+\alpha\right|^2+ \left|V_{cs}+\beta\right|^2+ \left|V_{cb}\right|^2 =1. \end{eqnarray} $(V_{td}, V_{ts}, V_{tb})$ are obtained from the unitarity relations with $(V_{ud}, V_{us}, V_{ub})$ and $(V_{cd}, V_{cs}, V_{cb})$. After that we multiply the $3\times3$ unitary matrix by the three matrices \begin{eqnarray} \left( \begin{array}{cccc} {\bf 1} &\cdots &\cdots &\cdots \\ \cdots & \cos \theta_{4i} & \cdots & \sin \theta_{4i} \\ \cdots & \cdots & {\bf 1} & \cdots\\ \cdots & -\sin \theta_{4i} & \cdots & \cos \theta_{4i} \end{array}\right), \end{eqnarray} in which $i=1,~2,~3$ and $\max(|\theta_{41}|,~|\theta_{42}|,~|\theta_{43}|)<0.01\pi$, to generate a $4\times4$ unitary matrix $U_{\rm CKM4}$. $V_{\rm CKM4}$ is then obtained from Eq. (\ref{eq:vkm}). All the corresponding elements should satisfy the experimental bounds listed in Table~\ref{tab:c1B}, and $V_{td}$, $V_{ts}$ ($|V_{ts}|\simeq 0.04$, consistent with the fitting results in Ref. \cite{Beringer:1900zz}) are obtained as well. 
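A minimal NumPy sketch of this construction is given below. The $3\times3$ matrix is taken real orthogonal (CP phases ignored, as in our scan below), and we multiply by the three rotations on the right; the side on which the rotations act is our choice for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def rot4(i, theta):
    """Rotation in the (i+1, 4) plane of a 4x4 matrix (0-indexed i),
    matching the displayed rotation matrices."""
    R = np.eye(4)
    R[i, i] = R[3, 3] = np.cos(theta)
    R[i, 3], R[3, i] = np.sin(theta), -np.sin(theta)
    return R

# Stand-in for the 3x3 unitary matrix built from the tree-level inputs
# (real orthogonal here, i.e. CP phases ignored).
V3 = np.linalg.qr(rng.normal(size=(3, 3)))[0]

U_CKM4 = np.block([[V3, np.zeros((3, 1))],
                   [np.zeros((1, 3)), np.eye(1)]])
for i in range(3):                  # max |theta_{4i}| < 0.01*pi in the scan
    U_CKM4 = U_CKM4 @ rot4(i, rng.uniform(-0.01 * np.pi, 0.01 * np.pi))

assert np.allclose(U_CKM4 @ U_CKM4.T, np.eye(4))   # unitarity (real case)
\end{verbatim}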
With these inputs in hand, the first task is to check the scale of the mass parameters of the model, such as $M_Q$, $m_X$, $m_Y$. From the $Z\bar{b}b$ vertices in Eq. (\ref{ud_z1}) and Eq. (\ref{ud_z2}), we can see that in order to keep the gauge universality of the quarks, the tail terms in the Feynman rules must be much smaller than the SM-like terms, namely $|Z^{4i}_{u,d}|^2_{i=1,2,3},~|U^{4i}_{u,d}|^2_{i=1,2,3}\ll \sin^2\theta_W$. Thus in the numerical studies we require \begin{figure}[hbtp] \begin{center} \scalebox{0.4}{\epsfig{file=fig2.ps}} \caption{$M_V$ versus $m_Y$ under the constraints $|Z^{4i}_{u,d}|^2_{i=1,2,3},~|U^{4i}_{u,d}|^2_{i=1,2,3}< 10^{-4}.$}\label{fig2} \end{center} \end{figure} \begin{equation} \label{eq:vud2} |Z^{4i}_{u,d}|^2_{i=1,2,3},~|U^{4i}_{u,d}|^2_{i=1,2,3}< 10^{-4}. \end{equation} Note that though these elements may be of order $\lambda^3$ (the parameter of the Wolfenstein parameterization \cite{Wolfenstein:1983yz}), they are much smaller than the elements of the product $V_{\rm CKM3}^\dagger V_{\rm CKM3}$ (which almost equals $\bf 1$); thus the requirements are suitable for indicating the constraints from the deviation from unitarity. Since the scan of the parameter space is free, we set $ m_X = 1172{\rm ~GeV}$ (the top quark mass plus 1000 GeV), scan $m_Y$ in the range of $(4.2, 1004)$ GeV (the bottom quark mass plus up to 1000 GeV), and vary $Z_{u,d},~U_{u,d}$ randomly (ignoring the CP phases). $M_V$ is defined by \begin{equation} \label{eq:mv} M_V = \max(|M_Q^1|,~|M_Q^2|,~|M_Q^3|). \end{equation} The result for $M_V$ versus $m_Y$ is shown in FIG. \ref{fig2}, which checks the mass input of the vector-like doublet. We can see that $M_V$ increases as $m_Y$ grows. However, $M_V$ is much smaller than $m_X$ and $m_Y$: small mixing means that the parameters $M_Q$, which determine the mixing between the SM quarks and the vector-like quarks, are also suppressed. This agrees with the fact that the deviation from unitarity is suppressed by the ratio $m/m_{X,Y}$, where $m$ generically denotes the standard quark masses, a typical result of VLP models \cite{Langacker:1988ur,delAguila:1982fs,delAguila:1987nn,Cheng:1991rr,Botella:2012ju}. \begin{figure}[hbtp] \begin{center} \scalebox{0.4}{\epsfig{file=fig3-1.ps}} \scalebox{0.4}{\epsfig{file=fig3-2.ps}} \caption{$K_1$ (red $\triangle$), $K_2$ (green $\Box$) versus $M_V$, and the enhancement of $|C_7 (m_W)|$ in the case of $|Z^{4i}_{u,d}|^2_{i=1,2,3},~|U^{4i}_{u,d}|^2_{i=1,2,3}< 10^{-4}$ (color online).}\label{fig3} \end{center} \end{figure} The second task is to check the VLP contribution to $B\to X_s\gamma$. We find that the Wilson coefficient of the FCNC operator $O_7$ is not as suppressed as the mixing. 
The new contributions from the terms in the last line of Eq.(\ref{fc7}) are suppressed by the mixing, whereas the terms in the first line are almost the same as in the SM. The enhancement comes mainly from the $g_R^G(4,3)/m_b$ terms, which are obtained from the Goldstone loop in the $b\to s$ transition (the right diagram in the first line and the diagrams in the second line of FIG. \ref{fig1}). In order to show the enhancement clearly, we define two factors \begin{eqnarray} \label{eq:kfactor} K_1 &=& \frac{g_R^{G}(4,3)g_L^{G}(4,2)^\ast}{m_Xm_b V_{tb}V_{ts}^\ast} = \frac{g_R^{GVb}g_L^{GVs\ast}}{m_Xm_b V_{tb}V_{ts}^\ast}\,, \\ K_2 &=& \frac{U^{43}U^{42\ast}}{V_{tb}V_{ts}^\ast} = \frac{U^{Vb}U^{Vs\ast}}{V_{tb}V_{ts}^\ast}\,, \end{eqnarray} in which $K_2$ denotes the deviation from unitarity of the $3\times 3$ CKM matrix, while $K_1$ shows the enhancement of the contribution from the vector-like particles. $K_1$ is in fact obtained from the coefficient of the first term in the second line of the analytical expression of $C_7(m_W)$ in Eq. (\ref{fc7}) for $i=4$. It reduces exactly to $K_2$ in the case of the SM4. Note that other terms with $g_R^G(4,3)/m_b$ can give an enhancement too; we chose the factor $K_1$ as a typical demonstration since naively it appears to be suppressed by $m_X$. Results are shown in FIG. \ref{fig3}, in which the left panel shows $K_1$ and $K_2$ versus $M_V$ while the right panel shows $|C_7(m_W)|$ versus $K_1$. From the left panel, we can see that though $K_2$ increases with $M_V$, it is still much smaller than $V_{tb}V_{ts}^\ast$, implying that the deviation from unitarity is negligible. However, the factor $K_1$ can be enhanced up to order ${\cal O}(1)$ as $M_V$ increases. From the right panel, we can see that $K_1$ enhances $C_7$ up to values much larger than the SM result. The enhancement mainly comes from the new type of Yukawa couplings. Combining Eqs. (\ref{massdiag},~\ref{ud_yl},~\ref{ud_yr}), one can get a form similar to the SM4: \begin{eqnarray} \label{eq:comsm} \frac{g^G_L(4,2)}{m_{X}} &=& U_{CKM4}^{42} \label{eqglvs} +\frac{1}{m_{X}}\left[\sum_{m=1}^3\left(M_{Q}^{m}U_{d}^{m2}Z_{u}^{\ast44} -M_{u}^{m}U_{d}^{42}Z_{u}^{\ast m4}\right) +(Y_{Vd}-Y_{Vu})U_{d}^{42}Z_{u}^{\ast44}v\right],\\ \frac{g^G_R(4,3)}{m_{b}} &=&-U_{CKM4}^{\ast 43} \label{eqgrvb} -\frac{1}{m_{b}} \left[\sum_{m=1}^3 \left(M_{Q}^{\ast m}U_{u}^{\ast m4}Z_{d}^{43}+M_d^{\ast m}U_{u}^{\ast44}Z_{d}^{m3}\right) +(Y^*_{Vd}-Y_{Vu}^\ast)U_{u}^{\ast44}Z_{d}^{43}v\right]. \end{eqnarray} Since $m_X\simeq Y_{Vu}v$ and $Z_u^{44},~U_u^{44}\simeq 1$, one can easily obtain that \begin{equation} \label{eq:enfac1} \frac{g^G_L(4,2)}{m_{X}}\sim V_{\rm CKM4}^{42}, \end{equation} while the suppressed $Z_{d}^{43}$ (of order $m/m_{X,Y}$) in Eq. (\ref{eqgrvb}) is enhanced by terms with factors such as $\frac{Y_{Vu}v}{m_b}$, resulting in \begin{equation} \label{eq:ehnfac2} \frac{g^G_R(4,3)}{m_{b}}\gg V_{\rm CKM4}^{43}. \end{equation} Thus the term $V_{4b}V_{4s}^\ast$ satisfying the unitarity constraint \begin{equation} \label{eq:enh4} V_{ub}V_{us}^\ast+V_{cb}V_{cs}^\ast+V_{tb}V_{ts}^\ast +V_{4b}V_{4s}^\ast=0 \end{equation} is greatly enhanced by heavy VLPs, and this factor leads to the enhancement of $C_7$. This is different from the SM4, in which the contribution from the fourth generation can be neglected. 
\begin{figure}[hbtp] \begin{center} \scalebox{0.4}{\epsfig{file=fig4.ps}} \caption{$B\to X_s \gamma$ prediction in the random scan.}\label{fig4} \end{center} \end{figure} In the numerical scan, we vary $Z_{u,d}$ and $U_{u,d}$ randomly, keeping the constraints $|Z^{4i}_{u,d}|^2_{i=1,2,3},~|U^{4i}_{u,d}|^2_{i=1,2,3}<10^{-4}$ of Eq. (\ref{eq:vud2}), and scan $m_X$ and $m_Y$ in the range of $(1, 2000)$ GeV. Apart from the CKM limits, we use the $B\to X_s \gamma$ process to constrain the parameter space. The branching ratio of $B\to X_s\gamma$ is normalized by the process $B\rightarrow X_{c}e\bar{\nu_{e}}$: \begin{equation} {\rm Br}(B\rightarrow X_{s}\gamma)={\rm Br}^{\rm ex} (B\rightarrow X_{c}e\bar{\nu_{e}}) \frac{|V_{ts}^{\ast}V_{tb}|^{2}}{|V_{cb}|^{2}}\frac{6\alpha} {\pi f(z)}[|C^{\rm eff}_{7}(\mu_b)|^{2}+|C^{\prime,\rm eff}_{7}(\mu_b)|^{2}].\label{bsg} \end{equation} Here $z=\frac{m_c}{m_b}$, and $f(z)=1-8z^2+8z^6-z^8-24z^4\ln z$ is the phase-space factor of the semi-leptonic B decay. The method for running the operators from the $m_W$ scale to the $\mu_b$ scale can be found in Ref. \cite{Li:2012xz}. We use the following experimental values in the calculation \cite{Beringer:1900zz}: \begin{eqnarray} && {\rm Br}^{\rm ex}(b\to ce\overline{\nu}_{e}) = (10.72 \pm 0.13) \times 10^{-2},\\ && {\rm Br}^{\rm ex}(B\to X_s \gamma)= (3.55 \pm 0.24 \pm 0.09) \times 10^{-4}. \end{eqnarray} The numerical results show that $C^{\prime,\rm eff}_{7}(\mu_b)$ is much smaller than $C^{\rm eff}_{7}(\mu_b)$; therefore we do not present the formula for $C^{\prime,\rm eff}_{7}(m_W)$ here. The branching ratio as a function of $M_V$ is shown in FIG. \ref{fig4}, from which we can see that ${\rm Br}({ B}\to X_s \gamma)$ can be enhanced to values far above the experimental bound. Thus measurements of FCNC processes can give stringent constraints on the vector-like quark model, especially when the masses of the vector-like quarks are much greater than the electroweak scale. A few remarks should be made: \begin{itemize} \item One point of view on the unitarity of the CKM matrix is that the ordinary $3\times 3$ quark mixing matrix is nearly unitary, with the deviation from unitarity suppressed by the heavy particles of the new physics beyond the SM. In other words, one admits that the extended CKM matrix elements exist but approach zero as the mass scale of the new physics approaches infinity. All the new physics effects should then decouple from the flavor sector, and what should be checked is whether $3\times 3$ unitarity is consistent in all kinds of flavor processes. \item Another point of view is that, as in the SM case, the ordinary $3\times 3 $ quark mixing matrix elements are only extracted from experimental measurements of tree- and loop-level processes. Unitarity should be checked, and the experimental measurements of the matrix elements can be used as constraints on the new physics beyond the SM. In the numerical analysis, the elements of the CKM matrix are regarded as inputs. Thus what should be done is to scan the parameter space generally under these constraints, with no prejudice imposed. Then the enhancement effect in $B\to X_s\gamma$ becomes clearer. 
\end{itemize} \begin{figure}[hbtp] \begin{center} \scalebox{0.4}{\epsfig{file=fig5-1.ps}} \scalebox{0.4}{\epsfig{file=fig5-2.ps}} \caption{Enhancement factor and deviation from unitarity versus $m_X$; the red $\triangle$ points are excluded by the bound from the $B\to X_s \gamma$ measurement, while the green $\Box$ points survive (color online).}\label{fig5} \end{center} \end{figure} A large part of the parameter space is excluded by the measured branching fraction of $B\to X_s \gamma$, as shown in FIG. \ref{fig4}. The enhancement effect of the VLPs can be seen in FIG. \ref{fig5}, in which the left panel shows the enhancement factor $K_1$ versus $m_X$ while the right panel shows $K_2$ versus $m_X$. From the right panel we can see that the deviation from unitarity is very small and almost independent of $m_X$, since we are doing a general scan of $Z_{u,d}$ and $U_{u,d}$. However, as seen from the left panel, as $m_X$ increases, the ${\rm Br}({B}\to X_s \gamma)$ measurement constrains the enhancement factor and thus the input parameter $m_X$. In summary, when the mass of the vector-like particle increases, the mass parameter $M_V$ increases as well, giving an enhancement factor even under a very small deviation from unitarity. This is a special point to keep in mind when studying vector-like quark models. \section{Summary}\label{sec4} In models with vector-like doublets, there exist bilinear terms in the Lagrangian, making a general scan of the Yukawa couplings very difficult. In this paper, we show a trick to deal with the scan. Our scan method is exact and more efficient. We use the trick to study a very simple extension of the SM with vector-like quarks. We studied one of the most important rare B decays, $B\to X_s \gamma$, for which we found that even though the deviations from unitarity of the quark mixing matrix are small, the enhancement of the rare B decay from VLPs is still significant. This enhancement effect is an important feature of vector-like particle models. In this work we have only shown the scan method, the key point of the enhancement, and how stringent the constraints on the parameter space from $B\to X_s \gamma$ measurements are. Future work includes models such as other extensions of the SM with VLPs, two-Higgs-doublet models \cite{Grinstein:1990tj} or supersymmetric models \cite{Altmannshofer:2009ne}. Such effects should be checked in all kinds of rare decays, such as the inclusive process $b\to s \ell^+\ell^-$, the exclusive processes $B_s \to \mu^+\mu^-$ and $B_s \to \ell^+\ell^- \gamma$, and $B\bar B$ mixing. Detailed studies of the parameter space including other rare B decays and new models will appear in our future work. \begin{acknowledgments} This work was supported by the Natural Science Foundation of China under grant number 11375001 and by the talents foundation of the education department of Beijing. \end{acknowledgments} \section*{Appendix} \begin{itemize} \item The loop functions for calculating the Wilson coefficients at the matching scale are the following. \end{itemize} \begin{eqnarray} A(x)&=&\frac{55-170x+127x^{2}}{36(1-x)^{3}}+\frac{4x-17x^{2}+15x^{3}}{6(1-x)^{4}}\ln x,\nn\\ B(x)&=&\frac{-7+5x+8x^{2}}{36(1-x)^{3}}+\frac{-2x+3x^{2}}{6(1-x)^{4}}\ln x,\nn\\ C(x)&=& \frac{3-5x}{6(1-x)^{2}}+\frac{2-3x}{3(1-x)^{3}}\ln x,\nn\\ D(x)&=&\frac{3x-1}{4(1-x)^{2}}+\frac{x^{2}}{2(1-x)^{3}}\ln x,\nn\\ E(x)&=& \frac{-17+19x}{6(1-x)^{2}}+\frac{-8x+9x^{2}}{3(1-x)^{3}}\ln x.\nn \end{eqnarray}
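For reference, the loop functions above and the normalized branching ratio of Eq. (\ref{bsg}) can be transcribed directly into Python; the numerical inputs in the example call (the fine structure constant $\alpha\approx 1/137$, $z=m_c/m_b$, the quark masses, and an SM-like value of $C_7^{\rm eff}$) are illustrative assumptions only.
\begin{verbatim}
import numpy as np

# Direct transcription of the loop functions listed in the appendix.
def A(x): return ((55 - 170*x + 127*x**2) / (36*(1 - x)**3)
                  + (4*x - 17*x**2 + 15*x**3) / (6*(1 - x)**4) * np.log(x))
def B(x): return ((-7 + 5*x + 8*x**2) / (36*(1 - x)**3)
                  + (-2*x + 3*x**2) / (6*(1 - x)**4) * np.log(x))
def C(x): return (3 - 5*x) / (6*(1 - x)**2) + (2 - 3*x) / (3*(1 - x)**3) * np.log(x)
def D(x): return (3*x - 1) / (4*(1 - x)**2) + x**2 / (2*(1 - x)**3) * np.log(x)
def E(x): return (-17 + 19*x) / (6*(1 - x)**2) + (-8*x + 9*x**2) / (3*(1 - x)**3) * np.log(x)

def f(z):
    """Phase-space factor of the semi-leptonic B decay."""
    return 1 - 8*z**2 + 8*z**6 - z**8 - 24*z**4 * np.log(z)

def br_b_to_s_gamma(C7eff, C7p_eff, Vts, Vtb, Vcb,
                    br_semilep=10.72e-2, alpha=1/137.0, z=1.27/4.2):
    """Branching ratio normalized to B -> X_c e nu; alpha and z = m_c/m_b
    are illustrative numerical assumptions."""
    return (br_semilep * abs(np.conj(Vts) * Vtb)**2 / abs(Vcb)**2
            * 6 * alpha / (np.pi * f(z))
            * (abs(C7eff)**2 + abs(C7p_eff)**2))

# Example: x_t = m_t^2 / m_W^2 enters the loop functions for the top quark;
# C7eff = -0.31 is an SM-like illustrative value at the mu_b scale.
x_t = (173.0 / 80.4)**2
print(A(x_t), br_b_to_s_gamma(-0.31, 0.0, 0.04, 1.0, 0.041))
\end{verbatim}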
\section{Introduction} \label{sec:intro} Emergency communication in disaster recovery and relief operations requires efficient, robust, and rapid communication networks to deliver data between the disaster management headquarters and on-site teams. The efficiency and timeliness of the response are often limited by the dynamic and resource-constrained conditions in the disaster areas \cite{Krishnaswamy.2014}. Therefore, a reliable and robust network infrastructure which ensures available connections to all users in a disaster area is crucial \cite{Cabrera.2013}. Given the popularity of portable devices and cellular networks in everyday communications, such devices become promising candidates for supporting disaster recovery and relief operations. Furthermore, commercial mobile devices nowadays are equipped with advanced signal processing capabilities and multiple radio interfaces such as cellular communication, WiFi, and Bluetooth. They provide a broad range of applications, including video and photo sharing. In addition, more portable devices will be brought to the disaster area by relief workers. A possible scenario is illustrated in Fig. \ref{fig:model1}. First responders (e.g., on-site relief teams) connect to the remote control station (e.g., the disaster management headquarters or local government) through either backbone networks or satellite communications. On the upstream, first responders in the emergency zone take and share their location and videos/images with the remote control station and with each other. On the downstream, the responders request the global view of the emergency area from the remote control station. However, due to the effects of the wireless channel in the disaster area, data dissemination may require retransmissions or eventually be disrupted by random obstacles on the path and packet drops. Therefore, it must be protected against intermittent connectivity and unreliable wireless channels. \begin{figure}[htbp] \begin{center} \epsfig{file=model1.eps,width=0.46\textwidth} \caption{Data transfer and distribution scheme for disaster recovery and relief operations. First responders (e.g., on-site relief teams) and the remote control station (e.g., the disaster management headquarters or local government) communicate through either backbone networks or satellite communications.} \label{fig:model1} \end{center} \end{figure} In this paper, we focus on the downstream communication between the remote control station and the responders. By exploiting NC \cite{Medard.2011} on existing mobile devices, we propose a novel solution for reliable data dissemination in emergency scenarios, where mobile devices simultaneously maintain two network interfaces: cellular communication links to the base station and short-range links (e.g., WiFi ad-hoc mode) to neighboring devices within proximity, ensuring that the requested content is available to all responders/relief workers within the disaster area. In this study, a large file is segmented into independent fragments. With NC at the source, fragments are linearly combined into network-coded packets before transmission. Each requesting device is responsible for collecting packets and decoding them to recover the entire file. 
The benefits of the proposed network architecture with NC are as follows: \begin{itemize} \item The two radio interfaces guarantee content delivery and connectivity for all devices within the disaster area under the effects of the wireless channel, i.e., reliable and robust connections for emergency communications. \item NC makes efficient use of the limited available bandwidth, which is especially important in emergency scenarios. \end{itemize} The rest of this paper is organized as follows. In Section \ref{sec:survey}, we review the state-of-the-art of NC for emergency communications. In Section \ref{sec:model}, we derive the proposed network architecture. In Section \ref{sec:NC}, we present the exploited NC scheme and the respective analytical model. In Section \ref{sec:sim}, we evaluate the effectiveness of our proposed architecture with NC. Finally, in Section \ref{sec:concl}, we conclude the paper. \section{A review on Network Coding for Emergency Communications} \label{sec:survey} \subsection{Network Coding Overview} Compared with the store-and-forward paradigm, NC provides a new approach to enhancing reliability and throughput and to network design and operation \cite{Pahlevani.2014}. For example, a simple three-node model is illustrated in Fig. \ref{fig:xor}, where A and B want to exchange their own packets a and b, respectively. Assuming that these nodes are out of range of each other and operate in time-division access mode, this communication requires four timeslots: two timeslots for sending the packets to the relay C and two timeslots for relaying the packets. However, with NC, the relay can simply XOR the packets and send the coded packet. Then both A and B can retrieve the required packet from the other node using their own packets. In this way, the total number of timeslots reduces from 4 to 3. \begin{figure}[htbp] \begin{center} \epsfig{file=XOR.eps,width=0.26\textwidth} \caption{An example of NC: a) without NC and b) with NC.} \label{fig:xor} \end{center} \end{figure}
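The timeslot counting of this example is easy to reproduce in code. The following Python sketch (with hypothetical packet contents) shows the relay XOR-ing the two packets and each side recovering the missing one from its own packet.
\begin{verbatim}
# Minimal sketch of the two-way relay XOR example (hypothetical payloads).
a = bytes([0x13, 0x37, 0xCA, 0xFE])          # packet a held by node A
b = bytes([0xDE, 0xAD, 0xBE, 0xEF])          # packet b held by node B

c = bytes(x ^ y for x, y in zip(a, b))       # relay C encodes: c = a XOR b

b_at_A = bytes(x ^ y for x, y in zip(c, a))  # A decodes b using its own a
a_at_B = bytes(x ^ y for x, y in zip(c, b))  # B decodes a using its own b

assert b_at_A == b and a_at_B == a           # 3 timeslots instead of 4
\end{verbatim}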
NC can be classified as either inter-session or intra-session \cite{Ostovari.2013}. The former focuses on solving bottleneck problems and reducing the number of transmissions by allowing packets from different sources/flows to be coded together. Therefore, NC decreases the interference between the links in a wireless network and increases the overall network throughput. This technique has low computational complexity for coding. However, its drawback is that it is not resilient to packet losses. On the other hand, intra-session NC enhances reliability in wireless networks with a smaller number of transmissions than feedback-based schemes without NC. However, it requires higher computational complexity than the inter-session scheme. This approach exploits link diversity by combining different packets from the same source/flow. Intra-session NC usually relies on random linear NC (RLNC) to encode and decode packets in a group, with coefficients chosen from a finite field. The field size determines the probability that the destination obtains linearly independent combinations and therefore innovative information to recover the original packets successfully. In general, NC has recently emerged as a new approach for improving network performance in terms of throughput and reliability, especially in wireless networks given the uncertainty of the wireless medium. This section briefly presents a literature review of the state-of-the-art of NC for emergency communications, including general emergency cases, emergency cases in vehicular ad-hoc networks (VANETs), and the large potential of NC applications on commercial mobile devices. \subsection{Network Coding for General Emergency Cases} Emergency communications should be reliable and flexible for disaster aid and relief operations \cite{Lee.2010}. Reliability, availability and robustness have been considered fundamental requirements for broadband communications and networks during disasters and emergencies. NC is a promising solution to enhance the reliability and robustness of data transmission. Joy et al. \cite{Joy.2013} presented an implementation of a network infrastructure with NC to deliver large files from a source to a destination with the help of surrounding nodes, e.g. a real-time video from a cellphone to a helicopter. Intra-session NC is applied at the source node, and surrounding nodes forward overheard packets. At the destination, a decoding procedure recovers the original file. NC improves the number of files delivered compared to plain fragmentation in scenarios with packet losses or disruptions. The advantage comes from the spatial diversity provided by the surrounding nodes. Various nodes may repeat different pieces of a file after link disruptions caused by channel conditions or busy relays. Besides, a relay may recover the original file from different pieces before forwarding it. Nevertheless, that work only exploited WiFi ad-hoc mode on Android phones and laptops for short-range communications. In \cite{Altamimi.2014}, NC is employed to improve the delivery probability in an intermittently connected network (ICN), which utilizes mobile networks with cooperation between nodes to create message replication. The main targets are to maximize the delivery ratio and minimize the overhead ratio. The authors derived an explicit expression for the delivery probability of random linear NC in comparison with normal replication. This approach requires overhearing capability for each device in the mobile network. Besides, Nguyen et al. \cite{Hung.2014} designed a novel NC-aided MIMO scheme for combating the deleterious effects of both shadow fading and Rayleigh fading in hostile wireless channels. The proposed model is suited to ambulance-and-emergency communications. A powerful space-time code is proposed for providing near-capacity performance in fast-fading environments. NC is used therein to obtain a further spatial diversity gain for combating slow-fading effects caused by obstacles. In \cite{Alejandra.2014}, novel perceptual semantics for multimedia communications are proposed to enhance situation awareness in human-analysis-driven processes such as emergency operations. However, this study mainly focuses on application-layer optimization with adaptive NC at the network layer, modeling the network as a FIFO with a finite queue. In addition, the authors of \cite{Subramanian.2010} considered an intermittently-connected mobile network consisting of $N$ relays, 1 source and $M$ destinations. NC is utilized to enhance the transmission capacity limited by disruptive connectivity. Each relay forms random linear combinations of incoming packets over $GF(q=2^{F})$ before sending them to the others. A queuing-theoretic analysis is derived for the steady-state throughput performance of the network-coded scheme. 
\subsection{Network Coding for Emergency Cases in VANETs}
In VANETs, beacon information plays an important role in vehicle applications such as predicting the positions of neighboring vehicles to avoid emergency situations \cite{Sahu.2014a}. However, beacon overhead and congestion may cause low message reception, which affects emergency messages and other control messages. The work in \cite{Sahu.2014a} considered packet-level NC for controlling beacon overhead. An intermediate vehicle acting as a relay performs an XOR operation on the incoming packets from two other vehicles, i.e., $C = A \oplus B$; the combined packet is then forwarded back to the two sources. In this way, channel contention caused by beacon overhead is reduced. The authors in \cite{Sahu.2014b} also utilized the XOR operation on incoming packets from two other vehicles as in \cite{Sahu.2014a}; their target, however, is to cancel interference due to inter-street beacon communications by adaptive transmission control.
\subsection{Network Coding on Commercial Mobile Devices}
Given the huge number of devices, implementing NC on commercial mobile devices via cellular networks and WiFi connections opens a promising approach for real-time applications such as emergency communications, where real-time videos/photos can be taken and sent in a timely manner. In \cite{Peng.2012}, network-coded architectures were presented for the next-generation IMT-Advanced systems. The key point is that collaboration by the intermediate nodes can enhance network performance significantly. However, existing 3G/4G cellular networks support neither cooperative relay schemes nor the decoding procedure at the base station, and the respective scheduling schemes are also required. Therefore, that work is a first look at possible scenarios for the design of cooperative NC in future wireless communication systems. Pedersen et al. \cite{Pedersen.2008} implemented a simple NC scheme at intermediate cellphones. By combining both 3G/4G links and WiFi connections on cellphones, the overall network bandwidth consumption may be reduced by 50 percent. Even though the model is simple, this work opens a new approach to applying NC on existing cellular network devices, which was not feasible in \cite{Peng.2012}. In addition, in \cite{Pahlevani.2014}, several NC schemes for data sharing were implemented on commercial cellphones, considering both 3G/4G and WiFi connections between devices. In general, realistic applications of NC for emergency communications over cellular systems have not been considered extensively due to limitations such as cooperative relay schemes, the decoding procedure, and scheduling schemes at the base station. However, if 3G/4G and WiFi are flexibly combined, NC can improve network utilization efficiency and reliability under hostile channel conditions.
\section{Description of the Proposed Network Architecture}
\label{sec:model}
Assuming that the cellular network infrastructure is still functional under the effects of a disaster, mobile devices can be helpful in the efficient distribution of rescue and relief information to the disaster area. However, for communication in this environment, end-to-end connectivity cannot always be guaranteed to all users in the field, where the construction of a continuous end-to-end path between source and destination is difficult or impossible. Moreover, in large-scale disasters such as floods and cyclones, the cellular network infrastructure may immediately become non-functional due to system damage.
The proposed network scheme in Fig. \ref{fig:model2} can then be adapted by replacing the base station with a portable wireless station equipped with satellite communications and short-range radio links.
\begin{figure}[htbp] \begin{center} \epsfig{file=model2.eps,width=0.48\textwidth} \caption{Network model: a) Direct cellular links between BTS and users without cooperation, b) Cooperation between cellular links and WiFi links for data download and distribution. The red dotted lines denote failed or intermittent connections over cellular links; however, the proposed transmission strategy may still provide data dissemination through other connections over WiFi links.} \label{fig:model2} \end{center} \end{figure}
In Fig. \ref{fig:model2}(b), we introduce a novel network architecture based on existing mobile devices in cellular networks, where content delivery from the disaster management headquarters is guaranteed to all users in intermittent connectivity scenarios. Moreover, overall network resources on cellular communications may be saved by cooperation between devices over cellular links and WiFi links for data download and distribution. Assume that relief workers with mobile devices in the disaster area request the same multimedia content from the headquarters, e.g., a global view of the disaster area. We can consider these users as a multicast group, as illustrated in Fig. \ref{fig:model1}. The network model consists of two groups. The first group of users is connected directly to the cellular base station through cellular links. However, due to channel effects or obstacles which cause either disconnection or intermittent connectivity, the other users cannot be served well over cellular links. The second group therefore forms an ad-hoc network based on WiFi links, treating the first group as indirect sources that relay the requested multimedia content. This model allows data dissemination from the headquarters to all on-site relief workers in emergency scenarios, especially in case of intermittent connectivity due to obstacles or difficult terrain in the disaster areas.
\section{Random Linear Network Coding Scheme}
\label{sec:NC}
\subsection{Network Coding Scheme}
\label{sec:des}
\begin{figure}[htbp] \begin{center} \epsfig{file=model3.eps,width=0.48\textwidth} \caption{Network-coded traffic with random linear NC at the source. For example, in the case of $n=2,m=2$, each coded packet is transmitted to a relay in the first group via a cellular link. Then, these packets are forwarded toward the next hop via WiFi connections. Intermediate nodes in the second relay group can recombine multiple received packets from the same generation to increase the linear independence of the packets received at the sinks. Even if erasure events occur on one of the incoming links of the relays, delivery performance is still guaranteed at the destination.} \label{fig:model3} \end{center} \end{figure}
We consider the proposed network architecture in Fig. \ref{fig:model2}(b), where a multicast group requests an image of the disaster area sent from the management headquarters (source node). For simplicity, we assume that the base station plays the same role as the source node. To improve the reliability of content delivery against packet losses, the source applies a random linear NC scheme to the file before transmission, as illustrated in Fig. \ref{fig:model3}. First, the file is segmented into a set of fragments or blocks.
The source then linearly combines $n$ fragments $x_{i}$ $(1\leq i\leq n)$ to generate $m$ $(m \geq n)$ linear combinations $y_{j}$ $(1\leq j\leq m)$ as follows
\begin{eqnarray} {y_{j}} = \sum_{\substack{1\leq i\leq n}} c_{ji}x_{i}, \label{eq:rlnc1} \end{eqnarray}
where $c_{ji}$ is a coefficient randomly generated from a Galois field of size $q=2^{f}$ ($c_{ji}\in \mathbb{F}_{q} \backslash \left\{0\right\}$), e.g., $f=8$. A set of $m$ linear combinations produced from $n$ fragments is assigned the same generation number. In general, let $X\in \mathbb{F}^{n \times 1}_{q}$ denote the $n\times 1$ vector of source fragments and $C\in \mathbb{F}^{m \times n}_{q}$ with rank $n$ denote the $m \times n$ coefficient matrix. Then, the $m \times 1$ vector of transmitted network-coded packets $Y\in \mathbb{F}^{m \times 1}_{q}$ is given by
\begin{eqnarray} Y = C \cdot X. \label{eq:rlnc2} \end{eqnarray}
At intermediate nodes, multiple packet combinations from the same generation are linearly recombined to generate the output, where each recombination coefficient is randomly drawn from the same finite field $GF(2^{f})$. At the sink, once at least $n$ linearly independent encoded packets originating from the source node have been received, the network-coded packets are decoded by
\begin{eqnarray} \hat{X} = \hat{C}^{-1} \cdot \hat{Y}, \label{eq:rlnc3} \end{eqnarray}
where $\hat{C} \in \mathbb{F}^{m^{'} \times n}_{q}$ ($m^{'} \geq n$) is the coefficient matrix corresponding to the received packet combinations $\hat{Y}$, and $\hat{C}^{-1}$ denotes its (left) inverse, obtained, e.g., by Gaussian elimination.
\subsection{Decoding Probability Analysis}
\label{sec:analysis}
In this section, we investigate the successful decoding probability of random linear NC at the source over erasure channels. For a generalized analysis of the proposed network architecture in Fig. \ref{fig:model3}, we consider a relay network composed of one traffic source $S$, $N$ and $M$ relay nodes at the first and second hops, respectively, and a number of destinations, as depicted in Fig. \ref{fig:model4}. All links are assumed to be independent channels. The random linear NC scheme follows the description in Sec. \ref{sec:des}. The source generates $m$ linear combinations from each generation of $n$ original fragments and distributes them via different links to the first-hop neighbors $r_{i}$ $(1\leq i\leq N)$. The network-coded packets forwarded by the first group are then re-encoded at the second-hop nodes $R_{j}$ $(1\leq j\leq M)$ before reaching the destinations.
\begin{figure}[htbp] \begin{center} \epsfig{file=model4.eps,width=0.38\textwidth} \caption{A linear NC scheme at source $S$ and $N+M$ relays.} \label{fig:model4} \end{center} \end{figure}
The effectiveness of random linear NC depends on the availability of at least $n$ linearly independent encoded packets per generation at the destination to recover the original data. This condition depends on the erasure rate and the NC design. Therefore, we derive the decoding probability at the destination as a function of the packet loss rate, the coding design parameters $(n,m)$, and the number of relays $(N,M)$. Let $\delta_{0}$ be the erasure rate of the links between $S$ and $r_{i}$, $\delta_{1}$ the erasure rate of the links between $r_{i}$ and $R_{j}$, and $\delta_{2}$ the erasure rate of the links between $R_{j}$ and the destinations.
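Before proceeding with the analysis, we illustrate the coding operations of (\ref{eq:rlnc1})--(\ref{eq:rlnc3}) with a minimal sketch that encodes $n$ fragments into $m$ random linear combinations and decodes them from $n$ linearly independent ones by Gaussian elimination. For brevity, the sketch operates over the prime field $GF(257)$ with modular arithmetic rather than the extension field $GF(2^{f})$ used in practice; the structure of the computation is otherwise the same:
\begin{verbatim}
import random

Q = 257  # prime field size (stand-in for GF(2^8); see the text)

def encode(fragments, m):
    """Generate m random linear combinations of the n fragments."""
    n = len(fragments)
    coeffs = [[random.randrange(1, Q) for _ in range(n)]
              for _ in range(m)]
    coded = [sum(c * x for c, x in zip(row, fragments)) % Q
             for row in coeffs]
    return coeffs, coded

def decode(coeffs, coded, n):
    """Recover the fragments by Gaussian elimination over GF(Q)."""
    rows = [row[:] + [y] for row, y in zip(coeffs, coded)]
    for col in range(n):
        pivot = next(r for r in range(col, len(rows)) if rows[r][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], Q - 2, Q)      # modular inverse
        rows[col] = [v * inv % Q for v in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % Q
                           for a, b in zip(rows[r], rows[col])]
    return [rows[i][n] for i in range(n)]

random.seed(7)
fragments = [42, 7, 99]                  # n = 3 source fragments
coeffs, coded = encode(fragments, m=5)   # m = 5 coded packets
# Any n linearly independent coded packets suffice at the sink:
assert decode(coeffs[:3], coded[:3], n=3) == fragments
\end{verbatim}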
On the erasure links between $R_{j}$ and the destination, we let $\overline{\epsilon}$ denote the successful reception event with $Pr\left\{\overline{\epsilon}\right\}=1-\delta_{2}$ and $\epsilon$ denote the occurrence of an erasure event with $Pr\left\{\epsilon\right\}=\delta_{2}$. We consider the extraction of each element $\hat{c}_{ji}$ of the coefficient matrix $\hat{C}$ at the destination under the effects of the erasure channels $\delta_{0}$, $\delta_{1}$, and $\delta_{2}$. At a specific relay $R_{j}$ (e.g., $R_{1}$), it can be observed that, given no packet loss on the link from $R_{1}$ to the destination, the random element equals zero only if packet losses have occurred either on all $N$ links from the $r_{i}$ to $R_{1}$ (after successful first-hop reception) or on all $N$ links from the source to the $r_{i}$; otherwise, the element is a random value drawn from the finite field excluding zero. Therefore, the conditional probabilities are respectively given by
\begin{eqnarray} Pr\left\{\hat{c}_{ji}= 0 | \overline{\epsilon}\right\} = \left[ \left(1-\delta_{0}\right)\delta_{1} \right]^{N} + \left(\delta_{0}\right)^{N} = \psi, \label{eq:a1} \end{eqnarray}
\begin{eqnarray} Pr\left\{\hat{c}_{ji}= \theta | \overline{\epsilon}\right\} = \left(1-\psi \right)/\left(q-1 \right) \quad (\theta \neq 0). \label{eq:a2} \end{eqnarray}
We adapt the analytical model in \cite{Seong.2014}, which is based on the rank of the coefficient matrix, to derive a bound on the decoding probability. The delivery failure probability $P_{fail}$ is defined as
\begin{eqnarray} P_{fail} & := & Pr\left\{rank(\hat{C}) < n \right\} \nonumber \\ & = & Pr\left\{\exists v: \hat{C}v = 0^{T} \right\} \nonumber \\ & \leq & \sum_{\substack{v \in \mathbb{F}^{n}_{q} \backslash \left\{0^{T}\right\}}} Pr\left\{\hat{C}v = 0^{T} \right\}, \label{eq:a3} \end{eqnarray}
where $v$ is a nonzero vector with $n$ elements and $0^{T}$ denotes the zero vector. Following the same approach as Theorem $1$ in \cite{Seong.2014}, we obtain
\begin{eqnarray} P_{decode} = 1 - P_{fail}, \label{eq:a4} \end{eqnarray}
where
\begin{multline} P_{fail} \leq \frac{1}{q-1} \sum_{\substack{1\leq i \leq n}} \dbinom{n}{i}(q-1)^{i} \bigg\{\delta_{2} + \left(1-\delta_{2}\right) \\ \times \bigg[ q^{-1} + \left(1-q^{-1} \right) \left(1-\frac{1-\psi}{1-q^{-1}}\right)^{i} \bigg] \bigg\}^{M\cdot\frac{m}{N}}. \label{eq:a5} \end{multline}
For the sake of comparison, we also derive the decoding probability of an inter-NC scheme at the second-hop relays based on the same network model as in Fig. \ref{fig:model4}, where the source transmits $N$ plain fragments without combination. Each of the $M$ intermediate nodes $R_{j}$ encodes the received messages using random linear NC before forwarding. The decoding probability of the $N$ fragments is given by
\begin{multline} P^{'}_{decode} \geq (1 - \delta_{0})^{N} \times \bigg\{ 1 - \frac{1}{q-1} \sum_{\substack{1\leq i \leq N}} \dbinom{N}{i}(q-1)^{i} \\ \times \bigg\{\delta_{2} + \left(1-\delta_{2}\right) \bigg[ q^{-1} + \left(1-q^{-1} \right) \left(1-\frac{1-\delta_{1}}{1-q^{-1}}\right)^{i} \bigg] \bigg\}^{M}\bigg\}. \label{eq:a6} \end{multline}
\begin{figure}[htbp] \begin{center} \epsfig{file=theory3D.eps,width=0.55\textwidth} \caption{Theoretical decoding probability of NC at the source for $q=256$, $n=3$, $N=3$, $\delta_{1}=0.01$, $\delta_{2}=0.01$.} \label{fig:theory3D} \end{center} \end{figure}
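The bound in (\ref{eq:a5}) can be evaluated numerically to reproduce curves such as those in Fig. \ref{fig:theory3D}. A minimal sketch follows; the fixed parameters mirror the figure caption, while the values of $m$, $M$, and $\delta_{0}$ in the example calls are arbitrary inputs:
\begin{verbatim}
from math import comb

def p_decode_bound(n, m, N, M, d0, d1, d2, q=256):
    """Lower bound on the decoding probability of NC at source,
    obtained from the upper bound on P_fail derived in the text."""
    psi = ((1 - d0) * d1) ** N + d0 ** N
    p_fail = 0.0
    for i in range(1, n + 1):
        inner = 1 / q + (1 - 1 / q) * (1 - (1 - psi) / (1 - 1 / q)) ** i
        p_fail += comb(n, i) * (q - 1) ** i \
                  * (d2 + (1 - d2) * inner) ** (M * m / N)
    p_fail /= (q - 1)
    return 1 - min(p_fail, 1.0)

# Fixed settings from the figure caption: q=256, n=3, N=3,
# delta_1 = delta_2 = 0.01; sweep the source-side erasure rate.
for d0 in (0.1, 0.3, 0.5):
    print(d0, p_decode_bound(n=3, m=6, N=3, M=4,
                             d0=d0, d1=0.01, d2=0.01))
\end{verbatim}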
Fig. \ref{fig:theory3D} plots the bound on the decoding probability with respect to the number of relays $M$ at the second hop as a function of the erasure rate $\delta_{0}$ on the links between the source and the first-hop relays and of the coding design parameter $m$. Each surface corresponds to a different value of $M$. In addition, the solid blue line compares the theoretical performance of random NC at the source and inter-NC for various values of the erasure rate $\delta_{0}$. We can observe that inter-NC at the intermediate relays with $M=4$ is dramatically affected by erasures on the channels between the source and the first-hop relays. In contrast, random linear NC at the source with link diversity can potentially provide better delivery performance by flexibly changing the NC parameter $m$ at the source node. In particular, for $M=4$, the decoding probability at the destinations stays close to its maximum value of $100\%$ for erasure rates in the range between $0.2$ and $0.55$. This is because the redundancy of the intermediate relays $R_{j}$ at the second hop ($M=4$ versus $N=3$), combined with the link diversity created by packet combination at the source, increases the opportunity for the network-coded packets to reach the destinations. In the other case, assume that the available relays are reduced to $M=2$ while the coding parameters at the source are $n=3$ and $m=4$. We then see a significant degradation of the decoding probability compared with the other cases. However, the problem can be mitigated by increasing the redundant combinations at the source, i.e., the $m-n$ extra packets. For example, with $m=5$, the decoding ratio reaches $90\%$ at $\delta_{0}=0.5$. The above observation reveals a practical aspect of NC at the source, which we call \textit{Geo-Network coding}: the geographical information of the relays in the emergency area is taken into account in the network coding design so as to enhance robust and reliable transmission and delivery in situation-awareness scenarios. Assuming that the source has access to coverage maps and the locations of the relays, Geo-Network coding simply means selecting the $M$ appropriate relays depending on the signal strength at their locations. Even if the number of available relays $R_{j}$ is insufficient to provide the required performance, the source itself can increase the network performance through the $m-n$ redundant combinations.
\section{Simulation Results}
\label{sec:sim}
Assume that intermittent connectivity due to obstacles or difficult terrain in the disaster areas prevents direct transmission from the base station to the requesting users over cellular links. In this case, our architecture is a feasible approach to keep serving these users, and we take advantage of NC to enhance the reliability of multicast data delivery. In this section, we evaluate the performance of the proposed network architecture for multicast data delivery with and without NC in terms of the average packet delivery ratio (PDR) at the sinks. The simulated network scheme is illustrated in Fig. \ref{fig:model3}, where the source transmits multicast data over different channels via cellular links to some intermediate nodes in the group. These packets are then forwarded hop-by-hop to their neighboring nodes via WiFi links before reaching the sinks. The objective of the simulation is to send a multicast file from a source to 2 sinks through intermediate relays, where 2000 packets are transmitted across the simulated network. The Galois field size for NC is $2^8$, which is sufficiently large for practical applications. Per-link packet erasure patterns at different erasure rates are generated using the Gilbert--Elliott model. We compare the delivery performance of pure fragmentation without NC and fragmentation with NC at the source versus the per-link packet erasure probability.
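The Gilbert--Elliott channel is a two-state Markov model with a good and a bad state, each having its own per-packet loss probability, which produces the bursty losses typical of wireless links. A minimal generator of per-link erasure patterns is sketched below; the transition and loss probabilities shown are illustrative placeholders rather than the values used in our simulations:
\begin{verbatim}
import random

def gilbert_elliott(n_packets, p_gb, p_bg, loss_good, loss_bad):
    """Generate a bursty erasure pattern (True = packet erased).
    p_gb: Pr(good -> bad), p_bg: Pr(bad -> good)."""
    bad = False
    pattern = []
    for _ in range(n_packets):
        pattern.append(random.random() < (loss_bad if bad else loss_good))
        # Markov state transition for the next packet.
        bad = (random.random() >= p_bg) if bad else (random.random() < p_gb)
    return pattern

# Illustrative parameters; the mean erasure rate is governed by the
# stationary bad-state probability p_gb / (p_gb + p_bg).
pattern = gilbert_elliott(2000, p_gb=0.05, p_bg=0.4,
                          loss_good=0.01, loss_bad=0.5)
print(sum(pattern) / len(pattern))  # empirical erasure rate
\end{verbatim}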
\begin{figure}[htbp] \begin{center} \epsfig{file=LossRate.eps,width=0.45\textwidth} \caption{Mean delivery ratio of the different transmission strategies versus the per-link packet erasure probability.} \label{fig:LossRate} \end{center} \end{figure}
Initial simulations evaluate the average PDR at multiple sinks versus varying link erasure probabilities. Fig. \ref{fig:LossRate} compares the performance of fragmentation without NC and fragmentation with NC for different values of $(n,m)$, i.e., $m$ linear combinations for every $n$ source fragments. We can observe that pure fragmentation with only store-and-forward yields a very low PDR due to erasure channels and packet drops at intermediate nodes. In contrast, the fragmentation schemes with NC generally outperform pure fragmentation regardless of the erasure channel conditions. This is due to NC's ability to recover from losses, i.e., its reliability over erasure channels: it exploits link diversity and re-encodes packets of the same generation at intermediate nodes to reduce congestion, which increases the probability of successfully transporting a coded packet from the source to the sinks. In particular, the NC scheme with $n=11, m=12$ obtains the highest performance owing to its higher link diversity, although its redundancy ratio is slightly smaller than that of the NC scheme with $n=9, m=10$ ($1/11$ and $1/9$, respectively). In contrast, in the case of $n=1, m=2$, the PDR is the smallest among the NC schemes because of its small link diversity; it nevertheless achieves a significant improvement in network performance across the various packet loss rates. In general, the simulation results show that the hybrid network architecture with NC can considerably improve multicast data delivery in bandwidth-constrained scenarios with severe disruptions such as intermittent connectivity and erasure channels.
\begin{figure}[htbp] \begin{center} \epsfig{file=GF.eps,width=0.45\textwidth} \caption{Mean delivery ratio of fragmentation with NC for $n=9$, $m=10$ and different field sizes $GF(2^k)$ versus the per-link packet erasure probability.} \label{fig:GFsize} \end{center} \end{figure}
The subsequent results show the impact of the Galois field size on network performance. Fig. \ref{fig:GFsize} shows the mean delivery ratio versus the link erasure probability for different field sizes. A larger field size leads to a higher delivery ratio at the destination. The reason is that the larger the field size, the higher the probability that the sinks receive $n$ linearly independent encoded packets from the source, i.e., the higher the probability of successfully decoding a generation of network-coded packets.
\section{Conclusion}
\label{sec:concl}
In this paper, we proposed a novel network architecture based on existing cellular networks and commercial mobile devices for intermittent connectivity scenarios. By exploiting random linear NC, we can guarantee robust and reliable content delivery and connectivity for all devices within the disaster area under the effects of wireless channels and obstacles.
Simulation results show that, in the proposed network architecture, fragmentation with NC at the source significantly outperforms the pure fragmentation scheme in terms of the delivery probability. The impact of the finite field size on network performance is also evaluated. This work is only a first step toward a full consideration of novel network architectures for emergency scenarios. In the future, we plan to investigate a real implementation and performance evaluation of the proposed architecture on existing commercial cellphones.
\bibliographystyle{IEEETran}
\section{Introduction}
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{exam1.png} \caption{A real-world example of fact-checking behavior. \emph{thebri\_animal} is a fact-checker, who corrects the false claim with a fact-checking URL/article containing factual evidence.} \label{example} \vspace{-15pt} \end{figure}
While social media sites provide users with a revolutionized communication medium by bringing communication efficiency to a new level, they can easily be misused to spread misinformation and fake news widely. Fake news and misinformation have been a long-standing issue serving various purposes such as political propaganda \cite{allcott2017social} and financial propaganda \cite{kogan2017fake}. To fight against fake news, traditional publishers employed human editors to manually and carefully check the content of news articles in order to maintain their reputation. However, social media provided a new way to spread news, which led to broader information sources and an expanded audience (i.e., anyone can act as a medium and create news). In particular, users share news articles with their own opinions or read articles shared by their friends, regardless of the source of the news, with mostly blind trust \cite{twitterfake201} or according to their own ideologies \cite{ecker2010explicit, nyhan2010corrections}. Although social media posts usually have a very short life cycle, the unprecedented amount of fake news may lead to a catastrophic impact on both individuals and society. Besides misleading users with false information \cite{nyhan2010corrections}, widely propagated fake news can even cause a trust crisis of the entire news ecosystem \cite{Shu:2019:BeyondFN}, affecting both cyberspace and the physical space. In the literature, researchers have focused on four topics regarding fake news: characterization (i.e., types of fake news), motivation, circulation, and countermeasures \cite{FakeNews,Zhou:2019:FNF:3289600.3291382}. A large body of work has been done on fake news identification \cite{Shu:2019:BeyondFN,Tschiatschek:2018:FND,Bastidas2018,Wang:2018:EANN} by exploiting multiple content-related and social-related components. However, we notice that fake news still spreads widely even after early detection \cite{figueira2017current}. Therefore, we propose to study a complementary approach to mitigate the spread and impact of fake news. Recently, communities and journalists have started building and maintaining fact-checking websites (e.g., Snopes.com). Social media users called \emph{fact-checkers} have also started using these fact-checking pages as factual evidence to debunk fake news by replying to fake news posters. Figure \ref{example} demonstrates a real-world example of a fact-checker's fact-checking behavior on Twitter: another user's false claim is debunked with a Snopes page URL as evidence supporting the factual correction. In \cite{Vo:2018}, researchers found that these fact-checkers actively debunked fake news mostly within one day, and that their replies were exposed to hundreds of millions of users. To help these fact-checkers engage with fake news posters even more quickly and intelligently consume the increasing volume of fact-checking articles, in this paper we propose a novel personalized fact-checking URL recommender system. According to \cite{mikolov2013distributed}, a co-occurrence matrix within a given context provides information about the semantic similarity between two objects.
Therefore, in our proposed deep-learning based recommender system, we employ two extended matrices, a user-user co-occurrence matrix and a URL-URL co-occurrence matrix, to facilitate the recommendation. In addition, since users tend to form relationships with like-minded people \cite{quattrociocchi2016echo}, we incorporate each user's social context to capture semantic relations and enhance the recommendation performance. Our main contributions are summarized as follows:
\squishlist
\item We propose a new framework for personalized fact-checking URL recommendation, which relies on multi-relational context neighbors.
\item We propose two attention mechanisms which allow for learning deep semantic representations of both a target user and a target URL at different granularities.
\item Experimental results show that our proposed model outperforms eight state-of-the-art baselines covering various types of recommendation approaches. An ablation study confirms the effectiveness of each component in our proposed framework.
\squishend
\section{Related Works}
In this section, we briefly review related works and position our work within the following areas: (1) fake news and misinformation; (2) advancements in recommender systems; and (3) graph convolutional networks.
\subsection{Fake News and Misinformation}
Fake news has attracted considerable attention since it relates to our daily life and has become a serious problem in multiple areas such as politics \cite{allcott2017social} and finance \cite{kogan2017fake}. Social media sites have become one of the popular mediums for propagating fake news and misinformation. The dominant line of work on this topic is fake news detection \cite{shu2017fake}, which has mostly been formulated as a binary classification problem. Researchers began to incorporate social context and other features for identifying fake news at an early stage and preventing it from diffusing over the social network \cite{Shu:2019:BeyondFN,Zhou:2019:FNF:3289600.3291382}. Other researchers focus on investigating the propagation patterns of fake news in social networks \cite{wu2018tracing,liu2018early}. \cite{Vo:2019:LFA} also studied fake news intervention. Unlike most previous works, we follow the direction of \cite{Vo:2018} and propose to build a personalized recommender system that promotes the circulation of fact-checking articles to debunk fake news.
\subsection{Advancements in Recommender Systems}
Traditionally, recommendation algorithms can be divided into two categories: collaborative filtering \cite{Sarwar:2001:ICF} and content-based filtering. In the past few years, however, recommendation has become a more integrated task due to the success of deep neural networks. Neural networks (NNs) have proven effective at capturing underlying nonlinear relations \cite{He:2017:NCF}. Another advantage is that NNs enhance a model's capability of extracting knowledge from multimodal data \cite{van2013deep,he2016ups,wang2017your}, which serves as auxiliary information and provides solutions to the data sparsity problem. More recently, researchers have introduced into recommender systems the attention mechanism, which has achieved great success in various fields \cite{Bahdanau2015Attention,NIPS2017selfatt}. Multiple variants of the attention mechanism have been developed to improve both recommendation precision and model interpretability \cite{wang2017dynamic, chen2017attentive, seo2017interpretable,zhu2017couplenet}.
In this paper, we also propose two novel designs of the attention mechanism. Following \cite{Ebesu:2018:CMN,He2018NAISNA}, we further explore the multi-relational context of a given user-URL pair, aiming to discriminate the most important elements of URL-dependent user preference.
\subsection{Graph Convolutional Networks}
With the surge of graph-based neural networks, GCN-based approaches have shown strong effectiveness on various tasks \cite{Kipf2016GCN,LGCL2018,NIPS2017GraphSAGE}, including recommender systems. The core idea is to iteratively aggregate attributed node vectors around each node, with messages propagating through stacked layers. However, the original design of GCN is not suitable for our scenario for the following reasons. First, existing GCN works \cite{LGCL2018,NIPS2017GraphSAGE} do not distinguish between different types of nodes, whereas in our case it does not make sense to aggregate user and URL nodes together. Second, the aggregation function proposed in most GCN works treats all adjacent nodes with the same importance, which is inappropriate in real-world applications and tends to neglect necessary information. \cite{velickovic2018gat} breaks this schema by using a multi-head attention mechanism to replace the convolution-like operator, yet it requires significant extra computation and memory. Compared to previous works, in this paper we focus on a novel application and investigate the influences of both the co-occurrence context and the social context on fact-checking URL recommendation. We also incorporate sets of auxiliary attributes, which enable more comprehensive learning of the compatibility between a given pair of user and URL. Moreover, we take advantage of advances in graph neural networks and attention mechanisms to solve the aforementioned research problems.
\section{Problem Formulation}
\label{definition}
We formally introduce definitions before describing our proposed framework. We define fact-checking behavior as a user (i.e., a \emph{fact-checker}\footnote{We use the terms user and fact-checker interchangeably in this paper.}) embedding a fact-checking URL in his reply in order to debunk fake news. We regard each fact-checking behavior as an implicit interaction between target user $i$ and target URL $j$.
\paragraph{Definition 1 (Fact-checking URL Recommendation Task)} Let $\mathcal{U} = \{u_1,u_2,...,u_n\}$ denote a set of fact-checkers on social media, and let $\mathcal{C} = \{c_1,c_2,...,c_m\}$ index the fact-checking URLs. We construct the user-URL interaction matrix $Y = \{y_{ij} \mid u_i\in \mathcal{U}, c_j \in \mathcal{C} \}$ according to users' fact-checking behavior, where
\begin{equation} y_{ij} = \begin{cases} 1, & \text{if an ($u_i,c_j$) interaction is observed,}\\ 0, & \text{otherwise.} \end{cases} \end{equation}
Each value of 1 for $y_{ij}$ indicates the existence of an implicit interaction between target user $i$ and target URL $j$. Each user $u_i$ and each URL $c_j$ is associated with a set of attributes. The goal of the recommendation task is to recommend the top-N URLs from the URL set $\mathcal{C}$ to each user. We also model the entire dataset as a heterogeneous graph, a special kind of information network that consists of multiple types of objects, multiple types of links, or both.
\paragraph{Definition 2 (Heterogeneous Network) \cite{sun2011pathsim}} Formally, consider a heterogeneous graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, where $\mathcal{V}$ ($|\mathcal{V}|= m + n$) and $\mathcal{E}$ denote the node set and the edge set, respectively.
The heterogeneity is represented by the node type mapping function $\phi: \mathcal{V} \to \mathcal{A}$ and the edge type projection function $\psi: \mathcal{E} \to \mathcal{R}$, where $\mathcal{A}$ and $\mathcal{R}$ denote the sets of predefined node types and edge types, and $|\mathcal{A}| + |\mathcal{R}| > 2$. Note that we do not consider self-loops in our graph construction.
\begin{figure}[t] \centering \includegraphics[scale=0.7]{mr_case.png} \caption{A toy example of multi-relational context w.r.t. a given target user-URL pair.} \label{mr_case} \vspace{-16pt} \end{figure}
\paragraph{Definition 3 (Multi-relational Context)} Given a target user $i$, we define the fact-checkers he follows and his co-occurring fact-checkers as his social context user neighbors and co-occurrence context user neighbors, respectively. Similarly, we call the other URLs posted by target user $i$ and the co-occurring URLs of target URL $j$ the historical context URL neighbors and co-occurrence context URL neighbors, respectively. Collectively, we call all the context neighbors the multi-relational context of a given target user-URL pair.
\paragraph{Example} Figure \ref{mr_case} illustrates the multi-relational context. In Figure \ref{mr_case}, $c_1$, $c_2$, $c_3$ represent fact-checking URLs and $u_1$, $u_2$, $u_3$ are users involved in sharing these URLs. For example, $(u_1 \to u_2)$ indicates the social relationship between $u_1$ and $u_2$; intuitively, we care more about the influence of $u_2$ on $u_1$. $(u_1 \to c_1 \gets u_2)$ means that $u_1$ and $u_2$ are co-occurrence user neighbors. Similarly, we call $c_1$ and $c_2$ co-occurrence URL neighbors of $u_3$, and $c_2$ is a historical context URL neighbor given the target $u_3$-$c_3$ pair.
\begin{table}[htbp] \caption{Notations.} \centering \small \begin{tabular}{@{}cl@{}} \toprule Notations & Description \\ \midrule $b_h$ & \# of selected relation-based neighbors \\ $S$ & Spatial weight tensor \\ $L$ & Layer-wise weight tensor \\ $C$ & Channel-wise weight tensor \\ $M$ & Initial embedding matrix of each neighbor \\ $N$ & Attended embedding matrix of each neighbor \\ $A_{ij}$ & Weighted adjacency matrix in graph \\ $W_{\phi_{i}}$ & Node type specific transformation matrix \\ $\mathcal{N}^{\phi_t}_i$ & Node type specific neighbor nodes \\ $e_{ij}^{\phi^{(l)}}$ & Importance between node pair $(i,j)$ at layer $l$ \\ $\alpha_{ij}^{\phi^{(l)}}$ & Weights between node pair $(i,j)$ at layer $l$ \\ $p_i$ & Neighborhood embedding of user $i$ \\ $p_j$ & Neighborhood embedding of URL $j$ \\ $u^{\prime}_i$ & Wide context-based embedding of user $i$ \\ $c^{\prime}_j$ & Wide context-based embedding of URL $j$ \\ $h^{(l)}_i$ & Deep context-based embedding of node $i$ \\ \bottomrule \end{tabular} \label{notations} \vspace{-13pt} \end{table}
\begin{figure*}[h] \centering \includegraphics[width=\textwidth]{framework.png} \caption{A schematic overview of our proposed Attributed Multi-Relational Attention Network (AMRAN), consisting of two modules: (1) a convolutional spatial attention network (CSAN); and (2) a heterogeneous graph attention network (HGAN).} \label{overview} \vspace{-15pt} \end{figure*}
\section{Proposed Framework}
\label{sec:framework}
We propose a novel framework called Attributed Multi-Relational Attention Network (\emph{AMRAN}) to model the influence of the multi-relational context on a target user's fact-checking behavior. In this section, we elaborate on the proposed AMRAN using the notations described in Table~\ref{notations}.
At a high level, \emph{AMRAN} is composed of two modules, as shown in Figure \ref{overview}: (i) a convolutional spatial attention network (\emph{CSAN}) and (ii) a heterogeneous graph attention network (\emph{HGAN}). \emph{CSAN} jointly models the influence of the multi-relational context on the target user-URL pair (Section 4.1). It enriches the neighborhood diversity and expands the scope of information reception. \emph{HGAN} leverages both global node connectivity and local node attributes in order to incorporate the effect of information propagation and encode the user's dynamic preference in depth (Section 4.2). In the final step, the model produces recommendations by combining the wide context-aware target user embedding and URL embedding, the multi-relational context user embedding and context URL embedding, and the deep context-aware user embedding and URL embedding (Section 4.3).
\subsection{Convolutional Spatial Attention Network (CSAN)}
The left bounding box in Figure \ref{overview} illustrates the structure of the CSAN module. To provide a broad scope of knowledge for generating the wide context-aware target user embedding and URL embedding, we adopt a multi-branch setting in CSAN. The two parallel branches model the multi-relational context of the target user and the target URL, respectively. Each branch contains two identical streams, and we select $b_h$ context neighbors for each stream (i.e., historical context URL neighbors and co-occurrence context URL neighbors of the target URL, and social context user neighbors and co-occurrence context user neighbors of the target user). These streams are employed to learn the most discriminative features from the multi-relational neighbors of the target user and the target URL. We then employ a gated fusion layer to capture the optimal global-level representation of the target user-URL pair. Note that we enable embedding sharing within each branch, as users/URLs share the same feature set.
\subsubsection{Raw Attribute Input}
Users and URLs are associated with different feature sets. Therefore, CSAN starts by embedding the input attribute set of each context neighbor. We use $s$ and $t$ to denote the number of features of a user and a URL, respectively. Note that the dimensions of the initial embeddings of the attributes may differ, since they may carry different information volumes. We use one-hot encoding for categorical feature inputs and apply a direct lookup on these features. However, the same solution performs poorly when it comes to continuous attributes, such as the posting frequency of a URL. Empirically, we found that a workable solution is to bucketize these features into small intervals. Specifically, in this work we map continuous attributes in the ranges $[0,1), [1,2),..., [2^k, 2^{k+1})$ to the bucket indices $0,1,..., k$.
\subsubsection{Attribute Embedding Layer}
We then project all attributes into the same $w$-dimensional latent space via a set of attribute-specific transformation matrices $W_1, W_2, ..., W_{s+t}$. The attributes of each neighbor are then stacked as a matrix of shape $s \times w$ for users and $t \times w$ for URLs. However, we treat the target user-URL pair differently. After projecting their attributes by the same attribute-specific transformation matrices as their relational neighbors, instead of stacking them as a matrix, we concatenate the attribute embedding vectors and feed the result through a linear projection to generate $u^{\prime}_i \in \mathbb{R}^d$ and $c^{\prime}_j \in \mathbb{R}^d$ for future reference.
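As an illustration of this attribute pipeline, the following PyTorch-style sketch bucketizes one continuous attribute and embeds it next to a categorical one; the feature names, vocabulary sizes, and dimensions are illustrative assumptions rather than our actual configuration:
\begin{verbatim}
import math
import torch
import torch.nn as nn

def bucketize(value: float, k: int = 16) -> int:
    """Map a continuous attribute in [0,1), [1,2), ..., [2^k, 2^{k+1})
    to a discrete bucket index."""
    if value < 1.0:
        return 0
    return min(int(math.log2(value)) + 1, k)

w = 64                             # shared latent width (assumed)
freq_embed = nn.Embedding(17, w)   # bucketized URL posting frequency
site_embed = nn.Embedding(100, w)  # categorical fact-checking site id

# Embedding matrix (t x w) of one URL neighbor with t = 2 attributes.
features = torch.stack([
    freq_embed(torch.tensor(bucketize(37.0))),
    site_embed(torch.tensor(5)),
])
\end{verbatim}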
\subsubsection{Spatial Attention Block}
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{sp_attn.png} \caption{Illustration of the spatial attention mechanism (an attention block in the historical context URL stream is shown for illustration).} \label{spat_attn} \vspace{-15pt} \end{figure}
To prevent misalignment and enable better comparison among the neighborhood features, we propose a scheme for jointly learning layer-wise and channel-wise attention. In particular, for each stream, we pile the neighbors' representation matrices together to obtain a $3$-dimensional tensor $M$. Intuitively, this design helps improve the alignment quality of the neighbors' features. Then, inspired by \cite{hu2018senet,li2018harmonious}, we employ a spatial attention block in each stream for jointly learning channel-level and layer-level soft attention. See Figure \ref{spat_attn} for a high-level illustration of our spatial attention block. All streams adopt identical spatial attention blocks, and each block attends over its input attribute representations independently. In the figure, we use the historical context URL stream for illustration. The output of the spatial attention block is an attention weight map $S \in \mathbb{R}^{t \times w \times b}$, which has the same shape as the input tensor $M$. Intuitively, the layer-wise attention and the channel-wise attention are dedicated to selecting the most discriminative features and the most important neighbors, respectively. They are thus highly complementary in functionality, and we adopt a factorized form for optimization and computational efficiency:
\begin{equation} S = L \times C \end{equation}
where $L \in \mathbb{R}^{t \times w \times 1}$ and $C \in \mathbb{R}^{1 \times 1 \times b}$ denote the layer-wise feature map and the channel-wise feature map, respectively, and $S$ is the result of tensor multiplication.
\paragraph{\textbf{Layer-wise Attention}} Conceptually, the layer-wise attention learns globally important elements in the features. We apply a cross-channel average pooling operation to the input tensor, followed by two convolution layers with $3 \times 3$ and $1 \times 1$ filters, respectively. Specifically, the cross-channel average pooling operation is defined as:
\begin{equation} L = \frac{1}{b}\sum_{b^{\prime}=1}^b M_{1:t,1:w,b^{\prime}} \end{equation}
where $b$ is the number of selected neighbors.
\paragraph{\textbf{Channel-wise Attention}} The design of the channel-wise attention is very similar to that of the layer-wise attention; it aims to acquire a global view of the discriminative neighbors. Formally, the global average pooling is defined as:
\begin{equation} C = \frac{1}{t \times w}\sum_{w^{\prime}=1}^w \sum_{t^{\prime}=1}^t M_{t^{\prime},w^{\prime},1:b} \end{equation}
where $t$ and $w$ are the height and width shared by all channels. Similarly, we employ two convolution layers after the pooling operation. Note that each convolution layer is followed by a batch normalization operation. Furthermore, as in other modern CNN architectures \cite{cnnrelu}, we append a ReLU activation function to ensure $L>0, C>0$. We further introduce one more convolution layer with a $1 \times 1 \times b$ filter to enhance the fusion of the layer-wise and channel-wise attention. The output tensor is then fed through a sigmoid function for normalization, producing the final attention weight tensor of the spatial attention block. Formally, the output of the spatial attention module is the element-wise product of the initial feature tensor $M$ and the generated attention weights $S$:
\begin{equation} N = M \odot S \end{equation}
Intuitively, the attended feature map captures fine-grained important elements via well-aligned and complementary attentions.
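A PyTorch-style sketch of the block is given below. The kernel sizes follow the text, while the exact padding and the placement of batch normalization are our assumptions:
\begin{verbatim}
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Sketch of the spatial attention block. The input M has shape
    (batch, b, t, w): b neighbors, each a t x w embedding matrix."""
    def __init__(self, b):
        super().__init__()
        # Layer-wise branch on the cross-channel mean map.
        self.layer_conv = nn.Sequential(
            nn.Conv2d(1, 1, 3, padding=1), nn.BatchNorm2d(1), nn.ReLU(),
            nn.Conv2d(1, 1, 1), nn.BatchNorm2d(1), nn.ReLU())
        # Channel-wise branch on globally pooled per-neighbor values.
        self.channel_conv = nn.Sequential(
            nn.Conv2d(b, b, 1), nn.BatchNorm2d(b), nn.ReLU(),
            nn.Conv2d(b, b, 1), nn.BatchNorm2d(b), nn.ReLU())
        self.fuse = nn.Conv2d(b, b, 1)   # fusion of L and C

    def forward(self, M):
        L = self.layer_conv(M.mean(dim=1, keepdim=True))         # (.,1,t,w)
        C = self.channel_conv(M.mean(dim=(2, 3), keepdim=True))  # (.,b,1,1)
        S = torch.sigmoid(self.fuse(L * C))                      # (.,b,t,w)
        return M * S                                             # N = M . S

block = SpatialAttention(b=10)           # e.g., 10 neighbors per stream
N = block(torch.randn(32, 10, 8, 64))    # t=8 attributes, w=64 (assumed)
\end{verbatim}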
\subsubsection{Gated Branch Fusion Layer}
We apply another CNN layer with a $3 \times 3$ filter to the attended representation of each stream for feature extraction and dimension reduction:
\begin{equation} N_{op}= ReLU(W N) \end{equation}
\begin{equation} p^k = MAXPOOLING(N_{op}) \end{equation}
which produces, for each stream $k$, a multi-relational context representation vector; we denote these by $o_{i_h}, o_{i_c}, o_{u_f}$ and $o_{u_c}$ for the four streams, respectively. We employ a gated mechanism to assign different weights to the relation-specific neighborhood representations:
\begin{equation} p_i = g_u \cdot o_{u_f} + (1 - g_u) \cdot o_{u_c} \end{equation}
\begin{equation} p_j = g_v \cdot o_{i_h} + (1 - g_v) \cdot o_{i_c} \end{equation}
where the scalars $g_u$ and $g_v$ are learned automatically to control the importance of the two streams within each branch.
\subsection{Heterogeneous Graph Attention Network (HGAN)}
Following the recent success of graph convolutional networks (GCNs) \cite{Kipf2016GCN,LGCL2018,RGCN2018,NIPS2017GraphSAGE,velickovic2018gat}, we propose a heterogeneous graph attention network (HGAN) tailored for the recommendation task. In particular, our proposed module adopts a parallel attention structure for the user neighbors and the URL neighbors of the central node. Consider a heterogeneous graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, in which the nodes represent objects that can be either users or URLs, and the edges denote the relations between connected nodes. Node attributes are passed along the edges during propagation. We aim to leverage both the local node attributes and the global network structure. Our novelty lies in two aspects: (i) we differentiate the contributions of URL nodes and user nodes, and (ii) we consider both node similarities and the influence of different relation types. While CSAN obtains information from multi-relational immediate neighbors, which expands the scope of knowledge for the target user and target URL representations, HGAN aims at learning deeper semantic representations of the target user and target URL.
\subsubsection{Heterogeneous Graph Network}
We aim to capture the different semantic relations behind the various types of nodes and edges. At each layer, if the central node is a user node, its neighborhood contains its co-occurring users and its posted URLs; if the central node is a URL node, its neighborhood consists of the users who posted it and its co-occurring URLs. We adopt an embedding approach similar to that of CSAN for the initial representation of each node, but we concatenate all the features into a long vector $x_i$ for each node instead of stacking them as a matrix.
Considering that different types of nodes are associated with different feature sets, we use a set of node type-specific transformation matrices to project the different types of node representations into the same feature space before aggregation:
\begin{equation} h^{(0)}_i= W_{\phi_i} \cdot x_i \end{equation}
Let $H^{(0)} \in \mathbb{R}^{(m+n) \times d}$ be the embedding matrix of all attributed nodes, where $m+n$ is the total number of nodes and $d$ is the dimension of the latent embedding space; each row $h_i^{(0)}$ is the initial embedding vector of node $i$. We define edges based on users' references to URLs (user-URL edges), user co-occurrence relations (user-user edges), and URL co-occurrences (URL-URL edges). We then introduce an adjacency matrix $A$ of $\mathcal{G}$ based on the importance of each edge. In particular, to compute the weights of user-user edges and URL-URL edges, we adopt the Shifted Positive Point-wise Mutual Information (SPPMI) matrix \cite{NIPS2014SPPMI}, a popular measure for word associations, to utilize the co-occurrence context information. In the word embedding scenario, each cell of the matrix measures the relation of the corresponding word-context pair, and the factorization of such a matrix is proven to be equivalent to the skip-gram model with negative sampling (SGNS). The Point-wise Mutual Information (PMI) between node $i$ and node $j$ is computed as $PMI(i,j) = \log \frac{P(i,j)}{P(i)P(j)}$, where $P(i,j) = \frac{\# (i,j)}{|D|}$ and $P(i) = \frac{\# (i)}{|D|}$. Here, $|D|$ denotes the total number of observed word-context pairs within a predefined sliding window, and $P(i,j)$ is the joint probability that word $i$ and word $j$ appear together within the window. The SPPMI matrix is an extension based on the PMI value:
\begin{equation} SPPMI(i,j)=\max\{PMI(i,j)-\log(k),0\} \end{equation}
where $k$ is a hyperparameter representing the number of negative samples. Conceptually, a positive PMI value implies a semantically correlated word-context pair; therefore SPPMI, which takes only the positive part of the PMI shifted by a global constant, reflects a closer semantic relation between word-context pairs. Inspired by this idea, we use $|D|$ to denote the number of user (URL) co-occurrences and generate the user co-occurrence matrix of shape $n \times n$ and the URL co-occurrence matrix of shape $m \times m$. Note that we do not discriminate between the target node and the context node. Similarly, we borrow the TF-IDF concept and redefine it for the recommendation task with implicit feedback \cite{fayyadadvances}:
\begin{equation} \mbox{TF-IDF}_{ij} = TF_{ij} \times IDF_i = \frac{\# (i,j)}{\max_k \# (i,k)} \log \frac{m}{m_i} \end{equation}
where $\# (i,j)$ represents the number of times URL $j$ was posted by user $i$; $TF_{ij}$ normalizes it by the maximum number of times any URL was posted by user $i$. The $IDF_i$ term reflects the user's previous behavior, where $m$ denotes the total number of URLs and $m_i$ is the number of URLs posted by user $i$.
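A sketch of these two weight computations is given below; the counting conventions (e.g., what constitutes one observed co-occurrence pair) are our assumptions:
\begin{verbatim}
import math
from collections import Counter

def sppmi(cooccur, counts, total_pairs, k=2):
    """Shifted positive PMI weights for user-user (URL-URL) edges."""
    weights = {}
    for (i, j), n_ij in cooccur.items():
        pmi = math.log(n_ij * total_pairs / (counts[i] * counts[j]))
        shifted = pmi - math.log(k)  # shift by the negative-sample count
        if shifted > 0:              # keep only the positive part
            weights[(i, j)] = shifted
    return weights

def tf_idf(posts, m):
    """TF-IDF weights for user-URL edges; posts[i][j] is the number of
    times user i posted URL j, and m is the total number of URLs."""
    weights = {}
    for i, urls in posts.items():
        max_count = max(urls.values())
        m_i = len(urls)              # URLs posted by user i
        for j, n_ij in urls.items():
            weights[(i, j)] = (n_ij / max_count) * math.log(m / m_i)
    return weights

# Toy example with illustrative counts:
uu = sppmi(Counter({("u1", "u2"): 5}),
           Counter({"u1": 6, "u2": 5}), total_pairs=20)
uc = tf_idf({"u1": {"c1": 3, "c2": 1}}, m=4732)
\end{verbatim}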
Formally, the weight of the edge between node $i$ and node $j$ is defined as:
\begin{equation} A_{ij} = \begin{cases} SPPMI(i,j) & \text{if $i,j$ are both users (URLs),}\\ \mbox{TF-IDF}_{ij} & \text{if $i$ is a user and $j$ is a URL,}\\ 1 & \text{if } i=j,\\ 0 & \text{otherwise.} \end{cases} \end{equation}
\subsubsection{Heterogeneous Attention Layer (HGAL)}
Given the nodes' initial representations defined above, we pass messages to aggregate the neighborhood nodes' information and combine it with the target user's interests. A popular propagation strategy in existing GCN works is the normalized Laplacian matrix \cite{Kipf2016GCN}. Even though it proves effective, it is not trainable and assigns the same weight to every adjacent node. Following previous work \cite{velickovic2018gat}, we instead incorporate a hierarchical attention mechanism to learn the weight of each adjacent node adaptively. Since the distribution of the number of neighbors per node is highly dispersed, sub-sampling becomes an essential procedure in our task to avoid an explosion of the computational cost when multiple hops are stacked. We adopt Weighted Random Selection (WRS) \cite{efraimidis2006WRS} to select a fixed number of nodes of both node types in each graph attention layer. Figure \ref{hgat} shows a graphical illustration of one HGAL.
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{gat_layer.png} \caption{Graphical illustration of a single heterogeneous graph attention layer. In this example, we assume the central node is a user node. Circles denote users, and triangles denote URLs. Colored objects with a solid line are selected neighbors at each layer, and the nodes with a dotted line are randomly dropped. (Best viewed in color).} \label{hgat} \vspace{-15pt} \end{figure}
Assume that the central node is a user node. We separately calculate the attention weights between the user node and its user node neighbors, and between the user node and its URL node neighbors. The similarity between the target user's node representation $h^{(l)}_i$ and those of all of its selected neighbors is defined as:
\begin{equation} \alpha_{ij}^{\phi^{(l)}} = softmax(e_{ij}^{\phi^{(l)}}) = \frac{exp(f(h^{(l)}_i,h^{(l)}_j))}{\sum_{k \in \mathcal{N}^{\phi_t}_i} exp(f(h^{(l)}_i,h^{(l)}_k))} \end{equation}
where $h^{(l)}_i$ is the representation of user $i$ at layer $l$, and $\mathcal{N}^{\phi_t}_i$ denotes the node type-based neighbors. We adopt $f(h^{(l)}_i,h^{(l)}_j)=cosine(h^{(l)}_i,h^{(l)}_j)$ as the similarity function. Intuitively, $\alpha^{\phi}_{ij}$ measures the importance of neighbor $j$ to central node $i$. Meanwhile, we also obtain the edge weight $A_{ij}$. After this, we aggregate the type-based neighborhood node representations and generate the neighborhood embedding as the average over the different types of nodes:
\begin{equation} z_{ij} = ReLU(A_{ij} h^{(l)}_j) \end{equation}
\begin{equation} \tilde{h}^{(l+1)}_i = \frac{1}{|\mathcal{A}|}(\sum_{j \in \phi_{\mathcal{U}}} \alpha_{ij}^{\phi^{(l)}} z_{ij} + \sum_{j \in \phi_{\mathcal{C}}} \alpha_{ij}^{\phi^{(l)}} z_{ij}) \end{equation}
To model the information propagation and capture higher-order relations, we stack the HGAL multiple times. In addition, we introduce residual connections \cite{He2016skipconnection} to help train an HGAN with many layers.
\begin{equation} g^{(l+1)} = \sigma(W_g^{(l)} h^{(l)}_i + b_g^{(l)}) \end{equation}
\begin{equation} h^{(l+1)}_i= (1 - g^{(l+1)}) \odot \tilde{h}^{(l+1)}_i + g^{(l+1)} \odot h^{(l)}_i \end{equation}
where $\sigma$ denotes the sigmoid function, and $W_g^{(l)}$ and $b_g^{(l)}$ are the shared weight matrix and bias term at layer $l$, respectively. The node representation at the $l$-th layer thus provides knowledge from nodes up to $l$ hops away.
\subsection{Interaction Layer}
The interaction layer is tailored for the recommendation task. Recall that we obtained the wide context-based user embedding $u^{\prime}_i$ and URL embedding $c^{\prime}_j$, the context representations $p_i$ and $p_j$, and the deep context-based user embedding $h^{(l)}_i$ and URL embedding $h^{(l)}_j$ in the previous sections. We then formulate the final URL-dependent user representation by using a fully connected layer:
\begin{equation} o_i = W_o[u^{\prime}_i \oplus c^{\prime}_j \oplus p_i \oplus p_j \oplus h^{(l)}_i \oplus h^{(l)}_j] + b_o \end{equation}
where $W_o$ and $b_o$ are a linear transformation weight matrix and a bias term, respectively, and $\oplus$ denotes vector concatenation. Note that the fully-connected layer could be replaced by other techniques (e.g., a CNN). Finally, we feed the result through a softmax function to calculate the probability that the user is interested in the given URL.
\subsection{Training}
We adopt the cross-entropy loss function during the training process:
\begin{equation} \mathcal{L} = - \sum_{(i,j) \in Y^{+} \bigcup Y^{-}} \left[ y_{ij}\log (\hat{y}_{ij}) + (1 - y_{ij}) \log (1 - \hat{y}_{ij}) \right] \end{equation}
We follow a uniform sampling strategy to obtain negative samples $(i,j) \in Y^{-}$ from the unobserved interactions. Since the entire architecture is differentiable, we use backpropagation to achieve end-to-end training.
\section{Evaluation}
In this section, we describe the dataset, baselines, experimental settings, and experimental results. In the experiments, we seek to answer the following research questions:
\squishlist
\item \textbf{RQ1:} What is the performance of our model and the baselines?
\item \textbf{RQ2:} How beneficial is each submodule of our model?
\item \textbf{RQ3:} How effective are our attention mechanisms?
\item \textbf{RQ4:} How sensitive is our model with regard to hyperparameters?
\squishend
\subsection{Dataset}
We evaluate our proposed model on a Twitter dataset obtained from the authors of \cite{Vo:2018}\footnote{https://github.com/nguyenvo09/CombatingFakeNews}. The interaction behavior collected in the dataset is consistent with our definition in Section \ref{definition}. As in their study, we only kept users who have at least three interactions (i.e., posting at least three fact-checking messages containing fact-checking URLs). We conducted an additional preprocessing step by removing users whose posts are non-English or whose tweets were inaccessible, because some of our baselines require a fact-checker's tweets. Our final dataset consists of 11,576 users (i.e., fact-checkers), 4,732 fact-checking URLs, and 63,429 interactions. The dataset also contains each user's social network information. Note that each user's social relationships are restricted to the available users in the dataset. We further take the available feature values of both users and URLs into consideration.
For instance, the category of a referred fact-checking article and the name of the corresponding fact-checking website reveal linguistic characteristics such as the writing style and topical interest of each URL, while the numbers of followers and followees of each user indicate the credibility and influence of the fact-checker. Statistics of the final dataset are presented in Table \ref{data_stat}.
\subsection{Baselines}
To measure the relative effectiveness of our model, we compare it against eight state-of-the-art baselines, including a traditional collaborative filtering method, neural network-based models, and context-aware approaches.
\squishlist
\item \textbf{MF} \cite{Koren:2009:MFT} is a standard collaborative filtering technique. It factorizes an interaction matrix $X \in \mathbb{R}^{M \times N}$ into two matrices $U \in \mathbb{R}^{M \times d}$ and $V \in \mathbb{R}^{d \times N}$, where $U$ contains each user's latent representation and $V$ contains each URL's latent representation.
\item \textbf{GAU} \cite{Vo:2018} is a framework specifically designed for fact-checking URL recommendation, utilizing rich side information such as a user's social network, tweets, and referred fact-checking pages. It is the most relevant, domain-specific baseline.
\item \textbf{NeuMF} \cite{He:2017:NCF} is a neural network-based item recommendation algorithm. We adopted the composite version, MF jointly coupled with an MLP.
\item \textbf{CMN} \cite{Ebesu:2018:CMN} combines a global latent factor model with an augmented memory network to capture personalized neighbor-based structure in a non-linear fashion.
\item \textbf{NAIS} \cite{He2018NAISNA} is an item-based collaborative filtering architecture that integrates an attention mechanism to distinguish the contributions of previously consumed items. The authors proposed two versions of NAIS: (1) $NAIS_{concat}$, which concatenates two vectors to learn the attention weight, and (2) $NAIS_{prod}$, which feeds the element-wise product of the two vectors to the attention network. We therefore also build two versions of NAIS and compare them with our model.
\item \textbf{DeepCoNN} \cite{zheng2017joint} was originally proposed for item rating prediction; it jointly models users and items based on their textual reviews. Prior work shows that it significantly outperforms other topic modeling-based methods. We re-implemented the baseline and adapted it to our recommendation task with implicit feedback.
\item \textbf{NARRE} \cite{chen2018neural} is a deep neural network-based framework for item rating prediction. It employs an attention mechanism to distinguish the importance of each review. We re-implemented the framework for our implicit feedback setting.
\item \textbf{NGCF} \cite{NGCF19} is a recent recommendation framework based on graph neural networks, explicitly encoding the collaborative signal in the form of high-order connectivity in the user-item bipartite graph by performing embedding propagation.
\squishend
Table~\ref{approaches} presents the characteristics of the baselines and our model, showing what information each model utilizes. Note that even though CMN and NAIS both utilize co-occurrence context, CMN only utilizes the user co-occurrence context whereas NAIS uses the URL co-occurrence context.
\begin{table}[t] \caption{Statistics of our evaluation dataset.} \centering \small \begin{tabular}{@{}llll@{}} \toprule Interactions & Users & URLs & Sparsity \\ \midrule 63,429 & 11,576 & 4,732 & 99.884\% \\ \bottomrule \end{tabular} \label{data_stat} \vspace{-13pt} \end{table} \begin{table*}[th] \caption{Characteristics of baselines and our model.} \centering \small \begin{tabular}{@{}lccccccccc@{}} \toprule & MF & GAU & NeuMF & CMN & NAIS & DeepCoNN & NARRE & NGCF & AMRAN \\ \midrule Implicit Feedback & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ \\ Textual Content & $\setminus$ & $\surd$ & $\setminus$ & $\setminus$ & $\setminus$ & $\surd$ & $\surd$ & $\setminus$ & $\setminus$ \\ Co-occurrence Context & $\setminus$ & $\surd$ & $\setminus$ & $\surd$ & $\surd$ & $\setminus$ & $\setminus$ & $\setminus$ & $\surd$ \\ Social Context & $\setminus$ & $\surd$ & $\setminus$ & $\setminus$ & $\setminus$ & $\setminus$ & $\setminus$ & $\setminus$ & $\surd$ \\ Higher-order Information & $\setminus$ & $\setminus$ & $\setminus$ & $\setminus$ & $\setminus$ & $\setminus$ & $\setminus$ & $\surd$ & $\surd$ \\ Deep Learning & $\setminus$ & $\setminus$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ \\ \bottomrule \end{tabular} \label{approaches} \vspace{-13pt} \end{table*} \subsection{Evaluation Protocol} We adopt the leave-one-out evaluation protocol, which has been widely used in top-K recommendation tasks, to evaluate the performance of our model and the baselines. In particular, we held out the latest interaction of each user as the test set and used the remaining interactions for training. Each test instance was paired with 99 randomly sampled negative instances, and each recommendation model ranks the 100 instances according to its predicted scores. The ranked list is judged by Hit Ratio (HR) \cite{Deshpande:2004:HR} and Normalized Discounted Cumulative Gain (NDCG) \cite{He:2015:NDCG} at position 10. HR@10 is a recall-based metric, measuring the percentage of test cases in which the held-out item is recommended within the top-10 positions. NDCG@10 is a ranking metric that also considers the position of the correct hit in the ranked list. Since both modules in our framework introduce randomness, we repeat each experiment 5 times with different weight initializations and randomly selected neighbors, and we report the average of the best score of each training run for both metrics to ensure the robustness of our results. \subsection{Hyper-parameter Settings} We implement our framework in PyTorch, initialize weight parameters by Xavier initialization \cite{Goodfellow-et-al-2016}, and optimize the model with the Adam optimizer \cite{kingma2014adam}. The mini-batch size is set to 128. Empirically, in CSAN we select 10 neighbors for each stream. In HGAN, we choose 8 user neighbors and 8 URL neighbors for each central node at a single layer, and the default number of graph attention layers is set to 2. If a node does not have enough neighbors (e.g., user neighbors or URL neighbors), we pad the sequence with zero vectors. In the proposed AMRAN model, all hyperparameters are tuned by grid search on the validation set, which is formed by holding out one interaction of each user from the training data, as in prior work \cite{He:2017:NCF}.
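As a concrete reference for the leave-one-out protocol above, the following is a minimal sketch of HR@K and NDCG@K for a single test instance ranked among 100 candidates (1 positive plus 99 sampled negatives). It illustrates the metric definitions only; it is not the evaluation code used in the experiments.
\begin{verbatim}
import math

def hit_ratio_at_k(rank, k=10):
    # rank: 0-based position of the held-out item among the
    # 100 ranked candidates (1 positive + 99 negatives).
    return 1.0 if rank < k else 0.0

def ndcg_at_k(rank, k=10):
    # With a single relevant item, DCG = 1 / log2(rank + 2)
    # and the ideal DCG is 1, so NDCG@K is the discounted hit.
    return 1.0 / math.log2(rank + 2) if rank < k else 0.0

# Example: the held-out item is ranked 3rd (0-based rank 2).
print(hit_ratio_at_k(2), ndcg_at_k(2))  # 1.0 0.5
\end{verbatim}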
We conduct the grid search over the latent dimension size in \{8, 16, 32, 64\}, the regularization term in \{0.1, 0.01, 0.001, 0.0001, 0.00001\}, the learning rate in \{0.0001, 0.0003, 0.001, 0.01, 0.05, 0.1\}, and the SPPMI shift constant $s$ in \{1, 2, 5, 10\}. The number of negative samples per positive interaction is set to 4. We adopt the same latent dimension size for all submodules. For a fair comparison, we also thoroughly tune the baselines' hyperparameters using the validation set. \subsection{RQ1: Performance of Our Model and Baselines} \begin{table}[htbp] \caption{Performance of our AMRAN and baseline models. AMRAN outperforms all baselines in both evaluation metrics.} \centering \small \begin{tabular}{@{}lll@{}} \toprule Model & HR@10 & NDCG@10 \\ \midrule MF & 0.537 & 0.364\\ GAU & 0.589 & 0.372 \\ NeuMF & 0.621 & 0.389 \\ CMN & 0.589 & 0.382 \\ NAIS\_prod & 0.617 & 0.392 \\ NAIS\_concat & 0.624 & 0.398 \\ DeepCoNN & 0.609 & 0.377 \\ NARRE & 0.615 & 0.382 \\ NGCF & 0.600 & 0.373 \\ \midrule \emph{our} AMRAN & \textbf{0.657} & \textbf{0.410} \\ \bottomrule \end{tabular} \label{results} \vspace{-13pt} \end{table} Table~\ref{results} presents the performance of our model and the baselines. Based on the results and the model characteristics described in Table \ref{approaches}, we make the following observations. First, deep learning-based approaches usually obtained better performance than traditional models (e.g., MF and GAU). This observation makes sense because (1) traditional models fail to capture the important non-linear relationship between users and fact-checking URLs; (2) most deep learning-based baselines employ attention mechanisms, which help better capture the semantic relation between a user and a URL; and (3) training techniques such as dropout and batch normalization also contribute to better training quality. In particular, $NAIS_{concat}$ achieves better performance than $NAIS_{prod}$, which supports reason (1). The second observation is that models using textual reviews achieve better results than collaborative filtering-based methods. This is not surprising, since textual content contains rich information that can serve as auxiliary information to the implicit feedback data and thus improve recommendation accuracy. However, we observed that text-based recommendation approaches usually have high complexity. Third, social context and co-occurrence context play important roles in improving recommendation results. NAIS significantly outperforms CMN and is the strongest baseline model, which indicates that the URL-URL co-occurrence relationship is more important than the user-user co-occurrence relationship, since the semantic representation of a user is much more complex than that of a fact-checking URL. Overall, our AMRAN outperforms all baselines, achieving 0.657 HR@10 and 0.410 NDCG@10. It improves HR@10 by 5.3\% and NDCG@10 by 3\% over the best baseline (i.e., $NAIS_{concat}$). \begin{table}[htbp] \caption{Performance of two submodules (CSAN and HGAN), and AMRAN.} \centering \small \begin{tabular}{@{}lll@{}} \toprule Model & HR@10 & NDCG@10 \\ \midrule \emph{our} CSAN & 0.642 & 0.387 \\ \emph{our} HGAN & 0.653 & 0.403 \\ \midrule \emph{our} AMRAN & \textbf{0.657} & \textbf{0.410} \\ \bottomrule \end{tabular} \label{results2} \vspace{-13pt} \end{table} \subsection{RQ2: Effectiveness of Our Submodules} In this experiment, we measure the effectiveness of the two submodules of AMRAN: CSAN and HGAN.
Table~\ref{results2} presents the experimental results. CSAN achieves 0.642 HR@10 and 0.387 NDCG@10, whereas HGAN achieves 0.653 HR@10 and 0.403 NDCG@10. Both submodules outperform all baselines in HR@10; HGAN outperforms all baselines in both metrics, and CSAN is competitive with them. This experimental result confirms that both CSAN and HGAN contribute positively to the performance of our AMRAN. \subsection{RQ3: Effectiveness of Our Attention Mechanisms} We proposed two attention mechanisms: (1) the spatial attention block in CSAN; and (2) the graph attention mechanism in HGAN, described in Section~\ref{sec:framework}. In this experiment, we study the impact of these attention mechanisms. In particular, we run each submodule of AMRAN (i.e., CSAN or HGAN) with and without its corresponding attention mechanism. Table \ref{csa} shows the performance of these variants. In both submodules, our proposed attention mechanisms improved performance, confirming their positive impact on correctly recommending fact-checking URLs. \begin{table}[t] \caption{Performance of submodules with/without our proposed attention mechanisms.} \centering \small \begin{tabular}{@{}lll@{}} \toprule & HR@10 & NDCG@10 \\ \midrule Without Spatial Attention Block & 0.614 & 0.368 \\ CSAN & 0.642 & 0.387 \\ \midrule Without Graph Attention Mechanism & 0.638 & 0.389 \\ HGAN & 0.653 & 0.403 \\ \bottomrule \end{tabular} \label{csa} \vspace{-13pt} \end{table} \subsection{RQ4: Hyperparameter Sensitivity} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{csan_exp.png} \caption{Performance of CSAN when varying the number of neighbors in each stream.} \label{csan_exp} \vspace{-15pt} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{hgat_exp.png} \caption{Performance of HGAN when varying the size of neighbor nodes at each layer (HGAL).} \label{hgat_exp} \vspace{-15pt} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{hyper_exp.png} \caption{Performance of AMRAN when varying the number of negative samples and the size of the latent semantic space (i.e., embedding size).} \label{hyper_exp} \vspace{-15pt} \end{figure} Now we analyze how sensitive our model is to hyperparameter values, and which hyperparameter values produce the best recommendation results. Recall that we utilize context information to generate comprehensive embeddings of the given user and URL. In CSAN, we employ four streams to capture fine-grained context characteristics, and we share the embedding weight matrices with the target user and target URL representations. In the first experiment, we vary the number of neighbors associated with each stream in CSAN to show how CSAN's performance changes. Figure \ref{csan_exp} shows that $HR@10$ and $NDCG@10$ follow similar trends, and selecting 10 neighbors at each stream produced the best result. Next, we measure how the performance of HGAN changes when varying the number of HGALs and the number of selected neighbor nodes at each layer. Figure \ref{hgat_exp} demonstrates the benefit of employing 2 HGALs, which consistently outperform a single HGAL. The best performance was achieved when the number of selected neighbor nodes was set to 8. In addition, we vary the number of negative samples and the size of the latent semantic space for the target user and target URL (i.e., the embedding vector size). Figure \ref{hyper_exp} shows that a high-dimensional latent semantic space produces high performance of AMRAN.
64-dimensional embeddings produced the best results. We also observe that one negative sample is not enough to produce good results, especially when the embedding vector size is small. The top performance is achieved when one positive instance is paired with 3 or 4 negative instances. \subsection{Case Study: Visualization of Relevance Propagation} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{case_study.png} \caption{Visualization of relevance propagation of a user 7849. Objects in yellow denote target user and target URL. (Best viewed in color).} \label{case} \vspace{-15pt} \end{figure} The attention mechanism not only improves the recommendation performance of our model but also provides explainability. As a case study, we chose a specific example to demonstrate relevance propagation. In particular, we randomly sampled user 7849, shown in Figure \ref{case}. User 7849 has 3 co-occurring users and 3 followed users, and posted 4 URLs. Note that we omit less important 2nd-degree neighbors for simplicity. The most relevant neighbors and propagation paths are highlighted automatically via the attention mechanism. In general, based on the user's historical context URLs, we observe that the topic user 7849 most likes to participate in debunking is fauxtography. However, in this particular case, the most influential context neighbors of the user given URL 1623 are user 25 (co-occurrence context) and user 4759 (social context). Both context neighbors share a similar taste with user 7849 regarding their favorite website (Politifact.com). Moreover, we found that URL 2525 appears in the 2nd-degree neighborhood of user 7849 and originates from the same website (Snopes.com) as URL 1623. \section{Conclusion} In this paper, we proposed a novel framework that effectively recommends relevant fact-checking URLs to \emph{fact-checkers}. The proposed framework, inspired by recent advances in graph neural networks and attention mechanisms, leverages user-URL-specific context information to capture the deep semantics and complex structure between the target user and the target URL. We compared the performance of our model, AMRAN, against eight state-of-the-art baselines. Experimental results showed that our model achieved up to a 5.3\% improvement over the best baseline. Both submodules of AMRAN contributed positively to the recommendation results. \begin{acks} This work was supported in part by NSF grant CNS-1755536, AWS Cloud Credits for Research, and Google Cloud. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the sponsors. \end{acks} \bibliographystyle{ACM-Reference-Format}
\chapter{THEOREM} \\ \\ \marginparsep = 10pt \marginparwidth = 10pt \reversemarginpar \marginpar{$\textcolor{red}{\bullet}$} The real parts of all nontrivial zeros $ \rho $ of the Riemann zeta function are equal to $ Re\left(\rho\right)=\dfrac{1}{2}. $ \\ \\ \\ \chapter{PROOF:} \\ \\\marginpar{$\textcolor{green}{\bullet}$} According to the functional equation~\cite[p.~22]{Titchmarsh-1987}, ~\cite[p.~8-11]{Karatsuba-1992}: \begin{equation}\label{function_eq} \centering \Gamma \left(\dfrac{s}{2}\right){\pi }^{-\dfrac{s}{2}}\zeta \left(s\right)=\Gamma \left(\dfrac{1-s}{2}\right){\pi }^{-\dfrac{1-s}{2}}\zeta \left(1-s\right),\qquad Re\left(s\right) >0 \end{equation} \\ where $ \zeta\left(s\right) $ is the Riemann zeta function and $\Gamma\left(s\right)$ is the Gamma function. \\ \marginpar{$\textcolor{green}{\bullet}$} From~\cite[p.~8-11]{Karatsuba-1992}\; $\zeta \left({\bar{s}}\right)=\overline{\zeta\left(s\right)}$; hence for every $\rho =\sigma +it$ with $\zeta\left(\rho\right)=0$ and $0\leqslant \sigma \leqslant 1$ we have: \begin{equation}\label{eq_zero} \zeta \left({\bar{\rho}}\right)=\zeta \left({1-\rho}\right)=\zeta \left({1-\bar{\rho}}\right)=0 \end{equation} \marginpar{$\textcolor{green}{\bullet}$} From~\cite{Valle-Poussin-1897},~\cite[p.~128]{Smith-1994},~\cite[p.~45]{Titchmarsh-1987}\; we know that $\zeta \left({s}\right)$ has no nontrivial zeros on the line $\sigma = 1$; consequently, in accordance with~\eqref{eq_zero}, there are none on the line $\sigma = 0$ either. \\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} Let us denote by $ \mathcal{P} $ the set of nontrivial zeros of $\zeta \left({s}\right)$ (as a multiset, counting multiplicity): \[ \mathcal{P} \stackrel{\scriptscriptstyle\mathrm{def}}{=} \left\lbrace \rho:\; \zeta\left(\rho\right)=0,\; \rho =\sigma +it,\; 0<\sigma <1 \right\rbrace. \] \begin{eqnarray} \mbox{And:}\;\mathcal{P}_1 \stackrel{\scriptscriptstyle\mathrm{def}}{=} \left\lbrace \rho:\; \zeta\left(\rho\right)=0,\; \rho =\sigma +it,\; 0<\sigma <\dfrac{1}{2} \right\rbrace,\nonumber\\ \mathcal{P}_2 \stackrel{\scriptscriptstyle\mathrm{def}}{=} \left\lbrace \rho:\; \zeta\left(\rho\right)=0,\; \rho =\dfrac{1}{2} +it \right\rbrace\,,\quad\qquad\qquad\nonumber\\ \mathcal{P}_3 \stackrel{\scriptscriptstyle\mathrm{def}}{=} \left\lbrace \rho:\; \zeta\left(\rho\right)=0,\; \rho =\sigma +it,\; \dfrac{1}{2}<\sigma < 1 \right\rbrace\nonumber.
\end{eqnarray} Then: \[ \mathcal{P}= \mathcal{P}_1 \cup \mathcal{P}_2 \cup \mathcal{P}_3\;\;\; \mbox{and}\;\; \mathcal{P}_1 \cap \mathcal{P}_2 = \mathcal{P}_2 \cap \mathcal{P}_3 = \mathcal{P}_1 \cap \mathcal{P}_3 = \varnothing , \] \[ \mathcal{P}_1 =\varnothing \Leftrightarrow \mathcal{P}_3=\varnothing.\] \marginpar{$\textcolor{green}{\bullet}$} Hadamard's factorization theorem (the Weierstrass product decomposition of a function through its roots) gives us the following result ~\cite[p.~30]{Titchmarsh-1987}, ~\cite[p.~31]{Karatsuba-1992},~\cite{Voros-1987}: \begin{equation}\label{eq_Adamar} \zeta \left({s}\right)= \dfrac{{\pi }^{\dfrac{s}{2}}{e}^{as}}{s\left(s-1\right)\Gamma \left(\frac{s}{2}\right)}\prod _{\rho \in \mathcal{P}}\left(1-\frac{s}{\rho }\right){e}^{\dfrac{s}{\rho }},\qquad Re\left(s\right) >0 \end{equation} \[ a=\mathrm{ln}2\sqrt{\pi }-\dfrac{\gamma }{2}-1,\; \mbox{where}\; \gamma\; \mbox{is Euler's constant, and} \] \begin{equation}\label{eq_PrimeLn} \dfrac{{\zeta }^{\prime }\left(s\right)}{\zeta \left(s\right)}=\dfrac{1}{2}\mathrm{ln}\pi + a - \dfrac{1}{s} + \dfrac{1}{1-s} -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{s}{2}\right)}{\Gamma \left(\dfrac{s}{2}\right)}+\sum _{\rho \in \mathcal{P}}\left(\dfrac{1}{s-\rho }+\dfrac{1}{\rho }\right) \end{equation} \marginpar{$\textcolor{green}{\bullet}$} Since $\;\;\dfrac{{\Gamma }^{\prime }\left(\dfrac{s}{2}\right)}{\Gamma \left(\dfrac{s}{2}\right)}\;\; $ is the Digamma function~\cite[p.~31]{Titchmarsh-1987}, ~\cite[p.~23]{Karatsuba-1992}, we have: \begin{equation}\label{eq_PrimeLnDigamma} \dfrac{{\zeta }^{\prime }\left(s\right)}{\zeta \left(s\right)}= \dfrac{1}{1-s} +\sum _{\rho \in \mathcal{P}}\left(\dfrac{1}{s-\rho }+\dfrac{1}{\rho }\right)+\sum _{n=1}^{\infty }\left(\dfrac{1}{s+2n}-\dfrac{1}{2n}\right)+C, \end{equation} where $ C $ is a constant. \\ \marginpar{$\textcolor{green}{\bullet}$} From~\cite[p.~160]{Edwards-1974}, ~\cite[p.~272]{Lehmer-1988}, ~\cite[p.~81]{Davenport-1980}: \begin{equation}\label{eq_RHO} \qquad \sum _{\rho \in \mathcal{P}}\dfrac{1}{\rho }=1+\dfrac{\gamma }{2}-\mathrm{ln}2\sqrt{\pi }=0.0230957\dotsc \end{equation} \\ \marginpar{$\textcolor{yellow}{\bullet}$} Indeed, from~\eqref{eq_zero}: \[ \sum _{\rho \in \mathcal{P}}\dfrac{1}{\rho }=\dfrac{1}{2}\sum _{\rho \in \mathcal{P}}\left(\dfrac{1}{1-\rho }+\dfrac{1}{\rho }\right). \] \marginpar{$\textcolor{green}{\bullet}$} From~\eqref{eq_PrimeLn}: \[2 \sum _{\rho \in \mathcal{P}}\dfrac{1}{\rho }=\underset{s\to 1}{lim}\left(\dfrac{{\zeta }^{\prime }\left(s\right)}{\zeta \left(s\right)}-\dfrac{1}{1-s}+\dfrac{1}{s}-a-\dfrac{1}{2}\mathrm{ln}\pi+\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{s}{2}\right)}{\Gamma \left(\dfrac{s}{2}\right)}\right). \] \marginpar{$\textcolor{green}{\bullet}$} It is also known, for example from ~\cite[p.~49]{Titchmarsh-1987}, ~\cite[p.~98]{Davenport-1980}, that the number of nontrivial zeros $\rho =\sigma +it$ in the strip $ 0<\sigma <1 $ whose imaginary parts $t$ satisfy $\left| t\right| <T$ for any given $T>0$ is finite, i.e. \begin{equation*}\label{eq_SetZeroes} \| \left\lbrace \rho:\; \rho \in \mathcal{P},\; \rho =\sigma +it,\; \left| t\right| <T \right\rbrace\|<\infty. \end{equation*} \marginpar{$\textcolor{yellow}{\bullet}$} Indeed, if this were not the case, the sum $ \sum _{\rho \in \mathcal{P}}\dfrac{1}{\rho } $ would be unbounded.
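As an editorial aside, \eqref{eq_RHO} can be checked numerically. The sketch below (Python with the \texttt{mpmath} library) sums the paired terms over the first zeros, assuming only that \texttt{mpmath.zetazero(k)} returns the $k$-th zero $\rho_k=\frac{1}{2}+it_k$ with $t_k>0$; the pairing $\frac{1}{\rho}+\frac{1}{1-\rho}=2\,Re\frac{1}{\rho}$ uses $1-\rho=\bar{\rho}$ for zeros on the critical line. Convergence is slow, so the partial sums only gradually approach $0.0230957\dotsc$ from below.
\begin{verbatim}
# Numerical sanity check of eq_RHO: sum of (1/rho + 1/(1-rho))
# over nontrivial zeros vs. 1 + gamma/2 - ln(2*sqrt(pi)).
from mpmath import mp, zetazero, euler, ln, sqrt, pi

mp.dps = 20
target = 1 + euler/2 - ln(2*sqrt(pi))    # 0.0230957...

partial = mp.mpf(0)
for k in range(1, 1001):                 # first 1000 zeros with t > 0
    rho = zetazero(k)                    # rho = 1/2 + i*t_k (slow for large k)
    partial += 2 * (1/rho).real          # pair rho with 1 - rho = conj(rho)

print(target)   # 0.023095708966...
print(partial)  # approaches the target from below
\end{verbatim}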
\\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} Thus, by this finiteness, there exist $\delta_x>0, \; \delta_y>0$ such that \begin{eqnarray} \mbox{the region}\; 0<t\leqslant \delta_y,\; 0<\sigma\leqslant \delta_x\; \mbox{contains no zeros}\; \rho =\sigma +it \in \mathcal{P}.\nonumber \end{eqnarray} Let us consider an arbitrary root $ q \in \mathcal{P}_1 \cup \mathcal{P}_2 $. \\ \linebreak Let $k(q)$ denote the multiplicity of the root $q$. \\ \linebreak Consider the region $Q\left(R\right)\stackrel{\scriptscriptstyle\mathrm{def}}{=} \left\{s:\left\| s-q\right\| \leqslant R, R>0\right\}$. \\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} Since the set of nontrivial zeros of $ \zeta(s) $ in any bounded region is finite, there exists $ R>0$ such that $Q(R)$ contains no root of $ \mathcal{P}$ other than $q$. \\ \renewcommand{\figurename}{Fig.}
\begin{figure}[h]
\begin{picture}(330,100)(-100,0)
\put(2,2){\begin{picture}(270,240)%
\put(15,0){\vector(0,1){110}}
\put(0,15){\vector(1,0){260}}
\put(230,2){$Re(s)$}
\put(-24,100){$Im(s)$}
\put(-4,-4){$0$}
\multiput(30,0)(0,3){37}%
{\circle*{1}}
\multiput(220,0)(0,3){37}%
{\circle*{1}}
\multiput(120,0)(0,3){37}%
{\circle*{1}}
\multiput(0,30)(3,0){86}%
{\circle*{1}}
\put(17,-4){$\delta_x$}
\put(110,-4){$\frac{1}{2}$}
\put(210,-4){$1$}
\put(-12,20){$\delta_y$}
\put(80,80){\circle{70}}
\put(80,80){\circle*{2}}
\put(80,80){\vector(3,1){19}}
\put(81,85){$R$}
\put(72,81){$q$}
\put(50,40){\circle*{2}}
\put(190,40){\circle*{2}}
\put(160,80){\circle*{2}}
\put(120,55){\circle*{2}}
\put(120,90){\circle*{2}}
\end{picture}}
\end{picture}
\caption{}
\end{figure}
\\ \marginpar{$\textcolor{green}{\bullet}$} From~\cite{Abramowitz-1972}, ~\cite[p.~31]{Titchmarsh-1987},~\cite[p.~23]{Karatsuba-1992} we know that the Digamma function $\dfrac{{\Gamma}^{\prime }\left(\dfrac{s}{2}\right)}{\Gamma \left(\dfrac{s}{2}\right)} $ has no poles in the region $Q(R)$, i.e. for all $s \in Q(R)$ \[\left\| \dfrac{{\Gamma}^{\prime }\left(\dfrac{s}{2}\right)}{\Gamma \left(\dfrac{s}{2}\right)}\right\| <\infty. \] Let us denote: \[ I_{\mathcal{P} }(s)\stackrel{\scriptscriptstyle\mathrm{def}}{=} - \dfrac{1}{s} + \dfrac{1}{1-s}+\sum _{\rho \in \mathcal{P}}\dfrac{1}{s-\rho } \] and \begin{eqnarray} I_{\mathcal{P}\setminus \left\lbrace q\right\rbrace }(s)\stackrel{\scriptscriptstyle\mathrm{def}}{=} - \dfrac{1}{s} + \dfrac{1}{1-s}+\sum _{\rho \in \mathcal{P}\setminus \left\lbrace q\right\rbrace}\dfrac{1}{s-\rho }.\nonumber \end{eqnarray} \\ Hereinafter $\mathcal{P}\setminus \left\lbrace q\right\rbrace \stackrel{\scriptscriptstyle\mathrm{def}}{=} \mathcal{P}\setminus \left\lbrace (q, k(q))\right\rbrace $ (the difference taken in the multiset sense). \\ \linebreak In what follows we treat the sums $\sum _{\rho \in \mathcal{P}}\dfrac{1}{s-\rho } $ and $ \sum _{\rho \in \mathcal{P}\setminus \left\lbrace q\right\rbrace}\dfrac{1}{s-\rho } $ as sums of the pairs $\left( \dfrac{1}{s-\rho }+\dfrac{1}{s-(1-\rho) } \right) $, and $ \sum _{\rho \in \mathcal{P}}\dfrac{1}{\rho } $ as the sum of the pairs $\left( \dfrac{1}{\rho }+\dfrac{1}{1-\rho} \right) $, arising from splitting the sum $ \sum _{\rho \in \mathcal{P}}\left(\dfrac{1}{s-\rho }+\dfrac{1}{\rho }\right) $ of~\eqref{eq_PrimeLnDigamma} into $ \sum _{\rho \in \mathcal{P}}\dfrac{1}{s-\rho }+\sum _{\rho \in \mathcal{P}}\dfrac{1}{\rho } $, as specified in ~\cite{Edwards-1974}, ~\cite{Keiper-1992}, ~\cite{Lehmer-1988}, ~\cite{Titchmarsh-1987}.
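As an editorial remark (not part of the original argument): the pairing just described is what makes these sums converge, since neither $\sum _{\rho \in \mathcal{P}}\dfrac{1}{s-\rho }$ nor $\sum _{\rho \in \mathcal{P}}\dfrac{1}{\rho }$ converges absolutely on its own, whereas for fixed $s$ and $\left| \rho\right| \to\infty$
\[ \left|\dfrac{1}{s-\rho }+\dfrac{1}{s-(1-\rho) }\right|=\dfrac{\left|2s-1\right|}{\left|s-\rho\right|\left|s-(1-\rho)\right|}=O\left(\dfrac{1}{\left|\rho\right|^{2}}\right), \qquad \left|\dfrac{1}{\rho }+\dfrac{1}{1-\rho }\right|=\dfrac{1}{\left|\rho\right|\left|1-\rho\right|}=O\left(\dfrac{1}{\left|\rho\right|^{2}}\right), \]
and $\sum _{\rho \in \mathcal{P}}\left|\rho\right|^{-2}<\infty$, so the paired sums converge absolutely.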
\\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} Note that $ I_{\mathcal{P}\setminus \left\lbrace q\right\rbrace }(s) $ is a holomorphic function for all $ s \in Q(R) $. \\ \linebreak Then from~\eqref{eq_PrimeLn} we have: \begin{eqnarray} &\dfrac{{\zeta }^{\prime }\left(s\right)}{\zeta \left(s\right)}=\dfrac{1}{2}\mathrm{ln}\pi + a -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{s}{2}\right)}{\Gamma \left(\dfrac{s}{2}\right)}+\sum _{\rho \in \mathcal{P}}\dfrac{1}{\rho }+I_{\mathcal{P} }(s).\nonumber \end{eqnarray} And in view of \eqref{eq_Adamar} and \eqref{eq_RHO}: \begin{equation}\label{eq_Dzeta_Re} Re\dfrac{{\zeta }^{\prime }\left(s\right)}{\zeta \left(s\right)}=\dfrac{1}{2}\mathrm{ln}\pi +Re\left( -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{s}{2}\right)}{\Gamma \left(\dfrac{s}{2}\right)}+I_{\mathcal{P} }(s)\right) . \end{equation} Note that from the equality \begin{equation}\label{eq_Minus_rho} \sum _{\rho \in \mathcal{P}}\dfrac{1}{1-s-\rho }= -\sum _{(1-\rho) \in \mathcal{P}}\dfrac{1}{s-(1-\rho) } = -\sum _{\rho \in \mathcal{P}}\dfrac{1}{s-\rho } \end{equation} it follows that: \[ I_{\mathcal{P} }(1-s)=- I_{\mathcal{P} }(s), \; I_{\mathcal{P}\setminus \left\lbrace q\right\rbrace }(1-s)= -I_{\mathcal{P}\setminus \left\lbrace 1-q\right\rbrace }(s), \;Re\left(s\right) >0. \] \marginpar{$\textcolor{green}{\bullet}$} Moreover, \[ I_{\mathcal{P}\setminus \left\lbrace q\right\rbrace }(s)=I_{\mathcal{P} }(s)-\dfrac{k(q)}{s-q} \] and $ I_{\mathcal{P}\setminus \left\lbrace q\right\rbrace }(s) $ is bounded on $ Q(R) $, since it has no poles in this region and is differentiable at each of its points. \\ \marginpar{$\textcolor{green}{\bullet}$} Replacing $ s $ by $1-s$ in \eqref{eq_PrimeLn} and using \eqref{eq_RHO} (equivalently, taking the logarithmic derivative of the functional equation \eqref{function_eq}), we obtain: \begin{equation}\label{eq_SumDig} \dfrac{{\zeta }^{\prime }\left(s\right)}{\zeta \left(s\right)}+\dfrac{{\zeta }^{\prime }\left(1-s\right)}{\zeta \left(1-s\right)}=-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{s}{2}\right)}{\Gamma \left(\dfrac{s}{2}\right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-s}{2}\right)}{\Gamma \left(\dfrac{1-s}{2}\right)}+\ln\pi,\;Re\left(s\right) >0.
\end{equation} \\ \marginpar{$\textcolor{green}{\bullet}$} Consider a circle centered at the point $q$ with radius $r \leqslant R$, lying in the region $Q(R)$: \\
\begin{figure}[h]
\begin{picture}(330,100)(-100,0)
\put(2,2){\begin{picture}(270,280)%
\put(15,0){\vector(0,1){110}}
\put(0,15){\vector(1,0){260}}
\put(230,2){$Re(s)$}
\put(-24,100){$Im(s)$}
\put(-4,-4){$0$}
\multiput(30,0)(0,3){37}%
{\circle*{1}}
\multiput(220,0)(0,3){37}%
{\circle*{1}}
\multiput(120,0)(0,3){37}%
{\circle*{1}}
\multiput(0,30)(3,0){86}%
{\circle*{1}}
\put(17,-4){$\delta_x$}
\put(110,-4){$\frac{1}{2}$}
\put(210,-4){$1$}
\put(-9,19){$\delta_y$}
\put(80,80){\circle{70}}
\put(80,80){\circle*{2}}
\put(80,80){\vector(3,1){19}}
\put(80,84){$R$}
\put(74,84){$q$}
\put(50,40){\circle*{2}}
\put(190,40){\circle*{2}}
\put(160,80){\circle*{2}}
\put(120,55){\circle*{2}}
\put(120,90){\circle*{2}}
\put(80,80){\circle{33}}
\put(80,80){\vector(3,-4){10}}
\put(87,72){$r$}
\multiput(80,12)(0,4){18}%
{\circle*{1}}
\multiput(12,80)(4,0){18}%
{\circle*{1}}
\multiput(12,65)(4,0){17}%
{\circle*{1}}
\put(75,65){\circle*{2}}
\multiput(75,12)(0,4){14}%
{\circle*{1}}
\put(58,55){$m_r$}
\put(62,0){${x}_{m_r}$}
\put(80,0){$\sigma_q$}
\put(-3,80){$t_q$}
\put(-3,58){$y_{m_r}$}
\end{picture}}
\end{picture}
\caption{}
\end{figure}
\\ \marginpar{$\textcolor{green}{\bullet}$} For $s=x+iy, \; q=\sigma_q+it_q$ on this circle: \[ Re\dfrac{k(q)}{s-q}= Re\dfrac{k(q)}{x+iy-\sigma_q-it_q}=\dfrac{k(q)(x-\sigma_q)}{(x-\sigma_q)^2+(y-t_q)^2}=k(q)\dfrac{x-\sigma_q}{r^2}. \] \\ \marginpar{$\textcolor{yellow}{\bullet}$} Let us prove the following Lemma. \\ \linebreak \chapter{LEMMA 1} \[ \forall\; q \in \mathcal{P} \] \[\exists \; 0< R_q \leqslant R: \;\; \forall\; 0<r \leqslant R_q\;\; \exists \; m_r: \left\| m_r-q\right\| =r,\; Im (m_r)\leqslant Im (q), \] \marginpar{$\textcolor{red}{\bullet}$} \begin{equation} \label{eq_ReZero} Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}-Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)} =0 \end{equation} \\ Moreover, for the angle $ \beta_{m_r} $ between the ordinate axis and the straight line passing through the points $ q $ and $ m_r $, the following equality holds: \begin{equation} \label{eq_tg_beta} \tg \beta_{m_r}=O(r)_{ r \to 0}. \end{equation} \chapter{PROOF:} \\ \linebreak For $ s \in Q (R) $ we consider the function: \[ Re\dfrac{{\zeta }^{\prime }\left(s\right)}{\zeta \left(s\right)}-Re\dfrac{{\zeta }^{\prime }\left(1-s\right)}{\zeta \left(1-s\right)}-2Re\dfrac{k(q)}{s-q} \] By \eqref{eq_Dzeta_Re} and \eqref{eq_Minus_rho}, it is equal to: \[ Re\left( -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{s}{2}\right)}{\Gamma \left(\dfrac{s}{2} \right)}+\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-s}{2}\right)}{\Gamma \left(\dfrac{1-s}{2} \right)} + 2I_{\mathcal{P}\setminus \left\lbrace q\right\rbrace }(s)\right). \] \\ \linebreak Since all terms inside the brackets are bounded for $ s \in Q (R)$, there exists $ H_1 (R) > 0$, $ H_1 (R) \in \mathbb{R} $, such that: \\ \linebreak \[ \left| Re\dfrac{{\zeta }^{\prime }\left(s\right)}{\zeta \left(s\right)}-Re\dfrac{{\zeta }^{\prime }\left(1-s\right)}{\zeta \left(1-s\right)} -Re\dfrac{2k(q)}{s-q} \right| <H_1(R), \qquad \forall s \in Q(R).
\] \\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} On each of the semicircles, the bottom semicircle \\ \linebreak $\left\{s:\left\| s-q\right\|=r,\: t_q-r\leqslant y \leqslant t_q\right\} $ and the upper semicircle \\ \linebreak $\left\{s:\left\|s-q\right\|=r,\: t_q\leqslant y \leqslant t_q+r\right\}$, the function $ Re \dfrac{k(q)}{s-q} $ is continuous and takes all values from $ -\dfrac{k(q)}{r}$ to $\dfrac{k(q)}{r}, \: r>0$. \\ \linebreak Consequently, for every $0<r<\dfrac{2k(q)}{H_1(R)}$ there exist $m_{min,r}$, $m_{max,r}$ with $\left\|m_{min,r}-q \right\| =r$, $\left\|m_{max,r}-q \right\| =r$, such that \[ Re\dfrac{2k(q)}{m_{min,r}-q} < -H_1(R),\; Re\dfrac{2k(q)}{m_{max,r}-q}> H_1(R), \] and hence the sum of the two functions \\ \[Re\dfrac{{\zeta }^{\prime }\left(s\right)}{\zeta \left(s\right)}-Re\dfrac{{\zeta }^{\prime }\left(1-s\right)}{\zeta \left(1-s\right)} -Re\dfrac{2k(q)}{s-q}\qquad\mbox{and}\qquad Re\dfrac{2k(q)}{s-q} \] \\ takes values of opposite signs at the points $m_{min, r} $ and $m_{max, r} $. \\ \linebreak By the intermediate value property of a continuous function on a segment, it follows that there exists $ R_q\in\mathbb{R}$, $ R_q>0$, with \[ R_q<R,\; \dfrac{2k(q)}{R_q}>H_1(R), \] such that for every $ 0< r \leqslant R_q $ \\ there exists a point $ m_r\stackrel{\scriptscriptstyle\mathrm{def}}{=}x_{m_r}+iy_{m_r} $ on the lower semicircle such that: \\ \[ Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}-Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)} =0.\] \\ \\\marginpar{$\textcolor{green}{\bullet}$} From \eqref{eq_SumDig} and \eqref{eq_ReZero} it follows that for all $ 0< r \leqslant R_q $: \\ \begin{eqnarray} \label{eq_ReMidl} &Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)} =Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)}=\nonumber\\ &=\dfrac{1}{2}\mathrm{ln}\pi +\dfrac{1}{2}Re\left( -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{m_r}{2}\right)}{\Gamma \left(\dfrac{m_r}{2} \right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-m_r}{2}\right)}{\Gamma \left(\dfrac{1-m_r}{2} \right)} \right). \end{eqnarray} \\ \marginpar{$\textcolor{green}{\bullet}$} That is, taking into account that $ \Gamma(s)$ has no singular points for $\; s \in Q(R) $, as $ r \to 0$: \\ \begin{eqnarray} \label{eq_ReFin} &Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)} =Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)}=O(1).
\end{eqnarray} \\ \linebreak The location of the point $ m_r $: \\ \linebreak
\begin{figure}[h]
\begin{picture}(340,300)(-60,0)
\put(2,2){\begin{picture}(340,340)%
\put(15,0){\vector(0,1){340}}
\put(0,15){\vector(1,0){360}}
\put(330,2){$Re(s)$}
\put(-22,330){$Im(s)$}
\put(180,180){\circle*{3}}
\put(188,170){$q$}
\multiput(12,180)(4,0){85}%
{\circle*{1}}
\multiput(180,8)(0,4){43}%
{\circle*{1}}
\multiput(180,180)(0,4){38}%
{\circle*{1}}
\put(182,0){$\sigma_q$}
\put(0,184){$t_q$}
\put(180,180){\vector(-1,1){100}}
\qbezier(46,227)(23,145)(81,81)%
\qbezier(81,81)(125,39)(180,40)%
\qbezier(46,227)(59,262)(79,281)%
\qbezier(281,81)(235,39)(180,40)%
\qbezier(79,281)(123,320)(180,320)%
\qbezier(314,227)(337,145)(281,81)%
\qbezier(314,227)(301,262)(281,281)%
\qbezier(281,281)(237,320)(180,320)%
\put(180,180){\vector(1,1){55}}%
\qbezier(125,125)(153,101)(180,102)
\qbezier(125,125)(102,151)(102,180)
\qbezier(235,125)(207,101)(180,102)%
\qbezier(235,125)(258,151)(258,180)
\qbezier(235,235)(207,259)(180,258)%
\qbezier(235,235)(258,209)(258,180)
\qbezier(125,235)(153,259)(180,258)%
\qbezier(125,235)(102,209)(102,180)
\put(105,232){$R$}
\put(210,200){$r$}
\multiput(180,180)(-1,-3){50}%
{\circle*{1}}
\multiput(180,180)(-1,3){50}%
{\circle*{1}}
\multiput(180,180)(1,3){50}%
{\circle*{1}}
\multiput(180,180)(1,-3){50}%
{\circle*{1}}
\multiput(12,106)(5,0){67}%
{\circle*{1}}
\put(-5,110){$y_{m_r}$}
\put(155,0){$x_{m_r}$}
\multiput(155,106)(0,-5){20}%
{\circle*{1}}
\put(155,106){\circle*{3}}\put(142,118){$m_r$}
\qbezier(167,141)(173,137)(180,140)%
\put(163,125){$\beta_{m_r}$}
\end{picture}}
\end{picture}
\caption{}
\end{figure}
\\ \marginpar{$\textcolor{green}{\bullet}$} If $ y_{m_r} \ne t_q $, the modulus of the tangent of the angle $ \beta_{m_r} $ equals: \[ \left| \tg \beta_{m_r}\right| = \dfrac{\left|\sigma_q-x_{m_r} \right| }{t_q-y_{m_r}}. \] From \eqref{eq_Dzeta_Re} it follows that: \[k(q)\dfrac{x_{m_r}-\sigma_q}{r^2}=Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}-\dfrac{1}{2}\mathrm{ln}\pi -Re\left( -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{m_r}{2}\right)}{\Gamma \left(\dfrac{m_r}{2}\right)}+I_{\mathcal{P}\setminus \left\lbrace q\right\rbrace }(m_r)\right). \] \\ In view of \eqref{eq_ReFin}, as $ r \to 0 $: \[ \dfrac{x_{m_r}-\sigma_q}{r^2}=O(1). \] \\ Then from the equality \[ (\sigma_q-x_{m_r})^2+(t_q-y_{m_r})^2=r^2 \] it follows that, as $ r \to 0$: \[ (t_q-y_{m_r})^2=r^2 -O(r^4).\] \\ \marginpar{$\textcolor{yellow}{\bullet}$} That is, there exists $ 0<R_1\leqslant R_q$ such that for all $ 0<r<R_1 $ \[ t_q-y_{m_r}\ne 0, \] and therefore, since $ t_q-y_{m_r} $ is of exact order $ r $, as $ r \to 0$: \\ \[ \tg \beta_{m_r} = \dfrac{O(r^2)}{t_q-y_{m_r}} = O(r). \] \\ $ \square $ \\ \\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} Let us prove the second lemma. \\ \linebreak \chapter{LEMMA 2} \[\forall\;q \in \mathcal{P} \] \marginpar{$\textcolor{red}{\bullet}$} \begin{eqnarray} & Re \left( \dfrac{{\Gamma }^{\prime }\left(\dfrac{q}{2}\right)}{\Gamma \left(\dfrac{q}{2}\right)}\right)^{\prime }=Re \left( \dfrac{{\Gamma }^{\prime }\left(\dfrac{1-q}{2}\right)}{\Gamma \left(\dfrac{1-q}{2}\right)}\right)^{\prime }.\nonumber \end{eqnarray} \\ \chapter{PROOF:} \\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} For $ 0<r \leqslant R_q $ (with $ R_q $ from the first Lemma) and $ s=x+iy$ with $ \left\|s-q \right\|=r $, consider the function: \[g(x,y) \stackrel{\scriptscriptstyle\mathrm{def}}{=} Re\dfrac{{\zeta }^{\prime }\left(s\right)}{\zeta \left(s\right)}Re\dfrac{{\zeta }^{\prime }\left(1-s\right)}{\zeta \left(1-s\right)}.
\]
\begin{figure}[h]
\begin{picture}(340,320)(-60,0)
\put(2,2){\begin{picture}(340,340)%
\put(15,0){\vector(0,1){340}}
\put(0,15){\vector(1,0){360}}
\put(330,2){$Re(s)$}
\put(-22,330){$Im(s)$}
\put(180,180){\circle*{3}}
\put(188,170){$q$}
\multiput(12,180)(4,0){85}%
{\circle*{1}}
\multiput(180,8)(0,4){43}%
{\circle*{1}}
\multiput(180,180)(0,4){38}%
{\circle*{1}}
\put(176,0){$\sigma_q$}
\put(0,184){$t_q$}
\put(180,180){\vector(-1,1){100}}
\qbezier(46,227)(23,145)(81,81)%
\qbezier(81,81)(125,39)(180,40)%
\qbezier(46,227)(59,262)(79,281)%
\qbezier(281,81)(235,39)(180,40)%
\qbezier(79,281)(123,320)(180,320)%
\qbezier(314,227)(337,145)(281,81)%
\qbezier(314,227)(301,262)(281,281)%
\qbezier(281,281)(237,320)(180,320)%
\put(180,180){\vector(1,1){55}}%
\qbezier(125,125)(153,101)(180,102)
\qbezier(125,125)(102,151)(102,180)
\qbezier(235,125)(207,101)(180,102)%
\qbezier(235,125)(258,151)(258,180)
\qbezier(235,235)(207,259)(180,258)%
\qbezier(235,235)(258,209)(258,180)
\qbezier(125,235)(153,259)(180,258)%
\qbezier(125,235)(102,209)(102,180)
\put(105,232){$R$}
\put(210,200){$r$}
\multiput(180,180)(-1,-3){50}%
{\circle*{1}}
\multiput(180,180)(-1,3){50}%
{\circle*{1}}
\multiput(180,180)(1,3){50}%
{\circle*{1}}
\multiput(180,180)(1,-3){50}%
{\circle*{1}}
\multiput(12,106)(5,0){67}%
{\circle*{1}}
\put(-5,110){$y_{m_r}$}
\put(155,0){$x_{m_r}$}
\put(155,106){\circle*{3}}\put(156,95){$m_r$}
\qbezier(167,141)(173,137)(180,140)%
\put(163,125){$\beta_{m_r}$}
\put(155,106){\line(-3,1){90}}%
\put(155,106){\line(3,-1){90}}%
\qbezier(107,106)(104,113)(110,121)%
\put(85,112){$\beta_{m_r}$}
\multiput(180,180)(-2,-2){31}%
{\circle*{1}}
\put(125,125){\circle*{3}}\put(122,129){$\Theta_{r,\varepsilon}$}
\put(140,114){\circle*{3}}\put(137,117){$m_{r,\varepsilon}$}
\put(110,147){\circle*{3}}\put(112,148){$m_{r,\varepsilon_1}$}
\put(125,125){\line(-1,1){60}}%
\put(125,125){\line(1,-1){60}}%
\multiput(12,125)(5,0){23}%
{\circle*{1}}
\put(-10,129){$y_{\Theta_{r,\varepsilon}}$}
\multiput(12,147)(5,0){20}%
{\circle*{1}}
\put(-14,151){$y_{m_{r,\varepsilon_1}}$}
\multiput(155,106)(0,-5){20}%
{\circle*{1}}
\multiput(140,114)(0,-5){22}%
{\circle*{1}}
\put(130,0){$x_{m_{r,\varepsilon}}$}
\qbezier(159,160)(166,149)(180,152)%
\qbezier(161,163)(168,152)(180,155)%
\put(150,145){$\beta_{\Theta_{r,\varepsilon}}$}
\qbezier(88,125)(90,143)(100,150)%
\qbezier(91,125)(93,141)(102,148)%
\put(65,140){$\beta_{\Theta_{r,\varepsilon}}$}
\end{picture}}
\end{picture}
\caption{}
\end{figure}
For arbitrary $ \varepsilon,\; \varepsilon_1>0 $, taking into account that the function $ Re\dfrac{k(q)}{s-q} $ is continuous and takes values from $ -\dfrac{k(q)}{r} $ to $ \dfrac{k(q)}{r}$, there must exist a radius $ 0<R_2 \leqslant R_1$ such that for all $0<r\leqslant R_2$ there exist $ m_{r,\varepsilon},\;m_{r,\varepsilon_1}$ with: \begin{eqnarray} \label{eq_mEpsilon} Re\dfrac{{\zeta }^{\prime }\left(m_{r,\varepsilon}\right)}{\zeta \left(m_{r,\varepsilon}\right)}=Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}-\varepsilon,\;\;Re\dfrac{{\zeta }^{\prime }\left(1-m_{r,\varepsilon_1}\right)}{\zeta \left(1-m_{r,\varepsilon_1}\right)}=Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}+\varepsilon_1.
\end{eqnarray} \\ \linebreak \linebreak Let us define, for all $ s \in Q(R) $: \begin{eqnarray} \alpha(s)\stackrel{\scriptscriptstyle\mathrm{def}}{=}\dfrac{1}{2}\mathrm{ln}\pi +\dfrac{1}{2}Re\left( -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{s}{2}\right)}{\Gamma \left(\dfrac{s}{2} \right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-s}{2}\right)}{\Gamma \left(\dfrac{1-s}{2} \right)} \right).\nonumber \end{eqnarray} \\ \marginpar{$\textcolor{yellow}{\bullet}$} From \eqref{eq_ReMidl} it follows that: \[ Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)} =Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)}=\dfrac{1}{2}\left( Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)} +Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)}\right)=\alpha(m_r), \] which means, taking into account \eqref{eq_SumDig} and \eqref{eq_mEpsilon}: \begin{eqnarray} \label{eq_mEpsilon2} &Re\dfrac{{\zeta }^{\prime }\left(1-m_{r,\varepsilon}\right)}{\zeta \left(1-m_{r,\varepsilon}\right)}=2\alpha(m_{r,\varepsilon})-Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}+\varepsilon=\nonumber \\ &=Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}+\varepsilon+2\alpha(m_{r,\varepsilon})-2\alpha(m_r),\nonumber \\ \\ &Re\dfrac{{\zeta }^{\prime }\left(m_{r,\varepsilon_1}\right)}{\zeta \left(m_{r,\varepsilon_1}\right)}=2\alpha(m_{r,\varepsilon_1})-Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}-\varepsilon_1=\nonumber \\ &=Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}-\varepsilon_1+2\alpha(m_{r,\varepsilon_1})-2\alpha(m_r)\nonumber. \end{eqnarray} \\ Let us write: \[ x_{m_{r,\varepsilon}} +iy_{m_{r,\varepsilon}}\stackrel{\scriptscriptstyle\mathrm{def}}{=} m_{r,\varepsilon},\;\;x_{m_{r,\varepsilon_1}} +iy_{m_{r,\varepsilon_1}}\stackrel{\scriptscriptstyle\mathrm{def}}{=} m_{r,\varepsilon_1}. \] \\ \marginpar{$\textcolor{green}{\bullet}$} The points $ m_{r, \varepsilon} $ and $ m_{r, \varepsilon_1} $ lie on the circle with center at the point $ q $ and radius $ r $; hence all the points $ s = x + iy $ of the smaller of the arcs connecting them satisfy the equation: \[ y=f_r(x) \stackrel{\scriptscriptstyle\mathrm{def}}{=} t_q-\sqrt{r^2-(\sigma_q-x)^2}, \] and: \[ f_r(x_{m_{r,\varepsilon}})=y_{m_{r,\varepsilon}}, \; f_r(x_{m_{r,\varepsilon_1}})=y_{m_{r,\varepsilon_1}}. \] \\ \marginpar{$\textcolor{yellow}{\bullet}$} The function $ g(x, y) $ is differentiable, so the function $g(x, f_r(x)) $ is continuous and differentiable in $ x $.
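For later reference (an editorial expansion of the preceding sentence), the derivative in question is given by the chain rule along the arc:
\[ \dfrac{d}{dx}\,g(x,f_r(x)) = \dfrac{\partial g}{\partial x}(x,f_r(x)) + f_r^{\prime}(x)\,\dfrac{\partial g}{\partial y}(x,f_r(x)), \qquad f_r^{\prime}(x)=\dfrac{x-\sigma_q}{\sqrt{r^{2}-(\sigma_q-x)^{2}}}, \]
which is the form used below in the Rolle argument leading to \eqref{eq_Div_fin}.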
\\ \linebreak Let us define, for all $ s \in Q(R) $: \begin{eqnarray} \omega(s)\stackrel{\scriptscriptstyle\mathrm{def}}{=}\dfrac{1}{2}\mathrm{ln}\pi +Re\left( -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{s}{2}\right)}{\Gamma \left(\dfrac{s}{2}\right)}+I_{\mathcal{P}\setminus \left\lbrace q\right\rbrace }(s)\right).\nonumber \end{eqnarray} \\ \\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} From \eqref{eq_Dzeta_Re} and \eqref{eq_mEpsilon}, by the continuity of the function $ Re\dfrac{{\zeta}^{\prime} \left(x + if_r(x) \right)}{\zeta \left( x + if_r (x) \right)} $ for all $ x \in (\sigma_q-R, \sigma_q + R) $, and based on the mean value theorem, as $ r \to 0 $: \begin{eqnarray} \label{eq_omega_xx} &-k(q)\dfrac{x_{m_r}-x_{m_{r,\varepsilon_1}}}{r^2}=-\varepsilon_1-\omega(1-m_r)+\omega(1-m_{r,\varepsilon_1})=\nonumber\\ &=-\varepsilon_1+ O(x_{m_r}-x_{m_{r,\varepsilon_1}}),\\ &k(q)\dfrac{x_{m_r}-x_{m_{r,\varepsilon}}}{r^2}=\varepsilon-\omega(m_r)+\omega(m_{r,\varepsilon})=\nonumber\\ &=\varepsilon+ O(x_{m_r}-x_{m_{r,\varepsilon}}).\nonumber \end{eqnarray} \\ \marginpar{$\textcolor{yellow}{\bullet}$} Thus for all $ \varepsilon, \; \varepsilon_1>0$ there exists $ 0<R_3 \leqslant R_2$ such that for all $0<r\leqslant R_3 $: \begin{eqnarray} &\lim_{\varepsilon \to 0} m_{r,\varepsilon}=m_r,\;\;\lim_{\varepsilon_1 \to 0} m_{r,\varepsilon_1}=m_r. \nonumber \end{eqnarray} \\ \marginpar{$\textcolor{yellow}{\bullet}$} Thus the real function $ g(x,f_r(x)) $, continuous on the closed interval and differentiable in its interior, takes the following values at the endpoints: \\ \begin{eqnarray} \label{eq_g_one} &g(x_{m_{r,\varepsilon_1}},f_r(x_{m_{r,\varepsilon_1}}))=\nonumber\\ &=\left(Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}+\varepsilon_1 \right)\left(Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}-\varepsilon_1+2\alpha(m_{r,\varepsilon_1})-2\alpha(m_r) \right)=\nonumber\\ &= \left(Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)} \right)^2-\varepsilon_1^2+2\left( Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}+\varepsilon_1\right)(\alpha(m_{r,\varepsilon_1})-\alpha(m_r)), \end{eqnarray} \\ \begin{eqnarray} \label{eq_g_nul} &g(x_{m_{r,\varepsilon}},f_r(x_{m_{r,\varepsilon}}))=\nonumber\\ &=\left(Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}-\varepsilon \right)\left(Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}+\varepsilon+2\alpha(m_{r,\varepsilon})-2\alpha(m_r) \right)=\nonumber\\ &= \left(Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)} \right)^2-\varepsilon^2+2\left( Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}-\varepsilon\right) (\alpha(m_{r,\varepsilon})-\alpha(m_r)). \end{eqnarray} \\ \marginpar{$\textcolor{yellow}{\bullet}$} Consider the equality: \begin{equation} \label{eq_eqviv_g} g(x_{m_{r,\varepsilon}},f_r(x_{m_{r,\varepsilon}})) =g(x_{m_{r,\varepsilon_1}},f_r(x_{m_{r,\varepsilon_1}})).
\end{equation} \\ \marginpar{$\textcolor{yellow}{\bullet}$} By Lagrange's mean value theorem, for all $ \varepsilon, \varepsilon_1> 0 $ and \\ $ \forall \; 0 <r \leqslant R_3 $, on the arc described by $ f_r (x) $ over the interval $(x_{m_{r, \varepsilon_1}}, x_{m_r}) $ there is a point $ \varUpsilon_ {r, \varepsilon_1} \stackrel {\scriptscriptstyle \mathrm {def}} {=} x_{\varUpsilon_{r, \varepsilon_1}} + if_r (x_{\varUpsilon_{r, \varepsilon_1}}) $ for which: \[\alpha(m_{r,\varepsilon_1})-\alpha(m_r)=\alpha^{\prime}_x(x+if_r(x))_{x= x_{\varUpsilon_{r, \varepsilon_1}} } (x_{m_{r,\varepsilon_1}}-x_{m_r}); \] similarly, in the interval $ ( x_{m_{r,\varepsilon}}, x_{m_r})$ there exists $ \varUpsilon_{r,\varepsilon} \stackrel{\scriptscriptstyle\mathrm{def}}{=} x_{\varUpsilon_{r, \varepsilon}}+if_r(x_{\varUpsilon_{r, \varepsilon}})$ with: \[\alpha(m_{r,\varepsilon})-\alpha(m_r)=\alpha^{\prime}_x(x+if_r(x))_{x= x_{\varUpsilon_{r, \varepsilon}} } (x_{m_{r,\varepsilon}}-x_{m_r}). \] In the same intervals there are two further points\\ $ \varkappa_{r,\varepsilon_1} \stackrel{\scriptscriptstyle\mathrm{def}}{=} x_{\varkappa_{r, \varepsilon_1}}+if_r(x_{\varkappa_{r, \varepsilon_1}}) $ and $ \varkappa_{r,\varepsilon} \stackrel{\scriptscriptstyle\mathrm{def}}{=} x_{\varkappa_{r, \varepsilon}}+if_r(x_{\varkappa_{r, \varepsilon}}) $,\\ $ x_{\varkappa_{r, \varepsilon_1}} \in (x_{m_{r,\varepsilon_1}},x_{m_r} ),\;\; x_{\varkappa_{r, \varepsilon}} \in (x_{m_{r,\varepsilon}}, x_{m_r})$, such that: \begin{eqnarray} &\omega(1-m_r)-\omega(1-m_{r,\varepsilon_1})=\omega^{\prime}_x(1-x-if_r(x))_{x= x_{\varkappa_{r, \varepsilon_1}} } (x_{m_r}-x_{m_{r,\varepsilon_1}}),\nonumber\\ &\omega(m_r)-\omega(m_{r,\varepsilon})=\omega^{\prime}_x(x+if_r(x))_{x= x_{\varkappa_{r, \varepsilon}} } (x_{m_r}-x_{m_{r,\varepsilon}}).\nonumber \end{eqnarray} \\ \marginpar{$\textcolor{green}{\bullet}$} Then \eqref{eq_omega_xx} takes the form: \begin{eqnarray} &(x_{m_r}-x_{m_{r,\varepsilon_1}})\left( \dfrac{k(q)}{r^2}+\omega^{\prime}_x(1-x-if_r(x))_{x= x_{\varkappa_{r, \varepsilon_1}} }\right) =\varepsilon_1,\nonumber\\ &\nonumber\\ &(x_{m_r}-x_{m_{r,\varepsilon}})\left( \dfrac{k(q)}{r^2}+\omega^{\prime}_x(x+if_r(x))_{x= x_{\varkappa_{r, \varepsilon}} }\right) =\varepsilon.\nonumber \end{eqnarray} \\ \linebreak \linebreak Or: \begin{eqnarray} \label{eq_div_xx_Ke} &x_{m_r}-x_{m_{r,\varepsilon_1}} =\dfrac{\varepsilon_1 r^2}{k(q)+r^2\omega^{\prime}_x(1-x-if_r(x))_{x= x_{\varkappa_{r, \varepsilon_1}} }},\nonumber\\ &\\ &x_{m_r}-x_{m_{r,\varepsilon}} =\dfrac{\varepsilon r^2}{k(q)+r^2\omega^{\prime}_x(x+if_r(x))_{x= x_{\varkappa_{r, \varepsilon}} }}.\nonumber \end{eqnarray} \marginpar{$\textcolor{green}{\bullet}$} Then, by \eqref{eq_g_one} and \eqref{eq_g_nul}, the equality \eqref{eq_eqviv_g} takes the form: \\ \begin{eqnarray} &\quad-\varepsilon^2+\varepsilon\dfrac{2r^2\left( Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}-\varepsilon\right)\alpha^{\prime}_x(x+if_r(x))_{x= x_{\varUpsilon_{r, \varepsilon}} } }{k(q)+r^2\omega^{\prime}_x(x+if_r(x))_{x= x_{\varkappa_{r, \varepsilon}} }}=\nonumber \nonumber\\ &=-\varepsilon_1^2+\varepsilon_1\dfrac{2r^2\left( Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}+\varepsilon_1\right)\alpha^{\prime}_x(x+if_r(x))_{x= x_{\varUpsilon_{r, \varepsilon_1}} }}{k(q)+r^2\omega^{\prime}_x(1-x-if_r(x))_{x= x_{\varkappa_{r, \varepsilon_1}} }}.\nonumber \end{eqnarray} \marginpar{$\textcolor{yellow}{\bullet}$} Or: \begin{eqnarray}
\label{eq_fin_ur_gg} &A_{r, \varepsilon}\varepsilon^2-B_{r, \varepsilon}\varepsilon=A_{r, \varepsilon_1}\varepsilon_1^2-B_{r, \varepsilon_1}\varepsilon_1. \end{eqnarray} \\ From \eqref{eq_fin_ur_gg} it is evident that there exists $ 0 <R_4 \leqslant R_3$ such that for all $ 0 <r \leqslant R_4 $: \[A_{r, \varepsilon} \stackrel{\scriptscriptstyle\mathrm{def}}{=} 1+\dfrac{2r^2\alpha^{\prime}_x(x+if_r(x))_{x= x_{\varUpsilon_{r, \varepsilon}} }}{k(q)+r^2\omega^{\prime}_x(x+if_r(x))_{x= x_{\varkappa_{r, \varepsilon}} }} >0 \] and \[A_{r, \varepsilon_1} \stackrel{\scriptscriptstyle\mathrm{def}}{=} 1-\dfrac{2r^2\alpha^{\prime}_x(x+if_r(x))_{x= x_{\varUpsilon_{r, \varepsilon_1}} }}{k(q)+r^2\omega^{\prime}_x(1-x-if_r(x))_{x= x_{\varkappa_{r, \varepsilon_1}} }} >0, \] \\ as well as: \[ k(q)+r^2\omega^{\prime}_x(x+if_r(x))_{x= x_{\varkappa_{r, \varepsilon}} } >0 \] and \[ k(q)+r^2\omega^{\prime}_x(1-x-if_r(x))_{x= x_{\varkappa_{r, \varepsilon_1}} } >0, \] \\ where: \[ B_{r, \varepsilon} \stackrel{\scriptscriptstyle\mathrm{def}}{=}\dfrac{2r^2 Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}\alpha^{\prime}_x(x+if_r(x))_{x= x_{\varUpsilon_{r, \varepsilon}} }}{k(q)+r^2\omega^{\prime}_x(x+if_r(x))_{x= x_{\varkappa_{r, \varepsilon}} }}, \] \[ B_{r, \varepsilon_1} \stackrel{\scriptscriptstyle\mathrm{def}}{=}\dfrac{2r^2 Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}\alpha^{\prime}_x(x+if_r(x))_{x= x_{\varUpsilon_{r, \varepsilon_1}} }}{k(q)+r^2\omega^{\prime}_x(1-x-if_r(x))_{x= x_{\varkappa_{r, \varepsilon_1}} }}. \] \\ \linebreak \marginpar{$\textcolor{red}{\bullet}$} Assume that: \begin{eqnarray} \label{eq_Lim_fin_ne0} \alpha(q)\dfrac{\partial}{\partial x}\alpha(q)\ne 0. \end{eqnarray} \\ \marginpar{$\textcolor{green}{\bullet}$} Then, taking into account the existence of a two-dimensional neighborhood of the point $ q $ in which the continuous function of two variables \linebreak $ \alpha(x+iy) \dfrac{\partial} {\partial x} \alpha (x+iy) $ preserves its sign, and also that: \[ \dfrac{d}{d x}\alpha(x+if_r(x))=\dfrac{\partial}{\partial x}\alpha(x+if_r(x))+\dfrac{d}{d x}f_r(x)\dfrac{\partial}{\partial y}\alpha(x+iy)_{y = f_r(x)} \] \\ and that, in accordance with \eqref{eq_div_xx_Ke}, for all $ x \in (\min(x_{m_{r,\varepsilon_1}},x_{m_{r,\varepsilon}} ),\max(x_{m_{r,\varepsilon_1}},x_{m_{r,\varepsilon}} ) )$: \begin{eqnarray} \dfrac{d}{d x}f_r(x)=\dfrac{x-\sigma_q}{\sqrt{r^2-(\sigma_q-x)^2}}=O(r)_{r \to 0 },\nonumber \end{eqnarray} \\ \marginpar{$\textcolor{yellow}{\bullet}$} we have: there exists $ 0<R_5 \leqslant R_4$ such that for all $0<r\leqslant R_5 $: \begin{equation} \label{eq_alpha_ne_0} \alpha(x+if_r(x)) \ne 0,\;\; \dfrac{d}{d x}\alpha(x+if_r(x)) \ne 0,\;\; \forall\;x \in [\sigma_q-R_5,\sigma_q+R_5]. \end{equation} \\ \marginpar{$\textcolor{yellow}{\bullet}$} Note that, under assumption \eqref{eq_Lim_fin_ne0}, for all $ 0 <r \leqslant R_5 $ the factors $ B_{r, \varepsilon_1} $ and $ B_{r, \varepsilon} $ are nonzero and have the same sign. \\ \linebreak For the solvability of equation \eqref{eq_fin_ur_gg} it suffices to show the continuity of $ \alpha(m_{r, \varepsilon}) $ in $ \varepsilon $. Indeed, in that case, in view of \eqref{eq_g_nul}, the left-hand side of equality \eqref{eq_fin_ur_gg} is continuous in $ \varepsilon $.
\\ \linebreak Then for every $ \varepsilon_1> 0 $, as $ \varepsilon \to 0 $, there is a value of $ \varepsilon $ for which the left-hand side of \eqref{eq_fin_ur_gg} is smaller in modulus than the right-hand side; likewise, there exists $ \varDelta_1> 0$ such that for every $ 0 <\varepsilon_1 \leqslant \varDelta_1 $, as $ \varepsilon $ grows, there is a value of $ \varepsilon $ at which the modulus of the left-hand side exceeds that of the right-hand side while both sides have the same sign. \\ \linebreak Consequently, by continuity, between these parameter values there must be a point that is a root of equation \eqref{eq_fin_ur_gg} with respect to the variable $ \varepsilon $, for fixed $ 0 <\varepsilon_1 \leqslant \varDelta_1 $. \\ \linebreak The continuity of $ \alpha (m_{r, \varepsilon}) $ with respect to $ \varepsilon $ follows from the continuity of the function $ \alpha (s) $ for all $ s \in Q (R) $ and the continuity of $ m_{r, \varepsilon} $ in $ \varepsilon $, because equation \eqref{eq_omega_xx} can be written as follows: \[ k(q)\dfrac{x_{m_r}-\sigma_q-h_r(\tau)}{r^2}=-\tau-\omega(m_r)+\omega(\sigma_q+h_r(\tau)+i(t_q-\sqrt{r^2-h_r(\tau)^2})), \] where $h_r(\tau) $ is defined by the equality: \begin{equation} \label{eq_hr_def} h_r(\varepsilon) \stackrel{\scriptscriptstyle\mathrm{def}}{=} x_{m_{r,\varepsilon}}-\sigma_q . \end{equation} \\ Or: \begin{eqnarray} k(q)\dfrac{\sigma_q+h_r(\tau)-x_{m_r}}{r^2}-\omega(m_r)+\omega(\sigma_q+h_r(\tau)+i(t_q-\sqrt{r^2-h_r(\tau)^2}))=\tau. \nonumber \end{eqnarray} \\ \marginpar{$\textcolor{yellow}{\bullet}$} That is, the function $ h_r (\tau) $ is the inverse of the function: \begin{equation} \label{eq_fun_h} k(q)\dfrac{\sigma_q+\tau-x_{m_r}}{r^2}-\omega(m_r)+\omega(\sigma_q+\tau+i(t_q-\sqrt{r^2-\tau^2})). \end{equation} \\ \marginpar{$\textcolor{green}{\bullet}$} It follows from the inverse function theorem that if a function is defined, continuous, and strictly monotone on some interval, then it has an inverse function that is continuous and strictly monotone on the corresponding interval, the image of the initial interval. \\ \linebreak Consider the derivative of the function in \eqref{eq_fun_h}: \[ \dfrac{k(q)}{r^2}+\omega_{x}^{\prime}(\sigma_q+\tau+i(t_q-\sqrt{r^2-\tau^2}))+\dfrac{\tau}{\sqrt{r^2-\tau^2}}\omega_{y}^{\prime}(\sigma_q+\tau+i(t_q-\sqrt{r^2-\tau^2})). \] \\ From \eqref{eq_div_xx_Ke} it follows that for all $ \tau> 0$, as $ r \to 0 $: \[ h_r(\tau)=O(r^2). \] \\ That is, the range of the argument $ \tau $ in \eqref{eq_fun_h} satisfies, as $r \to 0 $: \[ \tau=O(r^2). \] \\ Then there exists $ 0 <R_6 \leqslant R_5$ such that for all $ 0 <r \leqslant R_6 $ and for any admissible argument $ \tau $ in \eqref{eq_fun_h}, the derivative of this function is strictly positive. \\ \\ That is, the function in \eqref{eq_fun_h} is continuous and strictly increasing on the whole interval where its argument is defined. \\ \\ \marginpar{$\textcolor{yellow}{\bullet}$} This means that for every $ 0 <r \leqslant R_6 $ there exists $ \varDelta_1> 0$ such that for every $ 0 <\varepsilon_1 \leqslant \varDelta_1 $ there exists $ \varepsilon> 0 $ for which \eqref{eq_eqviv_g} holds: \[ g(x_{m_{r,\varepsilon}},f_r(x_{m_{r,\varepsilon}})) =g(x_{m_{r,\varepsilon_1}},f_r(x_{m_{r,\varepsilon_1}})). \] \\ \marginpar{$\textcolor{red}{\bullet}$} Assume now that: \begin{eqnarray} \label{eq_xx_eqviv} x_{m_{r,\varepsilon_1}}=x_{m_{r,\varepsilon}}.
\end{eqnarray} \\ Then from \eqref{eq_g_one}, \eqref{eq_g_nul}, \eqref{eq_eqviv_g}, in view of the equality $\alpha(m_{r,\varepsilon_1})=\alpha(m_{r,\varepsilon}) $: \[\varepsilon^2 -\varepsilon_1^2+2\left(\varepsilon+\varepsilon_1\right) (\alpha(m_{r,\varepsilon_1})-\alpha(m_r))=0. \] \\ That is: \begin{equation} \label{eq_e_min_e1} \varepsilon-\varepsilon_1=2(\alpha(m_r)-\alpha(m_{r,\varepsilon_1})). \end{equation} \\ And from \eqref{eq_mEpsilon2}: \[Re\dfrac{{\zeta }^{\prime }\left(m_{r,\varepsilon_1}\right)}{\zeta \left(m_{r,\varepsilon_1}\right)}=Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}-\varepsilon_1+2\alpha(m_{r,\varepsilon_1})-2\alpha(m_r)=Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}-\varepsilon. \] \\ That is: \begin{eqnarray} Re\dfrac{{\zeta }^{\prime }\left(m_{r,\varepsilon_1}\right)}{\zeta \left(m_{r,\varepsilon_1}\right)}=Re\dfrac{{\zeta }^{\prime }\left(m_{r,\varepsilon}\right)}{\zeta \left(m_{r,\varepsilon}\right)}. \nonumber \end{eqnarray} \\ \marginpar{$\textcolor{yellow}{\bullet}$} According to \eqref{eq_hr_def}, \eqref{eq_fun_h}, \eqref{eq_xx_eqviv}, this means that: \[\sigma_q+h_r(\varepsilon)=\sigma_q+h_r(\varepsilon_1) \Leftrightarrow h_r(\varepsilon)=h_r(\varepsilon_1), \] and, in view of the continuity and strict monotonicity of $ h_r(\varepsilon) $: \[ \varepsilon=\varepsilon_1. \] Then from \eqref{eq_e_min_e1}: \[\alpha(m_r)-\alpha(m_{r,\varepsilon_1})=0, \] which contradicts the strict monotonicity of the function $ \alpha (s) $ along the arc, by \eqref{eq_div_xx_Ke} and \eqref{eq_alpha_ne_0}. \\ \\ \marginpar{$\textcolor{yellow}{\bullet}$} Hence assumption \eqref{eq_xx_eqviv} is false; that is: \[\forall\;0<r\leqslant R_6, \;\;\exists\;0<\varDelta \leqslant \varDelta_1:\; \forall \; 0<\varepsilon_1<\varDelta,\; \exists\; \varepsilon>0:\;\; m_{r,\varepsilon_1}\ne m_{r,\varepsilon} \] and \[ g(x_{m_{r,\varepsilon}},f_r(x_{m_{r,\varepsilon}})) =g(x_{m_{r,\varepsilon_1}},f_r(x_{m_{r,\varepsilon_1}})). \] \\ That is, a real function that is continuous on the closed interval and differentiable in its interior takes equal values at the endpoints. \\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} By Rolle's theorem on the extremum of a differentiable function on an interval, we have: \begin{equation}\label{eq_TauZero_w} \exists\; x_{\Theta_{r, \varepsilon_1}}:\;\; {\left(g(x,f_r(x))\right)}_{x = x_{\Theta_{r, \varepsilon_1}} }^{\prime}=0, \end{equation} \\ where: \[ \Theta_{r,\varepsilon_1} \stackrel{\scriptscriptstyle\mathrm{def}}{=} (x_{\Theta_{r, \varepsilon_1}},f_r(x_{\Theta_{r, \varepsilon_1}}) ). \] \\ From \eqref{eq_fin_ur_gg} it follows that: \[\varepsilon= o(1)_{\varepsilon_1 \to 0}. \] \\ And taking into account that the value $ x_{\Theta_{r, \varepsilon_1}} $ lies between $ x_{m_{r, \varepsilon_1}} $ and $ x_{m_{r, \varepsilon}} $, we have: \[ \lim_{\varepsilon_1 \to 0} \Theta_{r,\varepsilon_1}=m_r. \] \\ Let $ \beta_{\Theta_{r, \varepsilon_1}} $ be the angle between the ordinate axis and the line passing through the points $ q $ and $ \Theta_{r, \varepsilon_1} $. \\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} Also: \[ \lim_{\varepsilon_1 \to 0} \beta_{\Theta_{r, \varepsilon_1}} = \beta_{m_r} \] and, in view of the infinite differentiability of the function $ Re \dfrac{{\zeta}^{\prime} \left(x+if_r(x) \right)}{\zeta \left(x+if_r (x) \right)} $ for all $ x $ between $ x_{m_{r, \varepsilon_1}} $ and $ x_{m_{r, \varepsilon}} $ (that is,
the corresponding continuity of the derivative of the function $ g(x, f_r(x)) $), it follows from equality \eqref{eq_TauZero_w} that: \\ \begin{eqnarray} {\left(g(x,f_r(x))\right)}_{x = x_{m_r}}^{\prime}=0.\nonumber \end{eqnarray} \\ \marginpar{$\textcolor{yellow}{\bullet}$} Taking into account that the angle $ \beta_{m_r} $ between the axis of ordinates and the line passing through the points $ q $ and $ m_r $ coincides with the angle of inclination of the tangent through the point $ m_r $, this equality can be written as follows: \\ \begin{eqnarray} &{\left(g(x,f_r(x))\right)}_{x = x_{m_r}}^{\prime}=\nonumber\\ &\nonumber\\ &=\dfrac{d}{dx}\left(Re\dfrac{{\zeta }^{\prime }\left(x+if_r(x)\right)}{\zeta \left(x+if_r(x)\right)} Re\dfrac{{\zeta }^{\prime }\left(1-x-if_r(x)\right)}{\zeta \left(1-x-if_r(x)\right)} \right)_{x = x_{m_r}}=\nonumber \end{eqnarray} \begin{eqnarray} \label{eq_Div_fin} &=Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}\left(Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)} \right)^{\prime}_x-Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)}\left(Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)} \right)_x^{\prime }-\nonumber\\ &-\tg \beta_{m_r} \left( Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}\left(Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)} \right)^{\prime}_y\right. -\nonumber\\ &\left. -Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)}\left(Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)} \right)_y^{\prime }\right)=0. \end{eqnarray} \\ \linebreak \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} And taking into account \eqref{eq_ReZero} and \eqref{eq_SumDig}: \[ Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)} =Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)}, \] \\ \begin{eqnarray} &\left(Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)} \right)^{\prime}_x-\left(Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)} \right)_x^{\prime }=\nonumber\\ &=\left(Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)} + Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)} \right)_x^{\prime }=\nonumber\\ &=\dfrac{\partial}{\partial x}\left(Re \left(-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{s}{2}\right)}{\Gamma \left(\dfrac{s}{2}\right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-s}{2}\right)}{\Gamma \left(\dfrac{1-s}{2}\right)}+\ln\pi \right) \right)_{s=m_r}=\nonumber\\ &=Re\dfrac{d}{d s}\left(-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{m_r}{2}\right)}{\Gamma \left(\dfrac{m_r}{2}\right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-m_r}{2}\right)}{\Gamma \left(\dfrac{1-m_r}{2}\right)} \right),\nonumber \end{eqnarray} \\ \linebreak \begin{eqnarray} &\left(Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)} \right)^{\prime}_y-\left(Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)} \right)_y^{\prime }=\nonumber\\ &=\left(Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)} + Re\dfrac{{\zeta }^{\prime }\left(1-m_r\right)}{\zeta \left(1-m_r\right)} \right)_y^{\prime }=\nonumber \end{eqnarray} \begin{eqnarray} &=\dfrac{\partial}{\partial y}\left(Re \left(-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{s}{2}\right)}{\Gamma \left(\dfrac{s}{2}\right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-s}{2}\right)}{\Gamma \left(\dfrac{1-s}{2}\right)}+\ln\pi \right) \right)_{s=m_r}=\nonumber\\
}\left(\dfrac{1-s}{2}\right)}{\Gamma \left(\dfrac{1-s}{2}\right)}+\ln\pi \right) \right)_{s=m_r}=\nonumber\\ &=Im\dfrac{d}{d s}\left(-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{m_r}{2}\right)}{\Gamma \left(\dfrac{m_r}{2}\right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-m_r}{2}\right)}{\Gamma \left(\dfrac{1-m_r}{2}\right)} \right).\nonumber \end{eqnarray} \\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} Thus, equality \eqref{eq_Div_fin} can be written as follows: \begin{eqnarray} &{\left(g(x,f_r(x))\right)}_{x = x_{m_r}}^{\prime}=\nonumber\\ &=Re\dfrac{{\zeta }^{\prime }\left(m_r\right)}{\zeta \left(m_r\right)}\left(Re\dfrac{d}{d s}\left(-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{m_r}{2}\right)}{\Gamma \left(\dfrac{m_r}{2}\right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-m_r}{2}\right)}{\Gamma \left(\dfrac{1-m_r}{2}\right)} \right)-\right. \nonumber\\ &\left. -\tg \beta_{m_r} Im\dfrac{d}{d s}\left(-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{m_r}{2}\right)}{\Gamma \left(\dfrac{m_r}{2}\right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-m_r}{2}\right)}{\Gamma \left(\dfrac{1-m_r}{2}\right)} \right)\right)=0.\nonumber \end{eqnarray} \\ \marginpar{$\textcolor{yellow}{\bullet}$} And taking into account \eqref{eq_tg_beta}, \eqref{eq_ReMidl}, as well as the existence of finite limits, as $ r \to 0 $, for all the terms of the last equality, we get: \begin{eqnarray} &0=\lim_{ r \to 0 }{\left(g(x,f_r(x))\right)}_{x = x_{m_r}}^{\prime}=\nonumber\\ &=\left( \dfrac{1}{2}\mathrm{ln}\pi +\dfrac{1}{2}Re\left( -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{q}{2}\right)}{\Gamma \left(\dfrac{q}{2} \right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-q}{2}\right)}{\Gamma \left(\dfrac{1-q}{2} \right)} \right)\right)* \nonumber\\ &*\left(Re\left(-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{q}{2}\right)}{\Gamma \left(\dfrac{q}{2}\right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-q}{2}\right)}{\Gamma \left(\dfrac{1-q}{2}\right)} \right)^{\prime }\right)=\dfrac{1}{2}\alpha(q)\dfrac{\partial}{\partial x}\alpha(q).\nonumber \end{eqnarray} \\ \\ \marginpar{$\textcolor{green}{\bullet}$} This contradicts assumption \eqref{eq_Lim_fin_ne0}; that is: \begin{eqnarray} \alpha(q)\dfrac{\partial}{\partial x}\alpha(q)= 0.\nonumber \end{eqnarray} \\ \newpage Equivalently: \\ \begin{eqnarray} \label{eq_Lim_fin} &\left( \dfrac{1}{2}\mathrm{ln}\pi +\dfrac{1}{2}Re\left( -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{q}{2}\right)}{\Gamma \left(\dfrac{q}{2} \right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-q}{2}\right)}{\Gamma \left(\dfrac{1-q}{2} \right)} \right)\right)* \nonumber\\ &*\left(Re\left(-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{q}{2}\right)}{\Gamma \left(\dfrac{q}{2}\right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-q}{2}\right)}{\Gamma \left(\dfrac{1-q}{2}\right)} \right)^{\prime }\right)=0.
\end{eqnarray} \\ \linebreak Taking into account \eqref{eq_PrimeLn}, \eqref{eq_PrimeLnDigamma} and the formula for the Digamma function from~\cite[p.259, \S 6.3.16]{Abramowitz-1972}, we estimate the first factor: \\ \linebreak \begin{eqnarray} \label{eq_FirstMult1} &\dfrac{1}{2}\mathrm{ln}\pi +\dfrac{1}{2}Re\left( -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{q}{2}\right)}{\Gamma \left(\dfrac{q}{2} \right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-q}{2}\right)}{\Gamma \left(\dfrac{1-q}{2} \right)} \right)=\nonumber\\ &=\dfrac{1}{2}Re\left( \mathrm{ln}\pi +\dfrac{\gamma}{2}+\dfrac{1}{q}+\sum _{n=1}^{\infty }\left(\dfrac{1}{q+2n}-\dfrac{1}{2n}\right)+\right. \nonumber \\ &\left. +\dfrac{\gamma}{2}+ \dfrac{1}{1-q}+\sum _{n=1}^{\infty }\left(\dfrac{1}{1-q+2n}-\dfrac{1}{2n}\right)\right)= \nonumber \end{eqnarray} \begin{eqnarray} \label{eq_FirstMult2} &=\dfrac{1}{2}\left( \mathrm{ln}\pi +\gamma +\dfrac{\sigma_q}{\sigma_q^2+t_q^2}+\sum _{n=1}^{\infty }\left(\dfrac{2n+\sigma_q}{(2n+\sigma_q)^2+t_q^2}-\dfrac{1}{2n}\right)+\right. \nonumber \\ &\left.+ \dfrac{1-\sigma_q}{(1-\sigma_q)^2+t_q^2} +\sum _{n=1}^{\infty }\left(\dfrac{2n+1-\sigma_q}{(2n+1-\sigma_q)^2+t_q^2}-\dfrac{1}{2n}\right)\right). \end{eqnarray} \\ \newpage Let's note that the derivative of the function \\ \[ \dfrac{1}{2}\mathrm{ln}\pi +\dfrac{1}{2}Re\left( -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{x+iy}{2}\right)}{\Gamma \left(\dfrac{x+iy}{2} \right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-x-iy}{2}\right)}{\Gamma \left(\dfrac{1-x-iy}{2} \right)} \right) \] along the ordinate axis, for any fixed $ 0<x \leqslant \dfrac{1}{2} $ and $ y>0 $, is negative: \\ \begin{eqnarray} &\dfrac{\partial}{\partial y}\left( \dfrac{1}{2}\mathrm{ln}\pi +\dfrac{1}{2}Re\left( -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{x+iy}{2}\right)}{\Gamma \left(\dfrac{x+iy}{2} \right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-x-iy}{2}\right)}{\Gamma \left(\dfrac{1-x-iy}{2} \right)} \right)\right)= \nonumber\\ &=\dfrac{1}{2} \sum _{n=0}^{\infty } \left( \dfrac{\partial}{\partial y} \left(\dfrac{2n+x}{(2n+x)^2+y^2}\right)+\dfrac{\partial}{\partial y}\left(\dfrac{2n+1-x}{(2n+1-x)^2+y^2}\right)\right)= \nonumber\\ &=-\dfrac{1}{2} \sum _{n=0}^{\infty } \left( \dfrac{2(2n+x)y}{((2n+x)^2+y^2)^2}+\dfrac{2(2n+1-x)y}{((2n+1-x)^2+y^2)^2}\right)<0.\nonumber \end{eqnarray} \\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} Therefore, if the left-hand side of equality \eqref{eq_FirstMult1} is negative for numbers of the form $ q_0 = \sigma_0 + i t_0 $, where $ t_0> 0 $ is fixed and $ 0 <\sigma_0 \leqslant \dfrac{1}{2} $ is arbitrarily chosen, then it will be negative for any $ q = \sigma_q + i t_q: \; t_q \geqslant t_0, \; \; 0< \sigma_q \leqslant \dfrac{1}{2} $. \\ \linebreak \newpage Consider $ q_0 = \sigma_0 + 8i, \; \; 0< \sigma_0 \leqslant \dfrac{1}{2} $; then from \eqref{eq_FirstMult1} it follows that: \begin{eqnarray} \label{eq_SecndMult} &\dfrac{1}{2}\mathrm{ln}\pi +\dfrac{1}{2}Re\left( -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{q_0}{2}\right)}{\Gamma \left(\dfrac{q_0}{2} \right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-q_0}{2}\right)}{\Gamma \left(\dfrac{1-q_0}{2} \right)} \right)=\nonumber\\ &=\dfrac{1}{2}\left( \mathrm{ln}\pi+\gamma+ \dfrac{1-\sigma_0}{(1-\sigma_0)^2+8^2} +\dfrac{\sigma_0}{\sigma_0^2+8^2}+\right. \nonumber\\ &\left. +\sum _{n=1}^{\infty }\left(\dfrac{2n+\sigma_0}{(2n+\sigma_0)^2+8^2}-\dfrac{1}{2n}\right)+\right. \nonumber \\ &\left.
+\sum _{n=1}^{\infty }\left(\dfrac{2n+1-\sigma_0}{(2n+1-\sigma_0)^2+8^2}-\dfrac{1}{2n}\right)\right)<\nonumber\\ &<\dfrac{1}{2}\left( \mathrm{ln}\pi+\gamma+ \dfrac{1}{8^2} +\sum _{n=1}^{\infty }\left(\dfrac{2n+\dfrac{1}{2}}{(2n)^2+8^2}-\dfrac{1}{2n}\right)+\right. \nonumber \\ &\left. +\sum _{n=1}^{\infty }\left(\dfrac{2n+1}{(2n)^2+8^2}-\dfrac{1}{2n}\right)\right)=\nonumber\\ &=\dfrac{1}{2}\left( \mathrm{ln}\pi+\gamma+ \dfrac{1}{8^2} +\sum _{n=1}^{\infty }\left( \dfrac{n-8^2}{2n((2n)^2+8^2)}+\dfrac{2n-8^2}{2n((2n)^2+8^2)}\right)\right)=\nonumber \\ &=\dfrac{1}{2}\left( \mathrm{ln}\pi+\gamma+ \dfrac{1}{64} +\sum _{n=1}^{\infty }\left( \dfrac{3}{8(n^2+16)}-\dfrac{16}{n(n^2+16)}\right)\right). \end{eqnarray} \\ From~\cite[p.259]{Abramowitz-1972}, \cite[\S~6.495]{Adams-1922}: \[ y\sum _{n=1}^{\infty } \dfrac{1}{n^2+y^2}=-\dfrac{1}{2y}+\dfrac{\pi}{2}\coth \pi y.\] Consequently: \begin{equation} \label{eq_FirstCoth} \sum _{n=1}^{\infty } \dfrac{1}{n^2+16}=-\dfrac{1}{32}+\dfrac{\pi}{8}\coth 4\pi=0.3614490... \end{equation} \marginpar{$\textcolor{green}{\bullet}$} The remaining sum in \eqref{eq_SecndMult} is estimated from below by its first nine terms: \begin{equation} \label{eq_First16n3} \sum _{n=1}^{\infty } \dfrac{16}{n(n^2+16)}>\sum _{n=1}^{9 } \dfrac{16}{n(n^2+16)}>1.8873330. \end{equation} Thus, taking into account \eqref{eq_FirstCoth} and \eqref{eq_First16n3}, the inequality \eqref{eq_SecndMult} can be continued: \begin{eqnarray} &\dfrac{1}{2}\left( \mathrm{ln}\pi+\gamma+ \dfrac{1}{64} +\sum _{n=1}^{\infty }\left( \dfrac{3}{8(n^2+16)}-\dfrac{16}{n(n^2+16)}\right)\right)<\nonumber\\ &<\dfrac{1}{2}\left(1.1447299+0.5772157+0.015625+\dfrac{3}{8}\cdot 0.3614491-1.8873330 \right)<\nonumber\\ &<\dfrac{1}{2}\left(1.8731141-1.8873330 \right)<0. \nonumber \end{eqnarray} That is, for all $ q=\sigma_q+i t_q: \; t_q \geqslant 8, \; \; 0 <\sigma_q \leqslant \dfrac{1}{2} $, the first factor of the product in \eqref{eq_Lim_fin} is not equal to $ 0 $. \\ And taking into account the symmetry of the values of this factor with respect to the line $ \sigma_q = \dfrac{1}{2} $, it is not equal to $ 0 $ for any $ q = \sigma_q + i t_q: \; t_q \geqslant 8, \; \; 0 <\sigma_q <1 $. \\ \linebreak Let's estimate the minimal value $ t_q> 0 $. \\ \linebreak For every $ \rho=\sigma+it $: \begin{eqnarray} &\dfrac{1}{\rho}+\dfrac{1}{\bar{\rho}}+\dfrac{1}{1-\rho}+\dfrac{1}{1-\bar{\rho}}=\nonumber\\ &=\dfrac{\sigma}{\sigma^2+t^2}+\dfrac{\sigma}{\sigma^2+t^2}+\dfrac{1-\sigma}{(1-\sigma)^2+t^2}+\dfrac{1-\sigma}{(1-\sigma)^2+t^2}=\nonumber\\ &=\dfrac{2\sigma}{\sigma^2+t^2}+\dfrac{2(1-\sigma)}{(1-\sigma)^2+t^2}>\dfrac{2\sigma}{1+t^2}+\dfrac{2(1-\sigma)}{1+t^2}=\dfrac{2}{1+t^2}.\nonumber \end{eqnarray} \\ \linebreak Denote $ t_1 \stackrel{\scriptscriptstyle \mathrm {def}}{=} \min_{\rho \in \mathcal {P}} \left| Im (\rho) \right| $; then, in view of \eqref{eq_RHO}: \[ \dfrac{2}{1+t_1^2}<\sum _{\rho \in \mathcal{P}}\dfrac{1}{\rho }<0.0230958, \] i.e. \begin{eqnarray} &t_1>9.2518015. \nonumber \end{eqnarray} \\ \linebreak Thus, for every $ q \in \mathcal{P} $, the factor \[ \dfrac{1}{2}\mathrm{ln}\pi +\dfrac{1}{2}Re\left( -\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{q}{2}\right)}{\Gamma \left(\dfrac{q}{2} \right)}-\dfrac{1}{2}\dfrac{{\Gamma }^{\prime }\left(\dfrac{1-q}{2}\right)}{\Gamma \left(\dfrac{1-q}{2} \right)} \right) \ne 0. \]
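\\ \linebreak The elementary numerical constants used above are easy to reproduce. The following short Python script is an illustrative sanity check of \eqref{eq_FirstCoth}, \eqref{eq_First16n3}, the sign of the final bracket and the bound on $ t_1 $; it is not part of the proof: \begin{verbatim}
import math

# Sum_{n>=1} 1/(n^2+16) via  y*Sum 1/(n^2+y^2) = -1/(2y) + (pi/2)coth(pi*y), y = 4
coth = lambda t: math.cosh(t) / math.sinh(t)
s1 = (-1 / (2 * 4) + (math.pi / 2) * coth(4 * math.pi)) / 4
print(s1)                                    # 0.3614490...

# first nine terms of Sum_{n>=1} 16/(n(n^2+16))
s2 = sum(16 / (n * (n * n + 16)) for n in range(1, 10))
print(s2)                                    # 1.8873344... > 1.8873330

# the final bracket is negative
gamma = 0.5772156649015329                   # Euler's constant
print(0.5 * (math.log(math.pi) + gamma + 1 / 64 + (3 / 8) * s1 - s2))   # < 0

# lower bound on t_1 from 2/(1+t_1^2) < 0.0230958
print(math.sqrt(2 / 0.0230958 - 1))          # 9.2518...
\end{verbatim}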
\\ \linebreak Hence the second factor of \eqref{eq_Lim_fin} must be equal to $ 0 $, which is equivalent to: \[ Re \left( \dfrac{{\Gamma }^{\prime }\left(\dfrac{q}{2}\right)}{\Gamma \left(\dfrac{q}{2}\right)}\right)^{\prime }=Re \left( \dfrac{{\Gamma }^{\prime }\left(\dfrac{1-q}{2}\right)}{\Gamma \left(\dfrac{1-q}{2}\right)}\right)^{\prime }. \] \\ \linebreak $ \square $ \\ \newpage Let's prove the third Lemma: \\ \linebreak \chapter{LEMMA 3} \[ \forall\; s=x+i y,\;0<x \leqslant \dfrac{1}{2},\;y\geqslant 4: \] \marginpar{$\textcolor{red}{\bullet}$} \begin{eqnarray} \label{eq_sigma_1_2} &Re \left( \dfrac{{\Gamma }^{\prime }\left(\dfrac{s}{2}\right)}{\Gamma \left(\dfrac{s}{2}\right)}\right)^{\prime }=Re \left( \dfrac{{\Gamma }^{\prime }\left(\dfrac{1-s}{2}\right)}{\Gamma \left(\dfrac{1-s}{2}\right)}\right)^{\prime } \Leftrightarrow \\ &\nonumber\\ &\nonumber\\ &\Leftrightarrow x =\dfrac{1}{2}.\nonumber \end{eqnarray} \\ \chapter{PROOF:} \\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} From \eqref{eq_FirstMult2}, the equality \eqref{eq_sigma_1_2} can be written as follows: \begin{eqnarray} \label{eq_DivXX_0} \sum _{n=0}^{\infty } \left( \dfrac{(2n+x)^2-y^2}{((2n+x)^2+y^2)^2}-\dfrac{(2n+1-x)^2-y^2}{((2n+1-x)^2+y^2)^2}\right)=0. \end{eqnarray} \\ \linebreak In turn: \begin{eqnarray} &\sum _{n=0}^{\infty } \left( \dfrac{(2n+x)^2-y^2}{((2n+x)^2+y^2)^2}-\dfrac{(2n+1-x)^2-y^2}{((2n+1-x)^2+y^2)^2}\right)=\nonumber\\ &=\sum _{n=0}^{\infty } \left( \dfrac{1}{(2n+x)^2+y^2}-\dfrac{1}{(2n+1-x)^2+y^2}\right)-\nonumber\\ &-2y^2\sum _{n=0}^{\infty } \left( \dfrac{1}{((2n+x)^2+y^2)^2}-\dfrac{1}{((2n+1-x)^2+y^2)^2}\right)=\nonumber \end{eqnarray} \begin{eqnarray} \label{eq_DivXX_line} &=\sum _{n=0}^{\infty } \dfrac{(1-2x)(4n+1)}{((2n+x)^2+y^2)((2n+1-x)^2+y^2)}-\nonumber\\ &-2y^2\sum _{n=0}^{\infty } \dfrac{(1-2x)(4n+1)((2n+x)^2+(2n+1-x)^2+2y^2)}{((2n+x)^2+y^2)^2((2n+1-x)^2+y^2)^2}=\nonumber\\ &=(1-2x) \left( \sum _{n=0}^{\infty } \dfrac{4n+1}{((2n+x)^2+y^2)((2n+1-x)^2+y^2)}-\right. \nonumber\\ &\left. -2y^2\sum _{n=0}^{\infty } \dfrac{(4n+1)((2n+x)^2+(2n+1-x)^2+2y^2)}{((2n+x)^2+y^2)^2((2n+1-x)^2+y^2)^2}\right). \end{eqnarray}
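\\ \linebreak The algebra of \eqref{eq_DivXX_line} can also be confirmed numerically. The following Python sketch is an illustrative check only (truncated sums, with sample values of $ x $ and $ y $ chosen here for the illustration); it compares the left-hand side of \eqref{eq_DivXX_0} with the factored form and confirms the cancellation at $ x=\dfrac{1}{2} $: \begin{verbatim}
def term(a, y):
    return (a * a - y * y) / (a * a + y * y) ** 2

def lhs(x, y, N=100000):       # truncated left-hand side of eq_DivXX_0
    return sum(term(2 * n + x, y) - term(2 * n + 1 - x, y) for n in range(N))

def bracket(x, y, N=100000):   # the outer bracket in eq_DivXX_line
    A = sum((4 * n + 1)
            / (((2 * n + x) ** 2 + y * y) * ((2 * n + 1 - x) ** 2 + y * y))
            for n in range(N))
    B = sum((4 * n + 1) * ((2 * n + x) ** 2 + (2 * n + 1 - x) ** 2 + 2 * y * y)
            / (((2 * n + x) ** 2 + y * y) ** 2 * ((2 * n + 1 - x) ** 2 + y * y) ** 2)
            for n in range(N))
    return A - 2 * y * y * B

x, y = 0.3, 5.0
print(lhs(x, y), (1 - 2 * x) * bracket(x, y))   # the two values agree
print(lhs(0.5, y))                              # 0.0: the terms cancel at x = 1/2
\end{verbatim}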
\\ \linebreak Let's estimate the sum in the outer brackets of equality \eqref{eq_DivXX_line}: \begin{eqnarray} &\sum_{n=0}^{\infty } \dfrac{4n+1}{((2n+x)^2+y^2)((2n+1-x)^2+y^2)}-\nonumber\\ & -2y^2\sum _{n=0}^{\infty } \dfrac{(4n+1)((2n+x)^2+(2n+1-x)^2+2y^2)}{((2n+x)^2+y^2)^2((2n+1-x)^2+y^2)^2}.\nonumber \end{eqnarray} \\ \linebreak From~\cite[p.259]{Abramowitz-1972}, \cite[\S~6.495]{Adams-1922}: \[ \sum _{n=1}^{\infty } \dfrac{1}{(2n-1)^2+y^2}=\dfrac{\pi}{4y}\tanh \dfrac{\pi y}{2}, \] \[ \sum _{n=1}^{\infty } \dfrac{1}{(2n)^2+y^2}=-\dfrac{1}{2y^2}+\dfrac{\pi}{4y}\coth \dfrac{\pi y}{2}.\] Then, for the first summand of the sum under consideration: \begin{eqnarray} &\sum_{n=0}^{\infty } \dfrac{4n+1}{((2n+x)^2+y^2)((2n+1-x)^2+y^2)}<\nonumber\\ &<\dfrac{1}{(x^2+y^2)((1-x)^2+y^2)}+\sum_{n=1}^{\infty } \dfrac{4n+1}{((2n-1)^2+y^2)((2n)^2+y^2)}<\nonumber\\ &<\dfrac{1}{y^4}+\sum_{n=1}^{\infty }\left( \dfrac{1}{(2n-1)^2+y^2}-\dfrac{1}{(2n)^2+y^2}\right)+\nonumber\\ &+\sum_{n=1}^{\infty }\dfrac{2}{((2n-1)^2+y^2)^2}.\nonumber \end{eqnarray} Here: \[\sum _{n=1}^{\infty }\left( \dfrac{1}{(2n-1)^2+y^2}-\dfrac{1}{(2n)^2+y^2}\right)=\dfrac{\pi}{4y}\tanh \dfrac{\pi y}{2}-\dfrac{\pi}{4y} \coth \dfrac{\pi y}{2} +\dfrac{1}{2y^2}, \] \begin{eqnarray} &\sum _{n=1}^{\infty }\dfrac{2}{((2n-1)^2+y^2)^2}=-\dfrac{1}{y}\dfrac{d}{d y}\left( \sum _{n=1}^{\infty } \dfrac{1}{(2n-1)^2+y^2}\right)=\nonumber\\ &=-\dfrac{1}{y}\dfrac{d}{d y}\left(\dfrac{\pi}{4y}\tanh \dfrac{\pi y}{2} \right)= \dfrac{\pi}{4y^3}\tanh \dfrac{\pi y}{2}-\dfrac{\pi^2}{8y^2}\dfrac{1}{\cosh^2 \dfrac{\pi y}{2}}. \nonumber \end{eqnarray} That is, \begin{eqnarray} &\sum_{n=0}^{\infty } \dfrac{4n+1}{((2n+x)^2+y^2)((2n+1-x)^2+y^2)}<\nonumber\\ &<\dfrac{1}{2y^2}+\dfrac{\pi}{4y^3}\tanh \dfrac{\pi y}{2}+\dfrac{1}{y^4}-\nonumber\\ &-\dfrac{\pi}{4y}\left( \coth \dfrac{\pi y}{2}-\tanh \dfrac{\pi y}{2}\right)-\dfrac{\pi^2}{8y^2}\dfrac{1}{\cosh^2 \dfrac{\pi y}{2}}.
\nonumber \end{eqnarray} The second summand: \begin{eqnarray} &\sum _{n=0}^{\infty } \dfrac{(4n+1)((2n+x)^2+(2n+1-x)^2+2y^2)}{((2n+x)^2+y^2)^2((2n+1-x)^2+y^2)^2}=\nonumber\\ & =\sum _{n=1}^{\infty } \dfrac{4n-3}{((2n-2+x)^2+y^2)((2n-1-x)^2+y^2)}*\nonumber\\ &*\left(\dfrac{1}{(2n-2+x)^2+y^2}+\dfrac{1}{(2n-1-x)^2+y^2} \right)>\nonumber\\ &>\sum _{n=1}^{\infty } \dfrac{4n-1}{((2n-1)^2+y^2)((2n)^2+y^2)}\left(\dfrac{1}{(2n-1)^2+y^2}+\dfrac{1}{(2n)^2+y^2} \right)-\nonumber\\ &-\sum _{n=1}^{\infty } \dfrac{2}{((2n-1)^2+y^2)((2n)^2+y^2)}\left(\dfrac{1}{(2n-1)^2+y^2}+\dfrac{1}{(2n)^2+y^2} \right)>\nonumber\\ &>\sum _{n=1}^{\infty } \left(\dfrac{1}{((2n-1)^2+y^2)^2}-\dfrac{1}{((2n)^2+y^2)^2} \right) -\nonumber\\ &-\sum _{n=1}^{\infty } \dfrac{4}{((2n-1)^2+y^2)^3}.\nonumber \end{eqnarray} Here: \begin{eqnarray} &\sum _{n=1}^{\infty } \left(\dfrac{1}{((2n-1)^2+y^2)^2}-\dfrac{1}{((2n)^2+y^2)^2} \right)=\nonumber\\ &=-\dfrac{1}{2y}\dfrac{d}{d y}\left( \sum _{n=1}^{\infty } \dfrac{1}{(2n-1)^2+y^2}-\dfrac{1}{(2n)^2+y^2}\right)=\nonumber\\ &=-\dfrac{1}{2y}\dfrac{d}{d y}\left(\dfrac{\pi}{4y}\tanh \dfrac{\pi y}{2} +\dfrac{1}{2y^2}-\dfrac{\pi}{4y}\coth \dfrac{\pi y}{2}\right)=\nonumber\\ &= \dfrac{\pi}{8y^3}\tanh \dfrac{\pi y}{2}-\dfrac{\pi^2}{16y^2}\dfrac{1}{\cosh^2 \dfrac{\pi y}{2}} +\dfrac{1}{2y^4}-\dfrac{\pi}{8y^3}\coth \dfrac{\pi y}{2}-\dfrac{\pi^2}{16y^2}\dfrac{1}{\sinh^2 \dfrac{\pi y}{2}}.\nonumber \end{eqnarray} And: \begin{eqnarray} &\sum _{n=1}^{\infty } \dfrac{4}{((2n-1)^2+y^2)^3}=\nonumber\\ &=-\dfrac{1}{2y}\dfrac{d}{dy}\left(-\dfrac{1}{y}\dfrac{d}{dy}\left( \sum _{n=1}^{\infty } \dfrac{1}{(2n-1)^2+y^2} \right) \right)=\nonumber \end{eqnarray} \begin{eqnarray} &=-\dfrac{1}{2y}\dfrac{d}{dy}\left(-\dfrac{1}{y}\dfrac{d}{dy}\left( \dfrac{\pi}{4y}\tanh \dfrac{\pi y}{2}\right) \right)=\nonumber\\ &=-\dfrac{1}{2y}\dfrac{d}{dy}\left( \dfrac{\pi}{4y^3}\tanh \dfrac{\pi y}{2}-\dfrac{\pi^2}{8y^2}\dfrac{1}{\cosh^2 \dfrac{\pi y}{2}}\right) =\nonumber\\ &=\dfrac{3\pi}{8y^5}\tanh \dfrac{\pi y}{2}-\dfrac{\pi^2}{16y^4}\dfrac{1}{\cosh^2 \dfrac{\pi y}{2}}-\dfrac{\pi^2}{8y^4}\dfrac{1}{\cosh^2 \dfrac{\pi y}{2}}-\dfrac{\pi^3}{16y^3}\dfrac{\tanh \dfrac{\pi y}{2}}{\cosh^2 \dfrac{\pi y}{2}}=\nonumber\\ &=-\dfrac{\pi^2}{8y^3 \cosh^2 \dfrac{\pi y}{2}}\left( \dfrac{3}{2y}+\dfrac{\pi}{2} \tanh \dfrac{\pi y}{2} \right) +\dfrac{3\pi}{8y^5}\tanh \dfrac{\pi y}{2}. \nonumber \end{eqnarray} \\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} Hence: \begin{eqnarray} &\sum_{n=0}^{\infty } \dfrac{4n+1}{((2n+x)^2+y^2)((2n+1-x)^2+y^2)}-\nonumber\\ & -2y^2\sum _{n=0}^{\infty } \dfrac{(4n+1)((2n+x)^2+(2n+1-x)^2+2y^2)}{((2n+x)^2+y^2)^2((2n+1-x)^2+y^2)^2}<\nonumber\\ &<\dfrac{1}{2y^2}+\dfrac{\pi}{4y^3}\tanh \dfrac{\pi y}{2}+\dfrac{1}{y^4}-\nonumber\\ &-\dfrac{\pi}{4y}\left( \coth \dfrac{\pi y}{2}-\tanh \dfrac{\pi y}{2}\right)-\dfrac{\pi^2}{8y^2}\dfrac{1}{\cosh^2 \dfrac{\pi y}{2}}-\nonumber\\ & -2y^2\left(\dfrac{\pi}{8y^3}\tanh \dfrac{\pi y}{2}-\dfrac{\pi^2}{16y^2}\dfrac{1}{\cosh^2 \dfrac{\pi y}{2}} +\dfrac{1}{2y^4}-\right. \nonumber\\ &\left.
-\dfrac{\pi}{8y^3}\coth \dfrac{\pi y}{2}-\dfrac{\pi^2}{16y^2}\dfrac{1}{\sinh^2 \dfrac{\pi y}{2}} \right)-\nonumber\\ &-\left(-\dfrac{\pi^2}{8y^3 \cosh^2 \dfrac{\pi y}{2}}\left( \dfrac{3}{2y}+\dfrac{\pi}{2} \tanh \dfrac{\pi y}{2} \right) +\dfrac{3\pi}{8y^5}\tanh \dfrac{\pi y}{2} \right)=\nonumber \end{eqnarray} \begin{eqnarray} \label{eq_First_delta} &=\dfrac{1}{y^2}\left( -\dfrac{1}{2}-\dfrac{\pi^2}{8}\dfrac{1}{\cosh^2 \dfrac{\pi y}{2}}-\dfrac{3\pi}{8y^3}\tanh \dfrac{\pi y}{2}+\right. \nonumber\\ &\left. +\dfrac{\pi^2}{8}\dfrac{y^2}{\cosh^2 \dfrac{\pi y}{2}}+\dfrac{\pi^2}{8}\dfrac{y^2}{\sinh^2 \dfrac{\pi y}{2}}+\right. \nonumber\\ &\left. +\dfrac{\pi}{4y}\tanh \dfrac{\pi y}{2}+\dfrac{1}{y^2}+\dfrac{\pi^2}{8y \cosh^2 \dfrac{\pi y}{2}}\left( \dfrac{3}{2y}+\dfrac{\pi}{2} \tanh \dfrac{\pi y}{2} \right)\right). \end{eqnarray} \\ \linebreak Let's consider the positive summands inside the outer bracket on the right-hand side of inequality \eqref{eq_First_delta} for $ y \geqslant 4 $: \\ \linebreak The derivative \begin{eqnarray} \left( \dfrac{y^2}{\cosh^2 \dfrac{\pi y}{2}}\right)^{\prime}=y\dfrac{2 \cosh \dfrac{\pi y}{2} - \pi y \sinh \dfrac{\pi y}{2} }{\cosh^3 \dfrac{\pi y}{2}}<0,\nonumber \end{eqnarray} since for all $ y \geqslant 4 $ \[ \dfrac{2}{\pi} \coth \dfrac{\pi y}{2}<y. \] Similarly, the derivative \begin{eqnarray} \left( \dfrac{y^2}{\sinh^2 \dfrac{\pi y}{2}}\right)^{\prime}=y\dfrac{2 \sinh \dfrac{\pi y}{2} - \pi y \cosh \dfrac{\pi y}{2} }{\sinh^3 \dfrac{\pi y}{2}}<0,\nonumber \end{eqnarray} since for all $ y \geqslant 4 $ \[ \dfrac{2}{\pi} \tanh \dfrac{\pi y}{2}<y. \] Hence, for all $ y \geqslant 4 $: \begin{eqnarray} \dfrac{\pi^2}{8}\dfrac{y^2}{\cosh^2 \dfrac{\pi y}{2}} \leqslant \dfrac{\pi^2}{8}\dfrac{16}{\cosh^2 2\pi}<0.0002754,\nonumber\\ \dfrac{\pi^2}{8}\dfrac{y^2}{\sinh^2 \dfrac{\pi y}{2}} \leqslant \dfrac{\pi^2}{8}\dfrac{16}{\sinh^2 2\pi}<0.0002754.\nonumber \end{eqnarray} Further, for all $ y \geqslant 4 $: \[ \dfrac{\pi}{4y}\tanh \dfrac{\pi y}{2} < \dfrac{\pi}{16}< 0.1963496, \] \[ \dfrac{1}{y^2} \leqslant 0.0625, \] \[ \dfrac{\pi^2}{8y \cosh^2 \dfrac{\pi y}{2}}\left( \dfrac{3}{2y}+\dfrac{\pi}{2} \tanh \dfrac{\pi y}{2} \right) < \dfrac{\pi^2}{32 \cosh^2 2\pi}\left( \dfrac{3}{8}+\dfrac{\pi}{2} \right)<0.0000084. \] \\ \linebreak \marginpar{$\textcolor{yellow}{\bullet}$} Hence, for all $ y \geqslant 4 $, the total sum of the positive summands in the outer bracket does not exceed $ \dfrac{1}{2} $: \begin{eqnarray} & \dfrac{\pi^2}{8}\dfrac{y^2}{\cosh^2 \dfrac{\pi y}{2}}+\dfrac{\pi^2}{8}\dfrac{y^2}{\sinh^2 \dfrac{\pi y}{2}}+ \nonumber\\ & +\dfrac{\pi}{4y}\tanh \dfrac{\pi y}{2}+\dfrac{1}{y^2}+\dfrac{\pi^2}{8y \cosh^2 \dfrac{\pi y}{2}}\left( \dfrac{3}{2y}+\dfrac{\pi}{2} \tanh \dfrac{\pi y}{2} \right)<0.2594088.\nonumber \end{eqnarray} \\ \linebreak This means that for all $ y \geqslant 4, \; \; 0 <x \leqslant \dfrac{1}{2} $, the second factor on the right-hand side of equality \eqref{eq_DivXX_line} does not vanish; \\ hence from \eqref{eq_DivXX_0} and \eqref{eq_DivXX_line}: \[ x=\dfrac{1}{2}. \] \\ \marginpar{$\textcolor{green}{\bullet}$} In the converse direction, the validity of the statement of Lemma 3 is obvious. \\ \linebreak $ \square $
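\\ \linebreak The hyperbolic bounds used in the proof above are likewise straightforward to verify numerically; the following Python lines are an illustrative check at $ y=4 $ (the extremal point of the restriction $ y \geqslant 4 $), and are not part of the argument: \begin{verbatim}
import math

y = 4.0
ch2 = math.cosh(math.pi * y / 2) ** 2
sh2 = math.sinh(math.pi * y / 2) ** 2
th = math.tanh(math.pi * y / 2)

terms = [
    (math.pi ** 2 / 8) * y * y / ch2,        # < 0.0002754
    (math.pi ** 2 / 8) * y * y / sh2,        # < 0.0002754
    (math.pi / (4 * y)) * th,                # < 0.1963496
    1 / (y * y),                             # = 0.0625
    (math.pi ** 2 / (8 * y * ch2)) * (3 / (2 * y) + (math.pi / 2) * th),  # < 0.0000084
]
print(sum(terms))   # about 0.2594, below 0.2594088 and well below 1/2
\end{verbatim}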
\\ \linebreak So, assuming that an arbitrary nontrivial root $ q $ of the zeta function belongs to the union $ \mathcal{P}_1 \cup \mathcal{P}_2 $, we have found that it belongs only to $ \mathcal{P}_2 $, i.e. $ \mathcal{P}_1=\varnothing $. \\ \linebreak And since $ \mathcal{P}_1 =\varnothing \Leftrightarrow \mathcal{P}_3=\varnothing $, we have: \\ \[\mathcal{P}_3=\mathcal{P}_1=\varnothing,\;\; \mathcal{P}=\mathcal{P}_2.\] \\ This proves the basic statement and, with it, the conjecture made by Bernhard Riemann about the real parts of the nontrivial zeros of the zeta function. \\ \renewcommand{\refname}{REFERENCES:}
\section{Introduction} Let $F_\infty$ be the free group on a countable number of generators and let $w$ be a non-trivial reduced word in $F_\infty$ on $k$ generators, say $w=w(x_1,...,x_k)$. Given a group $G$ we say that $w$ is an \textit{identity}, or a \textit{law}, for $G$, if $w(g_1,...,g_k)=1$ for every $(g_1,...,g_k)\in G^k$. We denote by $\alpha(G)$ the length of the shortest identity of the group $G$ (on any number of generators). Let $\mathbb{F}_q$ be the finite field with $q$ elements, where $q=p^m$ is a prime power, and let $G$ be a finite simple group of Lie type over $\mathbb{F}_q$ of rank $r$. Here, if $G$ is untwisted, $r$ is the rank of the ambient simple algebraic group, while we define $r$ for the twisted groups as follows (see \cite[Sec. 13.1]{Car} and also \cite[Prop. 2.3.2]{GLS}): \begin{center} \begin{tabular}{ |l | c| c |c|c|c|c|c|c| r | } \hline type & ${}^2A_{2d}$ & ${}^2A_{2d-1}$ & ${}^2D_{d+1}$ & ${}^3D_4$ & ${}^2E_6$ & ${}^2F_4$ & ${}^2G_2$ & ${}^2B_2$ \\ \hline rank & $d$ & $d$ & $d$ & $2$ & $4$ & $2$& $1$& $1$ \\ \hline \hline \end{tabular} \end{center} Let $q^*(G)$ be the number of elements of the field where the group $G$ is realized. For example, $q^*(A_n(q))=q$, $q^*({}^3D_4(q))=q^3$ and $q^*({}^2A_{2d}(q))=q^2$. See Section \ref{not_def} for more details. The main result of this paper is the following: \bthm \label{thm:main} Let $G$ be a finite simple group of Lie type over $\mathbb{F}_q$ of rank $r$. Then the length of the shortest identity of $G$ satisfies $$\frac{q^*(G)^{\frac{r}{4}}-1}{3} \leq \alpha(G) < (31 r+2)^3q^{31 r}.$$ Furthermore, we give an explicit construction of an identity of length less than the upper bound. \ethm In the case of $G=A_d(q)=PSL_{d+1}(q)$ we give a more precise statement, and then deduce Theorem \ref{thm:main} from this special case. Indeed, we prove the following: \bthm \label{thm:sl} The length of the shortest identity of $A_d(q)$ satisfies $$ \frac{q^{\lfloor\frac{d+1}{2}\rfloor}-1}{3} \leq \alpha(A_d(q)) < (d+3)^2dq^{d+1}.$$ Furthermore, we give an explicit construction of an identity of length less than the upper bound. \ethm In particular, Theorem \ref{thm:sl} improves a result of Gamburd et al. \cite[Prop. 11]{GHSSV} which states that $\alpha(SL_2(p)) \geq c\frac{p}{\log p}$ for some constant $c$. The proof of Theorem \ref{thm:main} is based on the fact that if $G$ is a finite simple group of Lie type over $\mathbb{F}_q$ (with the exception of the Suzuki groups) then there exist two positive integers $c_1$ and $c_2$, which depend only on the type and the rank of $G$, such that $$A_1(q^{c_1}) \leq G \leq A_{c_2}(q). $$ Now since an identity of a group is inherited by its subgroups, we observe that Theorem \ref{thm:main} follows from Theorem \ref{thm:sl}. \begin{rem} If a group $G$ is a central extension of a group $Z$ by $H$ (i.e.\ $H=G/Z$), and a word $1\neq w(x_1,...,x_k) \in F_\infty$ is an identity of $H$, then it is easy to check that the word $[w,x_{k+1}] \in F_\infty$ is an identity of $G$. Furthermore, if $1 \neq w(x_1,...,x_k) \in F_\infty$ is an identity of $G$ then it is an identity of $H$. Therefore we obtain similar results for quasisimple groups. \end{rem} Let $F_k$ be the free group on $k$ generators $x_1,...,x_k$. Given a word $w(x_1,...,x_k) \in F_k$ we define $l(w)$ to be the length of $w$. Let $1 \neq w \in F_k$ be an identity for a finite group $G$ and let $S=\{g_1,...,g_k\}\subset G$.
Then starting at any vertex in the Cayley graph $Cay(G,S)$, the walk $w(g_1,...,g_k)$ is a closed walk of length $l(w)$. Hence, in some sense, $\alpha(G)$ is a universal girth of the Cayley graphs of the group. Recently there has been great interest in the study of word maps in finite simple groups and in algebraic groups (see \cite{Lar,LS,NS1,Sh}). Let $w=w(x_1,...,x_k)$ be a non-trivial word in $F_k$, where $k \geq 1$, and let $G$ be a group. We define $$w(G)=\langle w(g_1,...,g_k):(g_1,...,g_k) \in G^k\rangle.$$ The next result follows immediately from Theorem \ref{thm:main} (since the hypothesis on $l(w)$ implies that $w(G)$ is a non-trivial normal subgroup of $G$). \bco Let $G$ be a finite simple group of Lie type over $\mathbb{F}_q$ of rank $r$, and let $w \in F_k$ be a non-trivial word with $l(w) < \frac{q^*(G)^{\frac{r}{4}}-1}{3}$, where $k \geq 1$. Then $w(G)=G.$ \eco \subsection{Notation and Definitions} \label{not_def} Let $G$ be a group; for every $g,h \in G$ we define $g^h=hgh^{-1}$ and $[g,h]=ghg^{-1}h^{-1}$. In case $G$ is finite, we write $exp(G)$ for the exponent of $G$. In addition, for an element $g\in G$ we define $ord(g)$ to be the order of $g$. For $x \in \mathbb{R}$ we write $\lfloor x \rfloor$ for the greatest integer less than or equal to $x$. The untwisted groups $A_d(q),B_d(q),C_d(q),D_d(q),E_6(q),E_7(q),E_8(q),F_4(q),G_2(q)$ are realized over the finite field $\mathbb{F}_q$. The twisted groups $${}^2A_{2d}(q),{}^2A_{2d-1}(q),{}^2D_{d+1}(q),{}^3D_4(q),{}^2E_6(q),$$ are realized over finite fields with $q^2,q^2,q^2,q^3,q^2$ elements respectively, and the Ree and Suzuki groups ${}^2F_4(q),{}^2G_2(q),{}^2B_2(q),$ are realized over finite fields with $q=2^{2n+1},q=3^{2n+1},q=2^{2n+1}$ elements respectively (see \cite[ch.~13]{Car}). We shall not go into further details about the structure and construction of finite simple groups of Lie type; we refer the reader to the book of Carter \cite{Car}, which is our main reference (see also \cite{GLS}). \section{Previous Work} \label{pre_work} A \textit{variety} of groups is a class of groups that satisfy a given set of laws (see \cite[ch. 1]{Ne}). By Birkhoff's theorem \cite{Be} each variety $\hbox{\goth B}\hskip 2.5pt$ is defined by a suitable set of words $B \subseteq F_\infty$, that is, $\hbox{\goth B}\hskip 2.5pt$ consists of all the groups $G$ on which $w(g_1,...,g_k)=1$ holds for each word $w(x_1,...,x_k)\in B$ and for each set of elements $g_1,...,g_k \in G$. For example, the variety defined by the law $$\{[x_1,x_2]=x_1x_2x_1^{-1}x_2^{-1}=1\}$$ is the class of abelian groups. In the language of varieties, the main aim of this paper is to study the length of the shortest law in the variety which is generated by all the laws in a finite simple group of Lie type. In the 1960s, various papers were written on varieties of groups, which were mainly concerned with their qualitative properties. The most notable contribution is Hanna Neumann's book \cite{Ne}. In this book, she raised (\cite[p. 166]{Ne}) the following question: \begin{quote} \textit{Is there a law which is satisfied in an infinite number of non-isomorphic non-abelian finite simple groups?} \end{quote} G. A. Jones \cite{Jon} gave a negative answer to this question, but his proof does not give an explicit bound on the length of the shortest identity in each family of finite simple groups of Lie type (except for the case of Suzuki groups). Our work can be thought of as a quantitative version of Jones's results.
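To make the quantity $\alpha(G)$ concrete, the following Python sketch searches for a shortest law of a very small group by brute force. It is an illustration only: the choice of $S_3$, the restriction to two variables, and the search depth are simplifying assumptions made here, so the search finds the shortest \textit{two-variable} law of $S_3$ rather than establishing $\alpha(S_3)$ itself. \begin{verbatim}
from itertools import product

# S_3 as permutation tuples, with composition and inversion.
def compose(p, q):            # (p*q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

S3 = [(0,1,2), (0,2,1), (1,0,2), (1,2,0), (2,0,1), (2,1,0)]
e = (0, 1, 2)

def is_law(word, G):
    # word is a string over 'x','X','y','Y', where X = x^{-1}, Y = y^{-1}
    for g, h in product(G, repeat=2):
        table = {'x': g, 'X': inverse(g), 'y': h, 'Y': inverse(h)}
        val = e
        for c in word:
            val = compose(val, table[c])
        if val != e:
            return False
    return True

def reduced_words(length):    # freely reduced words on x, y of given length
    inv = {'x': 'X', 'X': 'x', 'y': 'Y', 'Y': 'y'}
    def rec(prefix):
        if len(prefix) == length:
            yield prefix
            return
        for c in 'xXyY':
            if not prefix or inv[c] != prefix[-1]:
                yield from rec(prefix + c)
    yield from rec('')

for L in range(1, 7):
    laws = [w for w in reduced_words(L) if is_law(w, S3)]
    if laws:
        print(L, laws[0])     # prints: 6 xxxxxx
        break
\end{verbatim} On this restricted search space the first law found is $x^6$, i.e.\ the exponent law of $S_3$; no shorter two-variable law exists.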
A \textit{basis} for a variety $\hbox{\goth B}\hskip 2.5pt$ is a set of laws such that its closure is $\hbox{\goth B}\hskip 2.5pt$ (for the definition of `closure', see \cite[ch. 1]{Ne}). Oates and Powell in \cite{OP} proved that every variety generated by a finite group has a finite basis. It is clear that if we know a basis for the variety generated by the laws in a given group $G$, then the minimum length of a law in this basis, is an upper bound for the shortest identity of the group. In the literature a few attempts have been made to find an explicit basis for the variety generated by a finite non-abelian simple group. J. Cossey and S. Macdonald \cite{CM} gave a finite basis for the set of laws in $PSL_2(5)$, and this was extended in \cite{CMS} to $PSL_2(p^n)$ with $p^n\leq 11$. B. Southcott \cite{So1,So2} gave a basis for the family $PSL_2(2^n)$, but the length of each element in his basis is greater than the upper bound of the shortest identity we state in Theorem \ref{thm:sl}. Let $\mathcal{X}$ be any infinite set of groups. A group $G$ is said to be \textit{residually} $\mathcal{X}$, if for every $1\neq g \in G$ there exists an epimorphism $\varphi$ from $G$ to some $H \in \mathcal{X}$ such that $\varphi(g)\neq 1$. Suppose that the free group on $k$ generators $F_k$ is residually $\mathcal{X}$, and for a given group $H \in \mathcal{X}$ we can determine the maximal length of a non-trivial word $w$ in $F_k$ such that there exists an epimorphism $\varphi$ from $F_k$ to $H$ with $\varphi(w)\neq 1$. Then this gives a lower bound on the length of the shortest identity (on $k$ generators) in $H$. W. Magnus \cite[p. 309]{Ma} raised the following related problem: \begin{quote}\textit{Let $\mathcal{X}$ be any infinite set of non-abelian finite simple groups. Is the free group $F_k$ on $k\geq 1$ generators residually $\mathcal{X}$?} \end{quote} \noindent T. Weigel \cite{We1,We2,We3} gave a complete answer to this question. From his proof, we can conclude that in the case of a classical simple group $G$ over $\mathbb{F}_q$ where $q=p^m$ is a prime power, the length of the shortest identity (on two generators) is at least $p$, and for the exceptional groups $E_6(q),E_7(q),E_8(q),G_2(q),F_4(q)$ and the twisted groups ${}^3D_4(q),{}^2E_6(q),{}^2F_4(q)$, the length of the shortest identity (on two generators) is at least $\log q$. Weigel's work involves `identities' on two generators (and their inverses) of the group; in this paper we consider identities on any number of elements of the group (not necessarily generators). \section {Proof of Theorem \ref{thm:sl}} \label{proof_thm_sl} \subsection{The Upper Bound} In this section we give an explicit construction of an identity for the group $SL_n(q)$ and this gives the upper bound stated in Theorem \ref{thm:sl}. This construction is based on the exponent of the group. \blem \label{lem_exp} Let $q=p^f$ where $p$ is a prime. Then the exponent of $SL_n(q)$ is $$p^e\cdot lcm[q-1,q^2-1,...,q^{n-1}-1,\frac{q^n-1}{q-1}],$$ where $e$ is the minimal positive integer such that $p^e \geq n$. \elem \begin{proof} Let $x\in GL_n(q)$ be a non-trivial element. Write $x=x_s\cdot x_u$ in Jordan form, where $x_u\in GL_n(q)$ is unipotent, $x_s \in GL_n(q)$ is diagonalizable over a splitting field $\mathbb{F}_{q^r} (r \leq n)$ and $[x_s,x_u]=1$. Here $ord(x_u)$ is a power of $p$, while $ord(x_s)$ is coprime to $p$. Suppose $x_u=1$. Then $x$ is diagonalizable over $\mathbb{F}_{q^r}$, and therefore $x^{q^r-1}=x_s^{q^r-1}=1$. 
Furthermore, for every $1 \leq j\leq n$, there exists an element in $GL_n(q)$ of order $q^j-1$. This follows from the fact that the field $\mathbb{F}_{q^j}$ can be considered as a vector space over $\mathbb{F}_q$ of dimension $j$. Take a generator of the cyclic group $\mathbb{F}_{q^j}^*$; it is of order $q^j-1$ and it acts as a linear transformation on $\mathbb{F}_q^j$, hence it belongs to $GL_n(q)$. Now suppose $x_u$ is non-trivial and let $m=ord(x)$. Then $x_s^mx_u^m=1$ since $[x_s,x_u]=1$, so $m$ is divisible by $ord(x_s)$ and $ord(x_u)$. Now every unipotent matrix $x_u$ can be written as $x_u=1+N$, where $1$ is the identity matrix in $GL_n(q)$ and $N$ is an $n\times n$ nilpotent matrix over $\mathbb{F}_{q}$. For every $i \in \mathbb{N}$, we have $x_u^{p^i}=(1+N)^{p^i}=1+N^{p^i}$. Let $e$ be the minimal positive integer such that $p^e \geq n$. Then it is easy to check that $p^e$ is the exponent of the upper unitriangular matrices in $GL_n(q)$, and there exists a unipotent matrix in $GL_n(q)$ with this order. For $SL_n(q)$ all the above holds except for the fact that the maximal order of an element in $SL_n(q)$ is $\frac{q^n-1}{q-1}$ and not $q^n-1$ as in $GL_n(q)$. \end{proof} \begin{rem} There exists a constant $c$ such that $exp(SL_n(q))\geq q^{cn^2}$ (see \cite[Lemma 2.3]{BMP} and the discussion afterward). Thus our bound in Theorem \ref{thm:sl} is shorter than the length of the exponent identity $x^{exp(SL_n(q))}$. \end{rem} \blem \label{lem_cons_word} Let $G$ be a group and let $w_1,...,w_m$ be distinct non-trivial power-words in one variable in the free group $F_{k+1}$ on $k+1$ generators $x_1,...,x_{k+1}$ where $k=2^{\lfloor \log_2m \rfloor}$. Suppose that $l(w_1) \geq l(w_2) \geq ... \geq l(w_m)$ and that for each element $g \in G$, there exists some $i$ such that $w_i(g)=1$. Then there exists a non-trivial word $w\in F_{k+1}$ of length at most $4m^2(l(w_1)+1)$ which is an identity in $G$. \elem \begin{proof} For $1 \leq i \leq \lfloor\frac{m}{2}\rfloor$, set $u_i=[w_{2i-1}(x_1),w_{2i}(x_1^{x_{i+1}})]$. If $m$ is odd then set $$u_{\lfloor\frac{m}{2}\rfloor+1}=[w_m(x_1),x_{\lfloor\frac{m}{2}\rfloor+2}],$$ and if $m$ is even set $$u_{\lfloor\frac{m}{2}\rfloor+1}=[x_1,x_{\lfloor\frac{m}{2}\rfloor+2}].$$ For $\lfloor\frac{m}{2}\rfloor+2 \leq i \leq k$, set $u_i=[x_1,x_{i+1}]$. Let $j=2^e$ for some positive integer $e \leq \lfloor \log_2m \rfloor$, and define a recursive function $f$ in the following way: $$f(u_1,...,u_{2^{e-1}},...,u_j)=[f(u_1,...,u_{2^{e-1}}),f(u_{2^{e-1}+1},...,u_j)]$$ and $f(u_i,u_{i+1})=[u_i,u_{i+1}]$. Let $w=f(u_1,...,u_k).$ It is clear that $w$ is a non-trivial word in $F_{k+1}$ (since we have a new letter $x_{j+1}$ in each $u_j$, which appears only in $u_j$). To show that $w$ is an identity of $G$ it is enough to note that $x$ and $x^y$ have the same order, hence at least one of the commutators in the expression for $w$ collapses and hence so does the whole word. Now since $w_i(x_1^{x_j})=x_jw_i(x_1)x_j^{-1}$, the length of the word $[u_1,u_2]$ is at most $2^4\cdot l(w_1)+2^4$. By induction on $e$, since $l(w_1)\geq l(w_i)$ for all $i$, we get that the length of the word $f(u_1,...,u_{2^e})$ is at most $$2^{2(e+1)} l(w_1)+ 2^{2(e+1)}.$$ Therefore, since $e \leq \lfloor \log_2m \rfloor$, the length of $w$ is at most $$2^{2({\lfloor \log_2m \rfloor}+1) }l(w_1)+2^{2({\lfloor \log_2m \rfloor}+1 )} \leq 4m^2l(w_1)+4m^2 =4m^2(l(w_1)+1).$$ \end{proof}
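The construction in the proof above is easy to experiment with. The following Python sketch is illustrative only: the exponents and the auxiliary generator names (\texttt{g0}, \texttt{g1}, \texttt{h}, \texttt{z3}, standing in for the letters $x_{i+1}$) are made up, and for brevity it implements the odd-$m$ case occurring in the example. It builds $w=f(u_1,...,u_k)$ for power words $w_i=x_1^{e_i}$, freely reduces it, and compares its length with the bound $4m^2(l(w_1)+1)$. \begin{verbatim}
# Free-group words as lists of (generator, +1/-1), with free reduction.
def reduce_word(w):
    out = []
    for s in w:
        if out and out[-1][0] == s[0] and out[-1][1] == -s[1]:
            out.pop()          # cancel adjacent inverse letters
        else:
            out.append(s)
    return out

def inv(w):
    return [(g, -e) for g, e in reversed(w)]

def comm(u, v):                # the commutator [u, v]
    return reduce_word(u + v + inv(u) + inv(v))

def conj(w, g):                # w with x1 replaced by x1^{g} = g x1 g^{-1}
    res = []
    for s in w:
        res += [(g, 1), s, (g, -1)]
    return reduce_word(res)

def power(gen, k):             # the power word gen^k
    return [(gen, 1)] * k

def f(us):                     # the recursive nested commutator of the lemma
    if len(us) == 2:
        return comm(us[0], us[1])
    h = len(us) // 2
    return comm(f(us[:h]), f(us[h:]))

exps = [120, 60, 24, 8, 6]     # illustrative exponents, m = 5, l(w1) = 120
m = len(exps)
k = 2 ** (m.bit_length() - 1)  # k = 2^{floor(log2 m)} = 4
us = []
for i in range(m // 2):        # u_i = [w_{2i-1}(x1), w_{2i}(x1^{x_{i+1}})]
    us.append(comm(power('x1', exps[2 * i]),
                   conj(power('x1', exps[2 * i + 1]), 'g%d' % i)))
us.append(comm(power('x1', exps[-1]), [('h', 1)]))   # m is odd here
while len(us) < k:             # remaining u_i = [x1, x_{i+1}]
    us.append(comm([('x1', 1)], [('z%d' % len(us), 1)]))
w = f(us)
print(len(w), 4 * m * m * (exps[0] + 1))   # reduced length vs. the bound
\end{verbatim}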
\bpr \label{pr:sl:iden} Let $q=p^f$ be a prime power and let $G=SL_n(q)$ where $n\geq2$. Then there exists an identity in $G$ of length at most $ (n+2)^2 p^e q^{n-1}$, where $e$ is the minimal positive integer such that $p^e \geq n$.\epr \begin{proof} Following the proof of Lemma \ref{lem_exp}, every element $g \in G$ satisfies at least one of the following words: $$X^{p^e\frac{q^n-1}{q-1}},X^{p^e(q^{n-1}-1)},X^{p^e(q^{n-2}-1)},...,X^{p^e(q-1)}.$$ If $g$ satisfies the first word, then it also satisfies $X^{\frac{q^n-1}{q-1}}$ since it has distinct eigenvalues in the splitting field $\mathbb{F}_{q^n}$, hence it is diagonalizable over $\mathbb{F}_{q^n}$. Now for every $ i \in \mathbb{N}$, $q^i-1$ divides $q^{2i}-1$, therefore every $g \in G$ satisfies at least one of the following words (ordered in decreasing length): $$w_1=X^{p^e(q^{n-1}-1)},w_2=X^{\frac{q^n-1}{(q-1)}},w_3=X^{p^e(q^{n-2}-1)},...,w_{\lceil\frac{n+1}{2}\rceil}=X^{p^e (q^{\lceil\frac{n+1}{2}\rceil}-1)}.$$ \noindent Now the number of these words is at most $m=\frac{n+2}{2}$, and the result follows by Lemma \ref{lem_cons_word}. \end{proof} If $w$ is an identity in the group $G$, it is easy to see that $w$ is also an identity for every quotient of $G$. Hence the identity we construct in Proposition \ref{pr:sl:iden} holds for $A_{n-1}(q)=PSL_n(q)$, and this gives the upper bound in Theorem \ref{thm:sl}. \subsection {The Lower Bound} \blem \label{len_even} Let $G$ be a finite group and $w$ an identity of $G$. If $l(w) < exp(G)$, then $l(w)$ is even. \elem \begin{proof} Let $w=w(x_1,...,x_k)$ be an identity in $G$ on $k$ elements. If $l(w)$ is odd, then there exists $ 1 \leq i \leq k$ such that the sum of the exponents of $x_i$ appearing in $w$ is odd. Hence if we set $x_j=1$ for all $j \neq i$, we get a power-word of length less than $exp(G)$, a contradiction. \end{proof} We start with the case $A_1(q)=PSL_2(q)$. \blem \label{lem:sl_2} Let $G=PSL_2(q)$; then $\alpha(G) \geq \frac{q-1}{3}$. \elem \begin{proof} If $q$ is even then $PSL_2(q)=SL_2(q)$. If $q$ is odd then the group $PSL_2(q)$ is isomorphic to $SL_2(q)/\mathbb{Z}_2$. In any case, if $w \in F_k$ is an identity of $PSL_2(q)$, then it is immediate to check that $w^2$ is an identity of $SL_2(q)$. So it is enough to prove that the length of the shortest identity of $SL_2(q)$ is at least $\frac{2}{3}(q-1)$. Let $$u(t)=\left(\begin{array}{cc} 1 & t \\ 0 & 1\end{array} \right), \tau= \left(\begin{array}{cc} 0 & -1 \\ 1 & 0\end{array} \right) ,h(\lambda) = \left(\begin{array}{cc} \lambda & 0 \\ 0 & \lambda^{-1} \end{array} \right)$$ where $t \in \mathbb{F}_q$ and $\lambda \in \mathbb{F}_q^*$. From the Bruhat decomposition for $SL_2(q)$, every element $g \in SL_2(q)$ has a unique expression in one of the following forms (see \cite[Cor. 8.4.4]{Car}): $$g=u(a)h(\lambda) \mbox{ or } g=u(b)h(\gamma)\tau u(c).$$ Suppose $w=w(x_1,...,x_k)$ is a non-trivial reduced word in $F_k$ of length $l$ which is an identity of $SL_2(q)$. Since we are interested in deriving a lower bound, we may assume that $l$ is less than $exp(SL_2(q))$ (see Lemma \ref{lem_exp}), so $l$ is even by Lemma \ref{len_even}. Let $$M=\{a_i,b_i,c_i,\lambda_i,\gamma_i:1 \leq i \leq k \}$$ be a set of independent commuting indeterminates over $\mathbb{F}_q$, and let $$X_i=u(a_i)h(\lambda_i) \mbox{ and } Y_i=u(b_i)h(\gamma_i)\tau u(c_i).$$ Then $X_i$ and $Y_i$ are matrices with entries in $\mathbb{F}_q[M]$ and it is immediate to verify that the entries of $$\lambda_iX_i,\gamma_iY_i,\lambda_iX_i^{-1} \mbox{ and } \gamma_iY_i^{-1}$$ are polynomials of degree at most $2$ in the variables in $M$.
Let $n_i$ be the sum of the moduli of the exponents of $x_i$ appearing in $w$ (for example if $w=x_1x_2x_1^{-1}x_2^{-1}$ then $n_1=n_2=2$) and let $I_2$ be the identity matrix in $SL_2(q)$. For any $Z_i \in \{X_i,Y_i\}$, the matrix $$C(Z_1,...,Z_k)=\prod \limits_{i=1}^k\beta(Z_i)^{n_i}(w(Z_1,...,Z_k)-I_2)$$ \noindent where $\beta(X_i)= \lambda_i$ and $\beta(Y_i) = \gamma_i$, has entries in $\mathbb{F}_q[M]$ having degree at most $2n_i$ in each of the variables $a_i,b_i,c_i,\lambda_i,\gamma_i$. \noindent Now there are two cases to consider: \begin{enumerate} \item[(i)] For all the substitutions of $Z_i$ by $X_i$ or $Y_i$ we get $C(Z_1,...,Z_k)=0$. \item[(ii)] There is a substitution $(Z_1,...,Z_k)$ such that $C(Z_1,...,Z_k)\neq 0$. \end{enumerate} First let us consider case (i). Let $K$ be the algebraic closure of $\mathbb{F}_q$. Since for every substitution $C(Z_1,...,Z_k)$ is zero, we deduce that $w$ is a law on $SL_2(K)$. For every $n\in \mathbb{N}$, $SL_2(K)$ has a subgroup isomorphic to $SL_2(q^n)$, hence we obtain a law for the infinite family $\{SL_2(q^n)\}_{n=1}^\infty$. But the main theorem of Jones \cite{Jon} states that there is no law which is satisfied by an infinite family of non-abelian finite simple groups, a contradiction. Now let us consider case (ii). Let $(Z_1,...,Z_k)$ be a substitution such that the word $w(Z_1,...,Z_k)-I_2$ is not zero. Decompose $w(Z_1,...,Z_k)$ as a product of two words: $$w(Z_1,...,Z_k)=w_1(Z_1,...,Z_k)w_2(Z_1,...,Z_k)$$ where $w_1(Z_1,...,Z_k)$ is a word of length $\frac{1}{2}l$ (recall that $l$ is even). Let $T$ be the matrix $$T= \prod \limits_{i=1}^k\beta(Z_i)^{n_i}(w_1(Z_1,...,Z_k)-w_2(Z_1,...,Z_k)^{-1}).$$ \noindent Then there is an entry $T_{i,j}$ for some $1 \leq i,j \leq 2$, which is not formally zero. Let $M_1 \subseteq M$ be the set of indeterminates appearing in $T_{i,j}$. Then $T_{i,j}$ is a polynomial in $|M_1|$ variables of degree at most $\frac{3l}{2}$. Recall that if $f$ is a polynomial in $\mathbb{F}_q[M_1]$ with $deg(f)=d\geq 0$, then the equation $f=0$ has at most $dq^{|M_1|-1}$ solutions in $\mathbb{F}_q^{|M_1|}$ (see \cite[Thm. 6.13]{LN}). Therefore the equation $T_{i,j}=0$ has at most $\frac{3}{2}l \cdot q^{|M_1|-1}$ solutions, hence $l \geq \frac{2}{3}(q-1)$ (the $q-1$ factor is because $\lambda_i,\gamma_i \in \mathbb{F}_q^*$). \end{proof} It is easy to see that $A_1(q^n)=PSL_2({q^n})$ is a subgroup of $A_{2n-1}(q)=PSL_{2n}(q)$ (this follows from the fact that a $2$ dimensional vector space over $\mathbb{F}_{q^n}$ can be considered as a $2n$ dimensional vector space over $\mathbb{F}_q$), thus we obtain the following result: \bpr Let $G=A_d(q)$ where $d\geq 1$. Then $\alpha(G)>\frac{q^{\lfloor\frac{d+1}{2}\rfloor}-1}{3}$. \epr This completes the proof of Theorem \ref{thm:sl}. \section {Proof of Theorem \ref{thm:main}} \label{proof_thm_main} \subsection{The Lower Bound} \label{sec_lw} In this section we will show that if $G$ is a finite simple group of Lie type of rank $r$, and $G$ is not a Suzuki group (i.e.\ type ${}^2B_2$), then $G$ contains $A_{r'}(q')$, where $q'\leq q^*(G)$ and $r' \leq r$. Therefore, using Theorem \ref{thm:sl}, we obtain a lower bound for the shortest identity of $G$. For the Suzuki groups we use a lemma of Jones. \subsubsection{Untwisted groups} Let $G\neq A_d(q)$ be a simple untwisted group of Lie type over $\mathbb{F}_q$.
By considering the associated Dynkin diagram, it is clear that if $G$ has rank $d$ then $G$ contains $A_{d-1}(q)$, so Theorem \ref{thm:sl} implies that $\alpha(G) \geq \frac{q^{\lfloor \frac{d}{2}\rfloor}-1}{3}$. \subsubsection{Twisted groups} \textbf{Ree Groups:} It is known that $^2F_4(2^{2n+1})$ and $^2G_2(3^{2n+1})$ contain $A_1(2^{2n+1})$ and $A_1(3^{2n+1})$, respectively~\cite{Ti,Lh}. \noindent \textbf{Suzuki groups:} The following lemma is a quantitative version of a result of Jones. \blem The length of the shortest identity of $^2B_2(2^{2n+1})$ satisfies $$\alpha(^2B_2(2^{2n+1})) \geq \frac{2^{2n}-1}{1+2^n}.$$\elem For the proof we refer the reader to \cite[Lemma 5]{Jon}. Although this quantitative result is not stated there explicitly, it follows immediately from the proof (in fact, one can improve this bound by decomposing the identity word in the same way we did in the proof of Lemma \ref{lem:sl_2}).\\ \noindent \textbf{The twisted groups $^2A_d(q)$,$^2D_d(q)$,$^2E_6(q)$,$^3D_4(q)$:} The Dynkin diagrams of type $A_d,D_d,E_6$ and $D_4$ admit symmetries of order $2,2,2$ and $3$, respectively. The twisted groups $^2A_d(q)$,$^2D_d(q)$,$^2E_6(q)$ and $^3D_4(q)$ are subgroups of the untwisted groups $A_d(q^2),D_d(q^2),E_6(q^2),D_4(q^3)$, and they are the fixed points of an automorphism $\sigma$ (of $A_d(q^2)$ etc.) which maps each root element $X_\alpha(t)$ to $X_{\alpha'}(t^q)$, where $\alpha \mapsto \alpha'$ is a symmetry of the root system and $t \mapsto t^q$ is a field automorphism. The automorphism $\sigma$ has order $2,2,2,3$ respectively. By inspecting the Dynkin diagrams of type $A_{2d-1},A_{2d},D_d,E_6$ and $D_4$, together with the corresponding automorphisms and root relations, one can show that each of the following groups $G$ has a subgroup $H$ as given in the following table (see also \cite[Sec. 2]{Ni}): \begin{center} \begin{tabular}{ |l | c| c|c |c|c|c| r | } \hline $G$ & $^2A_2(q)$ & $^2A_{2d-1}(q)$& $^2A_{2d}(q),d >1$ & $^2E_6(q)$ & $^3D_4(q)$ \\ \hline $H$ & $A_1(q)$ & $A_{d-1}(q^2)$ & $A_{d-1}(q^2)$ & $A_{2}(q^2)$ & $A_1(q^3)$\\ \hline \hline \end{tabular} \end{center} We give the details in the case of ${}^2A_{2d-1}(q)$; a similar argument applies in each of the remaining cases. Let $\Pi=\{\omega_1,...,\omega_{2d-1}\}$ be a base for the root system of type $A_{2d-1}$. Now for every $t \in \mathbb{F}_{q^2}$ and for every $ 1 \leq i <d$ the element $X_{\omega_i}(t)X_{\omega_{2d-i}}(t^q)$ is fixed by the automorphism $\sigma$, and for every $t\in \mathbb{F}_q$ the element $X_{\omega_d}(t)$ is fixed by $\sigma$. It is a well-known fact that if $|i-j|>1$ then the root subgroups $X_{\omega_i}$ and $X_{\omega_j}$ commute, thus we get that the subgroup $$H=\langle X_{\omega_i}(t)X_{\omega_{2d-i}}(t^q): 1 \leq i <d,t\in \mathbb{F}_{q^2}\rangle$$ is isomorphic to $A_{d-1}(q^2)$. So for a twisted group $G$ in the above table with rank $r$, we deduce from Theorem \ref{thm:sl} that the shortest identity of $G$ has length at least $\frac{(q^*(G))^{\frac{r}{4}}-1}{3}$. The only case left is $^2D_{d+1}(q)$ (recall that the rank of $^2D_{d+1}(q)$ is $d$). Let $n=2^k$ for some integer $k\geq 2$ and fix $d$ such that $ n \leq d+1 <2n =2^{k+1}$. Then we have $^2D_{d+1}(q) \geq {^2D_{\frac{n}{2}}}(q^2) \geq A_1(q^n)$ (see \cite[Table 3.5.F]{KL}). Now since $2(\frac{d}{4}) < n$, we deduce that the shortest identity of $^2D_{d+1}(q)$ has length at least $ \frac{q^n-1}{3} > \frac{(q^2)^\frac{d}{4}-1}{3}$.
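As a quick illustrative check of this last step (not needed for the argument), note that the choice $n=2^k$ with $n \leq d+1 < 2n$ always forces $2n > d$, i.e.\ $n > \frac{d}{2}$; the following Python lines confirm this for small ranks: \begin{verbatim}
for d in range(3, 200):
    n = 2 ** ((d + 1).bit_length() - 1)   # largest power of two with n <= d+1
    assert n <= d + 1 < 2 * n and 2 * n > d
print("ok")
\end{verbatim}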
\subsection{The Upper Bound} \label{sec_up} In this final section we show that there exists a constant $c$ such that every finite simple group of Lie type over $\mathbb{F}_q$ of rank $r$ is isomorphic to a subgroup of $PSL_{cr}(q)$ or $SL_{cr}(q)$, and so the desired upper bound follows from Proposition \ref{pr:sl:iden}. Each finite simple group of Lie type can be constructed (see \cite[ch. 4]{Car}) as a subgroup of the automorphism group of the corresponding Lie algebra. In the following table we give the dimension of the Lie algebra $\hbox{\goth g}\hskip 2.5pt$ in terms of the untwisted Lie rank: \begin{center} \begin{tabular}{ |l | c| c |c|c|c|c|c|c|c| r | } \hline \hbox{\goth g}\hskip 2.5pt & $A_d$ & $B_d$ & $C_d$ & $D_d$ & $E_6$ & $E_7$ & $E_8$ & $F_4$ & $G_2$ \\ \hline dim \hbox{\goth g}\hskip 2.5pt & $d(d+2)$ & $d(2d+1)$ & $d(2d+1)$ & $d(2d-1)$ & $78$ & $133$& $248$& $52$ & $14$\\ \hline \hline \end{tabular} \end{center} It is well known (see \cite[Ch. 11]{Car}) that each of the simple classical groups $$A_d(q)\mbox{,}^2A_d(q),B_d(q),C_d(q),D_d(q)\mbox{,}^2D_d(q)$$ has a matrix representation as a subgroup of $PSL_{2d+1}(q^2)$. \noindent In addition, the above table indicates that the exceptional groups $$E_6(q),E_7(q),E_8(q),F_4(q),G_2(q)$$ are subgroups of $SL_{248}(q)$ (via the adjoint representation on the corresponding Lie algebra), so for the untwisted groups we can take $c=31$. Each twisted group $G$ is the set of fixed points of some automorphism of the corresponding untwisted group, hence a subgroup of $SL_{cr}(q)$ (or $PSL_{cr}(q)$) for some $c$. For example, $$^2F_4(q) < F_4(q^2)<SL_{52}(q^2)<SL_{104}(q).$$ One can check from the table above that $c = 31$, and this finishes the proof of Theorem \ref{thm:main}. \section {Acknowledgments} This paper is part of the author's PhD thesis. The author is grateful to his advisor Prof. Alex Lubotzky for introducing him to the problem and for useful discussions. Thanks to Prof. Inna (Korchagina) Capdeboscq, Prof. Nati Linial, Prof. Avinoam Mann, Dr. Nikolay Nikolov and Prof. Aner Shalev for useful remarks. Thanks are also due to M. Berman for his careful reading, and to the referee for his constructive suggestions and criticisms, which helped me improve this paper.
\section{Electronic Submission} \label{submission} Submission to ICML 2020 will be entirely electronic, via a web site (not email). Information about the submission process and \LaTeX\ templates are available on the conference web site at: \begin{center} \textbf{\texttt{http://icml.cc/}} \end{center} The guidelines below will be enforced for initial submissions and camera-ready copies. Here is a brief summary: \begin{itemize} \item Submissions must be in PDF\@. \item Submitted papers can be up to eight pages long, not including references, plus unlimited space for references. Accepted papers can be up to nine pages long, not including references, to allow authors to address reviewer comments. Any paper exceeding this length will automatically be rejected. \item \textbf{Do not include author information or acknowledgements} in your initial submission. \item Your paper should be in \textbf{10 point Times font}. \item Make sure your PDF file only uses Type-1 fonts. \item Place figure captions \emph{under} the figure (and omit titles from inside the graphic file itself). Place table captions \emph{over} the table. \item References must include page numbers whenever possible and be as complete as possible. Place multiple citations in chronological order. \item Do not alter the style template; in particular, do not compress the paper format by reducing the vertical spaces. \item Keep your abstract brief and self-contained, one paragraph and roughly 4--6 sentences. Gross violations will require correction at the camera-ready phase. The title should have content words capitalized. \end{itemize} \subsection{Submitting Papers} \textbf{Paper Deadline:} The deadline for paper submission that is advertised on the conference website is strict. If your full, anonymized, submission does not reach us on time, it will not be considered for publication. \textbf{Anonymous Submission:} ICML uses double-blind review: no identifying author information may appear on the title page or in the paper itself. Section~\ref{author info} gives further details. \textbf{Simultaneous Submission:} ICML will not accept any paper which, at the time of submission, is under review for another conference or has already been published. This policy also applies to papers that overlap substantially in technical content with conference papers under review or previously published. ICML submissions must not be submitted to other conferences during ICML's review period. Authors may submit to ICML substantially different versions of journal papers that are currently under review by the journal, but not yet accepted at the time of submission. Informal publications, such as technical reports or papers in workshop proceedings which do not appear in print, do not fall under these restrictions. \medskip Authors must provide their manuscripts in \textbf{PDF} format. Furthermore, please make sure that files contain only embedded Type-1 fonts (e.g.,~using the program \texttt{pdffonts} in linux or using File/DocumentProperties/Fonts in Acrobat). Other fonts (like Type-3) might come from graphics files imported into the document. Authors using \textbf{Word} must convert their document to PDF\@. Most of the latest versions of Word have the facility to do this automatically. Submissions will not be accepted in Word format or any format other than PDF\@. Really. We're not joking. Don't send Word. Those who use \textbf{\LaTeX} should avoid including Type-3 fonts. 
Those using \texttt{latex} and \texttt{dvips} may need the following two commands: {\footnotesize \begin{verbatim} dvips -Ppdf -tletter -G0 -o paper.ps paper.dvi ps2pdf paper.ps \end{verbatim}} It is a zero following the ``-G'', which tells dvips to use the config.pdf file. Newer \TeX\ distributions don't always need this option. Using \texttt{pdflatex} rather than \texttt{latex}, often gives better results. This program avoids the Type-3 font problem, and supports more advanced features in the \texttt{microtype} package. \textbf{Graphics files} should be a reasonable size, and included from an appropriate format. Use vector formats (.eps/.pdf) for plots, lossless bitmap formats (.png) for raster graphics with sharp lines, and jpeg for photo-like images. The style file uses the \texttt{hyperref} package to make clickable links in documents. If this causes problems for you, add \texttt{nohyperref} as one of the options to the \texttt{icml2020} usepackage statement. \subsection{Submitting Final Camera-Ready Copy} The final versions of papers accepted for publication should follow the same format and naming convention as initial submissions, except that author information (names and affiliations) should be given. See Section~\ref{final author} for formatting instructions. The footnote, ``Preliminary work. Under review by the International Conference on Machine Learning (ICML). Do not distribute.'' must be modified to ``\textit{Proceedings of the $\mathit{37}^{th}$ International Conference on Machine Learning}, Online, PMLR 119, 2020. Copyright 2020 by the author(s).'' For those using the \textbf{\LaTeX} style file, this change (and others) is handled automatically by simply changing $\mathtt{\backslash usepackage\{icml2020\}}$ to $$\mathtt{\backslash usepackage[accepted]\{icml2020\}}$$ Authors using \textbf{Word} must edit the footnote on the first page of the document themselves. Camera-ready copies should have the title of the paper as running head on each page except the first one. The running title consists of a single line centered above a horizontal rule which is $1$~point thick. The running head should be centered, bold and in $9$~point type. The rule should be $10$~points above the main text. For those using the \textbf{\LaTeX} style file, the original title is automatically set as running head using the \texttt{fancyhdr} package which is included in the ICML 2020 style file package. In case that the original title exceeds the size restrictions, a shorter form can be supplied by using \verb|\icmltitlerunning{...}| just before $\mathtt{\backslash begin\{document\}}$. Authors using \textbf{Word} must edit the header of the document themselves. \section{Format of the Paper} All submissions must follow the specified format. \subsection{Dimensions} The text of the paper should be formatted in two columns, with an overall width of 6.75~inches, height of 9.0~inches, and 0.25~inches between the columns. The left margin should be 0.75~inches and the top margin 1.0~inch (2.54~cm). The right and bottom margins will depend on whether you print on US letter or A4 paper, but all final versions must be produced for US letter size. The paper body should be set in 10~point type with a vertical spacing of 11~points. Please use Times typeface throughout the text. \subsection{Title} The paper title should be set in 14~point bold type and centered between two horizontal rules that are 1~point thick, with 1.0~inch between the top rule and the top edge of the page. 
Capitalize the first letter of content words and put the rest of the title in lower case. \subsection{Author Information for Submission} \label{author info} ICML uses double-blind review, so author information must not appear. If you are using \LaTeX\/ and the \texttt{icml2020.sty} file, use \verb+\icmlauthor{...}+ to specify authors and \verb+\icmlaffiliation{...}+ to specify affiliations. (Read the TeX code used to produce this document for an example usage.) The author information will not be printed unless \texttt{accepted} is passed as an argument to the style file. Submissions that include the author information will not be reviewed. \subsubsection{Self-Citations} If you are citing published papers for which you are an author, refer to yourself in the third person. In particular, do not use phrases that reveal your identity (e.g., ``in previous work \cite{langley00}, we have shown \ldots''). Do not anonymize citations in the reference section. The only exception are manuscripts that are not yet published (e.g., under submission). If you choose to refer to such unpublished manuscripts \cite{anonymous}, anonymized copies have to be submitted as Supplementary Material via CMT\@. However, keep in mind that an ICML paper should be self contained and should contain sufficient detail for the reviewers to evaluate the work. In particular, reviewers are not required to look at the Supplementary Material when writing their review. \subsubsection{Camera-Ready Author Information} \label{final author} If a paper is accepted, a final camera-ready copy must be prepared. For camera-ready papers, author information should start 0.3~inches below the bottom rule surrounding the title. The authors' names should appear in 10~point bold type, in a row, separated by white space, and centered. Author names should not be broken across lines. Unbolded superscripted numbers, starting 1, should be used to refer to affiliations. Affiliations should be numbered in the order of appearance. A single footnote block of text should be used to list all the affiliations. (Academic affiliations should list Department, University, City, State/Region, Country. Similarly for industrial affiliations.) Each distinct affiliations should be listed once. If an author has multiple affiliations, multiple superscripts should be placed after the name, separated by thin spaces. If the authors would like to highlight equal contribution by multiple first authors, those authors should have an asterisk placed after their name in superscript, and the term ``\textsuperscript{*}Equal contribution" should be placed in the footnote block ahead of the list of affiliations. A list of corresponding authors and their emails (in the format Full Name \textless{}[email protected]\textgreater{}) can follow the list of affiliations. Ideally only one or two names should be listed. A sample file with author names is included in the ICML2020 style file package. Turn on the \texttt{[accepted]} option to the stylefile to see the names rendered. All of the guidelines above are implemented by the \LaTeX\ style file. \subsection{Abstract} The paper abstract should begin in the left column, 0.4~inches below the final address. The heading `Abstract' should be centered, bold, and in 11~point type. The abstract body should use 10~point type, with a vertical spacing of 11~points, and should be indented 0.25~inches more than normal on left-hand and right-hand margins. Insert 0.4~inches of blank space after the body. 
Keep your abstract brief and self-contained, limiting it to one paragraph and roughly 4--6 sentences. Gross violations will require correction at the camera-ready phase. \subsection{Partitioning the Text} You should organize your paper into sections and paragraphs to help readers place a structure on the material and understand its contributions. \subsubsection{Sections and Subsections} Section headings should be numbered, flush left, and set in 11~pt bold type with the content words capitalized. Leave 0.25~inches of space before the heading and 0.15~inches after the heading. Similarly, subsection headings should be numbered, flush left, and set in 10~pt bold type with the content words capitalized. Leave 0.2~inches of space before the heading and 0.13~inches afterward. Finally, subsubsection headings should be numbered, flush left, and set in 10~pt small caps with the content words capitalized. Leave 0.18~inches of space before the heading and 0.1~inches after the heading. Please use no more than three levels of headings. \subsubsection{Paragraphs and Footnotes} Within each section or subsection, you should further partition the paper into paragraphs. Do not indent the first line of a given paragraph, but insert a blank line between succeeding ones. You can use footnotes\footnote{Footnotes should be complete sentences.} to provide readers with additional information about a topic without interrupting the flow of the paper. Indicate footnotes with a number in the text where the point is most relevant. Place the footnote in 9~point type at the bottom of the column in which it appears. Precede the first footnote in a column with a horizontal rule of 0.8~inches.\footnote{Multiple footnotes can appear in each column, in the same order as they appear in the text, but spread them across columns and pages if possible.} \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{icml_numpapers}} \caption{Historical locations and number of accepted papers for International Machine Learning Conferences (ICML 1993 -- ICML 2008) and International Workshops on Machine Learning (ML 1988 -- ML 1992). At the time this figure was produced, the number of accepted papers for ICML 2008 was unknown and instead estimated.} \label{icml-historical} \end{center} \vskip -0.2in \end{figure} \subsection{Figures} You may want to include figures in the paper to illustrate your approach and results. Such artwork should be centered, legible, and separated from the text. Lines should be dark and at least 0.5~points thick for purposes of reproduction, and text should not appear on a gray background. Label all distinct components of each figure. If the figure takes the form of a graph, then give a name for each axis and include a legend that briefly describes each curve. Do not include a title inside the figure; instead, the caption should serve this function. Number figures sequentially, placing the figure number and caption \emph{after} the graphics, with at least 0.1~inches of space before the caption and 0.1~inches after it, as in Figure~\ref{icml-historical}. The figure caption should be set in 9~point type and centered unless it runs two or more lines, in which case it should be flush left. You may float figures to the top or bottom of a column, and you may set wide figures across both columns (use the environment \texttt{figure*} in \LaTeX). Always place two-column figures at the top or bottom of the page. 
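For authors who generate plots programmatically, the following is a minimal matplotlib sketch, of our own and not part of the style package, that produces a roughly column-width vector figure with named axes and a legend as recommended above (the data and the output file name are placeholders):
{\footnotesize
\begin{verbatim}
# Minimal sketch: a column-width vector-format figure with
# labeled axes and a legend; data and file name are placeholders.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)
fig, ax = plt.subplots(figsize=(3.25, 2.4))  # about one column wide
ax.plot(x, np.sin(x), label="method A", linewidth=1.0)
ax.plot(x, np.cos(x), label="method B", linewidth=1.0)
ax.set_xlabel("training epochs")   # give a name for each axis
ax.set_ylabel("accuracy")
ax.legend()                        # briefly describe each curve
fig.savefig("accuracy.pdf", bbox_inches="tight")  # vector format
\end{verbatim}}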
\subsection{Algorithms}

If you are using \LaTeX, please use the ``algorithm'' and ``algorithmic'' environments to format pseudocode. These require the corresponding stylefiles, algorithm.sty and algorithmic.sty, which are supplied with this package. Algorithm~\ref{alg:example} shows an example.

\begin{algorithm}[tb]
\caption{Bubble Sort}
\label{alg:example}
\begin{algorithmic}
\STATE {\bfseries Input:} data $x_i$, size $m$
\REPEAT
\STATE Initialize $noChange = true$.
\FOR{$i=1$ {\bfseries to} $m-1$}
\IF{$x_i > x_{i+1}$}
\STATE Swap $x_i$ and $x_{i+1}$
\STATE $noChange = false$
\ENDIF
\ENDFOR
\UNTIL{$noChange$ is $true$}
\end{algorithmic}
\end{algorithm}

\subsection{Tables}

You may also want to include tables that summarize material. Like figures, these should be centered, legible, and numbered consecutively. However, place the title \emph{above} the table with at least 0.1~inches of space before the title and the same after it, as in Table~\ref{sample-table}. The table title should be set in 9~point type and centered unless it runs two or more lines, in which case it should be flush left.

\begin{table}[t]
\caption{Classification accuracies for naive Bayes and flexible Bayes on various data sets.}
\label{sample-table}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccc}
\toprule
Data set & Naive & Flexible & Better? \\
\midrule
Breast & 95.9$\pm$ 0.2& 96.7$\pm$ 0.2& $\surd$ \\
Cleveland & 83.3$\pm$ 0.6& 80.0$\pm$ 0.6& $\times$\\
Glass2 & 61.9$\pm$ 1.4& 83.8$\pm$ 0.7& $\surd$ \\
Credit & 74.8$\pm$ 0.5& 78.3$\pm$ 0.6& \\
Horse & 73.3$\pm$ 0.9& 69.7$\pm$ 1.0& $\times$\\
Meta & 67.1$\pm$ 0.6& 76.5$\pm$ 0.5& $\surd$ \\
Pima & 75.1$\pm$ 0.6& 73.9$\pm$ 0.5& \\
Vehicle & 44.9$\pm$ 0.6& 61.5$\pm$ 0.4& $\surd$ \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}

Tables contain textual material, whereas figures contain graphical material. Specify the contents of each row and column in the table's topmost row. Again, you may float tables to a column's top or bottom, and set wide tables across both columns. Place two-column tables at the top or bottom of the page.

\subsection{Citations and References}

Please use APA reference format regardless of your formatter or word processor. If you rely on the \LaTeX\/ bibliographic facility, use \texttt{natbib.sty} and \texttt{icml2020.bst} included in the style-file package to obtain this format.

Citations within the text should include the authors' last names and year. If the authors' names are included in the sentence, place only the year in parentheses, for example when referencing Arthur Samuel's pioneering work \yrcite{Samuel59}. Otherwise place the entire reference in parentheses with the authors and year separated by a comma \cite{Samuel59}. List multiple references separated by semicolons \cite{kearns89,Samuel59,mitchell80}. Use the `et~al.' construct only for citations with three or more authors or after listing all authors to a publication in an earlier reference \cite{MachineLearningI}.

Authors should cite their own work in the third person in the initial version of their paper submitted for blind review. Please refer to Section~\ref{author info} for detailed instructions on how to cite your own papers.

Use an unnumbered first-level section heading for the references, and use a hanging indent style, with the first line of the reference flush against the left margin and subsequent lines indented by 10 points.
The references at the end of this document give examples for journal articles \cite{Samuel59}, conference publications \cite{langley00}, book chapters \cite{Newell81}, books \cite{DudaHart2nd}, edited volumes \cite{MachineLearningI}, technical reports \cite{mitchell80}, and dissertations \cite{kearns89}.

Alphabetize references by the surnames of the first authors, with single author entries preceding multiple author entries. Order references for the same authors by year of publication, with the earliest first. Make sure that each reference includes all relevant information (e.g., page numbers).

Please put some effort into making references complete, presentable, and consistent. If using bibtex, please protect capital letters of names and abbreviations in titles, for example, use \{B\}ayesian or \{L\}ipschitz in your .bib file.

\section*{Software and Data}

If a paper is accepted, we strongly encourage the publication of software and data with the camera-ready version of the paper whenever appropriate. This can be done by including a URL in the camera-ready copy. However, \textbf{do not} include URLs that reveal your institution or identity in your submission for review. Instead, provide an anonymous URL or upload the material as ``Supplementary Material'' into the CMT reviewing system. Note that reviewers are not required to look at this material when writing their review.

\section*{Acknowledgements}

\textbf{Do not} include acknowledgements in the initial version of the paper submitted for blind review. If a paper is accepted, the final camera-ready version can (and probably should) include acknowledgements. In this case, please place such acknowledgements in an unnumbered section at the end of the paper. Typically, this will include thanks to reviewers who gave useful comments, to colleagues who contributed to the ideas, and to funding agencies and corporate sponsors that provided financial support.

\nocite{langley00}

\section{Introduction}

\subsection*{Background}

When it comes to inferring causal direction, the most popular tool is a properly designed experimental trial. More specifically, the randomized control trial (RCT) is a popular tool of choice, since it allows us to easily separate treatment results from confounding variables; we are thus able to retrieve statistics such as average treatment effects that can inform the causality of a given treatment. However, such methods are not globally applicable, since randomized trials cannot be conducted in many scenarios: for example, they can be too costly, unethical, or simply infeasible due to the complexity of real-world systems. Furthermore, the RCT is only applicable to prospective, not retrospective, studies, which are a large source of data to analyze. Applications in complex clinical trials or studies in the social, economic and political sciences require more specialized tools to assist in discerning causality in the slew of data generated from less-than-ideal conditions in the modern computing era. Machine learning, most notably deep learning, is a powerful tool that has allowed for state-of-the-art performance in both discriminative and generative tasks and has enjoyed huge amounts of growth in recent years as a result. However, canonical learning techniques are often likelihood-based optimizations, which converge regardless of causal direction because the parameterizations in both directions are functionally equivalent.
Hence, we require specialized learning methods and, importantly, additional assumptions that allow deep learning models to discern the causal direction given their parameterizations. We begin with the philosophy of Occam's razor: if multiple answers are correct, the best answer is the simplest one. Applied to our problem, this suggests that, given the parameterizations $p(x | y)p(y)$ and $p(y | x)p(x)$, we assume that the ``true'' parameterization is the one that yields the simpler pair of conditional and marginal distributions. Specifically, the measure of simplicity that we will use is the speed at which we are able to adapt to a transfer distribution. Thus, under this assumption, we expect that the correct causal direction will allow for faster transfer learning of the distribution; this is the premise under which we develop our methodology.

\subsection*{Related Work}

While the derivation of causal directions with meta-learning is still a new research topic, first contributions already exist. \citet{dasgupta2019causal} use meta-learning with model-free reinforcement learning to deduce causal reasoning in an end-to-end approach. \citet{bengio2019meta} instead apply meta-learning to derive the causal direction between variables in an optimization-based framework using Bayesian Networks. In their work, they apply learned models assuming different causal directions to data with a changed transfer distribution. As the correct causal model only has to adjust its marginal distribution, and thus adapts faster, this allows them to extract the underlying causal directions. \citet{bengio2019meta} further apply this model to the Representation Learning domain, in which information from underlying variables has to be extracted. For this, they only consider a rotation framework with a single degree of freedom, as defined in \citet{bengio2013representation}, though more general models could provide better generalisation.

\citet{CGNN} attempt a more general approach by leveraging a series of generative models to model each of the observable states of the graph. This allows the method to resample a dataset distribution by sampling in topological order. Given a starting causal structure, their method refines the direction by an iterative procedure of resampling and computing a Maximum Mean Discrepancy (MMD) statistic that serves as a heuristic measure of the similarity between the ground truth distribution and the current iteration's resampled distribution. It then reassigns edge directions to reduce the MMD score and employs a cycle-correcting method that resolves cyclic causal structures into their best acyclic counterparts using a hill-climbing technique. Here, the approach is flexible but relies on the correlation of MMD with correct causal structure in order to return a good assignment of the causal directions.

\citet{metaCGNN} describe a meta-learning technique that is based on techniques from \citet{CGNN}. Noting computational issues of the original method, the authors propose a meta-learning method that re-frames assigning causal direction in a single graph as a meta-task. They then utilize a dataset of reference causal graphs and their corresponding meta-dataset to learn an efficient means of performing the meta-task. Thus, their method is able to leverage similarity between meta-datasets to learn the causal direction of a new dataset more quickly.
However, this method remains reliant on the usage of MMD and also follows the implicit assumption that causal relationships are known to exist and only need to be assigned a direction.

Several approaches exist to compute and train the log-likelihood of a model given a causal graph structure. The most popular are Variational Autoencoders \cite{varAutoEnc} and Bayes Networks \cite{DeepLearningBook}. Another approach is to parametrize the causal model more directly using Functional Causal Models (FCM) \cite{FCM}. FCMs further generalize results from \citet{CGNN}, since we are no longer assuming that we know the exact distribution of latent confounders and instead allow stochasticity by way of inference through a proxy variable.

To the best of the authors' knowledge, this paper is the first to apply directional inference in meta-learning to variable structures with common confounders, and it improves on prior analysis by using FCMs. Additionally, we derive a more robust measure by introducing plurality voting, and we analyze the robustness properties of our methodology to assess the feasibility of our approach.

\section{Methodology}

\subsection*{Meta Learning Causal Directions}

To learn the joint distribution of two variables $X$ and $Y$ we can use their conditional distributions $p_{x|y}$ and $p_{y|x}$ alongside their marginal distributions $p_x$ and $p_y$. In a Bayesian framework, this is expressed as
\begin{flalign*}
P_{X \rightarrow Y}(X, Y) = P_{X \rightarrow Y}(X) P_{X \rightarrow Y}(Y|X) \\
P_{Y \rightarrow X}(X, Y) = P_{Y \rightarrow X}(Y) P_{Y \rightarrow X}(X|Y)
\end{flalign*}
where both parameterizations can be learned by Bayesian networks. Similar to \citet{bengio2019meta}, we assume that the true causal direction is $X \rightarrow Y$ and use the training distribution $p_0(x, y) = p_0(x) p(y|x)$. Thereafter, the distribution is changed to the transfer distribution $p_1(x, y) = p_1(x) p(y|x)$. Both networks are meta-trained on the transfer distribution for $T$ steps with resulting likelihoods
\begin{flalign*}
L_{X \rightarrow Y} = \prod_{t=1}^T P_{X \rightarrow Y, t}(x_t, y_t), \quad L_{Y \rightarrow X} = \prod_{t=1}^T P_{Y \rightarrow X, t}(x_t, y_t)
\end{flalign*}
The networks are trained in the following two-step process:
\begin{enumerate}
\item The relationship between $X$ and $Y$ is learned using two models: one assumes $X$ causes $Y$, the other the opposite causal direction.
\item The distribution of $X$ is changed to a transfer distribution. Both models are retrained on the new data and the resulting likelihoods are recorded.
\end{enumerate}
Here, $P_{X \rightarrow Y, t}$ denotes the trained Bayesian network after step $t$. Next, the loss statistic
\begin{equation*}
R(\alpha) = -\ln \left(\sigma(\alpha)L_{X \rightarrow Y} + (1 - \sigma(\alpha)) L_{Y \rightarrow X} \right)
\end{equation*}
is computed, with $\alpha$ denoting a structural parameter defining the causal direction and $\sigma(\cdot)$ the sigmoid transformation. In this methodology, $\alpha$ is now optimized to minimize $R(\alpha)$. The loss statistic's gradient is
\begin{equation*}
\frac{\partial R}{\partial \alpha} = \sigma(\alpha) - \sigma(\alpha + \ln L_{X \rightarrow Y} - \ln L_{Y \rightarrow X})
\end{equation*}
such that $\frac{\partial R}{\partial \alpha} > 0$ if $L_{X \rightarrow Y} < L_{Y \rightarrow X}$, that is, if $P_{Y \rightarrow X}$ is better at explaining the transfer distribution than $P_{X \rightarrow Y}$.
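For concreteness, the following is a minimal numerical sketch of this update rule (all names are ours; the two accumulated transfer log-likelihoods are placeholder scalars):
{\footnotesize
\begin{verbatim}
# Sketch of the structural-parameter update. The accumulated
# transfer log-likelihoods are given here as placeholder scalars.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grad_R(alpha, logL_xy, logL_yx):
    # dR/dalpha = sigma(alpha) - sigma(alpha + ln L_xy - ln L_yx)
    return sigmoid(alpha) - sigmoid(alpha + logL_xy - logL_yx)

alpha, lr = 0.0, 0.1
logL_xy, logL_yx = -120.0, -150.0  # X -> Y explains the data better
for _ in range(400):
    alpha -= lr * grad_R(alpha, logL_xy, logL_yx)
print(sigmoid(alpha))  # drifts towards 1, i.e. direction X -> Y
\end{verbatim}}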
\citet{bengio2019meta} show that if
\begin{equation*}
E_{D_{transfer}}[\ln L_{X \rightarrow Y}] > E_{D_{transfer}}[\ln L_{Y \rightarrow X}],
\end{equation*}
where $D_{transfer}$ is the data drawn from the transfer distribution, stochastic gradient descent on $E_{D_{transfer}}[R]$ will converge to $\sigma(\alpha) = 1$, and to $\sigma(\alpha) = 0$ if $E_{D_{transfer}}[\ln L_{X \rightarrow Y}] < E_{D_{transfer}}[\ln L_{Y \rightarrow X}]$. As the likelihood modeling the correct direction (that is, $L_{X \rightarrow Y}$ if $X$ causes $Y$) only requires an update of the estimate for the unconditional distribution $P_{X \rightarrow Y}(X)$ from $p_0(x)$ to $p_1(x)$, while the reverse-direction network needs to change both $P_{Y \rightarrow X}(Y)$ and $P_{Y \rightarrow X}(X|Y)$, the loss statistic for the correct direction indeed has a lower expected value and we can recover the causal direction.

\subsection*{Latent Variables Structure}

The previous results make the assumption that the observed $X$ and $Y$ are independent of other hidden effects. However, this is unlikely to hold in practice. Noting the success of \citet{CGNN} in determining causal relations in graphs and their ability to express latent variables and effects directly, we adopt a similar representation of variable observations by using FCMs, in which each observation is represented as a tuple
$$X_i = (\{P_i\}, f_i, \epsilon_i)$$
where $i$ indexes the vertex on the causal graph, $\{P_i\}$ denotes the set of causal parents of $X_i$, $\epsilon_i$ is independent noise modelling latent effects on $X_i$, and $f_i$ is a learned function.

\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\includegraphics[width=\columnwidth]{plots/latentfcm.PNG}
\caption{Example causal graph featuring observation variables $X$ and $Y$, latent confounder $Z$, and proxy variable $W$.}
\label{fig: FCMPlot}
\end{center}
\vskip 0.0in
\end{figure}

As an example, in Figure \ref{fig: FCMPlot}, we can consider the FCM function for $Z$ to be $f_Z(\epsilon_Z)$, since its only input is its unobserved, independent latent effect. In this case $\{P_Z\}$ is the empty set. To contrast, the FCM function for $Y$ would be $f_Y(X, Z, \epsilon_Y)$, since it has causal parents $X$, the latent $Z$, and its own independent hidden effects $\epsilon_Y$. In this case $\{P_Y\} = \{X, Z\}$. In the canonical definition of an FCM, these functions look to predict the realization of the observable, $\hat{X}_i = f_i(\{P_i\}, \epsilon_i)$. However, we find it more useful to use the FCM structure to predict model parameters for the distribution of the observable,
$$f_i: (\{P_i\}, \epsilon_i) \rightarrow (\{\pi_j\}_i, \{\mu_j\}_i, \{\sigma_j\}_i),$$
which we assume to be a Gaussian mixture whose components are indexed by $j$. Then, in our previously defined language, we have $p_{X | Y}(X | Y) = f_X(Y, \epsilon_X)$ and $P(Y) = f_Y(\epsilon_Y)$, which are the conditional and marginal distributions relevant for the $Y \rightarrow X$ direction, and similarly $P_{Y | X}(Y | X) = f_Y(X, \epsilon_Y)$ and $P(X) = f_X(\epsilon_X)$ are the relevant distributions for the $X \rightarrow Y$ direction. Given realizations of the ground truth observations, we then have a closed form for the likelihoods given each of the learned distributions.
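As an illustration of this parameterization, the following is a minimal sketch of a single FCM node with untrained placeholder weights (all function and variable names are ours); in the actual method, the network weights would be trained by maximizing these log-likelihoods:
{\footnotesize
\begin{verbatim}
# Sketch of one FCM node: a small network maps (parent, eps) to
# Gaussian-mixture parameters, from which the likelihood of an
# observation follows in closed form.
import numpy as np

rng = np.random.default_rng(0)
K, H = 5, 16   # mixture components, hidden units
W1 = rng.normal(size=(H, 2))
W2 = rng.normal(size=(3 * K, H))

def fcm_node(parent, eps):
    h = np.tanh(W1 @ np.array([parent, eps]))
    out = W2 @ h
    pi = np.exp(out[:K]) / np.exp(out[:K]).sum()  # softmax weights
    mu = out[K:2 * K]
    sigma = np.exp(out[2 * K:])                   # positive scales
    return pi, mu, sigma

def gmm_loglik(y, pi, mu, sigma):
    dens = pi * np.exp(-0.5 * ((y - mu) / sigma) ** 2) \
         / (np.sqrt(2 * np.pi) * sigma)
    return np.log(dens.sum())

pi, mu, sigma = fcm_node(parent=0.3, eps=rng.normal())
print(gmm_loglik(0.0, pi, mu, sigma))
\end{verbatim}}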
\subsection*{Modelling Confounding Factors}

The basic FCM structure allows us to generalize the idea of an encoder--decoder structure by modelling hidden effects as different distributions $\epsilon_i$. Fortunately, it is also easy to extend this structure to model latent confounders between variables. In particular, in the style of \citet{louizos2017causalLatent}, we introduce the latent variable $Z$, which can affect both $X$ and $Y$. Since $Z$ is not an independent hidden effect, it cannot be absorbed into either $\epsilon_X$ or $\epsilon_Y$. Instead, we must append $Z$ as an input to both $f_X$ and $f_Y$. While \citet{CGNN} assume that each confounder follows a known distribution, this is perhaps an overly ideal scenario that we are unlikely to encounter in practice. Instead, we choose to infer the values of $Z$. Let us consider the proxy variable $W$ that is also impacted by $Z$, such that $W|Z \perp \!\!\! \perp X|Z$ and $W|Z \perp \!\!\! \perp Y|Z$, as depicted in the setup from Figure \ref{fig: FCMPlot}. While $Z$ and $W$ can in principle be sets of variables of arbitrary length, for simplicity we restrict both to a single variable in this analysis. Further, we will model all causal effects of $Z$ on $X$, $Y$ and $W$ as additive effects. To incorporate this variational inference of latent variables into the causal direction methodology, we follow \citet{variational} and assume that $Z$, $W|Z$ and $Y|X,Z$ are continuous, normally distributed variables with assumed
\begin{gather*}
p(Z) \sim N(0, 1) \\
p(W|Z) \sim N(\mu_W(Z), \sigma_W^2(Z)) \\
p(X|Z) \sim N(\mu_X(Z), \sigma_X^2(Z)) \\
p(Z|X) \sim N(\mu_Z(X), \sigma_Z^2(X))
\end{gather*}
We can estimate a lower bound for the combined probability of all variables by the Evidence Lower Bound (ELBO), which is defined by
\begin{flalign*}
L = \sum_{i=1}^N E_{p(z_i|x_i)}[&\ln p(w_i|z_i) + \ln p(x_i|z_i) + \ln p(y_i|x_i, z_i) &\\
& + \ln p(z_i) - \ln p(z_i|x_i)]
\end{flalign*}
We generate the ELBO for both causal directions such that they approximate $L_{X \rightarrow Y}$ and $L_{Y \rightarrow X}$. To compute the expected value inside the ELBO we use Monte Carlo simulation. Further, we use variational encoders to model $\mu_W(Z), \sigma_W^2(Z)$ and the analogous parameters for $X$ and $Z$.
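The following is a minimal Monte-Carlo sketch of this per-observation ELBO, with toy stand-ins for the trained variational encoder and the FCM conditionals (all names and values are ours):
{\footnotesize
\begin{verbatim}
# Monte-Carlo estimate of the per-observation ELBO above. The
# mean/variance functions are fixed toy stand-ins for the trained
# variational encoder and the FCM conditionals.
import numpy as np

rng = np.random.default_rng(0)

def log_normal(v, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mean) ** 2 / var)

def elbo_i(w, x, y, n_mc=300):
    mu_z, var_z = 0.5 * x, 1.0            # toy posterior q(z|x)
    z = mu_z + np.sqrt(var_z) * rng.normal(size=n_mc)
    terms = (log_normal(w, z, 1.0)        # ln p(w|z)
             + log_normal(x, z, 1.0)      # ln p(x|z)
             + log_normal(y, x + z, 1.0)  # ln p(y|x,z), additive Z
             + log_normal(z, 0.0, 1.0)    # ln p(z)
             - log_normal(z, mu_z, var_z))  # - ln q(z|x)
    return terms.mean()

print(elbo_i(w=0.1, x=0.4, y=0.7))
\end{verbatim}}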
\subsection*{Super-runs and Plurality Voting}

Importantly, using the ELBO instead of an exact likelihood introduces stochasticity, since the bounds can be of varying tightness for the different parameterizations. In particular, this holds for our approach as we use FCMs with specified error terms. As such, a single run is equivalent to observing a single outcome of a probabilistic event. When a causal direction exists, this probabilistic event is highly biased towards increasing/decreasing $\alpha$ depending on the direction, which can be observed in the robustness of results and the consistency of $\alpha$ convergence. In contrast, when there is no causal direction, the probabilistic event is closer to random, as demonstrated by highly varying $\alpha$ paths. Thus, while we gain the ability to infer the causal direction in latent variable structures, we are no longer able to accurately query the existence of a causal direction from a single result. One simple and promising solution is to leverage multiple results in combination, called a super-run, to infer the existence and direction of causality. We perform a plurality vote to determine one of the three possible results. For each result, we consider the following parameters:
\begin{enumerate}
\item Voter count: The number of runs used
\item Majority required: Percentage of the voter count required to win the election
\item Cutoff: Values splitting the voting direction of a $\sigma(\alpha)$ path
\end{enumerate}
Each run contributes a vote for either $X\rightarrow Y$ or $Y\rightarrow X$ when its $\sigma(\alpha)$ path crosses the corresponding cutoff threshold; otherwise the voter abstains (a code sketch of this rule is given at the end of this subsection). If one parameterization gains the designated required majority (e.g., a 2/3 majority) then it is declared the winner. If neither achieves the required majority, then no causality is declared. It is also important to notice that, by construction, the individual inference runs are independent when conditioned on the data. The final $\sigma(\alpha)$ value falls within $[0, 1]$ with some density $f$. Regardless of the distribution, we can summarize the probability of a positive vote as $p = \int_{\alpha_+}^1 f(s)\, ds$ and similarly the probability of a negative vote as $q = \int_0^{\alpha_-} f(s)\, ds$. Then the probabilities of a super-run resulting in a decisive causal direction are tails of binomial random variables:
\begin{flalign*}
& \quad \quad \quad P(X\rightarrow Y) = \sum_{i = Nr}^N {N \choose i} p^{i}(1 - p)^{N - i} &\\
& \quad \quad \quad P(Y\rightarrow X) = \sum_{i = Nr}^N {N \choose i} q^{i}(1 - q)^{N - i} &\\
P(\textrm{$X\perp Y | Z$}) &= 1 - \sum_{i = Nr}^N {N \choose i} p^{i}(1 - p)^{N - i} - \sum_{i = Nr}^N {N \choose i} q^{i}(1 - q)^{N - i} &
\end{flalign*}
where $N$ denotes the voter count and $r$ the fractional majority required. These three quantities can further be simplified to a partition of a single binomial random variable if we take $\alpha_+ = \alpha_-$, so that $p = 1-q$.
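As a concrete illustration, the voting rule described above can be sketched as follows (function names and threshold values are ours):
{\footnotesize
\begin{verbatim}
# Sketch of the super-run plurality vote over the final
# sigma(alpha) values of the individual runs.
import numpy as np

def super_run_vote(sig_alphas, cut_pos=0.7, cut_neg=0.3,
                   majority=2 / 3):
    sig_alphas = np.asarray(sig_alphas)
    n = len(sig_alphas)                  # voter count
    pos = (sig_alphas > cut_pos).sum()   # votes for X -> Y
    neg = (sig_alphas < cut_neg).sum()   # votes for Y -> X
    if pos >= majority * n:
        return "X -> Y"
    if neg >= majority * n:
        return "Y -> X"
    return "no causal direction"         # abstentions / split vote

print(super_run_vote([0.95, 0.91, 0.88, 0.97, 0.45,
                      0.92, 0.99, 0.90, 0.93, 0.96]))  # X -> Y
\end{verbatim}}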
\section{Experiments \& Results}

Unless otherwise specified, we modelled all data by the above process. Specifically, we modelled $Y$ as $Y \sim f(X) + Z$ where $f$ is a cubic spline function. For our experiments, we use 1000 observations for each draw of the training and transfer distribution, as this is also the number used by \cite{bengio2019meta}, and 300 iterations of the Monte Carlo method. The trained models are meta-learned for 5 steps and the FCM uses 5 Gaussians. Moreover, we model $X$ by $X \sim N(\mu_X, 2) + Z$ with $\mu_X = 0$ for the training distribution and $\mu_X \sim U(-4, 4)$ under the transfer distribution. For our inference analysis we assumed that all variables are normally distributed. All causal models are trained for 500 iterations and the $\alpha$ parameter for 400 iterations, and the results for $\sigma(\hat{\alpha})$ are extracted in the end.

\subsection*{Normality Results}

To show the functionality of our methodology, a plot of the estimation path for $\sigma(\alpha)$ with 10 repetitions is provided in Figure \ref{fig: NormalPlots}. The graphs depict two distinct scenarios: in the upper plot our true model states that $X$ causes $Y$, while in Figure \ref{fig: reverse_alpha} we model that $Y$ causes $X$. This allows us to evaluate the properties of our optimisation method without being misled in case $\alpha$ is biased in a certain direction.

\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centering
\subfloat[a][Results for $\sigma(\alpha)$ when $X$ causes $Y$]{
\includegraphics[width=\columnwidth]{new_plots/alphas_normal.jpg}
\label{fig: Normal_alpha}}
\centering
\subfloat[a][Results for $\sigma(\alpha)$ when $Y$ causes $X$]{
\includegraphics[width=\columnwidth]{new_plots/alphas_reverse.jpg}
\label{fig: reverse_alpha}}
\caption{$\sigma(\alpha)$ estimates for the standard model. We generate two models with opposite directions. In \ref{fig: Normal_alpha} $X$ causes $Y$ while in \ref{fig: reverse_alpha} $Y$ is the true cause of $X$. Both plots show 10 different optimization curves for distinct i.i.d. data.}
\label{fig: NormalPlots}
\end{center}
\vskip 0.0in
\end{figure}

The sigmoid transformation of $\alpha$ shows that in both cases we are able to infer the correct causal direction. For the model that specifies $X$ causing $Y$ we can observe that in 9 out of 10 cases $\sigma(\alpha)$ increases beyond 0.8 and in fact converges towards 1 for larger epochs in many such cases, while for the reverse causal direction $\sigma(\alpha)$ decreases towards 0 in all 10 instances. We can further observe that for most instances $\sigma(\alpha)$ approaches its limiting value relatively fast and has already converged after about 250 iterations. Therefore, this analysis supports the overall feasibility of our methodology.

\subsection*{Analysis of the FCM}

To analyze the correctness of our results we can investigate the output of the FCM, specifically the estimate for the mean of the predicted variable. If our assumed causal direction is that $X$ causes $Y$ and the FCM models $k$ Gaussian components, then we can use
\begin{equation*}
\hat{y}(x, z) = \frac{\sum_{i=1}^k \pi_{i | x,z} \mu_{i | x,z}}{\sum_{i=1}^k \pi_{i | x,z}}
\end{equation*}
as the prediction of the mean of $Y$ given $X$ and $Z$. The FCM mean of $X$ can be inferred in a likewise fashion. For this analysis, we use $X=0$, $Y=0$ and $Z = 0$ as input variables to predict the conditioned mean. Plots for this experiment for 10 repetitions can be seen in Figure \ref{fig: FCM_ouput_standard}. For the causal direction $X \rightarrow Y$ in Figure \ref{fig: FCM_ouput_standard_x2y} the dashed line indicates the actual mean of $Y|X=0, Z=0$ for our spline function. As the spline does not have a proper inverse function, no such line is included in Figure \ref{fig: FCM_ouput_standard_y2x}.

\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centering
\subfloat[a][Results for the model assuming $X$ causes $Y$]{
\includegraphics[width=\columnwidth]{new_plots/means_x2y.jpg}
\label{fig: FCM_ouput_standard_x2y}}
\centering
\subfloat[a][Results for the model assuming $Y$ causes $X$]{
\includegraphics[width=\columnwidth]{new_plots/means_y2x.jpg}
\label{fig: FCM_ouput_standard_y2x}}
\caption{Output of the FCM indicating the mean of $Y$ given $X$ and $Z$ for $X = 0, Z = 0$.
The dashed line shows the actual value of $Y$ at this point.}
\label{fig: FCM_ouput_standard}
\end{center}
\vskip 0.0in
\end{figure}

As Figure \ref{fig: FCM_ouput_standard_x2y} shows, the FCM mean converges to the correct conditioned mean for all ten repetitions. For all iterations this happens within the first 200 episodes. As we only use our converged FCM when training $\alpha$ and $\beta$, this shows that all transfer distributions will use a correct FCM in the computation of their ELBOs. Additionally, we can note that in Figure \ref{fig: FCM_ouput_standard_y2x} the conditional means move around a common value, such that these models also appear to have converged. In models that train correctly, we expect that the total variation after convergence of the FCM should be due to the $\epsilon$ noise. Seeing that the FCM models for $X\rightarrow Y$ converge to the same variance after many iterations adds evidence towards that causal direction. In contrast, the variance of the $Y\rightarrow X$ FCM models fails to consistently attain the same value near the end of training, which suggests that this is indeed the wrong causal direction. The reason for the more erratic behaviour in the $Y$ causes $X$ direction is the aforementioned non-invertibility of the spline function. As several $X$-values correspond to $y = 0$, the models fluctuate around the conditional mean of these values.

\subsection*{Detection of no Causality}

To demonstrate the behavior of our model for the case where there is no causal relationship between the variables $X$ and $Y$, we consider an independent ground truth relationship after conditioning on the latent variable $Z$. In Figure \ref{fig: UncorrPlots} we plot the paths for $\sigma(\alpha)$ in this problem setup, again for 10 repetitions.

\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{new_plots/alphas_none.jpg}}
\caption{10 repetitions of $\sigma(\alpha)$ paths for the scenario with no causal relationship}
\label{fig: UncorrPlots}
\end{center}
\vskip -0.2in
\end{figure}

The different directions of the sigmoid transformation of $\alpha$ indicate that the model does not find a clear causal direction in this scenario. While some of the curves increase or decrease towards $1$ or $0$, $\sigma(\alpha)$ remains at 0.5 for many instances. Therefore, our model is able to infer for such scenarios that neither causal direction yields a lower loss, and thus indicates that there is no causal relationship. Further, for cases in which the parameter does not remain flat, the optimization curves do not show drastic changes but instead increase or decrease only slowly when compared to the prior analysis in Figure \ref{fig: NormalPlots}, which contained an existing causal structure. Hence, this analysis supports the robustness of our method.

\subsection*{Super-runs}

For these experiments, we use two cutoff configurations, both symmetric about $0.5$. The first is $\alpha_+ = \alpha_- = 0.5$ and the second is $\alpha_+ = 0.7$, $\alpha_- = 0.3$, both with $N = 10$. We perform two experiments with each configuration, one where there is causality $X\rightarrow Y$ and one where there is no causal relation. With $\alpha_+ = \alpha_- = 0.5$ and under $X\rightarrow Y$, we find that the super-run concludes the correct causal relationship all ten times. Under no causal relationship, the super-run concludes correctly that there is no causal relationship seven times and predicts $X\rightarrow Y$ three times.
With $\alpha_+ = 0.7, \alpha_- = 0.3$ and under $X\rightarrow Y$, we find that the super-run concludes the correct causal relationship eight times and concludes no causal relationship twice. Under no causal relationship, the super-run concludes correctly that there is no causal relationship ten times. Clearly, choosing the cutoff point is a hyperparameter tuning problem that can be optimized to find the desired sensitivity to errors for both causal discovery and causal direction. It is also easy to improve results by adding more voters. These additional voters can be run in parallel, such that a super-run can be achieved in the same time as a single result given sufficient cores. We further note that the existence of causality can be established using standard causal inference methods, such that a super-run with $\alpha_+ = \alpha_- = 0.5$ can be applied to identified relationships to only detect the direction of the causality.

\subsection*{Robustness for Non-normality}

In our model we have so far correctly assumed that $Z$ is normally distributed when estimating its parameters for our causal inference. As such a correctly specified model is not always available in practice, it is of interest whether our inference results retain their predictive power when this is not the case. Hence, we model $Z$ as a Beta-distributed variable centered around 0 with parameters $a$ and $b$ and choose two parameter combinations. The first is $a=b=0.5$, which has a U-shaped distribution function and is therefore strongly dissimilar to a Normal distribution. The second is $a=b=3$, which has a probability density function more similar to the Normal distribution. The results for the two distributions are shown in Figure \ref{fig: BetaPlots}.

\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centering
\subfloat[a][Results for $Z \sim Beta(0.5, 0.5)$]{
\includegraphics[width=\columnwidth]{new_plots/alphas_beta_05.jpg}
\label{fig: Beta_alpha}}
\centering
\subfloat[a][Results for $Z \sim Beta(3,3)$]{
\includegraphics[width=\columnwidth]{new_plots/alphas_beta_2.jpg}
\label{fig: Beta_beta}}
\caption{$\sigma(\alpha)$ estimates if $Z \sim Beta(0.5, 0.5)$ or $Beta(3,3)$.}
\label{fig: BetaPlots}
\end{center}
\vskip 0.0in
\end{figure}

As can be seen, for both distributions the methodology performs worse in predicting the correct causal relationship. For the $Beta(0.5, 0.5)$ distribution, the model can no longer reliably predict the correct causal direction and only depicts correct convergence behavior in 5 out of 10 instances. For the $Beta(3,3)$ distribution, however, we retain our ability to deduce the correct causal direction, with 8 out of 10 correct predictions. Yet, we can also observe that our model takes longer to converge to the correct value of $\sigma(\alpha) = 1$ even in this scenario. This is an additional indicator that the accuracy of the model was reduced under the new distribution. Overall, we hypothesize that this decrease in predictive performance occurs because the ELBO used is not a sufficiently tight bound and is too dependent on the assumed distribution of the latent variable. This issue could be overcome by constructing a more robust ELBO function, which we leave to further research.

\subsection*{Robustness for Limited Sample Data}

In previous results, we sample data from well-defined ground truth distributions.
In particular, for each iteration step of the training process we sample new data from the specified distributions and, to derive the transfer distribution, randomly change the mean of $X$'s probability distribution. However, in real-world applications there will only exist a finite dataset: some number of realizations from the inaccessible ground truth distribution, together with only a limited number of transfer distributions. In effect, by sampling from the ground truth directly, we are assuming an infinitely large dataset from which we draw data. Instead, a more realistic approach is to mimic a finite-data environment by first creating a static dataset. Afterwards, we can draw data samples from it during training and compute our parameter estimates in the usual way. For such a more realistic approach it is also important to account for the fact that in practice there exists only a limited number of transfer distributions. Therefore, to define these finite datasets we use four hyperparameters:
\begin{enumerate}
\item The total number of observations in the training distribution
\item The total number of observations in each sample of the transfer distribution
\item The number of observations used in each training episode
\item The number of distinct transfer distributions
\end{enumerate}
When sampling our data from the predefined datasets we only use a portion of that data during each training episode. Specifically, we use the bootstrap to sample our data, such that we draw samples with replacement from the datasets. Table \ref{Tab:LimitTable} depicts mean and standard deviation results for $\sigma(\alpha)$ over 10 iterations each for different combinations of the four hyperparameters. To analyze effects for datasets of different size we used 1000, 100 and 30 total observations for the training samples and 500, 50 and 15 for each of the transfer distributions. For each combination, we used a fixed number of observations per training episode and 10, 3, and 1 transfer distributions.

\begin{table*}[ht]
\centering
\begin{tabular}{c|c|c|c|c|c}
\thead{Total observations \\ for training sample} & \thead{Total observations \\ for transfer samples} & \thead{Observations \\ per episode} & \thead{Total transfer \\ distributions} & Mean of $\sigma(\alpha)$ & St.d. of $\sigma(\alpha)$\\
\hline
1000 & 500 & 200 & 10 & 0.995 & 0.009 \\
1000 & 500 & 200 & 3 & 0.823 & 0.306 \\
1000 & 500 & 200 & 1 & 0.849 & 0.235 \\
100 & 50 & 50 & 10 & 0.898 & 0.294 \\
100 & 50 & 50 & 3 & 0.801 & 0.395 \\
100 & 50 & 50 & 1 & 0.800 & 0.396 \\
30 & 13 & 13 & 10 & 0.300 & 0.458 \\
30 & 15 & 15 & 3 & 0.500 & 0.500 \\
30 & 15 & 15 & 1 & 0.500 & 0.500
\end{tabular}
\caption{Mean results for predefined datasets. For all results we used the average of 10 repetitions.}
\label{Tab:LimitTable}
\end{table*}

The table shows that for strongly restricted preselected data samples the predictive accuracy of our methodology decreases, while no strong effect can be detected for moderate restrictions. For example, for 1000 training samples with 10 transfer distributions we reach a mean value of 0.995 for $\sigma(\alpha)$, which drops to 0.300 for 30 samples. Our methodology therefore works with moderately limited data sets and can be used in practical applications. Moreover, the effect of stark reductions in the number of available transfer distributions is also distorting.
While we remain able to identify the correct causal direction for 10 transfer distributions, this capability is greatly limited for lower values. For instance, for 1000 training and 500 transfer observations with 3 transfer distributions the mean of $\sigma(\alpha)$ drops to 0.823. Our explanation is that for a limited number of transfer sets the optimized parameter $\alpha$ will strongly depend on the randomly chosen value for $\mu$, which determines the mean of the transfer distribution. If a value close to 0 is selected, the training and transfer sets are not sufficiently different to guarantee an appropriate difference of the ELBOs. Therefore, under the assumption that the data generating processes guarantee a sufficient difference of the data sets, this effect may be less important in practice.

\section{Conclusions}

In our experiments we have improved on the meta-learning approach from \citet{bengio2019meta} and \citet{dasgupta2019causal} by expressing the causal graph structure more explicitly using FCMs. This innovation yields great improvements in the results, as we are able to demonstrate faster and better convergence to higher confidence in the correct causal direction in more difficult problem setups compared to prior works. In particular, we have shown that for generalized independent hidden effects and with single latent confounders, we are able to recover the correct causal direction with high confidence. The framework that we use is easily extendable to a larger number of confounding factors as well as more observational variables, as they can be fit into our structure by introducing more inputs to the relevant FCM networks and more FCM networks to learn, respectively. Our analysis further shows that our architecture is also able to predict the existence of a causal relationship. Especially with the introduction of super-runs, which combine several model predictions using the notion of conditional independence, our predictive performance improves for both the correct causal direction and its existence, though some caution with respect to the cutoff parameter should be exercised. Finally, we show that in cases where model assumptions are violated, the model's predictive performance decreases, but the model is robust overall with respect to moderate violations. When we have distributional deviation from normality, the $\alpha$ paths have more difficulty converging but still tend towards the correct direction if the actual distribution resembles a normal distribution. When we constrain the model to finite datasets, the probability of converging to the correct causal relation decreases as a function of dataset size and the number of transfer distributions, but we empirically still see that we can often recover the correct relation regardless.

\subsection*{Directions for Further Research}

In order to generalize assumptions about the distribution of confounding effects, we introduce a variational inference framework for the latent variables that we measure through some proxy variable. However, this also necessitates the replacement of the exact likelihood with the ELBO, which is not a sharp bound and allows that $L_{X\rightarrow Y} > L_{Y \rightarrow X}$ while also $\mathrm{ELBO}_{X \rightarrow Y} < \mathrm{ELBO}_{Y \rightarrow X}$. Especially for the non-normality tests, this constituted a major issue. Therefore, we suggest that additional work on this implementation of the ELBO could find a more robust bound and thus improve convergence towards the correct causal direction.
Furthermore, this work can be extended to larger causal graphs with more complex latent and observed variable structures. To infer the causal direction on larger graphs, we suggest assuming a starting orientation and iteratively performing a two-step hill-climbing process similar to the method of \citet{CGNN}:
\begin{enumerate}
\item Update the causal direction scheme by relearning the transfer distributions in topological order, using the current orientation to decide the causal parents of variables.
\item Resolve any formed cycles by reorienting the violating causal arrows with the smallest confidence.
\end{enumerate}
However, we note the ability of meta-CGNN in \citet{metaCGNN} to use dataset and causal direction pairs as meta-tasks to train networks that can leverage dataset similarities to find causal directions more quickly at meta-test time. It may be possible to combine these two works in order to reduce the computational cost of extending this approach to larger graph sizes. Finally, we explore only some perturbations of the distributional assumptions. However, there is a large number of further possibilities to explore, as well as extensions to more complex datasets. Improvements on these fronts will allow for the development of agents with improved robustness and usefulness in practice.
\section{Introduction} Consider a Kerr black hole of mass $M$ and dimensionless spin $a/M=1-\epsilon^2$, with $\epsilon\ll 1$; and send a particle of rest mass $\mu\ll M$ into the black hole along the equatorial plane. Ignore back-reaction effects on the particle's motion, and assume it follows a geodesic of the Kerr geometry, with conserved specific energy $E$ and angular momentum $L$. Jacobson and Sotiriou \cite{js} showed that there exists an open domain in the $\{\mu,E,L\}$ space, for which the final configuration is overextremal: \begin{equation}\label{OSgeodesics} a M +\mu L>(M+\mu E)^2. \end{equation} Later work by Barausse {\it et al.}\ \cite{bck1,bck2} demonstrated that, at least for some orbits in that open domain, radiative losses cannot prevent the overspinning. The purpose of our current work is to show that the overspinning scenario is entirely ruled out once all relevant back-reaction effects are taken into account. The history of the problem goes a while back. In 1974 Wald \cite{wald} proposed the black hole--particle system as a test bed for weak cosmic censorship. He showed (ignoring back-reaction) that an initially extremal Kerr--Newman black hole cannot be made overextremal by capturing a particle, although parameters can be chosen so that the black hole remains extremal. Particles carrying ``dangerous'' amounts of angular momentum, electric charge and/or spin are never captured; rather, they get deflected away by centrifugal, electrostatic and/or spin-spin coupling forces. In 1999 Hubeny \cite{hub} argued that the situation is different if one starts with a {\it nearly} extremal black hole: focusing on the problem of an electric charge in a Reissner-Nordstr\"om geometry, she showed there is an (open) set of captured orbits that overcharge the black hole in that case. However, she also reasoned that back-reaction from the electromagnetic self-force may well prevent suitable particles from being captured. Recent work has gone some way towards confirming that expectation. Isoyama, Sago and Tanaka \cite{soich} considered an electric charge released from the point of quasi-static equilibrium---an initial configuration which they represented by the exact (static) ``double Reissner-Nordstr\"{o}m'' solution, whose total Arnowitt--Deser--Misner (ADM) energy and charge are known analytically. Showing that radiative losses from the subsequent plunge into the black hole are negligible, Isoyama {\it et al.}~calculated that the final configuration cannot be that of an overcharged geometry. They further argued that the same conclusion must then apply to any radially falling charge. In a later work, Zimmerman, Vega and Poisson \cite{zimm} presented an explicit (numerical) calculation of the charged particle's trajectory including the full effect of the electromagnetic self-force. Analyzing a large sample of orbits within the domain identified by Hubeny, they found no example of successful overcharging: All particles with charge and energy suitable for overcharging the black hole were found to be repelled before reaching the horizon. However, both above analyses \cite{soich,zimm} have neglected the coupling to gravity (both the back-reaction from the gravitational perturbation sourced by the perturbed electromagnetic stress-energy, and the perturbation in the electromagnetic field due to the particle's mass), which cannot be easily justified for the problem at hand. 
A complete analysis would require a calculation of both electromagnetic and gravitational self-forces in the coupled problem, which remains a difficult challenge despite recent progress \cite{zimmerman,tlinz,Zimmerman:2015rga}. Here we instead consider the purely gravitational problem of an electrically-neutral particle in Kerr geometry. Thus evading the coupling problem comes at the obvious cost of giving up the convenience of working in a spherically symmetric background. However, the current state of affairs in self-force calculations is such that computations on a Kerr background are now routine (see, e.g., the recent \cite{vandeMeent:2015lxa}). Here we revisit the problem from this new vantage point, and provide what we believe to be a first example of a complete, self-consistent analysis of the overspinning problem within the first-order self-force approximation. The necessary groundwork for our calculation was laid in a previous paper by two of us \cite{CB}, to be referred to in what follows as CB. Focusing on equatorial capture trajectories, and excluding deeply bound orbits (see below), CB formulated a set of ``censorship conditions'' that must be satisfied in order for the overspinning scenario to be ruled out for all such orbits. The conditions are formulated in terms of a certain one-parameter family of geodesic trajectories, and involve the gravitational self-force (GSF) evaluated along the trajectories in the limit of an extremal geometry. In the current work we numerically evaluate the censorship conditions, and establish, with confidence, that they are indeed satisfied. Our results reveal some interesting nuances. To describe them we must first review the results of CB in some more detail. CB first identify (working at leading order in $\epsilon\ll 1$) the complete region in the parameter-space $\{E,L,\epsilon,\eta:=\mu/M\}$ for which Eq.\ (\ref{OSgeodesics}) is satisfied, i.e.~overspinning occurs, if the GSF is ignored and the particle follows a geodesic of the Kerr background. It is found that, for any $E>1$, overspinning occurs in certain narrow ranges ($\propto\epsilon$) of $\eta,L$ values. Geodesics with $E\leq 1$ cannot overspin. Focusing, therefore, on particles sent in from infinity, CB proceed to incorporate all relevant GSF effects through $O(\eta^2)$ in Eq.\ (\ref{OSgeodesics}). This includes gravitational-radiation losses, as well as the (more subtle but not less important) $O(\eta)$ correction to the critical value of impact parameter for capture. Since the censorship condition is formulated in terms of background (``black-hole centred'') coordinates, which is beneficial in practice for GSF calculations, rather than in a ``center-of-mass'' system where the identification of ADM quantities is easier, care is taken in expressing relevant ADM quantities in terms of background-related quantities through the required order [$O(\eta^2)$ in Eq.\ (\ref{OSgeodesics})]. CB thus obtain an inequality involving the initial energy and angular momentum (at infinity), as well as $\eta$, $\epsilon$ and the GSF, which describes a condition for the final configuration to remain subextremal. CB then show that the above censorship condition is satisfied for all capture trajectories (and for all sufficiently small $\eta,\epsilon$) if and only if it is satisfied for the one-parameter family of ``critical'' orbits---those lying on the scatter--capture separatrix in the space of initial conditions. 
Thus, the problem may be conveniently reduced to the question of whether critical orbits satisfy the censorship condition. CB parametrize such orbits by the energy-at-infinity, and formulate a necessary and sufficient censorship condition on that one-parameter family.

The behavior of critical orbits is subtle when radiation is taken into account. CB distinguish between ``strong'' (exponential) and ``weak'' (power-law) fine-tuning of the initial conditions, depending on the proximity to exact criticality.\footnote{In CB's terminology, these two types are respectively referred to as ``fine-tuned'' and ``generic'' near-critical orbits. We prefer here ``strong'' and ``weak'' fine-tuning, to avoid possible confusion. Both types are fine-tuned, but at a different level.} {\it Strongly} fine-tuned orbits radiate a significant fraction of their initial energy as they evolve adiabatically through a sequence of unstable circular orbits; in the most extreme case (``perfect'' fine-tuning) ``all'' the orbital energy is radiated away and the particle plunges upon reaching the innermost stable circular orbit (ISCO) \cite{gund}. {\it Weak} fine-tuning still results in trapping on quasi-circular orbits, formally for an infinite amount of time as $\eta\to 0$, but the fraction of energy emitted is vanishingly small in that limit. (A more precise definition of weak and strong fine-tuning will be given in Sec.\ \ref{Sec:review}.)

CB show that, in the case of weak fine-tuning, all radiation-related terms drop out of the censorship condition through the relevant order [$O(\eta^2)$]. Only one GSF term remains, associated with the conservative correction to the critical value of the angular momentum (at fixed initial energy). CB describe two methods for calculating this correction. The first, direct method involves integration of the GSF along the critical geodesics all the way in from infinity and down to the limiting (unstable) circular orbit. The second method, which relies on a recent perturbative formulation of the conservative Hamiltonian for circular orbits \cite{Isoyama14,hami,Tiec:2015cxa}, requires information about the local metric perturbation (not the GSF itself) only on the limiting circular orbit. The second method is much more easily implemented, and will be used here to compute the shift in the critical value of the angular momentum. In the case of strong fine-tuning, radiative terms enter the censorship condition already at the leading order in $\eta$. CB show that the additional information required in this case amounts to a single function of the circular-orbit energy, namely the ratio ${\cal R}(E)$ between the flux of energy absorbed by the black hole and the flux of energy radiated to null infinity.

Thus, two separate censorship conditions emerge. The first applies to weakly fine-tuned orbits and involves only information about the conservative piece of the GSF. It has a remarkably simple form, especially in conjunction with the Hamiltonian formulation of \cite{Isoyama14,hami,Tiec:2015cxa}. The second censorship condition applies to strongly fine-tuned orbits, and requires also knowledge of the radiative flux ratio ${\cal R}(E)$. Both conditions involve quantities that are formally evaluated in the limits $\epsilon\to 0$ and $\eta\to 0$. Thus, $\epsilon$ and $\eta$ themselves do not feature in the final form of these conditions.
The censorship inequalities should be interpreted as necessary and sufficient conditions for overspinning to be averted {\it for sufficiently small $\epsilon$ and $\eta$}. However, there is no assumption about the relative magnitudes of these two parameters.

In the current paper we evaluate both censorship conditions. We consider weakly fine-tuned orbits first. We give strong numerical evidence to suggest that all such orbits {\em precisely saturate} the overspinning condition, i.e.\ they lead to a precisely extremal final geometry, for any value of the initial energy. Next we examine whether over-extremality may be achieved through a strong fine-tuning of the initial conditions. This is an intriguing possibility, not least because violation of censorship has been (famously) demonstrated in the past for other carefully fine-tuned configurations \cite{chop}. However, we find that strong fine-tuning in our problem always acts to drive the system {\em away} from extremality. Capture of strongly fine-tuned orbits results in a subextremal geometry.

A corollary of the above results is that, within our first-order GSF approximation, captured particles generically leave the black hole subextremal, except in the case of weakly fine-tuned critical orbits, which appear to precisely saturate the censorship condition. In the latter case, one must go beyond the first-order GSF approximation to determine whether saturation actually occurs in the physical problem, and theoretically there even remains the possibility of overspinning. One of our main conclusions is that a definitive answer to the overspinning question cannot be reached within the first-order GSF approximation, just as it could not be reached within the geodesic approximation. This is perhaps disappointing, but also intriguing.

The rest of the paper is structured as follows. In Section \ref{Sec:review} we give a more detailed review of the CB analysis, and present the censorship conditions. In Section \ref{Sec:conservative} we evaluate the condition in the case of weak fine-tuning and conclude that it is saturated, but overspinning never occurs. In Section \ref{Sec:dissipative} we show that strong fine-tuning promotes censorship in our problem. Section \ref{Sec:conclusions} contains a summary and a discussion of our results and their implications. Throughout this paper we set $G=c=1$ and use the metric signature $(-,+,+,+)$.

\section{Review of CB: the censorship conditions}\label{Sec:review}

\subsection{Basic setup}

The system in question is a gravitationally-bound binary of mass ratio $\eta:=\mu/M\ll 1$. The larger body is a near-extremal Kerr black hole of spin $aM=M^2(1-\epsilon^2)$, with $\epsilon\ll 1$.\footnote{Beware that the alternative convention $a/M=(1-\epsilon^2)^{1/2}\simeq 1-\epsilon^2/2$ is common in related literature.} The smaller body (``particle'') is a compact object of negligible spin, which, for concreteness and simplicity, we may think of as a small non-spinning black hole. We work in black-hole perturbation theory, which is based on a formal expansion in $\eta$ about the background geometry of the large black hole. Specifically, we will consider the binary dynamics in the first-order GSF approximation, namely through the first order in $\eta$ beyond the limit of geodesic motion. We let $\{t,r,\theta,\varphi\}$ denote standard Boyer-Lindquist coordinates on the background geometry, and let the particle be sent in along the equatorial plane, $\theta=\pi/2$.
From symmetry it is clear that the orbit will remain equatorial even under the effect of the GSF. It is assumed that the standard GSF formulation of the binary dynamics applies: the particle's motion is described locally in terms of an effective (accelerated) trajectory on the background spacetime, which, at our order of approximation, is insensitive to the particle's internal structure. In particular, the particle will be assumed to have fallen into the large black hole if and only if its trajectory has crossed the latter's event horizon. It will be useful to define, for given $\{M,\mu,\epsilon\}$, \begin{eqnarray}\label{ELADM} \mathsf{E}&:=&(M_{\rm ADM}-M)/\mu, \\ \mathsf{L}&:=&(J_{\rm ADM}-aM)/(\mu M), \end{eqnarray} where $M_{\rm ADM}$ and $J_{\rm ADM}$ are the total (conserved) ADM mass and angular momentum of the binary's spacetime. $\mathsf{E}$ may be interpreted as the particle's contribution (per $\mu$) to the ADM energy in the ``black hole frame'', and $\mathsf{L}$ is the particle's specific (and dimensionless) orbital angular momentum, again in the ``black hole frame''.\footnote{We use $\mathsf{E}$ and $\mathsf{L}$ in place of CB's $E_{\rm ADM}^{\rm p}$ and $L_{\rm ADM}^{\rm p}$, for notational simplicity.} For the GSF formulation to make sense, we require, in addition to $\eta\ll 1$, also $\mathsf{E}\ll 1/\eta$. We do not consider ultrarelativistic particles with $\mathsf{E}\gg 1$ (not even ones with $\mathsf{E}\ll 1/\eta$), because for such orbits there does not yet exist a rigorous GSF formulation. Nor do we consider the family of ``deeply bound'' orbits discussed in Sec.\ II.B of CB: low-energy orbits that are confined to the immediate exterior of the large hole, below the ISCO radius. CB explain that, for $\epsilon\ll 1$ and relevant values of $\eta$, such orbits plunge into the black hole within an amount of proper time comparable to the particle's own ``light-crossing'' time. Ignoring the particle's finite extent does not seem justified in that situation, so such orbits require a separate treatment. For simplicity, CB opted to exclude deeply bound orbits from the analysis. In practice, this amounts to assuming that the particle is thrown in from outside the ISCO radius. Our system is thus parametrized by the quartet of values $\{\eta,\epsilon,\mathsf{E},\mathsf{L}\}$, up to an initial orbital phase and up to a trivial overall scale set by $M$, neither of which is of relevance in our analysis. This parameter space splits into two precisely complementary subspaces, one corresponding to orbits that end up crossing the horizon, and another corresponding to orbits that scatter to infinity without crossing the horizon. For captured orbits, let $M_{\rm B}$ and $J_{\rm B}$ be the Bondi energy and angular momentum of the spacetime at the (retarded) time of horizon crossing. The question of overspinning then takes the following simple form: Among all configurations $\{\eta,\epsilon,\mathsf{E},\mathsf{L}\}$, with arbitrarily small $\{\eta,\epsilon\}$, are there any in which the particle crosses the horizon and $J_{\rm B}>M_{\rm B}^2$? If so, then the likely scenario is that a naked singularity is exposed \cite{wald}. If not, then the conjecture of cosmic censorship holds. \subsection{Test-particle limit} Let us first review the situation when back-reaction is ignored and the particle is treated as a test mass (this is the case first analyzed in \cite{js}, but the details below follow CB).
The particle moves on a timelike geodesic of the Kerr background metric $g_{\alpha\beta}$, with conserved (specific) energy and angular momentum given by $\mathsf{E}=-u_t$ and $\mathsf{L}=u_\varphi/M$, where $u^{\alpha}$ is the particle's four-velocity and $u_{\alpha}=g_{\alpha\beta}u^{\beta}$. Assuming the particle is thrown in from outside the ISCO radius, it will cross the horizon if and only if $\mathsf{L}$ is smaller than a certain critical value $\mathsf{L}_{c}^{(0)}(\mathsf{E};\epsilon)$, which corresponds to the angular momentum of an unstable circular geodesic orbit with energy $\mathsf{E}$ (``peak of the effective potential''). We use a superscript $(0)$ to denote the test-particle limit. CB find, in the near-extremal case, \begin{equation}\label{Lc0} \mathsf{L}_c^{(0)}(\mathsf{E};\epsilon)=2\mathsf{E}+(6\mathsf{E}^2-2)^{1/2}\epsilon+O(\epsilon^2). \end{equation} The Bondi mass and angular momentum are constant and given by $M_{\rm B}=M+\mu \mathsf{E}$ and $J_{\rm B}=aM+\mu M \mathsf{L}$, respectively. Captured orbits overspin the black hole if and only if they satisfy $J_{\rm B}>M_{\rm B}^2$, which may be rearranged to give $\mathsf{L}>2\mathsf{E}+\epsilon^2/\eta+\eta \mathsf{E}^2$. Thus, for given $\{\eta,\epsilon,\mathsf{E}\}$, the capture condition bounds $\mathsf{L}$ from above, $\mathsf{L}<\mathsf{L}_{c}^{(0)}$, while the overspinning condition bounds it from below. Neglecting high-order terms in $\epsilon$, overspinning would occur if the double inequality \begin{equation}\label{double_ineq} \epsilon^2/\eta+\eta \mathsf{E}^2<\mathsf{L}-2\mathsf{E}<(6\mathsf{E}^2-2)^{1/2}\epsilon \end{equation} can be satisfied with $\epsilon,\eta\ll 1$. A simple analysis \cite{CB} shows that (\ref{double_ineq}) is never satisfied for $\mathsf{E}\leq 1$, but it can be satisfied for any $\mathsf{E}>1$ by choosing $\eta$ from within the range \begin{equation}\label{mrange} \epsilon f_-(\mathsf{E}) <\eta < \epsilon f_+(\mathsf{E}), \end{equation} with $f_{\pm}=\frac{1}{\sqrt{2}\, \mathsf{E}^2}\left[\sqrt{3\mathsf{E}^2-1}\pm \sqrt{\mathsf{E}^2-1}\right]$. For any such choice, there is an open range of $\mathsf{L}$ values satisfying (\ref{double_ineq}). Thus, for a given $\epsilon\ll 1$, there is an open overspinning domain in the space of $\{\eta,\mathsf{E}, \mathsf{L}\}$, described by $\mathsf{E}>1$ together with Eqs.\ (\ref{mrange}) and (\ref{double_ineq}). This domain is depicted in Figures 2 and 3 of CB. The key point: if back-reaction effects were negligible, it would be possible to overspin the black hole by throwing a particle in from infinity. Equation (\ref{mrange}) shows that, as might be expected, $\eta$ and $\epsilon$ are of the same order of magnitude in overspinning configurations. Equation (\ref{double_ineq}) shows, in turn, that all overspinning orbits have $\mathsf{L}-2\mathsf{E}=O(\eta)$. Such orbits are located close to the parameter-space separatrix between captured and scattered orbits. They each execute $O(\ln\eta)\gg 1$ near-circular revolutions near the peak of the effective potential before plunging into the black hole. The amount of excess spin produced by such orbits is $J_B-M_B^2=O(\eta^2)$. This implies that back-reaction effects may qualitatively change the outcome of the analysis. A priori, $O(\eta)$ self-acceleration may shift the value of the critical angular momentum (for a given energy) by an amount of $O(\eta)$, comparable to the width of the overspinning window.
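The geodesic-limit window is easy to explore concretely. The following short Python script (our own illustration, not taken from CB; we set $M=1$, and the values of $\mathsf{E}$ and $\epsilon$ are arbitrary) picks a configuration inside the window defined by Eqs.\ (\ref{double_ineq}) and (\ref{mrange}), and confirms that it is simultaneously captured and overspinning when back-reaction is ignored:
\begin{verbatim}
import numpy as np

def f_pm(E):
    # boundaries of the eta/epsilon window, Eq. (mrange)
    s3, s1 = np.sqrt(3*E**2 - 1), np.sqrt(E**2 - 1)
    return (s3 - s1)/(np.sqrt(2)*E**2), (s3 + s1)/(np.sqrt(2)*E**2)

E, eps = 1.2, 1e-4                   # overspinning requires E > 1
fm, fp = f_pm(E)
eta = 0.5*(fm + fp)*eps              # choose eta inside the window
lo = eps**2/eta + eta*E**2           # overspinning bound on L - 2E
hi = np.sqrt(6*E**2 - 2)*eps         # capture bound, from Eq. (Lc0)
L = 2*E + 0.5*(lo + hi)              # midpoint of the open range
MB, JB = 1 + eta*E, (1 - eps**2) + eta*L    # Bondi mass and spin
print(lo < L - 2*E < hi, JB > MB**2)        # True True
\end{verbatim}
Rerunning with smaller $\epsilon$ (and $\eta$ scaled with it) leaves both printed conditions intact, illustrating that the window persists for arbitrarily small $\epsilon,\eta$.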
In addition to this shift, radiation of energy and angular momentum in gravitational waves to infinity may, a priori, alter the final excess spin by an amount of $O(\eta^2\ln\eta)$, {\em greater} than the excess spin predicted with radiation neglected. Clearly, {\em the point-particle approximation is inadequate} in our problem. It is a case where a leading-order perturbative treatment predicts its own inadequacy. By the end of our analysis it will become clear that, remarkably, the same conclusion carries over to the next perturbative order. \subsection{Self-force approximation and the correction to the capture--scatter separatrix} In the first-order GSF approximation, the particle's motion is described locally by an accelerated worldline in the Kerr background metric $g_{\alpha\beta}$. The tangent four-velocity, normalized using $g_{\alpha\beta}u^{\alpha}u^{\beta}=-1$, satisfies $\mu u^{\beta}\nabla_{\beta}u^{\alpha}=F^{\alpha}$, where $\nabla_{\beta}$ is a covariant derivative compatible with $g_{\alpha\beta}$, and $F^{\alpha}(\propto \eta^2)$ is the GSF. The latter may be attributed to a certain smooth (locally-defined) self-potential denoted $h^R_{\alpha\beta}$---the ``R field''---which is a particular solution of the source-free linearized Einstein's equations \cite{DW}. The particle's trajectory may also be interpreted as a geodesic in $g_{\alpha\beta}+h^R_{\alpha\beta}$. For given $\epsilon,\eta$, our equatorial orbits may again be parametrized by the ADM-related constants $\mathsf{E},\mathsf{L}$ defined in Eq.\ (\ref{ELADM}). These now also carry information about the self-gravity of the particle and about the radiation content of spacetime. For given $\epsilon,\eta,\mathsf{E}$ there exists a critical value $\mathsf{L}_c(\mathsf{E};\epsilon,\eta)$ representing a threshold for immediate capture: orbits with $\mathsf{L}>\mathsf{L}_c$ scatter off the black hole (at least at first approach; they may still eventually fall into the black hole following subsequent encounters), while those with $\mathsf{L}<\mathsf{L}_c$ are immediately captured. The geodesic limit, $\eta\to 0$, of $\mathsf{L}_c(\mathsf{E};\epsilon,\eta)$ is $\mathsf{L}_c^{(0)}(\mathsf{E};\epsilon)$, given in Eq.\ (\ref{Lc0}). Not surprisingly, much of the physics relevant to our problem is played out near the capture--scatter threshold. In the geodesic case (and for a fixed $\epsilon$), $\mathsf{L}=\mathsf{L}_c^{(0)}(\mathsf{E};\epsilon)$ describes a 1-parameter family of disjoint {\it critical orbits}. Each critical orbit is homoclinic in nature, approaching an (unstable) circular geodesic at $t\to\infty$. The situation becomes more subtle when GSF effects are included, due to dissipation. For given $\epsilon,\eta$, the function $\mathsf{L}=\mathsf{L}_c(\mathsf{E};\epsilon,\eta)$ still describes a 1-parameter family of critical orbits, but these are no longer disjoint. Rather, they all merge to form a ``global attractor''. The attractor may be thought of as a smooth sequence of quasi-circular unstable orbits starting at the ``light ring'' and ending at the ISCO. A critical orbit meets the attractor at a point that depends on its initial energy, and proceeds to evolve radiatively along it until it transits to a plunge near the ISCO. A small perturbation away from the critical value $\mathsf{L}=\mathsf{L}_c(\mathsf{E};\epsilon,\eta)$ results in the particle's leaving the attractor before the ISCO is reached. The point of departure may be controlled by fine-tuning the magnitude of $\mathsf{L}$ around its critical value.
When considering near-critical orbits in the overspinning analysis, it is important to distinguish between two cases, differing by how well $\mathsf{L}$ is tuned to its critical value. ``Strong'' fine-tuning is one in which $\mathsf{L}-\mathsf{L}_c\sim \exp(-\alpha/\eta)$, with some positive constant $\alpha$. Such orbits radiate away $O(\eta)$ amounts of energy and angular momentum as they evolve along the attractor [leading to $O(1)$ changes in the values of the particle's specific energy and angular momentum]. ``Weak'' fine-tuning is one in which $\mathsf{L}-\mathsf{L}_c\sim \eta^\beta$, with some $\beta\geq 1$. Such orbits radiate away only $O(\eta^2\ln\eta)$ of energy and angular momentum before leaving the attractor. As will be discussed below, CB found that radiation effects enter the overspinning condition at relevant order only for strongly fine-tuned near-critical orbits. For all other orbits, radiation terms are negligible within the first-order GSF approximation. The overspinning analysis requires an explicit expression for $\mathsf{L}_c(\mathsf{E};\epsilon,\eta)$ through $O(\eta,\epsilon)$. For orbits that come from infinity (i.e., ones with $\mathsf{E}\geq 1$) CB obtain, to the required order, \begin{equation}\label{Lc} \mathsf{L}_c(\mathsf{E};\epsilon,\eta)=\mathsf{L}_c^{(0)}(\mathsf{E};\epsilon)+\delta\mathsf{L}_c(\mathsf{E};\eta), \end{equation} where $\mathsf{L}_c^{(0)}$ is given in Eq.\ (\ref{Lc0}), and the GSF correction is \begin{equation}\label{dLc} \delta\mathsf{L}_c=-\eta(\mathsf{E}^2+1)+\lim_{\epsilon\to 0}\int_{R_\epsilon}^{\infty}\left(2\tilde F_t+\tilde F_{\varphi}\right)dr/u^r. \end{equation} Here $\tilde F_t:=F_t/\mu$ and $\tilde F_\varphi:=F_\varphi/(M\mu)$ are components of the self-acceleration ($\propto \eta$), and the integration is carried out along the critical {\em geodesic} with energy $\mathsf{E}$ and angular momentum $\mathsf{L}_c^{(0)}(\mathsf{E};\epsilon)$. The cutoff radius $r=R_{\epsilon}(\mathsf{E};\epsilon)$ is that of the corresponding unstable circular geodesic. The term $-\eta(\mathsf{E}^2+1)$ arises from the transformation between the background-defined energy-at-infinity and angular-momentum-at-infinity, $-u_{t}(r\to\infty)$ and $u_{\varphi}(r\to\infty)$ respectively (these are used in the GSF description of the motion), and the ADM-related quantities $\mathsf{E}$ and $\mathsf{L}$---see Eq. (72) and Appendix A of CB. \subsection{Censorship conditions} Consider an orbit that ends up crossing the event horizon, and let ${\cal E}^+$ and ${\cal L}^+$ be the total energy and angular momentum in gravitational waves radiated out to null infinity up until the retarded time of horizon crossing. The Bondi mass and angular momentum corresponding to that retarded time are $M_B=M+\mu \mathsf{E}-{\cal E}^+$ and $J_B=aM +\mu \mathsf{L}-{\cal L}^+$, respectively. The condition for overspinning is $J_B>M_B^2$, which, upon substituting $a/M=1-\epsilon^2$ and rearranging, reads \begin{equation} \label{OS1} \epsilon^2+\eta(2\mathsf{E}-\mathsf{L})-{\cal W}^+ + (\eta\mathsf{E}-{\cal E}^+)^2<0, \end{equation} where ${\cal W}^+:=2{\cal E}^+-{\cal L}^+$. CB showed that the left-hand side of this inequality is minimized (with respect to $\mathsf{L}$, at fixed $\mathsf{E},\epsilon,\eta$) by near-critical orbits with $\mathsf{L}=\mathsf{L}_c(\mathsf{E};\epsilon,\eta)+o(\eta)$. Thus it suffices to restrict attention to orbits of this type.
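As a check on the algebra, the rearrangement leading to Eq.\ (\ref{OS1}) is easily verified symbolically. The following minimal sympy snippet (ours; $M=1$, with \texttt{Ep} and \texttt{Lp} standing for ${\cal E}^+$ and ${\cal L}^+$) confirms that the left-hand side of (\ref{OS1}) is exactly $M_B^2-J_B$:
\begin{verbatim}
import sympy as sp

eps, eta, E, L, Ep, Lp = sp.symbols('eps eta E L Ep Lp')
MB = 1 + eta*E - Ep              # Bondi mass at horizon crossing
JB = (1 - eps**2) + eta*L - Lp   # Bondi angular momentum
W = 2*Ep - Lp
# Eq. (OS1): overspinning J_B > M_B^2 is equivalent to lhs < 0
lhs = eps**2 + eta*(2*E - L) - W + (eta*E - Ep)**2
print(sp.expand(MB**2 - JB - lhs))   # 0, so the two forms agree
\end{verbatim}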
If, for some $\mathsf{E}$ (and $\epsilon,\eta\ll 1$), the inequality (\ref{OS1}) can be shown to apply for a near-critical orbit, that would establish a violation of censorship. If, conversely, (\ref{OS1}) can be shown not to be satisfied even for near-critical orbits (for any $\mathsf{E}$), then it cannot be satisfied for any captured orbit, and censorship is protected. We can therefore proceed by substituting $\mathsf{L}=\mathsf{L}_c(\mathsf{E};\epsilon,\eta)$ in Eq.\ (\ref{OS1}). Using (\ref{Lc}) and (\ref{Lc0}) one thus arrives at the {\it censorship} condition \begin{equation} \label{OS2} \epsilon^2 + \epsilon\eta \phi(\mathsf{E})-\eta\delta\mathsf{L}_c(\mathsf{E})-{\cal W}^+ +(\eta\mathsf{E}-{\cal E}^+)^2\geq 0, \end{equation} where \begin{equation}\label{phi} \phi(\mathsf{E}):=-(6\mathsf{E}^2-2)^{1/2}. \end{equation} Censorship is protected if and only if (\ref{OS2}) is satisfied for all $\mathsf{E}$ when $\epsilon,\eta$ are arbitrarily small. CB next showed that (\ref{OS2}) can be written in the convenient equivalent form \begin{equation} \label{OS3} \epsilon^2+ \epsilon\eta \phi(\mathsf{E})-\eta\delta\mathsf{L}_c^{\rm cons}(\mathsf{E})-{\cal W}^+_{\rm (qc)} +(\eta\mathsf{E}-{\cal E}^+_{\rm (qc)})^2\geq 0, \end{equation} where $\delta\mathsf{L}_c^{\rm cons}$ is the contribution to $\delta\mathsf{L}_c$ from the {\it conservative} piece of the GSF, and ${\cal W}^+_{\rm (qc)},{\cal E}^+_{\rm (qc)}$ are the contributions to ${\cal W}^+,{\cal E}^+$ from the quasi-circular part of the near-critical orbit. [CB established this by showing that the dissipative contribution to $\delta\mathsf{L}_c$ cancels against the part of ${\cal W}^+$ corresponding to the ``approach'' part of the critical orbit; that the contribution to ${\cal E}^+$ from the approach is negligible in Eq.\ (\ref{OS2}); and that the contributions to both ${\cal W}^+$ and ${\cal E}^+$ from the final plunge from the attractor to the horizon are also negligible in that equation.] In Eq.\ (\ref{OS3}), $\delta\mathsf{L}_c^{\rm cons}$ is an $O(\eta)$ quantity for any near-critical orbit, while the $\eta$-scaling of the radiative terms ${\cal W}^+_{\rm (qc)}$ and ${\cal E}^+_{\rm (qc)}$ depends on the degree of fine-tuning. For weakly fine-tuned orbits CB find ${\cal W}^+_{\rm (qc)}=O(\epsilon)O(\eta^2\ln\eta)$ and ${\cal E}^+_{\rm (qc)}=O(\eta^2\ln\eta)$, while for strong fine-tuning the scaling is instead ${\cal W}^+_{\rm (qc)}=O(\epsilon)O(\eta)$ and ${\cal E}^+_{\rm (qc)}=O(\eta)$. It follows that, for weak fine-tuning, both radiative terms ${\cal W}^+_{\rm (qc)}$ and ${\cal E}^+_{\rm (qc)}$ are subdominant in Eq.\ (\ref{OS3}) and drop out at our working order (we assume here $\epsilon|\ln\eta|\ll 1$). The two terms survive in Eq.\ (\ref{OS3}) only for strongly fine-tuned orbits. We proceed by considering the two cases separately. \subsubsection{Weak fine-tuning} In the case of weak fine-tuning we are left with the simple censorship condition \begin{equation} \label{OS4} \epsilon^2+\eta\epsilon \phi(\mathsf{E})+\eta^2 \psi(\mathsf{E})\geq 0, \end{equation} with $\psi(\mathsf{E}):=\mathsf{E}^2-\delta\breve\mathsf{L}_c^{\rm cons}(\mathsf{E})$. We have introduced \begin{equation} \label{brevedL} \delta\breve\mathsf{L}_c^{\rm cons}:=\eta^{-1}\delta\mathsf{L}_c^{\rm cons}, \end{equation} so that both coefficients $\phi(\mathsf{E})$ and $\psi(\mathsf{E})$ in Eq.\ (\ref{OS4}) have finite (generally nonzero) limits $\eta\to 0$ and $\epsilon\to 0$ (taken with a fixed $\mathsf{E}$). 
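Conditions of this form all reduce to a statement about quadratics: writing $x:=\epsilon/\eta>0$, an inequality $x^2+\phi x+\psi\geq 0$ holds for all $x>0$ if and only if $\psi\geq(\min\{\phi/2,0\})^2$ (for $\phi<0$ the minimum sits at $x=-\phi/2$; for $\phi\geq 0$ the infimum is $\psi$, approached as $x\to 0^+$). The brute-force check below (our sketch, with randomly drawn coefficients) confirms this criterion, which is used next for Eq.\ (\ref{OS4}) and again for the strongly fine-tuned case:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(0.0, 50.0, 100001)    # grid of x = eps/eta values
for _ in range(500):
    phi, psi = rng.uniform(-4, 4, size=2)
    direct = bool(np.all(xs**2 + phi*xs + psi >= 0))
    criterion = psi >= min(phi/2.0, 0.0)**2
    assert direct == criterion         # agreement up to grid resolution
print("quadratic criterion confirmed on 500 random pairs")
\end{verbatim}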
Censorship holds if and only if, for any $\mathsf{E}$, the inequality in Eq.\ (\ref{OS4}) is satisfied for all (small, positive) $\eta,\epsilon$. A simple analysis shows \cite{CB} that this demands $\psi\geq \phi^2/4$, leading to the necessary and sufficient censorship condition \begin{equation} \label{OS_weak} \delta\breve\mathsf{L}_c^{\rm cons}(\mathsf{E})\leq \frac{1}{2}(1-\mathsf{E}^2) , \end{equation} for all $\mathsf{E}$. The condition is {\it sufficient} in the sense that its confirmation would establish that overspinning is not possible for any $\eta,\epsilon\ll 1$ and any $\mathsf{E}$ (when strongly fine-tuned orbits are excluded). The condition (\ref{OS_weak}) is {\it necessary} in the sense that its violation for some $\mathsf{E}$ would imply that, for that value of $\mathsf{E}$, there exist $\eta,\epsilon\ll 1$ with which overspinning can be achieved. In Eq.\ (\ref{OS_weak}) we may allow $\mathsf{E}$ to vary in the full range $\mathsf{E}_{\rm isco}<\mathsf{E}\ll 1/\eta$, where $\mathsf{E}_{\rm isco}=\frac{1}{\sqrt{3}}$ is the ISCO value of $\mathsf{E}$ in the limit $\eta,\epsilon\to 0$. This is notwithstanding the fact that the explicit expression given for $\delta\mathsf{L}_c^{\rm cons}$ in Eq.\ (\ref{dLc}) (with the full GSF $\tilde F_{\alpha}$ replaced with its conservative piece) only applies for $\mathsf{E}\geq 1$. In subsection \ref{subsec:z} below we will derive an alternative expression for $\delta\mathsf{L}_c^{\rm cons}$, applicable for any $\mathsf{E}$ in the above full range. \subsubsection{Strong fine-tuning} First, however, let us formulate a censorship condition for strongly fine-tuned orbits. As mentioned, in that case one has ${\cal W}^+_{\rm (qc)}=O(\epsilon)O(\eta)$ and ${\cal E}^+_{\rm (qc)}=O(\eta)$ and both terms feature already at leading order in Eq.\ (\ref{OS3}). We introduce the rescaled quantities \begin{equation} \breve{\cal W}^+_{\rm (qc)}:=(\eta\epsilon)^{-1}{\cal W}^+_{\rm (qc)}, \quad\quad \breve{\cal E}^+_{\rm (qc)}:=\eta^{-1}{\cal E}^+_{\rm (qc)}, \end{equation} which should have finite (generally nonzero) limits $\eta,\epsilon\to 0$. We note that $\breve{\cal W}^+_{\rm (qc)}$ and $\breve{\cal E}^+_{\rm (qc)}$ depend not only on $\mathsf{E}$ but also on the precise fine-tuning. It is then convenient to re-parametrize the problem using the pair $\{E_i,E_f\}:=\{\mathsf{E},\mathsf{E}-\cal{E}/\eta\}$, where $\cal{E}$ is the total energy radiated away (both to infinity and down the black hole) during the quasi-circular whirl. To leading order in $\eta$, $E_i$ is the particle's specific energy just upon entering the whirl, and $E_f$ is its specific energy just before leaving it. The difference $E_i-E_f$ is the total energy (per $\eta$) radiated during the whirl. Any value of $E_f$ in the range $\mathsf{E}_{\rm isco}< E_f<E_i$ may be obtained via strong fine-tuning. The censorship condition (\ref{OS3}) now takes the form \begin{equation} \label{OS5} \epsilon^2+\eta\epsilon \tilde\phi(E_i,E_f)+\eta^2 \tilde\psi(E_i,E_f)\geq 0, \end{equation} with \begin{eqnarray} \label{tildephi} \tilde\phi&=&-(6E_i^2-2)^{1/2}-\breve{\cal W}^+_{\rm (qc)}, \\ \label{tildepsi} \tilde\psi&=&-\delta\breve\mathsf{L}_c^{\rm cons}(E_i)+(E_i-\breve{\cal E}^+_{\rm (qc)})^2. \end{eqnarray} Here we have used the fact that $\mathsf{E}=E_i$ to leading order in $\eta$. $\tilde\phi$ and $\tilde\psi$ depend on $E_f$ through $\breve{\cal W}^+_{\rm (qc)}$ and $\breve{\cal E}^+_{\rm (qc)}$, respectively. 
Censorship holds if and only if, for any $E_i,E_f$ satisfying $\mathsf{E}_{\rm isco}< E_f<E_i$, the inequality in Eq.\ (\ref{OS5}) is satisfied for all (small, positive) $\eta,\epsilon$. A simple analysis shows \cite{CB} that this happens if and only if \begin{equation}\label{OS_strong} \tilde\psi \geq \left(\min\{\tilde\phi/2,0\}\right)^2 \end{equation} for any $\mathsf{E}_{\rm isco}< E_f<E_i$. This constitutes a necessary and sufficient censorship condition for strongly fine-tuned orbits. The evaluation of (\ref{OS_strong}) requires, in addition to $\delta\breve\mathsf{L}_c^{\rm cons}$, also the radiative quantities $\breve{\cal E}^+_{\rm (qc)}$ and $\breve{\cal W}^+_{\rm (qc)}$. CB show that, at the required order, these can be conveniently obtained using \begin{equation}\label{calE+} \breve{\cal E}^+_{\rm (qc)}(E_i,E_f)=\int_{E_f}^{E_i}\frac{dE}{1+{\cal R}(E)}, \end{equation} \begin{equation}\label{calW+} \breve{\cal W}^+_{\rm (qc)}(E_i,E_f)=-\int_{E_f}^{E_i}\frac{b(E)}{1+{\cal R}(E)}\,dE, \end{equation} where \begin{equation}\label{b} b(E):=6E(6E^2-2)^{-1/2}. \end{equation} Here ${\cal R}(E)$ is the ratio between the flux of energy absorbed by the black hole and the flux of energy radiated to infinity, for a particle on a circular geodesic orbit with specific energy $E$, in the extremal limit $\epsilon\to 0$. Hence, the only information we require about radiation is encapsulated in a single dimensionless function ${\cal R}(E)$, evaluated on circular geodesics. We note ${\cal R}<0$ for $E<\frac{2}{\sqrt{3}}$, the superradiant regime (we will show how this result is arrived at in Sec.\ \ref{Sec:dissipative}). However, we also have ${\cal R}>-1$ for any $E$, implied by the known non-existence of ``floating'' orbits. Since the integrand in (\ref{calW+}) is positive definite (and $E_i>E_f$), we have $\breve{\cal W}^+_{\rm (qc)}<0$, so the sign of $\tilde\phi$ in Eq.\ (\ref{OS_strong}) is not a priori known. \subsection{Reexpressing $\delta\mathsf{L}_c^{\rm cons}$ in terms of redshift}\label{subsec:z} Calculating $\delta\mathsf{L}_c^{\rm cons}$ using Eq.\ (\ref{dLc}) has two drawbacks. First, this formula applies only to particles thrown in from infinity (i.e., ones with $\mathsf{E}\geq 1$). This restriction comes from the fact that in deriving (\ref{dLc}) CB relied on having at hand an explicit relation, correct through $O(\eta^2)$, between the ADM mass and angular momentum on one hand, and background-defined quantities like the four-velocity $u^{\alpha}$ on the other hand. Such a relation could only be derived under the condition that the initial binary separation was infinitely large. Although the test-particle analysis provides some motivation for concentrating on such orbits, it is preferable to relax the restriction $\mathsf{E}\geq 1$ when considering the GSF case. The second drawback of the formula (\ref{dLc}) is a practical one. Implementing the formula requires a calculation of the GSF along unbound orbits. However, existing GSF calculation methods and working codes are specialized to {\em bound}, quasi-periodic orbits. Adapting these codes to deal with unbound orbits might be possible in principle, but would require considerable development of new methods and code. CB derived an alternative formula for $\delta\mathsf{L}_c^{\rm cons}$, circumventing both problems.
Their derivation built on recent work by Isoyama, Le Tiec and collaborators \cite{Isoyama14,hami,Tiec:2015cxa}, in which expressions were derived through $O(\eta^2)$ for the (Bondi-like) energy and angular momentum of a circular binary with time-symmetric boundary conditions (so that spacetime admits a global helical symmetry). These expressions were given in terms of the so-called ``redshift'' variable \cite{Detweiler:2008ft} $z:=1/\hat u^t$, where $\hat u^{\alpha}$ is the four-velocity of the circular orbit, normalized not in the background metric $g_{\alpha\beta}$ but rather in the smooth perturbed metric $g_{\alpha\beta}+h^R_{\alpha\beta}$. CB related Refs.\ \cite{Isoyama14,hami,Tiec:2015cxa}'s notions of energy and angular momentum to $\mathsf{E}$ and $\mathsf{L}$, and thereby obtained an expression for $\delta\mathsf{L}_c^{\rm cons}$ in terms of the redshift $z$. The expression they derived is remarkably simple: \begin{equation}\label{Z} \delta\mathsf{L}_c^{\rm cons}(\mathsf{E}) = -\lim_{\epsilon\to 0} \delta z(\mathsf{E};\epsilon), \end{equation} where $\delta z$ is the first-order GSF correction to $z$, defined through the expansion \begin{equation} z(\mathsf{E};\epsilon)=z^{(0)}(\mathsf{E};\epsilon)+\delta z(\mathsf{E};\epsilon) +O(\eta^2). \end{equation} Here $\mathsf{E}$ is used to parametrize the circular orbit, $z^{(0)}$ is the geodesic limit of $z$, and the $O(\eta)$ GSF correction $\delta z$ is defined with a fixed $\mathsf{E}$ (and a fixed $\epsilon$).\footnote{Beware that, in related literature, the GSF correction to $z$ is often defined with a fixed orbital frequency $\Omega$, not with a fixed $\mathsf{E}$ as here.} We note that, at our order of approximation, it is permissible to evaluate Eq.\ (\ref{Z}) using a sequence of {\em geodesic} circular orbits, replacing the ADM-related quantity $\mathsf{E}$ with the geodesic specific energy $E=-u_{t}$. The relation (\ref{Z}) is valid for all $\mathsf{E}>\mathsf{E}_{\rm isco}$ and can be used to derive $\delta\mathsf{L}_c^{\rm cons}$ for any unstable circular orbit, without the restriction $\mathsf{E}\geq 1$. Furthermore, the evaluation of $\delta\mathsf{L}_c^{\rm cons}$ using (\ref{Z}) requires only circular-orbit GSF data, readily computable using existing codes. In fact, using Eqs.\ (\ref{Z}), (\ref{calE+}) and (\ref{calW+}) it is evidently possible to evaluate both censorship conditions (\ref{OS_weak}) (for weak fine-tuning) and (\ref{OS_strong}) (for strong fine-tuning) using circular-orbit information only. Our calculation in the next two sections takes full advantage of that fact. \section{Evaluation of censorship condition}\label{Sec:conservative} In this section we evaluate the censorship condition (\ref{OS_weak}), which ignores the possibility of strong fine-tuning. The latter will be considered in Sec.\ \ref{Sec:dissipative}. At our order of approximation, we may replace the ADM-related quantity $\mathsf{E}$ in Eq.\ (\ref{OS_weak}) with the geodesic specific energy $E$, and regard (\ref{OS_weak}) as a condition on the family of (unstable) circular {\em geodesic} orbits, evaluated in the limit $\epsilon\to 0$. If the inequality can be shown to hold for all $E> E_{\rm isco}=\frac{1}{\sqrt{3}}$, then overspinning is ruled out for all orbits (except, possibly, strongly fine-tuned ones). The evaluation of (\ref{OS_weak}) requires only the function $\delta\breve\mathsf{L}_c^{\rm cons}(E)=\delta \mathsf{L}_c^{\rm cons}(E)/\eta$, and we shall use Eq.\ (\ref{Z}) to calculate it.
As will be shown, the evaluation of the redshift correction $\delta z$ in Eq.\ (\ref{Z}) becomes particularly simple in the limit $\epsilon\to 0$, so that the essential part of the calculation can be done analytically. The only numerical input we shall require is a verification that a certain perturbative quantity has a finite limit as $\epsilon\to 0$; the precise numerical value of that limit will not be important to us. Below we first present the analytical part of the calculation, and in subsection \ref{subsec:NumResultsZ} we discuss the numerical input. \subsection{Analytical considerations} Isoyama {\it et al.}~\cite{Isoyama14} show that the first-order GSF correction to the redshift $z$ can be obtained via \begin{equation}\label{deltaz} \delta z=-z^{(0)}H^R\quad \text{where} \quad H^R:=\frac{1}{2}h_{\alpha\beta}^{R,{\rm sym}} u^{\alpha}u^{\beta}. \end{equation} Here $z^{(0)}$ (recall) is the geodesic limit of $z$ (taken with fixed $\mathsf{E},\epsilon$), $u^{\alpha}$ is the four-velocity of the corresponding circular geodesic, and $h_{\alpha\beta}^{R,{\rm sym}}$ is the ``time-symmetric'' part of the $R$ field evaluated on that geodesic. More precisely, $h_{\alpha\beta}^{R,{\rm sym}}$ is a certain regular piece not of the physical, retarded metric perturbation, but of a time-symmetrized, ``half-retarded plus half-advanced'' perturbation that is responsible for the ``conservative'' part of the dynamics. The evaluation of $\delta\mathsf{L}_c^{\rm cons}$ via Eq.\ (\ref{Z}) requires taking the limit $\epsilon\to 0$ of $\delta z$ with fixed $E$. To evaluate the limit of the factor $z^{(0)}$, start with the general expression \cite{Isoyama14} \begin{equation}\label{z0} z^{(0)}=(1-a\Omega)^{1/2}\left[1+a\Omega-3(M\Omega)^{2/3}(1-a\Omega)^{1/3}\right]^{1/2}, \end{equation} in which $\Omega:=d\varphi/dt=[a+M(R/M)^{3/2}]^{-1}$ is the orbital angular velocity, with $R$ being the Boyer-Lindquist radius of the orbit. The latter admits the small-$\epsilon$, fixed-$E$ expansion \cite{CB} \begin{equation} \label{rhopar} R/M= 1+\epsilon \rho_1(E) +\epsilon^2 \rho_2(E) +O(\epsilon^3) , \end{equation} where the first two coefficients, needed below, read \begin{eqnarray} \label{rho12} \rho_1&=& 4E(6E^2-2)^{-1/2}, \\ \rho_2&=& 2(2E^4-E^2+1)(3E^2-1)^{-2}. \end{eqnarray} Combined with $a/M=1-\epsilon^2$, this gives \begin{equation}\label{OmegaExpansion} M \Omega=\frac{1}{2}-\frac{1}{4}b(E)\,\epsilon +O(\epsilon^2), \end{equation} where $b$ was defined in (\ref{b}). Plugging (\ref{OmegaExpansion}) into Eq.\ (\ref{z0}) yields, in turn, \begin{equation}\label{z0expansion} z^{(0)}(E;\epsilon)=\frac{\epsilon}{\sqrt{6E^2-2}}+O(\epsilon^2). \end{equation} Finally, using (\ref{Z}) with (\ref{deltaz}) and (\ref{z0expansion}), we obtain \begin{equation}\label{Z2} \delta\mathsf{L}_c^{\rm cons}(E) =(6E^2-2)^{-1/2}\lim_{\epsilon\to 0}\left[\epsilon H^R(E;\epsilon)\right]. \end{equation} Note that, for $\delta\mathsf{L}_c^{\rm cons}$ to be finite and generally nonzero (as expected), $H^R$ must blow up like $1/\epsilon$ as $\epsilon\to 0$. Our calculation of $H^R(E;\epsilon)$ will be based on the strategy and numerical codes developed in Refs.\ \cite{Shah:2012gu,vandeMeent:2015lxa,meent:2015a}. $H^R(E;\epsilon)$ is expressed as a sum of two contributions: \begin{equation}\label{recons+compl} H^R = H^R_{\rm recons} + H^R_{\rm compl}.
\end{equation} The ``reconstructed'' part $H^R_{\rm recons}$ is obtained numerically, starting from frequency-domain solutions of Teukolsky's equation with a circular-geodesic source and retarded boundary conditions, following through a reconstruction of the multipole modes of the metric perturbation (in a suitable ``half-string'' radiation gauge \cite{PMB}), and finally applying a suitable form of mode-sum regularization \cite{Barack:1999wf} to extract the $R$ part of the perturbation. (A more detailed description will be given in the next subsection.) In our case of a circular-orbit source, the double contraction of $h_{\alpha\beta}^{R}$ with $u^\alpha$ [recall Eq.\ (\ref{deltaz})] automatically picks out the time-symmetric part of $h_{\alpha\beta}^{R}$, as desired. The second contribution to $H^R$ in Eq.\ (\ref{recons+compl}) is the ``completion'' piece $H^R_{\rm compl}$, which (by definition) arises from any part of the metric perturbation that is not captured by the reconstruction procedure. In our problem, this piece corresponds simply to mass and angular-momentum perturbations of the background Kerr geometry (plus pure-gauge perturbations) \cite{waldrec,Shah:2012gu}. In the vacuum region $r>R$ outside the particle's orbit, these stationary perturbations can be written analytically, in a ``Boyer-Lindquist'' gauge, as \cite{Shah:2012gu} \begin{equation}\label{dMdJ} h_{\alpha\beta}^{(\delta M)}=\mu E \frac{\partial g_{\alpha\beta}}{\partial M}, \quad\quad h_{\alpha\beta}^{(\delta J)}=\mu L \frac{\partial g_{\alpha\beta}}{\partial J}, \end{equation} where $g_{\alpha\beta}=g_{\alpha\beta}(x^{\mu};M,J)$ is the Kerr background metric, $\partial_M$ is taken with fixed $J:=Ma$, $\partial_J$ is taken with fixed $M$, and both derivatives are taken with fixed Boyer-Lindquist coordinates $x^\mu$. Our particular regularization procedure (see below) does not require the completion piece for $r<R$. The quantity $H^R_{\rm compl}$ is given by \begin{equation}\label{Hcompl} H^R_{\rm compl}= \frac{1}{2}u^{\alpha}u^{\beta}\left(h_{\alpha\beta}^{(\delta M)}+h_{\alpha\beta}^{(\delta J)}\right), \end{equation} where the perturbations are evaluated in the one-sided limit $r\to R^+$ (with $\theta=\pi/2$). Let us denote by $\delta\mathsf{L}_{c,{\rm recons}}^{\rm cons}$ and $\delta\mathsf{L}_{c,{\rm compl}}^{\rm cons}$ the contributions to $\delta\mathsf{L}_{c}^{\rm cons}$ from $H^R_{\rm recons}$ and $H^R_{\rm compl}$, respectively, and proceed to obtain $\delta\mathsf{L}_{c,{\rm compl}}^{\rm cons}$ analytically. First use Eq.\ (\ref{Hcompl}) with Eq.\ (\ref{dMdJ}) and with $u^{\alpha}=g^{\alpha\beta}u_{\beta}$, where $u_{\beta}=\{-E,0,0,L\}$ (in Boyer-Lindquist coordinates). This gives $H^R_{\rm compl}$ in terms of the circular-orbit radius $R$, its energy $E$ and its angular momentum $L$. Then substitute the fixed-$E$ expansions (\ref{rhopar}) for $R$, and \cite{CB} \begin{equation}\label{LofE} L/M=2E+(6E^2-2)^{1/2}\epsilon +O(\epsilon^2) \end{equation} for $L$, along with $a/M=1-\epsilon^2$. Finally, expand the result in $\epsilon$ at fixed $E$. The outcome is \begin{equation} H^R_{\rm compl}=\frac{\eta}{2\epsilon}(1-E^2)(6E^2-2)^{1/2} + O(\epsilon^0). \end{equation} Notice this is an $O(\epsilon^{-1})$ quantity, so, recalling Eq.\ (\ref{Z2}), it gives a finite contribution to $\delta\breve\mathsf{L}_c^{\rm cons}$.
We find \begin{equation}\label{Z3} \delta\breve\mathsf{L}_{c,{\rm compl}}^{\rm cons} =\frac{1}{2}(1-E^2), \end{equation} where $\delta\breve\mathsf{L}_{c,{\rm compl}}^{\rm cons}:=\eta^{-1}\delta\mathsf{L}_{c,{\rm compl}}^{\rm cons}$. Remarkably, it follows that the completion contribution, on its own, {\em precisely saturates} the censorship condition (\ref{OS_weak}). In the next subsection we will demonstrate numerically that the reconstructed part, $H^{R}_{\rm recons}(E;\epsilon)$, has a {\it finite} (non-divergent) fixed-$E$ limit $\epsilon\to 0$. This will imply \begin{equation}\label{Hrecons} \delta\mathsf{L}_{c,{\rm recons}}^{\rm cons}=0, \end{equation} and therefore \begin{equation}\label{saturation} \delta\breve\mathsf{L}_{c}^{\rm cons} =\frac{1}{2}(1-E^2). \end{equation} The censorship condition (\ref{OS_weak}) is precisely saturated. This result and its implications will be discussed in Sec.\ \ref{Sec:conclusions}. \subsection{Numerical input} \label{subsec:NumResultsZ} To validate Eq.\ (\ref{Hrecons}), we will demonstrate numerically that the limit \begin{equation}\label{hatHR} \lim_{\epsilon\to 0}H^R_{\rm recons}(E;\epsilon) =: \hat H^R(E) \end{equation} (taken with fixed $E$) exists and yields a finite value. It may be possible to establish this mathematically through analysis of the reconstructed solutions in the near-extremal, near-horizon approximation (perhaps modelled upon the method of Ref.\ \cite{Gralla:2015rpa}). Here we content ourselves with a numerical calculation, which, we find, already illustrates the finiteness of $\hat H^R(E)$ rather convincingly. We have performed two independent numerical calculations, using two different (albeit related) methods, to be described below. One of the methods performs best at $\epsilon$ values that are not too small, and the other does best for $\epsilon$ values that are not too large. The combination of the two methods thus allowed us access to a range of $\epsilon$ values wide enough to enable taking the limit $\epsilon\to 0$ accurately. The agreement we found between the two sets of results in an overlapping domain of intermediate $\epsilon$ values provides reassurance. Our first calculation is based on a method and code developed by one of us (AGS, with collaborators) in Refs.\ \cite{Shah:2010bi,Shah:2012gu}, with input from Ref.\ \cite{vandeMeent:2015lxa}. In this method, we first numerically integrate the sourced Sasaki-Nakamura equation in the frequency domain, with ``retarded'' boundary conditions at infinity and on the event horizon, to obtain the modes of the Weyl scalar $\psi_0$ associated with the metric perturbation produced by the particle. We then derive an appropriate Hertz potential (this is done algebraically in terms of $\psi_0$), from which the modes of the metric perturbation are reconstructed by applying a certain second-order differential operator \cite{Keidl:2010pm}. We use a version of the reconstruction procedure that yields the metric perturbation in a regular outgoing radiation gauge anywhere in the vacuum region $r>R$, where $R$ is the Boyer-Lindquist radius of the circular orbit. Finally, we apply a mode-sum regularization procedure to obtain $H^R_{\rm recons}$. The mode-sum variant we are using is the one developed in \cite{vandeMeent:2015lxa}, with the particle limit taken as $r\to R^+$. (Ref.\ \cite{vandeMeent:2015lxa} derived the regularization parameter values suitable for this one-sided version of the mode-sum method.) The code is implemented in \verb!C++! and uses double-precision arithmetic.
This is the first implementation of the code in the near-extremal regime, $\epsilon\ll 1$. Certain technical subtleties arise in this regime, as recently reviewed in Sec.\ V of \cite{Gralla:2015rpa}. We have found that such subtleties were rather easily controlled for $\epsilon\gtrsim 10^{-4}$. We needed only to ensure that inner boundary conditions were placed sufficiently close to the horizon and determined to sufficient accuracy. We have not attempted to improve the performance of the code at lower values of $\epsilon$ (e.g., using the techniques described in Ref.\ \cite{Gralla:2015rpa}), but instead resorted to our second method, to be described next, whose performance actually improves near extremality. Our second calculation is based on a code developed by one of us (MvdM) in \cite{meent:2015a}, which follows an approach by Fujita \cite{Fujita:2004rb}, itself based on the semi-analytical formalism of Mano, Suzuki, and Takasugi (MST) \cite{Mano:1996vt,Mano:1996gn}. In this approach, the Weyl scalar---$\psi_4$ in our particular implementation---is obtained semi-analytically rather than numerically. ``Semi'' here refers to an element of the calculation in which a certain continued-fraction equation is solved numerically. The reconstruction and mode-sum procedures are essentially as in the first method, but they are implemented using an independent code. The entire calculation is performed using {\it Mathematica} with arbitrary-precision arithmetic. In the MST-based calculation, working near extremality is computationally advantageous. This is due to the improved convergence properties of the MST formalism for circular orbits with $a\sim 1$ and $\Omega\sim 1/2$, highlighted in \cite{meent:2015a}. In this domain, the series of special functions featuring in MST's solutions for $\psi_4$ converges faster. Furthermore, the aforementioned continued-fraction equation converges faster and is more easily solvable (using the analytically-known extremal solution as an initial guess). As a result, the method is particularly efficient for studying the $\epsilon\to 0$ limit. For our purpose, it was sufficient to apply it in the range $10^{-8}\leq \epsilon\leq 10^{-4}$. Above $\epsilon\sim 10^{-4}$, the analytically-known extremal solution no longer provides an accurate enough initial guess to guarantee finding the solution of the continued-fraction equation for all frequency modes in the spectrum, making our implementation of the MST formalism unreliable. Our calculation of $\hat H^R(E)$ proceeded as follows. We considered a dense sample of $E$ values in the range $E_{\rm isco}<E<2$. For each value of $E$ in the sample we obtained a dataset $H^R_{\rm recons}(E,\epsilon)$, where $\epsilon$ is sampled (roughly) uniformly in $\log\epsilon$ between $\epsilon=10^{-1}$ and $\epsilon=10^{-8}$. We switched from our fully numerical method to our MST-based method at around $\epsilon=10^{-4}$. $\hat H^R(E)$ was then obtained via extrapolation of each of the fixed-$E$ datasets to $\epsilon=0$. For each pair $\{E,\epsilon\}$ in our sample, we directly computed the first 70 multipoles ($l$-modes) of the metric perturbation, for use as input in the mode-sum formula. The remaining large-$l$ tail of modes was approximated by fitting an inverse-power-law model, as detailed in \cite{Shah:2012gu}. At high values of $E$, the $l$-mode distribution becomes skewed towards larger $l$ values, due to what may be interpreted as a beaming effect.
A similar behavior near the light-ring of a Schwarzschild black hole was discussed by Akcay {\it et al.} \cite{Akcay}, who pointed out that the implementation of the mode-sum technique can become problematic in that case, because the standard inverse-power-law tail may fail to manifest itself until $l$ values larger than one can feasibly calculate. This effect restricted our calculation to $E$ values that are not too large---in practice, to $E\lesssim 2$. However, that should suffice for our purpose here, which is simply to determine the $\epsilon$-scaling of $H^R_{\rm recons}$ in the limit $\epsilon\to 0$ at fixed $E$: it is perfectly reasonable to assume that the $\epsilon$-scaling at any fixed $E>2$ would be the same as it is for lower $E$. Figure \ref{fig:Hvseps} shows $H^R_{\rm recons}(E;\epsilon)$ as a function of $\epsilon$ for a few $E$ values within our sample. It is evident that $H^R_{\rm recons}(E;\epsilon)$ approaches a finite limit as $\epsilon\to 0$. Figure \ref{fig:HvsE} displays the extrapolated values $\hat H^R$ as a function of $E$. We recall that the details of the function $\hat H^R(E)$ are unimportant to us; we need only establish here that $\hat H^R$ is finite for any finite $E$. \begin{figure}[htb] \includegraphics[width=\columnwidth]{Hvseps.png} \caption{$H^R_{\rm recons}$ as a function of $\epsilon$, for a sample of $E$ values. Data points for $\epsilon\geq 10^{-4}$ (diamonds) are from our fully numerical computation, while points for $\epsilon\leq 10^{-4}$ (squares) are from our semi-analytical, MST-based method (there is an overlapping data point at $\epsilon=10^{-4}$). Error bars are in all cases too small to be resolved in this figure. Curves (dotted line) are quartic polynomial fits. At each fixed $E$, $H^R_{\rm recons}(E;\epsilon)$ appears to approach a constant value in the extremal limit $\epsilon\to 0$. } \label{fig:Hvseps} \end{figure} \begin{figure}[htb] \includegraphics[width=\columnwidth]{HvsE.png} \caption{The function $\hat H^R(E)$, obtained by extrapolating our numerical data for $H^R_{\rm recons}$ to $\epsilon\to 0$ at each fixed $E$. The actual value of $\hat H^R(E)$ is not needed in our analysis, only the fact that it is finite for each finite $E$. } \label{fig:HvsE} \end{figure} \section{Effect of strong fine-tuning}\label{Sec:dissipative} We have shown that, within our first-order GSF approximation, any weakly fine-tuned capture produces a precisely extremal geometry. Can strong fine-tuning push the system beyond extremality? To answer that question we need to evaluate the condition (\ref{OS_strong}). Any choice of $\{E_i,E_f\}$ (with $E_{\rm isco}\leq E_f<E_i$) violating that condition would imply that overspinning is possible via strong fine-tuning. If, on the other hand, we can show that (\ref{OS_strong}) applies for any $\{E_i,E_f\}$, then censorship holds even allowing for strong fine-tuning. The evaluation of (\ref{OS_strong}) requires the angular-momentum shift $\delta\breve\mathsf{L}_c^{\rm cons}(E_i)$ and the flux ratio ${\cal R}(E)$. For the former we use our result (\ref{saturation}). For the latter we will perform a numerical calculation, to be presented in subsection \ref{subsec:NumResults} below. However, much of what we need to know about ${\cal R}$ can be deduced from simple analytic considerations, to be presented first.
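Before turning to the analytics, we note that once ${\cal R}(E)$ is in hand, the condition (\ref{OS_strong}) can simply be scanned numerically over the $\{E_i,E_f\}$ plane, using the quadratures (\ref{calE+}) and (\ref{calW+}) together with our result (\ref{saturation}). The sketch below (ours) does this with a crude constant stand-in for ${\cal R}$---not the actual computed ratio presented in subsection \ref{subsec:NumResults}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Eisco = 1/np.sqrt(3)
dLc = lambda E: 0.5*(1 - E**2)            # Eq. (saturation)
b   = lambda E: 6*E/np.sqrt(6*E**2 - 2)   # Eq. (b)
R   = lambda E: -0.1                      # constant stand-in for R(E)

def censorship_holds(Ei, Ef):
    Ep = quad(lambda E: 1/(1 + R(E)), Ef, Ei)[0]      # Eq. (calE+)
    Wp = -quad(lambda E: b(E)/(1 + R(E)), Ef, Ei)[0]  # Eq. (calW+)
    tphi = -np.sqrt(6*Ei**2 - 2) - Wp                 # Eq. (tildephi)
    tpsi = -dLc(Ei) + (Ei - Ep)**2                    # Eq. (tildepsi)
    return tpsi >= min(tphi/2, 0)**2                  # Eq. (OS_strong)

print(all(censorship_holds(Ei, Ef)
          for Ei in np.linspace(Eisco + 1e-3, 3.0, 50)
          for Ef in np.linspace(Eisco + 1e-6, Ei - 1e-6, 30)))
\end{verbatim}
Any constant stand-in with ${\cal R}\geq -1/3$ makes the scan return \texttt{True}, in line with the bound derived below, while values below $-1/3$ produce violations near the corner $E_f\to E_i\to \mathsf{E}_{\rm isco}$ of the domain.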
We will show that it is sufficient to require that ${\cal R}$ is bounded from below by $-1/3$ over the range $E_{\rm isco}\leq E<\frac{2}{\sqrt{3}}$ in order to guarantee that the censorship condition (\ref{OS_strong}) always holds. Our actual calculation will later show that $\cal R$ lies very comfortably above that bound. \subsection{Analytical considerations} \subsubsection{Superradiant domain} First, we consider the sign of ${\cal R}(E)$. We recall that this function, defined for circular equatorial geodesics, is the ratio of energy flux through the horizon to the energy flux at infinity. A gravitational-wave mode of the form $\sim e^{-i\omega t}e^{im\varphi}$ is known to be superradiant if and only if $0<\omega <m\Omega_H$, where $\Omega_H=a/(2Mr_+)$ is the horizon's angular velocity, with $r_+=M+(M^2-a^2)^{1/2}$. For circular equatorial orbits the gravitational-wave spectrum is simple: $\omega=m\Omega$, where $\Omega$ is the orbital angular velocity. Thus, all modes of the radiation are superradiant for $\Omega<\Omega_H$, giving ${\cal R}<0$ in that case. For $\Omega>\Omega_H$ all modes are instead non-superradiant, giving ${\cal R}>0$. Let us now specialize to a near-extremal geometry, and reexpress the above in terms of a condition on the specific energy $E$ of the circular geodesic. For $\epsilon\ll 1$ we find $M\Omega_H=\frac{1}{2}-\frac{1}{\sqrt{2}}\epsilon+O(\epsilon^2)$. Combining this with Eq.\ (\ref{OmegaExpansion}) translates the superradiance condition $\Omega<\Omega_H$ to $E(6E^2-2)^{-1/2}>\sqrt{2}/3$ (at leading order in $\epsilon$), leading to \begin{equation} E<\frac{2}{\sqrt{3}} =: E_{\rm sr} \end{equation} as the superradiant domain in the extremal limit. Thus \begin{eqnarray} {\cal R} &<& 0\quad \text{for}\quad E_{\rm isco}\leq E<E_{\rm sr}, \\ {\cal R} &>& 0\quad \text{for}\quad E>E_{\rm sr}. \label{noSR} \end{eqnarray} This will be confirmed numerically in subsection \ref{subsec:NumResults}. \subsubsection{Sufficient lower bound for ${\cal R}(E)$} We now show that the censorship condition (\ref{OS_strong}) is satisfied for all $E_i>E_f\geq \frac{1}{\sqrt{3}}$ if ${\cal R}(E)$ is bounded from below by $-1/3$. The condition involves $\tilde\phi$ and $\tilde\psi$, given in Eqs.\ (\ref{tildephi}) and (\ref{tildepsi}), respectively, where in the latter we now substitute for $\delta\breve\mathsf{L}_c^{\rm cons}$ from Eq.\ (\ref{saturation}). We do not know the sign of $\tilde\phi$ (for given $E_i,E_f$) a priori, so we proceed by considering the two options $\tilde\phi\geq 0$ and $\tilde\phi<0$ in turn. {\it Case $\tilde\phi\geq 0$}:--- The censorship condition (\ref{OS_strong}) becomes \begin{equation}\label{OS_strong1} \tilde\psi=-\frac{1}{2}(1-E_i^2)+(E_i-\breve{\cal E}^+_{\rm (qc)})^2 \geq 0. \end{equation} This is trivially satisfied for $E_i\geq 1$, so it remains to consider $E_i<1$, in which case the condition becomes \begin{equation}\label{OS_strong2} \breve{\cal E}^+_{\rm (qc)}\leq E_i-\sqrt{(1-E_i^2)/2}=:\nu_1(E_i). \end{equation} Recalling (\ref{calE+}), we may bound $\breve{\cal E}^+_{\rm (qc)}$ from above using \begin{equation}\label{OS_strong3} \breve{\cal E}^+_{\rm (qc)} \leq \int_{\frac{1}{\sqrt{3}}}^{E_i} \frac{dE}{1+{\cal R}(E)} \leq \frac{E_i-\frac{1}{\sqrt{3}}}{1+{\cal R}_m} =:\nu_2(E_i), \end{equation} where in the first inequality we used the positivity of the integrand together with $E_f\geq\frac{1}{\sqrt{3}}$, and in the second inequality we assumed $\cal R$ is bounded from below by some number ${\cal R}_m(>-1)$. 
Since $\nu_1=\nu_2(=0)$ at $E_i=\frac{1}{\sqrt{3}}$, establishing the inequality in (\ref{OS_strong2}) requires only showing that $\nu_1'(E_i)\geq \nu_2'(E_i)=(1+{\cal R}_m)^{-1}$ for all $\frac{1}{\sqrt{3}}<E_i<1$. But the minimal value of $\nu_1'$ over this domain is $3/2$, so the condition becomes $(1+{\cal R}_m)^{-1}\leq \frac{3}{2}$, or ${\cal R}_m\geq -\frac{1}{3}$. We have thereby shown that the censorship condition (\ref{OS_strong1}) holds for any $E_i>E_f\geq\frac{1}{\sqrt{3}}$ with $\tilde\phi(E_i,E_f)\geq 0$, under the sole assumption \begin{equation}\label{calRm} {\cal R}(E)\geq -\frac{1}{3}. \end{equation} {\it Case $\tilde\phi< 0$}:--- The censorship condition (\ref{OS_strong}) becomes $\tilde\psi\geq \tilde\phi^2/4$, or, explicitly, \begin{eqnarray} 0&\leq& -\frac{1}{4}\breve{\cal W}^+_{\rm (qc)}\left(\breve{\cal W}^+_{\rm (qc)}-2\phi(E_i)\right) +\breve{\cal E}^+_{\rm (qc)}\left(\breve{\cal E}^+_{\rm (qc)}-2E_i\right) \nonumber\\ &=:&\Delta(E_i,E_f), \end{eqnarray} where $\phi(E_i)=-(6E_i^2-2)^{1/2}$. Since $\breve{\cal W}^+_{\rm (qc)}=0=\breve{\cal E}^+_{\rm (qc)}$ for $E_i=E_f$, we have $\Delta(E,E)=0$ for all $E\geq\frac{1}{\sqrt{3}}$. Thus, to establish $\Delta\geq 0$ it suffices to show $\partial\Delta(E_i,E_f)/\partial E_i\geq 0$ for all $E_i\geq E_f\geq\frac{1}{\sqrt{3}}$. With the aid of Eqs.\ (\ref{calE+}) and (\ref{calW+}), we find \begin{equation}\label{dDelta} [1+{\cal R}(E_i)]\frac{\partial\Delta}{\partial E_i}= E_i + {\cal R}(E_i)\left[\frac{3E_i \breve{\cal W}^+_{\rm (qc)}}{\phi(E_i)} - 2\breve{\cal E}^+_{\rm (qc)}\right]. \end{equation} Consider the cases ${\cal R}(E_i)\leq 0$ and ${\cal R}(E_i)> 0$ separately. For ${\cal R}(E_i)\leq 0$, we use $\breve{\cal W}^+_{\rm (qc)}>\phi(E_i)$ (following from $\tilde\phi< 0$) to bound the right-hand side of (\ref{dDelta}) from below by $E_i[1+3{\cal R}(E_i)]-2{\cal R}(E_i)\breve{\cal E}^+_{\rm (qc)}$. Since the last term here is non-negative, it is sufficient to require ${\cal R}(E_i)\geq-\frac{1}{3}$ in order to guarantee $\partial\Delta/\partial E_i> 0$ and hence $\Delta(E_i,E_f)\geq 0$. If, instead, ${\cal R}(E_i)> 0$, one can first use $- 2\breve{\cal E}^+_{\rm (qc)}>\breve{\cal W}^+_{\rm (qc)}$ [which follows from Eqs.\ (\ref{calE+}) and (\ref{calW+}), noting $b(E)>2$], then again $\breve{\cal W}^+_{\rm (qc)}>\phi(E_i)$, to bound the right-hand side of (\ref{dDelta}) from below by $E_i+{\cal R}(E_i)\left[\phi(E_i)+3E_i\right]$. This is non-negative for all $E_i\geq \frac{1}{\sqrt{3}}$ if and only if ${\cal R}(E_i)\geq -\frac{1}{3}$. Thus, the condition (\ref{calRm}) always implies $\Delta\geq 0$ and, in turn, that the censorship condition (\ref{OS_strong}) holds also for $\tilde\phi<0$. We conclude that it is sufficient to show that the flux ratio $\cal R$ is bounded from below by $-\frac{1}{3}$ in order to guarantee that the censorship condition (\ref{OS_strong}) is always satisfied. In fact, recalling (\ref{noSR}), we see that it is sufficient to obtain such a bound for $\cal R$ on the restricted superradiant domain $E_{\rm isco}\leq E<E_{\rm sr}$. Our numerical calculation, to be presented below, shows that $\cal R$ is comfortably bounded above the value of $-1/3$. \subsection{Numerical input} \label{subsec:NumResults} To compute the flux ratio $\cal R$ we used our MST-based method described above, as implemented in \cite{meent:2015a}. 
The gravitational-wave energy fluxes to infinity and down the horizon are obtained directly from the semi-analytical solutions for $\psi_4$, with no need to reconstruct the metric perturbation. Thanks to the improved convergence properties (already mentioned above) of the MST formalism at $\epsilon\ll 1$ and $\Omega\sim 1/2$, we can obtain the energy fluxes to essentially any accuracy we desire using arbitrary-precision computer algebra. To determine ${\cal R}$ for a given $E$, we calculated the ratio between the flux down the horizon and the flux to infinity for a sequence of fixed-$E$ orbits with $\epsilon$ values that decrease to $10^{-8}$ in exponential steps. The value of ${\cal R}(E)$ was then found by extrapolating to $\epsilon=0$. The results are presented in Fig.\ \ref{fig:Rplot}. As expected, ${\cal R}(E)$ is negative only in the range $E_{\rm isco}\leq E<E_{\rm sr}$. The minimum of ${\cal R}(E)$ appears to be attained at $E_{\rm isco}$ with a value of $-0.13744\pm 3\cdot 10^{-5}$. This is comfortably above the value of $-1/3$ required to ensure that the censorship condition \eqref{OS_strong} is satisfied. \begin{figure}[htb] \includegraphics[width=\columnwidth]{Rplot.png} \caption{Numerical values for the flux ratio ${\cal R}$, as a function of specific energy $E$, for unstable circular equatorial geodesic orbits in the extremal Kerr limit. Each dot represents a numerical measurement, with error bars being much smaller than the size of the dots. Orbits with $E<E_{\rm sr}=2/\sqrt{3}$ are superradiant, with ${\cal R}<0$. The inset expands the area around the minimum at $E_{\rm isco}=1/\sqrt{3}$. We find a minimum value of ${\cal R}=-0.13744\pm 3\cdot 10^{-5}$, safely above what is required for the censorship condition \eqref{OS_strong} to hold (dashed red line). } \label{fig:Rplot} \end{figure} \section{Conclusions and discussion}\label{Sec:conclusions} We studied the scenario in which a massive particle is thrown into a nearly-extremal Kerr black hole on an equatorial trajectory, working consistently in the first-order self-force approximation, i.e.~taking into account all finite-$\eta$ effects (radiative and other) to one order in $\eta$ beyond the geodesic approximation. To describe the fate of the post-capture geometry we followed a strategy set out by CB in \cite{CB}, according to which it suffices to consider two types of captured orbits near the capture--scatter separatrix: (i) weakly fine-tuned orbits, which execute $O(\ln\eta)$ quasi-circular revolutions below the ISCO prior to falling into the black hole, radiating $O(\eta^2\ln\eta)$ of gravitational-wave energy in the process; and (ii) strongly fine-tuned orbits, which execute $O(\eta^{-1})$ revolutions and emit $O(\eta)$ of energy. In Sec.\ \ref{Sec:conservative} we found that, within our first-order GSF approximation, any weakly fine-tuned capture leads to a precisely extremal geometry. This implies \cite{CB} that ``generic'' captures (ones that are not fine-tuned at all, with $\mathsf{L}-2\mathsf{E}$ negative and not small) produce {\em sub}-extremal geometries. In Sec.\ \ref{Sec:dissipative} we further established that {\em strong fine-tuning promotes censorship}: all strongly fine-tuned captures produce subextremal geometries. Thus, one can at best reach extremality, using weakly fine-tuned orbits (any such orbit would do), but there is no way of overspinning the black hole.
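As an elementary consistency check on the saturation claim, note that substituting Eq.\ (\ref{saturation}) into $\psi(\mathsf{E})$ gives $\psi=(3\mathsf{E}^2-1)/2=\phi^2/4$ identically, i.e.\ exact equality in the weak-fine-tuning condition; in sympy (our two-line verification):
\begin{verbatim}
import sympy as sp

E = sp.symbols('E', positive=True)
phi = -sp.sqrt(6*E**2 - 2)                  # Eq. (phi)
psi = E**2 - sp.Rational(1, 2)*(1 - E**2)   # psi(E) with Eq. (saturation)
print(sp.simplify(psi - phi**2/4))          # 0: exact saturation
\end{verbatim}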
In summary: \begin{quote} Within the first-order self-force approximation (and excluding deeply bound orbits), {\it equatorial captures generically result in a subextremal post-capture geometry. One can at best achieve extremality, through weak fine-tuning, but overspinning is not possible.} \end{quote} That overspinning appears to be possible in the geodesic approximation \cite{js} is simply an artefact of ignoring important GSF terms that appear {\it already at leading order} in the relevant overspinning conditions. We note that the above conclusions were arrived at almost entirely via analytical considerations. We required only two pieces of numerical input, one confirming the boundedness of the extremal limit in Eq.\ (\ref{hatHR}), and another establishing the bound (\ref{calRm}) for the flux ratio. Both numerical computations involve only circular geodesic orbits, and neither requires particularly high precision. The above general conclusions are entirely robust with respect to numerical error. However, it is important to remember that here we have been working strictly within the framework of the first-order GSF approximation, with no control whatsoever over high-order GSF corrections. Since the first-order analysis appeared to allow for an exact saturation of the overspinning condition, higher-order effects may qualitatively change the outcome. A second-order analysis may potentially yield any possible result: that the final geometry is always subextremal, or that overspinning is possible, or (once again) that the black hole can at most be brought to extremality. In that respect, {\it our first-order GSF analysis---just like the geodesic analysis of Ref.\ \cite{js}---does not provide a conclusive answer to the question of overspinning.} It is not clear if the question can be fully resolved at second order or at any other finite order in perturbation theory. This may be a disappointing conclusion, but it is an interesting one nonetheless. For what it is worth, let us return to discuss the consequences of our first-order analysis. We have found that overspinning is not possible, consistent with the conjecture of weak cosmic censorship. However, we have also found that, through (weak) fine-tuning of the orbital parameters, one can drive the black hole to extremality. This is an intriguing possibility, because it appears to be in violation of Israel's ``third law'' of black hole dynamics \cite{Israel}. We have not studied in detail whether the conditions of the third law can be said to be met in full in our problem. Reference \cite{zimm} contains some discussion of this point, and proposes how any apparent violation of the third law in the overspinning problem might be reconciled. There are several ways in which our analysis may be improved and further tested. First, it would be desirable to repeat the calculation of the angular momentum shift $\delta\mathsf{L}_c^{\rm cons}$ using a direct integration of the GSF, via Eq.\ (\ref{dLc}) (with $\tilde F_{\alpha}\to \tilde F_{\alpha}^{\rm cons}$). This would eliminate our reliance [in deriving Eq.\ (\ref{Z})] on the effective Hamiltonian formulation of Refs.\ \cite{Isoyama14,hami,Tiec:2015cxa}, which is axiomatic in nature. In fact, an explicit demonstration of agreement between the direct formula (\ref{dLc}) and the redshift formula (\ref{Z}) would constitute an important test of the Hamiltonian formulation. There is work in progress at Southampton to directly (numerically) evaluate $\delta\mathsf{L}_c^{\rm cons}$ via Eq.\ (\ref{dLc}).
Second, it may be possible to replace some (or all) of the numerical input for our analysis with analytical arguments. Specifically, one may seek to establish analytically the boundedness of $\hat H^R$ in Eq.\ (\ref{hatHR}), and the lower bound (\ref{calRm}) for the flux ratio ${\cal R}$. Both may be achieved by extending the ISCO analysis of Ref.\ \cite{Gralla:2015rpa} to unstable circular orbits, and (in the case of $\hat H^R$) by extending the analysis from the Weyl scalar to the reconstructed metric perturbation. We leave this to future work. To conclude, we reiterate our view that our work represents a first complete analysis of the overspinning/overcharging problem through first post-geodesic order in perturbation theory, for a particular capture scenario. Of course, we have only explored here a fraction of the space of interesting scenarios. Our analysis did not cover, for example, (i) very low energy configurations of deeply-bound orbits, (ii) non-equatorial orbits, (iii) ultrarelativistic or null particles, or (iv) spinning and/or electrically charged particles on a Kerr-Newman background. These scenarios all deserve examination. \section*{Acknowledgements} We gratefully acknowledge support from the European Research Council under the European Union's Seventh Framework Programme FP7/2007-2013/ERC, Grant No.\ 304978. LB acknowledges additional support from STFC through grant number PP/E001025/1.
\section{Introduction} Expansions in integer negative base $-b$, where $b \geqslant 2$, seem to have been introduced by Gr\"unwald in~\cite{Grunwald}, and rediscovered by several authors, see the historical comments given by Knuth~\cite{Knu}. The choice of a negative base $-b$ and of the alphabet $\{0,\ldots,b-1\}$ is interesting, because it provides a signless representation for every number (positive or negative). In this case it is easy to distinguish the sequences representing a positive integer from the ones representing a negative integer: denoting $(w\raisebox{0.1ex}{\textbf{.}})_{-b}:=\sum_{i=0}^k w_i(-b)^i$ for any $w=w_k\cdots w_0$ in $\{0,\ldots,b-1\}^*$ with no leading $0$'s, we have $\mathbb N=\{(w\raisebox{0.1ex}{\textbf{.}})_{-b} \mid |w| \textnormal{ is odd}\}$. The classical monotonicity between the lexicographical ordering on words and the represented numerical values does not hold anymore in negative base, for instance $3=(111\raisebox{0.1ex}{\textbf{.}})_{-2}$, $4=(100\raisebox{0.1ex}{\textbf{.}})_{-2}$ and $111 >_{lex} 100$. Nevertheless it is possible to restore such a correspondence by introducing an appropriate ordering on words, in the sequel denoted by $\prec_{alt}$, and called the {\em alternate order}. Representations in negative base also appear in some complex base number systems, for instance base $\beta=2i$ since $\beta^2=-4$ (see \cite{Frougny99} for a study of their properties from an automata theoretic point of view). Thus, beyond the interest in the problem in itself, the authors also wish the study of negative bases to be a useful preliminary step towards a better understanding of the complex case. Ito and Sadahiro recently introduced expansions in non-integer negative base ${-\beta}$ in \cite{IS}. They have given a characterization of admissible sequences, and shown that the $({-\beta})$-shift is sofic if and only if the $({-\beta})$-expansion of the number $-\frac{\beta}{\beta+1}$ is eventually periodic. In this paper we pursue their work. The purpose of this contribution is to show that many properties of the positive base (integer or not) numeration systems extend to the negative base case, the main difference being the sets of numbers that are representable in the two different cases. The results might seem unsurprising, but this study brings to light the important role played by the order on words: the lexicographic order for the positive bases, the alternate order for the negative bases. Very recently there have been several contributions to the study of numbers having only positive powers of the base in their expansion, the so-called $({-\beta})$-integers, in \cite{ADMP}, \cite{MPV}, and \cite{steiner}. We first establish some properties of the negative integer base $-b$, which are more or less folklore. This allows us to introduce the definitions of the alternate order and of the short-alternate order, which are natural for ordering numbers by their $({-\beta})$-expansions. We then prove a general result which is not related to numeration systems but to the alternate order, and which is of interest in itself. We define a symbolic dynamical system associated with a given infinite word $s$ satisfying some properties with respect to the alternate order on infinite words. We design an infinite countable automaton recognizing it. We are then able to characterize the case when the symbolic dynamical system is sofic (resp. of finite type).
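To fix ideas, here is a small Python sketch (ours, not part of the formal development) that computes the $(-b)$-representation of an arbitrary integer by repeated division with nonnegative remainder, together with the numerical value map $(w\raisebox{0.1ex}{\textbf{.}})_{-b}$; note how the nonnegative integers indeed receive odd-length representations.
\begin{verbatim}
def to_neg_base(n, b):
    """(-b)-representation of the integer n, digits in {0, ..., b-1},
    most significant digit first; computed by repeated division."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        n, r = divmod(n, -b)
        if r < 0:            # force a nonnegative remainder
            r += b
            n += 1
        digits.append(r)
    return digits[::-1]

def neg_base_value(word, b):
    """Numerical value (w.)_{-b} of the digit word w = w_k ... w_0."""
    val = 0
    for d in word:
        val = val * (-b) + d
    return val

assert to_neg_base(3, 2) == [1, 1, 1]           # 3 = (111.)_{-2}
assert to_neg_base(4, 2) == [1, 0, 0]           # 4 = (100.)_{-2}
assert neg_base_value([1, 1, 0, 1, 0], 2) == 6  # (11010.)_{-2} = 6
assert all(len(to_neg_base(n, 2)) % 2 == 1 for n in range(1, 100))
\end{verbatim}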
Using this general construction we can prove that the $({-\beta})$-shift is a symbolic dynamical system of finite type if and only if the $({-\beta})$-expansion of $-\frac{\beta}{\beta+1}$ is purely periodic. We also show that the entropy of the $({-\beta})$-shift is equal to $\log \beta$. We then focus on the case where $\beta$ is a Pisot number, that is to say, an algebraic integer greater than 1 such that its Galois conjugates (other than itself) have modulus less than 1. The integers greater than 1 and the Golden Mean are Pisot numbers. We extend all the results known to hold true in the Pisot case for $\beta$-expansions to the $({-\beta})$-expansions. In particular we prove that, if $\beta$ is a Pisot number, then every number from $\mathbb Q(\beta)$ has an eventually periodic $({-\beta})$-expansion, and thus that the $({-\beta})$-shift is a sofic system. When $\beta$ is a Pisot number, it is known that addition in base $\beta$ --- and more generally normalization in base $\beta$ on an arbitrary alphabet --- is realizable by a finite transducer \cite{Frougny92}. We show that this is still the case in base ${-\beta}$. The conversion from positive integer base to negative integer base is realizable by a finite right sequential transducer. When $\beta$ is not an integer, we give an on-line algorithm for the conversion from base $\beta$ to base ${-\beta}$, where the result is not admissible. When $\beta$ is a Pisot number, the conversion can be realized by a finite on-line transducer. A preliminary version of Sections \ref{alt} and \ref{s_real} has been presented in~\cite{FrougnyLai}. \section{Definitions and preliminaries} \subsection{Words and automata} An \emph{alphabet} is a totally ordered set. In this paper the alphabets are always finite. A finite sequence of elements of an alphabet $A$ is called a {\em word}, and the set of words on $A$ is the free monoid $A^*$. The empty word is denoted by $\varepsilon$. The set of infinite (resp. bi-infinite) words on $A$ is denoted by $A^\mathbb{N}$ (resp. $A^\mathbb{Z}$). Let $v$ be a word of $A^*$, denote by $v^n$ the concatenation of $v$ to itself $n$ times, and by $v^\omega$ the infinite concatenation $vvv\cdots$. A word of the form $uv^\omega$ is said to be {\em eventually periodic}. A (purely) {\em periodic} word is an eventually periodic word of the form $v^\omega$. A finite word $v$ is a \emph{factor} of a (finite, infinite or bi-infinite) word $x$ if there exist $u$ and $w$ such that $x=uvw$. When $u$ is the empty word, $v$ is a \emph{prefix} of $x$. The prefix $v$ is \emph{strict} if $v \neq x$. When $w$ is empty, $v$ is said to be a \emph{suffix} of $x$. We recall some definitions on automata, see \cite{Eil} and \cite{Sak} for instance. An {\em automaton over $A$}, $\mathcal A=(Q,A,E,I,T)$, is a directed graph labelled by elements of $A$. The set of vertices, traditionally called {\em states}, is denoted by $Q$, $I \subset Q$ is the set of {\em initial} states, $T \subset Q$ is the set of {\em terminal} states and $E \subset Q \times A \times Q$ is the set of labelled {\em edges}. If $(p,a,q) \in E$, we write $p \stackrel{a}{\to} q$. The automaton is {\em finite} if $Q$ is finite. The automaton $\mathcal A$ is {\em deterministic} if $E$ is the graph of a (partial) function from $Q \times A$ into $Q$, and if there is a unique initial state.
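For readers who prefer code, a deterministic automaton in the above sense can be run as follows; a minimal Python sketch (the example automaton anticipates the parity property of Section~\ref{int}: over $\{0,1\}$ it accepts exactly the words of odd length).
\begin{verbatim}
def accepts(delta, initial, finals, word):
    """Run a deterministic automaton: delta maps (state, letter) to a
    state; the word is accepted when the run ends in a final state."""
    state = initial
    for a in word:
        if (state, a) not in delta:  # undefined transition: reject
            return False
        state = delta[(state, a)]
    return state in finals

# two states tracking the parity of the length of the word read so far
delta = {(p, a): 1 - p for p in (0, 1) for a in (0, 1)}
assert accepts(delta, 0, {1}, [1, 1, 1])
assert not accepts(delta, 0, {1}, [1, 0])
\end{verbatim}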
A subset $H$ of $A^*$ is said to be {\em recognizable by a finite automaton}, or {\em regular}, if there exists a finite automaton $\mathcal A$ such that $H$ is equal to the set of labels of paths starting in an initial state and ending in a terminal state. Recall that two words $u$ and $v$ are said to be {\em right congruent modulo} $H$ if, for every $w$, $uw$ is in $H$ if and only if $vw$ is in $H$. It is well known that $H$ is recognizable by a finite automaton if and only if the congruence modulo $H$ has finite index. Let $A$ and $A'$ be two alphabets. A {\em transducer} is an automaton $\mathcal{T}=(Q,A^* \times A'^*,E,I,T)$ where the edges of $E$ are labelled by pairs in $A^* \times A'^*$. It is said to be {\em finite} if the set $Q$ of states and the set $E$ of edges are finite. If $(p,(u,v),q) \in E$, we write $p \stackrel{u| v}{\longrightarrow} q$. The \emph{input automaton} (resp. \emph{output automaton}) of such a transducer is obtained by taking the projection of edges on the first (resp. second) component. A transducer is said to be {\em sequential} if its input automaton is deterministic. An on-line transducer is a particular kind of sequential transducer. An {\em on-line transducer} with delay $\delta$, ${\mathcal A}=(Q, A \times (A' \cup \{\varepsilon\}), E,\{q_0\})$, is a sequential automaton composed of a transient part and a synchronous part, see \cite{M}. The set of states is equal to $Q=Q_t \cup Q_s$, where $Q_t$ is the set of transient states and $Q_s$ is the set of synchronous states. In the transient part, every path of length $\delta$ starting in the initial state $q_0$ is of the form $$q_0 \stackrel{x_1|\varepsilon}{\longrightarrow}q_1 \stackrel{x_2|\varepsilon}{\longrightarrow } \cdots \stackrel{x_{\delta}|\varepsilon}{\longrightarrow}q_{\delta}$$ where $q_0, \ldots, q_{\delta -1}$ are in $Q_t$, $x_j$ in $A$, for $1 \leqslant j \leqslant \delta$, and the only edge arriving in a state of $Q_t$ is as above. In the synchronous part, edges are labelled by elements of $A \times A'$. This means that the transducer begins by reading the first $\delta$ input digits while outputting nothing, and after that delay outputs serially one digit for each input digit. If the set of states $Q$ and the set of edges $E$ are finite, the on-line transducer is said to be finite. The same notions can be defined for automata and transducers processing words from right to left: they are called {\em right} automata or transducers. \subsection{Symbolic dynamics} Let us recall some definitions on symbolic dynamical systems or subshifts (see~\cite[Chapter~1]{Lot} or~\cite{LM}). The set $A^\mathbb Z$ is endowed with the lexicographic order, denoted $<_{lex}$, the product topology, and the shift $\sigma$, defined by $\sigma((x_i)_{i \in \mathbb Z})=(x_{i+1})_{i \in \mathbb Z}$. A set $S \subseteq A^\mathbb Z$ is a {\em symbolic dynamical system}, or {\em subshift}, if it is shift-invariant and closed for the product topology on $A^\mathbb Z$. A bi-infinite word $z$ {\em avoids} a set of words $X \subset A^*$ if no factor of $z$ is in $X$. The set of all bi-infinite words which avoid $X$ is denoted by $S_X$. A set $S \subseteq A^\mathbb Z$ is a subshift if and only if $S$ is of the form $S_X$ for some $X$. The same notion can be defined for a one-sided subshift of $A^\mathbb N$. Let $F(S)$ be the set of factors of elements of $S$, let $I(S)=A^+ \setminus F(S)$ be the set of words avoided by $S$, and let $X(S)$ be the set of elements of $I(S)$ which have no proper factor in $I(S)$.
The subshift $S$ is {\em sofic} if and only if $F(S)$ is recognizable by a finite automaton, or equivalently if $X(S)$ is recognizable by a finite automaton. The subshift $S$ is of {\em finite type} if $S=S_X$ for some finite set $X$, or equivalently if $X(S)$ is finite. The topological entropy of a subshift $S$ is \begin{equation*} h(S)=\lim_{n \to \infty} \frac{1}{n} \log(B_n(S)) \end{equation*} where $B_n(S)$ is the number of elements of $F(S)$ of length $n$. When $S$ is sofic, the entropy of $S$ is equal to the logarithm of the spectral radius of the adjacency matrix of the finite automaton recognizing $F(S)$. \subsection{Numeration systems} The reader is referred to~\cite[Chapter 7]{Lot} and to~\cite{cant} for a detailed presentation of these topics. Representations of real numbers in a non-integer base $\beta$ were introduced by R\'enyi~\cite{Ren} under the name of {\em $\beta$-expansions}. Let $x$ be a real number in the interval $[0,1]$. A {\em representation in base $\beta$} (or a $\beta$-representation) of $x$ is an infinite word $(x_i)_{i \geqslant 1}$ such that $$x= \sum_{i \geqslant 1} x_i \beta^{-i}.$$ Let $\mathbf x = (x_i)_{i \geqslant 1}$. The \emph{numerical value} in base $\beta$ is the function $\pi_\beta$ defined by $\pi_\beta(\mathbf x)=\sum_{i=1}^\infty x_i\beta^{-i}$. A particular $\beta$-representation --- called the $\beta$-{\em expansion} --- can be computed by the ``greedy algorithm'': denote by $\lfloor y \rfloor$, $\lceil y \rceil$ and $\{y\}$ the lower integer part, the upper integer part and the fractional part of a number $y$. Set $r_0=x$ and, for $i \geqslant 1$, let $x_i=\lfloor \beta r_{i-1} \rfloor$ and $r_i=\{\beta r_{i-1}\}$. Then $x= \sum_{i \geqslant 1} x_i \beta^{-i}$. The digits $x_i$ are elements of the canonical alphabet $A_\beta =\{0, \ldots,\lceil \beta \rceil -1\}$. The $\beta$-expansion of $x \in [0,1]$ will be denoted by $\mathop{\mathsf{d}_{\beta}}(x)=(x_i)_{i \geqslant 1}$. If $x>1$, there exists some $k \geqslant 1$ such that $x/\beta^{k}$ belongs to $[0,1)$. If $\mathop{\mathsf{d}_{\beta}}(x/\beta^{k})=(y_i)_{i \geqslant 1}$ then by shifting $x=(y_1 \cdots y_{k} \raisebox{0.1ex}{\textbf{.}} y_{k+1} y_{k+2}\cdots)_\beta$. An equivalent definition is obtained by using the {\em $\beta$-transformation} of the unit interval which is the mapping $$T_{\beta} : x \mapsto \beta x- \lfloor \beta x \rfloor.$$ Then $\mathop{\mathsf{d}_{\beta}}(x)=(x_i)_{i \geqslant 1}$ if and only if $x_i = \lfloor \beta T_{\beta}^{i-1}(x) \rfloor$. If a representation ends in infinitely many zeros, like $v0^\omega$, the ending zeros are omitted and the representation is said to be {\em finite}. In the case where the $\beta$-expansion of 1 is finite, there is a special representation playing an important role. Let $\mathop{\mathsf{d}_{\beta}}(1) =(t_i)_{i \geqslant 1}$ and set $\mathop{\mathsf{d}^{*}_{\beta}}(1)=\mathop{\mathsf{d}_{\beta}}(1)$ if $\mathop{\mathsf{d}_{\beta}}(1)$ is infinite and $\mathop{\mathsf{d}^{*}_{\beta}}(1)= (t_1 \cdots t_{m-1} (t_m - 1))^\omega$ if $\mathop{\mathsf{d}_{\beta}}(1)=t_1 \cdots t_{m-1}t_m$ is finite. Denote by $D_{\beta}$ the set of $\beta$-expansions of numbers of $[0,1)$. It is a shift-invariant subset of $A_\beta^\mathbb N$. The {\em $\beta$-shift} $S_{\beta}$ is the closure of $D_{\beta}$ and it is a subshift of $A_\beta^\mathbb Z$. \begin{theorem}[Parry~\cite{Parry}]\label{parryth} Let $\beta>1$ be a real number.
A word $(w_i)_{i\geqslant 1}$ belongs to $D_{\beta}$ if and only if for all $n \geqslant 1$ $$w_nw_{n+1}\cdots <_{lex} \mathop{\mathsf{d}^{*}_{\beta}}(1).$$ A word $(w_i)_{i\in \mathbb Z}$ belongs to $S_{\beta}$ if and only if for all $n$ $$w_nw_{n+1}\cdots \leqslant_{lex} \mathop{\mathsf{d}^{*}_{\beta}}(1).$$ \end{theorem} The following results are well-known (see~\cite[Chapt. 7]{Lot}). \begin{theorem} \begin{enumerate} \item The $\beta$-shift is sofic if and only if $\mathop{\mathsf{d}_{\beta}}(1)$ is eventually periodic. \item The $\beta$-shift is of finite type if and only if $\mathop{\mathsf{d}_{\beta}}(1)$ is finite. \end{enumerate} \end{theorem} It is known that the entropy of the $\beta$-shift is equal to $\log \beta$. \bigskip If $\beta$ is a Pisot number, then every element of $\mathbb Q(\beta) \cap [0,1]$ has an eventually periodic $\beta$-expansion, and the $\beta$-shift $S_\beta$ is a sofic system \cite{Bertrand,Schmidt}. Let $C$ be an arbitrary finite alphabet of integer digits. The {\em normalization\index{normalization} function} in base $\beta$ on $C$ $$\nu_{\beta,C} : C^{\mathbb N} \rightarrow A_\beta^\mathbb N$$ is the partial function which maps an infinite word $\mathbf y=(y_i)_{i \geqslant 1}$ over $C$, such that $0 \leqslant y=\sum_{i \geqslant 1}y_i \beta ^{-i} \leqslant 1$, onto the $\beta$-expansion of $y$. It is known \cite{Frougny92} that, when $\beta$ is a Pisot number, normalization is computable by a finite transducer on any alphabet $C$. Note that addition is a particular case of normalization, with $C=\{0,\ldots,2(\lceil \beta \rceil -1)\}$. \section{Negative integer base}\label{int} Let $b>1$ be an integer. It is well known, see Knuth \cite{Knu} for instance, that every integer (positive or negative) has a unique $(-b)$-representation with digits in $A_b=\{0,1,\ldots,b-1\}$. Every real number (positive or negative) has a $(-b)$-representation, not necessarily unique, since $$\big( -\dfrac{1}{b(b+1)}\big)_{-b} =\raisebox{0.1ex}{\textbf{.}} 1 ((b-1)0)^\omega =\raisebox{0.1ex}{\textbf{.}} 0(0(b-1))^\omega$$ for instance. The representation $\raisebox{0.1ex}{\textbf{.}} 1 ((b-1)0)^\omega$ will be the admissible one. We recall some well-known facts. \begin{proposition} The set of $(-b)$-representations of the positive integers is $\{u \in \{0,1,\ldots,b-1\}^* \mid u$ does not begin with $0$ and $|u|$ is odd$\}$. The set of $(-b)$-representations of the negative integers is $\{u \in \{0,1,\ldots,b-1\}^* \mid u$ does not begin with $0$ and $|u|$ is even$\}$. \end{proposition} Let $A$ be a totally ordered finite alphabet, and let $\min A$ be its smallest element. \begin{definition} The {\em alternate order} $\prec_{alt}$ on infinite words or finite words with same length on $A$ is defined by: $$u_1u_2u_3\cdots \prec_{alt} v_1v_2v_3\cdots$$ if and only if there exists $k \geqslant 1$ such that $$u_i=v_i \; \; \textrm{for} \; \; 1 \leqslant i <k \; \; \; \textrm{and} \; \; \; (-1)^k(u_k-v_k)<0.$$ \end{definition} This order was implicitly defined in~\cite{Grunwald}.
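The alternate order admits a direct transcription in code; a minimal Python sketch (digit sequences are given as lists, most significant digit first):
\begin{verbatim}
def alt_less(u, v):
    """Strict alternate order u <_alt v on two digit sequences of the
    same length (or two infinite sequences), as defined above."""
    for k, (a, b) in enumerate(zip(u, v), start=1):
        if a != b:
            return (-1) ** k * (a - b) < 0
    return False

# 3 = (111.)_{-2} and 4 = (100.)_{-2}: after padding to equal length
# with a leading 0, the alternate order agrees with the numerical order.
assert alt_less([0, 1, 1, 1], [0, 1, 0, 0])
\end{verbatim}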
\begin{definition} On finite words, we define the {\em short-alternate order}, denoted $\prec_{sa}$, by: if $u=u_{1} \cdots u_{\ell} $ and $v=v_{1} \cdots v_m$ are in $A^*$, then $u \prec_{sa} v$ if and only if \begin{itemize} \item $\ell$ and $m$ are odd, and $\ell<m$, or $\ell=m$ and $(\min A)u \prec_{alt} (\min A)v$ \item $\ell$ and $m$ are even, and $\ell >m$, or $\ell=m$ and $u \prec_{alt} v$ \item $\ell<m$ and $(\min A)^{m-\ell}u \prec_{sa} v$ \item $\ell>m$ and $u \prec_{sa} (\min A)^{\ell - m}v$. \end{itemize} \end{definition} The short-alternate order is analogous to the short-lex, or radix, order associated with the lexicographical order. Denote by $\langle x \rangle_{-b}$ the $(-b)$-representation of $x$. We have the following result. \begin{proposition} If $x$ and $y$ are integers, $x<y$ if and only if $\langle x \rangle_{-b} \prec_{sa} \langle y \rangle_{-b}$.\\ If $x$ and $y$ are real numbers from the interval $[-\tfrac{b}{b+1}, \tfrac{1}{b+1})$ then $x<y$ if and only if $\langle x \rangle_{-b} \prec_{alt} \langle y \rangle_{-b}$. \end{proposition} \begin{example} In base $-2$, $\langle 3 \rangle_{-2}=111$, $\langle 4 \rangle_{-2}=100$, $\langle 6 \rangle_{-2}= 11010$, and $111 \prec_{sa} 100 \prec_{sa} 11010 $. \end{example} \begin{proposition}\label{conv_b} The function that maps the $b$-representation of a positive integer to its $(-b)$-representation can be realized by a finite right sequential transducer. \end{proposition} \begin{proof} In Fig.~\ref{cb}, $0 \leqslant c \leqslant b-1$, $1 \leqslant d \leqslant b-1$, $0 \leqslant e \leqslant b-2$. The processing is done from right to left by 2-letter blocks. A finite word $x_{k-1}\cdots x_0$ which is the $b$-expansion of $x$ is transformed by the transducer into a finite word $y_k \cdots y_0$ which is the $(-b)$-expansion of $x$. It is straightforward to transform this transducer into a finite right sequential transducer. \begin{figure}[h] \begin{center} \VCDraw{% \begin{VCPicture}{(-1,-1)(4,2)} \State[1]{(-1,0)}{A} \State[0]{(4,0)}{B} \Initial[n]{B} \Final[s]{B} \FinalL{s}{A}{\varepsilon|1} \ArcL[.5]{B}{A}{dc|(b-d)c} \ArcL[.5]{A}{B}{0e|0(e+1)} \LoopW[.5]{A}{dc|(b-d-1)(b-c-1), 0(b-1)|(b-1)0} \LoopE[.5]{B}{0c|0c} % \end{VCPicture}% } \end{center} \caption{Finite right sequential transducer realizing conversion from base $b$ to base $-b$}\label{cb} \end{figure} \end{proof} \begin{example}\label{conv2} Base $-2$. \begin{figure}[h] \begin{center} \VCDraw{% \begin{VCPicture}{(-1,-1)(4,2)} \State[1]{(-1,0)}{A} \State[0]{(4,0)}{B} \Initial[n]{B} \Final[s]{B} \FinalL{s}{A}{\varepsilon|1} \ArcL[.5]{B}{A}{10|10,11|11} \ArcL[.5]{A}{B}{00|01} \LoopW[.5]{A}{01|10,10|11,11|00} \LoopE[.5]{B}{00|00,01|01} % \end{VCPicture}% } \end{center} \caption{Finite right sequential transducer realizing conversion from base $2$ to base $-2$}\label{c2} \end{figure} \end{example} \section{Symbolic dynamical systems and the alternate order}\label{alt} We have seen in the previous section that the alternate order is the tool to compare numbers written in a negative base. In this section we give general results on symbolic dynamical systems defined by the alternate order. This is analogous to the symbolic dynamical systems defined by the lexicographical order, see~\cite{cant}. Let $A$ be a totally ordered finite alphabet.
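Before developing the general constructions, the short-alternate order of Section~\ref{int} admits an equally direct transcription; a minimal sketch reusing \texttt{alt\_less} from above, checked against the example $111 \prec_{sa} 100 \prec_{sa} 11010$ in base $-2$.
\begin{verbatim}
def sa_less(u, v, min_a=0):
    """Strict short-alternate order u <_sa v, a direct transcription
    of the definition (digit lists, most significant digit first)."""
    lu, lv = len(u), len(v)
    if lu % 2 == lv % 2:
        if lu != lv:
            # odd lengths: longer means larger; even lengths: the reverse
            return lu < lv if lu % 2 == 1 else lu > lv
        if lu % 2 == 1:
            return alt_less([min_a] + u, [min_a] + v)
        return alt_less(u, v)
    if lu < lv:              # different parities: pad the shorter word
        return sa_less([min_a] * (lv - lu) + u, v, min_a)
    return sa_less(u, [min_a] * (lu - lv) + v, min_a)

assert sa_less([1, 1, 1], [1, 0, 0])        # <3> precedes <4>
assert sa_less([1, 0, 0], [1, 1, 0, 1, 0])  # <4> precedes <6>
\end{verbatim}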
\begin{definition} A word $s=s_1s_2 \cdots$ in $A^\mathbb N$ is said to be an {\em alternately shift minimal} word (asmin-word for short) if $s_1=\max A$ and $s$ is smaller than, or equal to, any of its shifted images in the alternate order: for each $n \geqslant 1$, $s \preceq_{alt} s_n s_{n+1}\cdots$. \end{definition} Let $$S(s)=\{w=(w_i)_{i \in \mathbb Z} \in A^\mathbb Z \mid \forall n, \; s \preceq_{alt} w_nw_{n+1} \cdots \}.$$ We construct a countable infinite automaton ${\cal A}_{S(s)}$ as follows (see Fig.~\ref{autom}, where $[a,b]$ denotes the set $\{a,a+1,\ldots,b\}$ if $a \leqslant b$, and the empty set otherwise. It is assumed in Fig.~\ref{autom} that $s_1 >s_j$ for $j \geqslant 2$.) The set of states is $\mathbb N$. For each state $i \geqslant 0$, there is an edge $i \stackrel{s_{i+1}}{\longrightarrow} i+1$. Thus the state $i$ is the name corresponding to the path labelled $s_1 \cdots s_{i}$. If $i$ is even, then for each $a$ such that $0 \leqslant a \leqslant s_{i+1}-1$, there is an edge $i \stackrel{a}{\longrightarrow} j$, where $j$ is such that $s_1 \cdots s_j$ is the suffix of maximal length of $s_1 \cdots s_{i}a$. If $i$ is odd, then for each $b$ such that $s_{i+1}+1 \leqslant b \leqslant s_1-1 $, there is an edge $i \stackrel{b}{\longrightarrow} j$ where $j$ is maximal such that $s_1 \cdots s_j$ is a suffix of $s_1 \cdots s_{i}b$; and if $s_{i+1} < s_1$ there is one edge $i \stackrel{s_1}{\longrightarrow} 1$. By construction, the deterministic automaton ${\cal A}_{S(s)}$ recognizes exactly the words $w$ such that every suffix of $w$ is $\succeq_{alt} s$ and the result below follows. \begin{figure}[h] \centering \VCDraw{% \begin{VCPicture}{(-1,-2)(11,2.8)} \State[0]{(-2,0)}{A}\State[1]{(2,0)}{B} \State[2]{(6,0)}{C}\State[3]{(10,0)}{D} \HideState\State{(14,0)}{E} \ChgEdgeLabelScale{0.7} \EdgeL{A}{B}{s_1} \EdgeL{B}{C}{s_2} \EdgeL{C}{D}{s_3} \LoopN[0.5]{A}{[0,s_1-1]}\LoopN[0.5]{B}{s_1} \ArcR[0.5]{B}{A}{[s_2+1,s_1-1]} \ArcL[0.5]{C}{A}{[0,s_3-1]} \LArcR[0.5]{D}{B}{s_1} \LArcL[0.5]{D}{A}{[s_4+1,s_1-1]} \EdgeL[0.5]{D}{E}{s_4} \end{VCPicture} } \caption{The automaton ${\cal A}_{S(s)}$} \label{autom} \end{figure} \begin{proposition}\label{aut} The subshift $S(s)=\{w=(w_i)_{i \in \mathbb Z} \in A^\mathbb Z \mid \forall n, \; s \preceq_{alt} w_nw_{n+1} \cdots \}$ is recognizable by the countable infinite automaton ${\cal A}_{S(s)}$. \end{proposition} \begin{proposition}\label{gensof} The subshift $S(s)=\{w=(w_i)_{i \in \mathbb Z} \in A^\mathbb Z \mid \forall n, \; s \preceq_{alt} w_nw_{n+1} \cdots \}$ is sofic if and only if $s$ is eventually periodic. \end{proposition} \begin{proof} The subshift $S(s)$ is sofic if and only if the set of its finite factors $F(S(s))$ is recognizable by a finite automaton. Given a word $u$ of $A^*$, denote by $[u]$ the right congruence class of $u$ modulo $F(S(s))$. Then in the automaton ${\cal A}_{S(s)}$, for each state $i \geqslant 1$, $i=[s_1 \cdots s_i]$, and $0=[\varepsilon]$. Suppose that $s$ is eventually periodic, $s=s_1 \cdots s_m(s_{m+1} \cdots s_{m+p})^\omega$, with $m$ and $p$ minimal. Thus, for each $k \geqslant 0$ and each $0 \leqslant i \leqslant p-1$, $s_{m+pk+i}=s_{m+i}$. \\ {\sl Case 1}: $p$ is even. Then $m+i=[s_1 \cdots s_{m+i}]=[s_1 \cdots s_{m+pk+i}]$ for every $k \geqslant 0$ and $0 \leqslant i \leqslant p-1$. Then the set of states of ${\cal A}_{S(s)}$ is $\{0,1,\ldots,m+p-1\}$.\\ {\sl Case 2}: $p$ is odd. Then $m+i=[s_1 \cdots s_{m+i}]=[s_1 \cdots s_{m+2pk+i}]$ for every $k \geqslant 0$ and $0 \leqslant i \leqslant 2p-1$.
The set of states of ${\cal A}_{S(s)}$ is $\{0,1,\ldots,m+2p-1\}$.\\ Conversely, suppose that $s$ is not eventually periodic. Then there exists an infinite sequence of indices $i_1<i_2<\cdots $ such that the sequences $s_{i_k}s_{i_k+1}\cdots $ are all different for all $k \geqslant 1$. Take any pair $(i_j,i_\ell)$, $j,\ell \geqslant 1$. If $i_j$ and $i_\ell$ do not have the same parity, then $s_1 \cdots s_{i_{j}}$ and $s_1 \cdots s_{i_{\ell}}$ are not right congruent modulo $F({S(s)})$. If $i_j$ and $i_\ell$ have the same parity, there exists $q \geqslant 0$ such that $s_{i_{j}} \cdots s_{i_{j}+q-1}=s_{i_{\ell}} \cdots s_{i_{\ell}+q-1}=v$ and, for instance, $(-1)^{i_{j}+q}(s_{i_{j}+q} - s_{i_{\ell}+q})>0$ (with the convention that, if $q=0$ then $v=\varepsilon$). Then $s_1 \cdots s_{i_{j}-1}vs_{i_{j}+q} \in F({S(s)})$, $s_1 \cdots s_{i_{\ell}-1}vs_{i_{\ell}+q} \in F({S(s)})$, but $s_1 \cdots s_{i_{j}-1}vs_{i_{\ell}+q}$ does not belong to $F({S(s)})$. Hence $s_1 \cdots s_{i_{j}}$ and $s_1 \cdots s_{i_{\ell}}$ are not right congruent modulo $F({S(s)})$, so the number of right congruence classes is infinite and $F({S(s)})$ is thus not recognizable by a finite automaton. \end{proof} \begin{proposition}\label{gensft} The subshift ${S(s)}=\{w=(w_i)_{i \in \mathbb Z} \in A^\mathbb Z \mid \forall n, \; s \preceq_{alt} w_nw_{n+1} \cdots \}$ is a subshift of finite type if and only if $s$ is purely periodic. \end{proposition} \begin{proof} Suppose that $s=(s_1 \cdots s_p)^\omega$. Consider the finite set $X=\{s_1 \cdots s_{n-1}b \mid b \in A, \; (-1)^n(b - s_n)<0, \;1 \leqslant n \leqslant p\}$. We show that ${S(s)}={S(s)}_X$. If $w$ is in ${S(s)}$, then $w$ avoids $X$, and conversely. Now, suppose that ${S(s)}$ is of finite type. It is thus sofic, and by Proposition~\ref{gensof} $s$ is eventually periodic. If it is not purely periodic, then $s=s_1 \cdots s_m(s_{m+1} \cdots s_{m+p})^\omega$, with $m$ and $p$ minimal, and $s_1 \cdots s_m \neq \varepsilon$. Let $I=\{s_1 \cdots s_{n-1}b \mid b \in A, \; (-1)^n(b - s_n)<0, \;1 \leqslant n \leqslant m\} \cup \{s_1 \cdots s_m(s_{m+1} \cdots s_{m+p})^{2k}$ $s_{m+1} \cdots s_{m+n-1}b \mid b \in A, \; k \geqslant 0, (-1)^{m+2kp+n}(b - s_{m+n})<0, \;1 \leqslant n \leqslant 2p\}$. Then $I \subset A^+ \setminus F({S(s)})$. First, suppose there exists $1 \leqslant j \leqslant p$ such that $(-1)^j(s_j - s_{m+j})<0$ and $s_1 \cdots s_{j-1} = s_{m+1} \cdots s_{m+j-1}$. For $k \geqslant 0$ fixed, let $w^{(2k)}=s_1 \cdots s_m(s_{m+1} \cdots s_{m+p})^{2k} s_1 \cdots s_j \in I$. We have $s_1 \cdots s_m(s_{m+1} \cdots s_{m+p})^{2k}s_{m+1}$ $\cdots s_{m+j-1} \in F({S(s)})$. On the other hand, for $n \geqslant 2 $, $s_n \cdots s_m(s_{m+1} \cdots s_{m+p})^{2k}$ is greater in the alternate order than the prefix of $s$ of same length, thus $s_n \cdots s_m(s_{m+1} \cdots s_{m+p})^{2k} s_1 \cdots s_j$ belongs to $F({S(s)})$. Hence any strict factor of $w^{(2k)}$ is in $ F({S(s)})$. Therefore for any $k \geqslant 0$, $w^{(2k)} \in X({S(s)})$, and $X({S(s)})$ is thus infinite: ${S(s)}$ is not of finite type. Now, if such a $j$ does not exist, then for every $1 \leqslant j \leqslant p$, $s_j=s_{m+j}$, and $s=(s_1 \cdots s_m)^\omega$ is purely periodic. \end{proof} \begin{remark}\label{remgeneral} Let $s'=s'_1s'_2 \cdots $ be a word in $A^\mathbb N$ such that $s'_1=\min A$ and, for each $n \geqslant 1$, $s'_n s'_{n+1}\cdots \preceq_{alt} s'$. Such a word is said to be an {\em alternately shift maximal} word. 
Let $S'(s')=\{w=(w_i)_{i \in \mathbb Z} \in A^\mathbb Z \mid \forall n, \; w_nw_{n+1} \cdots \preceq_{alt} s'\}$. The statements in Propositions~\ref{aut}, \ref{gensof} and \ref{gensft} are also valid for the subshift $S'(s')$ (with the automaton ${\cal A}_{S'(s')}$ constructed accordingly). \end{remark} \section{Negative real base}\label{s_real} \subsection{The $(-\beta)$-shift} Ito and Sadahiro \cite{IS} introduced a greedy algorithm to represent any real number in real base $-\beta$, $\beta>1$, and with digits in $A_{-\beta}=\{0,1,\ldots,\lfloor \beta \rfloor\}$. Remark that, when $\beta$ is not an integer, $A_{-\beta}=A_\beta$. A transformation on $I_{-\beta}=\left[-\frac{\beta}{\beta+1},\frac{1}{\beta+1}\right)$ is defined as follows: \[ T_{-\beta}(x)=-\beta x-\lfloor -\beta x+\frac{\beta}{\beta+1}\rfloor.\] For every real number $x \in I_{-\beta}$ denote $\mathop{\mathsf{d}_{-\beta}}(x)$ the $(-\beta)$-expansion of $x$. Then $\mathop{\mathsf{d}_{-\beta}}(x)=(x_i)_{i \geqslant 1}$ if and only if $x_i=\lfloor -\beta T_{-\beta}^{i-1}(x) + \frac{\beta}{\beta+1} \rfloor$, and $x=\sum_{i \geqslant 1} x_i ({-\beta})^{-i}$. When this last equality holds, we may also write: \begin{equation*}\label{eshift} x=(\raisebox{0.1ex}{\textbf{.}} x_1 x_2 \cdots)_{{-\beta}}. \end{equation*} We show that the alternate order $\prec_{alt}$ on $({-\beta})$-expansions gives the numerical order. \begin{proposition} Let $x$ and $y$ be in $I_{-\beta}$. Then $$x<y \iff \mathop{\mathsf{d}_{-\beta}}(x) \prec_{alt} \mathop{\mathsf{d}_{-\beta}}(y).$$ \end{proposition} \begin{proof} Suppose that $\mathop{\mathsf{d}_{-\beta}}(x) \prec_{alt} \mathop{\mathsf{d}_{-\beta}}(y)$. Then there exists $k \geqslant 1$ such that $x_i=y_i $ for $1 \leqslant i <k $ and $(-1)^k(x_k-y_k)<0$. Suppose that $k$ is even, $k=2q$. Then $x_{2q} \leqslant y_{2q}-1$. Thus $x-y \leqslant -\beta^{-2q}+\sum_{i \geqslant 2q+1} x_i ({-\beta})^{-i}-\sum_{i \geqslant 2q+1} y_i ({-\beta})^{-i}<0$, since $\sum_{i \geqslant 1} x_{2q+i} ({-\beta})^{-i}$ and $\sum_{i \geqslant 1} y_{2q+i} ({-\beta})^{-i}$ are in $I_{-\beta}$. The case $k=2q+1$ is similar. The converse is immediate. \end{proof} A word $(x_i)_{i \geqslant 1}$ is said to be $(-\beta)$-{\em admissible} if there exists a real number $x \in I_{-\beta}$ such that $\mathop{\mathsf{d}_{-\beta}}(x)=(x_i)_{i \geqslant 1}$. The {\em $(-\beta)$-shift} $S_{-\beta}$ is the closure of the set of $(-\beta)$-admissible words, and it is a subshift of $A_\beta^\mathbb Z$. Define the sequence $\mathop{\mathsf{d}^{*}_{-\beta}}(\frac{1}{\beta+1})$ as follows: \begin{itemize} \item if $\mathop{\mathsf{d}_{-\beta}}(-\frac{\beta}{\beta+1})=d_1d_2\cdots$ is not a periodic sequence with odd period, $$\mathop{\mathsf{d}^{*}_{-\beta}}(\frac{1}{\beta+1})=\mathop{\mathsf{d}_{-\beta}}(\frac{1}{\beta+1})=0d_1d_2\cdots$$ \item otherwise if $\mathop{\mathsf{d}_{-\beta}}(-\frac{\beta}{\beta+1})=(d_1\cdots d_{2p+1})^\omega$, $$\mathop{\mathsf{d}^{*}_{-\beta}}(\frac{1}{\beta+1})=(0d_1\cdots d_{2p}(d_{2p+1}-1) )^\omega.$$ \end{itemize} \begin{theorem}[Ito-Sadahiro \cite{IS}]\label{adm} A word $(w_i)_{i \geqslant 1}$ is $(-\beta)$-admissible if and only if for each $n \geqslant 1$ \begin{equation*} \mathop{\mathsf{d}_{-\beta}}(-\frac{\beta}{\beta+1})\preceq_{alt} w_nw_{n+1 }\cdots \prec_{alt} \mathop{\mathsf{d}^{*}_{-\beta}}(\frac{1}{\beta+1}). 
\end{equation*} A word $(w_i)_{i \in \mathbb Z}$ is an element of the $(-\beta)$-shift if and only if for each $n$ \begin{equation*} \mathop{\mathsf{d}_{-\beta}}(-\frac{\beta}{\beta+1})\preceq_{alt} w_nw_{n+1 }\cdots \preceq_{alt} \mathop{\mathsf{d}^{*}_{-\beta}}(\frac{1}{\beta+1}). \end{equation*} \end{theorem} Put $\mathbf{d}=\mathop{\mathsf{d}_{-\beta}}(-\frac{\beta}{\beta+1})=d_1d_2\cdots$ and $\mathbf{d}^*=\mathop{\mathsf{d}^{*}_{-\beta}}(\frac{1}{\beta+1})$. Theorem~\ref{adm} can be restated as follows. \begin{lemma}\label{inter} If $\mathbf{d}=\mathop{\mathsf{d}_{-\beta}}(-\frac{\beta}{\beta+1})$ is not a periodic sequence with odd period, then $$S_{-\beta}=S(\mathbf{d})=\{(w_i)_{i \in \mathbb Z} \in A_\beta^\mathbb Z \mid \forall n,\; \mathbf{d} \preceq_{alt} w_nw_{n+1 }\cdots \}.$$ If $\mathbf{d}=\mathop{\mathsf{d}_{-\beta}}(-\frac{\beta}{\beta+1})$ is a periodic sequence of odd period, then $\mathbf{d}^*=(0d_1\cdots d_{2p}(d_{2p+1}-1) )^\omega$ and $$S_{-\beta}= S(\mathbf{d}) \cap S'(\mathbf{d}^*)$$ where $$S'(\mathbf{d}^*)=\{(w_i)_{i \in \mathbb Z} \in A_\beta^\mathbb Z \mid \forall n,\; w_nw_{n+1 }\cdots \preceq_{alt} \mathbf{d}^*\}.$$ \end{lemma} \begin{theorem} The $(-\beta)$-shift is a system of finite type if and only if $\mathop{\mathsf{d}_{-\beta}}(-\frac{\beta}{\beta+1})$ is purely periodic. \end{theorem} \begin{proof} If $\mathop{\mathsf{d}_{-\beta}}(-\frac{\beta}{\beta+1})$ is purely periodic with an even period, the result follows from Theorem~\ref{adm}, Lemma~\ref{inter} and Proposition~\ref{gensft}. If $\mathop{\mathsf{d}_{-\beta}}(-\frac{\beta}{\beta+1})$ is purely periodic with an odd period, the result follows from Theorem~\ref{adm}, Lemma~\ref{inter}, Proposition~\ref{gensft}, Remark~\ref{remgeneral}, and the fact that the intersection of two subshifts of finite type is again of finite type (the two finite sets of forbidden words are merged by taking their union, which is still finite). \end{proof} By Theorem~\ref{adm}, Lemma~\ref{inter}, Proposition~\ref{gensof}, Remark~\ref{remgeneral}, and the fact that the intersection of two sofic shifts is again sofic (regular languages being closed under intersection), we obtain the following result. \begin{theorem}[Ito-Sadahiro \cite{IS}] The $(-\beta)$-shift is a sofic system if and only if $\mathop{\mathsf{d}_{-\beta}}(-\frac{\beta}{\beta+1})$ is eventually periodic. \end{theorem} \begin{example}\label{ex1} Let $G=\frac{1+\sqrt{5}}{2}$; then $\mathop{\mathsf{d}_{G}}(1)=11$ and the $G$-shift is of finite type. Since $\mathop{\mathsf{d}_{-G}}(-\frac{G}{G+1})=10^\omega$, the $(-G)$-shift is a sofic system which is not of finite type. \\ The automaton in Fig.~\ref{shiftG} (right) recognizing the $(-G)$-shift is obtained by minimizing the result of the construction of Proposition~\ref{aut}. Remark that it is the automaton which recognizes the celebrated even shift (see~\cite{LM}). \begin{figure}[h] \begin{center} \VCDraw{% \begin{VCPicture}{(-1,-0.2)(4,1.8)} \State{(-4,0)}{A} \State{(-1,0)}{B} \ArcL[.5]{A}{B}{1} \ArcL[.5]{B}{A}{0} \LoopN[.5]{A}{0} \State{(4,0)}{A1} \State{(7,0)}{B1} \ArcL[.5]{A1}{B1}{0} \ArcL[.5]{B1}{A1}{0} \LoopN[.5]{A1}{1} \end{VCPicture} } \end{center} \caption{Finite automata for the $G$-shift (left) and for the $(-G)$-shift (right)} \label{shiftG} \end{figure} \end{example} \begin{example}\label{ex2} Let $\beta=G^2=\frac{3+\sqrt{5}}{2}$; then $\mathop{\mathsf{d}_{\beta}}(1)=21^\omega$ and the $\beta$-shift is sofic, but not of finite type. Now, $\mathop{\mathsf{d}_{-\beta}}(-\frac{\beta}{\beta+1})=(21)^\omega$ and the $({-\beta})$-shift is of finite type: the set of minimal forbidden factors is $X(S_{-\beta})=\{20\}$.
\medskip \begin{figure}[h] \begin{center} \VCDraw{% \begin{VCPicture}{(-1,-0.2)(4,1.8)} \State{(-4,0)}{A} \State{(-1,0)}{B} \ArcL[.5]{A}{B}{2} \ArcL[.5]{B}{A}{0} \LoopN[.5]{A}{0,1} \LoopN[.5]{B}{1} \State{(4,0)}{A1} \State{(7,0)}{B1} \ArcL[.5]{A1}{B1}{2} \ArcL[.5]{B1}{A1}{1} \LoopN[.5]{A1}{0,1} \LoopN[.5]{B1}{2} \end{VCPicture} } \end{center} \caption{Finite automata for the $G^2$-shift (left) and for the $(-G^2)$-shift (right)} \label{autmG} \end{figure} \end{example} \subsection{Entropy of the $(-\beta)$-shift} Examples \ref{ex1} and \ref{ex2} suggest that the entropy of the $({-\beta})$-shift is the same as the entropy of the $\beta$-shift because the adjacency matrices of the automata are the same. This is what we show in this section. A standard technique for computing the entropy of a subshift $S$ is to construct a (not necessarily finite) automaton recognizing $F(S)$. One then considers the submatrices of the adjacency matrix and computes, for every $n$, the greatest eigenvalue $\lambda_n$ of the submatrix of order $n$. A result proved in \cite{Ho1} ensures that the limit $\lambda$ of the sequence $\lambda_n$ exists and satisfies $h(S)=\log \lambda$. Unfortunately the explicit computation of the $\lambda_n$'s in the general case turns out to be very complicated, so we use tools from the theory of dynamical systems: \begin{itemize} \item[--] the notion of topological entropy for one-dimensional dynamical systems, a one-dimensional dynamical system being a pair $(I,T)$ consisting of a bounded interval $I$ and a piecewise continuous transformation $T:I\rightarrow I$; \item[--] a result by Takahashi~\cite{Tak80} establishing the relation between topological entropies of one-dimensional dynamical systems and symbolic dynamical systems; \item[--] a result by Shultz~\cite{Shu07} on the topological entropy of some one-dimensional dynamical systems. \end{itemize} Let us begin with the definition of topological entropy for one-dimensional dynamical systems. \vskip0.2cm \begin{definition} Let $(I,T)$ be a dynamical system. For every finite cover of $I$, say $\mathcal C$, set: \begin{equation*} H(T,\mathcal C):=\limsup_{n \to \infty}\frac{1}{n}\log N\left(\bigvee_{m=0}^{n-1}T^{-m}\mathcal C\right) \end{equation*} with $\bigvee$ denoting the finest common refinement and $N(\mathcal C)$ denoting the number of elements of the smallest subcover of $\mathcal C$, a subcover of $\mathcal C$ being a subfamily of $\mathcal C$ still covering $I$. The \emph{topological entropy} of $(I,T)$ is given by the formula \begin{equation} h(I,T):=\sup_{\mathcal C} H(T,\mathcal C). \end{equation} \end{definition} In \cite{Tak80} Takahashi proved the equality between the topological entropy of a piecewise continuous dynamical system and the topological entropy of an appropriate subshift. Before stating such a result we need a definition. \begin{definition} Let $T:I\rightarrow I$ be a piecewise continuous map on the interval $I$. The \emph{lap intervals} $I_0,\dots,I_l$ of $T$ are closed intervals satisfying the following conditions: \begin{enumerate}[(a)] \item $I_0\cup\dots\cup I_l=I$; \item $T$ is monotone on each interval $I_i$, $~i=0,\dots,l$; \item the number $l$ is minimal under the conditions (a) and (b). \end{enumerate} The number $l$ is called the \emph{lap number} and is denoted by $lap(T)$. \end{definition} \begin{remark} If the map $T$ is piecewise linear then the lap intervals are unique and they coincide with the intervals of continuity of $T$.
\end{remark} \begin{theorem}[Takahashi~\cite{Tak80}] \label{t entr tak} Let $T$ be a piecewise continuous transformation over the closed interval $I$ on itself. Let $\gamma_T:I\rightarrow A_T^\mathbb N$ be the map defined by the relation $x\mapsto x_1x_2\cdots$ with $x_n$ satisfying $T^n(x)\in I_{x_n}$. Define the subshift $X_T:=\overline{\gamma_T(I)}$ in $A_T^\mathbb N$. If $lap(T)$ is finite then: \begin{equation} h(X_T)=h(I,T). \end{equation} \end{theorem} The entropy in the very particular case of a piecewise linear map with constant slope is explicitly given in the following result. \begin{proposition}[{Shultz~\cite[Proposition 3.7]{Shu07}}]\label{p entr piecew} Let $T$ be a piecewise linear map with slope $\pm \beta$. Then the topological entropy of $T$ is equal to $\log \beta$. \end{proposition} We now prove our result. \begin{theorem} The topological entropy of $S_{-\beta}$ is equal to $\log \beta$. \end{theorem} \begin{proof} Consider the dynamical system $(I_{-\beta},T_{-\beta})$. We extend the map $T_{-\beta}$ to the closure of $I_{-\beta}$ to fulfill the conditions of Theorem \ref{t entr tak}. By definition of the $(-\beta)$-expansion, the subshift $X_{T_{-\beta}}$ coincides with the closure of the set of the $(-\beta)$-expansions in $A_{-\beta}^\mathbb N$, whose entropy is the same as $S_{-\beta}\subset A_{-\beta}^\mathbb Z$. As $T_{-\beta}$ is piecewise linear, the lap intervals coincide with its finitely many intervals of continuity. Then, by Theorem \ref{t entr tak} and by Proposition \ref{p entr piecew}, $h(S_{-\beta})=h(I_{-\beta},T_{-\beta})=\log \beta$. \end{proof} \subsection{The Pisot case} We first prove that the classical result saying that if $\beta$ is a Pisot number, then every element of $\mathbb Q(\beta) \cap [0,1]$ has an eventually periodic $\beta$-expansion is still valid for the base ${-\beta}$. \begin{theorem}\label{rat} If $\beta$ is a Pisot number, then every element of $\mathbb Q(\beta) \cap I_{-\beta}$ has an eventually periodic $(-\beta)$-expansion. \end{theorem} \begin{proof} Let $M_\beta(X)=X^d -a_1X^{d-1}-\cdots-a_d$ be the minimal polynomial of $\beta$ and denote by $\beta=\beta_1,\ldots,\beta_d$ the conjugates of $\beta$. Let $x$ be arbitrarily fixed in $\mathbb Q(\beta) \cap I_{-\beta}$. Since $\mathbb Q(\beta) = \mathbb Q(-\beta)$, $x$ can be expressed as $x=q^{-1}\sum_{i=0}^{d-1} p_i (-\beta)^i$ with $q$ and $p_i$ in $\mathbb Z$, $q>0$ as small as possible in order to have uniqueness. Let $(x_i)_{i\geqslant 1}$ be the $(-\beta)$-expansion of $x$, and write \[ r_n=r_n^{(1)}=r_n^{(1)}(x)=\frac{x_{n+1}}{-\beta}+\frac{x_{n+2}}{(-\beta)^2}+\cdots=(-\beta)^n\left(x-\sum_{k=1}^{n} x_k (-\beta)^{-k}\right). \] Since $r_n=T^n_{-\beta}(x)$ belongs to $I_{-\beta}$, we have $|r_n|\leqslant \frac{\beta}{\beta+1}<1$. For $2\leqslant j \leqslant d$, let \[r_n^{(j)}=r_n^{(j)}(x)=(-\beta_j)^n\left(q^{-1}\sum_{i=0}^{d-1} p_i (-\beta_j)^i-\sum_{k=1}^{n} x_k (-\beta_j)^{-k}\right). \] Let $\eta=\max\{|\beta_j| \mid 2\leqslant j \leqslant d\}$: since $\beta$ is a Pisot number, $\eta<1$. Since $x_k \leqslant \lfloor \beta \rfloor$ we get \[ |r_n^{(j)}|\leqslant q^{-1}\sum_{i=0}^{d-1}| p_i | \eta^{n+i} + \lfloor \beta \rfloor \sum_{k=0}^{n-1} \eta^{k} \] and since $\eta<1$, $\max_{1\leqslant j \leqslant d}\{\sup_n\{|r_n^{(j)}|\}\}< \infty.$ We need a technical result. Set $R_n=(r_n^{(1)}, \ldots, r_n^{(d)})$ and let $B$ be the matrix $B=((-\beta_j)^{-i})_{1\leqslant i,j \leqslant d}.$ \begin{lemma} Let $x=q^{-1}\sum_{i=0}^{d-1} p_i (-\beta)^i$.
For every $n \geqslant 0$ there exists a unique $d$-tuple $Z_n=(z_n^{(1)},\dots,z_n^{(d)})$ in $\mathbb Z^d$ such that $R_n=q^{-1}Z_nB$. \end{lemma} \begin{proof} By induction on $n$. First, $r_1=-\beta x - x_1$, thus \[ r_1=q^{-1}\left(\sum_{i=0}^{d-1} p_i (-\beta)^{i+1}-qx_1\right)=q^{-1}\left( \frac{z_1^{(1)}}{-\beta}+\cdots+\frac{z_1^{(d)}}{(-\beta)^d}\right) \] using the fact that $(-\beta)^d= -a_1(-\beta)^{d-1}+a_2 (-\beta)^{d-2}+ \cdots+(-1)^{d}a_d$. Now, $r_{n+1}=-\beta r_{n} - x_{n+1}$, hence \[ r_{n+1}=q^{-1}\left(z_n^{(1)}+ \frac{z_n^{(2)}}{-\beta}+\cdots+\frac{z_n^{(d)}}{(-\beta)^{d-1}}-qx_{n+1}\right)=q^{-1}\left( \frac{z_{n+1}^{(1)}}{-\beta}+\cdots+\frac{z_{n+1}^{(d)}}{(-\beta)^{d}}\right) \] since $z_n^{(1)}-qx_{n+1}\in \mathbb Z$. Thus for every $n$ there exists $(z_n^{(1)}, \ldots,z_n^{(d)})$ in $\mathbb Z^d$ such that \[r_{n}=q^{-1}\sum_{k=1}^{d}z_n^{(k)}(-\beta)^{-k}. \] Since the latter equation has integral coefficients and is satisfied by $-\beta$, it is also satisfied by $-\beta_j$, $2 \leqslant j \leqslant d$, and \[r_{n}^{(j)}=(-\beta_j)^n\left(q^{-1}\sum_{i=0}^{d-1}\bar p_i (-\beta_j)^i-\sum_{k=1}^{n} x_k (-\beta_j)^{-k}\right)=q^{-1}\sum_{k=1}^{d}z_n^{(k)}(-\beta_j)^{-k}. \] \end{proof} Let us go back to the proof of Theorem~\ref{rat}. Let $V_n=qR_n$. The $(V_n)_{n\geqslant 1}$ have bounded norm, since $\max_{1\leqslant j \leqslant d}\{\sup_n\{|r_n^{(j)}|\}\}< \infty$. As the matrix $B$ is invertible, the norms \[\|Z_n\|=\|(z_n^{(1)},\dots,z_n^{(d)})\|=\max\{|z_n^{(j)}|\; : 1\leqslant j \leqslant d\}\] are bounded uniformly in $n$; since the $Z_n$ have integer entries, they take only finitely many values, so there exist $p$ and $m \geqslant 1$ such that $Z_{m+p}=Z_{p}$, hence $r_{m+p}=r_{p}$ and the $(-\beta)$-expansion of $x$ is eventually periodic. \end{proof} As a corollary we get the following result. \begin{theorem}\label{pis-sof} If $\beta$ is a Pisot number then the $(-\beta)$-shift is a sofic system. \end{theorem} The \emph{normalization} in base $-\beta$ is the function which maps any $(-\beta)$-representation over an alphabet $C$ of digits of a given number of $I_{{-\beta}}$ onto the admissible $(-\beta)$-expansion of that number. Let $C=\{-c, \ldots,c\}$, where $c \geqslant \lfloor \beta \rfloor$ is an integer. Denote $$Z_{{-\beta}}(2c)= \Big\{(z_i)_{i \geqslant 0} \in\{-2c,\ldots,2c\}^\mathbb N\ \Big|\ \sum_{i \geqslant 0}z_i({-\beta})^{-i}=0\Big\}\,. $$ The set $Z_{-\beta}(2c)$ is recognized by a countable infinite automaton $\mathcal A_{-\beta}(2c)$: the set of states $Q(2c)$ consists of all $s\in\mathbb Z[\beta] \cap [-\frac{2c}{\beta-1},\frac{2c}{\beta-1}]$. Transitions are of the form $s\stackrel e \to s'$ with $e \in\{-2c,\ldots,2c\}$ such that $s'=-\beta s+e$. The state $0$ is initial; every state is terminal. Let $M_\beta(X)$ be the minimal polynomial of $\beta$, and denote by $\beta=\beta_1$, $\beta_2$, \ldots, $\beta_d$ the roots of $M_\beta$. We define a norm on the discrete lattice of rank $d$, $\mathbb Z[X]/(M_\beta)$, as $$||P(X)||=\max_{1 \leqslant i \leqslant d} |P(\beta_i)|.$$ \begin{proposition} If $\beta$ is a Pisot number then the automaton $\mathcal A_{-\beta}(2c)$ is finite for every $c \geqslant \lfloor \beta \rfloor$. \end{proposition} \begin{proof} Every state $s$ in $Q(2c)$ is associated with the label of the shortest path $f_0f_1\cdots f_n$ from $0$ to $s$ in the automaton. Thus $s=f_{0}(-\beta)^{n} +f_1(-\beta)^{n-1} +\cdots + f_n=P(\beta)$, with $P(X)$ in $\mathbb Z[X]/(M_\beta)$.
Since $f_0f_1\cdots f_n$ is a prefix of a word of $Z_{-\beta}(2c)$, there exists $f_{n+1}f_{n+2}\cdots$ such that $(f_i)_{i \geqslant 0}$ is in $Z_{-\beta}(2c)$. Thus $|s|=|P(\beta)| \leqslant \frac{2c}{\beta-1}$. For every conjugate $\beta_i$, $2 \leqslant i \leqslant d$, $|\beta_i|<1$, and $|P(\beta_i)| < \frac {2c}{1-|\beta_i|}$. Thus every state of $Q(2c)$ is bounded in norm, and so there is only a finite number of them. \end{proof} The {\em redundancy transducer} $\mathcal R_{-\beta}(c)$ is similar to $\mathcal A_{-\beta}(2c)$. Each transition $s\stackrel e\to s'$ of $\mathcal A_{-\beta}(2c)$ is replaced in $\mathcal R_{-\beta}(c)$ by a set of transitions $s\stackrel{a|b}\longrightarrow s'$, with $a,b\in\{-c,\ldots,c\}$ and $a-b=e$. Thus one obtains the following proposition. \begin{proposition}\label{redundancy} The redundancy transducer $\mathcal R_{-\beta}(c)$ recognizes the set $$ \big\{(x_1x_2\cdots,y_1y_2\cdots) \in C^\mathbb N \times C^\mathbb N\ \big|\ \; \sum_{i \geqslant 1}x_i({-\beta})^{-i}=\sum_{i \geqslant 1}y_i({-\beta})^{-i} \big\}. $$ If $\beta$ is a Pisot number, then $\mathcal R_{-\beta}(c)$ is finite. \end{proposition} \begin{theorem} If $\beta$ is a Pisot number, then normalization in base $-\beta$ on any alphabet $C$ is realizable by a finite transducer. \end{theorem} \begin{proof} The normalization is obtained by keeping in $\mathcal R_{-\beta}(c)$ only the outputs $y$ that are $({-\beta})$-admissible. By Theorem~\ref{pis-sof} the set of admissible words is recognizable by a finite automaton $\mathcal D_{-\beta}$. The finite transducer $\mathcal N_{-\beta}(c)$ doing the normalization is obtained by making the intersection of the output automaton of $\mathcal R_{-\beta}(c)$ with $\mathcal D_{-\beta}$. \end{proof} \begin{proposition}\label{conversion} If $\beta$ is a Pisot number, then the conversion from base $-\beta$ to base $\beta$ is realizable by a finite transducer. The result is $\beta$-admissible. \end{proposition} \begin{proof} Let $x \in I_{-\beta}$, $x \geqslant 0$, such that $\mathop{\mathsf{d}_{-\beta}}(x)=x_1x_2x_3\cdots$. Denote by $\bar a$ the signed digit $(-a)$. Then $\overline{x_1}x_2\overline{x_3}\cdots$ is a $\beta$-representation of $x$ on the alphabet $\widetilde{A_{-\beta}}=\{-\lfloor \beta \rfloor , \ldots,\lfloor \beta \rfloor\}$. Thus the conversion is equivalent to the normalization in base $\beta$ on the alphabet $\widetilde{A_{-\beta}}$, and when $\beta$ is a Pisot number, it is realizable by a finite transducer by \cite{Frougny92}. \end{proof} \section{On-line conversion from positive to negative base}\label{s_conversion} Proposition \ref{conversion} shows the feasibility of the conversion between positive and negative base with a finite transducer for a particular class of bases, {\it i.e.} the Pisot numbers. The result is admissible, but this transducer is not sequential. In the case where the base is a negative integer, we have seen in Section~\ref{int} that the conversion from base $b$ to base $-b$ is realizable by a finite right sequential transducer. \subsection{On-line conversion in the general case}\label{algo} An on-line algorithm is such that, after a certain delay of latency $\delta$ during which the data are read without writing, a digit of the output is produced for each digit of the input, see \cite{M} for on-line arithmetic in integer base.
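The starting point of such conversions is the sign-flipping observation used in the proof of Proposition~\ref{conversion}: since $(-\beta)^{-i}=(-1)^i\beta^{-i}$, negating the digits in odd positions exchanges base $\beta$ and base $-\beta$ representations, at the price of a symmetric digit alphabet. A minimal numerical check (the function name is ours):
\begin{verbatim}
def flip_signs(digits):
    """Negate the digits in odd positions: this turns a base-beta
    representation of x into a base-(-beta) representation of x (and
    vice versa) over the alphabet {-floor(beta), ..., floor(beta)}."""
    return [(-1) ** i * d for i, d in enumerate(digits, start=1)]

beta = 2.5
xs = [1, 0, 2, 1]
lhs = sum(d * beta ** -i for i, d in enumerate(xs, 1))
rhs = sum(d * (-beta) ** -i for i, d in enumerate(flip_signs(xs), 1))
assert abs(lhs - rhs) < 1e-12
\end{verbatim}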
\begin{theorem} There exists a conversion from base $\beta$ to base $-\beta$ which is computable by an on-line algorithm with delay $\delta$, where $\delta$ is the smallest positive integer such that \begin{equation}\label{delay} \frac{\lfloor \beta \rfloor}{\beta^{\delta -1}} + \frac{\lfloor \beta \rfloor}{\beta^{\delta }} \leqslant 1 - \{\beta\}. \end{equation} The result is not admissible. \end{theorem} \vskip0.2cm \hrule \vskip0.2cm \noindent {\bf On-line algorithm} \vskip0.2cm \hrule \vskip0.2cm \noindent{\sl Input}: a word $ (x_j)_{j \geqslant 1}$ of $A_\beta^\mathbb N$ such that $x= \sum_{j \geqslant 1} x_j \beta^{-j}$ and $0 \leqslant x <\frac{1}{\beta+1}$.\\ {\sl Output}: a word $ (y_j)_{j \geqslant 1}$ of $A_\beta^\mathbb N$ such that $x=\sum_{j\geqslant 1} y_j (-\beta)^{-j}.$\\ \noindent\texttt{begin}\\ \noindent$q_0:=0$\\ \noindent\texttt{for $j:= 1$ to $\delta$ do}\\ \hspace*{0.5cm} $q_j:=q_{j-1}+\frac{x_{j}}{\beta^j}$\\ $j:=1$\\ \noindent\texttt{while $j\geqslant 1$ do}\\ \hspace*{0.5cm} $z_{\delta+j} := -\beta q_{\delta+j-1}+ (-1)^j\frac{x_{\delta+j}}{\beta^\delta}$\\ \hspace*{0.5cm} \texttt{if} $-\frac{ \beta}{\beta+1} \leqslant z_{\delta+j} \leqslant \frac{\beta^2}{\beta+1}$ \texttt{then} $y_j:= \lfloor z_{\delta+j} + \frac{\beta}{\beta+1} \rfloor$\\ \hspace*{0.5cm} \texttt{if} $z_{\delta+j} >\frac{\beta^2}{\beta+1}$ \texttt{then} $y_j:=\lfloor \beta \rfloor$ \\ \hspace*{0.5cm} \texttt{if} $z_{\delta+j} <-\frac{ \beta}{\beta+1}$ \texttt{then} $y_j:=0$ \\ \hspace*{0.5cm} $q_{\delta+j}:=z_{\delta+j}-y_j$\\ \hspace*{0.5cm} $j:=j+1$\\ \noindent\texttt{end}\\ \hrule \vskip0.2cm \begin{proof}\textbf{Claim 1.} For each $j \geqslant 1$ $$\frac{x_1}{\beta}+ \frac{x_2}{\beta^2}+\cdots + \frac{x_{\delta+j}}{\beta^{\delta+j}}= -\frac{y_1}{\beta}+ \frac{y_2}{\beta^2}-\cdots + (-1)^j\frac{y_j}{\beta^{j}}+(-1)^j\frac{q_{\delta+j}}{\beta^{j}}.$$ \noindent\textbf{Claim 2.} If $-\frac{ \beta}{\beta+1} \leqslant z_{\delta+j} \leqslant \frac{\beta^2}{\beta+1}$ then $y_j$ belongs to $A_\beta$ and $ q_{\delta+j}$ belongs to $I_{{-\beta}}=[-\frac{ \beta}{\beta+1} ,\frac{1}{\beta+1})$.\\ Proof of Claim 2: Clearly $0 \leqslant y_j \leqslant \frac{\beta^2}{\beta+1} + \frac{\beta}{\beta+1}=\beta$. 
Moreover, $ z_{\delta+j} + \frac{\beta}{\beta+1}=y_j + \{ z_{\delta+j} + \frac{\beta}{\beta+1}\}$, thus $q_{\delta+j}:=z_{\delta+j}-y_j=\{ z_{\delta+j} + \frac{\beta}{\beta+1}\} - \frac{\beta}{\beta+1}$, and the claim is proved.\\ \noindent\textbf{Claim 3.} If $z_{\delta+j} >\frac{\beta^2}{\beta+1}$ then $ q_{\delta+j}>-\frac{ \beta}{\beta+1}$.\\ Proof of Claim 3: We have that $q_{\delta+j}=z_{\delta+j}-\lfloor \beta \rfloor> \frac{\beta^2}{\beta+1} -\lfloor \beta \rfloor>-\frac{ \beta}{\beta+1}$.\\ \noindent\textbf{Claim 4.} If $z_{\delta+j} >\frac{\beta^2}{\beta+1}$ and $q_{\delta+j-1} \geqslant -\frac{ \beta}{\beta+1}$ then $ q_{\delta+j}<\frac{1}{\beta+1}$.\\ Proof of Claim 4: Since $q_{\delta+j}= -\beta q_{\delta+j-1}+ (-1)^j\frac{x_{\delta+j}}{\beta^\delta} -\lfloor \beta \rfloor \leqslant \frac{\beta^2}{\beta+1} + \frac{\lfloor \beta \rfloor}{\beta^\delta} - \lfloor \beta \rfloor$, the claim is proved if, and only if, $\frac{\lfloor \beta \rfloor}{\beta^\delta} - \lfloor \beta \rfloor <1 -\beta$, that is to say, if, and only if, $\frac{\lfloor \beta \rfloor}{\beta^\delta}<1-\{\beta\}$, which is true thanks to~(\ref{delay}).\\ \noindent\textbf{Claim 5.} If $z_{\delta+j} <-\frac{ \beta}{\beta+1}$ and $q_{\delta+j-1} \in I_{{-\beta}}$ then $j$ is odd, $- \frac{\beta}{\beta+1}- \frac{\lfloor \beta \rfloor}{\beta^\delta} \leqslant q_{\delta+j}<-\frac{\beta}{\beta+1}$, and $ q_{\delta+j+1}$ belongs to $I_{{-\beta}}$.\\ Proof of Claim 5: If $j$ is even then $z_{\delta+j} := -\beta q_{\delta+j-1}+ \frac{x_{\delta+j}}{\beta^\delta} > -\frac{ \beta}{\beta+1}+ \frac{x_{\delta+j}}{\beta^\delta} \geqslant -\frac{ \beta}{\beta+1}$, hence $j$ must be odd. Set $j=2k+1$. We have $y_{2k+1}=0$ and $q_{\delta+2k+1}=z_{\delta+2k+1}=-\beta q_{\delta+2k} - \frac{x_{\delta+2k+1}}{\beta^\delta}\geqslant - \frac{\beta}{\beta+1}- \frac{\lfloor \beta \rfloor}{\beta^\delta}$ since $q_{\delta+j-1} \in I_{{-\beta}}$. Then $z_{\delta+2k+2}=-\beta q_{\delta+2k+1} + \frac{x_{\delta+2k+2}}{\beta^\delta} >\frac{\beta^2}{\beta+1}$. Hence $y_{2k+2}=\lfloor \beta \rfloor$. By Claim 3, $q_{\delta+2k+2} >-\frac{ \beta}{\beta+1}$.\\ On the other hand $q_{\delta+2k+2}=z_{\delta+2k+2} - \lfloor \beta \rfloor= -\beta q_{\delta+2k+1} + \frac{x_{\delta+2k+2}}{\beta^\delta} - \lfloor \beta \rfloor= \beta^2q_{\delta+2k}+\frac{x_{\delta+2k+1}}{\beta^{\delta-1}}+ \frac{x_{\delta+2k+2}}{\beta^\delta} - \lfloor \beta \rfloor<\frac{\beta^2}{\beta+1} + \frac{\lfloor \beta \rfloor}{\beta^{\delta-1}}+ \frac{\lfloor \beta \rfloor}{\beta^\delta} - \lfloor \beta \rfloor \leqslant \frac{1}{\beta+1}$ by~(\ref{delay}), thus $q_{\delta+2k+2}$ belongs to $I_{-\beta}$.\\ By hypothesis, $q_\delta$ is in $I_{-\beta}$. By the previous claims, for every $k \geqslant 0$, $q_{\delta+2k}$ belongs to $I_{-\beta}$ and $- \frac{\beta}{\beta+1}- \frac{\lfloor \beta \rfloor}{\beta^\delta} \leqslant q_{\delta+2k+1}<\frac{1}{\beta+1}$. Thus, for every $j \geqslant 1$, $$\frac{x_1}{\beta}+ \cdots + \frac{x_{\delta+j}}{\beta^{\delta+j}}= \frac{y_1}{(-\beta)}+ \cdots + \frac{y_{j}}{(-\beta)^{j}}+\frac{q_{\delta+j}}{(-\beta)^{j}}$$ with $q_{\delta+j}$ bounded. Therefore the algorithm converges, and $$\sum_{j \geqslant 1} x_j \beta^{-j}=\sum_{j\geqslant 1} y_j (-\beta)^{-j}.$$ \end{proof} \subsection{Conversion in the Pisot case} We now show that, when $\beta$ is a Pisot number, there is a finite on-line transducer realizing the conversion. 
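Before turning to this construction, the on-line algorithm of the previous subsection can be transcribed directly; a floating-point Python sketch (exact arithmetic would be needed near boundary cases, and the output is a valid but in general non-admissible $(-\beta)$-representation):
\begin{verbatim}
import math

def online_neg_base(beta, xs, n_out):
    """On-line conversion of .x1 x2 ... (base beta, 0 <= x < 1/(beta+1))
    into n_out digits .y1 y2 ... in base -beta; xs is padded with 0s."""
    fb = math.floor(beta)
    bound = 1 - (beta - fb)                  # 1 - {beta}
    delta = 1                                # minimal delay, Eq. (delay)
    while fb / beta ** (delta - 1) + fb / beta ** delta > bound + 1e-12:
        delta += 1
    xs = list(xs) + [0] * (delta + n_out)
    q = sum(xs[j - 1] / beta ** j for j in range(1, delta + 1))  # transient
    ys = []
    for j in range(1, n_out + 1):            # synchronous phase
        z = -beta * q + (-1) ** j * xs[delta + j - 1] / beta ** delta
        lo, hi = -beta / (beta + 1), beta ** 2 / (beta + 1)
        y = fb if z > hi else 0 if z < lo else math.floor(z + beta / (beta + 1))
        ys.append(y)
        q = z - y                            # the remainder stays bounded
    return ys

beta = (1 + 5 ** 0.5) / 2                    # Golden Mean: delay delta = 4
xs = [0, 0, 1]                               # x = beta**-3 < 1/(beta+1)
ys = online_neg_base(beta, xs, 40)
x = sum(d * beta ** -i for i, d in enumerate(xs, 1))
assert abs(x - sum(d * (-beta) ** -i for i, d in enumerate(ys, 1))) < 1e-6
\end{verbatim}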
\begin{theorem} If $\beta$ is a Pisot number, the conversion from base $\beta$ to base ${-\beta}$ is realizable by a finite on-line transducer. \end{theorem} \begin{proof} Following the on-line algorithm of Section~\ref{algo} we construct an on-line transducer $\mathcal{C}$ as follows. The set of states is $Q=Q_t \cup Q_s$, with the set of transient states $Q_t=\{q_j \mid 0 \leqslant j \leqslant \delta-1\}$, and the set of synchronous states $Q_s=\{q_{\delta+j} \mid j \geqslant 0\}$. The initial state is $q_0$. For $1 \leqslant j \leqslant \delta$, transient edges are defined by $$q_{j-1} \stackrel{x_j|\varepsilon}{\longrightarrow}q_j.$$ Synchronous edges are defined by $$q_{\delta+j-1} \stackrel{x_{\delta+j}|y_{j}}{\longrightarrow}q_{\delta+j}$$ for $j \geqslant 1$. There is an infinite path in the automaton $\mathcal C$ starting in $q_0$ and labelled by $$q_0\stackrel{x_1|\varepsilon}{\longrightarrow}q_1 \cdots \stackrel{x_\delta|\varepsilon}{\longrightarrow}q_\delta \stackrel{x_{\delta+1}|y_1}{\longrightarrow}q_{\delta+1} \stackrel{x_{\delta+2}|y_2}{\longrightarrow}q_{\delta+2}\cdots $$ if, and only if, $\sum_{j \geqslant 1} x_j \beta^{-j}=\sum_{j \geqslant 1} y_j (-\beta)^{-j}$. \bigskip Let $M_\beta(X)$ be the minimal polynomial of $\beta$ and let $\beta=\beta_1, \beta_2,\ldots,\beta_d$ be the roots of $M_\beta$. Recall that $\mathbb Z[X]/(M_\beta(X)) \sim \mathbb Z[\beta] $ is a discrete lattice of rank $d$. Since $\beta$ is a Pisot number, $|\beta_i|<1$ for $2 \leqslant i \leqslant d$. For each $j \geqslant 1$, $q_{j}$ is an element of $\mathbb Z[\beta, \beta^{-1}]$. For $1 \leqslant i \leqslant d$ let $q_{j}(\beta_i)$ be the element of $\mathbb Z[\beta_i, \beta_i^{-1}]$ obtained by replacing $\beta$ by $\beta_i$ in $q_j$. Then $q_{j}=q_{j}(\beta)$. First of all, for every $j \geqslant 1$, $- \frac{\beta}{\beta+1}- \frac{\lfloor \beta \rfloor}{\beta^\delta} \leqslant q_{j}(\beta)<\frac{1}{\beta+1}$ by the on-line algorithm. Secondly, for every $j \geqslant 1$ and $2 \leqslant i \leqslant d$, \begin{equation}\label{bou} q_{\delta+j}(\beta_i)=-\beta_i q_{\delta+j-1}(\beta_i)+(-1)^j\frac{x_{\delta+j}}{\beta_i^{\delta}} -y_{j}. \end{equation} For $2 \leqslant i \leqslant d$ let $$M_i=\frac{\lfloor \beta \rfloor}{(1-|\beta_i|)}\big(1 +\frac{1}{|\beta_i|^{\delta}}\big).$$ Then, if $|q_{\delta+j-1}(\beta_i)| \leqslant M_i$, then $|q_{\delta+j}(\beta_i)| \leqslant M_i$ by (\ref{bou}). Now, for $0 \leqslant j \leqslant \delta$ and $2 \leqslant i \leqslant d$, $$|q_j(\beta_i)| < \lfloor \beta \rfloor (\frac{1}{|\beta_i|} + \cdots + \frac{1}{|\beta_i|^{\delta}})<M_i.$$ Define a norm on $\mathbb Z[X]/(M_\beta(X))$ by $\Vert q \Vert= \max_{1 \leqslant i \leqslant d} |q(\beta_i)|$. Thus the elements of $Q$ are all bounded in norm, and so $Q$ is finite. \end{proof} \bigskip In the particular case that $\beta^2=a \beta +1$ ($\beta$ is thus a Pisot number) we can construct directly a simpler finite left sequential transducer realizing the conversion. \begin{proposition} If $\beta^2=a \beta +1$, $a \geqslant 1$, then the conversion from base $\beta$ to base ${-\beta}$ is realizable by the finite left sequential transducer of Fig.~\ref{quad}. \end{proposition} \begin{proof} The left sequential transducer in Fig.~\ref{quad} converts a $\beta$-expansion of a real number $x$ in $[0,\beta)$ of the form $x_0 \raisebox{0.1ex}{\textbf{.}} x_1 x_2 \cdots$ into a $({-\beta})$-representation of $x$ of the form $y_0 \raisebox{0.1ex}{\textbf{.}} y_1 y_2 \cdots$. 
We take $0 \leqslant d \leqslant a$, $0 \leqslant c \leqslant a-1$, and $1 \leqslant e \leqslant a$. The transducer processes the digits two at a time: every edge reads a pair of consecutive input digits and outputs a pair of digits in base ${-\beta}$. Since the input is admissible, no factor $ae$, with $1 \leqslant e \leqslant a$, can occur. \begin{figure}[h] \begin{center} \VCDraw{% \begin{VCPicture}{(-1,-1)(4,2)} \State[0]{(-1,0)}{A} \State[\bar 1]{(4,0)}{B} \Initial[s]{A} \ArcL[.5]{B}{A}{e0|(e-1)0} \ArcL[.5]{A}{B}{ce|(c+1)(a-e)} \LoopW[.5]{A}{d0|d0} \LoopE[.5]{B}{00|0a,de|d(a-e)} % \end{VCPicture}% } \end{center} \caption{Finite left sequential transducer realizing conversion from base $\beta$ to base ${-\beta}$, $\beta^2=a \beta +1$}\label{quad} \end{figure} \end{proof}
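For concreteness, the transducer of Fig.~\ref{quad} can be exercised directly. The sketch below (in Python) encodes the two states $0$ and $\bar 1$ and the four families of edges; the convention that the input digits are read in pairs $(x_0x_1)(x_2x_3)\cdots$ is our reading of the two-letter edge labels in the figure.
\begin{verbatim}
def convert_quadratic(a, x):
    # Transducer of Fig. quad for beta^2 = a*beta + 1.
    # x = [x_0, x_1, x_2, ...]: admissible digits (no factor 'ae'),
    # read two at a time; returns the digits y_0, y_1, ... in base -beta.
    state, out = 0, []                 # states: 0 and bar-1 (encoded as 1)
    for i in range(0, len(x) - 1, 2):
        d0, d1 = x[i], x[i + 1]
        if state == 0:
            if d1 == 0:                # edge d0|d0 (loop on 0)
                out += [d0, 0]
            else:                      # edge ce|(c+1)(a-e), to bar-1
                out += [d0 + 1, a - d1]
                state = 1
        else:
            if d0 >= 1 and d1 == 0:    # edge e0|(e-1)0, back to 0
                out += [d0 - 1, 0]
                state = 0
            elif d0 == 0 and d1 == 0:  # edge 00|0a (loop on bar-1)
                out += [0, a]
            else:                      # edge de|d(a-e) (loop on bar-1)
                out += [d0, a - d1]
    return out
\end{verbatim}
For instance, with $a=1$ (the golden ratio) the input $0 \raisebox{0.1ex}{\textbf{.}} 1000\cdots$, {\it i.e.} $x=1/\beta$, is mapped to $1 \raisebox{0.1ex}{\textbf{.}} 0010101\cdots$, and indeed $1-\beta^{-3}-\beta^{-5}-\cdots=1/\beta$.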
\section{Introduction} The task of knowledge-aided dialogue response generation aims to find useful knowledge for an on-going conversation to help a chatbot generate more relevant and engaging responses. This is an important direction for dialogue response generation due to three advantages: (1) it allows a dialogue model to access a large pool of knowledge beyond local conversational contexts; (2) it enables a dialogue model to capture the dynamic nature of the world \cite{komeili2021internet}, where knowledge sources are frequently updated; (3) it may enhance the interpretability of dialogue models by examining retrieved knowledge and allows fine-grained interventions by replacing certain pieces of knowledge \cite{adiwardana2020towards,zhang2020dialogpt,roller2021recipes}. \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{material/kd-driven.png} \caption{Previous knowledge-aided dialogue response generation models (top), where related articles are given as input, versus our model (bottom), which can dynamically fetch knowledge using a standard search engine.} \label{fig:intro} \end{figure} Initial efforts \cite{ghazvininejad2018knowledge,liu2018knowledge,wu2019proactive,zhou2020kdconv,tian2020response,chen2020bridging,Kim2020Sequential} on knowledge-aided response generation assume that relevant knowledge (e.g., news or movie reviews) is given as input and design dialogue systems that can effectively utilize the provided knowledge. However, as shown in Fig. \ref{fig:intro}, this static setting conflicts with the dynamic nature of real-world scenarios. This gives rise to approaches that can retrieve and select information from a knowledge source for response generation \cite{zhao2020knowledge,dinan2018wizard,lee2019latent}. These projects assume searching over a static pool of articles (e.g., a Wikipedia dump). The queries and articles are represented as sparse vectors of $n$-grams \cite{dinan2018wizard} or even dense contextualized vectors \cite{lee2019latent} for retrieval. However, these approaches with a static pool of knowledge still fall short of taking the dynamic nature of knowledge into account. In this paper, we propose a dialogue model that can access the vast and dynamic knowledge from any search engine for response generation. We choose to work with search engines for two reasons. First, search engines (e.g., Google) store continually updated knowledge, which well captures the dynamic nature of our world. Second, we avoid the difficulty of building our own search engine over $n$-grams or dense contextualized vectors, since the ranking algorithms of well-established search engines are highly optimized. Fig. \ref{fig:pipeline} shows the framework of our model, consisting of a query producer and a response generator. The query producer generates queries from a dialogue context. Then, we send the queries to a search engine to obtain relevant articles. The response generator takes both the retrieved articles and the dialogue context to generate a response. As a key component of our model, the query producer determines the quality of the fetched knowledge, which further affects response generation. However, annotating gold queries is costly, because annotators usually need to examine multiple candidate queries by looking into their fetched articles. To obtain automatic training signals for our query producer, we design a function based on existing cheap noisy supervision for scoring queries.
It simply compares the retrieved articles of a query with the corresponding gold response to estimate the quality of the query. The scoring function does not require extra annotations, such as gold queries, making our model easily transferable to other domains and search engines. We use Wizard of Wikipedia (WoW, \cite{dinan2018wizard}), a popular benchmark on knowledge-aided response generation, for evaluating our model, taking the publicly free search engine from Wikipedia to retrieve knowledge instead of using the static knowledge provided by WoW. Experiments show that our query producer can achieve an R@$1$ (R@$5$) rate of 62.4\% (74.8\%) for retrieving the correct knowledge on the \emph{unseen} test set of WoW. Besides, our model generates better replies than a strong BART \cite{lewis2020bart} model and knowledge-aided baselines with heuristic algorithms for query acquisition. These results indicate the feasibility of using a search engine as the knowledge source for response generation. \footnote{Our source code is available at \url{https://github.com/DeepLearnXMU/SEA-DialogGen}.} \begin{figure*}[t] \centering \includegraphics[width=0.99\textwidth]{material/pipeline_1212.pdf} \caption{The training process using the example in Fig. \ref{fig:intro}, where solid lines and dashed lines indicate the forward and backward passes. \textbf{First} (\textcolor{green}{$\rightarrow$}), input utterances $\mathcal{D}_{<t}$ and (optional) query candidates $\mathcal{Q}$ are fed into the \colorbox{green!20}{query producer} to get the search query $\tilde{q}$, and then (\textcolor{gray}{$\rightarrow$}) relevant articles $\mathcal{K}^{\tilde{q}}$ are retrieved from a search engine with $\tilde{q}$. \textbf{Next} (\textcolor{blue}{$\rightarrow$}), the \colorbox{blue!20}{response generator} constructs the next dialogue turn $u_t$ given both $\mathcal{D}_{<t}$ and $\mathcal{K}^{\tilde{q}}$. \textbf{Finally} (\textcolor{orange}{$\rightarrow$}), supervision signals are calculated based on $\mathcal{K}^{\tilde{q}}$ and $u_t$ to update (\textcolor{green}{$\dashedrightarrow$}) the query producer. The response generator is updated (\textcolor{blue}{$\dashedrightarrow$}) based on a cross-entropy loss over $u_t$.} \label{fig:pipeline} \end{figure*} \section{Model} Formally, given a dialogue context of the prior $t-1$ turns $\mathcal{D}_{<t}=\{u_1, u_2,...,u_{t-1}\}$, our model first predicts a query $\tilde{q}$ (optionally from a set of query candidates $\mathcal{Q}=\{q^1,q^2,...,q^{|\mathcal{Q}|}\}$ selected by a heuristic algorithm), before sending it to a search engine to retrieve a list of articles $\mathcal{K}^{\tilde{q}}=\{k^{\tilde{q}}_1,k^{\tilde{q}}_2,...,k^{\tilde{q}}_{|\mathcal{K}^{\tilde{q}}|}\}$. With the retrieved knowledge $\mathcal{K}^{\tilde{q}}$ and the dialogue context $\mathcal{D}_{<t}$, a response $u_t$ is generated. Fig. \ref{fig:pipeline} visualizes the workflow of our model. In the rest of this section, we introduce the two key components: the query producer (\S \ref{sec:QP}) and the response generator (\S \ref{sec:RG}). \subsection{Query Production} \label{sec:QP} We explore two popular directions based on either extraction (\S \ref{sec:EP}) or generation (\S \ref{sec:GP}) to build our query producer. We further prune the query search space to reduce the number of possible queries and speed up training (\S \ref{sec:SSP}). We use cheap noisy supervision to train the query producers with Maximum Likelihood Estimation (MLE) based pre-training and reinforcement learning fine-tuning (\S \ref{sec:f_func}).
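In code, a single inference step of this pipeline amounts to the following sketch (in Python; the function names are ours, not those of the released implementation):
\begin{verbatim}
def respond(context, query_producer, search_engine, generator, top_k=5):
    # context: the utterances u_1, ..., u_{t-1}
    query = query_producer(context)           # \tilde{q} (Sec. 2.1)
    articles = search_engine(query)[:top_k]   # K^{\tilde{q}}, article summaries
    return generator(context, articles)       # u_t (Sec. 2.2)
\end{verbatim}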
\subsubsection{QP-Ext: Extraction-based Query Producer} \label{sec:EP} Our extraction-based query producer aims to extract text spans from the dialogue context $\mathcal{D}_{<t}$ as queries. We use a pre-trained language model (\texttt{PLM}) as its backbone and add a linear layer with the softmax activation (\texttt{MLP-Softmax}) as the output layer to predict the probability distribution $\boldsymbol{\mathrm{P}}$ over all query candidates $\mathcal{Q}=[q^1,\dots,q^{|\mathcal{Q}|}]$: \begin{equation} \begin{aligned} \boldsymbol{\mathrm{P}}&={\rm MLP\text{-}Softmax}([\boldsymbol{\mathrm{H}}^{q^1},...,\boldsymbol{\mathrm{H}}^{q^{|\mathcal{Q}|}}]), \\ \boldsymbol{\mathrm{H}}^{q^i}&={\rm MeanPooling}(\boldsymbol{\mathrm{H}}_{beg_i:end_i}), \\ \boldsymbol{\mathrm{H}}&={\rm PLM}(\mathcal{D}_{<t}) \text{,} \end{aligned} \label{eq:ext_score} \end{equation} where $\boldsymbol{\mathrm{H}}$ represents the contextualized embeddings produced by \texttt{PLM}, and $beg_i$ and $end_i$ are the start and end indices of the $i$-th candidate span in $\mathcal{D}_{<t}$. Each candidate query $q^i$ is a continuous span in a turn of $\mathcal{D}_{<t}$. We use ${\rm MeanPooling}$ over the contextualized embeddings of its tokens from $beg_i$ to $end_i$ to get its representation $\boldsymbol{\mathrm{H}}^{q^i}$. \subsubsection{QP-Gen: Generation-based Query Producer} \label{sec:GP} Different from the extraction-based model, this generation-based model adopts a seq2seq architecture to construct search queries from scratch. It can produce queries that are not contained in $\mathcal{D}_{<t}$, at the cost of a larger search space. We adopt a pre-trained encoder-decoder model (denoted as \texttt{PGM}) to generate queries in an auto-regressive manner, and beam search is adopted during decoding to produce multiple queries at the same time \cite{meng-etal-2017-deep}. The score $s_i$ for a query $q^i$ is the length-normalized sum of the log probabilities of its tokens: \begin{equation} \begin{aligned} s_i &= \frac{\sum_{j=1}^{|q^i|} \log {\rm MLP\text{-}Softmax}(\boldsymbol{\mathrm{H}}^{q^i}_j)}{\sqrt{|q^{i}|}} , \\ \boldsymbol{\mathrm{H}}^{q^i}_j &= {\rm PGM}(\mathcal{D}_{<t}, q^{i}_{<j}) \text{,} \end{aligned} \label{eq:gen_score} \end{equation} where $\boldsymbol{\mathrm{H}}^{q^i}_j$ is the decoder state at the $j$-th step for query $q^i$, and $\sqrt{|q^i|}$ is a length-based normalization term that eases the preference for short candidates. \subsubsection{Pruning Query Search Space} \label{sec:SSP} Querying a search engine can be time-consuming for training a query producer, as the training process can take hundreds of thousands of steps, and each query can take more than 0.1 seconds. A natural solution to this issue is to create an offline cache of articles for all possible queries before the actual training. However, both the extraction-based and the generation-based models have a large search space of candidate queries. Given a dialogue of $m$ turns with $n$ words per turn, there are $\mathcal{O}(m\cdot n^2)$ possible queries for the extraction-based model, while for the generation-based model the number grows exponentially with the average query length. We study different methods to prune the search space for query production, so that an offline cache can be efficiently established while the coverage of the pruned space remains large enough. In particular, we explore the two main directions in the task of keyword acquisition \cite{siddiqi2015keyword}.
\begin{itemize}[leftmargin=*] \item \emph{Dictionary-based}: Typical methods in this direction \cite{ferragina2010tagme} consider the overlap between each dialogue context and a predefined taxonomy as the search space, where the taxonomy is constructed from a large knowledge source (e.g., Wikipedia). \item \emph{Metric-based}: Approaches in this direction \cite{rose2010automatic,CAMPOS2020257} extract keywords from a dialogue context based on metric scores (e.g., TF-IDF) without using any vocabulary, and then merge adjacent keywords into larger spans by heuristic rules. \end{itemize} \subsubsection{Training with Cheap Noisy Supervision} \label{sec:f_func} We leverage a \emph{cheap noisy supervision} signal to train our query producers, which makes it easier to transfer to other domains and search engines compared with using human annotations \cite{komeili2021internet}. The whole training process contains \emph{pre-training with cross-entropy loss} and \emph{reinforcement learning fine-tuning}. The reinforcement learning fine-tuning directly uses the supervision signals as rewards, while the pre-training uses them to construct pseudo gold labels. \subparagraph{Cheap noisy supervision for query scoring} \label{sec:RD} We design a function $f$ that leverages the corresponding gold response $u$ as cheap noisy supervision to assign a score $s^q$ to each query $q$, indicating its quality. In particular, the function $f$ compares the top articles $\mathcal{K}^q=\{k_1^q,k_2^q,\dots\}$ retrieved by $q$ with the gold response $u$ to calculate the score $s^q$: \begin{equation} \label{eq:f_func} s^q = f(\mathcal{K}^q, u) \text{.} \end{equation} We consider this a type of \emph{cheap} supervision because the function $f$ \emph{does not} require extra annotations (e.g., annotations of gold queries). We study different approaches and choose the popular BM25 metric \cite{robertson1994some} to implement $f$. More specifically, it first calculates the score of each article by $s_i^q={\rm BM25}(k_i^q,u)$, before determining the overall score $s^q$ as the maximum among them: $s^q=\max(\{s_1^q,s_2^q,\dots\})$. We introduce two pre-processing methods to improve upon the vanilla BM25. The first method adopts coreference resolution, which finds the actual entity referred to by a pronoun. We then expand the response $u$ by concatenating it with the entity mentions referred to by its pronouns. This is important, as coreference occurs frequently in human conversations. The second method drops function words from both the articles $\mathcal{K}^q$ and the response $u$ before passing them to the noisy supervision function $f$. This makes $f$ focus more on content words. \subparagraph{Pre-training with noisy labels} At this stage, we take the query with the highest score $s^q$ from function $f$ (Eq. \ref{eq:f_func}) among the query candidates $\mathcal{Q}$ as the pseudo ground-truth to train both the extraction-based and the generation-based producers with the standard cross-entropy loss: \begin{align} \mathcal{L}_{ext.}^{pt}&=-\log P(\bar{q}|\mathcal{D}_{<t},\theta_{ext.}), \\ \mathcal{L}_{gen.}^{pt}&=-\sum_{i=1}^{|\bar{q}|} \log P(\bar{q}_i|\mathcal{D}_{<t},\bar{q}_{<i},\theta_{gen.}) \text{,} \label{eq:ce_gen} \end{align} where $\bar{q}$ denotes the pseudo ground-truth, $\mathcal{L}_{ext.}^{pt}$ and $\mathcal{L}_{gen.}^{pt}$ are the loss terms for the extraction-based and generation-based models, respectively, and $\theta_{ext.}$ and $\theta_{gen.}$ are the parameters of the models.
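A minimal sketch of the scoring function $f$ and of the pseudo-label selection (in Python, using the \texttt{rank\_bm25} package; the tiny stop-word list is a stand-in for a real function-word list, and the coreference-based expansion of $u$ is omitted):
\begin{verbatim}
from rank_bm25 import BM25Okapi

STOP = {"the", "a", "an", "of", "to", "and", "is", "it", "i"}

def tokenize(text):                  # drop function words
    return [w for w in text.lower().split() if w not in STOP]

def f(articles, gold_response):      # Eq. (3): s^q = max_i BM25(k_i^q, u)
    bm25 = BM25Okapi([tokenize(k) for k in articles])
    return max(bm25.get_scores(tokenize(gold_response)))

def pseudo_label(candidates, retrieve, gold_response):
    # \bar{q}: the candidate whose retrieved articles best match u
    return max(candidates, key=lambda q: f(retrieve(q), gold_response))
\end{verbatim}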
\subparagraph{Fine-tuning using reinforcement learning} At the fine-tuning stage, we adopt the REINFORCE algorithm \cite{williams1992simple} with the cheap noisy supervision signal from $f$ as the reward. To reduce variance, we subtract a baseline value, namely the reward from $f$ of the candidate query with the highest model score (calculated by Eq. \ref{eq:ext_score} or \ref{eq:gen_score}). As BM25 scores are not bounded, we further normalize them to reduce training variance. For each dialogue turn with multiple query candidates, we rescale the reward $r_i$ for the $i$-th candidate as $\frac{r_i-min}{max-min}-0.5$ with the minimum ($min$) and maximum ($max$) values within the candidates. The losses for both producers at the fine-tuning stage are defined as: \begin{align} \mathcal{L}^{ft} &= - \Delta(r_s,r_b) \log p_s \text{,} \end{align} where $p_s$ is the probability of a candidate query sampled from the model output distribution, $\Delta(r_s,r_b)=r_s-r_b$, and $r_s$ and $r_b$ are the rescaled rewards for the sampled and the baseline candidates, respectively. \subsection{Response Generation} \label{sec:RG} After retrieving relevant articles, the next step of our model is to generate a proper response using the articles and the dialogue context. We implement two response generators, Rank-Gen and Merge-Gen, based on two representative research directions. The two models leverage the retrieved articles with different strategies, allowing us to better study the robustness of our query producer. \subsubsection{Rank-Gen} \label{sec:RaGe} Rank-Gen uses an explicit ranker to choose one piece of knowledge from a set of articles \cite{ijcai2019-0706,NEURIPS2020_6b493230,zhao2020knowledge}. This direction has several benefits, such as better explainability and the ability to handle large knowledge sets. The ranker first selects a piece of knowledge $\tilde{k}$ from candidates $\mathcal{K}$, then the seq2seq-based generator predicts the response given the dialogue context $\mathcal{D}_{<t}$ and the selected knowledge $\tilde{k}$: \begin{equation} \begin{aligned} \tilde{k} &= {\rm argmax}_{k \in \mathcal{K}}{\rm Ranker}(\mathcal{D}_{<t}, k), \\ u_t &= {\rm Generator}(\mathcal{D}_{<t}, \tilde{k}). \end{aligned} \end{equation} We adopt reinforcement learning to jointly train the ranker and generator, where the ranker is guided by the signal from the generator via policy gradient, and the generator is trained with the cross-entropy loss, taking the knowledge $\tilde{k}_s$ sampled from the ranker: \begin{align} \mathcal{L}_{RG} &= \mathcal{L}_{rank} + \mathcal{L}_{gen},\\ \mathcal{L}_{rank} &= -(\mathcal{L}^{\tilde{k}_b}_{gen} - \mathcal{L}^{\tilde{k}_s}_{gen}) \log P(\tilde{k}_s|\mathcal{D}_{<t}, \mathcal{K}) , \\ \mathcal{L}_{gen} &= - \sum_{i=1}^{|u_t|} \log P(u_{t,i}|u_{t,<i}, \mathcal{D}_{<t}, \tilde{k}_s) \text{,} \end{align} where $\tilde{k}_b$ is the baseline knowledge to reduce variance, and $\mathcal{L}_{gen}^{x} (x\in \{\tilde{k}_b,\tilde{k}_s\})$ is the generation loss taking the corresponding knowledge as extra input.
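Note that the query producer's fine-tuning loss $\mathcal{L}^{ft}$ and the ranker loss $\mathcal{L}_{rank}$ share the same REINFORCE-with-baseline shape. A minimal PyTorch sketch of such an update, written for the query producer (variable names are ours):
\begin{verbatim}
import torch

def reinforce_loss(log_probs, rewards, sample_idx, baseline_idx):
    # log_probs: log-probabilities of the candidate queries (Eq. 1 or 2);
    # rewards:   raw BM25-based rewards from f for the same candidates.
    r = torch.tensor(rewards)
    r = (r - r.min()) / (r.max() - r.min()) - 0.5  # rescale to [-0.5, 0.5]
    advantage = r[sample_idx] - r[baseline_idx]    # baseline: greedy candidate
    return -advantage * log_probs[sample_idx]      # assumes non-constant rewards
\end{verbatim}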
Before joint training, we also introduce a warm-up stage following \cite{zhao2020knowledge}, where the ranker is trained with the cross-entropy loss on the pseudo ground-truth knowledge $\bar{k}$ that has the highest BM25 score among the knowledge candidates, and the generator is also trained with the cross-entropy loss taking $\bar{k}$ as the additional input: \begin{align} \bar{k} &= {\rm argmax}_{k \in \mathcal{K}}{\rm BM25}(\mathcal{D}_{<t}, k), \\ \mathcal{L}_{rank}^{pt} &= - \log P(\bar{k}|\mathcal{D}_{<t}, \mathcal{K}), \\ \mathcal{L}_{gen}^{pt} &= - \sum_{i=1}^{|u_t|} \log P(u_{t,i}|u_{t,<i}, \mathcal{D}_{<t}, \bar{k}) \text{.} \end{align} \subsubsection{Merge-Gen} \label{sec:Merge-Gen} Merge-Gen is our implementation of the FiD model \cite{izacard2021leveraging,shuster2021retrieval}, which follows another popular direction by consuming all input knowledge. In particular, each knowledge piece $k_i$ in the knowledge pool $\mathcal{K}$ is first paired with the dialogue context $\mathcal{D}_{<t}$. Then, these pairs $\{\mathcal{D}_{<t}, k_i\}_{k_i \in \mathcal{K}}$ are encoded into hidden states independently before being concatenated as inputs to the decoder for response generation: \begin{equation} \begin{aligned} u_t &= {\rm Decoder}([\boldsymbol{\mathrm{H}}_1;\boldsymbol{\mathrm{H}}_2;...;\boldsymbol{\mathrm{H}}_{|\mathcal{K}|}]), \\ \boldsymbol{\mathrm{H}}_i &= {\rm Encoder}(\mathcal{D}_{<t}, k_i) \text{.} \end{aligned} \end{equation} Compared with Rank-Gen, Merge-Gen does not risk having a ranker select the wrong knowledge. However, it is less interpretable, and its decoder is costly, since it must process the fused hidden states of all input knowledge. The training signal is based on the standard cross-entropy loss over the gold response $u_t$: \begin{equation} \mathcal{L}_{MG} = - \sum_{i=1}^{|u_t|} \log P(u_{t,i}|u_{t,<i}, \mathcal{D}_{<t}, \mathcal{K}) \text{.} \end{equation} \section{Experiment} We study the effectiveness of our model, especially the usefulness of knowledge retrieval using search queries for response generation. \subsection{Dataset} We choose the Wizard-of-Wikipedia (WoW, \cite{dinan2018wizard}) dataset for evaluation. The dataset is split into 18,430/967/968 dialogues for train/dev/test, respectively. For each dialogue, it includes the relevant knowledge (e.g., the title of the ground-truth article) annotated by humans. Therefore, we can use WoW to \emph{additionally} measure the performance of query production by comparing the titles of a retrieved article and the ground-truth article. We use its \emph{unseen} test set for evaluation. We remove the first turn of each dialogue, because the first turn reveals the title of the Wikipedia article for discussion, which would expose the main topic of the dialogue. \subparagraph{Search engine} We choose Wikipedia Search\footnote{\url{https://en.wikipedia.org/wiki/Special:Search}}, a free vertical search engine that returns the latest content of Wikipedia\footnote{Though Wikipedia seems to be static, it is in fact dynamically updated. According to \url{https://en.wikipedia.org/wiki/Wikipedia:Statistics}, it develops at a rate of around 2 edits every second, and the English Wikipedia alone gets 585 new articles per day.} given a user query. We retain the \textbf{top 5} retrieved Wikipedia articles for each query for response generation, extracting the summary of each article (the first paragraph of a Wikipedia article) as external knowledge.
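One concrete way to implement such a search-engine interface is the public MediaWiki search API. A sketch (in Python; the endpoint and parameters are standard MediaWiki, though the crawling setup used in our experiments may differ in detail):
\begin{verbatim}
import requests

API = "https://en.wikipedia.org/w/api.php"

def wiki_search(query, top_k=5):
    # Top-k article titles for a query via MediaWiki's search API.
    params = {"action": "query", "list": "search",
              "srsearch": query, "srlimit": top_k, "format": "json"}
    hits = requests.get(API, params=params, timeout=10).json()
    return [h["title"] for h in hits["query"]["search"]]
\end{verbatim}
The article summaries (first paragraphs) can then be fetched for each returned title, e.g., through the Wikimedia REST endpoint \texttt{/api/rest\_v1/page/summary/}.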
\subsection{Setting} We choose hyperparameters by following previous work or the results of development experiments. \subparagraph{Query production} The official ELECTRA-base \cite{Clark2020ELECTRA:}\footnote{\url{https://huggingface.co/google/electra-base-discriminator}} and BART-base \cite{lewis2020bart}\footnote{\url{https://huggingface.co/facebook/bart-base}} models are taken as the backbones for the extraction-based and generation-based query producers, respectively. We use AdamW \cite{loshchilov2018decoupled} with learning rate $10^{-5}$ and batch size 64 to optimize our models. The extraction-based producer is pre-trained for 1 epoch, while the generation-based producer is pre-trained for 5 epochs. To prune the search space of query production, we adopt two keyword acquisition tools: TagMe (dictionary-based) and YAKE! (metric-based). To evaluate the performance of query producers, we use recall, denoted as R@$x$ ($x\in \{1,3,5\}$), which checks whether the ground-truth knowledge is covered by the top $x$ retrieved candidates. \subparagraph{Response generation} Both Rank-Gen and Merge-Gen use a BART-base model for response generation. All models are trained using AdamW with learning rate $10^{-5}$ and batch size 64. The warm-up stage for the ranker in Rank-Gen takes 2 epochs. We perform early stopping based on the perplexity (PPL) on the development set. Following previous work, we adopt PPL and Unigram F1 to evaluate response generation. \subsection{Development Experiments} We explore the design choices for query space pruning (\S \ref{sec:SSP}) and the scoring function $f$ (Eq. \ref{eq:f_func}), as they determine the quality of query production, which in turn affects response generation. \label{sec:dev1} \begin{table}[t] \centering \begin{tabular}{llccc} \toprule Pruning & Query Scoring & R@1 & R@3 & R@5 \\ \midrule \multirow{5}{*}{\makecell[c]{TagMe}} & Random & 12.55 & 31.27 & 44.19 \\ & TF-IDF & 39.30 & 61.28 & 67.26 \\ & BM25$(q,u)$ & 36.09 & 58.73 & 65.89 \\ & BM25 & 53.36 & 65.25 & 69.46 \\ & BM25$_{++}$ & \textbf{60.59} & \textbf{69.81} & \textbf{72.49} \\ \midrule \multirow{5}{*}{\makecell[c]{YAKE!}} & Random & 14.21 & 33.96 & 46.00 \\ & TF-IDF & 36.92 & 58.63 & 64.78 \\ & BM25$(q,u)$ & 28.01 & 52.94 & 62.59 \\ & BM25 & 50.70 & 65.32 & 69.91 \\ & BM25$_{++}$ & 57.97 & 69.15 & 72.03 \\ \bottomrule \end{tabular} \caption{Development results of various search-space pruning methods and query scoring algorithms.} \label{tab:dev_algo} \end{table} \subparagraph{Different choices of space pruning and query scoring algorithms} Table \ref{tab:dev_algo} shows the development results of several popular query scoring algorithms with \emph{TagMe} and \emph{YAKE!} for search space pruning. Among the scoring algorithms: \begin{itemize}[leftmargin=*] \setlength{\itemsep}{0pt} \item \emph{Random}: It randomly picks a query from the candidate pool. \item \emph{TF-IDF}: It averages the TF-IDF scores of all words within a candidate query as its score. This algorithm \emph{only considers the query information}. \item \emph{BM25($q$,$u$)}: It measures the similarity between $q$ and $u$ using BM25, without considering the knowledge actually retrieved by $q$. \item \emph{BM25}: It is our proposed scoring function $f$ (Eq. \ref{eq:f_func}) with standard BM25. \item \emph{BM25$_{++}$}: It is also based on $f$ using BM25, but equipped with the pre-processing methods: coreference resolution and function-word dropping. \end{itemize} Regarding search-space pruning, the average candidate number and the ceiling performance (R@M in Fig.
\ref{fig:dev_knum}) using TagMe are 17.45 and 75.47\%, respectively, while the corresponding numbers are 21.64 and 75.04\% for YAKE!. \textbf{First}, the upper bound does not reach 100\% because: (1) the pruning method fails to keep some good search queries; (2) some dialogue turns (4.7\%) do not require any external knowledge; (3) speakers change the topic in some turns, which requires queries that are not contained in the dialogue context. Overall, the ceiling of around 75\% is decent. \textbf{Second}, most ranking algorithms using TagMe outperform their corresponding ones using YAKE!. Besides, TagMe reaches a higher upper bound (75.47\% vs 75.04\%) with fewer candidates (17.45 vs 21.64) than YAKE!. Based on these results, we choose TagMe for query space pruning in further experiments. Regarding query scoring, BM25$_{++}$ outperforms all other algorithms, demonstrating the effectiveness of coreference resolution and function-word dropping. BM25 is the second-best method, which shows that the retrieved articles carry useful information beyond the query--response pair. We choose BM25$_{++}$ for the subsequent experiments. \begin{figure}[t] \centering \pgfplotsset{every axis/.append style={ semithick}, } \begin{tikzpicture}[scale=0.8] \begin{axis}[ legend style={ at={(0.98,0.38)}, cells={anchor=west}}, xtick={1,2,3,4,5,6,7,8,9}, xticklabels={1,2,3,4,5,6,7,8,$\ge$9}, xlabel=Turns, ylabel=R@$x$(\%)] \addplot[sharp plot,color=red, mark=*] coordinates { (1,44.50)(2,57.71)(3,59.32)(4,60.06)(5,60.34)(6,60.54)(7,60.41)(8,60.54)(9,60.59) }; \addlegendentry{R@$1$} \addplot[dashed,color=blue, mark=triangle*] coordinates { (1,48.60)(2,65.53)(3,67.32)(4,68.95)(5,69.38)(6,69.63)(7,69.71)(8,69.84)(9,69.81) }; \addlegendentry{R@$3$} \addplot[dash dot,color=green, mark=square*] coordinates { (1,48.91)(2,67.14)(3,69.46)(4,71.37)(5,71.70)(6,72.28)(7,72.31)(8,72.44)(9,72.49) }; \addlegendentry{R@$5$} \addplot[dash dot dot,color=cyan, mark=diamond*] coordinates { (1,48.96)(2,67.88)(3,70.73)(4,73.33)(5,74.17)(6,74.99)(7,75.21)(8,75.44)(9,75.47) }; \addlegendentry{R@M} \end{axis} \end{tikzpicture} \caption{Development results of BM25$_{++}$ and the ceiling performance (R@M) given keyword candidates from the last $k$ turns.} \label{fig:dev_knum} \end{figure} \subparagraph{The number of dialogue turns for obtaining candidate queries} With the pruning method and query scoring algorithm determined, the next step is to choose the number ($k$) of turns used for obtaining candidate queries. Intuitively, considering more turns increases the ceiling performance of knowledge retrieval, at the cost of extra noise for the query scoring algorithm. As shown in Fig. \ref{fig:dev_knum}, the performance of BM25$_{++}$ consistently improves as $k$ increases, indicating that the benefit of considering a longer dialogue context for candidate queries exceeds the cost (extra noise). Therefore, we consider all turns for the remaining experiments. \begin{table*}[t] \setlength\tabcolsep{4pt} \centering \begin{tabular}{lc|ccc|cc|cc} \toprule \multirow{2}{*}{\makecell[c]{Query/KN \\ Production}} & \multirow{2}{*}{\makecell[c]{Avg. Num. \\ Querying}} & \multicolumn{3}{c|}{Query Ranking} & \multicolumn{2}{c|}{Rank-Gen} & \multicolumn{2}{c}{Merge-Gen} \\ & & R@1 & R@3 & R@5 & PPL$\downarrow$ & Uni. F1 & PPL$\downarrow$ & Uni.
F1 \\ \midrule None & -- & -- & -- & -- & 25.26 & 16.53 & 25.13 & 16.64 \\ \hline Last 2 turns & 8.29 & -- & -- & -- & 22.77 & 17.43 & 20.04 & 17.55 \\ Last 4 turns & 13.38 & -- & -- & -- & 22.86 & 17.38 & 19.89 & 17.72 \\ All history & 17.45 & -- & -- & -- & 23.03 & 17.32 & \textbf{19.79} & 17.71 \\ \hline Concat & \multirow{1}{*}{\textbf{1}} & \;\;4.15 & \;\;5.45 & \;\;5.88 & 24.79 & 16.76 & 24.57 & 16.51 \\ TF-IDF & \multirow{1}{*}{\textbf{1}} & 43.41 & 61.63 & 66.65 & 22.86 & 17.28 & 21.53 & 17.64 \\ QP-Ext & \textbf{1} & \textbf{62.41} & \textbf{72.91} & \textbf{74.87} & \textbf{21.60} & \textbf{17.81} & 20.20 & \textbf{18.15} \\ QP-Gen & \textbf{1} & 56.77 & 66.08 & 68.22 & 21.65 & 17.51 & 20.69 & 17.95 \\ \midrule \midrule GENRE \cite{decao2020multilingual} & -- & -- & -- & -- & 22.59 & 17.60 & 20.24 & 18.15 \\ Gold KN & -- & -- & -- & -- & 18.83 & 18.63 & 18.99 & 18.42 \\ \bottomrule \end{tabular} \caption{Main results of query production and response generation on the WoW unseen test set, where ``PPL$\downarrow$'' and ``Uni. F1'' indicate perplexity and unigram F1, respectively. \emph{QP-Ext} is significantly better than \emph{TF-IDF} at $p<0.01$ across all aspects, and \emph{QP-Gen} is better than \emph{TF-IDF} at $p<0.05$.} \label{tab:main_exp} \end{table*} \subsection{Main Results} Table \ref{tab:main_exp} shows the main test results, including the performance on search query production and response generation. We compare our models with typical baselines using different query acquisition techniques: (1) no external knowledge is used (first group); (2) fetching knowledge with all search queries in the last $k$ turns that are not filtered out by query pruning\footnote{They are based on the heuristic that people tend to keep talking about the topics just mentioned in the last few turns.} (second group); (3) using search queries produced by different techniques (third group); (4) several upper bounds to pinpoint the current bottleneck of our model (last group). We design the baselines in the second group to better highlight the merit of our model, because the queries in later turns are more likely to retrieve the gold knowledge than those in earlier turns, as people tend to keep discussing the topics they have just mentioned. For the third group, \emph{Concat} uses the concatenated dialogue context as the query, \emph{TF-IDF} uses the TF-IDF score to pick the query, and both \emph{QP-Ext} (\S \ref{sec:EP}) and \emph{QP-Gen} (\S \ref{sec:GP}) correspond to our query production methods. We can draw the following conclusions: \textbf{First}, leveraging external knowledge is always helpful for dialogue response generation (compare the first line with the others). \textbf{Second}, Merge-Gen based models tend to perform better than the corresponding Rank-Gen based ones, because Merge-Gen avoids the error propagation from the ranker and uses multiple pieces of knowledge. Besides, for the baselines using multiple queries (the second group), Rank-Gen and Merge-Gen show opposite trends when the number of turns for obtaining queries increases, with Merge-Gen being consistently better. Both results confirm the advantage of Merge-Gen over Rank-Gen in two aspects: Merge-Gen can utilize multiple pieces of knowledge at the same time, and it avoids the error propagation caused by an explicit ranker.
\textbf{Third}, using more queries for knowledge fetching can generally improve the overall performance, but the time for knowledge gathering (querying a search engine and retrieving pages) also grows linearly with the number of queries. For instance, the querying time can exceed 2 seconds when using 10 queries. \textbf{Lastly}, our models using either of the proposed query producers perform better than all baselines in most situations, indicating that a query producer trained with cheap noisy supervision signals can retrieve useful content for response generation. The \emph{Concat} baseline fails to get any articles in many cases, because its queries are very long. The baselines using multiple queries (the second group) show slightly better perplexity values than our models when combined with Merge-Gen. However, their knowledge fetching process is at least 8 times slower than ours, \emph{causing delays} in response time and being \emph{computationally costly}. Besides, our models still manage to get better Uni. F1 scores with fewer search-engine queries. We also observe a positive correlation between query ranking performance and response generation performance, which again validates the necessity of studying query production. We also compare several \emph{upper-bound} systems (the last group) to pinpoint the current bottleneck of our model. The first system, \emph{GENRE} \cite{decao2020multilingual}, is trained on the WoW dataset to generate the title of the corresponding Wikipedia article given a dialogue context; thus, it utilizes the human-annotated labels during training. Besides, it adopts constrained decoding for inference so that all generation outputs are valid Wikipedia titles. For \emph{GENRE}, we use the proposed ``constrained beam search'' to get 5 distinct titles, which are then mapped to 5 different passages (the same number as for the other systems) as the knowledge for response generation. It yields a recall of 67.55\% on gold articles, while the R@1 of \emph{QP-Ext} is 62.41\%.\footnote{This is roughly comparable, as we use the top 5 articles for each query.} \emph{GENRE} can also be considered a purely retrieval-based model that directly gets articles from a static Wikipedia dump. Although \emph{GENRE} is trained with human annotations on knowledge selection, \emph{QP-Ext} is comparable on both knowledge retrieval and the final response generation, indicating the potential of search-engine-based approaches and of our training method with cheap noisy supervision. The other upper-bound system, \emph{Gold KN}, takes the summary (the first paragraph) of the gold article for both training and testing. We observe significant performance gaps on response generation when either Rank-Gen or Merge-Gen is used. This indicates the potential for further improving the accuracy of query production, which in turn boosts the recall of knowledge retrieval.
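For reference, the Uni.\ F1 metric reported above is the token-level F1 between a generated response and the reference. A minimal sketch (in Python; normalization details such as lower-casing and stop-word handling vary across implementations):
\begin{verbatim}
from collections import Counter

def unigram_f1(hypothesis, reference):
    h = Counter(hypothesis.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((h & r).values())       # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(h.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)
\end{verbatim}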
\subsection{Analysis} \begin{table}[t] \centering \begin{tabular}{lccc} \toprule System & R@1 & R@3 & R@5 \\ \midrule \makecell[l]{QP-Ext} & 62.41 & 72.91 & 74.87 \\ \makecell[l]{~~~~w/o pre-train} & 61.97 & 71.84 & 73.77 \\ \makecell[l]{~~~~w/o fine-tune} & 61.36 & 73.08 & 74.94 \\ \makecell[l]{~~~~w/o prune search space} & 60.65 & 67.68 & 69.97 \\ \hline \makecell[l]{QP-Gen} & 56.77 & 66.08 & 68.22 \\ \makecell[l]{~~~~w/o pre-train} & 38.14 & 54.91 & 59.83 \\ \makecell[l]{~~~~w/o fine-tune} & 51.91 & 65.82 & 69.75 \\ \makecell[l]{~~~~w/ prune search space} & 60.67 & 71.55 & 73.52 \\ \bottomrule \end{tabular} \caption{Ablation study on both extraction-based (QP-Ext) and generation-based (QP-Gen) query producers.} \label{tab:ablation} \end{table} \subparagraph{Ablation study} Table \ref{tab:ablation} shows the ablation study of our query producers. We can draw the following conclusions. \textbf{First}, both pre-training with the cross-entropy loss and reinforcement learning fine-tuning are helpful for the query producers. For the extraction-based approach, pre-training (w/o fine-tune) mainly helps the performance on R@3 and R@5, while fine-tuning (w/o pre-train) mostly helps the performance on R@1. In general, fine-tuning provides more robust performance than pre-training, as it can better handle the noisy supervision. For the generation-based method, both training stages are crucial, probably due to its large search space. In this case, pre-training alone (w/o fine-tune) outperforms the fine-tuning-alone counterpart (w/o pre-train), because RL-based fine-tuning from scratch is slow to converge \cite{paulus2018a,wang2018reinforced}. \textbf{Second}, search-space pruning brings significant performance gains for both the extraction-based and the generation-based query producers, showing the importance of limiting the search space to high-quality candidate queries. Notice that a generation-based query producer degenerates into a ranking model when search-space pruning is used, because its search space is then limited to the provided query candidates. Though effective, it loses the ability to generate queries not contained in the dialogue context, which has great potential for further improvement. We leave this as future work. \begin{figure} \centering \begin{tikzpicture}[scale=0.74] \begin{axis}[ legend style={ cells={anchor=west}}, xlabel=Turns, ylabel=R@1(\%)] \addplot[dash dot dot,color=blue, mark=triangle*] coordinates { (2,65.75)(3,42.76)(4,48.04)(5,40.78)(6,42.57) (7,38.37)(8,39.45)(9,28.97)(10,29.80) }; \addlegendentry{TF-IDF} \addplot[sharp plot,color=red, mark=*] coordinates { (2,77.29)(3,70.39)(4,70.89)(5,64.69)(6,62.10) (7,55.04)(8,53.90)(9,44.89)(10,43.91) }; \addlegendentry{Ext. based} \addplot[dash dot,color=green, mark=square*] coordinates { (2,70.89)(3,62.28)(4,66.40)(5,55.92)(6,57.81) (7,49.56)(8,50.39)(9,42.24)(10,38.82) }; \addlegendentry{Gen. based} \end{axis} \end{tikzpicture} \caption{Performance of different query producers at different dialogue turns.} \label{fig:conv_flow} \end{figure} \subparagraph{Performance at different turns} We further compare the R@$1$ of the query producers at various turns. Generally, the later turns yield more query candidates than the first ones, causing larger search spaces. As shown in Fig. \ref{fig:conv_flow}, the performance of all producers drops rapidly as a dialogue continues. However, ours still achieve R@1 rates of around 45\% and 40\% with only noisy supervision, which are about 15 and 10 percentage points higher than TF-IDF, respectively.
\subsection{Human Evaluation} \begin{table} \centering \setlength{\tabcolsep}{4pt} \begin{tabular}{lc|c|cc} \toprule \multirow{2}{*}{\makecell[c]{Query Prod.}} & Query & Article & \multicolumn{2}{c}{Response} \\ & Sound. & KN Cov. & Natural. & Know. \\ \midrule None & -- & -- & 2.39 & 1.89 \\ TF-IDF & 2.37 & 2.37 & 2.41 & 2.14 \\ QP-Ext & \textbf{2.79} & \textbf{2.76} & \textbf{2.65} & 2.39 \\ QP-Gen & 2.65 & 2.59 & 2.58 & \textbf{2.44} \\ \midrule \midrule GENRE \cite{decao2020multilingual} & 2.78 & 2.85 & 2.57 & 2.67 \\ \bottomrule \end{tabular} \caption{Human evaluation results, where \emph{Sound.}, \emph{KN Cov.}, \emph{Natural.}, and \emph{Know.} indicate the soundness, knowledge coverage, naturalness, and knowledgeable aspects, respectively. Both \emph{QP-Ext} and \emph{QP-Gen} are significantly better than TF-IDF at $p<0.01$ across all aspects.} \label{tab:human_eval} \end{table} We conduct a human evaluation on 100 test samples, and we choose Merge-Gen as the response generator because it shows better performance than Rank-Gen on the automatic metrics. The models are rated regarding query production, the quality of the retrieved articles, and response generation. For query production, we measure \textbf{Soundness}, which means whether the query is sound by itself.\footnote{Sometimes a sound query may not retrieve good knowledge due to search-engine mistakes.} For the quality of the retrieved articles, we evaluate their \textbf{Coverage}, meaning how relevant they are to the current dialogue context. For response generation, we follow previous work to measure \textbf{Naturalness}, indicating how fluent and relevant a response is, and \textbf{Knowledgeable}, representing how much knowledge is used in a response. We ask 3 annotators capable of fluent English communication to score each aspect on a 3-point scale\footnote{We attach detailed guidelines in the Appendix.}, and we average their scores as the final score for the aspect. The inter-annotator agreement (Fleiss' $\kappa$) is 0.5461, indicating a moderate level of agreement. As shown in Table \ref{tab:human_eval}, both the \emph{TF-IDF} baseline and our models (\emph{QP-Ext} and \emph{QP-Gen}) improve over the \emph{None} baseline regarding the ``knowledgeable'' aspect of response generation, which indicates that using retrieved knowledge is helpful. Comparatively, our \emph{QP-Ext} model significantly improves over \emph{TF-IDF} ($+$0.42 for ``query soundness'' and $+$0.39 for ``knowledge coverage'' on the 3-point scale) regarding the quality of the obtained queries and articles. This advantage also transfers to response generation ($+$0.24 for ``naturalness'' and $+$0.25 for ``knowledgeable'' on the 3-point scale), which again indicates the positive correlation between query production and the final response generation. The improvements (especially for \emph{TF-IDF}) regarding ``naturalness'' are relatively small, because \emph{None} can already generate fluent replies aided by large-scale pre-training on text generation. Note that general replies such as ``\emph{Sorry, I don't know}'' are considered natural in certain contexts, such as ``\emph{Do you know Mike Tyson?}''. We also compare with the upper-bound system, \emph{GENRE} \cite{decao2020multilingual}, for a more comprehensive understanding of our models. Generally, our \emph{QP-Ext} shows comparable ``soundness'' but a lower ``knowledge coverage'' score, which also leads to an inferior ``knowledgeable'' score.
This is because \emph{GENRE} can provide knowledge across various topics by directly producing the passage titles, while our \emph{QP-Ext} retrieves knowledge on highly similar topics using only one query. This indicates that using more queries and smarter knowledge pre-filtering can further benefit query production. We also notice that \emph{GENRE} obtains a lower ``naturalness'' score than our \emph{QP-Ext}. This may be because the response generator enjoys the high-quality knowledge from \emph{GENRE} but tends to ignore the dialogue context information. This shows that the current model still struggles to balance the ``naturalness'' and ``knowledgeable'' aspects. \begin{table*}[t!] \scriptsize \centering \begin{tabularx}{0.98\textwidth}{rX} \textbf{\#1} \\ \midrule Context & \textbf{A:} One of my favorite bands despite usual protests is Nickelback. How about you or who is one of your favorites? \\ & \textbf{B:} I like Nickelback too. They are my favorite Canadian rock band and I've liked them since 1995 they started. \\ & \textbf{A:} Speaking of Canadian rock bands, my absolute all time favorite band is Rush. I have every album by them! \\ Ref response & Yes Rush! They formed in 1968 and took their first United States tour in 1974. \\ \midrule \textbf{None} & Rush is one of my favorite bands too. They are from Hanna, Alberta. \\ \midrule \textbf{TF-IDF} \\ \hdashline Search query & Nickelback \\ Retrieved passages & Nickelback; Rockstar (Nickelback song); Nickelback discography; Chad Kroeger; Dark Horse (Nickelback album) \\ Knowledge fragment & ...Nickelback is one of the most commercially successful Canadian rock bands, having sold more than 50 million albums worldwide... \\ Rank-Gen & Rush is one of my favorite bands too. They have sold more than 50 million albums worldwide. \\ Merge-Gen & Rush is one of my all time favorite bands. They have sold more than 50 million albums worldwide. \\ \midrule \textbf{QP-Ext} \\ \hdashline Search query & Rush \\ Retrieved passages & Rush (Rush album); Into the Rush; Rush (band) \\ Knowledge fragment & ...Rush was a Canadian rock band formed in Toronto in 1968, consisting of Geddy Lee (bass, vocals, keyboards, composer), Alex Lifeson (guitars, composer), and Neil Peart (drums, percussion, lyricist)... \\ Rank-Gen & Rush is one of my favorite bands too. They were formed in 1968 in Toronto. \\ Merge-Gen & Rush was formed in 1968 by Geddy Lee, Alex Lifeson, and Neil Peart. \\ \end{tabularx} \caption{The first example (1/2) regarding query production and response generation from the WoW unseen test set.} \label{tab:example1} \end{table*} \begin{table*}[t!] \scriptsize \centering \begin{tabularx}{0.98\textwidth}{rX} \textbf{\#2} \\ \midrule Context & \textbf{A:} Do you like Elvis Presley? he was a famous american singer \\ & \textbf{B:} He is pretty good, I am from Nashville, So i've even been to graceland. \\ Ref response & Presley was born in Tupelo, Mississippi, and then to memphis \\ \midrule \textbf{None} & He was born and raised in Nashville, TN \\ \midrule \textbf{TF-IDF} \\ \hdashline Search query & graceland \\ Retrieved passages & Graceland; Graceland (album); Graceland (TV series); Graceland University \\ Knowledge fragment & ...Graceland is located at 3764 Elvis Presley Boulevard in the vast White haven community, about nine miles (14 kilometers) from Downtown and less than four miles (6.4 km) north of the Mississippi border. It was opened to the public as a museum on June 7, 1982. The site was listed in...
\\ Rank-Gen & Yes, it was opened to the public as a museum in 1982 \\ Merge-Gen & I've been to Graceland a few times. It was opened to the public as a museum in 1982 \\ \midrule \textbf{QP-Ext} \\ \hdashline Search query & Elvis Presley \\ Retrieved passages & Elvis Presley; Elvis Presley (album); Priscilla Presley; Elvis Presley albums discography; Lisa Marie Presley \\ Knowledge fragment & ...Presley was born in Tupelo, Mississippi, and relocated to Memphis, Tennessee with his family when he was 13... \\ Rank-Gen & He was born in Tupelo, Mississippi and moved to Memphis when he was 13 \\ Merge-Gen & He was born in nashville and moved to Memphis when he was 13 \\ \end{tabularx} \caption{The second example (2/2) regarding query production and response generation from the WoW unseen test set.} \label{tab:example2} \end{table*} \subsection{Case Study} We further present several typical examples from our human study to help visualize the benefits of our query producer for response generation, as shown in Tables \ref{tab:example1} and \ref{tab:example2}. We compare our models with a vanilla BART and with models equipped with a TF-IDF query producer, because all these models are trained without annotated queries. The \emph{None} baseline suffers from the hallucination problem: its generated responses in these cases conflict with the facts. For example, Elvis Presley was actually born in Tupelo, Mississippi, instead of Nashville, TN. The TF-IDF baseline generates responses that tally with the facts; however, it fails to produce correct queries, as it is difficult for TF-IDF to exploit the rich contextual information and recognize the most relevant topic. Our models using knowledge from QP-Ext correctly produce the search queries and generate the most satisfying responses, thanks to our cheaply supervised training framework. However, we notice that the retrieved passages may not cover the fact used in the gold reference (e.g., ``Rush took their first United States tour in 1974'' in the first example), and the model may still suffer from hallucination even when given the necessary facts (e.g., ``Presley was born in nashville'' in the second example). These problems need further exploration in future work. \section{Related Work} \subparagraph{Knowledge-aided dialogue response generation} How to properly incorporate external knowledge has become an important topic in dialogue response generation. Regarding the type of adopted knowledge, previous work has explored using passages \cite{dinan2018wizard,zhou2018dataset,gopalakrishnan2019topical}, knowledge graphs \cite{moon2019opendialkg,zhou2020kdconv}, commonsense knowledge \cite{zhou2018commonsense,zhang2020grounded,wu2020diverse}, and persona \cite{li2016persona,zhang2018personalizing,madotto2019personalizing} as the external knowledge, and recent efforts \cite{moghe2018towards,wu2021more} even propose integrating multiple sources of knowledge. These efforts mainly focus on the ``knowledge-centric'' scenario, where each dialogue mainly discusses the corresponding knowledge (e.g., a short passage), and thus simply using the given knowledge already achieves high coverage of the dialogue. However, a dialogue model may need to dynamically fetch relevant knowledge in practical scenarios (e.g., open-domain chitchat), as the discussed topic may change over time. Later work \cite{ijcai2019-0706,zhao2020knowledge,shuster2021retrieval,chi2021neural,saha2021proto} constructs retrieval-based generative models that adopt a retriever (e.g.,
DPR \cite{karpukhin2020dense}) to obtain relevant knowledge from a static knowledge pool. Though they can potentially access more knowledge, the knowledge pool remains fixed. To tackle this issue, search-engine-aided dialogue response generation has recently been proposed, so that the vast knowledge from the internet can be leveraged. This is an important yet underexplored research direction, whose key challenge is query production: generating search queries to interact with a search engine. There is one related parallel preprint \cite{komeili2021internet}, which explores using Bing\footnote{\url{https://www.bing.com/}} as the knowledge source for dialogue response generation. We share a similar motivation of using a search engine as the knowledge source, but we differ in how to train the query producer, a \emph{key} module for interacting with the search engine. \cite{komeili2021internet} manually annotate 48K queries to train their query generator. Thus, the supervision signals are expensive to obtain and may not be transferable to other domains and search engines. On the other hand, our model is search-engine agnostic, as we design an algorithm that obtains annotation-free and effective signals for training the query producer. In addition, \cite{komeili2021internet} ignores the fact that Bing (like other search engines) is frequently updated, so the same query may not return the same articles after some time. Comparatively, it is much less costly to update our model to handle web content updates. We also notice several very recent preprints \cite{lazaridou2022internet,menick2022teaching} that propose to leverage the internet for language model pretraining or question answering. Our work shares a similar motivation of leveraging the internet to solve a practical task. However, all these efforts simply annotate queries to train their query producers, while we study leveraging cheap noisy supervision. \subparagraph{Keyword production} As a longstanding task, keyword production was initially proposed to automatically create keywords for articles. Classic techniques (e.g., TF-IDF and TextRank) have been widely used for decades. Initial neural keyword producers \cite{zhang2016keyphrase,luan2017scientific} are extraction-based, extracting keywords from the inputs. Recently, generation-based methods \cite{meng-etal-2017-deep,chen2018keyphrase,Chen_Gao_Zhang_King_Lyu_2019,meng-etal-2021-empirical,xie2022wr} using a seq2seq model have been gaining popularity. We produce keywords as queries to a search engine and study both extraction-based and generation-based methods on our task in the conversational domain. \subparagraph{Cheaply Supervised Training} Training a model with supervision from human annotations is the common approach to building a model for a task. However, it is costly to collect enough annotations to train a strong model, and the resulting models can still struggle in cross-domain settings. Thus, many researchers have explored self-supervised pretraining losses as cheap supervision for exploiting common knowledge for various downstream tasks \cite{lewis2020bart,Clark2020ELECTRA:,devlin2019bert}. Some work \cite{su2021enhanced,jiang2021exploring} also adopts self-training or reinforcement learning free of human annotations to enhance model abilities. In this work, we are the first to design cheap noisy supervision for conversational query producers in the knowledge-aided dialogue response generation task.
\section{Conclusion} We have introduced a model that leverages a general search engine for knowledge-aided response generation. To effectively interact with the search engine, it adopts a query producer to generate search queries. We design cheap noisy supervision signals to train the query producer, so that no extra human annotation is needed, making our model easily transferable to other search engines and domains. Experimental results under both automatic metrics and human judgment show the superiority of our model over a pre-trained BART model and other baselines. \section*{Acknowledgments} This work was supported by National Natural Science Foundation of China (No. 62276219), Natural Science Foundation of Fujian Province of China (No. 2020J06001), and Youth Innovation Fund of Xiamen (No. 3502Z20206059).
\section{Introduction} Solar cycles are asymmetric with respect to their maxima, the rise time being shorter than the decay time. While the cycle amplitude (peak value) and the duration have cycle-to-cycle variations, we find some correlations among different quantities connected with the solar cycle. It has been realized since 1935 that stronger cycles take less time to rise than weaker ones (Waldmeier, 1935). This anti-correlation between the rise times and the peak values of solar cycles is popularly known as the \we. \citet{KarakChou11} have defined this aspect of the \we\ as WE1, whereas the correlation between the rise rates and the peak values is called WE2 (see also \opencite{CS08}). Although WE2 is a more robust feature of the solar cycle, \citet{KarakChou11} have shown that both WE1 and WE2 exist in many proxies of the solar cycle. WE2 provides a valuable precursor for predicting solar cycles because one can predict the strength of a cycle once it has just started (see \opencite{Lantos00,Kane08}). The declining phase of the cycle also provides important clues for understanding long-term variations. We find that stronger cycles not only rise rapidly but also fall rapidly (shorter decay time). This results in a good correlation between the decay rate and the amplitude of the same cycle. However, defining the decay rate differently, \citet{CS08} did not find a significant correlation between the decay rate and the amplitude. Furthermore, we find a strong correlation between the decay rate of the current cycle and the amplitude of the next cycle, \blue{which was also found by Yoshida and Yamagishi (2010).} The decay time, however, is found to have no correlation with the amplitude of the same cycle. Another important observed feature is that the amplitude of a cycle is inversely correlated with the period of the previous cycle \blue{\citep{Hathaway02, Solanki02, Ogurtsov11}}. These two correlations again provide promising precursors to predict the strength of the future cycle \citep{Solanki02, Watari08}. Apart from establishing these correlations from observational data, we also attempt to provide theoretical explanations for them. A dynamo mechanism operating in the solar convection zone is believed to be responsible for producing the solar cycle. It is generally accepted that the strong toroidal field (responsible for the formation of bipolar sunspots) is produced from the poloidal field by differential rotation in the solar convection zone \citep{Parker55a}. This is the first part of solar dynamo theory. Due to magnetic buoyancy \citep{Parker55b}, flux tubes of the toroidal field erupt through the surface to form bipolar sunspot regions. These bipolar sunspots acquire tilts due to the action of the Coriolis force during their journey through the convection zone, giving rise to Joy's law \citep{Dsilva93}. To complete the dynamo action, the toroidal field has to be converted back into the poloidal field. One possible mechanism for generating the poloidal field is the Babcock--Leighton (B-L) process \citep{Bab61,Leighton69}, for which we now have strong observational support \citep{DasiEspuig10, Kitchatinov11a, Munoz13}. In this process, the fluxes of tilted bipolar active regions spread over the solar surface through different processes (diffusion, meridional circulation, differential rotation) to produce the poloidal field.
A model of the solar dynamo that includes a coherent meridional circulation and this B-L mechanism for the generation of the poloidal field is called the flux transport dynamo model. This model was proposed in the 1990s \citep{WSN91,Durney95,CSD95} and has been successful in reproducing many observed regular as well as irregular features of the solar cycle \citep{CD2000,Kuker01, Nandy02, CNC04, Guerrero04, CK09, Hotta10, KarakChou13}. Recently \citet{Charbonneau10}, \citet{Chou11}, and \citet{Karakreview14} have reviewed this dynamo model. An important ingredient in the flux transport dynamo is the \mc, which is not completely constrained either from observations or from theoretical studies. Until recently not much was known about the detailed structure of the meridional circulation in the convection zone \citep{Zhao13, Schad13}. Therefore, most of the dynamo models use a single-cell \mc\ in each hemisphere. However, very recently \citet{HKC14} have shown that a complicated multi-cellular \mc\ also retains many of the attractive features of the flux transport dynamo model if there is an equator-ward propagating meridional circulation near the bottom of the convection zone or if there is an equator-ward turbulent pumping \citep{Guerrero08}. \blue{While most of the calculations in this paper are done for a single-cell meridional circulation, we show that the results remain qualitatively similar for more complicated meridional circulations.} Since we want to do a theoretical study of the irregularities in the solar cycle, let us consider the sources of irregularities in the \ftdm\ that make different solar cycles unequal. At present we know two major sources: (i) variations in the poloidal field generation due to fluctuations in the B-L process \citep{CCJ07,GoelChou09} and (ii) variations in the meridional circulation \citep{Karak10, KarakChou11}. Direct observations of the polar field during the last three cycles \citep{SCK05}, as well as its proxies such as the polar faculae and the active network index available for about the last 100 years \citep{Munoz13,Priyal14}, indicate large cycle-to-cycle variations of the polar field. The poloidal field generation mechanism mainly depends on the tilts of active regions, their magnetic fluxes, and the meridional circulation, all of which have temporal variations. In particular, the scatter of tilt angles around the mean, caused by the effect of convective turbulence on rising flux tubes \citep{Longcope02}, has been studied by many authors \citep{WS89,DasiEspuig10}. Recently, \citet{Jiang14} found that the tilt-angle scatter led to a variation in the polar field by about $30\%$ for cycle 17. In fact, even a single big sunspot group with a large tilt angle and a large area appearing near the equator can change the polar field significantly \citep{Cameron13}. On the other hand, for the meridional circulation, we have some surface measurements for about the last 20 years, showing significant temporal variations (Chou and Dai, 2001; Hathaway and Rightmire, 2010). Although our theoretical understanding of the \mc\ is very limited, a few existing spherical global convection simulations do show significant variations in the \mc\ \citep{PCB12,Karak15}. Introducing randomness in the poloidal field generation and in the \mc, Karak and Choudhuri (2011) have been able to reproduce the Waldmeier effect in their high-diffusivity dynamo model. When the \mc\ becomes weaker, the cycle period and hence the rise time become longer.
The longer cycle period allows the turbulent diffusion to act for a longer time, making the cycle amplitude weaker \citep{Yeates08,Karak10} and leading to the Waldmeier effect. The variation of the meridional circulation is crucial in reproducing this effect. The motivation of the present work is to explore how the decay rates of cycles are related to their amplitudes in a flux transport dynamo model, with the aim of explaining the observed correlations mentioned earlier. The paper is organized as follows. In the next section, we summarize some features of the solar cycle that are often considered as precursors. In Section~3, we present a brief summary of our flux transport dynamo model, and then in Section~4 we introduce suitable stochastic fluctuations in the poloidal field and the meridional circulation, in order to reproduce various observed features of the solar cycle. Finally, the last section summarizes our conclusions. \section{Observational Studies} We have used three different observational data sets: ({i}) Wolf sunspot number{\footnote{http://solarscience.msfc.nasa.gov/greenwch/spot\_num.txt}} (cycles 1--23), ({ii}) sunspot area{\footnote{http://solarscience.msfc.nasa.gov/greenwch/sunspot\_area.txt}} (cycles 12--23), and ({iii}) $10.7$~cm radio flux{\footnote{http://www.ngdc.noaa.gov/stp/solar/flux.html}} (available only for the last five cycles). These parameters are very good proxies of magnetic activity and are often used to study the solar cycle (Hathaway {\it {et al.}}\ 2002). To minimize the noise while keeping the underlying properties unchanged, we smooth these monthly data using a Gaussian filter having a full-width at half maximum (FWHM) of 1~year. We also smooth the data with a FWHM of 2~years to check how the results change with the filtering. \begin{figure} \centerline{\includegraphics[width=1.0\textwidth,clip=]{fig1} } \caption{Scatter plots of the decay rate and the amplitude of the same cycle computed from (a) sunspot number, (b) sunspot area, and (c) $10.7$~cm radio flux data. In all these cases the original monthly data are smoothed using a Gaussian filter with a FWHM of 2~years. The straight line in each plot is the best linear fit to the data. The correlation coefficients ($r$) and the significance levels are also given in each plot.} \label{obs1} \end{figure} \subsection{Correlation between the Decay Rate and the Cycle Amplitude} We have calculated the decay rate in three different parts of the descending phase of the cycle, namely the early phase, the late phase and the entire phase. For the early phase, the decay rate is taken as the slope between two points with a separation of 1~year, with the first point one year after the cycle peak, whereas for the late phase the second point is taken 1~year before the cycle minimum. Here we exclude one year after the maximum when computing the decay rate for the early phase because sometimes the cycle peaks are not so prominent. While computing the decay rate for the late phase, we also exclude 1~year before the minimum just to avoid the effect of overlapping between two cycles during the solar minimum. Finally, the decay rate of the entire decay part ({\it i.e.}, the entire phase) is taken as the average of the individual decay rates computed at four different locations with a separation of one year, starting from the early phase and extending to the late phase. 
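To make these definitions concrete, the following is a minimal sketch of how the smoothing and the three decay rates can be computed from a monthly-sampled activity series. This is an illustration only; the function and variable names are ours and not taken from any released code.

\begin{verbatim}
# Illustrative sketch of the decay-rate definitions above. `monthly`
# is a monthly activity series (e.g., sunspot number); i_max and i_min
# are the indices of a cycle maximum and of the following minimum.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth(monthly, fwhm_years=2.0):
    # Convert the FWHM in years to the Gaussian sigma in months.
    sigma = fwhm_years * 12.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter1d(monthly, sigma)

def decay_rate(ssn, i_max, i_min, phase='entire'):
    # Slope over a 1-year baseline, in activity units per year.
    slope = lambda i: ssn[i] - ssn[i + 12]
    if phase == 'early':         # first point 1 yr after the maximum
        return slope(i_max + 12)
    if phase == 'late':          # second point 1 yr before the minimum
        return slope(i_min - 24)
    # Entire phase: average of four 1-yr slopes, one year apart,
    # starting from the early phase.
    return np.mean([slope(i_max + 12 + 12 * k) for k in range(4)])
\end{verbatim}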
In \Fig{obs1}~(a), (b) and (c), we show the correlations of the cycle amplitudes with the decay rates of the entire phase computed from sunspot number, sunspot area and $10.7$~cm radio flux data, respectively. We would like to point out that Cameron and Sch\"ussler (2008) computed the decay rate from the intervals between two fixed values of solar activity, and they did not get a significant correlation between the decay rate and the amplitude (see the right column of Figure\ 2 of their paper). The reason for not finding a significant correlation is that they calculated the decay rate in the late phase of the cycle, {\it i.e.}, near the tail of the cycle where the rate of decay is very small. We find that their values are comparable with our decay rates computed in the late phase. In the 4th and 5th columns of Table~1, we list our values and the values computed following the method of Cameron and Sch\"ussler (2008) (hereafter referred to as CS08). It is interesting to note that even for the radio flux data, for which we have only five data points, we get a strong correlation; see Table~1 for details. Therefore we can see that if we determine the decay rates from the entire phase of the solar cycle or from the early phase, we find a strong correlation with the amplitude. Thus, to determine the decay rate from the descending part of the solar cycle, we need to consider the entire decay phase of the cycle, which provides a better estimate than CS08. \begin{table} \caption{Correlation coefficients between different quantities of the solar cycle.} \begin{tabular}{ccccccccc} \hline &&\multicolumn{6}{c}{Correlation coefficients of the decay rate with}& Correlation \\ &&\multicolumn{6}{c}{the amplitude of}& between the \\ \cline{3-8} &&\multicolumn{4}{c}{Same cycle}&\multicolumn{2}{c}{Next cycle}& amplitude \\ \cline{3-8} &&Entire &\multicolumn{2}{c}{Late decay phase}& Early & Entire & Late & and the\\ \cline{4-5} Data set&FWHM &phase&Our & CS08's &Phase&phase&phase & previous\\ &&& value & value&&& & cycle period \\ \hline Sunspot&1~yr&0.79&0.21&0.22&0.67 & 0.55 & 0.61 & -0.64\\ number&2~yr&0.86&0.45&--&0.86 & 0.65 & 0.83 & -0.67 \\ \hline Sunspot&1~yr&0.84&0.20&0.11&0.69 & 0.14 & 0.37 & -0.49\\ area&2~yr&0.91&0.53&--&0.92 & 0.39 & 0.66 & -0.60\\ \hline Radio&1~yr&0.86&-0.11&0.14&0.93&-0.42 & 0.64 & 0.11\\ flux &2~yr&0.82&0.24&--&0.95 & -0.43& 0.46 & 0.09 \\ \hline \end{tabular} \end{table} \begin{figure}[!h] \centerline{\includegraphics[width=1.0\textwidth,clip=]{fig2.eps} } \caption{ Scatter plots showing the correlation of the amplitude vs. the decay rate of the previous cycle computed from sunspot number data (smoothed with a FWHM of 2~years). In (a) the decay rate is computed from the entire decay phase, whereas in (b) it is computed in the late decay phase.} \end{figure} \subsection{Correlation between the Decay Rate and the Next Cycle Amplitude} Next we find that there is a significant correlation between the amplitude of a cycle and the decay rate of the previous cycle. Again we find this correlation for all the data sets considered here (see Table~1). However, in Figure~2(a) we show this correlation only for the sunspot number data. Note that here the decay rates have been calculated from the entire decay phase as discussed in Section 2.1. This correlation suggests that the decay rate of a cycle carries some information about the strength of the next cycle. It is interesting to note that when we look at this correlation with the decay rate computed in the late phase, the correlation becomes even stronger; see Figure~2(b). 
In the 7th and 8th columns of Table~1, we show both correlations for all three data sets. These results suggest that the late phase of the cycle, in particular, carries more information about the forthcoming cycle. \blue{This correlation of the decay rate with the amplitude of the succeeding cycle was already reported by \citet{Yoshida10}. They showed this correlation only for sunspot number data, and their method of calculating the decay rate (the rate of decrease in sunspot number over some time interval) is somewhat different from ours. They studied the decay rate in six different cases (see their Figures 1(a)--(f)), obtaining it from the decrease of the sunspot number (SSN) over the periods of 1, 2, 3, 4, 5 and 6 years before the minima of the cycle in the six cases, respectively. Since solar cycles sometimes have overlapping regions during minima and it is difficult to ascertain the actual minima, there are some uncertainties in the methodology of Yoshida and Yamagishi (2010). The correlation coefficient ($r$ = 0.70) obtained in the second case of their study (see their Figure 1(b)) should be compared with what we obtained for the late-phase correlation ($r$ = 0.83). Since they did not consider the overlapping region between the minima and used monthly smoothed SSN, the value of their correlation coefficient is slightly different.} \citet{CS07} (see also \opencite{Brown76}) have observed a similar feature, namely that the activity level during the solar minimum is an indicator of the strength of the next solar cycle, and argued that this is caused by the overlap between two cycles during the solar minimum. In all our theoretical calculations (subsequent sections), while studying the correlation between the amplitude and the decay rate of the same cycle, we shall consider the decay rate of the entire phase, but for the correlation with the next cycle we shall consider only the late-phase decay rate. \begin{figure}[!h] \centerline{\includegraphics[width=0.8\textwidth,clip=]{fig3.eps}} \caption{ Scatter plot of the amplitude of cycle $n$ against the amplitude of the next cycle $n+1$ from sunspot number data (smoothed with a FWHM of 2~years).} \label{amplcorl} \end{figure} Since the decay rate of cycle $n$ is correlated both with the amplitude of cycle $n$ (Figure~1) and the amplitude of cycle $n+1$ (Figure~2), one question that naturally arises is whether the amplitude of cycle $n$ and the amplitude of cycle $n+1$ are themselves correlated. We show a correlation plot between these amplitudes in Figure~3, demonstrating that there is not a significant correlation. The challenge before a theoretical model is, therefore, to explain how the decay rate of cycle $n$ is correlated both with the amplitude of cycle $n$ and the amplitude of cycle $n+1$, while these amplitudes themselves do not have a strong correlation. \subsection{Correlation between the Cycle Period and the Next Cycle Amplitude} Finally, we also find that shorter cycles are followed by stronger cycles and vice versa. This produces an anti-correlation between the amplitude of a cycle and the period of the previous cycle \blue{\citep{Hathaway02, Solanki02, Ogurtsov11}}. \Fig{percorl} shows this correlation from sunspot number data (smoothed using a Gaussian filter with a FWHM of 2~years). The correlation coefficients from the other data sets are listed in Table~1. For all data sets we have taken the period of a cycle simply as the time difference between two successive minima. 
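The cycle statistics behind these correlations can be assembled in a few lines once the minima of the smoothed series have been identified. The sketch below is our own illustration (the names are ours, not from any released code):

\begin{verbatim}
# Periods and amplitudes from a smoothed monthly series `ssn`, given
# the indices `minima` of successive cycle minima (illustrative only).
import numpy as np
from scipy.stats import pearsonr

def cycle_stats(ssn, minima):
    periods, amplitudes = [], []
    for lo, hi in zip(minima[:-1], minima[1:]):
        periods.append((hi - lo) / 12.0)     # months -> years
        amplitudes.append(ssn[lo:hi].max())  # peak value of the cycle
    return np.array(periods), np.array(amplitudes)

# Anti-correlation of cycle n+1 amplitude with cycle n period:
# periods, amplitudes = cycle_stats(ssn, minima)
# r, p = pearsonr(periods[:-1], amplitudes[1:])
\end{verbatim}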
\begin{figure}[!h] \centerline{\includegraphics[width=0.75\textwidth,clip=]{fig4}} \caption{Scatter plot showing the anti-correlation between the cycle amplitude and the period of the previous cycle from sunspot number data (smoothed with a FWHM of 2~years).} \label{percorl} \end{figure} \section{Theoretical Framework of the Dynamo Model} \label{sec:model} We carry out our theoretical studies using the flux transport dynamo model originally presented by Chatterjee, Nandy, and Choudhuri (2004). In this model, the evolution of the axisymmetric two-dimensional magnetic field is governed by the following two equations: \begin{equation} \frac{\partial A}{\partial t} + \frac{1}{s}({\bf v}.\nabla)(s A) = \eta_{\rm{p}} \left( \nabla^2 - \frac{1}{s^2} \right) A + S_{\rm{BL}}(r,\theta;B), \label{eqA} \end{equation} \begin{equation} \frac{\partial B}{\partial t} + \frac{1}{r} \left[ \frac{\partial}{\partial r} (r v_r B) + \frac{\partial}{\partial \theta}(v_{\theta} B) \right] = \eta_{\rm{t}} \left( \nabla^2 - \frac{1}{s^2} \right) B + s({\bf B}_{\rm{p}}.\nabla)\Omega + \frac{1}{r}\frac{d\eta_{\rm t}}{dr}\frac{\partial{B}}{\partial{r}}, \end{equation} where $s = r \sin \theta$, $B (r, \theta)$ is the toroidal component of the magnetic field, $A(r, \theta)$ is the vector potential of the poloidal field, ${\bf v}=v_r{\bf \hat r} + v_\theta\hat {\bf \theta}$ is the velocity of the meridional flow, $\Omega$ is the internal angular velocity of the Sun and $\eta_{\rm{t}}$, $\eta_{\rm{p}}$ are the turbulent diffusivities of the toroidal and the poloidal fields. Since the detailed discussion of the parameters and boundary conditions is given in Chatterjee, Nandy, and Choudhuri (2004) and Karak and Choudhuri (2011), we do not repeat it here. We only make a few remarks about magnetic buoyancy and about the term $S_{\rm{BL}}(r,\theta;B)$ appearing in Equation (\ref{eqA}), which captures the longitude-averaged B-L mechanism. Let us discuss how the magnetic buoyancy is treated in this model. When the toroidal field above the tachocline ($r= 0.71 \Rs$) at any latitude exceeds a certain value, a fraction of it is removed there and an equivalent amount of field is added at the solar surface. Then this local toroidal field near the surface is multiplied by a factor $\alpha$ to give the poloidal field. The source term in Equation~(\ref{eqA}), therefore, is \begin{equation} S_{\rm{BL}}(r,\theta;B)=\alpha B(r,\theta,t), \label{alphaH} \end{equation} where \begin{equation} \alpha =\frac{\alpha_0}{4} \cos \theta \left[ 1 + \er \left(\frac{r - 0.95\Rs}{0.03\Rs} \right) \right] \left[ 1 - \er \left(\frac{r - \Rs}{0.03\Rs} \right) \right], \end{equation} with $\alpha_0 = 30$~m~s$^{-1}$. Our job now is to use this model to study the observed features of the solar cycle reported in the previous sections. To study any irregular feature of the solar cycle, we have to make the cycles unequal by introducing randomness in this regular dynamo model, as we discuss in the following sections. \blue{In most of our calculations, we have followed Chatterjee, Nandy, and Choudhuri (2004) in assuming the meridional circulation to consist of one cell. Of late, this assumption has been questioned, although the exact nature of the meridional circulation in the deeper layers of the convection zone is still not known. We have shown in Section~4.4 that we can retain the attractive features of our results with a more complicated meridional circulation (Hazra, Karak, and Choudhuri 2014). 
We have also included the near-surface shear layer in the calculations presented in Section~4.4.} \section{Results of Theoretical Modeling} \subsection{Fluctuations in the Poloidal Field Generation} We have discussed in the Introduction that the Sun does not produce an equal amount of poloidal field at the end of every cycle and that the generation of the poloidal field involves randomness. Therefore, similar to adding stochastic fluctuations in the traditional mean-field alpha \citep{Chou92}, adding stochastic fluctuations in the B-L $\alpha$ has become a standard practice in the flux transport dynamo community \citep{CD2000, Jiang07, KarakNandy12}. In the present work, we first introduce stochastic noise in the B-L $\alpha$ in the following way: \begin{equation} \alpha_0 \rightarrow \alpha_0 +\sigma(t,\tau) \alpha_0', \end{equation} where $\tau$ is the coherence time during which the fluctuating component remains constant and $\sigma$ is a uniformly distributed random number in the interval [-1, 1]. Considering the typical decay time of active regions by the surface flux transport process, we fix the coherence time within 0.5 -- 2 months. To see a noticeable effect, we add $75\%$ fluctuations in $\alpha$ ({\it i.e.}, $\alpha_0'/\alpha_0=0.75$) with a coherence time of $1$~month. From this stochastically forced model we have to calculate a measure of the theoretical sunspot number. We consider the magnetic energy density ($B^2$) of the toroidal field at latitude $15^{\circ}$ at the base of the convection zone ($r = 0.7 \Rs$) as a proxy of the sunspot number (following Charbonneau and Dikpati, 2000). Note that the absolute value of the theoretical sunspot number does not have any physical meaning. Therefore, we scale it by an appropriate factor to match it with the observed sunspot number. From the time series of the theoretical sunspot number, we calculate the cycle periods and decay rates in the same way as we have done for the observational data. \begin{figure} \centering{ \includegraphics[width=1.0\textwidth,clip=]{fig5.eps} } \caption{Results from the stochastically forced dynamo model with B-L $\alpha$ fluctuations: Scatter plots showing the correlations between (a) the decay rate and the amplitude of cycle $n$, (b) the decay rate of cycle $n$ and the amplitude of cycle $n+1$, (c) the period of cycle $n$ and the amplitude of cycle $n+1$.} \label{alflc} \end{figure} In Figure~\ref{alflc}(a) we show the correlation between the decay rates and the amplitudes of the same cycles. We see a positive correlation as in the observed data presented in Figure~1. It is easy to understand the reason behind this positive correlation. Since we have kept the \mc\ fixed, the periods of the solar cycle do not vary much, but the cycle strengths do vary due to the fluctuations in the poloidal field generation. Therefore, when the amplitude of a cycle increases while its period remains approximately fixed, the cycle has to decay rapidly. Hence we find that the stronger cycles decay faster than the weaker cycles, producing the positive correlation seen in Figure~\ref{alflc}(a). However, we see in Figure~\ref{alflc}(b) that there is not much correlation between the decay rate of cycle $n$ and the amplitude of the next cycle $n+1$, and we are unable to explain the observed correlation seen in Figure~2. 
Note that for Figure~\ref{alflc}(a) the decay rates are calculated from the entire decaying part of the cycle, which is the more appropriate definition of the decay rate, as we argued in Section~2, whereas for Figure~\ref{alflc}(b) the decay rate is computed in the late decay phase, because observationally we find a strong correlation only when the decay rate is computed in the late decay phase. Finally, we see in Figure~\ref{alflc}(c) that in this study the observed anti-correlation between the period of cycle $n$ and the amplitude of cycle $n+1$ (shown in \Fig{percorl}) is also not reproduced. Note that the period does not vary much when the \mc\ is kept constant. To sum up, when we introduce fluctuations in the poloidal field generation mechanism, we can explain the observed correlation between the decay rate and the amplitude of the cycle shown in Figure~1, but we cannot explain the other observed correlations presented in Figures~2 and 4. \subsection{Fluctuations in the Meridional Circulation} Next we introduce the other important source of fluctuations in the flux transport dynamo model, namely, variations of the meridional circulation. Although we have some observational results of the \mc\ variations near the solar surface for the last 15 -- 20 years, we do not have a long enough record to determine the nature of the long-term variations \citep{CD01, Hathaway10b}. However, there is indirect evidence for the variation of the \mc\ over long times \citep{LP09, Karak10, PL12}. In particular, Karak and Choudhuri (2011) have used the durations of the past cycles to argue that the \mc\ has long-term variations with a coherence time of probably 20 -- 45~years. There can also be short-term variations in the \mc\ whose time scale may be related to the convective turnover time of the solar convection zone. Such variations, with time scales from a few months to a year, are also observed in global magnetohydrodynamic simulations \citep{Karak14}. In this work, we vary the amplitude of the meridional circulation in the same way as we have done for the $\alpha$ term, but with a different coherence time. We show the results of simulations with 30\% fluctuations in the \mc\ with a coherence time of 30~years. \blue{We shall discuss later that the various observed correlations can be explained only if the coherence time is assumed to be not much less than the cycle period. While fluctuations of shorter duration (along with spatial variations) are likely to be present in the meridional circulation, we believe that they do not play any role in producing the correlations we are studying.} With a $30\%$ level of fluctuations and a coherence time of 30 years, we get variations of the \amp\ and of the period in our theoretical model comparable to the observational data. As in Section~4.1, we take the time series ($B^2$) at latitude $15^{\circ}$ at the base of the convection zone as our proxy of sunspot activity and calculate the required correlations from it. The relevant correlation plots are shown in Figure~\ref{figmc}. We see in \Fig{figmc}(a) that now the correlation between the decay rates and the cycle amplitudes has improved. Importantly, the other correlations are also correctly reproduced in Figures~\ref{figmc}(b) and \ref{figmc}(c) and can be compared with the observational plots in Figure~2(b) and Figure~4. These correlations did not appear at all when the fluctuations in the poloidal field generation were introduced ({\it cf.}\ Figures~\ref{alflc}(b) and \ref{alflc}(c)). 
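Both the $\alpha$ fluctuations of Section~4.1 and the \mc\ fluctuations used here amount to multiplying a base amplitude by a piecewise-constant random factor that is redrawn after every coherence time. The following is a schematic sketch of such a forcing (the function name and arguments are illustrative, not from the actual dynamo code):

\begin{verbatim}
# Piecewise-constant stochastic forcing: base*(1 + level*sigma), with
# sigma ~ U[-1, 1] redrawn after every coherence time tau (in years).
import numpy as np

def fluctuating_amplitude(base, level, tau, dt, t_end, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = int(t_end / dt)
    redraw = max(1, int(tau / dt))   # steps per coherence interval
    values = np.empty(n_steps)
    for i in range(n_steps):
        if i % redraw == 0:          # coherence time has elapsed
            sigma = rng.uniform(-1.0, 1.0)
        values[i] = base * (1.0 + level * sigma)
    return values

# e.g., 30% fluctuations in the flow amplitude v_0 with a 30-yr
# coherence time:
# v0_t = fluctuating_amplitude(v0, 0.30, tau=30.0, dt=0.01, t_end=500.0)
\end{verbatim}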
To show how the correlations change on changing the coherence time or the level of fluctuations, we tabulate the values of the correlation coefficients under different situations in Table~2. Each correlation coefficient is calculated from a run of 50 cycles. It should be kept in mind that there is some statistical noise in the values of the correlation coefficients. If the correlation coefficient for exactly the same set of parameters is calculated from different independent runs, the values will be slightly different from run to run. Keeping this in mind, we note that there is no clear trend of the correlation coefficients increasing or decreasing with increasing levels of fluctuations (other things being the same). However, all the correlation coefficients tend to decrease on decreasing the coherence time. \begin{figure} \centering{ \includegraphics[width=1.0\textwidth,clip=]{fig6.eps} } \caption{Same as \Fig{alflc} but with \mc\ fluctuations.} \label{figmc} \end{figure} It is not difficult to understand how the correlation in \Fig{figmc}(a) arises. For a stronger cycle, the sunspot number has to decrease by a larger amount during the decay phase, making the decay rate faster. However, to understand the physical reason behind the other two correlations seen in Figures~\ref{figmc}(b) and \ref{figmc}(c), more subtle arguments are needed. \citet{KarakChou11} extended the arguments of \citet{Yeates08} and pointed out that a weaker \mc, which makes the cycles longer, will have two effects. Firstly, the differential rotation has more time to generate toroidal field and tends to make the cycles stronger. Secondly, the turbulent diffusivity gets more time to act on the fields and tends to make the cycles weaker. When the diffusivity is high (as in our model), the second effect dominates over the first and the longer cycles are weaker (the opposite is true for dynamo models with low diffusivity). \citet{KarakChou11} showed that this led to an explanation of the Waldmeier effect for dynamo models with high diffusivity. We now point out that this tendency (longer cycles tending to be weaker) is also crucial in our understanding of the correlations seen in Figures~\ref{figmc}(b) and \ref{figmc}(c). If the \mc\ keeps fluctuating with a coherence time of 30 years, it would happen very often that the \mc\ would retain the same value during a cycle (say cycle $n$) and the early rising phase of the next cycle (cycle $n+1$). This is less likely to happen when the coherence time is reduced. Suppose the \mc\ is weaker during cycle $n$ and the rising phase of cycle $n+1$. Then cycle $n$ will tend to be longer and to have a weaker decay rate. The following cycle $n+1$ will have a tendency of being weaker. This will produce the correlations seen in Figures~\ref{figmc}(b) and \ref{figmc}(c). On decreasing the coherence time, it will happen less often that the \mc\ will be the same during cycle $n$ and the rising phase of the next cycle $n+1$. Hence the correlations degrade on decreasing the coherence time of the \mc. We have realized that there is also a memory effect, which enhances the correlations explained in the previous paragraph. To illustrate this memory effect, we make a run of our dynamo code in which the \mc\ is decreased suddenly during a sunspot minimum and then brought back to its original value during another sunspot minimum a few cycles later. The \mc\ and the resulting sunspot activity are plotted in \Fig{memory}. 
The periods of successive cycles are also indicated in the middle panel of \Fig{memory}. On changing the \mc, it is found that the periods of the cycles begin changing almost immediately. However, there seems to be a memory effect as far as the amplitudes of the cycles are concerned. Even after the \mc\ changes, the amplitude of the next cycle is very similar to the amplitude corresponding to the earlier value of the \mc. This memory effect will certainly enhance the correlations we are discussing. Suppose the \mc\ is weaker during cycle $n$, making its period longer and its decay rate weaker. Even if the \mc\ becomes stronger by the rising phase of the next cycle $n+1$, the memory effect will ensure that the amplitude of cycle $n+1$ is still weak, thereby producing the correlation. \begin{figure} \centering{ \includegraphics[width=1.0\textwidth,clip=]{fig7.eps} } \caption{Plots showing how the variation of the meridional circulation, measured by $v_0$, with time (upper panel) changes the period of the cycle (middle panel) and the strength of the magnetic field (shown by $B^2$ in the lower panel).} \label{memory} \end{figure} At this point, we would like to mention a possible misconception about the correlation between the period of cycle $n$ and the amplitude of cycle $n+1$. It may be thought that the overlap between two cycles during the solar minimum is the cause of this correlation. If the next cycle is stronger, then it starts early and the overlap with the present cycle is greater. This makes the present cycle shorter. However, we believe that this is not the source of the correlation because, if it were, we would have seen the correlation in Figure~\ref{alflc}(c) as well, where cycle strengths were varied by fluctuations in the poloidal field generation. So the overlap is not the reason behind this correlation, and we only obtain it in a high diffusivity dynamo model with a fluctuating \mc. \begin{table}[h!] 
\caption{Correlation coefficients for different levels of fluctuations and coherence times of the meridional circulation.} \begin{tabular}{ccccc} \hline &&\multicolumn{2}{c}{Correlation of decay rate}&Correlation of\\ &&\multicolumn{2}{c}{with cycle amplitude of}&previous cycle \\ \cline{3-4} Coherence time & Fluctuations& Same cycle & Next cycle& period with \\ (year)&($\%$)&(Entire phase)& (Late phase)& amplitude\\ \hline & 10 & 0.92 & 0.92 & -0.97\\ & 20 & 0.86 & 0.92 & -0.95 \\ 30 & 30 & 0.87 & 0.89 & -0.96 \\ & 40 & 0.92 & 0.96 & -0.73 \\ & 50 & 0.87 & 0.91 & -0.94\\ \hline & 10 & 0.79 & 0.85 & -0.95\\ & 20 & 0.86 & 0.86 & -0.98\\ 20 & 30 & 0.93 & 0.96 & -0.97 \\ & 40 & 0.90 & 0.87 & -0.88\\ & 50 & 0.89 & 0.90 & -0.97\\ \hline & 10 & 0.78 & 0.74 & -0.90\\ & 20 & 0.88 & 0.77 & -0.97\\ 11 & 30 & 0.90 & 0.85 & -0.92 \\ & 40 & 0.82 & 0.74 & -0.89\\ & 50 & 0.82 & 0.84 & -0.83 \\ \hline & 10 & 0.70 & 0.63 & -0.87\\ & 20 & 0.83 & 0.74 & -0.86\\ 5.5 & 30 & 0.81 & 0.79 & -0.84 \\ & 40 & 0.81 & 0.57 & -0.85\\ & 50 & 0.80 & 0.81 & -0.78\\ \hline & 10 & 0.57 & 0.48 & -0.78\\ & 20 & 0.58 & 0.59 & -0.64\\ 1 & 30 & 0.61 & 0.67 & -0.80 \\ & 40 & 0.73 & 0.25 & -0.65\\ & 50 & 0.69 & 0.38 & -0.72\\ & 75 & 0.64 & 0.39 & -0.58\\ & 100 & 0.65 & 0.73 & -0.76\\ \hline & 10 & 0.42 & 0.62 & -0.80\\ & 20 & 0.56 & 0.69 & -0.78\\ 0.5 & 30 & 0.68 & 0.47 & -0.74 \\ & 40 & 0.62 & 0.56 & -0.67\\ & 50 & 0.61 & 0.56 & -0.79\\ & 75 & 0.64 & 0.50 & -0.81\\ & 100 & 0.64 & 0.60 & -0.87\\ \hline \label{tabmc} \end{tabular} \end{table} \subsection{Fluctuations in the Poloidal Field Generation and the Meridional Circulation} Finally, we add fluctuations both in the poloidal field generation process and in the meridional circulation of the regular model, which is the realistic scenario. We add the same amount of fluctuations in the poloidal field generation and in the meridional circulation that we had added earlier in the individual cases ({\it i.e.}, 75\% fluctuations in the poloidal field generation with a coherence time of 1 month and 30\% fluctuations in the meridional circulation with a coherence time of 30~years). The results are shown in Figure~\ref{figalmc}. In this figure, we see that the scatters in the correlation plots are very close to what we find in actual observations. It is perhaps not a big surprise that all the correlations are reproduced correctly, because they were already reproduced on introducing fluctuations in the \mc\ alone. A correct theoretical model should also explain the lack of correlation seen in Figure~3 between the peaks of two successive cycles. \Fig{theoampl}(a) shows the correlation between the amplitude of cycle $n$ and the amplitude of cycle $n+1$ for the same level of fluctuations that was used to generate \Fig{figalmc}, whereas \Fig{theoampl}(b) gives the same correlation when the fluctuation in the B-L $\alpha$ is raised from 75\% to 100\%. It is seen that the correlation between these amplitudes is weak and becomes weaker still on increasing the fluctuation in the B-L $\alpha$. A physical interpretation is not difficult to give. A coherence time of 30 years in the \mc\ implies that very often the \mc\ will be the same during two successive cycles, trying to produce a correlation between the cycles. On the other hand, a fluctuation in the B-L $\alpha$ will definitely tend to reduce the correlation. Certainly this fluctuation would tend to reduce the correlations seen in \Fig{figalmc} as well. 
However, for our choice of parameters, we are able to theoretically reproduce the three observed correlations as seen in \Fig{figalmc}, whereas the correlation between successive cycles is much weaker, in conformity with observations. \blue{We may mention that we also get an anti-correlation between the amplitude of a cycle and its duration. Our theoretical correlation coefficient ($r = -0.65$) is somewhat stronger than what Charbonneau and Dikpati (2000) obtained from the observational data ($r = -0.37$).} \begin{figure} \centering{ \includegraphics[width=1.1\textwidth,clip=]{fig8.eps} } \caption{Same as \Fig{alflc} but with both B-L $\alpha$ and \mc\ fluctuations.} \label{figalmc} \end{figure} \begin{figure} \centering{ \includegraphics[width=1.0\textwidth,clip=]{fig9.eps} } \caption{(a) Scatter plot of the amplitude of cycle $n$ with the amplitude of cycle $n+1$ with 75\% fluctuation in B-L $\alpha$. (b) Same as (a) but with 100\% fluctuation in B-L $\alpha$.} \label{theoampl} \end{figure} \subsection{\blue{Robustness of the Results on Changing the Meridional Circulation and Differential Rotation Profiles}} \blue{ So far, our computations have been performed using a single-cell meridional circulation in each hemisphere. However, recent observations, helioseismic inversions and convection simulations suggest the possibility that the meridional circulation may have a complicated multi-cellular structure rather than being single-cellular ({\it{e.g.}}, Zhao {\it{et al.,}}\ 2013; Karak {\it{et al.,}}\ 2015). In Hazra, Karak, and Choudhuri (2014), we have shown that the flux transport dynamo model can reproduce most of the basic features of the solar cycle with a multi-cellular meridional circulation as long as there is an equator-ward flow near the bottom of the convection zone. Therefore we are curious to know whether the correlations studied in this paper are also reproduced with a multi-cellular circulation. To answer this question, we perform a simulation with three radially stacked circulation cells, exactly the same as used in Section~3 of Hazra, Karak, and Choudhuri (2014). For the differential rotation in all our previous works, we have used a simplified profile of the observed differential rotation that does not capture the near-surface shear layer (see {\it e.g.} Figure~1 of Chatterjee {\it et al.,}\ 2004). Although it is expected that the near-surface shear layer does not produce a significant effect on the global large-scale fields in the flux transport dynamo \citep{Dikpati02}, just for the sake of completeness we use a somewhat improved profile of the differential rotation captured by the following analytical formula} \begin{eqnarray} \blue{\Omega(r,\theta) = \sum_{j=0}^2 \cos\left(2j\left(\frac{\pi}{2}-\theta\right)\right)\,\sum_{i=0}^4 c_{ij} (r/R_\odot)^i}. \end{eqnarray} \blue{For the coefficients $c_{ij}$, see Table~1 of \citet{Belvedere00} (see also their Figure~1 for a comparison with the observed profile).} \blue{With these new profiles of the meridional circulation and the differential rotation, we perform a dynamo simulation by adding the same amount of stochastic fluctuations in the B-L $\alpha$ and in the meridional circulation as in the previous section. In the results presented earlier, magnetic buoyancy was treated by moving a part of the toroidal magnetic field to the surface whenever it became larger than a critical value. 
However, as pointed out in \citet{HKC14} and \citet{KKC14}, this way of treating magnetic buoyancy is not very robust under a large change in parameters and model ingredients. Therefore, for the computations of this section we use the `non-local' magnetic buoyancy as used in Charbonneau and Dikpati (2000), and in many other works. } \begin{figure} \centering{ \includegraphics[width=1.0\textwidth,clip=]{fig10.eps}} \caption{\blue{Same as \Fig{figalmc} but in this model, the large-scale flow has three cells radially stacked in the solar convection zone and the differential rotation includes the near-surface shear layer.} } \label{figc3mc} \end{figure} \blue{ The final results from this computation are shown in Figure \ref{figc3mc}. We observe that even with the {\it unconventional} meridional flow profile (three radial cells) and the addition of the near-surface shear layer, the correlations do not disappear. Although the correlations in Figures~\ref{figc3mc}(b) and \ref{figc3mc}(c) become a little weaker compared to what we have found for the usual single-cell circulation (Figure \ref{figalmc}), they show the correct general features as found in observations. The values of the correlations might be improved a little by tuning the amount of imposed fluctuations; we do not attempt that here, but rather use the same amount of fluctuations as in the earlier sections.} \blue{We make a few remarks about the two ways of treating magnetic buoyancy. The behaviour of the dynamo can become substantially different on treating magnetic buoyancy in these two different ways \citep{CNC05}. Since some magnetic field is removed due to magnetic buoyancy, one would expect the strength of the toroidal field at the bottom of the convection zone to be depleted by its action. One unphysical aspect of the non-local treatment of magnetic buoyancy is that this effect is usually not taken into account. As we have repeatedly pointed out, one requirement for obtaining the Waldmeier effect as well as the correlations discussed in this paper is that the effect of diffusivity has to be more important than the effect of toroidal field generation. Since the first method of treating magnetic buoyancy (used in the earlier subsections) puts a cap on the strength of the toroidal field while the second non-local method does not, toroidal field generation remains unrealistically strong in the second method, and it is more difficult to obtain the correlations properly with this method. We have taken the magnetic energy density ($B^2$) at latitude $15^{\circ}$ at the bottom of the convection zone as the proxy of the sunspot number. In the first method of treating magnetic buoyancy (with the single-cell meridional circulation, as presented in Sections~4.1--4.3), we found that all the correlations come out robustly if we use the magnetic energy density ($B^2$) in a wide range of latitudes as a proxy of the sunspot number. However, on using the second method of non-local magnetic buoyancy, we find that the magnetic energy density ($B^2$) has to be taken in a narrow band of low latitudes, with the correlations disappearing or even reversing if we use the magnetic energy density at higher latitudes. To sum up, the second non-local method of treating magnetic buoyancy is a more robust method and keeps the dynamo stable over a wide range of parameters (which is not the case with the first method). 
However, it is more difficult to reproduce the various observed correlations of the solar cycle with this non-local buoyancy method because the depletion of the magnetic field due to buoyancy is not included.} \section{Conclusion} We have discussed three important features of the solar cycle -- ({i}) a linear correlation between the amplitude of a cycle and its decay rate, ({ii}) a linear correlation between the amplitude of cycle $n$ and the decay rate of cycle $n-1$, and ({iii}) an anti-correlation between the amplitude of cycle $n$ and the period of cycle $n-1$. We have seen that all these correlations exist in all the data sets considered here. The last two correlations involve characteristics of one cycle and the amplitude of the next, so they provide useful precursors for predicting a future cycle. Just by measuring the period and the decay rate of a cycle, we can get an idea of the strength of the next cycle. We have also explored whether these features can be explained in a B-L type flux transport dynamo model. We first introduced stochastic fluctuations in the poloidal field generation (B-L $\alpha$ term) and found that only the correlation between the decay rate and the cycle amplitude is reproduced. However, when we added fluctuations in the \mc, we found that all three correlations are reproduced in qualitative agreement with the observational data. In our high diffusivity dynamo model, a strong \mc\ makes the period shorter and the decay rate faster, but it also makes the next cycle stronger---especially because the cycle strength displays a memory effect, depending on the \mc\ a few years earlier. The opposite happens when the \mc\ becomes weaker. Therefore the fluctuations in the \mc\ are essential to reproduce the observed features. This study is consistent with earlier studies for modeling the cycle durations and strengths of observed cycles \citep{Karak10}, the Waldmeier effect \citep{KarakChou11}, grand minima \citep{CK12} and a few others \citep{Passos12}, which indicate that a variable meridional circulation is crucial in modeling many aspects of the solar cycle. \blue{We have found that the observed correlations are reproduced even when the meridional circulation is assumed to be more complicated than the one-cell pattern used in most flux transport dynamo models. However, the coherence time of the fluctuations in the meridional circulation has to be not less than the cycle period in order to produce the correlations. The correlations disappear on making the coherence time too short, implying that fluctuations in the meridional circulation having a coherence time of the order of the convective turnover time cannot be the cause of the observed correlations. The theory of the meridional circulation is still very poorly understood, and we have no understanding of what may cause fluctuations in the meridional circulation with a long coherence time. However, the pattern in the periods of the past cycles indicates the presence of such fluctuations (Karak and Choudhuri 2011), and the fact that only such fluctuations can explain the various observed correlations of the solar cycle convinces us that fluctuations in the meridional circulation with a long coherence time must exist.} We have pointed out that the period or the decay rate of a cycle may be used to predict the next cycle, since these quantities indicate the strength of the \mc, which also determines the amplitude of the next cycle a few years later (due to the memory effect). 
It seems that the decay rate during the late phase of the cycle is the most reliable precursor for the next cycle, as seen in Figure~2(b)---presumably because the decay rate during this phase is the best indicator of the \mc\ during the particular interval of time that is most crucial in determining the amplitude of the next cycle. However, fluctuations in the poloidal field generation process degrade all the observed correlations. As a result, even Figure~2(b)---displaying the correlation between the decay rate during the late phase and the amplitude of the next cycle---has considerable scatter, limiting our ability to predict the next cycle in this way. \section*{Acknowledgment} This work is partly supported by DST through the J. C. Bose Fellowship awarded to ARC. GH thanks CSIR, India, for financial support. We thank an anonymous referee for careful reading and for providing constructive comments that helped to improve the quality of the paper. \bibliographystyle{spr-mp-sola-cnd}
\section{Introduction} \label{sec:introduction} Image-based plant phenotyping is a method by which scientists use image data to characterize and categorize plants within and across species. This process typically involves the use of tools, instrumentation, and domain expertise to (i) measure information from individual or groups of samples in the greenhouse, field, and/or nature, (ii) be applied across scales, ranging from cell microscopy to satellite imagery, and (iii) allow researchers to extract complex morphological and topological features that would otherwise be impossible to measure by hand. One of the main challenges of image-based phenotyping is identification of the relevant biological structures (foreground) from the background. In some cases, imaging methods can be modified to highlight these objects, such as using back lights or relying on the fluorescence of those objects; however, in many situations the contrast between the relevant object and the background is low. When contrast is high, simple greyscale or color-based thresholding can be used, but in more complex color imagery, plant phenomics has focused on machine learning approaches. Deep learning has revolutionized computer vision as a powerful and efficient way to extract features from image-based data~\cite{krizhevsky2017imagenet, he2016deep, ronneberger2015u}. An important strength of this approach is that deep learning models can learn an invariance to heterogeneous background effects, which allows them to generalize to new samples outside of the training set. However, such approaches can be laborious and expensive to adopt, because users must generally annotate hundreds or thousands of images to provide sufficient training data. In plant biology, for example, reliably associating plant traits with genes at population scale requires large numbers of observations that span hundreds or thousands of genotypes. Such associations provide deeper understanding of the genetic architectures and underlying mechanisms that govern complex processes that control the growth, acclimation, response, and composition of plants, with important implications for sustainable agriculture and bioenergy~\cite{taylor2019sustainable, grattapaglia2018quantitative}. Thus, there exists a need to develop methods for fast and accurate image-based plant phenotyping that alleviate the data annotation bottleneck. In contrast to traditional deep learning approaches, which can require thousands or millions of training samples to reach sufficient prediction accuracy~\cite{krizhevsky2017imagenet, he2016deep}, few-shot learning is an emerging subset of machine/deep learning that attempts to maximize predictive accuracy while using only a small number of labeled samples for training. Multiple approaches exist to solve this problem, including data augmentation, metric learning, external memory, and parameter optimization~\cite{yang_fewshot}. This work utilizes a combination of data augmentation (i.e., applying random spatial and color augmentations to images during training, as sketched below) and iterative algorithms, which have been previously demonstrated for biomedical image analysis, e.g., semantic segmentation of cells and retinal vasculature~\cite{rutter_tracing, rutter_combo, januszewski_floodfilling, lagergren_growing}. 
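As a concrete illustration of the augmentation side of this strategy, a generic spatial and color augmentation pipeline might look as follows. This is an illustrative sketch only, not the exact pipeline used in this work; the specific augmentations employed here are described in Section~\ref{sec:tracing}.

\begin{verbatim}
# Generic on-the-fly augmentation for training tiles so that no two
# tiles appear identical across epochs (illustrative example only).
import torchvision.transforms as T

augment = T.Compose([
    T.RandomRotation(degrees=180),            # continuous rotations
    T.RandomHorizontalFlip(),
    T.RandomVerticalFlip(),
    T.ColorJitter(brightness=0.2, contrast=0.2,
                  saturation=0.2, hue=0.05),  # color augmentation
])
# augmented_tile = augment(tile)
\end{verbatim}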
The goal of this study is to extend these methods to image-based plant phenotyping by leveraging convolutional neural networks (CNNs) to segment the body and visible vein architecture of poplar (\textit{Populus trichocarpa}) leaves from high-resolution scans obtained in the field. In particular, few-shot learning is utilized in this work because it divides a small number of large images into a large number of small image tiles. In this way, the complex task of whole-image segmentation is broken down into smaller, easier decision rules, which enables accurate segmentation using very few labeled images for training. \textit{P. trichocarpa} (also called black cottonwood, western balsam-poplar, or California poplar) is a model system for studying the genetic architecture of complex traits and climate adaptation in woody plants. Spanning from central California, USA, to northern British Columbia, Canada, it harbors tremendous geographic, climatic, phenotypic, and genetic diversity. Further, \textit{P. trichocarpa} has a fully sequenced genome, genome annotation, abundant transcriptomes, resequencing, and phenotypic data. Importantly, rapid biomass growth, clonal propagation, and the ability to grow in marginal lands with low agricultural input make it an ideal crop for sustainable bioenergy applications~\cite{tuskan2006, garcia2006protease, geraldes2011snp, zhang2018genome, slavov2012genome, evans2014population, chhetri2019multitrait, chhetri2020genome}. As a result, research and commercial groups have invested heavily in the development of \textit{P. trichocarpa} as a high-impact species for forest products and biofuel production~\cite{jansson2007populus, rubin2008genomics, evans2014population, mckown2014genome}. To this end, leaves play a key role in biomass production since they are the primary organs responsible for sunlight absorption and carbon fixation, which provide the primary food source of vascular plant systems. Further, vein architecture supports the mechanical structure of the leaf and governs the distribution of water and other nutrients, which has important implications for the physiology, biomechanics, and structure of a plant~\cite{sack2013venation}. Thus, capturing accurate leaf traits and relating them to the genetic components that control them may provide insights toward improved tree biomass and composition. In plant phenotyping, segmentation of individual leaves and their venation has seen sparse attention. In general, existing approaches (i) use experimental methods to chemically clear the leaf lamina and stain the veins to highlight the venation against the background~\cite{buhler2015phenovein, xu2021automated}, (ii) apply image pre-processing by greyscaling, aggregating specific color channels, or spatial rescaling~\cite{katyal2012leaf, larese2012legume, buhler2015phenovein, salima2015leaf, xu2021automated}, (iii) rely on global filters and morphological operations (e.g., Odd Gabor filters, Hessian matrices, vesselness filters, and region merging) to obtain binary segmentations~\cite{katyal2012leaf, larese2012legume, buhler2015phenovein, salima2015leaf, zhu2020fast}, (iv) employ ensembles of scales and models to make aggregate predictions~\cite{zhu2020fast, xu2021automated}, and (v) require hundreds of manually-annotated training samples to produce accurate segmentation models~\cite{xu2021automated}. However, these commonly encountered steps can bottleneck the scalability and accuracy of image-based plant phenotyping at population scale. 
For example, approach (i) adds experimental time, effort, materials, expenses, and hazards to data acquisition compared to capturing just raw images, (ii) destroys fine-grained image details across spatial and color dimensions, (iii) may be overly simplistic and generate large amounts of effort in segmentation post-processing, (iv) uses complex workflows which may be difficult to automate at scale, and (v) can be infeasible for smaller research groups with limited time and budgets. These challenges may help explain why leaf and vein segmentation has not received as much attention as crop- or field-level phenotyping for plant stress, shoot morphology, and plant/organ counting~\cite{jiang2020convolutional}. This work presents two few-shot learning methods based on CNNs to segment the body and visible vein architecture of \textit{P. trichocarpa} leaves. Leaf segmentation is formulated as a tracing task, in which a CNN iteratively traces the boundary of a leaf to produce a single contiguous leaf segmentation. Previous studies have shown that alternative CNN-based segmentation methods (e.g., the fully-convolutional neural network, U-Net~\cite{ronneberger2015u}) can result in ``patchy'' segmentations that must be addressed with complex post-processing methods that are difficult to generalize~\cite{rutter_tracing}. In contrast, boundary tracing eliminates this patchiness problem by only segmenting one contiguous region, thereby ensuring accurate downstream extraction of morphological features. Alternatively, vein segmentation is formulated as a region growing task, in which a CNN iteratively adds neighboring pixels to a growing region of interest corresponding to the visible vein architecture. Similar to the tracing approach, the vein segmentation ensures biologically-realistic morphological features by including pixels in the segmentation only if a neighboring pixel was previously classified. Each method is fully automated (i.e., requires no human supervision or initialization) and segments images orders of magnitude faster than manual annotation. The current work is designed to provide the plant phenotyping community with (i) methods for fast and accurate image-based feature extraction with minimal training data and (ii) a new population-scale data set for domain scientists and machine learning researchers. In particular, the few-shot learning methods developed here are applied to raw RGB images with no experimental/image pre-processing, use individual CNN models that learn the complex relationships between pixels for accurate leaf and vein segmentation, and require very few training samples to generalize and make accurate predictions at population scale. The segmentations are used to extract biologically realistic features that are validated using real-world physical measurements and applied downstream using broad-sense clonal heritability estimates and a genome-wide association study (GWAS). \section{Materials and Methods} \label{sec:methods} Few-shot learning is used to segment the body and visible vein architecture of \textit{P. trichocarpa} leaves from high-resolution scans. The resulting segmentations are combined with open-source tools for image processing and genomic analysis to expand the application of these methods to a wider scientific audience. 
All deep learning methods are implemented in Python (version 3.7.8) using the PyTorch deep learning library (version 1.11.0)~\cite{paszke2019pytorch} and are made available at \url{https://github.com/jlager/few-shot-leaf-segmentation}. Feature extraction is completed using Fiji (version 2.9.0)~\cite{schindelin2012fiji} and RhizoVision Explorer (RVE, version 2.0.3)~\cite{seethepalli2020rhizovision, seethepalli2021rhizovision}. Genomic analysis is conducted in R (version 4.2.0) using the GAPIT3 software package (version 3)~\cite{wang2021gapit}. All of the data, including images, manual segmentations, model predictions, extracted features, and the underlying genomes are available at \url{https://doi.org/10.13139/ORNLNCCS/1908723}. \subsection{Data collection} \label{sec:data} The leaf scans considered in this work were collected during a field campaign in August 2021, from the 10-acre poplar plantation at the University of California, Davis (UC Davis), which maintains a common garden of poplar trees that can be grown on low-quality, poor, and marginal land~\cite{baileybale2021plantation, taylor2019sustainable}. The plantation follows a randomized complete block design composed of three blocks. Each block is partitioned into rows and positions that uniquely identify the corresponding genotypes, and each contains approximately 1,500 \textit{P. trichocarpa} genotypes. For practical reasons, leaf samples were collected from one entire block (1,322 viable samples) and partially from a second block (131 samples), totaling 1,453 trees. Leaves were sampled from a branch at approximately breast height (i.e., $\sim$1.37 meters) from the south-facing side of each tree. Leaves were chosen by selecting the first fully mature leaf counting from the top of each branch. Each leaf was also paired with a barcode label that encoded the treatment, block, row, and position of the tree, which uniquely identified the corresponding genotype and allowed the user to record the sample ID during data capture. This helped expedite the phenotyping process and reduce human error. Selected leaves were scanned in the field as they were sampled from each tree using a USB-powered Epson Perfection model V39~\cite{epsonwebsite}. The top and bottom of each leaf were scanned with a resolution of 300 dots-per-inch (DPI). To account for heterogeneous leaf shapes (e.g., leaves with non-trivial 3D characteristics like ``waviness''), a weight was used on the scanner lid to compress each leaf to the glass of the scanner in order to reduce image artifacts like blurring. Additionally, between rows of trees (there are approximately 30 trees per row), the scanner glass and background were cleaned to reduce the buildup of dust and other debris. During data capture, the scanner suffered a hardware failure in which one of its pixels began to malfunction, causing a vertical white line to gradually appear near the center of each subsequent scan. This artifact affected approximately 100 leaf scans. To mitigate the malfunction, leaves were moved to the edge of the scanner away from the malfunctioning pixel, affecting 62 leaf scans. A new scanner was acquired and used for the remainder of the field campaign (2,634 leaf scans). Despite the hardware failure, these data acquisition steps resulted in 2,906 RGB leaf scans (i.e., top and bottom of 1,453 samples), each with dimension $3510\times2550$ pixels. In addition to image-based measurements, petiole length and diameter were measured manually for each leaf. 
Using a procedure similar to that for leaf imaging, barcode scanners were used to record the sample ID, followed by length/diameter measurements using USB-powered SPI 17-600-8 electronic calipers~\cite{spiwebsite}. The manual measurements of petiole length and diameter are used to validate the image-based measurements. Obtaining accurate high-quality ground truth data is important for deep learning applications in general, but it is crucial for few-shot learning, since a model must learn features from a small number of training samples that generalize well to the broader population. To this end, training data was generated for leaf body segmentation using the top and bottom scans of 25 leaves (50 images in total), which were randomly selected and manually traced. Manual segmentation was completed using the open-source graphics editor, GNU Image Manipulation Program (GIMP)~\cite{gimp}, taking between 15 and 30 minutes per image, depending on the size and serration of the leaf. Similarly for vein segmentation, GIMP was used to manually draw all visible leaf venation for eight leaf-bottom scans, taking between four and eight hours per image, depending on the vein density. Note that only leaf-bottom venation is considered in this work. Leaf-top venation will be considered in future work. Due to the large amount of manual effort required for vein segmentation, the training data set was constructed using \emph{iterative data set refinement}, in which images were individually added to the training set based on manual inspection of model performance across the set of all images. For example, compressing samples against the scanner glass caused some leaves to fold on themselves, which produced dark lines that were falsely identified as veins. Thus, an image with multiple examples of such folds was manually segmented and added to the training set so that the model learned an invariance to such artifacts. This process was repeated similarly for other leaf characteristics (e.g., dead, diseased, and nutrient-deficient leaf tissue), including a scan exhibiting the hardware failure discussed above, until the model converged to acceptable performance across the population. This strategy resulted in the total of eight images (six for training, two for validation) mentioned previously. Note that in practice, the number of images may vary depending on the application and image quality, but it is important (particularly for few-shot learning) that the training data set is fine-tuned to the point that the model is able to generalize. \subsection{Leaf segmentation} \label{sec:tracing} Segmentation of the leaf body is formulated as an object tracing task based on~\cite{rutter_tracing, rutter_combo}, in which a CNN is used to iteratively trace along the contour of a leaf. These methods have been shown to reach state-of-the-art accuracy in biomedical image segmentation using a fraction of the training data required by other approaches~\cite{rutter_combo}. In this framework, a CNN inputs a small image tile centered somewhere on the edge of an object and outputs a predicted trace (i.e., a set of pixel displacements) along the object boundary from the center to the edge of the tile. The iteration proceeds by generating new image tiles along the predicted contours, continuing the trace until reaching the starting location, thereby closing the loop and finishing the segmentation. 
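A condensed sketch of this iteration is given below. This is our own simplified illustration rather than the released implementation: the tile size, number of predicted displacements, step count, and closure tolerance follow the description later in this section, while border handling, initialization, and the burn-in logic are omitted for brevity.

\begin{verbatim}
# Simplified sketch of the tracing iteration. `cnn` maps a
# (1, 4, 256, 256) tile (RGB + traced-path channel) to (1, 2, 128)
# pixel displacements relative to the tile center.
import numpy as np
import torch

def trace_leaf(cnn, image, start_rc, step=32, close_tol=10.0):
    contour = [np.asarray(start_rc, dtype=float)]
    path = np.zeros(image.shape[:2], dtype=np.float32)
    for it in range(10000):                       # safety bound
        r, c = contour[-1].round().astype(int)
        rgb = image[r-128:r+128, c-128:c+128] / 255.0
        tile = np.dstack([rgb, path[r-128:r+128, c-128:c+128]])
        x = torch.from_numpy(tile).permute(2, 0, 1)[None].float()
        with torch.no_grad():
            disp = cnn(x)[0].numpy()              # (2, 128)
        new_pts = contour[-1] + disp[:, :step].T  # keep first 32 steps
        contour.extend(new_pts)
        for pr, pc in new_pts.round().astype(int):
            path[pr, pc] = 1.0                    # update path channel
        if it > 0 and np.linalg.norm(contour[-1] - contour[0]) < close_tol:
            break                                 # loop closed
    return np.asarray(contour)
\end{verbatim}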
An important benefit of this approach is that it breaks the complex task of whole-leaf segmentation into multiple smaller, easier decision rules, and requires only a small number of images to train an accurate model. See Figure~\ref{fig:tracer} for a diagram of the leaf tracing algorithm and Supplementary Video S1 for a video of the iteration. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/tracer.pdf} \caption{\textbf{Leaf tracing algorithm.} An image tile and a small segment of the previously traced path are input to a CNN which predicts the next steps of the trace. The predictions are added to the leaf contour and used to generate an image tile at the new location. This iteration continues until the trace reaches the starting location of the contour. \underline{Left}: RGB leaf scan and input tile (front) with previously traced pixels (back). \underline{Center}: the leaf tracing CNN which transforms the $256\times256$ input tile into a $2\times128$ set of pixel displacements for trace prediction. \underline{Right}: predicted pixel displacements that are used to update the trace and generate image tiles in the next iteration.} \label{fig:tracer} \end{figure} The leaf tracing CNN inputs $256\times256\times4$ image tiles, which include three color channels (RGB) and an overlay of the previously traced path as an additional channel. The tile size is chosen large enough to provide the model with sufficient context to trace through areas where the leaf contour may be obscured (e.g., in damaged/diseased areas or near the petiole). The RGB values are normalized to $[0, 1]$ for computational stability. The additional channel is a binary image composed of ones along pixels of the previously traced path and zeros otherwise, and thus provides the network with a direction in which to continue the trace. Each image tile is centered at a pixel on the contour of a leaf, from which the 50 manually traced samples are used to generate more than 300,000 individual tiles for training. Further, heavy image augmentation is used so that the CNN learns an invariance to heterogeneous leaf shapes and conditions. In particular, random continuous rotations, horizontal and vertical flips, displacement jitter, and color augmentation (hue, saturation, brightness, and contrast) are combined so that no two image tiles appear the same during training. The leaf tracing CNN outputs $2\times N$ trace predictions, which encode $N$ horizontal and vertical pixel displacements along the leaf contour relative to the center pixel of the input tile. Training data is generated by evenly sampling pixels from the center to the edge of the tile along the contour of the leaf. Distance is then measured between the predicted trace and the ground truth contour using mean squared error, $\mathcal{L}_{\text{MSE}}$, as an objective function. Importantly, the quality of the predicted trace degrades near the edges of image tiles since the CNN does not have context beyond the boundaries of the input. However, it is still important for the model to predict the trace from the center pixel to the edge of the image tile so that the predicted trace can ``skip'' over obscured segments of the contour~\cite{rutter_combo}. To account for these effects, a weighted mean squared error is used, in which predictions closer to the center pixel are weighted more heavily than predictions near the edge.
The objective function is given by \begin{subequations} \begin{align} \mathcal{L}_{\text{MSE}} &= \frac{1}{N} \sum_{i=1}^{N} \omega_i \big\| y_i - \text{CNN}(x)_i \big\|_2^2, \label{eq:mse} \\ \omega_i &= 1 + \frac{1 - \tanh\left(\alpha i + \beta\right)}{2}, \label{eq:tanh} \end{align} \end{subequations} \noindent where $x \in \mathbb{R}^{256\times256\times4}$ is the input image tile, $y \in \mathbb{R}^{2\times N}$ is the set of ground truth row/column coordinates with $y_i$ indicating the row and column position of the $i^{\text{th}}$ pixel, $\omega \in \mathbb{R}^{N}$ is the weight vector, the number of pixel displacements is $N=128$, and $\alpha = 8/N$ and $\beta = -4$ are chosen such that the hyperbolic tangent function (which defines $\omega$) gradually decreases the error weight from two to one along the predicted contour. In this way, the objective function weights pixels near the center of the tile approximately twice as heavily as pixels near the edge. The model architecture follows standard practices for CNNs~\cite{simonyan2014very, he2016deep}. In particular, the CNN uses blocks of three $3\times3$ convolution layers with zero-padding and one max pooling layer. Each convolution layer includes batch normalization to stabilize training~\cite{ioffe2015batch} and a ``LeakyReLU'' activation function for nonlinearity~\cite{maas2013rectifier}. Additionally, residual connections are applied between convolution layers for easier optimization and better prediction accuracy~\cite{he2016deep}. In total, the leaf tracing CNN includes six blocks with max pooling and one block without, which transforms the spatial image dimensionality from $256\times256$ to $4\times4$. Then, a final $4\times4$ convolution layer reduces the outputs to a vector of length 256, which is reshaped into $2\times128$ for trace prediction. Note that the final convolution is linear (i.e., it does not include a nonlinear activation function) so that the trace predictions can reach the edges of the input tile in any direction. To prevent overfitting, images are randomly split into 80\% training (i.e., 40 images totaling $\sim$240K image tiles) and 20\% validation (i.e., 10 images totaling $\sim$60K image tiles) sets. The model is then trained for 1,000 epochs with a batch size of 256 and the Adam optimizer~\cite{kingma2014adam} with default parameters. Further, early stopping with a patience of 20 epochs (i.e., training is stopped if the validation error does not improve within 20 epochs) is used to ensure that training halts once the model stops improving. Once the leaf tracing CNN is trained, it is used to iteratively trace the contour of each leaf image in the data set. The tracing algorithm is initialized using automatic thresholding to obtain a rough segmentation of the leaf, which provides both a starting location and trace direction. An image tile centered at the top of the rough segmentation (i.e., at the tip of the leaf) is initially fed to the CNN, which outputs the initial trace prediction from the center to the edge of the image tile. The first 32 pixel predictions along the edge of the leaf are added to the trace, and a new image tile is drawn centered at the new location. This iteration continues until the predicted trace falls within 10 pixels of the previously traced contour, after which a line is drawn from the prediction to the contour to close the loop. To eliminate errors from the trace initialization, the tracing algorithm uses 10 ``burn-in'' iterations before storing traced pixels for the final segmentation.
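Before moving on, the weighted objective in Equations~\ref{eq:mse} and~\ref{eq:tanh} can be made concrete with a short PyTorch sketch. The function below assumes batched $2\times N$ displacement tensors and is a minimal illustration under those assumptions, not the released implementation (which is available in the repository linked above).

\begin{verbatim}
import torch

def weighted_mse_loss(pred, target, alpha=None, beta=-4.0):
    # pred, target: (batch, 2, N) predicted and ground truth pixel
    # displacements along the leaf contour.
    N = pred.shape[-1]
    alpha = 8.0 / N if alpha is None else alpha
    i = torch.arange(N, dtype=pred.dtype, device=pred.device)
    # Weight schedule: decreases smoothly from ~2 (tile center) to ~1 (edge).
    w = 1.0 + (1.0 - torch.tanh(alpha * i + beta)) / 2.0
    # Squared Euclidean error per contour point, weighted and averaged.
    sq_err = ((pred - target) ** 2).sum(dim=1)  # (batch, N)
    return (w * sq_err).mean()
\end{verbatim}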
The leaf body segmentation is obtained by classifying all interior pixels as foreground and exterior pixels as background. Note that the trace direction is randomized during training, so that the tracing CNN can segment leaves in either clockwise or counterclockwise directions. In practice, the tracing algorithm does not require human supervision to start or stop the iteration and takes $\sim$1 second per image on a single GPU of an NVIDIA DGX Station A100. \subsection{Vein segmentation} \label{sec:growing} Segmentation of the leaf venation is formulated as a region growing task based on~\cite{januszewski_floodfilling, lagergren_growing}, in which a CNN is used to iteratively expand a region of interest (i.e., visible veins of a leaf). The convolutional region growing method (also called flood filling networks~\cite{januszewski_floodfilling}) has been shown to reach state-of-the-art segmentation accuracy while preserving biologically realistic morphological features~\cite{lagergren_growing}. However, rather than tracing the boundary of an object with a 1D line, the vein growing CNN iteratively grows a segmentation in all directions (e.g., 2D in~\cite{lagergren_growing} and 3D in~\cite{januszewski_floodfilling}) by classifying which pixels/voxels should be included in or rejected from the region. In particular, a CNN inputs small image tiles centered on pixels of interest and predicts classifications of the center pixel and its adjacent neighbors. Neighboring pixels that are added to the region become the seeds for new image tiles in the next iteration. This process continues until no new pixels are added to the region, thereby finishing the segmentation. Similar to the leaf tracing framework, the region growing approach breaks the complex task of vein segmentation into many smaller decision rules and can produce high-accuracy segmentations using fewer than ten images for training. See Figure~\ref{fig:grower} for a diagram of the vein growing algorithm and Supplementary Video S2 for a video of the iteration. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/grower.pdf} \caption{\textbf{Vein growing algorithm.} Image tiles centered on pixels of interest are input to a CNN which predicts the classification of the center pixel and its neighbors. Neighboring pixels with high probability are added to the vein region and used as seed pixels in the next iteration. The iteration continues until no new pixels are added to the vein region. \underline{Left}: RGB leaf scan and input tiles with center pixels highlighted in black. \underline{Center}: the vein growing CNN which transforms the $128\times128$ input tile into a $3\times3$ matrix of vein probabilities. \underline{Right}: predicted pixel probabilities that are used to update the region and generate new image tiles in the next iteration.} \label{fig:grower} \end{figure} The vein growing CNN inputs $128\times128\times3$ RGB image tiles (also normalized to $[0, 1]$) centered on pixels in the interior of a leaf. The tile size is chosen to be smaller than that of the leaf tracing tiles (i) since vein classification does not require as much context and (ii) for computational efficiency, since many more image tiles are used in this framework. However, the tile size is still large enough so that the model can accurately predict vein pixels in areas of uncertainty (e.g., blurry patches and diseased/dead tissue).
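Before turning to the training data, a minimal sketch of this growing iteration may help fix ideas. It assumes a trained model \texttt{cnn} and a hypothetical helper \texttt{extract\_tile} that crops a $128\times128$ RGB tile centered on a pixel; for simplicity it thresholds each prediction immediately, whereas the actual method averages the probabilities a pixel receives across iterations, as described below.

\begin{verbatim}
import torch

def grow_veins(image, cnn, seeds, prob_thresh=0.5):
    # seeds: iterable of (row, col) pixels inside the leaf body.
    queue, visited, vein = list(seeds), set(seeds), set()
    while queue:
        px = queue.pop()
        tile = extract_tile(image, px)          # hypothetical helper
        with torch.no_grad():
            probs = cnn(tile.unsqueeze(0))[0]   # (3, 3, 2) class probabilities
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                p = (px[0] + dr, px[1] + dc)
                # Channel 1 is assumed to hold the vein (foreground) probability.
                if probs[dr + 1, dc + 1, 1] > prob_thresh:
                    vein.add(p)
                    if p not in visited:        # each pixel seeds at most once
                        visited.add(p)
                        queue.append(p)
    return vein
\end{verbatim}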
To construct a training set, image tiles are drawn for each vein pixel, from which the eight manually segmented images generate more than 1,000,000 positive samples (i.e., samples centered on leaf veins rather than leaf lamina). Further, to account for heterogeneous backgrounds and image artifacts, up to ten times as many background pixels are sampled from the interior of each leaf, resulting in approximately 10,000,000 negative samples. The image tiles are augmented during training using a combination of random continuous rotations, horizontal and vertical flips, color augmentation, and Gaussian blur. The vein growing CNN outputs $3\times3\times2$ predictions of the center pixel and its neighbors, in which the two prediction channels represent probabilities that a pixel belongs to the foreground (vein) or background (lamina). To measure error between predicted pixel probabilities and their ground truth classifications, Focal Loss ($\mathcal{L}_{\text{FL}}$), an extension of standard cross-entropy, is used as an objective function~\cite{lin2017focal}. In particular, Focal Loss seamlessly accounts for the class imbalance between positive and negative samples (i.e., there are many more background pixels than vein pixels) and allows the model to focus on more difficult examples where veins are obscured. The objective function is given by \begin{equation} \mathcal{L}_{\text{FL}} = \begin{cases} -\alpha \, (1-p)^\gamma \, \log(p) & \text{if $y=1$} \\ -(1-\alpha) \, p^\gamma \, \log(1-p) & \text{otherwise}, \end{cases} \label{eq:focalloss} \end{equation} \noindent where $p = \text{CNN}(x)$ are the pixel probabilities, $x \in \mathbb{R}^{128\times128\times3}$ is the input image tile, $y \in \{0, 1\}$ are the ground truth pixel classes, and $\alpha=0.25$ and $\gamma=2.0$ are the default hyperparameters of the Focal Loss function~\cite{lin2017focal}. The model architecture and training strategy are nearly identical to the leaf tracing framework. Since the input tiles for vein segmentation are half the dimension of the inputs for leaf tracing, the first block of $3\times3$ convolutional layers and max pooling is removed from the architecture described in Section~\ref{sec:tracing}. Thus, the vein growing CNN transforms the spatial image dimensionality from $128\times128$ to $4\times4$, after which a final $4\times4$ convolution layer reduces the outputs to a vector of length 18, which is reshaped into $3\times3\times2$ for vein classification. Note that, unlike the leaf tracing CNN, the final convolution includes a Softmax activation function, which constrains the outputs to the interval $[0, 1]$ and motivates the probabilistic interpretation used by the objective function. The model is trained with the Adam optimizer for 1,000 epochs with a batch size of 1024 and an early stopping patience of 20 epochs. Note that a larger batch size is used here compared to the leaf tracer since the inputs are smaller and thus more can be included in each batch. Finally, six images (totaling $\sim$7M image tiles) are used for training and two (totaling $\sim$2.5M image tiles) for validation. Once the vein growing CNN is trained, it is used in a recursive framework in which the CNN decides whether new pixels should be added to the vein segmentation. The algorithm is initialized by randomly sampling 10,000 seed pixels inside the leaf body (using the segmentations from Section~\ref{sec:tracing}). For each seed pixel, image tiles are generated and fed to the model, which then classifies the seed pixel and its neighbors.
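As with the tracing objective, Equation~\ref{eq:focalloss} admits a compact implementation. The sketch below is illustrative (tensor shapes and names are assumptions), not the released code.

\begin{verbatim}
import torch

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-8):
    # p: predicted vein probabilities in [0, 1]; y: ground truth labels in {0, 1}.
    # The (1-p)^gamma and p^gamma factors down-weight easy examples, while
    # alpha balances the vein/background class imbalance.
    p = p.clamp(eps, 1.0 - eps)
    loss = torch.where(
        y == 1,
        -alpha * (1.0 - p) ** gamma * torch.log(p),
        -(1.0 - alpha) * p ** gamma * torch.log(1.0 - p),
    )
    return loss.mean()
\end{verbatim}

The growing iteration introduced above then proceeds as follows.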
Neighboring pixels that are classified as leaf veins are used as seeds in the next iteration. Once a seed pixel has been considered, it is removed from the sample set for future iterations. This process is then repeated, continuously adding pixels to the segmentation, until no new pixels are positively classified. Note that a pixel can receive multiple classifications as its neighbors become seeds during the iterations. To account for this, the final vein segmentation is determined by thresholding the \emph{average} probability of each pixel. The optimal probability threshold is chosen by minimizing the number of connected components in the segmentation mask across a range of threshold values. In other words, the optimal threshold is the one that maximizes vein connectivity in the segmentation mask. Like the leaf tracing framework, the vein growing algorithm does not require human supervision at inference time and completes accurate vein segmentations in $\sim$60 seconds on a single GPU of an NVIDIA DGX Station A100, which is orders of magnitude faster than human annotation. \subsection{Feature extraction} \label{sec:featureextraction} Given the binary segmentation maps from Sections~\ref{sec:tracing} and~\ref{sec:growing}, traditional open-source image-processing tools are used to extract biologically meaningful traits from the leaf body, vein architecture, and petiole. This is possible since the few-shot learning methods effectively remove background artifacts and highlight the salient information in leaf scans. In this work, Fiji~\cite{schindelin2012fiji} is used to extract leaf-level traits, RhizoVision Explorer (RVE)~\cite{seethepalli2020rhizovision, seethepalli2021rhizovision} is used for vein traits (e.g., length and thickness), and a custom implementation is used for petiole traits (length and width). RVE is chosen for vein traits in particular since it is designed to analyze root systems, which are composed of vessel-like structures with tips, branch points, redundant connections, etc., making it applicable to studying vein architectures, which share many of the same characteristics. Further, since the scan resolution is known (i.e., 300 DPI), features extracted from Fiji and RVE are easily converted from pixel-coordinates to standard units (e.g., cm). Fiji is applied to the leaf segmentations from Section~\ref{sec:tracing} to extract 23 image-based traits related to whole-leaf morphology. Morphological descriptors include area (cm$^2$), perimeter (cm), circularity (unitless), and solidity (unitless). Color features are also derived by relating the segmentations back to the original scanned images, including average red, green, blue, hue, saturation, and brightness values corresponding to leaf pixels. Feature extraction in Fiji is scripted and applied in ``batch mode'' to the full set of leaf segmentations. A detailed description of each leaf-level trait is provided in Supplementary Table~\ref{tab:leaftraits}. RVE is used to extract 27 features from the vein segmentations. Note that only vein pixels inside the leaf segmentation are used for vein architecture traits (i.e., the petiole is not considered here). The software parameters are set to 300 DPI and ``whole mode'' for image-level traits. Vein diameters are used to classify veins into three ranges: (i) less than 0.25 mm, (ii) between 0.25 mm and 0.80 mm, and (iii) above 0.80 mm, in an attempt to correspond to third, second, and first order veins, respectively.
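As a small illustration of the unit handling above, the pixel-to-millimeter conversion and the diameter binning reduce to a few lines; the snippet below is a hedged sketch (the constant follows from the 300 DPI scan resolution, while the function and array names are assumptions rather than RVE internals).

\begin{verbatim}
import numpy as np

MM_PER_PIXEL = 25.4 / 300  # 25.4 mm per inch at 300 DPI (~0.0847 mm/pixel)

def bin_vein_diameters(diameters_px):
    # Convert vein diameters from pixels to mm, then bin into the three
    # diameter ranges approximating third, second, and first order veins.
    d_mm = np.asarray(diameters_px) * MM_PER_PIXEL
    bins = np.digitize(d_mm, [0.25, 0.80])  # 0: <0.25, 1: 0.25-0.80, 2: >0.80
    return d_mm, bins
\end{verbatim}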
Extracted traits include those supplied by default (e.g., average vein diameter (mm), length (mm), and area (mm$^2$)), with some traits being repeated across the three vein diameter ranges. Following~\cite{sack2013venation}, additional venation traits are also derived that measure ratios of vein length and area to whole-leaf morphological measures. See Supplementary Table~\ref{tab:veintraits} for the full list of vein traits and their descriptions. Petiole segmentations are derived by considering the largest connected component of vein pixels outside of the leaf segmentation. Then, to estimate petiole length and width, Fiji is used to compute the best-fit rotated rectangle around the petiole mask. The height of the bounding rectangle is sufficient to estimate petiole length. However, rectangle width is not used to estimate petiole width since (i) petiole width changes along the length of the petiole (i.e., it tends to be wider near the ends and thinner near the midpoint), and (ii) the caliper measurements for petiole width were taken near the center of the petiole. Thus, petiole width is estimated by computing the average diameter over the center 20\% of the segmentation. Finally, Fiji is used to estimate similar traits for the petiole compared to the leaf body (e.g., area and perimeter), and RVE is used to estimate petiole volume. See Supplementary Table~\ref{tab:petioletraits} for the full list of 18 petiole traits and their descriptions. The feature extraction process yields 68 traits related to leaf, vein, and petiole morphology that can be used for genomic analysis. To validate image-based features with real-world measurements, petiole length and width are compared against caliper measurements that were recorded manually during image capture. To consider the results from a biological perspective, (i) broad-sense clonal heritability is computed for each recorded trait, and (ii) a genome-wide association study (GWAS) is performed for the vein density trait (i.e., the ratio of vein area to leaf area). Vein density is chosen since it utilizes both the leaf and vein segmentations, and since the ratio between lamina and venation must balance sunlight intake and carbon fixation with the transport of sugars and other nutrients to sink organs, all of which are essential processes for biomass production. \subsection{Validation} \label{sec:validation} To validate the leaf and vein segmentations, following~\cite{rutter_tracing} and~\cite{lagergren_growing}, the Jaccard index (intersection over union) is used to measure segmentation accuracy for images in the validation sets (i.e., ten for leaf segmentation and two for vein segmentation). This metric measures similarity between two semantic segmentations by computing the ratio between the set intersection (all true positive pixels) and the set union (all true positive, false positive, and false negative pixels), where scores near one indicate high accuracy and near zero indicate low accuracy. Since validation error was monitored during training, conclusions drawn from segmentation accuracy for validation images may be affected by data leakage (i.e., creating an over-optimistic interpretation of the model). To account for this, the predicted digital measurements across the population are further validated using real-world physical measurements. To this end, calipers were used to measure petiole length and width during data collection.
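For completeness, the Jaccard index described above has a direct implementation; the following sketch assumes binary mask arrays of equal shape.

\begin{verbatim}
import numpy as np

def jaccard(pred, truth):
    # Intersection over union for binary masks: |A & B| / |A | B|.
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union > 0 else 1.0
\end{verbatim}

The caliper measurements, in turn, provide an independent physical check on the extracted traits.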
These values are compared against the corresponding features extracted from the vein segmentations described in Sections~\ref{sec:growing} and~\ref{sec:featureextraction}. To measure the agreement between digital and manual values quantitatively, the coefficient of determination ($R^2$) is computed for each trait. \subsection{Genomic analysis} \label{sec:genomics} To pre-process the vein density trait for GWAS, outliers are removed using the median absolute deviation (MAD), where any measurement deviating from the median by more than six MADs is removed. To account for geospatial variation across the plantation, thin plate spline (TPS) correction is applied using the \textit{fields} software package in R~\cite{fieldsR}, in which the row and position of each tree are used as coordinates for the TPS models. To extract the genetic component of each sample, best linear unbiased predictors (BLUPs) are computed for the TPS-corrected values using the \textit{lme4} software package in R~\cite{bateslme4}, which fits genotypes as random effects for each trait. In addition, to assess the repeatability of each measurement and the genetic control of the vein density trait, broad-sense heritability ($H^2$) was estimated using the TPS-corrected values of the clonal replicates (131 replicated samples) from the two blocks considered in this work. Heritability is computed by \begin{equation} H^2 = \frac{\sigma^{2}_{G}}{\sigma^{2}_{G} + \sigma^{2}_{E}}, \label{eq:genomicvariance} \end{equation} \noindent where $\sigma^{2}_{G}$ is the genotypic variance due to clonal differences and $\sigma^{2}_{E}$ represents environmental variance. For genomic analysis, a total of 1,492 \textit{P. trichocarpa} accessions were previously sequenced using the Illumina genetic analyzer with paired-end sequencing technology at the Department of Energy Joint Genome Institute~\cite{nordberg2014genome}. The sequences are aligned to the v4 reference genome using the Burrows-Wheeler Alignment tool, BWA-MEM~\cite{li2013aligning}, and variant calling is performed using the GATK (version 4.0) Haplotype caller~\cite{van2013fastq}. Starting with more than 22 million single nucleotide polymorphisms (SNPs) obtained by the GATK Variant Quality Score Recalibration (VQSR) method at tranche 99, 847,066 SNPs across 1,419 genotypes were retained for population-scale genomic analysis after applying the following filters. First, 73 individuals were removed due to excessive genomic relatedness or greater than 10\% missing SNP data. SNPs were removed if they had greater than 15\% missing genotypes, a minor allele frequency less than 0.05, or a Hardy--Weinberg equilibrium chi-square test P value $< 10^{-50}$. SNPs were further pruned using a linkage disequilibrium (LD) coefficient of determination threshold of $R^2 \geq 0.7$. The data pre-processing steps above yield 847,066 SNPs for 1,419 unrelated genotypes that are used for GWAS analysis of the vein density trait. Association between the SNPs and the phenotypic vector was tested using a multilocus GWAS method, BLINK, from the GAPIT3 software package in R~\cite{huang2019blink}, which uses two fixed effect models (FEMs) iteratively. The first FEM tests for the association of all genetic markers independently to generate a set of pseudo Quantitative Trait Nucleotides (QTNs) that are then used in the second FEM to optimize the selection of pseudo QTNs. Only those QTNs that are significant and not in LD are used as covariates in the association test.
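Before specifying the association models, note that the MAD-based outlier screen described at the start of this subsection reduces to a few lines; the sketch below illustrates the six-MAD criterion under assumed array names and is not the exact pre-processing script.

\begin{verbatim}
import numpy as np

def mad_filter(values, n_mads=6.0):
    # Remove measurements deviating from the median by more than n_mads
    # median absolute deviations.
    v = np.asarray(values, dtype=float)
    med = np.median(v)
    mad = np.median(np.abs(v - med))
    keep = np.abs(v - med) <= n_mads * mad
    return v[keep]
\end{verbatim}

Returning to BLINK, its two fixed effect models are specified next.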
The first FEM is given by \begin{equation} y_i = S_{i1}b_1 + S_{i2}b_2 + \cdots + S_{ik}b_k + S_{ij}d_j + e_i, \label{eq:fem1} \end{equation} \noindent where $y_i$ is the phenotypic value of the $i^{\text{th}}$ individual, $S_{i1}, \dots, S_{ik}$ are the genotypes of the $k$ QTNs, $b_1, \dots, b_k$ are the corresponding effects of the QTNs, $S_{ij}$ is the genotype of the $i^{\text{th}}$ individual and $j^{\text{th}}$ SNP, $d_j$ is the $j^{\text{th}}$ SNP effect, and $e_i$ is the residual. The second FEM is used to optimize the QTNs for use as covariates in the first FEM, and is given by \begin{equation} y_i = S_{i1}b_1 + S_{i2}b_2 + \cdots + S_{ik}b_k + e_i, \label{eq:fem2} \end{equation} \noindent with a similar interpretation to Equation~\ref{eq:fem1}. Note that Equation~\ref{eq:fem2} is essentially a reduced version of Equation~\ref{eq:fem1}, in which the SNP term that tests for the association with the phenotypic vector is removed. The model optimization is performed with the Bayesian Information Criterion (BIC). \section{Results} \label{sec:results} \subsection{Few-shot learning results} The few-shot learning methods are applied to the total set of images, in which the 2,906 top and bottom scans are used for leaf segmentation, and the 1,453 bottom scans are used for vein architecture. Examples of the resulting model outputs are given in Figure~\ref{fig:segmentation}. Note that the leaf in Figure~\ref{fig:segmentation} was not used for model training or validation. Additional segmentation results are visualized in Supplementary Figure~\ref{fig:overlays}, which illustrates leaf heterogeneity by varying leaf size and vein density. All of the image data, ground truth annotations, and predicted leaf/vein segmentations are made publicly available at \url{https://doi.org/10.13139/ORNLNCCS/1908723}. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/segmentation.png} \caption{\textbf{Leaf and vein segmentations.} Results of the leaf and vein segmentation methods on an example leaf outside the training set. The top row shows the full leaf and the bottom row gives a zoomed-in view. \underline{Left}: example leaf scan chosen from outside the training and validation sets. \underline{Center left}: predicted segmentation of the leaf body where pixels inside the traced contour are shown in white and outside the contour in black. \underline{Center right}: predicted segmentation of the visible vein architecture with vein pixels shown in white and background pixels in black. \underline{Right}: example leaf scan with the predicted leaf boundary and vein architecture overlaid in blue and red, respectively. Note that for visualization these images are zoomed in to remove redundant white space from the scanner background.} \label{fig:segmentation} \end{figure} The Jaccard index (Section~\ref{sec:validation}) is used to measure segmentation accuracy for images in the validation sets (i.e., ten for leaf segmentation and two for vein segmentation). For leaf tracing, all segmentations in the validation set exceed a Jaccard score of 0.99, indicating a high degree of overlap between the predicted and ground truth segmentations. For vein segmentation, the two validation images achieve Jaccard scores of 0.6134 and 0.6334. This reduced score is due mainly to (i) human errors in the ground truth segmentation, (ii) the complexity of the vein architecture, and (iii) the method for probability threshold selection.
For example, the model identifies veins that were missed during manual annotation, which are thus counted as false positives in the Jaccard score. Further, due to the thinness of the veins, predictions that are off by just one pixel can result in large changes in the Jaccard score. Finally, choosing a threshold that maximizes vein connectivity creates slightly wider vein predictions compared to the ground truth, which further decreases the score, but increases the biological accuracy of the vein structure. For a visualization of these phenomena, see Supplementary Figure~\ref{fig:accuracy}, which illustrates these effects for the validation image with the lowest Jaccard score. Despite the lower Jaccard metric, the vein growing framework achieves recall/sensitivity values (i.e., the probability of detecting a vein pixel) of 0.9219 and 0.8673 for the two validation images, respectively, which indicates that the method has a high detection rate, exceeding even human-level accuracy in some cases, and thus almost completely captures the structure of the visible vein architecture. The predicted digital measurements across the population are further validated using real-world caliper measurements, which are visualized in Figure~\ref{fig:validation}. In particular, the data are compared against a linear model, which results in $R^2 = 0.96$ for petiole length and $R^2 = 0.77$ for petiole width. This discrepancy between $R^2$ values is due to several factors. First, manual measurement of petiole length is made from end to end, resulting in larger, more consistent measurements. However, for petiole width, the caliper was placed at the approximate center of the petiole, resulting in greater variation due to subjective positioning of the caliper. Further, since petiole width is typically smaller, measurement errors affect the $R^2$ value more than for petiole length. Despite these effects, the digital traits strongly agree with the manual measurements, which helps validate the accuracy of the segmentations and extracted features. \begin{figure}[h!] \centering \includegraphics[width=0.85\textwidth]{figures/validation.png} \caption{\textbf{Petiole measurement validation.} Each subplot compares predicted ($x$-axis) with actual ($y$-axis) morphological measurements of the petiole. Manual measurements were obtained with calipers during data collection while digital measurements were derived from the leaf and vein segmentations. \underline{Left}: petiole length comparison with data shown in black and the best-fit line shown in red. \underline{Right}: petiole width comparison with data shown in black and the best-fit line shown in red.} \label{fig:validation} \end{figure} \subsection{Genomic analysis results} To consider the segmentation and feature extraction methods from a biological perspective, a GWAS is conducted at population scale to associate vein density (i.e., the ratio of vein area to leaf area) with the \textit{P. trichocarpa} genome. The broad-sense clonal heritability of the vein density trait is moderately high ($H^2 = 0.65$), which suggests that the trait is under genetic control. The multilocus BLINK method is used to perform GWAS on the vein density trait with 847,066 SNPs in the genome. This analysis identified 12 significant SNPs using a false discovery rate (FDR) P value, $P<0.05$, and 15 unique SNPs using a less-strict FDR P value, $P<0.2$. A Manhattan plot, quantile-quantile (QQ) plot, and the distribution of the TPS-corrected vein density BLUPs are shown in Figure~\ref{fig:gwas}.
A total of 30 unique genes that potentially control the variation in the vein density trait are identified for these SNPs based on the nearest flanking genes in both directions of each GWAS hit in the genome. To gain more insight into the function of these genes, the \textit{Arabidopsis thaliana} orthologs are identified based on protein sequence similarity. The GWAS results are summarized in Table~\ref{tab:gwas}. See Section~\ref{sec:discussion} for discussion about these genes and their associated physiological plant processes. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/gwas.png} \caption{\textbf{Population-scale genomic analysis.} \underline{Top}: a Manhattan plot of the GWAS results for leaf vein density using the multilocus BLINK method for 847,066 SNPs across 1,419 \textit{P. trichocarpa} genotypes. The horizontal axis corresponds to genomic positions by chromosome and the vertical axis shows the negative log-base-10 P value for each SNP. The dashed horizontal line represents the FDR threshold, $P<0.05$, and the dotted line represents the FDR threshold, $P<0.2$. \underline{Bottom left}: the quantile-quantile (QQ) plot corresponding to the P values shown in the Manhattan plot with the expected values shown by the red dashed line. \underline{Bottom right}: the distribution of TPS-corrected vein density BLUPs used for GWAS.} \label{fig:gwas} \end{figure} \begin{table}[h!] \centering \caption{\textbf{Identified genes.} Top gene models detected by the BLINK GWAS method based on FDR P value $P<0.2$ for vein density in \textit{Populus trichocarpa}. Each row corresponds to a different gene and includes the gene ID, chromosome number, SNP position, distance in the genome (positive for upstream and negative for downstream from the SNP position), minor allele frequency (MAF), P value, \textit{A. thaliana} ortholog, and ortholog annotation.} \label{tab:gwas} \begin{tabular}{ccccccp{2.3cm}} \hline \textbf{Gene ID} & \textbf{Chr. (pos.)} & \textbf{Dist.} & \textbf{MAF} & \textbf{P value} & \textbf{Ortholog} & \textbf{Annotation} \\ \hline Potri.017G077200 & 17 (8,592,229) & -5,329 & 0.1246 & 3.24e-4 & AT3G04680 & CLP-SIMILAR PROTEIN 3 \\ Potri.017G077300 & 17 (8,592,229) & 6,036 & 0.1246 & 3.24e-4 & AT3G01300 & PBS1-LIKE 35 \\ Potri.006G090501 & 6 (6,910,580) & -2,825 & 0.1938 & 4.69e-4 & - & - \\ Potri.006G090600 & 6 (6,910,580) & 1,617 & 0.1938 & 4.69e-4 & AT3G53880 & ALDO-KETO REDUCTASE FAMILY 4 MEMBER \\ Potri.006G227300 & 6 (23,159,363) & -3,829 & 0.2158 & 7.47e-4 & AT1G18600 & RHOMBOID-LIKE PROTEIN 12 \\ Potri.006G227400 & 6 (23,159,363) & 2,083 & 0.2158 & 7.47e-4 & AT3G13784 & CELL WALL INVERTASE 5 \\ \hline \end{tabular} \end{table} \section{Discussion} \label{sec:discussion} In this work, few-shot learning was used for both leaf and vein segmentation. Each method enabled few-shot learning by dividing a small number of large images into a large number of small image tiles, and by leveraging predictions from previous iterations to expand partial segmentations until a stopping criterion was reached. In particular, iterative data set refinement was paired with heavy image augmentation so that the CNN models learned an invariance to potential image artifacts not included in the small number of annotated images. In this way, the complex task of whole-image segmentation was broken down into smaller, easier decision rules, thereby maximizing predictive accuracy while minimizing the number of labeled images needed for model training.
This strategy was chosen primarily due to (i) the lack of labeled training images (e.g., only 50 leaf segmentations and eight vein segmentations) and (ii) because each leaf scan is large ($3510 \times 2550$ pixels) and does not easily fit into a standard deep learning model (e.g., a CNN). To address (i), one could simply annotate more data manually, however, to segment the visible venation in the images considered here (Section~\ref{sec:data}) would require up to 12,000 person-hours (i.e., 6 years, assuming a 40-hour work week). Note that the few-shot learning approaches discussed in this work could be used to automatically label training data for larger deep learning workflows (e.g., U-Net~\cite{ronneberger2015u}). However, even that may not be sufficient, since many modern deep learning architectures require thousands or even millions of training samples. For (ii), large images are typically greyscaled, reshaped, or downsampled to fit within system requirements and hardware limitations~\cite{xu2021automated}. However, this strategy can alter or destroy fine-grained details that may be crucial for accurate prediction. Thus, the methods demonstrated here utilized raw RGB images at full resolution, but rather than using entire images as inputs, smaller tiles were sampled from within images to iteratively segment leaf boundaries and visible venation. For example, see Supplementary Figure~\ref{fig:overlays}, which illustrates how these tile-based approaches are generally insensitive to changes in object size and characteristics. Leaf segmentation was formulated as a tracing task, in which a CNN inputs image tiles centered along the boundary of a leaf, and outputs trace predictions that are used to sample new tiles in the next iteration. This methodology performed well in this application since there is only one leaf per image and each leaf is fully contained within the image. Note that images with multiple objects have been previously considered~\cite{rutter_tracing, rutter_combo}, but additional modifications to the algorithm (e.g., recurrence) may be needed to account for images with cluttered objects and overlapping boundaries. The resulting leaf segmentations were highly accurate and captured the morphology and serration of each leaf. Since each leaf was scanned against a white background, automatic thresholding could be used to obtain a rough segmentation of the leaf. However, this approach captures background artifacts, includes the petiole, and falsely detects shaded regions near the leaf boundaries and petiole. These artifacts could be addressed for individual samples by post-processing the binary segmentation maps (e.g., binary erosion and dilation), but defining such rules that generalize to all cases across the population is increasingly difficult. In contrast, a strength of the tracing approach is that the tracing CNN can \emph{learn} an invariance to image artifacts in a data-driven way without human supervision, and still leverage the auto-threshold segmentations for trace initialization, which allows the method to be fully automated. Finally, the tracing methodology produced accurate leaf segmentations approximately three orders of magnitude faster than human annotation ($\sim1$ second compared to 15-30 minutes per image), allowing the method to scale up to population-level data sets, especially for computing systems that support parallelization (i.e., tracing more than one image at a time). 
Due to the complexity of leaf venation, vein segmentation was formulated as a region growing task where a CNN predicts whether to include pixels in a segmentation by inputting image tiles centered at those pixels. Unlike the tracing framework (which segments objects by tracing boundaries in 1D), the region growing approach grows the segmentation directly by continuously adding pixels to the region of interest in 2D (see~\cite{januszewski_floodfilling} for 3D). A strength of this approach is that each pixel is considered individually. However, unlike previous methods which conduct an exhaustive prediction over all pixels in an image (which would equate to 8,950,500 pixels per image in this work)~\cite{ciresan2012deep}, pixels were only considered if they exceeded a probability threshold, which allowed the model to focus only on pixels of interest. This distinction dramatically increased segmentation speed ($\sim60$ seconds compared to 4--8 hours per image with manual segmentation). In addition, the region growing framework can use a random sample of pixels from anywhere in the image to initialize the iteration. Note that in this work, seed pixels were drawn using the leaf body segmentations from the tracer to reduce the number of redundant white background pixels. Finally, and perhaps most importantly, the model produced accurate segmentations (exceeding even human-level accuracy) at population scale using just six images for training and two images for validation. This is in contrast to previous approaches that used CNNs for vein segmentation and required more than 700 ground truth vein segmentations~\cite{xu2021automated}. In particular, this result highlights the importance of iterative data set refinement, in which images were specifically added to the training set in order to build an invariance to observed artifacts in the population (e.g., leaf folds that were falsely classified as veins), which has also been noted in similar applications for root segmentation~\cite{smith2022rootpainter}. Leaf and vein segmentation were specified as independent tasks, and used separate computational strategies to achieve each goal. Since both approaches were designed for segmentation, a natural question arises concerning whether two distinct approaches are necessary, or whether one would suffice for both tasks. In principle, the region growing CNN could be used for leaf segmentation, however, this would be computationally inefficient due to the large number of leaf pixels in the high-resolution scans. Compared to the tracing CNN (which focuses solely on boundary pixels), the region growing CNN would waste a large amount of computational resources on redundant ``interior'' pixels, which vastly outnumber boundary pixels. In the reverse case, the tracing CNN could theoretically be applied to vein segmentation. However, since the vein architecture, unlike the leaf boundary, is not a single closed curve, the tracing CNN would need to be re-initialized thousands of times inside the leaf to account for all of the ``holes'' in the vein network. Further, overlapping tracer boundaries near one-pixel-thick veins would create additional challenges in post-processing the thousands of traced contours. Thus, each few-shot learning method was suited for the particular segmentation task it was assigned. A utility of the few-shot learning methods discussed here is that they remove background artifacts and highlight salient information in images (e.g., the leaf body or vein architecture).
Using traditional computer vision applications (e.g., Fiji and RVE) to extract digital traits from such segmentations becomes trivial compared to using the raw image data. These advances not only reduce human effort, but also expand the number and variety of traits one can extract by including traits that can only be estimated digitally (e.g., vein density, leaf solidity, etc.). Further, custom algorithms can be developed that extract cryptic phenotypes related to leaf morphology and topological information from the vein networks which may yield new biological insights into the role leaves play in plant physiology -- this analysis is left for future work. These methods were also used to estimate measurable traits like petiole width and length, which were used as a source of validation in this work. This study demonstrated that manually estimated petiole length and width strongly correlated with their corresponding digital measurements, suggesting that the segmentation quality and feature extraction methodology accurately predicts biologically relevant features from the raw images. A strategy to consider these digital traits from a biological perspective was to estimate their broad-sense heritability (i.e., the amount of variation in the trait that is controlled by genetics), denoted by $H^2$. In particular, the $H^2$ value for vein density was 0.65, suggesting that the trait is under significant genetic control. As a proof of concept for downstream application of the few-shot learning methods, a GWAS analysis was performed for the vein density trait at population scale. The top GWAS hit was Potri.017G077200, which is highly expressed in apical bud, dormant bud, and stem~\cite{sreedasyam2022jgi}. This gene is also expressed in immature/young leaves, suggesting that it plays a role in such tissues~\cite{sreedasyam2022jgi}. Comparing to \textit{A. thaliana}, \textit{CLP-SIMILAR PROTEIN 3} (\textit{CLPS3}, AT3G04680) is the closest ortholog of Potri.017G077200, and is related to the human Cleavage factor polyribonucleotide kinase subunit 1 (hCLP1), which forms part of the complex responsible for polyadenylation of the 3' (three-prime) end of messenger RNA~\cite{hCLP_de2000human}. CLPS3 also interacts with components of the polyadenylation complex in plants and it is expressed throughout whole plant development, including leaves and vasculature~\cite{CLP3_1}. Overexpression of \textit{CLPS3} causes aberrant leaf phenotypes, abnormal phyllotaxis, and early flowering~\cite{CLP3_1}. Further, \textit{CLPS3} increases the expression of \textit{CUP-SHAPED COTYLEDON 1} (\textit{CUC1}), an NAC transcription factor, which, together with \textit{CUC2} and \textit{CUC3}, has been found to participate in meristem formation, organ boundary separation, and leaf shape~\cite{cuc_postemb_hibara2006arabidopsis, cuc1_meristem_spinelli2011mechanistic, cuc2_leaf_nikovics2006balance}. Since Potri.017G077200 is a \textit{CLPS3} ortholog, it could play similar roles in leaf development of \textit{P. trichocarpa}, making it a strong target for genomic selection studies. Potri.006G227300 is expressed in most plant tissues, but it is highly expressed in apical bud in spring, swelling bud, late dormant bud, as well as young and immature leaves~\cite{sreedasyam2022jgi}. Its \textit{Arabidopsis} ortholog, \textit{RHOMBOID-LIKE PROTEIN 12} (\textit{RBL12}, AT1G18600), follows a similar expression pattern, being enriched in floral buds~\cite{klepikova2016high}.
Very little is known about \textit{RBL12}, but it is predicted to be an active transmembrane protease located in the mitochondria~\cite{RBL12_2_lemberg2007functional}. Further, \textit{RBL12} substrates in \textit{A. thaliana} have not been identified; therefore, its role is yet to be determined~\cite{RBL12_1_kmiec2008plant}. Other genes that were associated with vein density by GWAS may play an indirect role in leaf development. For instance, the Potri.017G077300 ortholog, \textit{PBS1-LIKE 35} (\textit{PBL35}, AT3G01300), participates in shoot apical meristem homeostasis and plant immunity, while the Potri.006G090600 ortholog, \textit{ALDO-KETO REDUCTASE FAMILY 4 MEMBER C11} (\textit{AKR4C11}, AT3G53880), participates in abiotic stress tolerance through detoxification of reactive carbonyls~\cite{ark_rc_simpson2009characterization, ark_review_sengupta2015plant, pbl35_immunity_luo2020tyrosine, pbl35_sam_wang2022receptor}. Thus, Potri.017G077300 and Potri.006G090600 may play a role in leaf and vein development through such processes. Further, the Potri.006G227400 ortholog, \textit{CELL WALL INVERTASE 5} (\textit{CWINV5}, AT3G13784), is a cell wall invertase and members of this family have been found to affect plant development by making hexoses available for transport~\cite{cwi_1_sherson2003roles, klepikova2016high}. \subsection{Conclusions} Few-shot segmentation methods were extended to image-based plant phenotyping, whereby researchers can maximize predictive accuracy while minimizing the amount of training data. These methods were demonstrated for leaf scans of \textit{P. trichocarpa}, where 50 annotated images were used to train an automated tracing algorithm for whole-leaf segmentation and eight images were used to train a region growing algorithm to segment the visible vein architecture. The segmentations were used to extract biologically relevant morphological and topological leaf traits related to the leaf body, venation, and petiole, which were validated with real-world manual measurements. Broad-sense clonal heritability estimates for each trait were measured, and a population-scale genomic analysis was conducted for vein density, which combined information from both leaf and vein segmentations. The GWAS analysis revealed a set of previously unconsidered SNPs and associated genes with mechanistic associations to multiple physiological processes relating to leaf development and function. Future work will include a deep dive into the relevant biology surrounding the features discussed in this work, and the extraction of additional cryptic phenotypes relating to leaf morphology and vein topology. In particular, this work will leverage systems biology, network analysis, and climatic data to uncover the mechanistic associations within and across genotypes as they relate to sustainable bioenergy applications (e.g., biomass yield and composition). In conclusion, this study demonstrated a complete workflow from image acquisition to phenotype extraction. The utility of these methods for biological use cases was further demonstrated by performing GWAS which identified genomic regions and associated genes potentially controlling important plant phenotypes, such as vein density. This enhances current understanding of the genetic architecture of complex traits and may facilitate future quantitative genetics and genotype $\times$ environment interaction studies.
This further allows researchers to assess how vein traits relate to other physiological processes, such as stomatal conductance, gas exchange, and overall plant productivity, with important implications for developing \textit{Populus} as a bioenergy crop. Genes detected from the quantitative genetic analysis can be used in future biotechnology experiments for optimizing traits targeted for climate resilience, biomass production, and accelerated domestication for agriculture and biofuel production. \section*{Acknowledgments} \subsection*{Author Contributions} \noindent J. Lagergren: Conceptualization, funding acquisition, data collection, few-shot learning, feature extraction, writing. \\ \noindent M. Pavicic: Conceptualization, data collection, feature extraction, writing. \\ \noindent H. Chhetri: Conceptualization, data collection, genomic analysis, writing. \\ \noindent L. York: Conceptualization, feature extraction, writing. \\ \noindent D. Hyatt: Genomic analysis, writing. \\ \noindent D. Kainer: Genomic analysis, writing. \\ \noindent E. Rutter: Few-shot learning, writing. \\ \noindent K. Flores: Few-shot learning, writing. \\ \noindent G. Taylor: Field site support, writing. \\ \noindent D. Jacobson: Conceptualization, funding acquisition, supervision, writing. \\ \noindent J. Streich: Conceptualization, funding acquisition, data collection, writing. \subsection*{Special Thanks} The authors would like to acknowledge members of the Taylor Lab (University of California, Davis): Jack Bailey-Bale, Marie Klein, Zi (Janna) Meng, and Aiwei Zhu, for their support during data collection. \subsection*{Funding} This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. This work was funded by the Center for Bioenergy Innovation, a DOE Bioenergy Research Center supported by the Office of Biological and Environmental Research in the DOE Office of Science, and the Artificial Intelligence (AI) Initiative, an ORNL Laboratory Directed Research and Development program. The manuscript was authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the US Department of Energy. The US Government retains and the publisher, by accepting the article for publication, acknowledges that the US Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (\url{http://energy.gov/downloads/doe-public-access-plan}). \subsection*{Conflicts of Interest} The authors declare that they have no competing interests. \subsection*{Data Availability} In addition to releasing all of the few-shot learning code on a public GitHub repository (\url{https://github.com/jlager/few-shot-leaf-segmentation}), we are also releasing all of the images, manual segmentations, model predictions, 68 extracted leaf phenotypes, and a new set of SNPs called against the v4 \textit{P. trichocarpa} genome for 1,419 genotypes on the Oak Ridge National Laboratory Constellation Portal (a public DOI data server) at \url{https://doi.org/10.13139/ORNLNCCS/1908723}. This is, to our knowledge, one of the largest releases of plant genotype and phenotype data in a single manuscript.
We hope that this work becomes a valuable community resource and helps reduce barriers commonly associated with high-throughput image-based plant phenotyping and machine learning. \newpage \section*{Supplementary Materials} \beginsupplement \noindent \textbf{Video S1:} \textbf{Leaf segmentation video.} Animation of the leaf tracing algorithm, in which a CNN iteratively traces the boundary of a leaf. \underline{Left}: the raw leaf scan with an overlay of the previously traced path and a bounding box indicating the current position of the CNN model. \underline{Top right}: the image tile, with an overlay of the previously traced path, that is input to the CNN. \underline{Bottom right}: the predicted pixels along the contour of the leaf that are used to update the position in the next iteration. The iteration proceeds until the CNN predictions reach the start of the trace. Note that in practice the iteration completes in $\sim$1 second, but is slowed down for better visualization. \\ \noindent \textbf{Video S2:} \textbf{Vein segmentation video.} Animation of the vein growing algorithm, in which a CNN iteratively adds pixels to a growing segmentation of the visible vein architecture. \underline{Left}: the original leaf scan with an overlay of the pixels being considered by the CNN in yellow and classified vein pixels in red. \underline{Top right}: a zoomed in view of the top of the leaf and overlay. \underline{Bottom right}: a zoomed in view of the bottom of the leaf and overlay. The iteration proceeds, continuously adding new pixels to the segmentation, until no new pixels remain in the sample set. Note that in practice the iteration completes in $\sim$60 seconds, but is sped up for better visualization. \newpage \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{figures/overlays.png} \caption{\textbf{Example leaf and vein segmentations.} Results of the leaf and vein segmentation methods on example leaf images outside the training set. Traced leaf contours are shown in blue and vein segmentations in red. \underline{Top}: segmentation overlays for leaves varying in size, going from smallest (left) to largest (right). \underline{Bottom}: segmentation overlays for leaves of approximately equal area, but varying in vein density, going from sparse (left) to dense (right) venation.} \label{fig:overlays} \end{figure} \newpage \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{figures/accuracy.png} \caption{\textbf{Vein segmentation accuracy.} Results of the vein segmentation method on a leaf from the validation set. The top row shows the full leaf and the bottom row gives a zoomed-in view. \underline{Left}: example leaf scan chosen from the validation set. \underline{Center left}: hand-annotated vein segmentation overlaid in red. \underline{Center right}: predicted vein segmentation overlaid in red. \underline{Right}: a comparison between the ground truth and predicted segmentations, in which red pixels indicate true positives, green pixels indicate false positives, and blue pixels indicate false negatives.
Note that the zoomed-in tile reveals veins identified by the region growing method that are incorrectly reported as false positives (see veins with only green pixels) due to errors in the ground truth segmentation.} \label{fig:accuracy} \end{figure} \newpage \begin{longtable}[h!]{lcccp{7.5cm}} \caption{\textbf{Leaf features.} Includes names, units, broad-sense clonal heritability estimates, tools, and descriptions of the 23 traits related to leaf morphology and color. Abbreviations: avg: average, max: maximum, min: minimum.} \label{tab:leaftraits} \\ \hline \textbf{Feature} & \textbf{Units} & \textbf{$H^2$} & \textbf{Tool} & \textbf{Description} \\ \hline \endfirsthead \textbf{Feature} & \textbf{Units} & \textbf{$H^2$} & \textbf{Tool} & \textbf{Description} \\ \hline \endhead Area & cm$^2$ & $0.30$ & Fiji & Total pixel count of leaf segmentation \\ Aspect ratio & - & $0.58$ & Fiji & Ellipse major axis / ellipse minor axis \\ Bottom blue & - & $0.57$ & Fiji & Avg. blue value of leaf abaxial side \\ Bottom brightness & - & $0.42$ & Fiji & Avg. brightness value of leaf abaxial side \\ Bottom green & - & $0.41$ & Fiji & Avg. green value of leaf abaxial side \\ Bottom hue & - & $0.39$ & Fiji & Avg. hue value of leaf abaxial side \\ Bottom red & - & $0.45$ & Fiji & Avg. red value of leaf abaxial side \\ Bottom saturation & - & $0.27$ & Fiji & Avg. saturation value of leaf abaxial side \\ Circularity & - & $0.23$ & Fiji & $4\pi A/P^2$ where $A$: area and $P$: perimeter \\ Convex area & mm$^2$ & $0.29$ & RVE & Total pixel count of convex hull \\ Major axis length & cm & $0.21$ & Fiji & Major axis length of best-fit ellipse \\ Minor axis length & cm & $0.44$ & Fiji & Minor axis length of best-fit ellipse \\ Max. Feret & cm & $0.23$ & Fiji & Max. distance between any two points in the leaf segmentation \\ Min. Feret & cm & $0.42$ & Fiji & Min. distance between two parallel lines tangent to Max. Feret line \\ Perimeter & cm & $0.25$ & Fiji & Sum of Euclidean distances between contour pixels in the leaf segmentation \\ Roundness & - & $0.56$ & Fiji & $4A/(\pi M^2)$ where $A$: area, $M$: major axis \\ Solidity & - & $0.09$ & Fiji & $A/C$ where $A$: area and $C$: convex area \\ Top blue & - & $0.26$ & Fiji & Avg. blue value of leaf adaxial side \\ Top brightness & - & $0.26$ & Fiji & Avg. brightness value of leaf adaxial side \\ Top green & - & $0.23$ & Fiji & Avg. green value of leaf adaxial side \\ Top hue & - & $0.29$ & Fiji & Avg. hue value of leaf adaxial side \\ Top red & - & $0.24$ & Fiji & Avg. red value of leaf adaxial side \\ Top saturation & - & $0.21$ & Fiji & Avg. saturation value of leaf adaxial side \\ \hline \end{longtable} \newpage \begin{longtable}[h!]{lcccp{7.5cm}} \caption{\textbf{Vein features.} Includes names, units, broad-sense clonal heritability estimates, tools, and descriptions of the 27 traits related to vein morphology.
Abbreviations: avg: average, DR: diameter range, max: maximum, min: minimum, RVE: RhizoVision Explorer.} \label{tab:veintraits} \\ \hline \textbf{Feature} & \textbf{Units} & \textbf{$H^2$} & \textbf{Tool} & \textbf{Description} \\ \hline \endfirsthead \textbf{Feature} & \textbf{Units} & \textbf{$H^2$} & \textbf{Tool} & \textbf{Description} \\ \hline \endhead Area & mm$^2$ & $0.43$ & RVE & Total pixel count of vein segmentation \\ Area DR 1 & mm$^2$ & $0.55$ & RVE & Projected area of veins with DR 0 - 0.25 mm \\ Area DR 2 & mm$^2$ & $0.38$ & RVE & Projected area of veins with DR 0.25 - 0.8 mm \\ Area DR 3 & mm$^2$ & $0.26$ & RVE & Projected area of veins with DR above 0.8 mm \\ Avg. diameter & mm & $0.34$ & RVE & Avg. skeletal pixel radius, doubled for diameter \\ Convex area & mm$^2$ & $0.29$ & RVE & Total pixel count of convex hull \\ Density & - & $0.65$ & Custom & Ratio of vein area to leaf area \\ Length-to-area ratio & - & $0.62$ & RVE & $V/A$ where $V$: total length, $A$: leaf area \\ Max. depth & mm & $0.24$ & RVE & Max. vertical distance in vein segmentation \\ Max. diameter & mm & $0.32$ & RVE & Max. skeletal pixel radius, doubled for diameter \\ Max. width & mm & $0.41$ & RVE & Max. horizontal distance in vein segmentation \\ Network solidity & - & $0.64$ & RVE & Ratio of network area to convex area \\ Perimeter & mm & $0.52$ & RVE & Sum of Euclidean distances between contour pixels in the vein segmentation \\ Surface area & mm$^2$ & $0.46$ & RVE & Length multiplied by cross-section circumference summed over skeletal pixels \\ Surface area DR 1 & mm$^2$ & $0.55$ & RVE & Surface area of veins with DR 0 - 0.25 mm \\ Surface area DR 2 & mm$^2$ & $0.38$ & RVE & Surface area of veins with DR 0.25 - 0.8 mm \\ Surface area DR 3 & mm$^2$ & $0.26$ & RVE & Surface area of veins with DR above 0.8 mm \\ Third order fraction & - & $0.29$ & RVE & Ratio of total length of DR 3 to total length \\ Total length & mm & $0.53$ & RVE & Sum of Euclidean distances between connected skeletal pixels \\ Total length DR 1 & mm & $0.56$ & RVE & Total length of veins with DR 0 - 0.25 mm \\ Total length DR 2 & mm & $0.40$ & RVE & Total length of veins with DR 0.25 - 0.8 mm \\ Total length DR 3 & mm & $0.27$ & RVE & Total length of veins with DR above 0.8 mm \\ Volume & mm$^3$ & $0.29$ & RVE & Length multiplied by cross-section area summed over skeletal pixels \\ Volume DR 1 & mm$^3$ & $0.55$ & RVE & Volume of veins with DR 0 - 0.25 mm \\ Volume DR 2 & mm$^3$ & $0.37$ & RVE & Volume of veins with DR 0.25 - 0.8 mm \\ Volume DR 3 & mm$^3$ & $0.27$ & RVE & Volume of veins with DR above 0.8 mm \\ Width-to-depth ratio & - & $0.55$ & RVE & Ratio of max. width to max. depth \\ \hline \end{longtable} \newpage \begin{longtable}[h!]{lcccp{7.5cm}} \caption{\textbf{Petiole features.} Includes names, units, broad-sense clonal heritability estimates, extraction tools, and descriptions of the 18 traits related to petiole morphology and color. Abbreviations: avg: average, max: maximum, min: minimum, RVE: RhizoVision Explorer. Note that Max.
Feret is equivalent to the petiole diameter used for validation against real-world measurements in this work.} \label{tab:petioletraits} \\ \hline \textbf{Feature} & \textbf{Units} & \textbf{$H^2$} & \textbf{Tool} & \textbf{Description} \\ \hline \endfirsthead \textbf{Feature} & \textbf{Units} & \textbf{$H^2$} & \textbf{Tool} & \textbf{Description} \\ \hline \endhead Area & cm$^2$ & $0.49$ & Fiji & Total pixel count of petiole segmentation \\ Aspect ratio & - & $0.41$ & Fiji & Ellipse major axis / ellipse minor axis \\ Bottom blue & - & $0.29$ & Fiji & Avg. blue value of petiole abaxial side \\ Bottom brightness & - & $0.25$ & Fiji & Avg. brightness value of petiole abaxial side \\ Bottom green & - & $0.27$ & Fiji & Avg. green value of petiole abaxial side \\ Bottom hue & - & $0.15$ & Fiji & Avg. hue value of petiole abaxial side \\ Bottom red & - & $0.22$ & Fiji & Avg. red value of petiole abaxial side \\ Bottom saturation & - & $0.33$ & Fiji & Avg. saturation value of petiole abaxial side \\ Circularity & - & $0.45$ & Fiji & $4\pi A/P^2$ where $A$: area and $P$: perimeter \\ Major axis length & cm & $0.52$ & Fiji & Major axis length of the best-fit ellipse \\ Minor axis length & cm & $0.20$ & Fiji & Minor axis length of the best-fit ellipse \\ Max. Feret & cm & $0.55$ & Fiji & Max. distance between any two points in the petiole segmentation \\ Min. Feret & cm & $0.09$ & Fiji & Min. distance between two parallel lines tangent to Max. Feret line \\ Perimeter & cm & $0.55$ & Fiji & Sum of Euclidean distances between contour pixels in the petiole segmentation \\ Roundness & - & $0.39$ & Fiji & $4A/(\pi M^2)$ where $A$: area, $M$: major axis \\ Solidity & - & $0.10$ & Fiji & $A/C$ where $A$: area and $C$: convex area \\ Volume & mm$^3$ & $0.43$ & RVE & Length multiplied by cross-section area estimated from petiole diameter \\ Width & cm & $0.25$ & Custom & Avg. diameter of the center 20\% of the petiole \\ \hline \end{longtable} \newpage \printbibliography \end{document} \section{Introduction} \label{sec:introduction} Image-based plant phenotyping is a method by which scientists use image data to characterize and categorize plants within and across species. This process typically involves the use of tools, instrumentation, and domain expertise to (i) measure information from individual samples or groups of samples in the greenhouse, field, and/or natural settings, (ii) operate across scales ranging from cell microscopy to satellite imagery, and (iii) extract complex morphological and topological features that would otherwise be impossible to measure by hand. One of the main challenges of image-based phenotyping is the separation of the relevant biological structures (foreground) from the background. In some cases, imaging methods can be modified to highlight these objects, such as using back lights or relying on their fluorescence; however, in many situations the contrast between the relevant object and the background is low. When contrast is high, simple greyscale or color-based thresholding can be used, but in more complex color imagery, plant phenomics has focused on machine learning approaches. Deep learning has revolutionized computer vision as a powerful and efficient way to extract features from image-based data~\cite{krizhevsky2017imagenet, he2016deep, ronneberger2015u}.
An important strength of this approach is that deep learning models can learn an invariance to heterogeneous background effects, which allows them to generalize to new samples outside of the training set. However, such approaches can be laborious and expensive to adopt because users must generally annotate hundreds or thousands of images to provide sufficient training data. In plant biology, for example, reliably associating plant traits with genes at population scale requires large numbers of observations spanning hundreds or thousands of genotypes. Such associations provide deeper understanding of the genetic architectures and underlying mechanisms that govern the complex processes controlling the growth, acclimation, response, and composition of plants, with important implications for sustainable agriculture and bioenergy~\cite{taylor2019sustainable, grattapaglia2018quantitative}. Thus, there exists a need to develop methods for fast and accurate image-based plant phenotyping that alleviate the data annotation bottleneck. In contrast to traditional deep learning approaches, which can require thousands or millions of training samples to reach sufficient prediction accuracy~\cite{krizhevsky2017imagenet, he2016deep}, few-shot learning is an emerging subset of machine/deep learning that attempts to maximize predictive accuracy while using only a small number of labeled samples for training. Multiple approaches exist to solve this problem, including data augmentation, metric learning, external memory, and parameter optimization~\cite{yang_fewshot}. This work utilizes a combination of data augmentation (i.e., applying random spatial and color augmentations to images during training) and iterative algorithms, which have been previously demonstrated for biomedical image analysis, e.g., semantic segmentation of cells and retinal vasculature~\cite{rutter_tracing, rutter_combo, januszewski_floodfilling, lagergren_growing}. The goal of this study is to extend these methods to image-based plant phenotyping by leveraging convolutional neural networks (CNNs) to segment the body and visible vein architecture of poplar (\textit{Populus trichocarpa}) leaves from high-resolution scans obtained in the field. In particular, few-shot learning is utilized in this work because it divides a small number of large images into a large number of small image tiles. In this way, the complex task of whole-image segmentation is broken down into smaller, easier decision rules, which enables accurate segmentation using very few labeled images for training. \textit{P. trichocarpa} (also called black cottonwood, western balsam-poplar, or California poplar) is a model system for studying the genetic architecture of complex traits and climate adaptation in woody plants. Spanning from central California, USA, to northern British Columbia, Canada, it harbors tremendous geographic, climatic, phenotypic, and genetic diversity. Further, \textit{P. trichocarpa} has a fully sequenced genome, genome annotation, and abundant transcriptomic, resequencing, and phenotypic data. Importantly, rapid biomass growth, clonal propagation, and the ability to grow in marginal lands with low agricultural input make it an ideal crop for sustainable bioenergy applications~\cite{tuskan2006, garcia2006protease, geraldes2011snp, zhang2018genome, slavov2012genome, evans2014population, chhetri2019multitrait, chhetri2020genome}. As a result, research and commercial groups have invested heavily in the development of \textit{P.
trichocarpa} as a high-impact species for forest products and biofuel production~\cite{jansson2007populus, rubin2008genomics, evans2014population, mckown2014genome}. In this context, leaves play a key role in biomass production since they are the primary organs responsible for sunlight absorption and carbon fixation, the primary food source of vascular plant systems. Further, vein architecture supports the mechanical structure of the leaf and governs the distribution of water and other nutrients, which has important implications for the physiology, biomechanics, and structure of a plant~\cite{sack2013venation}. Thus, capturing accurate leaf traits and relating them to the genetic components that control them may provide insights toward improved tree biomass and composition. In plant phenotyping, segmentation of individual leaves and their venation has received sparse attention. In general, existing approaches (i) use experimental methods to chemically clear the leaf lamina and stain the veins to highlight the venation against the background~\cite{buhler2015phenovein, xu2021automated}, (ii) apply image pre-processing by greyscaling, aggregating specific color channels, or spatial rescaling~\cite{katyal2012leaf, larese2012legume, buhler2015phenovein, salima2015leaf, xu2021automated}, (iii) rely on global filters and morphological operations (e.g., Odd Gabor filters, Hessian matrices, vesselness filters, and region merging) to obtain binary segmentations~\cite{katyal2012leaf, larese2012legume, buhler2015phenovein, salima2015leaf, zhu2020fast}, (iv) use ensembles of scales and models to make aggregate predictions~\cite{zhu2020fast, xu2021automated}, and (v) require hundreds of manually-annotated training samples to produce accurate segmentation models~\cite{xu2021automated}. However, these commonly encountered steps can bottleneck the scalability and accuracy of image-based plant phenotyping at population scale. For example, approach (i) adds experimental time, effort, materials, expenses, and hazards to data acquisition compared to capturing raw images alone, (ii) destroys fine-grained image details across spatial and color dimensions, (iii) may be overly simplistic and generate substantial segmentation post-processing effort, (iv) relies on complex workflows which may be difficult to automate at scale, and (v) can be infeasible for smaller research groups with limited time and budgets. These challenges may help explain why leaf and vein segmentation has not received as much attention compared to crop- or field-level phenotyping for plant stress, shoot morphology, and plant/organ counting~\cite{jiang2020convolutional}. This work presents two few-shot learning methods based on CNNs to segment the body and visible vein architecture of \textit{P. trichocarpa} leaves. Leaf segmentation is formulated as a tracing task, in which a CNN iteratively traces the boundary of a leaf to produce a single contiguous leaf segmentation. Previous studies have shown that alternative CNN-based segmentation methods (e.g., the fully-convolutional neural network U-Net~\cite{ronneberger2015u}) can result in ``patchy'' segmentations that must be addressed with complex post-processing methods that are difficult to generalize~\cite{rutter_tracing}. In contrast, boundary tracing eliminates this patchiness problem by only segmenting one contiguous region, thereby ensuring accurate downstream extraction of morphological features.
Vein segmentation, in turn, is formulated as a region growing task, in which a CNN iteratively adds neighboring pixels to a growing region of interest corresponding to the visible vein architecture. Similar to the tracing approach, the vein segmentation ensures biologically-realistic morphological features by including pixels in the segmentation only if a neighboring pixel was previously classified as vein. Each method is fully automated (i.e., requires no human supervision or initialization) and segments images orders of magnitude faster than manual annotation. The current work is designed to provide the plant phenotyping community with (i) methods for fast and accurate image-based feature extraction with minimal training data and (ii) a new population-scale data set for domain scientists and machine learning researchers. In particular, the few-shot learning methods developed here are applied to raw RGB images with no experimental/image pre-processing, use individual CNN models that learn the complex relationships between pixels for accurate leaf and vein segmentation, and require very few training samples to generalize and make accurate predictions at population scale. The segmentations are used to extract biologically realistic features that are validated using real-world physical measurements and applied downstream using broad-sense clonal heritability estimates and a genome-wide association study (GWAS). \section{Materials and Methods} \label{sec:methods} Few-shot learning is used to segment the body and visible vein architecture of \textit{P. trichocarpa} leaves from high-resolution scans. The resulting segmentations are combined with open-source tools for image processing and genomic analysis to expand the application of these methods to a wider scientific audience. All deep learning methods are implemented in Python (version 3.7.8) using the PyTorch deep learning library (version 1.11.0)~\cite{paszke2019pytorch} and are made available at \url{https://github.com/jlager/few-shot-leaf-segmentation}. Feature extraction is completed using Fiji (version 2.9.0)~\cite{schindelin2012fiji} and RhizoVision Explorer (RVE, version 2.0.3)~\cite{seethepalli2020rhizovision, seethepalli2021rhizovision}. Genomic analysis is conducted in R (version 4.2.0) using the GAPIT3 software package (version 3)~\cite{wang2021gapit}. All of the data, including images, manual segmentations, model predictions, extracted features, and the underlying genomes are available at \url{https://doi.org/10.13139/ORNLNCCS/1908723}. \subsection{Data collection} \label{sec:data} The leaf scans considered in this work were collected during a field campaign in August 2021 from the 10-acre poplar plantation at the University of California, Davis (UC Davis), which maintains a common garden of poplar trees that can be grown on low-quality, marginal land~\cite{baileybale2021plantation, taylor2019sustainable}. The plantation follows a randomized complete block design composed of three blocks. Each block is partitioned into rows and positions that uniquely identify the corresponding genotypes and contains approximately 1,500 \textit{P. trichocarpa} genotypes. For practical reasons, leaf samples were collected from one entire block (1,322 viable samples) and partially from a second block (131 samples), totaling 1,453 trees. Leaves were sampled from a branch at approximately breast height (i.e., $\sim$1.37 meters) from the south-facing side of each tree.
Leaves were chosen by selecting the first fully mature leaf counting from the top of each branch. Each leaf was also paired with a barcode label that encoded the treatment, block, row, and position of the tree, which uniquely identified the corresponding genotype and allowed the user to record the sample ID during data capture. This helped expedite the phenotyping process and reduce human error. Selected leaves were scanned in the field as they were sampled from each tree using a USB-powered Epson Perfection model V39 scanner~\cite{epsonwebsite}. The top and bottom of each leaf were scanned at a resolution of 300 dots-per-inch (DPI). To account for heterogeneous leaf shapes (e.g., leaves with non-trivial 3D characteristics like ``waviness''), a weight was placed on the scanner lid to compress each leaf against the glass of the scanner in order to reduce image artifacts like blurring. Additionally, between rows of trees (there are approximately 30 trees per row), the scanner glass and background were cleaned to reduce the buildup of dust and other debris. During data capture, the scanner suffered a hardware failure in which one of its pixels began to malfunction, causing a vertical white line to gradually appear near the center of each subsequent scan. This artifact affected approximately 100 leaf scans. To mitigate the malfunction, leaves were moved to the edge of the scanner away from the malfunctioning pixel, affecting 62 leaf scans. A new scanner was acquired and used for the remainder of the field campaign (2,634 leaf scans). Despite the hardware failure, these data acquisition steps resulted in 2,906 RGB leaf scans (i.e., top and bottom of 1,453 samples), each with dimension $3510\times2550$ pixels. In addition to image-based measurements, petiole length and diameter were measured manually for each leaf. Using a similar procedure to leaf imaging, barcode scanners were used to record the sample ID, followed by length/diameter measurements using USB-powered SPI 17-600-8 electronic calipers~\cite{spiwebsite}. The manual measurements for petiole length and width are used to validate image-based measurements. Obtaining accurate high-quality ground truth data is important for deep learning applications in general, but it is crucial for few-shot learning, since a model must learn features from a small number of training samples that generalize well to the broader population. To this end, training data was generated for leaf body segmentation using the top and bottom scans of 25 leaves (50 images in total), which were randomly selected and manually traced. Manual segmentation was completed using the open-source graphics editor, GNU Image Manipulation Program (GIMP)~\cite{gimp}, taking between 15 and 30 minutes per image, depending on the size and serration of the leaf. Similarly for vein segmentation, GIMP was used to manually draw all visible leaf venation for eight leaf-bottom scans, taking between four and eight hours per image, depending on the vein density. Note that only leaf-bottom venation is considered in this work. Leaf-top venation will be considered in future work. Due to the large amount of manual effort required for vein segmentation, the training data set was constructed using \emph{iterative data set refinement}, in which images were individually added to the training set based on manual inspection of model performance across the set of all images.
For example, compressing samples against the scanner glass caused some leaves to fold on themselves, which produced dark lines that were falsely identified as veins. Thus, an image with multiple examples of such folds was manually segmented and added to the training set so that the model learned an invariance to such artifacts. This process was repeated similarly for other leaf characteristics (e.g., dead, diseased, and nutrient-deficient leaf tissue), including a scan exhibiting the hardware failure discussed above, until the model converged to acceptable performance across the population. This strategy resulted in a total of eight images (six for training, two for validation) mentioned previously. Note that in practice, the number of images may vary depending on the application and image quality, but it is important (particularly for few-shot learning) that the training data set is fine-tuned to the point that the model is able to generalize. \subsection{Leaf segmentation} \label{sec:tracing} Segmentation of the leaf body is formulated as an object tracing task based on~\cite{rutter_tracing, rutter_combo}, in which a CNN is used to iteratively trace along the contour of a leaf. These methods have been shown to reach state-of-the-art accuracy in biomedical image segmentation using a fraction of the training data required by other approaches~\cite{rutter_combo}. In this framework, a CNN inputs a small image tile centered somewhere on the edge of an object and outputs a predicted trace (i.e., set of pixel displacements) along the object boundary from the center to the edge of the tile. The iteration proceeds by generating new image tiles along the predicted contours, continuing the trace until reaching the starting location, thereby closing the loop and finishing the segmentation. An important benefit of this approach is that it breaks the complex task of whole-leaf segmentation into multiple smaller, easier decision rules, and requires only a small number of images to train an accurate model. See Figure~\ref{fig:tracer} for a diagram of the leaf tracing algorithm and Supplementary Video S1 for a video of the iteration. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/tracer.pdf} \caption{\textbf{Leaf tracing algorithm.} An image tile and a small segment of the previously traced path are input to a CNN which predicts the next steps of the trace. The predictions are added to the leaf contour and used to generate an image tile at the new location. This iteration continues until the trace reaches the starting location of the contour. \underline{Left}: RGB leaf scan and input tile (front) with previously traced pixels (back). \underline{Center}: the leaf tracing CNN which transforms the $256\times256$ input tile into a $2\times128$ set of pixel displacements for trace prediction. \underline{Right}: predicted pixel displacements that are used to update the trace and generate image tiles in the next iteration.} \label{fig:tracer} \end{figure} The leaf tracing CNN inputs $256\times256\times4$ image tiles, which include three color channels (RGB) and an overlay of the previously traced path as an additional channel. The tile size is chosen large enough to provide the model with sufficient context to trace through areas where the leaf contour may be obscured (e.g., in damaged/diseased areas or near the petiole). The RGB values are normalized to $[0, 1]$ for computational stability. 
The additional channel is a binary image composed of ones along pixels of the previously traced path and zeros otherwise, and thus provides the network with a direction to continue the trace. Each image tile is centered at a pixel on the contour of a leaf, so that the 50 manually-traced samples are used to generate more than 300,000 individual tiles for training. Further, heavy image augmentation is used so that the CNN learns an invariance to heterogeneous leaf shapes and conditions. In particular, random continuous rotations, horizontal and vertical flips, displacement jitter, and color augmentation (hue, saturation, brightness, and contrast) are combined so that no two image tiles appear the same during training. The leaf tracing CNN outputs $2\times N$ trace predictions, which encode $N$ horizontal and vertical pixel displacements along the leaf contour relative to the center pixel of the input tile. Training data is generated by evenly sampling pixels from the center to the edge of the tile along the contour of the leaf. Distance is then measured between the predicted trace and the ground truth contour using mean squared error, $\mathcal{L}_{\text{MSE}}$, as an objective function. Importantly, the quality of the predicted trace degrades near the edges of image tiles since the CNN does not have context beyond the boundaries of the input. However, it is still important for the model to predict the trace from the center pixel to the edge of the image tile so that the predicted trace can ``skip'' over obscured segments of the contour~\cite{rutter_combo}. To account for these effects, a weighted mean squared error is used, weighting predictions closer to the center pixel more heavily than predictions near the edge. The objective function is given by \begin{subequations} \begin{align} \mathcal{L}_{\text{MSE}} &= \frac{1}{N} \sum_{i=1}^{N} \omega_i \big\| y_i - \text{CNN}(x)_i \big\|_2^2, \label{eq:mse} \\ \omega_i &= 1 + \frac{1 - \tanh\left(\alpha i + \beta\right)}{2}, \label{eq:tanh} \end{align} \end{subequations} \noindent where $x \in \mathbb{R}^{256\times256\times4}$ is the input image tile, $y \in \mathbb{R}^{2\times N}$ is the set of ground truth row/column coordinates with $y_i$ indicating the row and column position of the $i^{\text{th}}$ pixel, $\omega \in \mathbb{R}^{N}$ is the weight vector, the number of pixel displacements is $N=128$, and $\alpha = 8/N$ and $\beta = -4$ are chosen such that the hyperbolic tangent function (which defines $\omega$) gradually decreases the error weight from two to one along the predicted contour. In this way, the objective function weights pixels near the center of the tile approximately two times greater than pixels near the edge. The model architecture follows standard practices for CNNs~\cite{simonyan2014very, he2016deep}. In particular, the CNN uses blocks of three $3\times3$ convolution layers with zero-padding and one max pooling layer. Each convolution layer includes batch normalization to stabilize training~\cite{ioffe2015batch} and a ``LeakyReLU'' activation function for nonlinearity~\cite{maas2013rectifier}. Additionally, residual connections are applied between convolution layers for easier optimization and better prediction accuracy~\cite{he2016deep}. In total, the leaf tracing CNN includes six blocks with max pooling and one block without, which transforms the spatial image dimensionality from $256\times256$ to $4\times4$; a sketch of one such block, together with the weighted objective, is given below.
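As a concrete illustration, the following is a minimal PyTorch sketch of the weighted objective in Equations~\ref{eq:mse} and~\ref{eq:tanh} and of one residual convolution block. The function and layer names, channel handling, and exact residual wiring are illustrative assumptions rather than the released implementation (see the GitHub repository for the latter). \begin{verbatim}
import torch
import torch.nn as nn

def trace_loss(pred, target, alpha=None, beta=-4.0):
    # Weighted MSE of Equations (1a)-(1b); pred and target are (B, 2, N)
    N = pred.shape[-1]
    alpha = 8.0 / N if alpha is None else alpha
    i = torch.arange(N, dtype=pred.dtype, device=pred.device)
    w = 1 + (1 - torch.tanh(alpha * i + beta)) / 2  # ~2 at center, ~1 at edge
    sq = ((target - pred) ** 2).sum(dim=1)          # squared distance per pixel
    return (w * sq).mean()

class ConvBlock(nn.Module):
    # Three 3x3 conv + batch norm + LeakyReLU layers, a residual
    # connection, and 2x2 max pooling (one of the blocks described above)
    def __init__(self, c_in, c_out):
        super().__init__()
        self.proj = nn.Conv2d(c_in, c_out, 1)  # match channels for residual
        layers, c = [], c_in
        for _ in range(3):
            layers += [nn.Conv2d(c, c_out, 3, padding=1),
                       nn.BatchNorm2d(c_out), nn.LeakyReLU()]
            c = c_out
        self.body = nn.Sequential(*layers)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(self.body(x) + self.proj(x))
\end{verbatim}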
Then, a final $4\times4$ convolution layer reduces the outputs to a vector of length 256, which is reshaped into $2\times128$ for trace prediction. Note that the final convolution is linear (i.e., it does not include a nonlinear activation function) so that the trace predictions can reach the edges of the input tile in any direction. To prevent overfitting, images are randomly split into 80\% training (i.e., 40 images totaling $\sim$240K image tiles) and 20\% validation (i.e., 10 images totaling $\sim$60K image tiles) sets. The model is then trained for 1,000 epochs with a batch size of 256 and the Adam optimizer~\cite{kingma2014adam} with default parameters. Further, early stopping with a patience of 20 epochs (i.e., training is stopped if the validation error does not improve within 20 epochs) is used to terminate training once the model has converged. Once the leaf tracing CNN is trained, it is used to iteratively trace the contour of each leaf image in the data set. The tracing algorithm is initialized using automatic thresholding to obtain a rough segmentation of the leaf, which provides both a starting location and trace direction. An image tile centered at the top of the rough segmentation (i.e., at the tip of the leaf) is initially fed to the CNN, which outputs the initial trace prediction from the center to the edge of the image tile. The first 32 pixel predictions along the edge of the leaf are added to the trace, and a new image tile is drawn centered at the new location. This iteration continues until the predicted trace falls within 10 pixels of the previously-traced contour, after which a line is drawn from the prediction to the contour to close the loop. To eliminate errors from the trace initialization, the tracing algorithm uses 10 ``burn-in'' iterations before storing traced pixels for the final segmentation. The leaf body segmentation is obtained by classifying all interior pixels as foreground and exterior pixels as background. Note that the trace direction is randomized during training, so that the tracing CNN can segment leaves in either clockwise or counterclockwise directions. In practice, the tracing algorithm does not require human supervision to start or stop the iteration and takes $\sim$1 second per image on a single GPU of an NVIDIA DGX Station A100. \subsection{Vein segmentation} \label{sec:growing} Segmentation of the leaf venation is formulated as a region growing task based on~\cite{januszewski_floodfilling, lagergren_growing}, in which a CNN is used to iteratively expand a region of interest (i.e., visible veins of a leaf). The convolutional region growing method (also called flood filling networks~\cite{januszewski_floodfilling}) has been shown to reach state-of-the-art segmentation accuracy while preserving biologically realistic morphological features~\cite{lagergren_growing}. However, rather than tracing the boundary of an object with a 1D line, the vein growing CNN iteratively grows a segmentation in all directions (e.g., 2D in~\cite{lagergren_growing} and 3D in~\cite{januszewski_floodfilling}) by classifying which pixels/voxels should be included in or rejected from the region. In particular, a CNN inputs small image tiles centered on pixels of interest and predicts classifications of the center pixel and its adjacent neighbors. Neighboring pixels that are added to the region become the seeds for new image tiles in the next iteration. This process continues until no new pixels are added to the region, thereby finishing the segmentation.
Similar to the leaf tracing framework, the region growing approach breaks the complex task of vein segmentation into many smaller decision rules and can produce high-accuracy segmentations using fewer than ten images for training. See Figure~\ref{fig:grower} for a diagram of the vein growing algorithm and Supplementary Video S2 for a video of the iteration. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/grower.pdf} \caption{\textbf{Vein growing algorithm.} Image tiles centered on pixels of interest are input to a CNN which predicts the classification of the center pixel and its neighbors. Neighboring pixels with high probability are added to the vein region and used as seed pixels in the next iteration. The iteration continues until no new pixels are added to the vein region. \underline{Left}: RGB leaf scan and input tiles with center pixels highlighted in black. \underline{Center}: the vein growing CNN which transforms the $128\times128$ input tile into a $3\times3$ matrix of vein probabilities. \underline{Right}: predicted pixel probabilities that are used to update the region and generate new image tiles in the next iteration.} \label{fig:grower} \end{figure} The vein growing CNN inputs $128\times128\times3$ RGB image tiles (also normalized to $[0, 1]$) centered on pixels in the interior of a leaf. The tile size is chosen to be smaller than the leaf tracing tiles (i) since vein classification does not require as much context and (ii) for computational efficiency, since many more image tiles are used in this framework. However, the tile size is still large enough so that the model can accurately predict vein pixels in areas of uncertainty (e.g., blurry patches and diseased/dead tissue). To construct a training set, image tiles are drawn for each vein pixel, so that the eight manually-segmented images generate more than 1,000,000 positive samples (i.e., samples centered on leaf veins rather than leaf lamina). Further, to account for heterogeneous backgrounds and image artifacts, up to ten times as many background pixels are sampled from the interior of each leaf, resulting in approximately 10,000,000 negative samples. The image tiles are augmented during training using a combination of random continuous rotations, horizontal and vertical flips, color augmentation, and Gaussian blur. The vein growing CNN outputs $3\times3\times2$ predictions of the center pixel and its neighbors, in which the two prediction channels represent probabilities that a pixel belongs to the foreground (vein) or background (lamina). To measure error between predicted pixel probabilities and their ground truth classifications, Focal Loss ($\mathcal{L}_{\text{FL}}$), an extension of standard cross-entropy, is used as an objective function~\cite{lin2017focal}. In particular, Focal Loss directly accounts for the class imbalance between positive and negative samples (i.e., there are many more background pixels than vein pixels) and allows the model to focus on more difficult examples where veins are obscured.
The objective function is given by \begin{equation} \mathcal{L}_{\text{FL}} = \begin{cases} -\alpha \, (1-p)^\gamma \, \log(p) & \text{if $y=1$} \\ -(1-\alpha) \, p^\gamma \, \log(1-p) & \text{otherwise}, \end{cases} \label{eq:focalloss} \end{equation} \noindent where $p = \text{CNN}(x)$ are the predicted pixel probabilities, $x \in \mathbb{R}^{128\times128\times3}$ is the input image tile, $y \in \{0, 1\}$ are the ground truth pixel classes, and $\alpha=0.25$ and $\gamma=2.0$ are the default hyperparameters of the Focal Loss function~\cite{lin2017focal}. The model architecture and training strategy are nearly identical to the leaf tracing framework. Since the input tiles for vein segmentation are half the dimension of the inputs for leaf tracing, the first block of $3\times3$ convolutional layers and max pooling is removed from the architecture described in Section~\ref{sec:tracing}. Thus, the vein growing CNN transforms the spatial image dimensionality from $128\times128$ to $4\times4$, after which a final $4\times4$ convolution layer reduces the outputs to a vector of length 18, which is reshaped into $3\times3\times2$ for vein classification. Note that, unlike the leaf tracing CNN, the final convolution includes a Softmax activation function, which constrains the outputs to between 0 and 1 and motivates the probabilistic interpretation for the objective function. The model is trained with the Adam optimizer for 1,000 epochs with a batch size of 1024 and an early stopping patience of 20 epochs. Note that a larger batch size is used here compared to the leaf tracer since the inputs are smaller and thus more can be included in each batch. Finally, six images (totaling $\sim$7M image tiles) are used for training and two (totaling $\sim$2.5M image tiles) for validation. Once the vein growing CNN is trained, it is used in a recursive framework in which the CNN decides whether new pixels should be added to the vein segmentation. The algorithm is initialized by randomly sampling 10,000 seed pixels inside the leaf body (using the segmentations from Section~\ref{sec:tracing}). For each seed pixel, image tiles are generated and fed to the model, which then classifies the seed pixel and its neighbors. Neighboring pixels that are classified as leaf veins are used as seeds in the next iteration. Once a seed pixel has been considered, it is removed from the sample set for future iterations. This process is then repeated, continuously adding pixels to the segmentation, until no new pixels are positively classified. Note that a pixel can receive multiple classifications as its neighbors become seeds during the iterations. To account for this, the final vein segmentation is determined by thresholding the \emph{average} probability of each pixel. The optimal probability threshold is chosen by minimizing the number of connected components in the segmentation mask across a range of threshold values. In other words, the optimal threshold is the one that maximizes vein connectivity in the segmentation mask. Like the leaf tracing framework, the vein growing algorithm does not require human supervision at inference time and completes accurate vein segmentations in $\sim$60 seconds on a single GPU of an NVIDIA DGX Station A100, which is orders of magnitude faster than human annotation.
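For reference, Equation~\ref{eq:focalloss} and the connectivity-based threshold selection translate directly into code. The following is a minimal sketch; the function names and threshold grid are illustrative assumptions, not the released implementation. \begin{verbatim}
import numpy as np
import torch
from scipy import ndimage

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-8):
    # Focal Loss as defined above: p = predicted vein probabilities,
    # y = ground truth labels in {0, 1}, both of the same shape
    p = p.clamp(eps, 1 - eps)                           # numerical stability
    pos = -alpha * (1 - p) ** gamma * torch.log(p)      # y = 1 (vein)
    neg = -(1 - alpha) * p ** gamma * torch.log(1 - p)  # y = 0 (lamina)
    return torch.where(y == 1, pos, neg).mean()

def select_threshold(prob_map, grid=np.linspace(0.1, 0.9, 17)):
    # Pick the threshold that minimizes the number of connected
    # components, i.e., maximizes vein connectivity
    counts = [ndimage.label(prob_map >= t)[1] for t in grid]
    return grid[int(np.argmin(counts))]
\end{verbatim}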
\subsection{Feature extraction} \label{sec:featureextraction} Given the binary segmentation maps from Sections~\ref{sec:tracing} and~\ref{sec:growing}, traditional open-source image-processing tools are used to extract biologically meaningful traits from the leaf body, vein architecture, and petiole. This is possible since the few-shot learning methods effectively remove background artifacts and highlight the salient information in leaf scans. In this work, Fiji~\cite{schindelin2012fiji} is used to extract leaf-level traits, RhizoVision Explorer (RVE)~\cite{seethepalli2020rhizovision, seethepalli2021rhizovision} is used for vein traits (e.g., length and thickness), and a custom implementation is used for petiole traits (length and width). RVE is chosen for vein traits in particular since it is designed to analyze root systems, which are composed of vessel-like structures with tips, branch points, and redundant connections; this makes it applicable to studying vein architectures, which share many of the same characteristics. Further, since the scan resolution is known (i.e., 300 DPI), features extracted from Fiji and RVE are easily converted from pixel-coordinates to standard units (e.g., cm). Fiji is applied to the leaf segmentations from Section~\ref{sec:tracing} to extract 23 image-based traits related to whole-leaf morphology. Morphological descriptors include area (cm$^2$), perimeter (cm), circularity (unitless), and solidity (unitless). Color features are also derived by relating the segmentations back to the original scanned images, including average red, green, blue, hue, saturation, and brightness values corresponding to leaf pixels. Feature extraction in Fiji is scripted and applied in ``batch mode'' to the full set of leaf segmentations. A detailed description of each leaf-level trait is provided in Supplementary Table~\ref{tab:leaftraits}. RVE is used to extract 27 features from the vein segmentations. Note that only vein pixels inside the leaf segmentation are used for vein architecture traits (i.e., the petiole is not considered here). The software parameters are set to 300 DPI and ``whole mode'' for image-level traits. Vein diameters are used to classify veins into three ranges: (i) less than 0.25 mm, (ii) between 0.25 mm and 0.80 mm, and (iii) above 0.80 mm, in an attempt to correspond to third, second, and first order veins, respectively. Extracted traits include those supplied by default (e.g., average vein diameter (mm), length (mm), and area (mm$^2$)), with some traits being repeated across the three vein diameter ranges. Following~\cite{sack2013venation}, additional venation traits are also derived that measure proportions between vein length/area and leaf morphology. See Supplementary Table~\ref{tab:veintraits} for the full list of vein traits and their descriptions. Petiole segmentations are derived by considering the largest connected component of vein pixels outside of the leaf segmentation. Then, to compute petiole length and width, Fiji is used to compute the best-fit rotated rectangle around the petiole mask. The height of the bounding rectangle is sufficient to estimate petiole length. However, rectangle width is not used to estimate petiole width since (i) petiole width changes along the length of the petiole (i.e., it tends to be wider near the ends and thinner near the midpoint), and (ii) the caliper measurements for petiole width were taken near the center of the petiole.
Thus, petiole width is estimated by computing the average diameter over the center 20\% of the segmentation. Finally, Fiji is used to estimate similar traits for the petiole compared to the leaf body (e.g., area and perimeter), and RVE is used to estimate petiole volume. See Supplementary Table~\ref{tab:petioletraits} for the full list of 18 petiole traits and their descriptions. The feature extraction process yields 68 traits related to leaf, vein, and petiole morphology that can be used for genomic analysis. To validate image-based features with real-world measurements, petiole length and width are compared against caliper measurements that were recorded manually during image capture. To consider the results from a biological perspective, (i) broad-sense clonal heritability is computed for each recorded trait, and (ii) a genome-wide association study (GWAS) is performed for the vein density trait (i.e., the ratio of vein area to leaf area). Vein density is chosen since it utilizes both the leaf and vein segmentations, and since the ratio between lamina and venation must balance sunlight intake and carbon fixation with the transport of sugars and other nutrients to sink organs, all of which are essential processes for biomass production. \subsection{Validation} \label{sec:validation} To validate the leaf and vein segmentations, following~\cite{rutter_tracing} and~\cite{lagergren_growing}, the Jaccard index (intersection over union) is used to measure segmentation accuracy for images in the validation sets (i.e., ten for leaf segmentation and two for vein segmentation). This metric measures similarity between two semantic segmentations by computing the ratio between the set intersection (all true positive pixels) and the set union (all true positive, false positive, and false negative pixels), where scores near one indicate high accuracy and near zero indicate low accuracy. Since validation error was monitored during training, conclusions drawn from segmentation accuracy for validation images may be affected by data leakage (i.e., create an over-optimistic interpretation of the model). To account for this, the predicted digital measurements across the population are further validated using real-world physical measurements. To this end, calipers were used to measure petiole length and width during data collection. These values are compared against the corresponding features extracted from the vein segmentations described in Sections~\ref{sec:growing} and~\ref{sec:featureextraction}. To measure the agreement between digital and manual values quantitatively, the coefficient of determination ($R^2$) is computed for each trait. \subsection{Genomic analysis} \label{sec:genomics} To pre-process the vein density trait for GWAS, outliers are removed using the median absolute deviation (MAD), where any measurement more than six MADs from the median is removed. To account for geospatial variation across the plantation, thin plate spline (TPS) correction is applied using the \textit{fields} software package in R~\cite{fieldsR}, in which the row and position of each tree are used as coordinates for the TPS models. To extract the genetic component of each sample, best linear unbiased predictors (BLUPs) are computed for the TPS-corrected values using the \textit{lme4} software package in R~\cite{bateslme4}, which fits genotypes as random effects for each trait.
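As an aside, the MAD filter described above can be written in a few lines of numpy; the sketch below assumes the raw (unscaled) MAD and is illustrative rather than the authors' exact script. \begin{verbatim}
import numpy as np

def mad_filter(x, cutoff=6.0):
    # Keep measurements within `cutoff` median absolute
    # deviations of the median; drop the rest as outliers
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return x[np.abs(x - med) <= cutoff * mad]
\end{verbatim}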
In addition, to assess the repeatability of each measurement and the genetic control of the vein density trait, broad-sense heritability ($H^2$) was estimated using the TPS-corrected values of the clonal replicates (131 replicated samples) from the two blocks considered in this work. Heritability is computed by \begin{equation} H^2 = \frac{\sigma^{2}_{G}}{\sigma^{2}_{G} + \sigma^{2}_{E}}, \label{eq:genomicvariance} \end{equation} \noindent where $\sigma^{2}_{G}$ is the genotypic variance due to clonal differences and $\sigma^{2}_{E}$ represents environmental variance. For genomic analysis, a total of 1,492 \textit{P. trichocarpa} accessions were previously sequenced using the Illumina genetic analyzer with paired-end sequencing technology at the Department of Energy Joint Genome Institute~\cite{nordberg2014genome}. The sequences are aligned to the v4 reference genome using the Burrows-Wheeler Alignment tool, BWA-MEM~\cite{li2013aligning}, and variant calling is performed using the GATK (version 4.0) Haplotype caller~\cite{van2013fastq}. Starting with more than 22 million single nucleotide polymorphisms (SNPs) obtained by the GATK Variant Quality Score Recalibration (VQSR) method at tranche 99, 847,066 SNPs across 1,419 genotypes were retained for population-scale genomic analysis after applying the following filters. Seventy-three individuals were removed due to excessive genomic relatedness or greater than 10\% missing SNP data. SNPs were removed if they had greater than 15\% missing genotypes, minor allele frequency less than 0.05, or a Hardy-Weinberg equilibrium chi-square test P value $< 10^{-50}$. SNPs were further pruned using a linkage disequilibrium (LD) coefficient of determination threshold of $R^2 \geq 0.7$. The data pre-processing steps above yield 847,066 SNPs for 1,419 unrelated genotypes that are used for the GWAS of the vein density trait. Association between the SNPs and the phenotypic vector was tested using a multilocus GWAS method, BLINK, from the GAPIT3 software package in R~\cite{huang2019blink}, which uses two fixed effect models (FEMs) iteratively. The first FEM tests for the association of all genetic markers independently to generate a set of pseudo Quantitative Trait Nucleotides (QTNs) that are then used in the second FEM to optimize the selection of pseudo QTNs. Only those QTNs that are significant and not in LD are used as covariates in the association test. The first FEM is given by \begin{equation} y_i = S_{i1}b_1 + S_{i2}b_2 + \cdots + S_{ik}b_k + S_{ij}d_j + e_i, \label{eq:fem1} \end{equation} \noindent where $y_i$ is the phenotypic value of the $i^{\text{th}}$ individual, $S_{i1}, \dots, S_{ik}$ are the genotypes of the $k$ QTNs, $b_1, \dots, b_k$ are the corresponding effects of the QTNs, $S_{ij}$ is the genotype of the $i^{\text{th}}$ individual and $j^{\text{th}}$ SNP, $d_j$ is the $j^{\text{th}}$ SNP effect, and $e_i$ is the residual. The second FEM is used to optimize the QTNs for use as covariates in the first FEM, and is given by \begin{equation} y_i = S_{i1}b_1 + S_{i2}b_2 + \cdots + S_{ik}b_k + e_i, \label{eq:fem2} \end{equation} \noindent with a similar interpretation to Equation~\ref{eq:fem1}. Note that Equation~\ref{eq:fem2} is essentially a reduced version of Equation~\ref{eq:fem1}, in which the SNP term that tests for the association with the phenotypic vector is removed. The model optimization is performed with the Bayesian Information Criterion (BIC).
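For illustration, the variance components in Equation~\ref{eq:genomicvariance} can be estimated from clonal replicates with a random-intercept model. The sketch below uses Python's statsmodels as a stand-in for the lme4 workflow described above; it is an assumption for illustration, not the authors' pipeline. \begin{verbatim}
import pandas as pd
import statsmodels.api as sm

def clonal_h2(values, genotypes):
    # Fit value ~ 1 + (1 | genotype) and estimate broad-sense
    # heritability as var(genotype) / (var(genotype) + var(residual))
    df = pd.DataFrame({"y": values, "geno": genotypes})
    fit = sm.MixedLM.from_formula("y ~ 1", groups="geno", data=df).fit()
    var_g = float(fit.cov_re.iloc[0, 0])  # genotypic variance
    var_e = float(fit.scale)              # residual (environmental) variance
    return var_g / (var_g + var_e)
\end{verbatim}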
\section{Results} \label{sec:results} \subsection{Few-shot learning results} The few-shot learning methods are applied to the total set of images, in which the 2,906 top and bottom scans are used for leaf segmentation, and the 1,453 bottom scans are used for vein architecture. Examples of the resulting model outputs are given in Figure~\ref{fig:segmentation}. Note that the leaf in Figure~\ref{fig:segmentation} was not used for model training or validation. Additional segmentation results are visualized in Supplementary Figure~\ref{fig:overlays}, which illustrates leaf heterogeneity by varying leaf size and vein density. All of the image data, ground truth annotations, and predicted leaf/vein segmentations are made publicly available at \url{https://doi.org/10.13139/ORNLNCCS/1908723}. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/segmentation.png} \caption{\textbf{Leaf and vein segmentations.} Results of the leaf and vein segmentation methods on an example leaf outside the training set. The top row shows the full leaf and the bottom row gives a zoomed in view. \underline{Left}: example leaf scan chosen from outside the training and validation sets. \underline{Center left}: predicted segmentation of the leaf body where pixels inside the traced contour are shown in white and outside the contour in black. \underline{Center right}: predicted segmentation of the visible vein architecture with vein pixels shown in white and background pixels in black. \underline{Right}: example leaf scan with the predicted leaf boundary and vein architecture overlaid in blue and red, respectively. Note that for visualization these images are zoomed in to remove redundant white space from the scanner background.} \label{fig:segmentation} \end{figure} The Jaccard index is used to measure segmentation accuracy for images in the validation sets (i.e., ten for leaf segmentation and two for vein segmentation), where scores near one indicate high accuracy and near zero indicate low accuracy. For leaf tracing, all segmentations in the validation set exceed a Jaccard score of 0.99, indicating a high degree of overlap between the predicted and ground truth segmentations. For vein segmentation, the two validation images achieve Jaccard scores of 0.6134 and 0.6334. This reduced score is due mainly to (i) human errors in the ground truth segmentation, (ii) the complexity of the vein architecture, and (iii) the method for probability threshold selection. For example, the model identifies veins that were missed during manual annotation, and these predictions are therefore counted as false positives in the Jaccard score. Further, because many veins are only a few pixels wide, predicted veins that are off by just one pixel can result in large changes in the Jaccard score. Finally, choosing a threshold that maximizes vein connectivity creates slightly wider vein predictions compared to the ground truth, which further decreases the score, but increases the biological accuracy of the vein structure. For a visualization of these phenomena, see Supplementary Figure~\ref{fig:accuracy}, which illustrates these effects for the validation image with the lowest Jaccard score. Despite the lower Jaccard metric, the vein growing framework achieves recall/sensitivity values (i.e., the probability of detecting a vein pixel) of 0.9219 and 0.8673 on the two validation images, which indicates that the method has a high detection rate, exceeding even human-level accuracy in some cases, and thus almost completely captures the structure of the visible vein architecture.
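For reference, both metrics reported above follow directly from the binary masks; a minimal numpy sketch (function names are illustrative): \begin{verbatim}
import numpy as np

def jaccard(pred, truth):
    # Intersection over union of two boolean masks
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

def recall(pred, truth):
    # Fraction of ground truth vein pixels that are detected
    return np.logical_and(pred, truth).sum() / truth.sum()
\end{verbatim}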
The predicted digital measurements across the population are further validated using real-world caliper measurements, which are visualized in Figure~\ref{fig:validation}. In particular, the data are compared against a linear model, which results in $R^2 = 0.96$ for petiole length and $R^2 = 0.77$ for petiole width. This discrepancy between $R^2$ values is due to several factors. First, manual measurement of petiole length is made from end to end, resulting in larger, more consistent measurements. However, for petiole width, the caliper was placed at the approximate center of the petiole, resulting in greater variation due to subjective positioning of the caliper. Further, since petiole width is typically smaller, measurement errors affect the $R^2$ value more compared to petiole length. Despite these effects, the digital traits strongly agree with the manual measurements, which helps validate the accuracy of the segmentations and extracted features. \begin{figure}[h!] \centering \includegraphics[width=0.85\textwidth]{figures/validation.png} \caption{\textbf{Petiole measurement validation.} Each subplot compares predicted ($x$-axis) with actual ($y$-axis) morphological measurements of the petiole. Manual measurements were obtained with calipers during data collection while digital measurements were derived from the leaf and vein segmentations. \underline{Left}: petiole length comparison with data shown in black and the best-fit line shown in red. \underline{Right}: petiole width comparison with data shown in black and the best-fit line shown in red.} \label{fig:validation} \end{figure} \subsection{Genomic analysis results} To consider the segmentation and feature extraction methods from a biological perspective, a GWAS is conducted at population scale to associate vein density (i.e., the ratio of vein area to leaf area) with the \textit{P. trichocarpa} genome. The broad-sense clonal heritability of the vein density trait is moderately high ($H^2 = 0.65$), which suggests that the trait is under genetic control. The multilocus BLINK method is used to perform GWAS on the vein density trait with 847,066 SNPs in the genome. This analysis identified 12 significant SNPs using a false discovery rate (FDR) P value, $P<0.05$, and 15 unique SNPs using a less-strict FDR P value, $P<0.2$. A Manhattan plot, quantile-quantile (QQ) plot, and the distribution of the TPS-corrected vein density BLUPs are shown in Figure~\ref{fig:gwas}. A total of 30 unique genes that potentially control the variation in the vein density trait are identified for these SNPs based on the nearest flanking genes in both directions of the GWAS hits in the genome. To gain more insight into the function of these genes, the \textit{Arabidopsis thaliana} orthologs are identified based on protein sequence similarity. The GWAS results are summarized in Table~\ref{tab:gwas}. See Section~\ref{sec:discussion} for discussion about these genes and their associated physiological plant processes. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/gwas.png} \caption{\textbf{Population-scale genomic analysis.} \underline{Top}: a Manhattan plot of the GWAS results for leaf vein density using the multilocus BLINK method for 847,066 SNPs across 1,419 \textit{P. trichocarpa} genotypes. The horizontal axis corresponds to genomic positions by chromosome and the vertical axis shows the negative log-base-10 P value for each SNP.
The dashed horizontal line represents the FDR threshold, $P<0.05$, and the dotted line represents the FDR threshold, $P<0.2$. \underline{Bottom left}: the quantile-quantile (QQ) plot corresponding to the P values shown in the Manhattan plot with the expected values shown by the red dashed line. \underline{Bottom right}: the distribution of TPS-corrected vein density BLUPs used for GWAS.} \label{fig:gwas} \end{figure} \begin{table}[h!] \centering \caption{\textbf{Identified genes.} Top gene models detected by the BLINK GWAS method based on FDR P value $P<0.2$ for vein density in \textit{Populus trichocarpa}. Each row corresponds to a different gene and includes the gene ID, chromosome number, SNP position, distance in the genome (positive for upstream and negative for downstream from the SNP position), minor allele frequency (MAF), P value, \textit{A. thaliana} ortholog, and ortholog annotation.} \label{tab:gwas} \begin{tabular}{ccccccp{2.3cm}} \hline \textbf{Gene ID} & \textbf{Chr. (pos.)} & \textbf{Dist.} & \textbf{MAF} & \textbf{P value} & \textbf{Ortholog} & \textbf{Annotation} \\ \hline Potri.017G077200 & 17 (8,592,229) & -5,329 & 0.1246 & 3.24e-4 & AT3G04680 & CLP-SIMILAR PROTEIN 3 \\ Potri.017G077300 & 17 (8,592,229) & 6,036 & 0.1246 & 3.24e-4 & AT3G01300 & PBS1-LIKE 35 \\ Potri.006G090501 & 6 (6,910,580) & -2,825 & 0.1938 & 4.69e-4 & - & - \\ Potri.006G090600 & 6 (6,910,580) & 1,617 & 0.1938 & 4.69e-4 & AT3G53880 & ALDO-KETO REDUCTASE FAMILY 4 MEMBER \\ Potri.006G227300 & 6 (23,159,363) & -3,829 & 0.2158 & 7.47e-4 & AT1G18600 & RHOMBOID-LIKE PROTEIN 12 \\ Potri.006G227400 & 6 (23,159,363) & 2,083 & 0.2158 & 7.47e-4 & AT3G13784 & CELL WALL INVERTASE 5 \\ \hline \end{tabular} \end{table} \section{Discussion} \label{sec:discussion} In this work, few-shot learning was used for both leaf and vein segmentation. Each method implemented few-shot learning by dividing a small number of large images into a large number of small image tiles and using predictions from previous iterations to expand partial segmentations until a stopping criterion was reached. In particular, iterative data set refinement was paired with heavy image augmentation so that the CNN models learned an invariance to potential image artifacts not included in the small number of annotated images. In this way, the complex task of whole-image segmentation was broken down into smaller, easier decision rules, thereby maximizing predictive accuracy while minimizing the number of labeled images needed for model training. This strategy was chosen primarily due to (i) the lack of labeled training images (e.g., only 50 leaf segmentations and eight vein segmentations) and (ii) because each leaf scan is large ($3510 \times 2550$ pixels) and does not easily fit into a standard deep learning model (e.g., a CNN). To address (i), one could simply annotate more data manually; however, segmenting the visible venation in the images considered here (Section~\ref{sec:data}) would require up to 12,000 person-hours (i.e., roughly six years, assuming a 40-hour work week). Note that the few-shot learning approaches discussed in this work could be used to automatically label training data for larger deep learning workflows (e.g., U-Net~\cite{ronneberger2015u}). However, even that may not be sufficient, since many modern deep learning architectures require thousands or even millions of training samples. For (ii), large images are typically greyscaled, reshaped, or downsampled to fit within system requirements and hardware limitations~\cite{xu2021automated}.
However, this strategy can alter or destroy fine-grained details that may be crucial for accurate prediction. Thus, the methods demonstrated here utilized raw RGB images at full resolution, but rather than using entire images as inputs, smaller tiles were sampled from within images to iteratively segment leaf boundaries and visible venation. For example, see Supplementary Figure~\ref{fig:overlays}, which illustrates how these tile-based approaches are generally insensitive to changes in object size and characteristics. Leaf segmentation was formulated as a tracing task, in which a CNN inputs image tiles centered along the boundary of a leaf, and outputs trace predictions that are used to sample new tiles in the next iteration. This methodology performed well in this application since there is only one leaf per image and each leaf is fully contained within the image. Note that images with multiple objects have been previously considered~\cite{rutter_tracing, rutter_combo}, but additional modifications to the algorithm (e.g., recurrence) may be needed to account for images with cluttered objects and overlapping boundaries. The resulting leaf segmentations were highly accurate and captured the morphology and serration of each leaf. Since each leaf was scanned against a white background, automatic thresholding could be used to obtain a rough segmentation of the leaf. However, this approach captures background artifacts, includes the petiole, and falsely detects shaded regions near the leaf boundaries and petiole. These artifacts could be addressed for individual samples by post-processing the binary segmentation maps (e.g., binary erosion and dilation), but defining such rules that generalize to all cases across the population is increasingly difficult. In contrast, a strength of the tracing approach is that the tracing CNN can \emph{learn} an invariance to image artifacts in a data-driven way without human supervision, and still leverage the auto-threshold segmentations for trace initialization, which allows the method to be fully automated. Finally, the tracing methodology produced accurate leaf segmentations approximately three orders of magnitude faster than human annotation ($\sim1$ second compared to 15-30 minutes per image), allowing the method to scale up to population-level data sets, especially for computing systems that support parallelization (i.e., tracing more than one image at a time). Due to the complexity of leaf venation, vein segmentation was formulated as a region growing task where a CNN predicts whether to include pixels in a segmentation by inputting image tiles centered at those pixels. Unlike the tracing framework (which segments objects by tracing boundaries in 1D), the region growing approach grows the segmentation directly by continuously adding pixels to the region of interest in 2D (see~\cite{januszewski_floodfilling} for 3D). A strength of this approach is that each pixel is considered individually. However, unlike previous methods which conduct an exhaustive prediction over all pixels in an image (which would equate to 8,950,500 pixels per image in this work)~\cite{ciresan2012deep}, pixels were only considered if they exceed a probability threshold, which allowed the model to focus only on pixels of interest. This distinction dramatically increased segmentation speed ($\sim60$ seconds compared to 4-8 hours per image with manual segmentation). 
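For illustration, the core of this thresholded region growing loop can be written in a few lines. The following is a minimal sketch, not the implementation used in this work: the callable \texttt{model} (standing in for the trained CNN), the tile size, and the probability threshold are all illustrative placeholders.
\begin{verbatim}
import numpy as np
from collections import deque

def region_grow(image, model, seeds, tile=49, threshold=0.5):
    """Grow a vein segmentation outward from seed pixels.

    image: (H, W, 3) RGB array at full resolution.
    model: callable mapping a tile to a vein probability in [0, 1].
    seeds: iterable of (row, col) pixels that start the iteration.
    """
    h, w, _ = image.shape
    r = tile // 2
    mask = np.zeros((h, w), dtype=bool)     # accepted vein pixels
    seen = np.zeros((h, w), dtype=bool)     # pixels already classified
    frontier = deque(seeds)
    while frontier:
        i, j = frontier.popleft()
        if seen[i, j] or i < r or j < r or i >= h - r or j >= w - r:
            continue
        seen[i, j] = True
        patch = image[i - r:i + r + 1, j - r:j + r + 1]
        if model(patch) >= threshold:       # classify only pixels of interest
            mask[i, j] = True
            # queue the 4-neighbors so the segmentation grows outward
            frontier.extend([(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)])
    return mask
\end{verbatim}
The point of the sketch is the control flow: pixels are only ever classified when they reach the frontier of the growing region, rather than exhaustively across the entire image.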
In addition, the region growing framework can use a random sample of pixels from anywhere in the image to initialize the iteration. Note that in this work, seed pixels were drawn using the leaf body segmentations from the tracer to reduce the number of redundant white background pixels. Finally, and perhaps most importantly, the model produced accurate segmentations (exceeding even human-level accuracy) at population scale using just six images for training and two images for validation. This is in contrast to previous approaches that used CNNs for vein segmentation and required more than 700 ground truth vein segmentations~\cite{xu2021automated}. In particular, this result highlights the importance of iterative data set refinement, in which images were specifically added to the training set in order to build an invariance to observed artifacts in the population (e.g., leaf folds that were falsely classified as veins), which has also been noted in similar applications for root segmentation~\cite{smith2022rootpainter}. Leaf and vein segmentation were specified as independent tasks, and used separate computational strategies to achieve each goal. Since both approaches were designed for segmentation, a natural question arises concerning whether two distinct approaches are necessary, or whether one would suffice for both tasks. In principle, the region growing CNN could be used for leaf segmentation; however, this would be computationally inefficient due to the large number of leaf pixels in the high-resolution scans. Compared to the tracing CNN (which focuses solely on boundary pixels), the region growing CNN would waste a large amount of computational resources on redundant ``interior'' pixels, which vastly outnumber boundary pixels. In the reverse case, the tracing CNN could theoretically be applied to vein segmentation. However, since the vein architecture is not homotopic to a circle, the tracing CNN would need to be re-initialized thousands of times inside the leaf to account for all of the ``holes'' in the vein architecture. Further, overlapping tracer boundaries near one-pixel-thick veins would create additional challenges in post-processing the thousands of traced contours. Thus, each few-shot learning method was suited for the particular segmentation task it was assigned. A utility of the few-shot learning methods discussed here is that they remove background artifacts and highlight salient information in images (e.g., the leaf body or vein architecture). Using traditional computer vision applications (e.g., Fiji and RVE) to extract digital traits from such segmentations becomes trivial compared to using the raw image data. These advances not only reduce human effort, but also expand the number and variety of traits one can extract by including traits that can only be estimated digitally (e.g., vein density, leaf solidity, etc.). Further, custom algorithms can be developed that extract cryptic phenotypes related to leaf morphology and topological information from the vein networks, which may yield new biological insights into the role leaves play in plant physiology -- this analysis is left for future work. These methods were also used to estimate measurable traits like petiole width and length, which were used as a source of validation in this work.
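Once such segmentations are in hand, digitally estimated traits reduce to simple array arithmetic. As a minimal sketch (assuming boolean NumPy masks for the leaf body and venation; illustrative rather than the exact trait-extraction code):
\begin{verbatim}
import numpy as np

def vein_density(leaf_mask, vein_mask):
    """Ratio of vein area to leaf area from binary segmentation masks."""
    leaf_area = np.count_nonzero(leaf_mask)
    vein_area = np.count_nonzero(vein_mask & leaf_mask)  # veins inside the leaf
    return vein_area / leaf_area if leaf_area else 0.0
\end{verbatim}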
This study demonstrated that manually estimated petiole length and width strongly correlated with their corresponding digital measurements, suggesting that the segmentation quality and feature extraction methodology accurately predict biologically relevant features from the raw images. A strategy to consider these digital traits from a biological perspective was to estimate their broad-sense heritability (i.e., the proportion of variation in the trait that is controlled by genetics), denoted by $H^2$. In particular, the $H^2$ value for vein density was 0.65, suggesting that the trait is under significant genetic control. As a proof of concept for downstream application of the few-shot learning methods, a GWAS analysis was performed for the vein density trait at population scale. The top GWAS hit was Potri.017G077200, which is highly expressed in apical bud, dormant bud, and stem~\cite{sreedasyam2022jgi}. This gene is also expressed in immature/young leaves, suggesting that it plays a role in such tissues~\cite{sreedasyam2022jgi}. Comparing to \textit{A. thaliana}, \textit{CLP-SIMILAR PROTEIN 3} (\textit{CLPS3}, AT3G04680) is the closest ortholog of Potri.017G077200, and is related to the human Cleavage factor polyribonucleotide kinase subunit 1 (hCLP1), which forms part of the complex responsible for polyadenylation of the 3' (3 prime) end of messenger RNA~\cite{hCLP_de2000human}. CLPS3 also interacts with components of the polyadenylation complex in plants and it is expressed throughout whole plant development, including leaves and vasculature~\cite{CLP3_1}. Overexpression of \textit{CLPS3} causes aberrant leaf phenotypes, abnormal phyllotaxis, and early flowering~\cite{CLP3_1}. Further, \textit{CLPS3} increases the expression of \textit{CUP-SHAPED COTYLEDON 1} (\textit{CUC1}), an NAC transcription factor, which, together with \textit{CUC2} and \textit{CUC3}, have been found to participate in meristem formation, organ boundary separation, and leaf shape~\cite{cuc_postemb_hibara2006arabidopsis, cuc1_meristem_spinelli2011mechanistic, cuc2_leaf_nikovics2006balance}. Since Potri.017G077200 is a \textit{CLPS3} ortholog, it could play similar roles in leaf development of \textit{P. trichocarpa}, making it a strong target for genomic selection studies. Potri.006G227300 is expressed in most plant tissues, but it is highly expressed in apical bud in spring, swelling bud, late dormant bud, as well as young and immature leaves~\cite{sreedasyam2022jgi}. Its \textit{Arabidopsis} ortholog, \textit{RHOMBOID-LIKE PROTEIN 12} (\textit{RBL12}, AT1G18600), follows a similar expression pattern, being enriched in floral buds~\cite{klepikova2016high}. Very little is known about \textit{RBL12}, but it is predicted to be an active transmembrane protease located in the mitochondria~\cite{RBL12_2_lemberg2007functional}. Further, \textit{RBL12} substrates in \textit{A. thaliana} have not been identified; therefore, its role is yet to be determined~\cite{RBL12_1_kmiec2008plant}. Other genes that were associated with vein density by GWAS may play an indirect role in leaf development. For instance, the Potri.017G077300 ortholog, \textit{PBS1-LIKE 35} (\textit{PBL35}, AT3G01300), participates in shoot apical meristem homeostasis and plant immunity, while the Potri.006G090600 ortholog, \textit{ALDO-KETO REDUCTASE FAMILY 4 MEMBER C11} (\textit{AKR4C11}, AT3G53880), participates in abiotic stress tolerance through detoxification of reactive carbonyls~\cite{ark_rc_simpson2009characterization, ark_review_sengupta2015plant, pbl35_immunity_luo2020tyrosine, pbl35_sam_wang2022receptor}. Thus, Potri.017G077300 and Potri.006G090600 may play a role in leaf and vein development through such processes. Further, the Potri.006G227400 ortholog, \textit{CELL WALL INVERTASE 5} (\textit{CWINV5}, AT3G13784), is a cell wall invertase, and members of this family have been found to affect plant development by making hexoses available for transport~\cite{cwi_1_sherson2003roles, klepikova2016high}. \subsection{Conclusions} Few-shot segmentation methods were extended to image-based plant phenotyping, whereby researchers can maximize predictive accuracy while minimizing the amount of training data. These methods were demonstrated for leaf scans of \textit{P. trichocarpa}, where 50 training images were used to train an automated tracing algorithm for whole-leaf segmentation and eight images were used to train a region growing algorithm to segment the visible vein architecture. The segmentations were used to extract biologically relevant morphological and topological leaf traits related to the leaf body, venation, and petiole, which were validated with real-world manual measurements. Broad-sense clonal heritability estimates for each trait were measured, and a population-scale genomic analysis was conducted for vein density, which combined information from both leaf and vein segmentations. The GWAS analysis revealed a set of previously unconsidered SNPs and associated genes with mechanistic associations to multiple physiological processes relating to leaf development and function. Future work will include a deep dive into the relevant biology surrounding the features discussed in this work, as well as the extraction of additional cryptic phenotypes relating to leaf morphology and vein topology. In particular, this work will leverage systems biology, network analysis, and climatic data to uncover the mechanistic associations within and across genotypes as they relate to sustainable bioenergy applications (e.g., biomass yield and composition). In conclusion, this study demonstrated a complete workflow from image acquisition to phenotype extraction. The utility of these methods for biological use cases was further demonstrated by performing GWAS, which identified genomic regions and associated genes potentially controlling important plant phenotypes, such as vein density. This enhances current understanding of the genetic architecture of complex traits and may facilitate future quantitative genetics and genotype $\times$ environment interaction studies. This further allows researchers to assess how vein traits relate to other physiological processes, such as stomatal conductance, gas exchange, and overall plant productivity, with important implications for developing \textit{Populus} as a bioenergy crop. Genes detected from the quantitative genetic analysis can be used in future biotechnology experiments for optimizing traits targeted for climate resilience, biomass production, and accelerated domestication for agriculture and biofuel production.
\section*{Acknowledgments} \subsection*{Author Contributions} \noindent J. Lagergren: Conceptualization, funding acquisition, data collection, few-shot learning, feature extraction, writing. \\ \noindent M. Pavicic: Conceptualization, data collection, feature extraction, writing. \\ \noindent H. Chhetri: Conceptualization, data collection, genomic analysis, writing. \\ \noindent L. York: Conceptualization, feature extraction, writing. \\ \noindent D. Hyatt: Genomic analysis, writing. \\ \noindent D. Kainer: Genomic analysis, writing. \\ \noindent E. Rutter: Few-shot learning, writing. \\ \noindent K. Flores: Few-shot learning, writing. \\ \noindent G. Taylor: Field site support, writing. \\ \noindent D. Jacobson: Conceptualization, funding acquisition, supervision, writing. \\ \noindent J. Streich: Conceptualization, funding acquisition, data collection, writing. \subsection*{Special Thanks} The authors would like to acknowledge members of the Taylor Lab (University of California, Davis): Jack Bailey-Bale, Marie Klein, Zi (Janna) Meng, and Aiwei Zhu, for their support during data collection. \subsection*{Funding} This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. This work was funded by the Center for Bioenergy Innovation, a DOE Bioenergy Research Center supported by the Office of Biological and Environmental Research in the DOE Office of Science, and the Artificial Intelligence (AI) Initiative, an ORNL Laboratory Directed Research and Development program. The manuscript was authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the US Department of Energy. The US Government retains and the publisher, by accepting the article for publication, acknowledges that the US Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (\url{http://energy.gov/downloads/doe-public-access-plan}). \subsection*{Conflicts of Interest} The authors declare that they have no competing interests. \subsection*{Data Availability} In addition to releasing all of the few-shot learning code on a public GitHub repository (\url{https://github.com/jlager/few-shot-leaf-segmentation}), we are also releasing all of the images, manual segmentations, model predictions, 68 extracted leaf phenotypes, and a new set of SNPs called against the v4 \textit{P. trichocarpa} genome for 1,419 genotypes on the Oak Ridge National Laboratory Constellation Portal (a public DOI data server) at \url{https://doi.org/10.13139/ORNLNCCS/1908723}. This is, to our knowledge, one of the largest releases of plant genotype and phenotype data in a single manuscript. We hope that this work becomes a valuable community resource and helps reduce barriers commonly associated with high throughput image-based plant phenotyping and machine learning. \newpage \section*{Supplementary Materials} \beginsupplement \noindent \textbf{Video S1:} \textbf{Leaf segmentation video.} Animation of the leaf tracing algorithm, in which a CNN iteratively traces the boundary of a leaf. \underline{Left}: the raw leaf scan with an overlay of the previously traced path and a bounding box indicating the current position of the CNN model. 
\underline{Top right}: the image tile with an overlay of the previously traced path that is input to the CNN. \underline{Bottom right}: the predicted pixels along the contour of the leaf that are used to update the position in the next iteration. The iteration proceeds until the CNN predictions reach the start of the trace. Note that in practice the iteration completes in $\sim$1 second, but is slowed down for better visualization. \\ \noindent \textbf{Video S2:} \textbf{Vein segmentation video.} Animation of the vein growing algorithm, in which a CNN iteratively adds pixels to a growing segmentation of the visible vein architecture. \underline{Left}: the original leaf scan with an overlay of the pixels being considered by the CNN in yellow and classified vein pixels in red. \underline{Top right}: a zoomed-in view of the top of the leaf and overlay. \underline{Bottom right}: a zoomed-in view of the bottom of the leaf and overlay. The iteration proceeds, continuously adding new pixels to the segmentation, until no new pixels remain in the sample set. Note that in practice the iteration completes in $\sim$60 seconds, but is sped up for better visualization. \newpage \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{figures/overlays.png} \caption{\textbf{Example leaf and vein segmentations.} Results of the leaf and vein segmentation methods on example leaf images outside the training set. Traced leaf contours are shown in blue and vein segmentations in red. \underline{Top}: segmentation overlays for leaves varying in size, going from smallest (left) to largest (right). \underline{Bottom}: segmentation overlays for leaves of approximately equal area, but varying in vein density, going from sparse (left) to dense (right) venation.} \label{fig:overlays} \end{figure} \newpage \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{figures/accuracy.png} \caption{\textbf{Vein segmentation accuracy.} Results of the vein segmentation method on a leaf from the validation set. The top row shows the full leaf and the bottom row gives a zoomed-in view. \underline{Left}: example leaf scan chosen from the validation set. \underline{Center left}: hand-annotated vein segmentation overlaid in red. \underline{Center right}: predicted vein segmentation overlaid in red. \underline{Right}: a comparison between the ground truth and predicted segmentations, in which red pixels indicate true positives, green pixels indicate false positives, and blue pixels indicate false negatives. Note that the zoomed-in tile reveals veins identified by the region growing method that are incorrectly reported as false positives (see veins with only green pixels) due to errors in the ground truth segmentation.} \label{fig:accuracy} \end{figure} \newpage \begin{longtable}[h!]{lcccp{7.5cm}} \caption{\textbf{Leaf features.} Includes names, units, broad-sense clonal heritability estimates, tools, and descriptions of the 23 traits related to leaf morphology and color. Abbreviations: avg: average, max: maximum, min: minimum.} \label{tab:leaftraits} \\ \hline \textbf{Feature} & \textbf{Units} & \textbf{$H^2$} & \textbf{Tool} & \textbf{Description} \\ \hline \endfirsthead \textbf{Feature} & \textbf{Units} & \textbf{$H^2$} & \textbf{Tool} & \textbf{Description} \\ \hline \endhead Area & cm$^2$ & $0.30$ & Fiji & Total pixel count of leaf segmentation \\ Aspect ratio & - & $0.58$ & Fiji & Ellipse major axis / ellipse minor axis \\ Bottom blue & - & $0.57$ & Fiji & Avg. blue value of leaf abaxial side \\ Bottom brightness & - & $0.42$ & Fiji & Avg. brightness value of leaf abaxial side \\ Bottom green & - & $0.41$ & Fiji & Avg. green value of leaf abaxial side \\ Bottom hue & - & $0.39$ & Fiji & Avg. hue value of leaf abaxial side \\ Bottom red & - & $0.45$ & Fiji & Avg. red value of leaf abaxial side \\ Bottom saturation & - & $0.27$ & Fiji & Avg. saturation value of leaf abaxial side \\ Circularity & - & $0.23$ & Fiji & $4\pi A/P^2$ where $A$: area and $P$: perimeter \\ Convex area & mm$^2$ & $0.29$ & RVE & Total pixel count of convex hull \\ Major axis length & cm & $0.21$ & Fiji & Major axis length of best-fit ellipse \\ Minor axis length & cm & $0.44$ & Fiji & Minor axis length of best-fit ellipse \\ Max. Feret & cm & $0.23$ & Fiji & Max. distance between any two points in the leaf segmentation \\ Min. Feret & cm & $0.42$ & Fiji & Min. distance between two parallel lines tangent to Max. Feret line \\ Perimeter & cm & $0.25$ & Fiji & Sum of Euclidean distances between contour pixels in the leaf segmentation \\ Roundness & - & $0.56$ & Fiji & $4A/(\pi M^2)$ where $A$: area, $M$: major axis \\ Solidity & - & $0.09$ & Fiji & $A/C$ where $A$: area and $C$: convex area \\ Top blue & - & $0.26$ & Fiji & Avg. blue value of leaf adaxial side \\ Top brightness & - & $0.26$ & Fiji & Avg. brightness value of leaf adaxial side \\ Top green & - & $0.23$ & Fiji & Avg. green value of leaf adaxial side \\ Top hue & - & $0.29$ & Fiji & Avg. hue value of leaf adaxial side \\ Top red & - & $0.24$ & Fiji & Avg. red value of leaf adaxial side \\ Top saturation & - & $0.21$ & Fiji & Avg. saturation value of leaf adaxial side \\ \hline \end{longtable} \newpage \begin{longtable}[h!]{lcccp{7.5cm}} \caption{\textbf{Vein features.} Includes names, units, broad-sense clonal heritability estimates, tools, and descriptions of the 27 traits related to vein morphology. Abbreviations: avg: average, DR: diameter range, max: maximum, min: minimum, RVE: RhizoVision Explorer.} \label{tab:veintraits} \\ \hline \textbf{Feature} & \textbf{Units} & \textbf{$H^2$} & \textbf{Tool} & \textbf{Description} \\ \hline \endfirsthead \textbf{Feature} & \textbf{Units} & \textbf{$H^2$} & \textbf{Tool} & \textbf{Description} \\ \hline \endhead Area & mm$^2$ & $0.43$ & RVE & Total pixel count of vein segmentation \\ Area DR 1 & mm$^2$ & $0.55$ & RVE & Projected area of veins with DR 0 - 0.25 mm \\ Area DR 2 & mm$^2$ & $0.38$ & RVE & Projected area of veins with DR 0.25 - 0.8 mm \\ Area DR 3 & mm$^2$ & $0.26$ & RVE & Projected area of veins with DR above 0.8 mm \\ Avg. diameter & mm & $0.34$ & RVE & Avg. skeletal pixel radius, doubled for diameter \\ Convex area & mm$^2$ & $0.29$ & RVE & Total pixel count of convex hull \\ Density & - & $0.65$ & Custom & Ratio of vein area to leaf area \\ Length-to-area ratio & - & $0.62$ & RVE & $V/A$ where $V$: total length, $A$: leaf area \\ Max. depth & mm & $0.24$ & RVE & Max. vertical distance in vein segmentation \\ Max. diameter & mm & $0.32$ & RVE & Max. skeletal pixel radius, doubled for diameter \\ Max. width & mm & $0.41$ & RVE & Max. horizontal distance in vein segmentation \\ Network solidity & - & $0.64$ & RVE & Ratio of network area to convex area \\ Perimeter & mm & $0.52$ & RVE & Sum of Euclidean distances between contour pixels in the vein segmentation \\ Surface area & mm$^2$ & $0.46$ & RVE & Length multiplied by cross-section circumference summed over skeletal pixels \\ Surface area DR 1 & mm$^2$ & $0.55$ & RVE & Surface area of veins with DR 0 - 0.25 mm \\ Surface area DR 2 & mm$^2$ & $0.38$ & RVE & Surface area of veins with DR 0.25 - 0.8 mm \\ Surface area DR 3 & mm$^2$ & $0.26$ & RVE & Surface area of veins with DR above 0.8 mm \\ Third order fraction & - & $0.29$ & RVE & Ratio of total length of DR 3 to total length \\ Total length & mm & $0.53$ & RVE & Sum of Euclidean distances between connected skeletal pixels \\ Total length DR 1 & mm & $0.56$ & RVE & Total length of veins with DR 0 - 0.25 mm \\ Total length DR 2 & mm & $0.40$ & RVE & Total length of veins with DR 0.25 - 0.8 mm \\ Total length DR 3 & mm & $0.27$ & RVE & Total length of veins with DR above 0.8 mm \\ Volume & mm$^3$ & $0.29$ & RVE & Length multiplied by cross-section area summed over skeletal pixels \\ Volume DR 1 & mm$^3$ & $0.55$ & RVE & Volume of veins with DR 0 - 0.25 mm \\ Volume DR 2 & mm$^3$ & $0.37$ & RVE & Volume of veins with DR 0.25 - 0.8 mm \\ Volume DR 3 & mm$^3$ & $0.27$ & RVE & Volume of veins with DR above 0.8 mm \\ Width-to-depth ratio & - & $0.55$ & RVE & Ratio of max. width to depth \\ \hline \end{longtable} \newpage \begin{longtable}[h!]{lcccp{7.5cm}} \caption{\textbf{Petiole features.} Includes names, units, broad-sense clonal heritability estimates, tools, and descriptions of the 18 traits related to petiole morphology and color. Abbreviations: avg: average, max: maximum, min: minimum. Note that Max. Feret is equivalent to the petiole length that is validated against real-world measurements in this work.} \label{tab:petioletraits} \\ \hline \textbf{Feature} & \textbf{Units} & \textbf{$H^2$} & \textbf{Tool} & \textbf{Description} \\ \hline \endfirsthead \textbf{Feature} & \textbf{Units} & \textbf{$H^2$} & \textbf{Tool} & \textbf{Description} \\ \hline \endhead Area & cm$^2$ & $0.49$ & Fiji & Total pixel count of petiole segmentation \\ Aspect ratio & - & $0.41$ & Fiji & Ellipse major axis / ellipse minor axis \\ Bottom blue & - & $0.29$ & Fiji & Avg. blue value of petiole abaxial side \\ Bottom brightness & - & $0.25$ & Fiji & Avg. brightness value of petiole abaxial side \\ Bottom green & - & $0.27$ & Fiji & Avg. green value of petiole abaxial side \\ Bottom hue & - & $0.15$ & Fiji & Avg. hue value of petiole abaxial side \\ Bottom red & - & $0.22$ & Fiji & Avg. red value of petiole abaxial side \\ Bottom saturation & - & $0.33$ & Fiji & Avg. saturation value of petiole abaxial side \\ Circularity & - & $0.45$ & Fiji & $4\pi A/P^2$ where $A$: area and $P$: perimeter \\ Major axis length & cm & $0.52$ & Fiji & Major axis length of the best-fit ellipse \\ Minor axis length & cm & $0.20$ & Fiji & Minor axis length of the best-fit ellipse \\ Max. Feret & cm & $0.55$ & Fiji & Max. distance between any two points in the petiole segmentation \\ Min. Feret & cm & $0.09$ & Fiji & Min. distance between two parallel lines tangent to Max. Feret line \\ Perimeter & cm & $0.55$ & Fiji & Sum of Euclidean distances between contour pixels in the petiole segmentation \\ Roundness & - & $0.39$ & Fiji & $4A/(\pi M^2)$ where $A$: area, $M$: major axis \\ Solidity & - & $0.10$ & Fiji & $A/C$ where $A$: area and $C$: convex area \\ Volume & mm$^3$ & $0.43$ & RVE & Length multiplied by cross-section area estimated from petiole diameter \\ Width & cm & $0.25$ & Custom & Avg. diameter of the center 20\% of the petiole \\ \hline \end{longtable} \newpage \printbibliography \end{document}
\section{Introduction} When $\Gamma$ is a finitely generated group, one may form its $\SL{2}{\C}$-character variety, which is an algebraic set parametrizing representations $\Gamma \rightarrow \SL{2}{\C}$. Work of Thurston and Culler-Shalen (\cite{CullerShalen}) introduced the character variety to the study of the geometry and topology of compact $3$-manifolds. More recently, Chinburg-Reid-Stover in \cite{CRS} paid particular attention to the arithmetic aspects of the component of the character variety---called the canonical component---containing the character of the faithful discrete representation in the setting that the $3$-manifold $M$ is a hyperbolic knot complement. More specifically, if we write $C$ for the canonical component and $k(C)$ for its function field, there is a canonically defined quaternion algebra $A_{k(C)}$ over $k(C)$. The geometric content encoded by this object is that it specializes at a character of a hyperbolic Dehn surgery to the quaternion algebra associated to that Kleinian group. \par It is natural to ask how these quaternion algebras vary through different surgeries on the knot. Chinburg-Reid-Stover show in \cite{CRS} that a condition on the Alexander polynomial of the knot (called condition $(\star)$ in \cite{CRS}) guarantees that there are only finitely many rational primes lying under any finite prime ramifying the specializations of this quaternion algebra. Let us write $S$ for this set of rational primes. Let us define $S_{D} \subseteq S$ to be the set of rational primes $p$ such that there is a specialization to the character of a hyperbolic Dehn surgery for which the quaternion algebra is ramified at some prime lying above $p$; in particular, knots satisfying condition $(\star)$ have $S_D$ of finite cardinality. When condition $(\star)$ fails, it is shown in \cite[Theorem 1.1(3)]{CRS} (using work of Harari \cite{Harari94}) that $S$ is infinite. They furthermore state as a conjecture \cite[Conjecture 6.7]{CRS} that \begin{conj}[{\cite[Conjecture 6.7]{CRS}}] \label{conj:inf_bad} Let $K$ be a hyperbolic knot in $S^3$ that fails condition $(\star)$. Then, in the notation above, $S=S_{D}$. \end{conj} The first example of a knot with infinite $S_D$ was given in \cite{SevenFour}. Our main result is to give an infinite family of twist knots that fail condition $(\star)$ and that have infinite $S_D$. \begin{thm} \label{thm:main} Let $t\geq 2$, and let $K_t$ be the hyperbolic twist knot with $t$ half-twists. Suppose further that there exist distinct, rational, odd primes $p$ and $q$ such that \begin{enumerate} \item $pq \mid \frac{t+1}{2}$, and \item $t \equiv {-1}\Mod{pq}$. \end{enumerate} Set $T$ to be the set of rational primes $l$ such that there exists a place $\mathfrak{l}$ of the trace field of some hyperbolic Dehn surgery $(d,0)$ lying above $l$ at which the canonical quaternion algebra associated to that surgery is ramified. Then $T$ is infinite. \end{thm} \begin{rem} Infinitely many twist knots are covered by the theorem. For example, one can fix any pair of distinct, rational, odd primes $p$ and $q$ and consider the arithmetic progression $\{2pq-1 + 2jpq \> | \> j \in \Z_{\geq 0}\}$. \end{rem} \subsection{Outline} The paper is organized as follows. We introduce some background material on character varieties, cyclotomic fields, quaternion algebras, and the Brauer group of fields and varieties in Sections \ref{sec:charVars}, \ref{sec:cyclo}, and \ref{sec:brauer}.
We then give an outline of the proof of Theorem \ref{thm:main} in Section \ref{sec:sketch} without giving proofs of intermediate steps. The remaining sections contain the proofs of the needed lemmas and propositions. \subsection{Acknowledgments} The author wishes to thank his advisor, Alan Reid, for his help and support throughout this project. The author also wishes to thank Neil Hoffman for pointing out some errors in earlier versions of this paper. \section{Character varieties} \label{sec:charVars} In this section we give some background on $\SL{2}{\C}$-character varieties of Kleinian groups, trace fields, and quaternion algebras. We defer more details about quaternion algebras over fields and quaternion Azumaya algebras to Section \ref{sec:brauer}. \subsection{Generalities} We begin by recalling that, for a finitely generated group $\Gamma$, the $\SL{2}{\C}$-representation variety of $\Gamma$ is $R(\Gamma) = \Hom(\Gamma, \SL{2}{\C})$. Given a generating set $\{\gamma_1,\dots,\gamma_n\}$, we identify a representation $\rho : \Gamma \rightarrow \SL{2}{\C}$ with $(\rho(\gamma_1),\dots,\rho(\gamma_n)) \in \SL{2}{\C}^n \subset \C^{4n}$. Given a different choice of generators, there is a canonical isomorphism between the two algebraic sets obtained this way. Fixing an element $\gamma \in \Gamma$, we may define a map $I_{\gamma}$ on $R(\Gamma)$ that associates to a representation $\rho$ the trace of $\rho(\gamma)$. That is, $I_{\gamma} : R(\Gamma) \rightarrow \C$ is defined by $I_{\gamma}(\rho) = \tr \rho(\gamma) = \chi_{\rho}(\gamma)$. This is a regular function on the algebraic set $R(\Gamma)$, and the ring $T$ generated by all such $I_{\gamma}$ turns out to be finitely generated (see \cite[Proposition 1.4.1]{CullerShalen}). Fixing a generating set $I_{\gamma_1},\dots,I_{\gamma_m}$ for $T$, define a map $t: R(\Gamma) \rightarrow \C^m$ by $t(\rho) = (I_{\gamma_1}(\rho),\dots,I_{\gamma_m}(\rho))$. Then the $\SL{2}{\C}$-character variety of $\Gamma$ is defined to be $X(\Gamma) = t(R(\Gamma)) \subset \C^m$. This is a closed algebraic set, and different choices of generators for $T$ give isomorphic algebraic sets. In the case that $\Gamma$ is the fundamental group of the complement of a hyperbolic knot $K$ in $S^3$, we define its \textbf{canonical component} to be the irreducible component of $X(\Gamma)$ containing the character of the discrete and faithful representation of $\pi_1(S^3 \backslash K)$. We refer the reader to \cite{CullerShalen} for more detail.\par Let us now fix some notation that we will use for the remainder of the paper. Let $K \subset S^3$ be a two-bridge knot. The fundamental group of $S^3 \setminus K$ can be generated by two meridians, $a$ and $b$, which are conjugate in $\pi_1\left(S^3 \setminus K \right)$. Any nonabelian representation $\rho$ of the knot group can then be conjugated to be of the form \[ \begin{aligned} \rho(a) &= \begin{pmatrix} x & 1 \\ 0 & 1/x \end{pmatrix} \\ \rho(b) &= \begin{pmatrix} x & 0 \\ r & 1/x \end{pmatrix}. \end{aligned} \] When $r=0$, the representation is reducible. We use the variables \[ \begin{aligned} Z &= \chi_{\rho}(a) = \chi_{\rho}(b) = x + \frac{1}{x} \\ R &= \chi_{\rho}(ab^{-1}) = 2 - r. \end{aligned} \] \subsection{Computation of the character varieties} We now collect some facts about the $\SL{2}{\C}$-character variety of the knot $K_t$. Most of this follows from work in \cite{MPvL}, but we prove some further specifics that will be needed for later proofs.
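Before doing so, we note that these coordinates admit a quick symbolic sanity check; the following sympy sketch (purely illustrative) verifies that $\chi_{\rho}(ab^{-1}) = 2 - r$ for the matrices above.
\begin{verbatim}
from sympy import symbols, Matrix, simplify

x, r = symbols('x r')
A = Matrix([[x, 1], [0, 1/x]])   # rho(a)
B = Matrix([[x, 0], [r, 1/x]])   # rho(b)

Z = simplify(A.trace())                  # chi(a) = x + 1/x
R = simplify((A * B.inv()).trace())      # chi(a b^{-1})
assert simplify(R - (2 - r)) == 0        # R = 2 - r, as claimed
\end{verbatim}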
For the knots covered by Theorem \ref{thm:main} (and others), results in \cite{MPvL} show that there is only one component containing the character of an irreducible representation. As a matter of notation our knots $K_t$ are their knots $J(-t, 2)$. We will use their work to prove the following characterization of the defining polynomial for the canonical component. \begin{prop} \label{prop:f_tFormula} Let $t \geq 3$ be an odd positive integer. Write $f_t(R,Z)$ for the defining polynomial of the $\SL{2}{\C}$-character variety for the knot $K_t$. Then, \[ f_t(R,Z) = \begin{cases} R^t + \left(1-Z^2\right) + \sum\limits_{i=1}^{t-1}\left(a_i - b_i Z^2\right)R^i & t\equiv 1\Mod{4} \\ R^t - 1 + \sum\limits_{i=1}^{t-1}\left(a_i - b_i Z^2\right)R^i & t\equiv 3\Mod{4}, \end{cases} \] where $a_i, b_i \in \Z$ and $a_{t-1}=b_{t-1} = 1$. In particular, $\deg_Z{f_t} = 2$. \end{prop} The proof is straightforward using results in \cite{MPvL}, but their work requires some setup. The following definition is from {\cite[Definitions 3.1 and 3.2]{MPvL}} except their $f$ and $g$ are our $\sigma$ and $\tau$, respectively, and we use $i$ to index rather than their $j$ or $k$. \begin{defn} Set $\sigma_0 = 0$, $\sigma_1 = 1$. For all other $i \in \Z$, define $\sigma_i \in \Z[u]$ by the relation $\sigma_{i+1} - u \sigma_i + \sigma_{i-1} = 0$. For all integers $i$, define $\tau_i = \sigma_i - \sigma_{i-1}$, $\Phi_{2i} = \sigma_i$, $\Phi_{2i-1} = \tau_i$, and $\Psi_i = \Phi_{i+1} - \Phi_{i-1}$. \end{defn} \begin{lem}[{\cite[Lemma 3.5]{MPvL}}] \label{lem:MPvLPhiRelations} For any integer $i$, we have $\Phi_{i+2} = u\Phi_i - \Phi_{i-2}$, $\Phi_i = (-1)^{i+1}\Phi_{-i}$, and $\deg \Phi_i = \left\lfloor{\left(\abs{i}-1\right)/2}\right\rfloor$. \end{lem} Using these notations, we may write down polynomials defining the $\PSL{2}{\C}$- and $\SL{2}{\C}$-character varieties. \begin{prop}[{\cite[Proposition 3.8]{MPvL}}] \label{prop:MPvLvarieties} Let $\mu, \nu$ be any integers with $\nu$ even. Let $Y = \chi_{\rho}(a^2)$ and $R$, $Z$ be as in Section \ref{sec:sketch}. The $\PSL{2}{\C}$-character variety of $J(\mu, \nu)$ is isomorphic to the subvariety of $\mathbf{A}^2$ cut out by the polynomial \[ h_{\mu, \nu}(R, Y) = \sigma_{\xi}(\theta)\left(\Phi_{-\mu}(R)\Phi_{\mu-1}(R)\left(Y-R\right)-1\right) + \sigma_{\xi-1}(\theta), \] where $\theta = \Phi_{-\mu}(R)\Phi_{\mu}(R)(Y-R) + 2$ and $\xi = \nu/2$. The $\SL{2}{\C}$-character variety is isomorphic to the double cover of the above model of the $\PSL{2}{\C}$-character variety given by $Y = Z^2 - 2$. \end{prop} Fortunately for us, the above formula cleans up significantly when we restrict attention to the family of twist knots $K_t = J(-t, 2)$. In particular, $\xi = 1$ in which case $\sigma_{\xi} = 1$ and $\sigma_{\xi-1} = 0$. This also allows us to ignore the $\theta$ variable. We summarize this as \begin{lem} \label{lem:simplerCharVar} Let $t$ be a nonzero integer. The $\PSL{2}{\C}$-character variety of the knot $K_t$ is given by \[ h_t(R,Y) = \Phi_t(R)\Phi_{-t-1}(R)(Y-R) - 1. 
\] \end{lem} \begin{proof}[Proof of Proposition \ref{prop:f_tFormula}] The content of Proposition \ref{prop:f_tFormula} is \begin{enumerate}[ref=(\arabic*)] \item \label{item:bidegree} the $(R,Z)$-bidegree of $f_t(R,Z)$ is $(t,2)$, \item \label{item:noZfirst} there are no $Z$ to the first power terms, \item \label{item:leadingCoefficients} the coefficients of $R^t$ and $R^{t-1}$ are $1$ and $(1-Z^2)$ respectively, and \item \label{item:constantTerm} the constant term with respect to $R$ is $(1-Z^2)$ when $t \equiv 1 \Mod{4}$ and $-1$ when $t \equiv 3 \Mod{4}$. \end{enumerate} We note that the $R$-degree part of item \ref{item:bidegree} follows from Lemma \ref{lem:MPvLPhiRelations} and the $Z$-degree part from Lemma \ref{lem:simplerCharVar}, as the $\Phi_i$ family are univariate polynomials in $R$. We also get item \ref{item:noZfirst} from Lemma \ref{lem:simplerCharVar}. To prove items \ref{item:leadingCoefficients} and \ref{item:constantTerm}, we first show that $\Phi_i$ is of the form \[ \Phi_i(u) = \begin{cases} 0 & i = 0 \\ u^{\frac{\abs{i}-1}{2}}-u^{\frac{\abs{i}-3}{2}} + \cdots + 1 & i \equiv 1,7\Mod{8} \\ u^{\frac{\abs{i}-1}{2}}-u^{\frac{\abs{i}-3}{2}} + \cdots - 1 & i \equiv 3,5\Mod{8} \\ \sgn(i)u^{\frac{\abs{i}}{2}-1} + \lambda_{i_1}u^{\frac{\abs{i}}{2}-3} + \cdots + \lambda_{i_2}u & i\neq 0, i \equiv 0,4\Mod{8} \\ \sgn(i)u^{\frac{\abs{i}}{2}-1} + \lambda_{i_1}u^{\frac{\abs{i}}{2}-3} + \cdots + 1 & i \equiv 2\Mod{8} \\ \sgn(i)u^{\frac{\abs{i}}{2}-1} + \lambda_{i_1}u^{\frac{\abs{i}}{2}-3} + \cdots - 1 & i \equiv 6\Mod{8}, \end{cases} \] where $\lambda_{i_j} \in \Z$. That is, when $i$ is odd, $\Phi_i$ is monic, its second leading coefficient is $-1$, and its constant term is $\pm 1$ depending on the residue class modulo $8$ of $i$; when $i$ is even, the leading coefficient is equal to the sign of $i$, the second leading coefficient is $0$, and the constant term is $0$ when $i$ is divisible by $4$, $1$ when $i \equiv 2 \Mod{8}$, and $-1$ when $i \equiv 6\Mod{8}$. The degrees follow from Lemma \ref{lem:MPvLPhiRelations}. Moreover, note that it suffices to handle the cases of $i$ positive since we have the relation $\Phi_i = (-1)^{i+1}\Phi_{-i}$, so we henceforth assume $i$ positive. We use the base cases $\Phi_1(u) = \Phi_2(u) = 1$, $\Phi_3(u) = u-1$, $\Phi_4(u) = u$, and $\Phi_5(u) = u^2-u-1$. We use the relation $\Phi_{i+2} = u\Phi_i - \Phi_{i-2}$ to see immediately that, when $i$ is even, $\Phi_{i+2}$ must be monic. To see that the second leading coefficient is $0$ when $i$ is even, we use the equality $\Phi_{i+2} = u\Phi_i - \Phi_{i-2}$ to find that the second leading coefficient of $\Phi_{i+2}$ is the same as that of $u\Phi_i$: the second leading coefficient is the coefficient of $u$ to the power $(i-2)/2$, but $\Phi_{i-2}$ has degree $(i-4)/2$ and so makes no contribution. Using the base case of $\Phi_4$, we see that the second leading coefficient of $\Phi_i$ is $0$ when $i$ is even. For $i$ odd, note that the relation $\Phi_{i+2} = u\Phi_i - \Phi_{i-2}$ implies that the leading coefficient of $\Phi_{i+2}$ is the same as that of $\Phi_i$ (and so equal to $1$ by induction). Further, the degree of $\Phi_{i-2}$ is $(i-3)/2$, so it makes no contribution to the coefficient of $u^{\frac{i-1}{2}}$, which is the second leading term of $\Phi_{i+2}$. Then by the inductive hypothesis, we see that the second leading coefficient must be $-1$. To treat the constant term, note that $\Phi_{i+2}(0) = -\Phi_{i-2}(0)$, so the constant term depends only on the residue class modulo $8$.
The base cases listed above combined with this observation handle all the constant terms. \par Now we may prove item \ref{item:leadingCoefficients}. Recall that $t$ is odd and positive, so $\Phi_t(R)$ is monic and $\Phi_{-t-1}(R)$ has leading coefficient $-1$. Note that the second leading coefficient of $\Phi_{-t-1}(R)$ is $0$, so the second leading coefficient of $\Phi_t(R)\Phi_{-t-1}(R)$ is the product of the term $-R^{\frac{t-3}{2}}$ of $\Phi_t(R)$ with the leading term $-R^{\frac{t+1}{2}-1}$ of $\Phi_{-t-1}(R)$, namely $1$. We summarize this as \[ \begin{aligned} h_t(R,Y) &= \Phi_t(R)\Phi_{-t-1}(R)(Y-R) - 1 \\ &=\left(-R^{t-1}+R^{t-2}+ \cdots \right)(Y-R) - 1 \\ &=R^t + (-1-Y)R^{t-1} + \cdots \\ \end{aligned} \] Then we see that $h_t(R,Y)$ is monic in $R$, and after substituting $Y=Z^2-2$, the second leading coefficient is $(1-Z^2)$. \par For item \ref{item:constantTerm}, we first suppose that $t\equiv 1\Mod{4}$, so $t$ is congruent to either $1$ or $5$ modulo $8$. If $t \equiv 1 \Mod{8}$, then $\Phi_t(0) = 1$ and $\Phi_{-t-1}(0) = -1$ as $-t-1 \equiv 6 \Mod{8}$. If $t \equiv 5 \Mod{8}$, then $\Phi_t(0) = -1$ and $\Phi_{-t-1}(0) = 1$ since $-t-1 \equiv 2 \Mod{8}$. In either case, then, $h_t(0,Y) = -Y-1$, which becomes $1-Z^2$. If $t\equiv 3 \Mod{4}$, then $-t-1$ is divisible by $4$, so $\Phi_{-t-1}(0) = 0$, and hence $h_t(0,Y) = -1$. \end{proof} Next, we prove a lemma relating the character variety to the Alexander polynomial. \begin{lem} \label{lem:constantTerm} Let $f_t$ be as above and $\Delta_{K_t}(x) = \left(\frac{t+1}{2}\right)x^2 - tx + \left(\frac{t+1}{2}\right)$ be the Alexander polynomial of $K_t$. Then, \[ f_t(2,x+x^{-1}) = \frac{-\Delta_{K_t}(x^2)}{x^2} = -\left(\left(\dfrac{t+1}{2}\right)x^2-t+\left(\dfrac{t+1}{2}\right)x^{-2}\right). \] \end{lem} \begin{proof} We first note that \[ \Phi_i(2) = \begin{cases} 1 & i \text{ odd} \\ i/2 & i \text{ even}. \end{cases} \] Indeed, note that $\Phi_1(2) = \Phi_3(2) = 1$, $\Phi_0(2) = 0$, and $\Phi_2(2) = 1$. The specialized relation $\Phi_{i+2}(2) = 2\Phi_i(2) - \Phi_{i-2}(2)$ immediately handles the $i$ odd case. For $i$ even, we may induct and note that \[ \begin{aligned} \Phi_{i+2}(2) &= 2\Phi_i(2) - \Phi_{i-2}(2) \\ &= 2\left(\dfrac{i}{2}\right)-\dfrac{i-2}{2} \\ &= \frac{i+2}{2}. \end{aligned} \] Now we compute using Lemma \ref{lem:simplerCharVar}. We have $h_t(2,Y) = \left(\dfrac{-t-1}{2}\right)\left(Y-2\right)-1$. Then we substitute $Y=Z^2-2$ and $Z = x + x^{-1}$. After cleaning up, we get the desired equality. \end{proof} \subsection{Number Fields and Quaternion Algebras Associated to Subgroups of \texorpdfstring{$\SL{2}{\C}$}{SL2(C)}} We next turn to some background information about subgroups of $\SL{2}{\C}$. A subgroup $\Gamma$ of $\SL{2}{\C}$ is \textbf{non-elementary} if its image in $\mathrm{PSL}_2\C$ has no finite orbit in its action on $\mathbf{H}^3\cup\widehat{\C}$. Given a non-elementary subgroup $\Gamma$ of $\SL{2}{\C}$, we define its \textbf{trace field} by $k_{\Gamma} = \Q\left(\tr \gamma \mathspace | \mathspace \gamma \in \Gamma\right)$ and its \textbf{quaternion algebra} $A_{\Gamma}$ to be the $k_{\Gamma}$-span of elements of $\Gamma$. That is, \[ A_{\Gamma} = \left\{\sum_{\text{finite}}\alpha_i \gamma_i \mathspace \big| \mathspace \alpha_i \in k_{\Gamma}, \gamma_i \in \Gamma \right\}. \] As shown in \cite[p.78]{MR}, we may write a Hilbert symbol for this quaternion algebra as \[ \HilbertSymbol{\chi(g)^2-4}{\chi([g,h])-2}{k_{\Gamma}}, \] where $g, h$ are noncommuting hyperbolic elements of $\Gamma$.
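For concreteness, both entries of this Hilbert symbol are mechanical to compute from a pair of matrices. The following sympy sketch uses an illustrative pair of noncommuting hyperbolic elements of $\SL{2}{\Z}$ (chosen for simplicity, and not tied to any particular knot group):
\begin{verbatim}
from sympy import Matrix

g = Matrix([[2, 1], [1, 1]])   # hyperbolic: |trace| = 3 > 2
h = Matrix([[1, 1], [1, 2]])   # hyperbolic, does not commute with g
assert g.det() == 1 and h.det() == 1

comm = g * h * g.inv() * h.inv()   # the commutator [g, h]
a = g.trace()**2 - 4               # first entry of the Hilbert symbol
b = comm.trace() - 2               # second entry of the Hilbert symbol
print(a, b)                        # 5 and -4, over the trace field (here Q)
\end{verbatim}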
In fact, this pointwise construction extends to define a quaternion algebra over the function field of the canonical component. \begin{prop}[{\cite[Corollary 2.9]{CRS}}] \label{CRSFunctionFieldHilbertSymbol} Let $\Gamma$ be a finitely generated group, and $C$ an irreducible component of the character variety of $\Gamma$ defined over the number field $k$. Assume that $C$ contains the character of an irreducible representation, and let $g,h \in \Gamma$ be two elements such that there exists a representation $\rho$ with character $\chi_{\rho} \in C$ for which the restriction of $\rho$ to $\langle g, h \rangle$ is irreducible. Then the canonical quaternion algebra $A_{k(C)}$ is described by the Hilbert symbol \[ \HilbertSymbol{I_g^2-4}{I_{[g,h]}-2}{k(C)}. \] \end{prop} \section{Algebraic number theory and cyclotomic fields} \label{sec:cyclo} In this section we collect some basic facts about number fields, cyclotomic fields, and their maximal totally real subfields that we will use in later sections. For context, the trace field of a $(d,0)$ surgery always contains the maximal totally real subfield of the $d$-th cyclotomic field as a subfield, and we often leverage properties of this subfield and its elements for information about the trace field. \subsection{Number fields} We first fix some language. A \textbf{number field} is a finite degree extension of the rational numbers. For a number field of degree $n$, there are $n$ distinct embeddings into $\C$ that fix $\Q$. If all of these embeddings have image inside $\R$, then the field is said to be \textbf{totally real}, and if none of them do, the field is \textbf{totally imaginary}. For $L/K$ an extension of number fields, we will use an important function $N_{L/K} : L \rightarrow K$ called the \textbf{field norm}. It is defined by \[ N_{L/K}(x) = \prod_{\sigma} \sigma(x), \] where the product is taken over embeddings $\sigma: L \rightarrow \overline{K}$ that fix $K$ element-wise and $\overline{K}$ is an algebraic closure of $K$. The norm behaves well in towers. \begin{prop}[{\cite[Corollary I.2.7]{NeukirchANT}}] Let $K \subseteq L \subseteq M$ be a tower of finite field extensions. Then, \[ N_{M/K} = N_{L/K} \circ N_{M/L}. \] \end{prop} For an element $x \in K$ that is integral over $\Z$, a rational prime $l$ divides $N_{K/\Q}(x)$ if and only if there is a prime ideal $\mathfrak{l}$ of $K$ lying above $l$ such that $x \in \mathfrak{l}$. See the next subsection for a primer on ideals in number fields. One may efficiently compute norms if the minimal polynomial for an element is known. In particular, if $K = \Q(x)$ and $p(t) \in \Q[t]$ is the monic minimal polynomial for $x$, then $N_{K/\Q}(x) = (-1)^{\deg p} p(0)$. This observation is actually how we produce candidate primes that might ramify the relevant quaternion algebras. \subsection{Rings of integers and prime ideals} For a number field $K$, the integral closure of $\Z$ inside $K$ is called the \textbf{ring of integers} of $K$, and is often written $\mathcal{O}_K$. This ring may be more concretely identified as the set of elements of $K$ that satisfy a monic polynomial with coefficients in $\Z$. The ring of integers is often not a unique factorization domain, but its ideals uniquely factor into prime ideals. By a ``prime in a number field,'' we will mean a prime ideal in the ring of integers of that number field.
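As a small illustration of the norm computation described above, the following sympy sketch computes the norm of $2\cos(2\pi/7) - 2$ from its minimal polynomial; elements of exactly this shape appear below as Hilbert symbol entries for $(d,0)$ surgeries, and the choice $d = 7$ is purely illustrative.
\begin{verbatim}
from sympy import symbols, cos, pi, minimal_polynomial, factorint

t = symbols('t')
p = minimal_polynomial(2*cos(2*pi/7) - 2, t)    # t**3 + 7*t**2 + 14*t + 7
deg = p.as_poly(t).degree()
norm = (-1)**deg * p.subs(t, 0)                 # N(x) = (-1)^deg * p(0) = -7
print(p, norm, factorint(norm))                 # only candidate prime: 7
\end{verbatim}
In particular, only primes lying above $7$ can contain $2\cos(2\pi/7)-2$.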
For an extension $L/K$ of number fields and a prime $\mathfrak{l}$ of $\mathcal{O}_K$, the primes appearing in the factorization of $\mathfrak{l}\mathcal{O}_L$ are said to \textbf{lie above} $\mathfrak{l}$, and $\mathfrak{l}$ \textbf{lies below} those primes of $\mathcal{O}_L$. For a prime $\mathfrak{l}$ of $\mathcal{O}_K$ with factorization \[ \mathfrak{l}\mathcal{O}_L = \prod_i \mathfrak{L_i}^{e_i}, \] the integer $e_i$ is the \textbf{ramification index} of $\mathfrak{L_i}$ over $\mathfrak{l}$. There \textit{is} an analogy between ramification of prime ideals in extensions and ramification of quaternion algebras, but they are distinct concepts. The degree of the field extension $\mathcal{O}_L/\mathfrak{L_i}$ over $\mathcal{O}_K/\mathfrak{l}$ is the \textbf{inertia degree} of $\mathfrak{L_i}$ over $\mathfrak{l}$. A fundamental fact is that if $\mathfrak{l}$ splits into $r$ distinct primes in $L$ with inertia degrees $f_i$ and ramification indices $e_i$, then $[L : K] = \sum\limits_{i=1}^{r}e_i f_i$. When $r=e_i=1$ for all $i$, we say that $\mathfrak{l}$ is \textbf{totally inert} or simply \textbf{inert}. When $L/K$ is Galois, the ramification indices and inertia degrees for any given prime $\mathfrak{L_i}$ above $\mathfrak{l}$ are the same as for any other prime $\mathfrak{L_j}$ above $\mathfrak{l}$. In this setting, we simply write $e$ and $f$ for the ramification index and inertia degree of any prime above $\mathfrak{l}$, and we have $[L : K] = ref$, where $r$ is the number of distinct primes that $\mathfrak{l}$ splits into. \subsection{Cyclotomic fields} A \textbf{cyclotomic field} is a field extension of $\Q$ obtained by adjoining roots of unity. By the Kronecker--Weber theorem, every finite abelian extension of $\Q$ is contained in a cyclotomic field. We shall write $\zeta_d$ for a primitive $d$-th root of unity, and by the $\mathbf{d}$\textbf{-th cyclotomic field}, we mean $\Q(\zeta_d)$, the field obtained by adjoining $\zeta_d$ to $\Q$. These fields and hence their subfields are abelian. For $d \geq 3$, $\Q(\zeta_d)$ is totally imaginary, but it does have a totally real subfield generated by $\zeta_d + \zeta_d^{-1}$. Moreover, this subfield is maximal with respect to inclusion among totally real subfields, so it may be uniquely identified as the maximal totally real subfield of $\Q(\zeta_d)$. We will often write $2\cos(2\pi/d)$ for $\zeta_d + \zeta_d^{-1}$ without a particular embedding of $\zeta_d$ into $\C$ in mind. We will also often write $\Q(\zeta_d)^+$ in place of $\Q(2\cos(2\pi/d))$. The rings of integers of $\Q(\zeta_d)$ and $\Q(\zeta_d)^+$ are $\Z[\zeta_d]$ and $\Z[2\cos(2\pi/d)]$, respectively (see \cite[Theorem 2.6, Proposition 2.16]{WashingtonCycloFields}). \par We will use the following description of the splittings of primes in cyclotomic extensions. \begin{thm}[{\cite[Proposition I.10.3]{NeukirchANT}}] \label{thm:cyclosplitting} Let $l$ be a rational prime coprime to $d$. Then $l$ factors into distinct primes in $\Z[\zeta_d]$, all with inertia degree equal to the multiplicative order of $l \Mod{d}$. \end{thm} We emphasize some special cases of the above theorem. A rational prime $l$ is totally split in $\Q(\zeta_d)$ if and only if it is congruent to $1 \Mod{d}$, and it is totally inert if and only if it is a primitive root modulo $d$. It follows that if $l \in \Z$ is totally split in $\Q(\zeta_d)^+$ but is not congruent to $1\Mod{d}$, then any prime $\mathfrak{l}$ of $\Q(\zeta_d)^+$ above $l$ is inert in $\Q(\zeta_d)$.
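In practice, Theorem \ref{thm:cyclosplitting} makes these splitting computations entirely mechanical; the following is a minimal pure-Python sketch (illustrative):
\begin{verbatim}
from math import gcd

def cyclotomic_splitting(l, d):
    """For gcd(l, d) = 1: l splits in Q(zeta_d) into r distinct primes,
    each with inertia degree f = multiplicative order of l mod d,
    where r = phi(d) / f (such l are unramified)."""
    assert gcd(l, d) == 1
    phi = sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)  # Euler phi
    f, power = 1, l % d
    while power != 1:
        f, power = f + 1, (power * l) % d
    return phi // f, f

print(cyclotomic_splitting(2, 7))    # (2, 3): 2 has order 3 mod 7
print(cyclotomic_splitting(29, 7))   # (6, 1): 29 = 1 mod 7, totally split
\end{verbatim}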
\section{Quaternion algebras, Azumaya algebras, and Brauer groups} \label{sec:brauer} \subsection{Quaternion algebras over fields} In this section we recall some facts about quaternion algebras and their generalization, Azumaya algebras. A \textbf{quaternion algebra} $A$ over a field $k$ is a $4$-dimensional central simple algebra over $k$. When $k$ has characteristic different from $2$, $A$ can be described as a $k$-vector space with basis $\{1, i, j, ij\}$ and algebra structure given by $i^2=a$, $j^2=b$, and $ij=-ji$, where $a,b \in k^{*}$. One may encode this information in a \textbf{Hilbert symbol} as $\HilbertSymbol{a}{b}{k}$. We will be primarily interested in the cases where $k$ is either a number field arising as a trace field of Dehn surgeries on a knot or the function field of the canonical component of the $\SL{2}{\C}$-character variety of a hyperbolic knot. The fundamental dichotomy for quaternion algebras is that they are either division algebras or matrix algebras. Let us collect some facts about quaternion algebras that we will use later. \begin{prop} Let $k$ be a field of characteristic not equal to $2$ and $a,b,x,y \in k^{*}$. \begin{enumerate} \item $\HilbertSymbol{a}{b}{k} \cong \HilbertSymbol{b}{a}{k}$, \item $\HilbertSymbol{a}{1}{k} \cong \mathrm{M}_2(k)$, \item $\HilbertSymbol{ax^2}{by^2}{k} \cong \HilbertSymbol{a}{b}{k}$. \end{enumerate} \end{prop} A quaternion algebra $A$ over a number field $k$ is determined up to isomorphism by the places $\mathfrak{l}$ of $k$ for which $A_{\mathfrak{l}}=A\otimes_k k_{\mathfrak{l}}$ is a division algebra, where $k_{\mathfrak{l}}$ is the completion of $k$ with respect to $\mathfrak{l}$. The set of such $\mathfrak{l}$ is finite and of even cardinality. We say $A$ is \textbf{ramified} at these places and \textbf{split} at all the others. By abuse of notation, we occasionally write $\HilbertSymbol{a}{b}{k_{\mathfrak{l}}} = -1$ when $A$ is ramified at $\mathfrak{l}$ and $+1$ when it is split. As justification for this notation, we may break up Hilbert symbols as follows: \[ \HilbertSymbol{a}{bc}{k_{\mathfrak{l}}} = \HilbertSymbol{a}{b}{k_{\mathfrak{l}}}\HilbertSymbol{a}{c}{k_{\mathfrak{l}}}, \] which is essentially equivalent to quadratic reciprocity. This multiplicative notation can also be understood as equivalence in the Brauer group. In particular, $\HilbertSymbol{a}{bc}{k}\otimes_k \mathrm{M}_2(k) \cong \HilbertSymbol{a}{b}{k}\otimes_k\HilbertSymbol{a}{c}{k}$. We will also use this multiplicative notation for number fields, where it should be interpreted as an equality of ramification sets. That is, $\HilbertSymbol{a}{bc}{k} = \HilbertSymbol{a}{b}{k}\HilbertSymbol{a}{c}{k}$ means that $\HilbertSymbol{a}{bc}{k_{\mathfrak{l}}} = \HilbertSymbol{a}{b}{k_{\mathfrak{l}}}\HilbertSymbol{a}{c}{k_{\mathfrak{l}}}$ for every prime $\mathfrak{l}$ of $k$. There is also an efficient way to compute ramification sets, which is another avatar of quadratic reciprocity. \begin{thm}[{\cite[Theorem 2.2.6(b)]{MR}}] \label{thm:MRlocalSymbol} Let $A$ be a quaternion algebra over a nondyadic $\mathfrak{l}$-adic field $k_{\mathfrak{l}}$ with ring of integers $\mathcal{O}$ and maximal ideal $\mathfrak{l}$. Let $A=\HilbertSymbol{a}{b}{k_{\mathfrak{l}}}$ with $a,b \in \mathcal{O}$. If $a \notin \mathfrak{l}$ and $b \in \mathfrak{l}\setminus\mathfrak{l}^2$, then $A$ splits if and only if $a$ is a square modulo $\mathfrak{l}$.
\end{thm} Let us also point out that quaternion algebras generate the $2$-torsion of the Brauer group of the field $k$. \subsection{Azumaya algebras} \label{subsec:azumaya} For a Noetherian scheme $X$, its \textbf{Brauer group} $\Br X$ is defined to be $H^2_{\mathrm{\acute{e}t}}(X, \mathbf{G}_m)$. We will have no need for the details of \'{e}tale cohomology, and in this paper one may think of $X$ as being the canonical component of a hyperbolic knot, or a smooth model thereof. \par We now wish to define a generalization of quaternion algebras over fields. Let $\mathscr{O}_X$ be the structure sheaf of $X$, so that $\mathscr{O}_X(U)$ is the ring of regular functions on $U$. A coherent sheaf of $\mathscr{O}_X$-algebras is a sheaf $\mathcal{F}$ of abelian groups on $X$ such that $\mathcal{F}(U)$ is a finitely generated $\mathscr{O}_X(U)$-algebra and the restriction maps are compatible with the algebra structure. Moreover, $\mathcal{F}$ is locally free if $X$ has an open covering by sets $U$ such that $\mathcal{F}|_{U}$ is a free $\mathscr{O}_X|_U$-module. A \textbf{quaternion Azumaya algebra} is a nonzero $\mathscr{O}_X$-algebra that is locally free of rank $4$. \par The connection to quaternion algebras over fields is that when $X$ is a variety over a field $k$ of characteristic not equal to $2$ and $\mathcal{A}$ is a quaternion Azumaya algebra, there is a finite open covering of $X$ by sets $U$ such that for each $U$, there is an isomorphism of $\mathscr{O}_X|_U$-modules \[ \mathcal{A}|_U \cong \mathscr{O}|_U \oplus i\mathscr{O}|_U \oplus j\mathscr{O}|_U \oplus ij\mathscr{O}|_U, \] where $i^2 = f_U$, $j^2 = g_U$, and $ij=-ji$ for $f_U,g_U \in k[U]^{*}$. So quaternion Azumaya algebras locally look like quaternion algebras. Moreover, one can take the fiber at a point $\mathcal{A}(x) = \mathcal{A} \otimes_{\mathscr{O}_X} k(x)$, where $k(x)$ is the residue field, to obtain a quaternion algebra over the residue field. \par Writing $k(X)$ for the function field of $X$, there is a canonical injection $\Br X \hookrightarrow \Br k(X)$ and an exact sequence which determines its image. \begin{thm}[{\cite[Theorem 6.8.3]{QPoints}}] \label{thm:cohomologicalPurity} Let $X$ be a regular integral Noetherian scheme. Let $X^{(1)}$ be the set of codimension $1$ points of $X$. Then the sequence \[ 0 \longrightarrow \Br X \longrightarrow \Br k(X) \xrightarrow{\mathrm{res}} \bigoplus_{x \in X^{(1)}} H^1(k(x), \Q/\Z) \] is exact, with the caveat that one must exclude the $p$-primary part of all the groups if $X$ is of dimension $\leq 1$ and some $k(x)$ is imperfect of characteristic $p$, or if $X$ is of dimension $\geq 2$ and some $k(x)$ is of characteristic $p$. \end{thm} This theorem is known as absolute cohomological purity. It was conjectured by Grothendieck and proved by Gabber, though the above formulation appears in \cite{QPoints}. The last arrow is the residue homomorphism to the Galois cohomology group $H^1(k(x), \Q/\Z) = H^1(\Gal(k(x)^{\mathrm{sep}}/k(x)), \Q/\Z)$. We say that a quaternion algebra $A_{k(X)}$ defined over the function field $k(X)$ ``extends'' over a point $x \in X$ if its residue is trivial at $x$. The exact sequence in Theorem \ref{thm:cohomologicalPurity} says that $A_{k(X)}$ extends to a quaternion Azumaya algebra if and only if its residue is trivial everywhere. For quaternion algebras over function fields, the residues may be calculated using a tame symbol, at least when the residue field has characteristic different from $2$.
Namely, let $\alpha, \beta \in k(X)^{*}$ and $x$ be a codimension $1$ point, and write \[ \{\alpha, \beta\} = (-1)^{\ord_x(\alpha)\ord_x(\beta)}\beta^{\ord_x(\alpha)}/\alpha^{\ord_x(\beta)} \in k(x)^{*}/k(x)^{*^2}, \] where $\ord_x(\alpha)$ is the order of vanishing of $\alpha$ at $x$. Then this class in $k(x)^{*}/k(x)^{*^2}$ is equal to the residue of $\HilbertSymbol{\alpha}{\beta}{k(X)}$ at $x$. For instance, if $\alpha$ is a unit at $x$ and $\beta$ vanishes to order exactly $1$ there, the residue is simply the class of $\alpha(x)$ in $k(x)^{*}/k(x)^{*^2}$. In particular, if this class is the trivial square class, then the algebra extends over $x$. See \cite[\S 2]{CTetalMumbai} for details. \par An important property of Azumaya algebras of varieties over number fields is that their fibers, which are quaternion algebras over number fields, can only ramify at a finite set of places. We give the statement that appears in \cite[Theorem 2.4(2)]{SkoroNotes}, though we point out that it holds for any Azumaya algebra, not just the quaternion ones. \begin{thm} \label{thm:resultatClassique} Let $X$ be a smooth projective irreducible variety over a number field $k$, and let $\mathcal{A}$ be a quaternion Azumaya algebra on $X$. Then, for almost all places $\mathfrak{l}$, we have $\mathcal{A}(P) \cong \mathrm{M}_2(k_{\mathfrak{l}})$ for all $P \in X(k_{\mathfrak{l}})$. \end{thm} However, work of Harari \cite{Harari94} shows that if $A_{k(X)}$ has a residue, then there are infinitely many places $\mathfrak{l}$ for which there is a nontrivial fiber at some local point. \subsection{Connection to Kleinian groups} We now explain the connection between the Azumaya algebra machinery and Kleinian groups. We refer the reader to \cite{CRS} for more details. Let $C$ denote the canonical component of a hyperbolic knot $K$. As mentioned in the introduction, there is always a quaternion algebra defined over the function field of the canonical component that specializes at a character of a hyperbolic Dehn surgery to the usual quaternion algebra of a Kleinian group. A natural question to ask is whether this quaternion algebra extends to an Azumaya algebra. The answer for hyperbolic knot complements is that it extends if and only if the Alexander polynomial satisfies condition $(\star)$ of \cite{CRS}. \begin{thm}[{\cite[Theorems 1.2 and 1.4]{CRS}}] \label{thm:1.2OfCRS} Let $K$ be a hyperbolic knot with $\Gamma = \pi_1\left(S^3 \backslash K\right)$, and suppose that $\Delta_K$ satisfies condition $(\star)$. Then \begin{enumerate} \item $A_{k(C)}$ comes from an Azumaya algebra in $\Br \tilde{C}$ where $\tilde{C}$ denotes the normalization of the projective closure of $C$. \item Furthermore, if the canonical component is defined over $\Q$, there exists a finite set $S_K$ of rational primes such that, for any hyperbolic Dehn surgery $N$ on $K$ with trace field $k_N$, the $k_N$-quaternion algebra $A_N$ can only ramify at real places of $k_N$ and finite places lying over primes in $S_K$. \end{enumerate} \end{thm} For example, the authors calculate in \cite{CRS} that the figure-eight knot can have only real and dyadic ramification. There is also a partial converse in \cite{CRS} (Theorems 1.2 and 1.4): $A_{k(C)}$ does not extend when the knot fails condition $(\star)$. With the results of \cite{Harari94}, this implies that one can obtain ramification above infinitely many rational primes by specializing $A_{k(C)}$. However, these points \textit{a priori} need not be interesting from the point of view of geometric structures.
Experimental evidence led the authors of \cite{CRS} to conjecture (Conjecture \ref{conj:inf_bad}) that when the knot fails condition $(\star)$, there should be ramification above infinitely many rational primes, even when one restricts attention to Dehn surgery points. \subsection{Ramification for Dehn surgery points} We now prove that the ramification of the specializations to $(d,0)$ surgery can be expressed in terms of the following Hilbert symbol. Throughout, we write $r_d$ for the algebraic number that $r$ specializes to at $(d,0)$ surgeries to avoid confusion with the coordinate $r=2-R$. \begin{prop} \label{prop:HilbertSymbolForKnot} Let $d$ be an odd positive integer that is not a power of a prime and suppose that $K$ is a $2$-bridge knot whose Alexander polynomial has only simple roots and is Azumaya negative. Let $k_d$ be the trace field of the $(d,0)$ surgery and $r_d$ as above. Then, there is a finite set $S$ of rational primes such that for a prime $\mathfrak{L}$ of $k_d$ lying above a prime not in $S$, the ramification of the (invariant) quaternion algebra for the $(d,0)$ surgery on $K$ agrees at $\mathfrak{L}$ with the Hilbert symbol \[ \HilbertSymbol{2\cos(2\pi/d)-2}{-r_d}{k_d}. \] \end{prop} \begin{rem} Proposition \ref{prop:HilbertSymbolForKnot} implies that if, as $d$ varies, the symbols \[ \HilbertSymbol{2\cos(2\pi/d)-2}{-r_d}{k_d} \] are ramified at primes $\mathfrak{l}$ of infinitely many different residue characteristics, then the (invariant) quaternion algebras for the $(d,0)$ surgeries likewise have infinitely many different ramified residue characteristics as $d$ varies. \end{rem} \begin{proof}[Proof of Proposition \ref{prop:HilbertSymbolForKnot}] In general the Hilbert symbol at a representation $\rho$ is given (see Proposition \ref{CRSFunctionFieldHilbertSymbol}) by \[ \HilbertSymbol{\chi_{\rho}(a)^2-4}{\chi_{\rho}([a,b])-2}{k_{\rho}}. \] Using the coordinates $Z$ and $R$ defined above, this looks like \begin{equation} \label{eq:breakingHilbertSymbol} \HilbertSymbol{Z^2-4}{2Z^2+R^2-Z^2R-4}{k(C)} = \HilbertSymbol{Z^2-4}{R-2}{k(C)}\HilbertSymbol{Z^2-4}{R+2-Z^2}{k(C)}. \end{equation} For notational convenience, call the leftmost symbol in Equation \ref{eq:breakingHilbertSymbol} $A$, the first symbol to the right of the equals sign $B$, and the rightmost symbol $C$. We claim that $C$ extends to an Azumaya algebra. Indeed, the entries of these three symbols vanish only along the loci determined by $R=2$, $Z=\pm 2$, and $Z^2=R+2$, so only there can any of $A$, $B$, and $C$ have a nontrivial residue. For $R=2$, note that $C$ has trivial residue there. In fact specializing to $R=2$ makes the second entry $4-Z^2$, so that the algebra is split (see \cite[Corollary 2.3.3]{MR}) at all specializations outside of $Z=\pm 2$, which we treat later. Then we find that $B$ retains the nontrivial residue at $R=2$. Moreover, $C$ \textit{a priori} might have a nontrivial residue at $Z^2=R+2$, but neither $A$ nor $B$ does, so neither does $C$. Finally, $B$ and $C$ might \textit{a priori} have nontrivial residues at $Z=\pm 2$. However, when $Z=\pm 2$, $R-2$ is a global square in the residue field. Indeed, since $a$ and $b$ are conjugate in the fundamental group, there is a conjugation \[ \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \delta & -\beta \\ -\gamma & \alpha \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ r & 1 \end{pmatrix}.
\] Multiplying everything out on the left-hand side yields an $\alpha^2$ in the upper right entry, so $\alpha=0$, which allows us to obtain \[ \begin{pmatrix} 0 & \beta \\ \gamma & \delta \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \delta & -\beta \\ -\gamma & 0 \end{pmatrix} = \begin{pmatrix} -\beta\gamma & 0 \\ -\gamma^2 & -\beta\gamma \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -\gamma^2 & 1 \end{pmatrix}. \] Note further that $\gamma$ is in the trace field of any representation with $Z=2$. Indeed, writing $\rho(c)$ for the element effecting the above conjugation (so $\rho(cac^{-1}) = \rho(b)$), $\gamma+\delta$ and $\delta-\gamma$ are the traces of $\rho(ca)$ and $\rho(ac^{-1})$, respectively. So $r = -\gamma^2$, implying $R-2 = \gamma^2$, which is a global square in the trace field of the representation. This implies that the tame symbol (see Subsection \ref{subsec:azumaya}) and hence the residue is trivial. A similar argument handles $Z=-2$. Since $A$ has trivial residue at $Z=\pm 2$ and $B$ does too, $C$ must have trivial residue there as well. Then $C$ extends to an Azumaya algebra, so by Theorem \ref{thm:resultatClassique}, there is a finite set $S$ of rational primes such that if $l \notin S$, then no specialization of $C$ is ramified at a prime above $l$. That is, for $l \notin S$ and $\mathfrak{l}$ above $l$, the ramifications of $A$ and $B$ agree. \par Specializing to $(d,0)$ surgery sends $Z$ to $2\cos(2\pi/d)$, so $B$ specializes to \[ \HilbertSymbol{4\cos^2(2\pi/d)-4}{R_d-2}{k_{d}}=\HilbertSymbol{2\cos(2\pi/d)-2}{R_d-2}{k_{d}}, \] where $R_d = 2 - r_d$. The above equality follows from the fact that when $d$ is odd, $2\cos(2\pi/d)+2$ is a global square in $\Q(\zeta_d)^{+}$, which is a subfield of $k_d$. One of its square roots is $\zeta_d^{\frac{d+1}{2}}+\zeta_d^{\frac{-(d+1)}{2}}$. Changing $R_d-2$ to $-r_d$ completes the proof. \end{proof} \section{Proof of Theorem \ref{thm:main}} \label{sec:sketch} The basic strategy for proving Theorem \ref{thm:main} is as follows. Proposition \ref{prop:HilbertSymbolForKnot} gives an explicit description for the Hilbert symbol of $(d,0)$ surgeries on the knots. In Lemma \ref{lem:HilbertSymbolTotallyReal} we use that Hilbert symbol to determine places at which the associated quaternion algebra is ramified, in terms of splitting conditions on the primes dividing $r_d$. Then, in Lemmas \ref{lem:relativeNorm} and \ref{lem:absoluteNorm}, we give conditions for a prime to divide $r_d$ and for it to satisfy the appropriate splitting conditions coming from Lemma \ref{lem:HilbertSymbolTotallyReal}, respectively. Finally, Lemmas \ref{lem:totallyRealTotallySplit} and \ref{lem:divisors} show how to find infinitely many such primes. The remainder of the section states these intermediate steps and explains how they add up to a proof of Theorem \ref{thm:main}. \par Theorem \ref{thm:main} then will follow once we can prove that there are infinitely many rational primes that are residue characteristics of ramified primes of the quaternion algebra determined by $\HilbertSymbol{2\cos(2\pi/d)-2}{-r_d}{k_d}$ for some $d$. The next lemma describes the ramification of this quaternion algebra in terms of the splitting of primes between $\Q(\zeta_d)^{+} = \Q(\zeta_d+\zeta_d^{-1})$ and $k_d$. \begin{lem} \label{lem:HilbertSymbolTotallyReal} Let $d$ be odd and let $r_d$ be an algebraic integer inside the fixed finite extension $k_d/\Q(\zeta_d)^{+}$.
Suppose that $\mathfrak{L}$ is a prime of $k_d$ that does not lie above $2$ or $d$, divides $-r_d$ an odd number of times, and has odd inertia degree over $\Q(\zeta_d)^{+}$. Suppose further that $\mathfrak{l} = \mathfrak{L} \cap \Q(\zeta_d)^{+}$ does not split in $\Q(\zeta_d)$. Then \[ \HilbertSymbol{2\cos(2\pi/d)-2}{-r_d}{k_d} \] is ramified at $\mathfrak{L}$. \end{lem} To prove that the quaternion algebra at the $(d,0)$ surgery is ramified at a prime $\mathfrak{L}$ of the trace field that lies above neither $2$ nor $d$, it suffices to show: \begin{enumerate}[label={(\arabic*)},ref={(\arabic*)}] \item $\mathfrak{L}$ divides $-r_d$ an odd number of times, \label{stepone} \item $\mathfrak{L}$ has odd inertia degree over $\Q(\zeta_d)^{+}$, and \label{steptwo} \item $\mathfrak{l} = \mathfrak{L} \cap \Q(\zeta_d)^{+}$ does not split in $\Q(\zeta_d)$. \label{stepthree} \end{enumerate} Theorem \ref{thm:main} will follow if we can arrange these conditions for primes above infinitely many distinct rational primes as we vary $d$. To handle conditions \ref{stepone} and \ref{steptwo}, we exploit a connection to the Alexander polynomial of the knot. To find primes dividing $r_d$, we compute its field norm, $N_{k_d/\Q}(r_d)$. The (absolute) field norm will be a rational integer whose prime divisors correspond to prime ideals dividing $r_d$. To compute this norm, we first find the relative field norm, $N_{k_d/\Q(\zeta_d)^+}(r_d)$, which can be expressed in terms of the Alexander polynomial of the knot. Recall from Section \ref{sec:cyclo} that $N_{k_d/\Q}(r_d) = N_{\Q(\zeta_d)^+/\Q}\left(N_{k_d/\Q(\zeta_d)^+}(r_d)\right)$ and that the Alexander polynomial of the knot $K_t$ is $\Delta_{K_t}(x)=\left(\frac{t+1}{2}\right)x^2-tx+\left(\frac{t+1}{2}\right)$. \begin{lem} \label{lem:relativeNorm} Let $d$ be odd and $k_d$ the trace field of the $(d,0)$ surgery on a hyperbolic twist knot $K_t$. Then the norm of $r_d$ in the relative extension $k_d/\Q(\zeta_d)^{+}$ is $\Delta_{K_t}(\zeta_d^2)=\left(\frac{t+1}{2}\right)\zeta_d^4-t\zeta_d^2+\left(\frac{t+1}{2}\right)$ for all but finitely many $d$. \end{lem} The proof of the above lemma requires showing that specializing the character variety at $Z=2\cos(2\pi/d)$ produces an irreducible polynomial over the field $\Q(\zeta_d)^{+}$. Calculation of the character variety can be found above in Section \ref{sec:charVars}, and irreducibility of the relevant specializations is treated in Section \ref{sec:irreducibility}. This irreducibility also somewhat justifies the notation $r_d$, as it represents an algebraic number that is well defined up to Galois conjugation. We are still left to find rational primes dividing the absolute norm, $N_{k_d/\Q}(r_d)$. This will be done in Lemma \ref{lem:divisors} once we can state precisely which rational primes ensure the desired ramification. \par To handle condition \ref{steptwo} about the inertia degree, we use the following lemma. \begin{lem} \label{lem:absoluteNorm} Suppose that $d$ is odd and that $l$ is an odd rational prime coprime to $d$ dividing $N_{\Q(\zeta_d)^+/\Q}(\Delta_{K_t}(\zeta_d^2))$ an odd number of times. Then there is a prime $\mathfrak{L}$ of $k_d$ above $l$ such that $\mathfrak{L}$ divides $-r_d$ an odd number of times and $\mathfrak{L}$ has odd inertia degree over $\Q(\zeta_d)^+$. \end{lem} The next lemma applies Lemma \ref{lem:absoluteNorm} to cast condition \ref{stepthree} also in terms of the Alexander polynomial.
\begin{lem} \label{lem:totallyRealTotallySplit} Suppose that $d$ is odd and that $l$ is a rational prime below a prime dividing $\Delta_{K_t}(\zeta_d^2)$. Suppose further that $l$ and $d$ are coprime to $t$ and $\frac{t+1}{2}$. Then $l$ is totally split in $\Q(\zeta_d)^+$. Hence, if $l \not\equiv 1\Mod{d}$, then all primes of $\Q(\zeta_d)^+$ above $l$ are inert in $\Q(\zeta_d)$. \end{lem} We work with $d$ of the form $d=p^uq^v$ for $p,q$ as in the statement of Theorem \ref{thm:main} and $u,v$ integers. Our work so far says that if a rational prime $l$ divides $N_{\Q(\zeta_d)^+/\Q}(\Delta_{K_t}(\zeta_d^2))$ an odd number of times and is not congruent to $1 \Mod{pq}$, then there is some prime $\mathfrak{L}$ of $k_d$ above $l$ at which the quaternion algebra for the $(d,0)$ surgery is ramified. We have not yet proved that any such $l$ exists. The next lemma shows that we may find infinitely many. Its proof combines an analysis of the resultant of the Alexander polynomial with the cyclotomic polynomials and a dynamical result of Furstenberg appearing in \cite{FurstenbergNonlacunary}. \begin{lem} \label{lem:divisors} Let $t \in \Z_{\geq 0}$ be odd, $\Delta(x) = \left(\frac{t+1}{2}\right)x^2-tx+\left(\frac{t+1}{2}\right) \in \Z[x]$. Let $p,q$ be distinct, rational, odd primes and suppose that $\Delta(x) \equiv 1 \Mod{pq}$. Then there are infinitely many positive integers $d$ for which $N_{\Q(\zeta_d)^+/\Q}(\Delta(\zeta_d^2))$ is divisible by a rational prime $l$ an odd number of times and $l \not\equiv 1 \Mod{pq}$. \end{lem} Fixing an odd integer $t \geq 3$, we apply Lemma \ref{lem:divisors} using the polynomial $\Delta_{K_t}(x) = \left(\frac{t+1}{2}\right)x^2-tx+\left(\frac{t+1}{2}\right)$, which is the Alexander polynomial of $K_t$. Then for each of the $d$ produced by Lemma \ref{lem:divisors}, the quaternion algebra for the $(d,0)$ surgery is ramified at some prime with residue characteristic $l$ as in the statement of Lemma \ref{lem:divisors}. The proof of Theorem \ref{thm:main} will be complete once we show (see Lemma \ref{lem:multiplicativeOrder} for details) that each such $l$ can only occur for finitely many $d$. \section{Ramification} \label{sec:ramification} In this section, we prove Lemmas \ref{lem:HilbertSymbolTotallyReal}, \ref{lem:absoluteNorm}, \ref{lem:totallyRealTotallySplit}, and \ref{lem:divisors}. \begin{lem} \label{lem:localSquare} If $d$ is an odd positive integer and $\mathfrak{l}$ is a prime ideal of $\Q(\zeta_d)^{+}$, then $2\cos(2\pi/d) - 2$ is a nonsquare modulo $\mathfrak{l}$ if and only if $\mathfrak{l}$ is inert in $\Q(\zeta_d)$. \end{lem} \begin{proof} Note that $2\cos(2\pi/d) - 2$ is not a global square in $\Q(\zeta_d)^{+}$ as it is negative at the real embeddings. It is, however, a global square in $\Q(\zeta_d)$. To see this, note that when $d$ is odd, $\zeta_d^{(d-1)/2} \in \Q(\zeta_d)$, and \[ \begin{aligned} \left(\zeta_d^{(d-1)/2}\left(\zeta_d-1\right)\right)^2 &= \zeta_d^{d+1} - 2 \zeta_d^{(d+1)/2+(d-1)/2} + \zeta_d^{d-1} \\ &= 2 \cos(2\pi/d) - 2. \end{aligned} \] So if $\mathfrak{l}$ splits (or is ramified) in $\Q(\zeta_d)$, then $\Z[2 \cos(2\pi/d)]/\mathfrak{l}$ coincides with a quotient of $\Z[\zeta_d]$ wherein $2\cos(2\pi/d) - 2$ is a square. On the other hand, if $\mathfrak{l}$ is inert, then the finite field containing the square root of the reduction of $2\cos(2\pi/d) - 2$ modulo $\mathfrak{l}$ is a proper extension of $\Z[2 \cos(2\pi/d)]/\mathfrak{l}$. That is, $2\cos(2\pi/d) - 2$ is a nonsquare modulo $\mathfrak{l}$. \end{proof}
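To illustrate Lemma \ref{lem:localSquare} with a small example of our own, take $d=5$ and $l=19$. Since $9^2 \equiv 5 \Mod{19}$, the prime $19$ splits in $\Q(\zeta_5)^{+}=\Q(\sqrt{5})$, and since $19$ has multiplicative order $2$ modulo $5$, the primes $\mathfrak{l}$ of $\Q(\sqrt{5})$ above $19$ are inert in $\Q(\zeta_5)$. Consistent with the lemma, choosing $\mathfrak{l}$ with $\sqrt{5} \equiv 9 \Mod{\mathfrak{l}}$ gives $2\cos(2\pi/5)-2 = \frac{\sqrt{5}-5}{2} \equiv 2 \Mod{\mathfrak{l}}$, and $2$ is a nonsquare modulo $19$.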
\begin{proof}[Proof of Lemma \ref{lem:HilbertSymbolTotallyReal}] This ramification is equivalent to $2\cos(2\pi/d)-2$ being a nonsquare modulo $\mathfrak{L}$ by Theorem \ref{thm:MRlocalSymbol}, which applies because $\mathfrak{L}$ is nondyadic and, after scaling by a square, $-r_d$ has valuation exactly $1$ at $\mathfrak{L}$. By hypothesis, the inertia degree is odd, so the residue field has odd degree over the finite field $\Z[2\cos(2\pi/d)]/\mathfrak{l}$. Moreover, since $\mathfrak{l}$ is inert in $\Q(\zeta_d)$, $2\cos(2\pi/d)-2$ is a nonsquare modulo $\mathfrak{l}$ by Lemma \ref{lem:localSquare}. Then no odd degree extension of $\Z[2\cos(2\pi/d)]/\mathfrak{l}$ can contain a square root of $2\cos(2\pi/d)-2$, so it is a nonsquare modulo $\mathfrak{L}$. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:absoluteNorm}] Consider the absolute norm $N_{k_d/\Q}(-r_d)$. By Lemma \ref{lem:relativeNorm}, \[ \begin{aligned} N_{k_d/\Q}(-r_d) &= N_{\Q(\zeta_d)^+/\Q}\left(N_{k_d/\Q(\zeta_d)^+}(-r_d)\right) \\ &= N_{\Q(\zeta_d)^+/\Q}(\Delta_{K_t}(\zeta_d^2)). \end{aligned} \] Now suppose that $l$ divides $N_{\Q(\zeta_d)^+/\Q}(\Delta_{K_t}(\zeta_d^2))$ an odd number of times. The multiplicity of $l$ in this norm is $\sum_{\mathfrak{L} \mid l} f(\mathfrak{L}/l)\ord_{\mathfrak{L}}(-r_d)$, so if this sum is odd, some summand must have both factors odd. That is, there is some prime $\mathfrak{L}$ of $k_d$ dividing $-r_d$ an odd number of times and of odd inertia degree over $\Q$, and hence over $\Q(\zeta_d)^+$. \end{proof} \begin{lem} \label{lem:multiplicativeOrder} Let $d$ be odd and $l$ be a rational prime lying below a prime of $\Q(\zeta_d)$ dividing $\Delta_{K_t}(\zeta_d^2)$. Then $l$ has multiplicative order either $1$ or $2$ modulo $d$. \end{lem} \begin{proof} Write $\mathfrak{l}$ for a prime of $\Q(\zeta_d)$ above $l$. Note that $\Delta_{K_t}(\zeta_d^2)$ is conjugate to $\Delta_{K_t}(\zeta_d)$ over $\Q$ as long as $d$ is odd. After multiplying by roots of unity, we may assume that $\mathfrak{l}$ divides the ideal generated by $\frac{t+1}{2}(\zeta_d+\zeta_d^{-1})-t$. Observe that $l$ cannot divide $\frac{t+1}{2}$, because this implies $t\zeta_d \equiv 0 \Mod{\mathfrak{l}}$, but then $l$ divides both $t$ and $t+1$, which is impossible. Similarly, $l$ cannot divide $t$, as this implies $\left(\frac{t+1}{2} \right)(\zeta_d+\zeta_d^{-1}) \equiv 0 \Mod{\mathfrak{l}}$, but $\zeta_d+\zeta_d^{-1}$ is a unit for $d$ odd, so this implies $t+1 \equiv 0 \Mod{\mathfrak{l}}$, again contradicting $l \mid t$. \par Recall that the multiplicative order of $l$ modulo $d$ is equal to the inertia degree of $l$ in $\Q(\zeta_d)$ by Theorem \ref{thm:cyclosplitting}, so the result will follow once we show that the degree of $\Z[\zeta_d]/\mathfrak{l}$ over $\mathbf{F}_l$ is $1$ or $2$. Now, $\Delta_{K_t}(\zeta_d)$ has $3$ nonzero terms, so its vanishing modulo $\mathfrak{l}$ forces a linear dependence modulo $\mathfrak{l}$ of $\{1, \zeta_d, \zeta_d^2\}$, so the finite field obtained by reducing modulo $\mathfrak{l}$ must have degree only $1$ or $2$ above its prime subfield. \end{proof} \begin{rem} \label{rem:finitelyManyTimes} It follows from Lemma \ref{lem:multiplicativeOrder} that if we fix $t$ as in the lemma, a given prime $l$ can only have a prime above it in $\Q(\zeta_d)$ dividing $\Delta_{K_t}(\zeta_d^2)$ for finitely many values of $d$, unless $l$ divides $d$ itself infinitely often as $d$ varies (if $l \nmid d$, then $d$ divides $l^2-1$, which happens for only finitely many $d$). In applying Lemma \ref{lem:divisors}, we will take $d$ equal to $p^uq^v$ for $p,q$ fixed distinct, odd rational primes and vary the powers $u$ and $v$. The primes $l$ produced will be congruent to $-1 \Mod{pq}$, so in particular they do not divide $d$. \end{rem} \begin{proof}[Proof of Lemma \ref{lem:totallyRealTotallySplit}] By Lemma \ref{lem:multiplicativeOrder}, we know that $l$ has multiplicative order either $1$ or $2$ modulo $d$.
Write $\mathfrak{l}$ for a prime of $\Q(\zeta_d)^+$ above $l$ that divides $\left(\frac{t+1}{2}\right)(\zeta_d^2+\zeta_d^{-2})-t$. Note that since $d$ is odd, $\left(\frac{t+1}{2}\right)(\zeta_d^2+\zeta_d^{-2})-t$ is Galois conjugate to $\left(\frac{t+1}{2}\right)(\zeta_d+\zeta_d^{-1})-t$. Recall that $\Z[\zeta_d+\zeta_d^{-1}]$ is the ring of integers of $\Q(\zeta_d)^+$, and consider the reduction map $\Z[\zeta_d+\zeta_d^{-1}] \rightarrow \Z[\zeta_d+\zeta_d^{-1}]/\mathfrak{l}$. Then $\left(\frac{t+1}{2}\right)(\zeta_d+\zeta_d^{-1})-t$ is in the kernel of this map. That is, $(\zeta_d+\zeta_d^{-1}) \equiv \frac{2t}{t+1} \Mod{\mathfrak{l}}$ (note that $t+1$ is invertible modulo $\mathfrak{l}$ by the coprimality hypothesis), so $(\zeta_d+\zeta_d^{-1})$ lies in the prime subfield of $\Z[\zeta_d+\zeta_d^{-1}]/\mathfrak{l}$; hence $\Z[\zeta_d+\zeta_d^{-1}]/\mathfrak{l}$ is just $\mathbf{F}_l$. \par The assertion about inertia follows from recalling (see Theorem \ref{thm:cyclosplitting}) that the totally split primes of $\Q(\zeta_d)/\Q$ are exactly those congruent to $1 \Mod{d}$. \end{proof} Now we must establish some control on the primes dividing $\Delta_{K_t}(\zeta_d^2)$. For notational convenience, we write $f(x)$ in place of $\Delta_{K_t}(x)$ in the next lemma. \begin{lem} \label{lem:residueClassSplitResultant} Let $p,q$ be distinct odd primes, $t$ as in the statement of Theorem \ref{thm:main}, $n$ an odd positive integer, and $f(x) = \left(\frac{t+1}{2}\right)x^2-tx+\left(\frac{t+1}{2}\right)$. Further, let \[\begin{aligned} w &= \dfrac{1}{\sqrt{2(t+1)}}\left(\sqrt{2t+1}+i\right) \\ y &= \sqrt{\dfrac{t+1}{2}}w. \end{aligned}\] Then \begin{enumerate} \item $w$ is a square root of a root of $f$. \item $i\left(\overline{y}^n-y^n\right) \in \Z$, and \[ i\left(\overline{y}^n-y^n\right) \equiv \begin{cases} 1\Mod{pq} & n \equiv 1\Mod{4} \\ -1\Mod{pq} & n \equiv 3\Mod{4}. \end{cases} \] \end{enumerate} \end{lem} \begin{proof} The assertion that $w$ is a square root of a root of $f$ may be checked by direct calculation or with software; indeed, $w^2 = \frac{t+i\sqrt{2t+1}}{t+1}$, which is a root of $f$ by the quadratic formula. We note as well here that $\abs{w} = 1$, as $t$ was assumed real and positive in the statement of Theorem \ref{thm:main}. For the second claim, we first show that $i\left(\overline{y}^n-y^n\right)$ is an integer. Consider the resultant \begin{equation*} \begin{aligned} \res_x\left(x^n-1, f(x^2)\right) &= \res_x\left(x^n-1, \frac{t+1}{2}(x-w)(x+\overline{w})(x+w)(x-\overline{w})\right) \\ &=\res_x\left(x^n-1,\sqrt{\frac{t+1}{2}}(x-w)(x+\overline{w})\right)\res_x\left(x^n-1,\sqrt{\frac{t+1}{2}}(x+w)(x-\overline{w})\right). \end{aligned} \end{equation*} We first observe that the two factors above are equal in absolute value. Indeed, recalling that $n$ is assumed odd, we compute \[ \begin{aligned} \res_x\left(x^n-1,\sqrt{\frac{t+1}{2}}(x-w)(x+\overline{w})\right) &= \left(\sqrt{\frac{t+1}{2}}\right)^n\left(w^n-1\right)\left(\left(-\overline{w}\right)^n-1\right) \\ &= \left(\sqrt{\frac{t+1}{2}}\right)^n\left(-(w\overline{w})^{n}+\overline{w}^n-w^n+1\right) \\ &= \left(\overline{y}^n-y^n\right), \end{aligned} \] where the last equality uses $w\overline{w} = 1$. Similarly, the other factor of $\res_x\left(x^n-1, f(x^2)\right)$ is the complex conjugate of the one just considered, so the two factors are equal in absolute value. They also lie on the imaginary axis, so multiplying by $i$ moves them to the real axis. Then, that $i\left(\overline{y}^n-y^n\right)$ is an integer will follow once we show that $\res_x\left(x^n-1, f(x^2)\right)$ is a square integer.
Indeed, writing $\Phi_d$ for the $d$th cyclotomic polynomial, we have \[ \begin{aligned} \res_x\left(x^n-1, f(x^2)\right) &= \prod_{d \mid n}\res_x(\Phi_d(x), f(x^2)) \\ &= \prod_{d \mid n} N_{\Q(\zeta_d)/\Q}(f(\zeta_d^2)). \end{aligned} \] Since $f$ is an integer polynomial, $f(\zeta_d^2)$ is an algebraic integer, so its norm is in $\Z$. Furthermore, up to multiplication by a root of unity, $f(\zeta_d^2)$ lies in $\Q(\zeta_d)^+$, which has $\Q(\zeta_d)$ as a quadratic extension, so that its norm from $\Q(\zeta_d)$ to $\Q$ must be a square. So then, $i\left(\overline{y}^n-y^n\right)$ is a real square root of a square integer. That is, $i\left(\overline{y}^n-y^n\right) \in \Z$. \par To show the claim about the residue modulo $pq$, the strategy is to first show that $i\left(\overline{y}-y\right) \equiv 1\Mod{pq}$ and $i\left(\overline{y}^3-y^3\right) \equiv -1\Mod{pq}$, then show that $y^5 \equiv y \Mod{pq}$ and $\overline{y}^5 \equiv \overline{y} \Mod{pq}$; together, the latter two congruences show that the residue of $i\left(\overline{y}^n-y^n\right)$ depends only on $n$ modulo $4$. We compute $i\left(\overline{y}-y\right) = 1$ and $i(\overline{y}^3-y^3) = \left(3t+1\right)/2$. However, since $pq \mid \dfrac{t+1}{2}$ and $t \equiv -1\Mod{pq}$, we have that $(3t+1)/2 = \dfrac{t+1}{2} + t \equiv -1\Mod{pq}$. Now we turn to showing that $y^5 \equiv y \Mod{pq}$. We remark that this reduction requires some care because of the powers of $2$ in the denominators: the reduction map is really $\mathcal{O}[1/2] \rightarrow \mathcal{O}[1/2]/(pq)$, where $\mathcal{O}$ is the ring of integers of the extension $\Q(\sqrt{2t+1}, i)$, and this makes sense because $pq$ is odd. However, we may still compute \[ y^5 = \dfrac{1}{8}\left(\left(t^2-4t-1\right)\sqrt{2t+1}+i\left(5t^2-1\right)\right), \] and observe that $t^2-4t-1 \equiv 5t^2-1 \equiv 4 \Mod{pq}$, so since \[ y = \dfrac{1}{2}\left(\sqrt{2t+1}+i\right), \] we deduce that $y^5 \equiv y \Mod{pq}$. The analogous computation shows that $\overline{y}^5 \equiv \overline{y} \Mod{pq}$. \end{proof} \begin{lem} \label{lem:productEqualities} Let $t,f,w$, and $y$ be as in Lemma \ref{lem:residueClassSplitResultant}. If $n \equiv 1\Mod{4}$, then \[ \prod_{d \mid n} N_{\Q(\zeta_d)^+/\Q}(f(\zeta_d^2)) = i(\overline{y}^n-y^n) = 2\Im(y^n) = 2\left(\sqrt{\frac{t+1}{2}}\right)^n\Im(w^n). \] \end{lem} \begin{proof} As in the proof of Lemma \ref{lem:residueClassSplitResultant}, $\left(i(\overline{y}^n-y^n)\right)^2$ is equal to $\res_x(x^n-1,f(x^2))$. On the other hand, $\res_x(x^n-1,f(x^2))$ is also equal to $\prod\limits_{d \mid n} N_{\Q(\zeta_d)/\Q}(f(\zeta_d^2))$. Note, however, that up to a root of unity each $f(\zeta_d^2)$ lies in $\Q(\zeta_d)^{+}$ since $f(x)$ is a reciprocal polynomial. So \[ \begin{aligned} \res_x(x^n-1,f(x^2)) &= \prod_{d \mid n} N_{\Q(\zeta_d)/\Q}(f(\zeta_d^2)) \\ &= \left(\prod_{d \mid n} N_{\Q(\zeta_d)^+/\Q}(f(\zeta_d^2))\right)^2. \end{aligned} \] Note that, after possibly multiplying by a root of unity, $f(\zeta_d^2) = \left(\frac{t+1}{2} \right)(\zeta_d^2+\zeta_d^{-2}) - t$. Let $p,q$ be as in Lemma \ref{lem:residueClassSplitResultant}. The hypotheses that $pq \mid \frac{t+1}{2}$ and $t \equiv -1 \Mod{pq}$ imply $f(\zeta_d^2) \equiv 1 \Mod{pq}$, so its norm is as well. Hence $\prod\limits_{d \mid n} N_{\Q(\zeta_d)^+/\Q}(f(\zeta_d^2))$ and $i(\overline{y}^n-y^n)$ are equal in absolute value and agree modulo $pq$ by Lemma \ref{lem:residueClassSplitResultant}, so they are the same integer (otherwise their sum would be $0$, forcing $2 \equiv 0 \Mod{pq}$). The other equalities are direct calculations. \end{proof} We now want to show that we can change the sign of $\Im(w^n)$ infinitely often as we vary $n$ through powers of $p$ and $q$. To do this, we use a result of Furstenberg.
Before stating it, we recall that a multiplicative semigroup of the integers is called \textbf{lacunary} if it consists of powers of a single integer and \textbf{non-lacunary} otherwise. \begin{thm}[{\cite[Theorem IV.1]{FurstenbergNonlacunary}}] \label{thm:nonlacunary} If $\Sigma$ is a non-lacunary semigroup of integers and $\eta$ is irrational, then $\Sigma \eta$ is dense modulo $1$. \end{thm} \begin{lem} \label{lem:quadraticFurstenberg} Let $f(x) = ax^2+bx+a \in \Z[x]$ be a reciprocal polynomial of degree $2$ with $\abs{b/a} < 2$, and let $p$ and $q$ be distinct primes. Let $w$ be a square root of a root of $f$, and suppose that $w$ is not a root of unity. Then for infinitely many pairs of positive integers $(u, v)$, we have $\Im(w^n) < 0$ where $n = p^{u}q^{v}$. Moreover, each $u$ and $v$ may be taken to be even. \end{lem} \begin{proof} Since $\abs{b/a} < 2$, the roots of $f$ form a complex conjugate pair on the unit circle, so $w$ is on the unit circle as well, and we may write it as $w=\exp(2\pi i \eta)$ so that $w^n = \exp(2n \pi i \eta)$. Because $w$ is not a root of unity, $\eta$ is irrational. If we let $\Sigma = \{p^{u}q^{v}\}$, then Furstenberg's Theorem \ref{thm:nonlacunary} implies that $\Sigma \eta$ is dense modulo $1$, so that for infinitely many $n \in \Sigma$, $w^n$ is in the lower half-plane. If we desire each $u$ (resp. $v$) produced to be even, we may replace $p$ (resp. $q$) with $p^2$ (resp. $q^2$) in the definition of $\Sigma$. \end{proof} The above lemma also holds with the inequality $\Im(w^n) < 0$ replaced by $\Im(w^n) > 0$. \begin{proof}[Proof of Lemma \ref{lem:divisors}] Let $n = p^uq^v$ for $p,q$ distinct odd primes and $u,v$ positive integers. If $p$ (resp. $q$) is congruent to $3\Mod{4}$, then suppose that $u$ (resp. $v$) is even. Note that $\prod\limits_{d \mid n} N_{\Q(\zeta_d)^+/\Q}(f(\zeta_d^2))=2\Im(y^n)$ is always $1\Mod{pq}$ when $n\equiv 1\Mod{4}$ by Lemma \ref{lem:residueClassSplitResultant}. Moreover, by Lemma \ref{lem:productEqualities}, it is equal to $2\left(\sqrt{\frac{t+1}{2}}\right)^n\Im(w^n)$, so when $\Im(w^n)$ is negative, the absolute value of $\prod\limits_{d \mid n} N_{\Q(\zeta_d)^+/\Q}(f(\zeta_d^2))$ must be congruent to $-1 \Mod{pq}$. Then one of the $N_{\Q(\zeta_d)^+/\Q}(f(\zeta_d^2))$ must be not congruent to $1\Mod{pq}$, so it must have a rational prime divisor, appearing to an odd power, that is not $1 \Mod{pq}$ (note that all prime divisors must have multiplicative order either $1$ or $2$ modulo $d=p^{u'}q^{v'}$ by Lemma \ref{lem:multiplicativeOrder}, so if all the powers were even, $\abs{N_{\Q(\zeta_d)^+/\Q}(f(\zeta_d^2))}$ would be congruent to $1 \Mod{p^{u'}q^{v'}}$). By Lemma \ref{lem:totallyRealTotallySplit}, such a prime lies below a prime $\mathfrak{l}$ of $\Q(\zeta_d)^+$ that is inert in $\Q(\zeta_d)$. One may repeatedly apply Lemma \ref{lem:quadraticFurstenberg} to change the sign of $\Im(w^n)$ back and forth to produce infinitely many $d$ with $N_{\Q(\zeta_d)^+/\Q}(f(\zeta_d^2))$ not congruent to $1 \Mod{pq}$. \end{proof} \begin{rem} The infinitely many $d$ produced by Lemma \ref{lem:divisors} will provide $(d,0)$ surgeries for which the quaternion algebras have infinitely many distinct residue characteristics. \end{rem} \section{Irreducibility} \label{sec:irreducibility} The goal of this section is to prove Lemma \ref{lem:relativeNorm}. The main ingredient will be the following proposition. \begin{prop} \label{prop:irreducibility} Let $l$ be odd and $f_l(R,Z)$ the polynomial defining the canonical component of the character variety for the knot $T_l$. Then $f_l(R, 2\cos(2\pi/n))$ is irreducible as an element of $\Q^{ab}[R]$ for all but finitely many $n$. \end{prop} We will ultimately apply the following theorem of Dvornicich and Zannier.
\begin{thm}[{\cite[Corollary 1(a)]{DZannier}}] \label{thm:DZ} Let $k$ be a number field and $k^c$ the field obtained by adjoining all roots of unity to $k$. If $g \in k^c[R,Z]$ and $g(R, Z^m)$ is irreducible in $k^c[R,Z]$ for all positive integers $m \leq \deg_R g$, then $g(R, \zeta)$ is irreducible in $k^c[R]$ for all but finitely many roots of unity $\zeta$. \end{thm} To apply this theorem, we actually consider the polynomials $g_t(R,Z) = Z^{\deg_Z{f_t}}f_t(R,Z+Z^{-1})$, so that specializing $g_t$ at $Z=\zeta_n$ yields $\zeta_n^{\deg_Z{f_t}}f_t(R, 2\cos(2\pi/n))$. In particular, $g_t(R, \zeta_n)$ is irreducible over $\Q(\zeta_n)$ if and only if $f_t(R, 2\cos(2\pi/n))$ is. We record the analogous formula for $g_t(R,Z) = Z^2f_t(R,Z+Z^{-1})$ here. \begin{lem} \label{lem:g_lFormula} Let $t$ be an odd positive integer. Then \[ g_t(R,Z) = \begin{cases} Z^2R^t - (Z^4+Z^2+1) + \sum\limits_{i=1}^{t-1}\left(a_iZ^2 - b_i \left(Z^4+2Z^2+1\right)\right)R^i & t\equiv 1\Mod{4} \\ Z^2R^t - Z^2 + \sum\limits_{i=1}^{t-1}\left(a_iZ^2 - b_i \left(Z^4+2Z^2+1\right)\right)R^i & t\equiv 3\Mod{4}, \end{cases} \] where $a_i, b_i \in \Z$ and $a_{t-1}=b_{t-1}=1$. \end{lem} Our strategy will be first to show that $f_l(R,Z^m)$ is irreducible for all positive integers $m$ and then to show that the irreducibility of $f_l(R,Z^m)$ implies that of $g_l(R,Z^m)$. \begin{lem} \label{lem:poweredUpCharacterVarietyIrreducibility} Let $f_l(R,Z)$ be as above. Then for all positive integers $m$, $f_l(R,Z^m)$ is absolutely irreducible. \end{lem} We need a version of Capelli's theorem due to Kneser. This formulation appears in \cite{SchinzelPolynomials}. \begin{thm}[{\cite[Theorem 19]{SchinzelPolynomials}}] \label{thm:Capelli} Let $k$ be a field and $n$ an integer $\geq 2$. Let $a \in k$. The binomial $Z^n-a$ is reducible over $k$ if and only if either $a=b^p$ for some $b \in k$ and some prime divisor $p$ of $n$, or $4 \mid n$ and $a=-4b^4$ for some $b \in k$. \end{thm} \begin{proof}[Proof of Lemma \ref{lem:poweredUpCharacterVarietyIrreducibility}] Consider $f_l(R,Z)$ as an element of $\C(R)[Z]$. Since nonzero polynomials in $R$ are units in this ring, we may clear the denominators to obtain the monic polynomial \[ F_l(R,Z^m) = \begin{cases} Z^{2m} - \dfrac{R^l + 1 + \sum\limits_{i=1}^{l-1}a_iR^i}{1+\sum\limits_{i=1}^{l-1}b_iR^i} & l\equiv 1\Mod{4} \\ Z^{2m} - \dfrac{R^l - 1 + \sum\limits_{i=1}^{l-1}a_iR^i}{\sum\limits_{i=1}^{l-1}b_iR^i} & l\equiv 3\Mod{4}, \end{cases} \] which is irreducible if and only if $f_l(R,Z^m)$ is. Let $a$ denote the constant term and write $a=\dfrac{\alpha_l(R)}{\beta_l(R)}$ where $\alpha_l$ and $\beta_l$ are coprime in $\C[R]$. Suppose that $a=b^p$ for some prime divisor $p$ of $2m$. Then, there are polynomials $A_l, B_l \in \C[R]$ such that $A_l^p = \alpha_l$ and $B_l^p = \beta_l$. So then for some nonnegative integers $l_1$, $l_2$, we have $\deg(\alpha_l) = pl_1$ and $\deg(\beta_l) = pl_2$. However, $\alpha_l$ is of degree $l-j$ and $\beta_l$ is of degree $l-j-1$, where $j$ is the degree of the common factor of the original numerator and denominator of $a$. In particular, $\deg(\alpha_l)$ and $\deg(\beta_l)$ are consecutive integers, so they cannot both be multiples of $p$ unless $\deg(\beta_l)=0$, that is, unless $j=l-1$. In this case, however, $\deg(\alpha_l) = 1$, so $\alpha_l$ cannot be a $p$th power. Finally, to apply Theorem \ref{thm:Capelli}, we must check that $a \neq -4b^4$ when $4 \mid 2m$. However, the analogous degree considerations show that $-a/4$ cannot be a fourth power.
Then we apply Gauss's lemma to conclude that $f_l(R,Z^m)$ is irreducible as an element of $\C[R,Z]$. \end{proof} Next, we check that, over $\Q$, changing $Z$ to $Z+Z^{-1}$ and clearing denominators does not affect irreducibility. \begin{lem} \label{lem:changeOfVarIrreducibility} Let $g_l(R,Z)$ be as above. Then for all positive integers $m$, $g_l(R,Z^m)$ is irreducible in $\Q[R,Z]$. \end{lem} \begin{proof}[Proof of Lemma \ref{lem:changeOfVarIrreducibility}] Fix a positive integer $m$ and suppose that $g_l(R,Z^m) = G_1(R,Z)G_2(R,Z)$ where $G_1, G_2 \in \Q[R,Z]$ are of positive $R$-degree. Note that there cannot be a factorization involving a factor of $R$-degree $0$: in the case that $l \equiv 1 \Mod{4}$, the $R$-leading coefficient $Z^{2m}$ is coprime to the $R$-degree zero term $-Z^{4m}-Z^{2m}-1$; when $l\equiv 3\Mod{4}$, the gcd of the leading and constant terms is $Z^{2m}$, but reducing modulo $Z$ produces $-\sum_{i=1}^{l-1}b_iR^i$, which is nonzero by Lemma \ref{lem:g_lFormula}. Since the $R$-leading coefficient of $g_l(R,Z^m)$ is $Z^{2m}$, specializing $Z=1$ produces a nontrivial factorization in $\Q[R]$. However, $g_l(R,1) = f_l(R,1+1) = f_l(R,2)$, which is irreducible by \cite{HosteShanahanTwistKnots}. \end{proof} Now we prove: \begin{lem} \label{lem:gAbsolutelyIrreducible} Let $g_l$ be as above. Then $g_l(R,Z^m)$ is absolutely irreducible for all positive integers $m$. \end{lem} We apply the following result of Bertone, Ch\`eze, and Galligo. \begin{thm}[{\cite[Proposition 3]{BCGfactorization}}] \label{thm:BCG} Let $k$ be a field and $g(R,Z) \in k[R,Z]$ be an irreducible polynomial. Let $\{(i_1, j_1), \dots, (i_s, j_s)\} \subset \Z^2$ be the vertex set of its Newton polygon. If $\mathrm{gcd}(i_1, j_1, \dots,i_s, j_s) = 1$, then $g(R,Z)$ is irreducible over $\overline{k}$. \end{thm} \begin{proof}[Proof of Lemma \ref{lem:gAbsolutelyIrreducible}] \begin{figure}[h] \includegraphics[height=0.25\textheight]{double_newton_polygon} \caption{The Newton polygons for $g_{29}(R,Z)$ and $g_{29}(R,Z^5)$.} \label{fig:NewtonPolygon} \end{figure} Note that the coefficients of $R^{t-1}$, $Z^2R^t$, and $Z^4R^{t-1}$ in $g_t(R,Z)$ are all nonzero. Moreover, the coefficients of $R^t$ and $Z^4R^t$ are zero, and there are no terms with $Z$ to an odd power or a power greater than $4$. All this implies that the top of the Newton polygon (with $Z$ and $R$ the horizontal and vertical directions, respectively) looks like $(0, t-1)$, $(2, t)$, $(4, t-1)$. See Figure \ref{fig:NewtonPolygon} for the Newton polygon of $g_{29}(R,Z)$. These lattice points alone are sufficient to apply Theorem \ref{thm:BCG} (their coordinates include the coprime pair $t-1$ and $t$) together with the rational irreducibility furnished by Lemma \ref{lem:changeOfVarIrreducibility} to conclude that $g_t(R,Z)$ is absolutely irreducible. To apply Theorem \ref{thm:DZ}, we must further show that $g_t(R,Z^m)$ is irreducible for all $m \leq \deg_R(g_t) = t$. Indeed, $g_t(R,Z^m)$ is irreducible over $\Q$ for all positive integers $m$ by Lemma \ref{lem:changeOfVarIrreducibility}. Increasing the power on $Z$ has the effect of horizontally stretching the Newton polygon so that its top consists of the lattice points $(0, t-1)$, $(2m, t)$, $(4m, t-1)$. The coordinates of these points still include $t$ and $t-1$, which are coprime, so Theorem \ref{thm:BCG} still applies, and we find that $g_t(R, Z^m)$ is absolutely irreducible for all positive integers $m$. Then Theorem \ref{thm:DZ} applies, so that $g_t(R, \zeta_n)$ is irreducible in $\Q^{ab}[R]$ for all but finitely many $n$, and hence $f_t(R, 2\cos(2\pi/n))$ is as well.
\end{proof} \begin{proof}[Proof of Lemma \ref{lem:relativeNorm}] We may find a monic polynomial that $r_d$ satisfies by specializing the character variety defining polynomial $f_t(R,Z)$ to $f_t(2-r,2\cos(2\pi/d))$. The resulting polynomial in $\Q(\zeta_d)^{+}[r]$ is irreducible by Proposition \ref{prop:irreducibility}, so $f_t(2-r,2\cos(2\pi/d))$ is the minimal polynomial for $r_d$. Moreover, its constant term is $f_t(2,2\cos(2\pi/d))$, which is $-\Delta_{K_t}(\zeta_d)$ by Lemma \ref{lem:constantTerm}. The norm of $r_d$ is then $\Delta_{K_t}(\zeta_d)$, as the norm is the negative of the constant term of the minimal polynomial when the degree of the minimal polynomial is odd. \end{proof} \section{Example: \texorpdfstring{$T_{29}$}{T29}} In this section we wish to show what ramification can be predicted for surgeries on the knot $T_{29}$ by unpacking the proof of Theorem \ref{thm:main}. We remark that for all but the smallest surgeries, it seems totally intractable to use any sort of software to na\"ively compute the ramification. One can use the character variety to get exact minimal polynomials for the trace fields and entries for the quaternion algebra in terms of this minimal polynomial, but computing reduction modulo primes (i.e., trying to apply Theorem \ref{thm:MRlocalSymbol}) involves computing a maximal order in the field, which usually involves factoring a large discriminant. For example, the discriminant of a minimal polynomial for the $(11,0)$ surgery on $T_{29}$ has $359$ decimal digits, which exceeds the record for the largest factored integer not of a special form by over $100$ decimal digits. We will not dwell on the exact nature of the computational complexities involved, but instead use some of the theoretical results in the paper to prove ramification. \par Let us first mention that the canonical component is of the following form. \[ \begin{aligned} f_{29}(R,Z) &=-R^{28} Z^{2} + R^{29} + R^{27} Z^{2} + R^{28} + 26 R^{26} Z^{2} - 28 R^{27} - 25 R^{25} Z^{2} - 27 R^{26} - 301 R^{24} Z^{2} \\ &\qquad+ 351 R^{25} + 277 R^{23} Z^{2} + 325 R^{24} + 2046 R^{22} Z^{2} - 2600 R^{23} - 1792 R^{21} Z^{2} - 2300 R^{22} \\ &\qquad- 9066 R^{20} Z^{2} + 12650 R^{21} + 7506 R^{19} Z^{2} + 10626 R^{20} + 27492 R^{18} Z^{2} - 42504 R^{19} \\ &\qquad- 21335 R^{17} Z^{2} - 33649 R^{18} - 58277 R^{16} Z^{2} + 100947 R^{17} + 41941 R^{15} Z^{2} + 74613 R^{16} \\ &\qquad+ 86662 R^{14} Z^{2} - 170544 R^{15} - 57044 R^{13} Z^{2} - 116280 R^{14} - 89402 R^{12} Z^{2} + 203490 R^{13} \\ &\qquad+ 52834 R^{11} Z^{2} + 125970 R^{12} + 62292 R^{10} Z^{2} - 167960 R^{11} - 32206 R^{9} Z^{2} - 92378 R^{10} \\ &\qquad- 27966 R^{8} Z^{2} + 92378 R^{9} + 12174 R^{7} Z^{2} + 43758 R^{8} + 7476 R^{6} Z^{2} - 31824 R^{7} - 2576 R^{5} Z^{2} \\ &\qquad- 12376 R^{6} - 1036 R^{4} Z^{2} + 6188 R^{5} + 252 R^{3} Z^{2} + 1820 R^{4} + 56 R^{2} Z^{2} - 560 R^{3} - 7 R Z^{2} - 105 R^{2} \\ &\qquad- Z^{2} + 15 R + 1. \end{aligned} \] Let us now consider the $(11,0)$ surgery on $T_{29}$, so our quaternion algebra is of the form \[ \HilbertSymbol{2\cos(2\pi/11)-2}{-r_{11}}{k_{11}}. \] Note that this is not actually the quaternion algebra associated to the $(11,0)$ surgery, but instead the specialization of the Azumaya negative part (see Proposition \ref{prop:HilbertSymbolForKnot} for details). However, all ramification shown in this example will actually be associated to the quaternion algebra for the Kleinian group.
This basically amounts to checking that the Azumaya positive part does not have any of the same ramification. We also remark that $k_{11}$ is a degree $145$ extension of $\Q$. Without knowing something about the ring of integers (e.g., the conductor of $\Z[r_d]$ in it), it is difficult to explicitly establish the splitting of primes in this extension. However, applying Lemma \ref{lem:HilbertSymbolTotallyReal}, one may deduce the existence of a ramified prime. In particular, we first need to prove that some prime $\mathfrak{L}$ of $k_{11}$ divides $-r_{11}$ an odd number of times. This is a problem amenable to software. Indeed, $N_{k_{11}/\Q}(-r_{11}) = 43\cdot131\cdot1033$. Since these all appear to the first power, there is no worry of a prime dividing $-r_{11}$ multiple times. Now, we check the other criterion for these rational primes, namely that any prime of $\Q(\zeta_{11})^{+}$ above them does not split in $\Q(\zeta_{11})$. Since the tower of extensions $\Q(\zeta_d)/\Q(\zeta_d)^{+}/\Q$ is Galois, each prime above a particular rational prime has the same splitting behavior. We may check that each of $43$, $131$, and $1033$ satisfies the conditions of Lemma \ref{lem:HilbertSymbolTotallyReal}, so there is a prime in $k_{11}$ above each of these rational primes such that the quaternion algebra for $T_{29}(11,0)$ is ramified at that prime. This procedure can be implemented on a computer; after doing so, we find the residue characteristics listed in Table \ref{tab:residueChars} of ramified primes for the $(d,0)$ surgery. \begin{table} \centering \caption{Ramified residue characteristics for $(d,0)$ surgery on $T_{29}$.} \label{tab:residueChars} \begin{tabular}{c|c c@{\hspace{0.25in}}c|c} $d$ & primes & & $d$ & primes \\ \cline{1-2} \cline{4-5} 5 & $\varnothing$ & & 53 & 42611, 60101 \\ 7 & 13 & & 55 & $\varnothing$ \\ 9 & 431 & & 57 & 12539, 56706539232099509 \\ 11 & 43,131,1033 & & 59 & 1228786844647 \\ 13 & 1117, 1481 & & 61 & 35711295669608681, 41553136798440921281 \\ 15 & 149, 179 & & 63 & $\varnothing$ \\ 17 & 67, 101, 509, 4657 & & 65 & 4679, 656305837760821827656999 \\ 19 & 37 & & 67 & 401 \\ 21 & $\varnothing$ & & 69 & $\varnothing$ \\ 23 & 10938592571969 & & 71 & 283 \\ 25 & 90636599549 & & 73 & 5503792674161 \\ 27 & $\varnothing$ & & 75 & 2699, 15299 \\ 29 & 292319 & & 77 & 5196259971209 \\ 31 & $\varnothing$ & & 79 & 157, 80263 \\ 33 & 659, 24800291 & & 81 & 314974336585075469 \\ 35 & 25409 & & 83 & 74201, 33552749, 27639164173, 19501822788835693 \\ 37 & 73, 294149, 531516948137827 & & 85 & 123419, 4093091532209, 16729850810909 \\ 39 & 35883041 & & 87 & 7386583213044449, 65955561202472999 \\ 41 & 4271162617 & & 89 & 1601 \\ 43 & 3697, 107069 & & 91 & 42223, 122828797084811 \\ 45 & 89 & & 93 & 929, 46197763017488779460706369300779 \\ 47 & $\varnothing$ & & 95 & 569, 145349, 153862768739 \\ 49 & 97 & & 97 & 79151, 149328007, 3899539084760806682641718399966621 \\ 51 & 1055801, 823976217011 & & 99 & 2970791, 3683326481, 9934540457447231 \end{tabular} \end{table}
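The first step of this procedure, namely finding the candidate residue characteristics (the rational primes dividing $N_{k_d/\Q}(-r_d)$, before the conditions \ref{stepone}--\ref{stepthree} are checked), takes only a few lines of computer algebra. The following sketch, written here in Python with sympy (the function and variable names are ours, not taken from any published code), uses the identity $\res_x(\Phi_d(x), \Delta_{T_{29}}(x^2)) = N_{\Q(\zeta_d)/\Q}(\Delta_{T_{29}}(\zeta_d^2))$, the square of the norm from $\Q(\zeta_d)^+$, as in Section \ref{sec:ramification}. For $d=11$ it returns $5818889 = 43\cdot 131\cdot 1033$, in agreement with the norm above, while for $d=7$ it returns $13\cdot 1583$, of which only $13$ survives the splitting conditions (the prime $1583 \equiv 1 \Mod{7}$ is totally split in $\Q(\zeta_7)$).
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
t = 29
# Alexander polynomial of the twist knot under consideration
Delta = ((t + 1)//2)*x**2 - t*x + (t + 1)//2

def candidate_norm(d):
    # res_x(Phi_d(x), Delta(x^2)) equals N_{Q(zeta_d)/Q}(Delta(zeta_d^2)),
    # which is the square of N_{Q(zeta_d)^+/Q}(Delta(zeta_d^2)) for odd d
    res = sp.resultant(sp.cyclotomic_poly(d, x), Delta.subs(x, x**2), x)
    root, is_square = sp.integer_nthroot(int(res), 2)
    assert is_square  # cf. the square resultant in Lemma residueClassSplitResultant
    return root

for d in [7, 11]:
    n = candidate_norm(d)
    print(d, n, sp.factorint(n))
\end{verbatim}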
\par The above discussion is meant to illustrate that results in the paper allow relatively easy computation of the residue characteristics of ramified primes for $(d,0)$ surgeries. Of course, Theorem \ref{thm:main} indicates that there are in fact infinitely many distinct such residue characteristics. We now show how one can unpack the methods of the proof to find a subsequence of surgery coefficients that provides the infinitely many distinct residue characteristics. We can summarize previous work in the paper by saying that we ultimately want to find $d$ such that \[ \left| N_{\Q(\zeta_d)^+/\Q}(\Delta_{T_{29}}(\zeta_d^2)) \right| \not\equiv 1 \Mod{15}. \] The detected surgery coefficients for this example are of the form $3^u5^v$. To simplify the explanation for this example, let us define a function \[ \omega(n) = \prod_{d \mid n} N_{\Q(\zeta_d)^+/\Q}\left(\Delta_{T_{29}}(\zeta_d^2)\right). \] Lemma \ref{lem:residueClassSplitResultant} implies that $\omega(n) \equiv 1 \Mod{15}$ when $n \equiv 1 \Mod{4}$. The idea is then that if $n_0 \mid n_1$ and $\omega(n_0)$ and $\omega(n_1)$ have different signs, then $\abs{\omega(n_1)/\omega(n_0)} \not\equiv 1 \Mod{pq}$, so there is a divisor $d_1$ of $n_1$ that does not divide $n_0$ such that \[ \left| N_{\Q(\zeta_{d_1})^+/\Q}(\Delta_{T_{29}}(\zeta_{d_1}^2)) \right| \not\equiv 1 \Mod{15}. \] Then the problem is just reduced to showing that for any $n_0$, we can find an $n_1$ as above. For the knot $T_{29}$, this amounts to asking whether, given powers $u$ and $v$, one can find powers $u' > u$ and $v' > v$ such that $\omega(3^{u'}5^{v'})$ has a different sign from that of $\omega(3^u5^v)$. We prove that one can always do this by using Furstenberg's theorem on nonlacunary semigroups (see Theorem \ref{thm:nonlacunary}). However, for the present example, we just compute a few cases to show how one might go about constructing infinitely many distinct residue characteristics. \par For technical reasons, we actually need even powers of $3$. So our first $n_0$ to try is $3^2 \cdot 5 = 45$. $\omega(45)$ is negative, so $\abs{\omega(45)}$ has a divisor that works. In fact, consulting the table above, there are three divisors $d$ of $45$ such that the $(d,0)$ surgery has finite ramification, namely $9$, $15$, and $45$. Call one of them $d_0$. The smallest $n_1$ such that $\omega(n_1) > 0$ is $n_1 = 3^2 \cdot 5^5 = 28125$. So there must be some divisor $d_1$ of $28125$ that does not divide $45$ (and hence $d_0$) such that the $(d_1,0)$ surgery has finite ramification, for example $75$. Next, if we set $n_2 = 3^2 \cdot 5^6$, then $\omega(n_2) < 0$, so we may find $d_2$ in the same way. The numbers involved quickly become intractably large even for software, but results in the paper guarantee that proceeding in this manner produces an infinite sequence of $(d_i,0)$ surgeries with finite ramification. Moreover, because of Lemma \ref{lem:multiplicativeOrder}, we cannot find the same residue characteristic infinitely often, so this process will in fact produce infinitely many distinct rational primes such that for each one, there is an integer $d$ and a prime of the trace field of the $(d,0)$ surgery above that prime that ramifies the quaternion algebra associated to the $(d,0)$ surgery. \printbibliography \end{document}
\section*{Abstract} \textbf{Motivation:} Recent work has demonstrated the feasibility of using non-numerical, qualitative data to parameterize mathematical models. However, uncertainty quantification (UQ) of such parameterized models has remained challenging because of a lack of a statistical interpretation of the objective functions used in optimization.\\ \textbf{Results:} We formulated likelihood functions suitable for performing Bayesian UQ using qualitative data or a combination of qualitative and quantitative data. To demonstrate the resulting UQ capabilities, we analyzed a published model for IgE receptor signaling using synthetic qualitative and quantitative datasets. Remarkably, estimates of parameter values derived from the qualitative data were nearly as consistent with the assumed ground-truth parameter values as estimates derived from the lower throughput quantitative data. These results provide further motivation for leveraging qualitative data in biological modeling.\\ \textbf{Availability:} The likelihood functions presented here are implemented in a new release of PyBioNetFit, an open-source application for analyzing SBML- and BNGL-formatted models, available online at \url{www.github.com/lanl/PyBNF}.\\ \clearpage \section{Introduction} Mathematical models of the dynamics of cellular networks, such as those defined using BioNetGen Language (BNGL) \citep{Faeder2009} or Systems Biology Markup Language (SBML) \citep{Hucka2003}, require parameterization for consistency with experimental data. Conventional approaches use quantitative data such as time courses and dose-response curves to parameterize models. We and others have demonstrated that it is also possible to use non-numerical, qualitative data in automated model parameterization \citep{Oguz2013, Pargett2013, Pargett2014, Mitra2018a}. Our demonstration \citep{Mitra2018a} used qualitative data in combination with quantitative data. In the method of \cite{Mitra2018a}, the available qualitative data are used to formulate inequality constraints on outputs of a model. Parameterization is performed by minimizing a sum of static penalty functions \citep{Smith1997} derived from the inequalities. Given a list of $n$ inequalities of the form $g_i<0$ for $i=1,...,n$, where the $g_i$ are functions of model outputs, the objective function is defined as \begin{equation} \sum_{i=1}^n C_i\cdot\max(0, g_i) \label{eq:static} \end{equation} Static penalty functions have long been used in the field of constrained optimization \citep{Smith1997}. Each violated inequality contributes to the objective function a quantity equal to a distance from constraint satisfaction (e.g., the absolute difference between the left-hand side and right-hand side of the inequality), multiplied by a problem-specific constant weight $C_i$. The objective function of Equation \ref{eq:static} becomes smaller as inequalities move closer to satisfaction, thus guiding an optimization algorithm toward a solution satisfying more of the inequalities. In the study of \cite{Mitra2018a}, the approach proved effective in obtaining a reasonable point estimate for the parameters of a 153-parameter model of yeast cell cycle control developed by Tyson and coworkers \citep{Chen2000,Chen2004,Csikasz-Nagy2006,Oguz2013,Kraikivski2015}, which had previously been parameterized by hand tuning. The static penalty function approach has limitations. Most notably, the approach requires choosing problem-specific weights $C_i$ for the objective function. 
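To make the role of the weights concrete, a minimal Python sketch of Equation \ref{eq:static} (the function and variable names here are ours, for illustration only) is:
\begin{verbatim}
def static_penalty(g_values, weights):
    # Equation (1): each inequality g_i < 0 contributes C_i * max(0, g_i),
    # which is positive exactly when the inequality is violated
    return sum(C * max(0.0, g) for g, C in zip(g_values, weights))
\end{verbatim}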
Although heuristics exist to make reasonable choices for the weights \citep{Mitra2018a}, there is no rigorous method to do so. A related challenge in using qualitative data is performing uncertainty quantification (UQ). Bayesian UQ (described in many studies, such as \cite{Kozer2013} and \cite{Klinke2009}) is a valuable approach that generates the multivariate posterior probability distribution of model parameters given data. This distribution can be used for several types of analyses. 1) The marginal distribution of each parameter can be examined to find the most likely value of that parameter and a credible interval. 2) Marginal distributions of pairs of parameters can be examined to determine which parameters are correlated. 3) Prediction uncertainty can be quantified by running simulations using parameter sets drawn from the distribution. Unfortunately, meaningful Bayesian UQ cannot be performed for models parameterized using qualitative data and penalty function-based optimization, because the penalty functions are heuristics. They are not grounded in statistical modeling. Here, we present likelihood functions that can be used in parameterization and UQ problems incorporating both qualitative and quantitative data. We first present a likelihood function that can be used with binary categorical data, and then a more general form to use with ordinal data comprising three or more categories. We implemented the option to use these likelihood functions in fitting and in Bayesian UQ in our software PyBioNetFit \citep{Mitra2019a}. We built on existing PyBioNetFit support for qualitative data, which previously allowed only the static penalty function approach. In the first section of Results, we derive the new likelihood functions, which have similarities to both the chi squared likelihood function commonly used in curve fitting with quantitative data, and the logistic function commonly used to model classification error in machine learning. In the second section, we describe how we have added support for the new likelihood functions in PyBioNetFit and provide a guide to using them in optimization and UQ. In the third section, we provide an example application of the new software features. This example shows that qualitative datasets are potentially valuable resources for biological modeling. \section{Methods} Likelihood functions presented in Results were implemented as options in PyBioNetFit v1.1.0, available online at \url{https://github.com/lanl/pybnf}. PyBioNetFit supersedes the earlier BioNetFit \citep{Thomas2016, Hlavacek2018}. To illustrate use of the new functionality, we configured and solved an example UQ problem (described in Section \ref{sec:application}) using PyBioNetFit v1.1.0. Configuration, model, and synthetic data files used for this example are available online (\url{https://github.com/RuleWorld/RuleHub/tree/2019Aug27/Contributed/Mitra2019Likelihood}). The model that we used has been published in BNGL format \citep{Faeder2009} in earlier work \citep{Harmon2017}. We took the published parameterization to be the ground truth. We adapted the simulation commands included in the BNGL file to produce degranulation outputs for specific conditions, as appropriate for our synthetic datasets described below. We considered 11 instances of the problem using different qualitative and quantitative datasets. To generate synthetic quantitative data, we simulated the model with the assumed ground-truth parameterization, and added Gaussian noise to the desired degranulation outputs. 
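As a minimal sketch of this noise model (the function names and use of Python's standard library here are ours, for illustration; the actual outputs come from simulations of the model):
\begin{verbatim}
import random

def synthetic_quantitative(true_output, sigma):
    # one synthetic quantitative data point: a ground-truth simulation
    # output corrupted with Gaussian noise of standard deviation sigma
    return true_output + random.gauss(0.0, sigma)
\end{verbatim}
The qualitative datasets described next record only comparisons between two such noise-corrupted outputs.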
To generate synthetic two-category qualitative data, we performed the same procedure, but recorded only whether the noise-corrupted primary degranulation response was greater or less than the noise-corrupted secondary degranulation response. To generate synthetic three-category qualitative data, we followed the same procedure, but recorded that the primary and secondary responses were approximately equal if the difference between the two responses was less than a designated threshold, which was set at $4.2\times 10^4$ arbitrary units. We performed MCMC sampling using PyBioNetFit's parallel tempering algorithm. For each dataset considered, we performed four independent runs and combined all samples obtained. Each run consisted of four Markov chains for each of nine temperatures, for a total of 36 chains, with samples saved from the four chains at temperature 1, run for a total of 50,000 steps including an unsampled 10,000-step burn-in period. Each run was performed using all 36 cores of a single Intel Broadwell E5-2695 v4 cluster node. Complete configuration settings are provided in the PyBioNetFit configuration file online. \section{Results} \subsection{Mathematical derivation} \subsubsection{Notation} \label{sec:problem} By way of introduction to our newly proposed likelihood function for qualitative data, we begin by reviewing Bayesian UQ and its associated likelihood function with a more conventional quantitative dataset. We are given an experimental dataset $\mathbf{y}=\{y_1,...,y_n\}$ and a model $f$. There is no restriction on what type of numerical measurement each $y_i$ represents; for example, it could represent a single data point of a time course, a sample mean of several independent and identically distributed measurements, or an arbitrary function of multiple measured quantities. Within a Bayesian framework, the $y_i$ are taken to be samples from the random variables $\{Y_1,...,Y_n\}$. The model $f$ takes as input a parameter vector $\btheta$ to predict the expected value of each data point $Y_i$, that is, $f_i(\btheta) = E(Y_i)$. $\btheta$ is the realization of the random variable $\mathbf{\Theta}$. $f$ is assumed to be deterministic (e.g., an ODE model). Stochastic models would require additional treatment that is beyond the intended scope of this study. In Bayesian UQ, parameter uncertainty is quantified by the posterior probability distribution $P(\btheta|\mathbf{y})$, the probability of a particular parameter set given the data. Markov chain Monte Carlo (MCMC) algorithms can be used to sample the posterior distribution using the fact that, by Bayes' law, $P(\btheta|\mathbf{y}) \propto P(\mathbf{y}|\btheta)P(\btheta)$. The change in the value of $P(\mathbf{y}|\btheta)P(\btheta)$ is used to determine whether a proposed move by the MCMC algorithm is accepted. $P(\btheta)$ is a user-specified distribution representing prior knowledge about the parameters. Therefore, an important prerequisite for performing Bayesian UQ is an expression for the \textit{likelihood}, $P(\mathbf{y}|\btheta)$. \subsubsection{Chi squared likelihood function} When performing conventional Bayesian UQ using only quantitative data, a common choice of likelihood function (e.g., see \cite{Kozer2013} and \cite{Harmon2017}) is the chi squared function. \begin{equation} -\log P(\mathbf{y}|\btheta) = \chi^2(\btheta) + \textrm{const}, \qquad \chi^2(\btheta) = \sum_{i=1}^n \frac{(y_i-f_i(\btheta))^2}{2\sigma_i^2} \label{eq:chisq} \end{equation} Here $\sigma_i$ is the standard deviation of the measurement $y_i$, and the constant is independent of $\btheta$.
If $y_i$ represents the sample mean of several independent trials, it is common to estimate $\sigma_i$ as the standard error of the mean. This likelihood function has a strong theoretical motivation. The underlying assumption is that each $Y_i$ has an independent Gaussian distribution with mean $f_i(\btheta)$ and standard deviation $\sigma_i$. Then the probability of a single data point $y_i$ given $\btheta$ is \begin{equation} P(y_i|\btheta) = \frac{1}{\sqrt{2\pi}\sigma_i} \exp\left(\frac{-(y_i-f_i(\btheta))^2}{2\sigma_i^2}\right) \end{equation} Given that the $Y_i$ are independent, the probability of the complete dataset $\mathbf{y}$ given $\btheta$ is given by the product \begin{equation} P(\mathbf{y}|\btheta) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi}\sigma_i} \exp\left(\frac{-(y_i-f_i(\btheta))^2}{2\sigma_i^2}\right) \label{eq:product} \end{equation} When performing MCMC sampling, we typically only need a value \textit{proportional to} $P(\mathbf{y}|\btheta)$ to calculate the ratio $P(\mathbf{y}|\btheta_1) / P(\mathbf{y}|\btheta_2)$ for two parameter sets $\btheta_1$ and $\btheta_2$. This ratio is used to determine, for example, the probability of transitioning from $\btheta_1$ to $\btheta_2$ in the Metropolis-Hastings algorithm. We can therefore ignore proportionality constants in Equation \ref{eq:product} that are independent of $\btheta$. \begin{equation} P(\mathbf{y}|\btheta) \propto \prod_{i=1}^n \exp\left(\frac{-(y_i-f_i(\btheta))^2}{2\sigma_i^2}\right) \label{eq:propproduct} \end{equation} Taking the negative logarithm of Equation \ref{eq:propproduct} results in the conventional chi squared function (Equation \ref{eq:chisq}). Therefore, under the assumptions stated in this section, the chi squared function represents the kernel of the negative log likelihood and can be rigorously used in Bayesian UQ algorithms. \subsubsection{Likelihood function for qualitative data} We now consider the situation in which the experimental data are qualitative. By qualitative data, we specifically mean observations that can be expressed as inequality constraints to be enforced on outputs of a model. Our problem statement is nearly identical to that presented in Section \ref{sec:problem}, except we are no longer given the dataset $\mathbf{y}$. Instead, for each $Y_i$, we are given a constant $c_i$ and told whether $y_i < c_i$ or $y_i > c_i$ was observed. Here, $y_i$ is the sample generated from $Y_i$ and is never observed; $y_i < c_i$ (or $y_i > c_i$) is the observation, which has two possible outcomes. We explicitly write down the procedure used to generate these qualitative observations from the $Y_i$, which we refer to as our \textit{sampling model}: \begin{algorithm}[H] To generate observation $i$, sample $y_i$ from $Y_i$ and report whether $y_i < c_i$ or $y_i > c_i$. \end{algorithm} \vspace{-12pt} Without loss of generality, we assume all given observations have the form $y_i < c_i$. If some quantity $A$ yielded an observation $a > k$, we could set $Y_i=-A$ and $c_i=-k$. This form also supports the case of an inequality $A<B$ between two measured quantities, as we could set $Y_i=A-B$ and $c_i=0$. To perform Bayesian analysis, we require an expression for the probability of observing $y_i < c_i$ for all $i$ (rather than observing $y_i > c_i$ for some $i$), given a parameter set $\btheta$. As shorthand, we will write this as $P(\mathbf{y}<\mathbf{c}|\btheta)$, where $\mathbf{y}$ is a vector of the $y_i$ and $\mathbf{c}$ is a vector of the $c_i$.
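Anticipating the Gaussian form for $Y_i$ adopted below, this sampling model is simple to simulate; the sketch below (with our own function name, not part of PyBioNetFit) is essentially how the synthetic two-category data described in Methods were generated: \begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def sample_two_category(f_i, sigma_i, c_i):
    # Draw y_i from Y_i ~ Normal(f_i, sigma_i) and report only the
    # category of the outcome; y_i itself is discarded.
    y_i = rng.normal(f_i, sigma_i)
    return 'y_i < c_i' if y_i < c_i else 'y_i > c_i'
\end{verbatim}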
Following the example of the chi squared likelihood function, we assume each $Y_i$ has a Gaussian distribution with a known standard deviation $\sigma_i$. The mean of the distribution is, as before, taken to be given by the model prediction $f_i(\btheta)$. With this distribution, the probability of observing $y_i < c_i$ is, by definition, given by the Gaussian cumulative distribution function (CDF). We will write the CDF of a Gaussian distribution with mean $\mu$ and standard deviation $\sigma$ evaluated at a point $x$ as $\textrm{cdf}(\mu,\sigma,x)$. The conditional probability of interest is as follows: \begin{equation} P(y_i<c_i|\btheta) = \textrm{cdf}(f_i(\btheta),\sigma_i,c_i) \label{eq:qualsingle} \end{equation} We note that for ease of implementation, $\textrm{cdf}(\mu,\sigma,x)$ can be written in terms of the error function $\textrm{erf}(x)$, which is implemented in many standard libraries, including the Python and C++ standard libraries. \begin{equation} \textrm{cdf}(\mu,\sigma,x) = \frac{1 + \textrm{erf}\left(\frac{x-\mu}{\sigma \sqrt{2}}\right)}{2} \end{equation} As shown in Figure \ref{fig:logistic}, Equation \ref{eq:qualsingle} is intuitively reasonable. If the true mean value of $Y_i$ is much smaller than $c_i$ (relative to the scale of $\sigma_i$), we are very likely to observe $y_i<c_i$, whereas if the mean of $Y_i$ is much larger than $c_i$, we are very unlikely to observe $y_i<c_i$. If the true mean of $Y_i$ is close to $c_i$, we are uncertain whether the observation will be $y_i<c_i$ or $y_i>c_i$ in the face of measurement noise. We note that this function has a similar appearance to the logistic function, which is commonly used to model binary categorization in machine learning. Assuming independence of the $Y_i$, the probability of the entire dataset is given by the product \begin{equation} P(\mathbf{y}<\mathbf{c}|\btheta) = \prod_{i=1}^n \textrm{cdf}(f_i(\btheta),\sigma_i,c_i) \label{eq:qualproduct} \end{equation} Finally, we take the negative logarithm to obtain \begin{equation} -\log P(\mathbf{y}<\mathbf{c}|\btheta) = \sum_{i=1}^n -\log \textrm{cdf}(f_i(\btheta),\sigma_i,c_i) \label{eq:qualobj} \end{equation} This function can be used for Bayesian UQ when considering qualitative data in an equivalent way to how the chi squared likelihood function is used when considering quantitative data. \begin{figure}[tb] \centering\includegraphics[scale=0.5]{Fig1.eps} \caption{The proposed form for $P(y_i<c_i|\btheta)$ (Equation \ref{eq:qualsingle}). } \label{fig:logistic} \end{figure} \subsubsection{Likelihood function for qualitative data with model discrepancy} \label{sec:twocat} The likelihood function in Equation \ref{eq:qualobj} has a remaining limitation when it comes to real-world experimental data. To illustrate this concern, we point to the model of yeast cell cycle control developed by Tyson and co-workers \citep{Chen2000,Chen2004,Csikasz-Nagy2006,Oguz2013,Kraikivski2015}. Several versions of this model have been parameterized using qualitative data (viability status of yeast mutants) by hand-tuning \citep{Chen2000,Chen2004,Csikasz-Nagy2006,Kraikivski2015} and with optimization algorithms \citep{Oguz2013,Mitra2018a}. In all of these parameterization studies, most but not all of the qualitative observations were satisfied by the reported best-fit parameterization. A few of the observations, however, were different from the model predictions.
Due to such anomalous observations, a likelihood function of the form described above could assign the dataset a very low likelihood given the model and parameters, even though there is intuitively good agreement between the parameterized model and the dataset. How can we reconcile anomalous observations? An explanation given by Tyson and co-workers is that a model has a limited amount of detail, and so is unable to capture every qualitative observation in the data \citep{Chen2004}. This explanation suggests using a statistical approach known as model discrepancy or model inadequacy \citep{Kennedy2001}. The principle of model discrepancy is that when calculating the likelihood of a dataset, one should take into account the difference between the model and reality. Although many statistical studies ignore model discrepancy, it has been shown to be important for performing effective statistical inference for certain problems \citep{Brynjarsdottir2014}. Given that qualitative data may be generated by high-throughput screening that could easily step outside the scope of a particular model, we believe model discrepancy is an especially important consideration for our applications. Existing treatments of model discrepancy often describe discrepancy with its own probability distribution, such as a Gaussian distribution that is autocorrelated in time \citep{Brynjarsdottir2014}. Such an approach, which assumes that model discrepancy is correlated for similar observations, is hard to apply to our problem formulation, in which the $Y_i$ are taken to be independent (possibly coming from different model outputs). Thus, we take a more generic approach of expressing model discrepancy as a constant probability $\epsilon_i$ for each qualitative observation. $\epsilon_i$ relates to the probability that a given observation is outside the scope of the model. We say that when an observation is made, there is a probability $\epsilon_i$ that $y_i < c_i$ is reported regardless of the expected value of $Y_i$ given by the model. Likewise, there is also a probability $\epsilon_i$ that $y_i > c_i$ is reported regardless of $Y_i$. These statements can be formalized as part of our sampling model: \begin{algorithm}[H] To generate observation $i$, make a weighted random choice of one of the following possibilities: \begin{itemize} \item With probability $1-2\epsilon_i$, sample $y_i$ from $Y_i$ and report whether $y_i<c_i$ or $y_i>c_i$ \item With probability $\epsilon_i$, report $y_i<c_i$ \item With probability $\epsilon_i$, report $y_i>c_i$ \end{itemize} \end{algorithm} \vspace{-12pt} With this modification, we have the probability distribution \begin{equation} P(y_i<c_i|\btheta) = \epsilon_i + (1-2\epsilon_i) \textrm{cdf}(f_i(\btheta),\sigma_i,c_i) \end{equation} and the likelihood function \begin{equation} -\log P(\mathbf{y}<\mathbf{c}|\btheta) = \sum_{i=1}^n -\log (\epsilon_i + (1-2\epsilon_i) \textrm{cdf}(f_i(\btheta),\sigma_i,c_i)) \label{eq:qualobjfinal} \end{equation} Equation \ref{eq:qualobjfinal} gives our recommended form for a likelihood function incorporating qualitative data with two possible categorical outcomes ($y_i<c_i$ or $y_i > c_i$). We will refer to this function as the \textit{two-category likelihood function}. Note that although we introduced $\epsilon_i$ to deal with model structure problems, it could also represent a shortcoming of our postulated Gaussian error model.
For example, if an experimental instrument had some probability of reporting a false positive or negative, regardless of whether the mean of $Y_i$ is close to the threshold $c_i$, this non-Gaussian error could be accounted for by increasing the value of $\epsilon_i$. \subsubsection{Likelihood function for ordinal data with more than two categories} \label{sec:threecat} We next derive a likelihood function for ordinal categorical data with more than two categories. For simplicity, we suppose an observation has three possible outcomes: $y_i<c_{i,1}$, $c_{i,1}<y_i<c_{i,2}$, and $y_i>c_{i,2}$, for constants $c_{i,1}$ and $c_{i,2}$. An example would be if we were making an ordinary qualitative observation ($y_i<c_i$ or $y_i>c_i$), but another possible outcome of the experiment is $y_i=c_i$ to within the experimental error. Then the cutoffs $c_{i,1}$ and $c_{i,2}$ could be chosen on either side of $c_i$ such that the outcome $c_{i,1}<y_i<c_{i,2}$ corresponds to $y_i$ within measurement error. From the definition of the Gaussian CDF we have \begin{equation} \label{eq:yltc1} P(y_i<c_{i,1}|\btheta) = \textrm{cdf}(f_i(\btheta),\sigma_i, c_{i,1}) \end{equation} \begin{equation} P(c_{i,1}<y_i<c_{i,2}|\btheta) = \textrm{cdf}(f_i(\btheta),\sigma_i, c_{i,2}) - \textrm{cdf}(f_i(\btheta),\sigma_i, c_{i,1}) \label{eq:pmiddle} \end{equation} \begin{equation} \label{eq:ygtc2} P(y_i>c_{i,2}|\btheta) = 1 - \textrm{cdf}(f_i(\btheta),\sigma_i, c_{i,2}) \end{equation} A simplification is possible under the assumption that $c_{i,1}$ and $c_{i,2}$ are far enough separated that for any $E(Y_i)$, at most two of the three categories have non-negligible probability. That is, if $E(Y_i)$ is close enough to $c_{i,2}$ that observing $y_i>c_{i,2}$ is a probable outcome, $E(Y_i)$ is also high enough above $c_{i,1}$ that observing $y_i<c_{i,1}$ has a probability close to zero. Thus, we assume that for all $\btheta$, either $\textrm{cdf}(f_i(\btheta),\sigma_i, c_{i,1}) = 0$ or $\textrm{cdf}(f_i(\btheta),\sigma_i, c_{i,2}) = 1$. This assumption is reasonable because if it were false, it would mean the experiment cannot reliably distinguish between the three categories, and so the data would be better analyzed as two-category data. With this assumption, Equation \ref{eq:pmiddle} can be rewritten as \begin{equation} P(c_{i,1}<y_i<c_{i,2}|\btheta) = \textrm{cdf}(f_i(\btheta),\sigma_i, c_{i,2}) \cdot (1 - \textrm{cdf}(f_i(\btheta),\sigma_i, c_{i,1})) \label{eq:pmiddle2} \end{equation} Note that Equation \ref{eq:pmiddle2} is equivalent to Equation \ref{eq:qualproduct} for two independent constraints $c_{i,1}<y_i$ and $y_i<c_{i,2}$ arising from two-category observations. This makes for a convenient implementation: rather than explicitly considering the two-sided observation $c_{i,1}<y_i<c_{i,2}$, we can rewrite the observation as two independent one-sided observations $c_{i,1}<y_i$ and $y_i<c_{i,2}$ described by Equation \ref{eq:qualproduct}. A modification to the two-category case is necessary when model discrepancy is included as in Equation \ref{eq:qualobjfinal}. Here, care must be taken to ensure that in the sampling model the probability of all possible outcomes sums to 1.
For example, a reasonable sampling model for a three-category observation would be the following: \begin{algorithm}[H] To generate observation $i$, make a weighted random choice of one of the following possibilities: \begin{itemize} \item With probability $1-3\epsilon_i$, sample $y_i$ from $Y_i$ and report whether $y_i<c_{i,1}$ or $c_{i,1}<y_i<c_{i,2}$ or $c_{i,2}<y_i$ \item With probability $\epsilon_i$, report $y_i<c_{i,1}$ \item With probability $\epsilon_i$, report $c_{i,1}<y_i<c_{i,2}$ \item With probability $\epsilon_i$, report $c_{i,2}<y_i$ \end{itemize} \end{algorithm} \vspace{-12pt} Recall that in Equation \ref{eq:qualobjfinal}, in the case of model discrepancy, the observation is equally likely to be $y_i>c_i$ or $y_i<c_i$ (each of these events is assumed to have probability $\epsilon_i$). In contrast, using the above sampling model, it is half as likely to report $y_i<c_{i,1}$ (probability $\epsilon_i$) as to report $y_i>c_{i,1}$ (probability $2\epsilon_i$). We generalize Equation \ref{eq:qualobjfinal} to account for the case of three-category observations by allowing for two separate parameters. We define the positive discrepancy rate $\epsilon_i^+$ as the probability that a constraint in the data is satisfied regardless of $Y_i$, and the negative discrepancy rate $\epsilon_i^-$ as the probability that a constraint is violated regardless of $Y_i$. For example, with the above sampling model, for the observation $c_{i,2}<y_i$, we would use $\epsilon_i^+ = \epsilon_i$ and $\epsilon_i^- = 2\epsilon_i$. Our modified likelihood function is \begin{equation} -\log P(\mathbf{y}<\mathbf{c}|\btheta) = \sum_{i=1}^n -\log (\epsilon_i^+ + (1-\epsilon_i^+-\epsilon_i^-) \textrm{cdf}(f_i(\btheta),\sigma_i,c_i)) \label{eq:qualobjfinalplus} \end{equation} We will refer to this function as the \textit{many-category likelihood function}. The same formulation can be extended to allow for an arbitrary number of ordinal categories. For example, with four categories defined by the thresholds $c_{i,1}$, $c_{i,2}$, and $c_{i,3}$, we could write expressions analogous to Equations \ref{eq:yltc1}--\ref{eq:ygtc2} for $P(y_i<c_{i,1}|\btheta)$, $P(c_{i,1}<y_i<c_{i,2}|\btheta)$, $P(c_{i,2}<y_i<c_{i,3}|\btheta)$, and $P(y_i>c_{i,3}|\btheta)$. We illustrate the use of Equation \ref{eq:qualobjfinalplus} with a concrete example. Suppose we have a quantity of interest with the corresponding random variable $A$, and we make a qualitative observation with three possible outcomes: $a<100$, $a\approx100$, or $a>100$. Suppose also that based on the sensitivity of the assay, we know that any value of $a$ in the range 85--115 would be reported as ``$a\approx100$.'' Given this knowledge of the assay sensitivity, we take the standard deviation of $A$ to be 5; that is, we can only confidently report $a<100$ if $a$ is 3 standard deviations below the threshold of 100. We choose the sampling model shown in Figure \ref{fig:ex3}A, giving a base probability of 0.03 to each possible outcome due to model discrepancy. Note that this sampling model follows the requirement that the probabilities of all possible outcomes sum to 1. We then formulate the constraint(s) as shown in Figure \ref{fig:ex3}B, depending on whether the actual observation is $a<100$, $a\approx100$, or $a>100$. The resulting probabilities are shown in Figure \ref{fig:ex3}C as a function of the expected value of $A$ predicted by the model.
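In code, the per-observation terms of Equations \ref{eq:qualobjfinal} and \ref{eq:qualobjfinalplus} reduce to a few lines using the error function. The following is a minimal sketch of the mathematics, not PyBioNetFit's internal implementation: \begin{verbatim}
import math

def cdf(mu, sigma, x):
    # Gaussian CDF: probability that Y < x for Y ~ Normal(mu, sigma).
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def neg_log_term(f_i, sigma_i, c_i, eps_plus, eps_minus):
    # One term of the many-category likelihood for the observation
    # y_i < c_i; setting eps_plus = eps_minus = eps_i recovers the
    # two-category likelihood.
    p = eps_plus + (1.0 - eps_plus - eps_minus) * cdf(f_i, sigma_i, c_i)
    return -math.log(p)
\end{verbatim} For example, with $\epsilon_i^+=0.01$ and $\epsilon_i^-=0.02$, the term evaluates to $-\log(0.01+0.97\cdot\textrm{cdf}(f_i(\btheta),\sigma_i,c_i))$.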
When using the many-category likelihood function, it is important to consider the underlying sampling model, and to choose $\epsilon_i^+$ and $\epsilon_i^-$ such that the probabilities in the sampling model sum to 1. An example of how to correctly choose $\epsilon_i^+$ and $\epsilon_i^-$ given a sampling model is presented in Section \ref{sec:application}. \begin{figure}[tb!] \centering\includegraphics{Fig2.eps} \caption{Example constraints and probabilities arising from a qualitative observation with three possible categorical outcomes. (A) The sampling model associated with the observation. (B) Inequalities and $\epsilon^{+}$ and $\epsilon^{-}$ values associated with each possible observation outcome. (C) Plots and equations giving the probability of each possible observation outcome as a function of the expected value of model output $A$. } \label{fig:ex3} \end{figure} \subsubsection{Combined likelihood function} If independent quantitative and qualitative data are available, it is straightforward to combine the chi squared likelihood function for quantitative data with one of the newly presented likelihood functions for qualitative data. One would simply sum Equations \ref{eq:chisq} and \ref{eq:qualobjfinal} (or \ref{eq:qualobjfinalplus}) to obtain the kernel of the negative log likelihood for the combined dataset. The relative weighting of the two datasets is determined by the standard deviations for the quantitative data points and the values of $\sigma_i$ and $\epsilon_i$ for the qualitative observations. \subsection{Software implementation} We implemented the likelihood functions described in the previous section in PyBioNetFit v1.1.0. PyBioNetFit supports both the two-category (Equation \ref{eq:qualobjfinal}) and many-category (Equation \ref{eq:qualobjfinalplus}) likelihood functions for qualitative data, and supports combining these functions with the chi squared likelihood function for quantitative data. The new options were added via an extension of the Biological Property Specification Language (BPSL) supported by PyBioNetFit. As previously described \citep{Mitra2019a}, a BPSL statement consists of an inequality, followed by an enforcement condition, followed by a weight. For example, in the statement \begin{equation*} \texttt{A<4 at time=1 weight 2} \end{equation*} the inequality is \texttt{A<4} (referring to some modeled quantity $A$), the enforcement condition is \texttt{time=1} (referring to time 1 in a time course), and the weight is declared by \texttt{weight 2}. This weight declaration refers to $C_i$ in the previously described static penalty function (Equation \ref{eq:static}). Using this formulation, the term added to an objective function for this constraint would be $2 \cdot \max(0,A(1)-4)$, where $A(1)$ is model output $A$ evaluated at time = 1. In PyBioNetFit v1.1.0, we added an alternative to the weight clause to specify parameters of the new likelihood functions. As described in Section \ref{sec:twocat}, for each inequality in the data, the two-category likelihood function has two user-configurable parameters: the probability $\epsilon_i$ of measuring $y_i<c_i$ regardless of the distribution of $Y_i$, and the standard deviation $\sigma_i$ of the quantity $Y_i$. The value of $1-2\epsilon_i$ (i.e., the probability that the distribution of $Y_i$ is relevant to the experimental result) is supplied to PyBioNetFit with the \texttt{confidence} keyword. $\sigma_i$ is supplied to PyBioNetFit with the \texttt{tolerance} keyword.
Therefore, an example BPSL statement using the two-category likelihood function is \begin{equation*} \texttt{A<4 at time=1 confidence 0.98 tolerance 0.5} \end{equation*} This statement would result in using the likelihood function of Equation \ref{eq:qualobjfinal} with $\epsilon_i=0.01$, $\sigma_i=0.5$, $c_i=4$, and $Y_i=A(1)$. The resulting term added to the likelihood function is $-\textrm{log}(0.01+0.98\cdot\textrm{cdf}(A(1),0.5,4))$. PyBioNetFit also supports the use of the many-category likelihood function (Equation \ref{eq:qualobjfinalplus}) through the specification of separate positive and negative discrepancy rates. In this case, the \texttt{confidence} keyword is replaced with the keywords \texttt{pmin} to specify $\epsilon_i^+$ (i.e., the minimum value of $P(y_i<c_i|\btheta)$) and \texttt{pmax} to specify $1-\epsilon_i^-$ (i.e., the maximum value of $P(y_i<c_i|\btheta)$). For example, the BPSL statement \begin{equation*} \texttt{A<4 at time=1 pmin 0.01 pmax 0.98 tolerance 0.5} \end{equation*} would use Equation \ref{eq:qualobjfinalplus} with $\epsilon_i^+=0.01$, $\epsilon_i^-=0.02$, $\sigma_i=0.5$, $c_i=4$, and $Y_i=A(1)$. The resulting term added to the likelihood function is $-\textrm{log}(0.01+0.97\cdot\textrm{cdf}(A(1),0.5,4))$. When writing these statements in BPSL, care must be taken to ensure that results are statistically valid. First, note that the \texttt{tolerance} specifies the standard deviation of the final random variable $Y_i$ used to sample $y_i$ in Equation \ref{eq:qualobjfinal}. For example, in the above statement, it refers to the standard deviation of $A(1)$. In the statement \texttt{A>B at time=5 confidence 0.98 tolerance 0.5}, \texttt{tolerance} refers to the standard deviation of $A(5)-B(5)$, which we conservatively take to be the sum of the standard deviations of $A(5)$ and $B(5)$. In the statement \texttt{A>4 always confidence 0.98 tolerance 0.5}, \texttt{tolerance} refers to the standard deviation of $\min(A(t))$, rather than the value of $A$ at any particular time. Second, it is important to keep in mind the underlying sampling model to correctly set \texttt{confidence} or \texttt{pmin} and \texttt{pmax}. For example, in the sampling model of Figure \ref{fig:ex3}A, there are three possible constraints, each with probability 0.03 of being satisfied due to model discrepancy and probability 0.06 of being violated due to model discrepancy. Therefore, the correct setting is \texttt{pmin 0.03 pmax 0.94}. Third, when using PyBioNetFit's enforcement keywords \texttt{always}, \texttt{once}, and \texttt{between}, it is important to be sure the possible categories in the sampling model are mutually exclusive and cover all possible outcomes. For example, if one of two possible categorical outcomes is \texttt{A>4 always}, the other must be \texttt{A<4 once} (not \texttt{A<4 always}). Likewise, if one category is \texttt{A>4 between time=5,time=10}, its negation is \texttt{A<4 once between time=5,time=10}. We note that the \texttt{once between} enforcement condition used here is a new feature of BPSL introduced in PyBioNetFit v1.1.1. The sampling model is never explicitly input into PyBioNetFit, as Equations \ref{eq:qualobjfinal} and \ref{eq:qualobjfinalplus} are defined regardless of whether the sampling model is well-defined. It is the user's responsibility to choose a well-defined sampling model and specify constraints accordingly to obtain meaningful results. \begin{figure}[tbp!] \centering\includegraphics{Fig3.eps} \caption{Configuration of the example problem in BPSL.
As described in the text, we considered the problem assuming either (A) two possible observation categories or (B) three possible categories. The left column shows an example BPSL statement for each possible category. In these BPSL statements, \texttt{p1} refers to the primary degranulation and \texttt{p3\_}$<t>$ refers to the secondary degranulation after a delay of $t$ minutes. Note that in the three-category case, the middle category requires two separate BPSL statements. The right column shows simulated trajectories of the primary (left) and secondary (right) degranulation responses that are consistent with the BPSL statement. For the three-category case, \texttt{degrHigh} and \texttt{degrLow} are functions defined in the BNGL model file for use in the BPSL statements.} \label{fig:setup} \end{figure} \subsection{Example application} \label{sec:application} To demonstrate the use of qualitative likelihood functions in PyBioNetFit, we performed Bayesian UQ on a synthetic example problem based on the study of \cite{Harmon2017}. The model of \cite{Harmon2017} describes the degranulation of mast cells in response to two consecutive stimuli with multivalent antigen. In the original study, it was found that depending on the time delay between the two stimuli, the secondary response could be either stronger or weaker than the primary response. The original data consisted of quantitative degranulation measurements for six different time delays. In our synthetic problem, we suppose that the experimental data took a different form. Rather than quantitative measurements, we assume that it is only possible to measure whether the secondary degranulation is higher or lower than the primary degranulation. These measurements can be seen as case-control comparisons between several conditions of interest (secondary degranulation at various time delays) and a control (primary degranulation). We assume that these measurements can be made at a larger number of time delays than were used in the original study (i.e., we have a less precise but higher-throughput instrument than in the actual study). We generated synthetic data of this form using the published parameter values of the model as ground truth. For each time delay in the data, we ran a simulation and added Gaussian noise to the primary and secondary degranulation outputs before recording whether the primary or secondary was higher. We generated datasets ranging from 4 to 64 time delays. The resulting datasets were implemented in BPSL as illustrated in Figure \ref{fig:setup}A. Note that we set the \texttt{confidence} to 0.98, allowing for a 0.02 chance of model discrepancy (although there is no true model discrepancy in this synthetic problem). We set the \texttt{tolerance} to $1.4 \times 10^4$, our estimate of the standard deviation of the difference between the primary and secondary degranulation values (taken conservatively as twice the standard deviation of the added noise for each individual degranulation value). We configured PyBioNetFit jobs to perform Bayesian UQ by parallel tempering for each dataset. The results are shown in Figure \ref{fig:bayes}A-D and Figures S1--S5. Not surprisingly, as the number of qualitative observations increases, we obtain a narrower distribution of parameter values, and these narrower distributions include the ground truth parameter values. This result demonstrates that with a sufficient amount of qualitative data, it is possible to find nontrivial credible intervals for parameter values. \begin{figure*}[!tbh]
\centering\includegraphics{Fig4.eps} \caption{Posterior distributions calculated by parallel tempering for three selected model parameters under different measurement protocols. (A-D) Datasets consisted of (A) 8, (B) 16, (C) 32, or (D) 64 qualitative observations, each with two possible categorical outcomes. (E) The dataset consisted of 64 qualitative observations, each with three possible categorical outcomes. Results for datasets of 4, 8, 16, and 32 measurements are provided in Supplementary information. (F) The dataset consisted of six quantitative data points, similar to the original study of \cite{Harmon2017}. Ground truth values are marked in red. The posterior distributions of all parameters are provided in Supplementary information. } \label{fig:bayes} \end{figure*} To demonstrate the use of the many-category likelihood function (Equation \ref{eq:qualobjfinalplus}), we repeated the analysis using three-category synthetic data. Our three categories allow the secondary degranulation to be measured as smaller, larger, or within error of the primary degranulation. The three-category dataset was declared in BPSL as illustrated in Figure \ref{fig:setup}B. Compared to the two-category synthetic data, modifications were required as described in Section \ref{sec:threecat}. The assumed sampling model used for the constraints in Figure \ref{fig:setup}B is the following, where $Y_i$ represents the primary degranulation minus the secondary degranulation: \begin{algorithm}[H] To generate observation $i$, make a weighted random choice of one of the following possibilities: \begin{itemize} \item With probability 0.97, sample $y_i$ from $Y_i$ and report whether $y_i<-4.2\times 10^4$ or $-4.2\times 10^4<y_i<4.2\times 10^4$ or $4.2\times 10^4<y_i$ \item With probability 0.01, report $y_i<-4.2\times 10^4$ \item With probability 0.01, report $-4.2\times 10^4<y_i<4.2\times 10^4$ \item With probability 0.01, report $4.2\times 10^4<y_i$ \end{itemize} \end{algorithm} \vspace{-12pt} We have chosen a threshold of $4.2\times 10^4$ for the difference between primary and secondary degranulation that qualifies as ``within error.'' This value is three times the standard deviation of $Y_i$, giving the separation of categories required in Section \ref{sec:threecat} (i.e., any sampled $y_i$ is consistent with at most two possible categories). This condition allows us to define the middle category ($-4.2\times 10^4<y_i<4.2\times 10^4$) using two independent BPSL statements. The choice of threshold is reflected in the BPSL by the use of the model outputs referred to as \texttt{degrHigh} and \texttt{degrLow}. Based on the sampling model, each category has a minimum probability of 0.01 due to model discrepancy, and a maximum probability of 0.98 (because the other two categories each have a minimum of 0.01). Therefore, we set \texttt{pmin} to 0.01 and \texttt{pmax} to 0.98 instead of using the \texttt{confidence} keyword. Finally, the \texttt{tolerance} is set to $1.4\times 10^4$, the same as for the two-category dataset. The results of parallel tempering using this dataset are illustrated in Figure \ref{fig:bayes}E and Figures S6--S10. As expected, compared to the results with the two-category dataset of the same size, some parameters are bounded more tightly around their ground truth values. For comparison, we also performed the analysis using synthetic quantitative data generated at the same time delays as in the original study (Figure \ref{fig:bayes}F and Figure S11).
The quantitative dataset produced distributions even tighter than those of the three-category qualitative data. Still, it is notable how close we can get to the quantitative results by using purely qualitative data. \section{Discussion} Here we have presented a new statistical framework for using qualitative data in conjunction with Bayesian UQ for biological models. In these models, unidentifiable parameters are common, but Bayesian analysis can determine which parameters and correlations are identifiable, and to what extent the model has predictive value despite unidentifiable parameters. We see this framework as a more statistically rigorous improvement upon our previously described static penalty function approach \citep{Mitra2018a} (Equation \ref{eq:static}). Our new framework can be used for statistical analysis, whereas the previous formulation was simply a heuristic for finding a single reasonable parameter set. Our new likelihood function has applications beyond Bayesian UQ. It can also, like the static penalty function, be used with optimization algorithms to find a point estimate of the best parameters. In such a problem, the global minimum (assuming it can be found by an optimization algorithm) is the maximum likelihood estimate, which coincides with the maximum of the posterior distribution when the priors are uniform. The new likelihood function may also be used for UQ by profile likelihood analysis \citep{Kreutz2013}. The static penalty function may remain more efficient at point estimation. The cdf-based likelihood function has the limitation that when far from constraint satisfaction, its gradient is near zero, and so it cannot effectively guide an optimization algorithm toward constraint satisfaction. In contrast, the static penalty function provides useful information for optimization at any distance from constraint satisfaction. One potential workflow could be to use the static penalty function for initial optimization, followed by the likelihood function for refinement and evaluation of the best fit. We note that under our new framework, each constraint now has two adjustable settings: $\epsilon_i$ and $\sigma_i$. This may appear worse than the single weight parameter $C_i$ in the static penalty formulation, but the advantage is that both of these parameters have a statistical interpretation. $\epsilon_i$ represents the probability of model discrepancy resulting in a qualitative observation that occurs regardless of the model and its predicted mean. $\sigma_i$ represents the standard deviation of the quantity considered in the constraint. This value might seem challenging to estimate, given that we may not even be able to quantitatively measure the quantity of interest. However, much of the same intuition holds as when dealing with Gaussian-distributed quantitative data. In particular, if there is a difference of $2\sigma_i$ between a threshold and the mean, we can be reasonably confident (probability 97.7\%) that an observation would yield the correct result (greater or less than the threshold). With a difference of $3\sigma_i$, we can be extremely confident (probability 99.87\%). To choose $\sigma_i$, a reasonable thought process would be to ask, ``How large a difference would there have to be for the experiment to be sure to detect it?'', and set $\sigma_i$ equal to one third of that difference. Both parameters can be seen as optional.
If we do not expect a scenario in which a constraint is impossible to reconcile with our model, we can set $\epsilon_i=0$, ignoring this aspect of the likelihood function. Likewise, if we have no way to estimate the standard deviation of the measured quantity, we can set $\sigma_i=0$ and use $\epsilon_i$ to set a fixed probability of satisfying the constraint. Thus, the two adjustable constants should be seen as an opportunity to provide all available information about a qualitative observation of interest, rather than as a burden for manual adjustment. We expect that our new formulation of a likelihood function derived from qualitative data will be useful in future modeling studies and will help facilitate the wider adoption of qualitative data as a data source for model parameterization. \section*{Acknowledgements} We acknowledge computational resources provided by the Institutional Computing program at Los Alamos National Laboratory, which is operated by Triad National Security, LLC for the NNSA of DOE under contract 9233218CNA000001. We thank Steven Sanche for useful discussions.\vspace*{-12pt} \section*{Funding} This work has been supported by NIH/NIGMS grant R01GM111510.\vspace*{-12pt}
\section{Introduction} \label{Intro} Convolutional neural networks (CNNs) \cite{le1990handwritten} currently play an important role in the deep learning and computer vision fields. In the past several years, researchers have shown that CNNs can achieve state-of-the-art performance in many computer vision tasks, especially image classification and recognition \cite{NIPS2012_AlexNet,GoogleLeNet_2014,VGG_2015}. Compared with fully connected deep neural networks (DNNs), CNNs are superior at exploiting spatial constraints and in turn extracting better local features from input images using convolution layers and weight sharing, and they may further provide better invariance through the pooling mechanism. All of these properties make CNNs very suitable for image-related tasks \cite{lecun1995convolutional}. Moreover, large-scale deep CNNs can be effectively learned end-to-end in a supervised way from a large amount of labelled images. In the past several years, a tremendous amount of research effort has been devoted to further improving the performance of deep CNNs. In \cite{hinton2012improving,srivastava2014dropout}, the dropout method has been proposed to prevent CNNs from overfitting by randomly dropping a portion of hidden nodes in the network during the training procedure. Many experiments have confirmed that the dropout technique can significantly improve network performance, especially when only a small training set is available. In addition, a similar idea, called dropconnect \cite{wan2013dropconnect}, has been proposed to drop connections between layers instead of hidden nodes during the training stage. Another interesting research direction is to design good nonlinear activation functions for neural networks beyond the popular rectified linear function (ReLU), such as maxout \cite{goodfellow2013maxout} and PReLU \cite{he2015delving}, which have also been demonstrated to improve classification performance. On the other hand, another important path to improving model performance is to search for new CNN structures. For example, in \cite{lin2013network}, Network in Network (NIN) has been proposed, in which a micro neural network is used to replace the regular linear convolutional filter. The Recurrent Convolutional Neural Network (R-CNN) \cite{liang2015recurrent} is another new CNN structure, which introduces recurrent connections into the convolution layers. In \cite{rippel2015spectral}, the spectral pooling method is proposed, which applies the discrete Fourier transform in the pooling layers to preserve more useful information after dimensionality reduction. More recently, a novel model, called Hybrid Orthogonal Projection and Estimation (HOPE) \cite{zhang2015hybrid}, has been proposed to learn neural networks in either supervised or unsupervised ways. This model introduces a linear orthogonal projection to reduce the dimensionality of the raw high-dimensional data and then uses a finite mixture distribution to model the extracted features. By splitting feature extraction and data modeling into two separate stages, it may derive a good feature extraction model that can generate better low-dimensional features for the subsequent learning process. More importantly, based on the analysis in \cite{zhang2015hybrid}, the HOPE model has a tight relationship with neural networks, since each hidden layer of a DNN can also be viewed as a HOPE model composed of a feature extraction stage and a data modeling stage.
Therefore, both maximum likelihood based unsupervised learning and minimum cross-entropy error based supervised learning algorithms can be used to learn neural networks under the HOPE framework. In this case, the standard back-propagation method may be used to optimize the objective function, except that orthogonal constraints are imposed on all projection layers during the training procedure. However, \cite{zhang2015hybrid} did not take CNNs into account but merely investigated the HOPE models for fully connected neural networks, demonstrating good performance on the small MNIST data set. In this paper, we extend the HOPE model to the popular CNNs by considering the special model structures of both convolution and pooling layers, and we further consider how to introduce orthogonal constraints into the CNN model structure and learn CNNs under the HOPE framework. The most straightforward idea is to use a HOPE layer as the first hidden layer in CNNs to de-correlate the high-dimensional inputs and in turn remove irrelevant noise, which we call a HOPE-Input layer. This idea is similar to the original formulation in \cite{zhang2015hybrid}, except that the HOPE model is applied to each convolutional filter. Moreover, the pooling layers, using either average pooling or max pooling, are a critical step in CNNs \cite{jarrett2009best}, since they reduce the resolution of the lower-level feature maps and thus make the models more tolerant of slight distortions or translations in the original images \cite{le1990handwritten}. In \cite{boureau2010theoretical}, a theoretical analysis of average pooling and max pooling is given to show how pooling can affect network performance. However, in most cases, the pooling layers are still configured based on empirical information. In \cite{springenberg2014striving}, a new CNN structure is proposed in which convolution layers with larger strides replace the pooling layers; the authors argue that such larger-stride convolution layers can perform equally well as the pooling layers, and they achieve similar performance in experiments. Hinted by this idea, we propose another method to apply the HOPE models to CNNs, namely using the HOPE models to replace the regular pooling layers, which we call a HOPE-Pooling layer. In this way, orthogonality is further introduced into the models at this stage. Our experimental results on both the CIFAR-10 and CIFAR-100 data sets have shown that both HOPE-Input and HOPE-Pooling layers result in significant performance improvements over the regular CNN baseline models \footnote{The code of our HOPE CNN can be downloaded via: https://github.com/mowangphy/HOPE-CNN}. The rest of this paper is organized as follows: Section~\ref{HOPE} briefly reviews the HOPE model and its usage in DNNs. In Section~\ref{Method}, we present both HOPE-Input layers and HOPE-Pooling layers. In Section~\ref{Experiments}, we report experimental results on two popular data sets, namely CIFAR-10 and CIFAR-100, and compare with other CNN models. Finally, in Section~\ref{Conclusion}, we conclude the paper with our findings.
\section{Hybrid Orthogonal Projection and Estimation (HOPE) Framework} \label{HOPE} In the original Hybrid Orthogonal Projection and Estimation (HOPE) formulation \cite{zhang2015hybrid}, it is assumed that any high-dimensional feature vector can be modelled by a hybrid model consisting of feature extraction using a linear orthogonal projection and statistical modeling using a finite mixture model. Assume each high-dimensional feature vector ${\bf x}$ is of dimension $D$; the linear orthogonal projection maps ${\bf x}$ to an $M$-dimensional feature space ($M < D$), in which the projected vector retains most of the useful information in ${\bf x}$. Specifically, we can define a $D \times D$ orthogonal matrix $[ {\bf U} ; \;\; {\bf V} ]$ which satisfies: \begin{equation} \label{eq-signal-noise} [{\bf z}; \;\; {\bf n}] = [ {\bf U} ; \;\; {\bf V} ] \; {\bf x} \end{equation} where ${\bf z}$ is an $M$-dimensional vector, called the signal component, and ${\bf n}$ is the residual noise vector with dimensionality $D - M$. In practice, ${\bf z}$ is heavily de-correlated, but it may still lie in a rather high-dimensional feature space. In the HOPE formulation, it is proposed to model ${\bf z}$ with a finite mixture model: \begin{equation} \label{eq-signal-model} p({\bf z}) = \sum_{k=1}^K \pi_k \cdot f_k({\bf z}|\theta_k) \end{equation} where $K$ is the number of mixture components, $\pi_k$ is the mixture weight of the $k$th component ($\sum_{k=1}^K \pi_k = 1$), $f_k()$ denotes a selected distribution from the exponential family, and $\theta_k$ denotes all model parameters of $f_k()$. As discussed in \cite{zhang2015hybrid}, if the von Mises-Fisher (vMF) distribution is chosen for $f_k()$, the resultant HOPE model is equivalent in mathematical formulation to a hidden layer in neural networks using the popular rectified linear units (ReLU). The HOPE model combines a linear orthogonal projection and a finite mixture model under a unified generative modeling framework. It can be learned in an unsupervised way based on maximum likelihood estimation from unlabelled data, as well as discriminatively from labelled data. In \cite{zhang2015hybrid}, the HOPE model has been applied to fully connected DNNs, which are then learned accordingly in either supervised or unsupervised ways. One hidden layer with input vector ${\bf x}$ $({\bf x} \in R^D)$ and output vector ${\bf y}$ $({\bf y} \in R^G)$ is first split into two layers: i) The first layer is a linear orthogonal projection layer, which is used to project ${\bf x}$ to a feature vector ${\bf z}$ $ ({\bf z} \in R^M, M < D)$ and remove the noise signals by using an orthogonal projection matrix ${\bf U}$: \begin{equation} \label{eq-DNN-projection} {\bf z} = {\bf U}{\bf x}. \end{equation} ii) The second layer is a non-linear model layer, which converts ${\bf z}$ to the output vector ${\bf y}$ following the selected model $f_k()$ and a nonlinear log-likelihood pruning operation. An example of a HOPE layer in DNNs is shown in Figure \ref{Fig:HOPEDNN}. \begin{figure}[ht] \begin{center} \includegraphics[width=0.8\linewidth]{HOPEDNN.png} \end{center} \caption{The HOPE model is viewed as a hidden layer in DNNs.} \label{Fig:HOPEDNN} \end{figure} As in \cite{zhang2015hybrid}, all HOPE model parameters, including the projection matrix ${\bf U}$ and the model matrix ${\bf W}$, can be learned, using the error back-propagation algorithm with stochastic gradient descent, to optimize an objective function subject to an orthogonal constraint, ${\bf U} {\bf U}^T = {\bf I}$, for each projection layer.
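As a schematic illustration, the forward computation of such a HOPE layer can be sketched in a few lines of numpy code under the vMF/ReLU equivalence noted above; the shape conventions are ours, and the log-likelihood pruning is represented here by the ReLU: \begin{verbatim}
import numpy as np

def hope_layer(x, U, W):
    # Projection layer: U is an M x D matrix with M < D, so z = U x
    # extracts the signal component of x.
    z = U @ x
    # Model layer: with vMF mixture components this reduces to a linear
    # map followed by a ReLU-style log-likelihood pruning.
    return np.maximum(0.0, W @ z)
\end{verbatim}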
As in \cite{zhang2015hybrid}, for computational simplicity, the constraint is cast as the following penalty term to gradually de-correlate the matrix ${\bf U}$ during the learning process: \begin{equation} \label{eq-Upenalty} P({\bf U}) = \sum_{i=1}^{M} \sum_{j=i+1}^{M} \frac{|{\bf u}_i \cdot {\bf u}_j|}{|{\bf u}_i| \cdot |{\bf u}_j|}. \end{equation} In \cite{zhang2015hybrid}, both unsupervised learning and supervised learning are studied for DNNs under the HOPE framework. The above orthogonal constraint is found to be equally important in both scenarios. In this paper, we study how to learn CNNs in a supervised way under the HOPE formulation, and more specifically we investigate how to introduce orthogonality into the CNN model structure. \section{Our Proposed Method} \label{Method} In \cite{zhang2015hybrid}, the authors have applied the HOPE model to fully connected DNNs and have achieved good performance in experiments on small data sets like MNIST. However, the neural models more widely used in computer vision, i.e., convolutional neural networks (CNNs), have not been considered. Unlike DNNs, CNNs adopt some unique model structures and have achieved huge successes in many large-scale image classification tasks. Therefore, it is interesting to consider how to combine the HOPE model with CNNs to further improve image classification performance. \subsection{Applying the HOPE model to CNNs} To apply the HOPE model to CNNs, the most straightforward solution is to split each convolution layer into a concatenation of a projection layer and a model layer and impose the orthogonal constraints onto the projection layer as in \cite{zhang2015hybrid}. Assume that we have a regular convolution layer in CNNs, which uses some $S \times S$ linear filters to map from $C_i$ input feature maps to $C_m$ output feature maps. As shown in Figure \ref{Fig:Conv2DNN}, under the HOPE framework, we propose to split this convolution layer into two separate layers: \begin{enumerate} \item[i)] One linear orthogonal projection layer with the projection matrix ${\bf U}$: it linearly maps a 3-dimensional tensor with the size of $S \times S \times C_i$ into a vector of size $1 \times 1 \times C_p$, where $C_p$ denotes the number of feature maps used in this projection layer. As the projection filters convolve with the input layer, they generate a total of $C_p$ feature maps in the projection layer. The projection filter itself is a 4-dimensional tensor with the size of $S \times S \times C_i \times C_p$. Based on the definition of the convolution procedure, and following the formulation in \cite{zhang2015hybrid}, we can reshape this 4-dimensional tensor as a matrix ${\bf U}$ with the size of $ (S \cdot S \cdot C_i) \times C_p$, as shown in Figure \ref{Fig:Conv2DNN}. \item[ii)] One model layer with the weight matrix ${\bf W}$: it has exactly the same structure as a regular convolution layer, mapping the $C_p$ projected feature maps into $C_m$ output feature maps. Differing from \cite{zhang2015hybrid}, instead of mapping only a single projected vector, the proposed model layer here takes all projected vectors within each $S \times S$ region and maps all projected features within this region into the final output feature maps. We have found that this modification is critical for CNNs to achieve better performance in image classification. \end{enumerate} Figure \ref{Fig:Conv2DNN} shows the whole structure of one HOPE layer in CNNs. Since the projection layer is linear, we may collapse these two layers to derive a normal convolution layer in CNNs.
However, as argued in \cite{zhang2015hybrid}, there are many advantages to keeping them separate so as to learn CNNs under the HOPE framework. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.7\linewidth]{Conv2DNN_Full.png} \end{center} \caption{A convolution layer in CNNs may be viewed as a HOPE model.} \label{Fig:Conv2DNN} \end{figure*} Note that $C_p$ is always far less than $S \cdot S \cdot C_i$ in the above HOPE formulation, which implies that the orthogonal projection may help to remove irrelevant noise in this step. In this paper, we only consider the supervised learning of CNNs under the HOPE framework. In this case, the model parameters in the model layer can be learned in the same way as in conventional CNNs. However, for the projection layers, we need to impose the orthogonal constraint, ${\bf U} {\bf U}^T = {\bf I}$, during the learning process. Following \cite{zhang2015hybrid}, we cast this constraint as a penalty term in eq. (\ref{eq-Upenalty}). First of all, we need to derive the gradient of the penalty term $P({\bf U})$ with respect to ${\bf U}$ as follows: \begin{equation} \label{eq-grad-U} \frac{\partial P({\bf U})}{\partial {\bf u}_i} = \sum_{j=1}^{M} \frac{|{\bf u}_i \cdot {\bf u}_j|}{|{\bf u}_i| \cdot |{\bf u}_j|} \cdot \left(\frac{{\bf u}_j}{{\bf u}_i \cdot {\bf u}_j} - \frac{{\bf u}_i}{{\bf u}_i \cdot {\bf u}_i} \right) \end{equation} To facilitate the above computation on GPUs, we may equivalently represent the gradient computation in matrix form, involving the two matrices ${\bf D}$ and ${\bf B}$ as follows: \begin{equation} \label{eq-grad-U-mat} \frac{\partial P({\bf U})}{\partial {\bf U}} = ({\bf D} - {\bf B}){\bf U} \end{equation} where ${\bf D}$ is an $M$-by-$M$ matrix with $ d_{ij} = \frac{\mbox{sign}({\bf u}_i \cdot {\bf u}_j)}{|{\bf u}_i| \cdot |{\bf u}_j|} $ $(1 \le i, j \le M)$ and ${\bf B}$ is another $M$-by-$M$ diagonal matrix with $ b_{ii} = \frac{\sum_{j}g_{ij}}{{\bf u}_i \cdot {\bf u}_i} $, where $ g_{ij} = \frac{|{\bf u}_i \cdot {\bf u}_j|}{|{\bf u}_i| \cdot |{\bf u}_j|}$ $(1 \le i, j \le M)$. Secondly, we combine the above $\frac{\partial P({\bf U})}{\partial {\bf U}}$ with the gradient $\Delta {\bf U}$ calculated from the objective function: \begin{equation} \label{eq-grad-update} \widetilde{\Delta {\bf U}}= \Delta {\bf U} + \beta \cdot \frac{\partial P({\bf U})}{\partial {\bf U}} \end{equation} where $\beta$ is a pre-defined parameter to balance the orthogonal penalty term. Finally, the projection matrix ${\bf U}$ is updated as follows: \begin{equation} \label{eq-U-update} {\bf U}^{(n)} = {\bf U}^{(n-1)} - \gamma \cdot \widetilde {\Delta {\bf U}} \end{equation} where $\gamma$ is the learning rate for the weight update. During the learning process, ${\bf U}$ is gradually de-correlated and eventually becomes an orthogonal matrix. \subsection{HOPE-Input Layers} The first way to apply the HOPE model to CNNs is to use the above HOPE layer to replace the first convolution layer right after the image pixel input. The HOPE formulation may help to de-correlate the raw image pixel inputs and filter out irrelevant noise in the first place. This is called a HOPE-Input layer. In practice, we may apply more HOPE layers to replace the following convolution layers in CNNs as well. \subsection{HOPE-Pooling Layers} In CNNs, the pooling layers \cite{krizhevsky2012imagenet} are traditionally considered important for good performance.
\cite{springenberg2014striving} has shown that the pooling layers reduce feature dimensionality, which helps the CNNs to {\em view} much larger regions of the input feature maps and generate more stable and invariant high-level features. Moreover, \cite{springenberg2014striving} argues for using regular convolution layers with larger strides to replace the pooling layers, and claims that they achieve performance similar to that of the pooling layers. The particular network structure is called ``ALL-CNN''. The ALL-CNN models suggest a useful idea: we may use convolution layers, which have extra learnable parameters, in place of the simple pooling layers in CNNs. As an alternative way to apply the HOPE model to CNNs, we propose to use the HOPE layer in Figure \ref{Fig:Conv2DNN} to replace the normal pooling layers in CNNs. Compared with the regular pooling layers, we believe that the HOPE layer may be advantageous in feature extraction, since the linear orthogonal projection may help to de-correlate the input feature maps and generate better features for the upper layers. Assume we have a regular pooling layer, which takes a 3-dimensional tensor of size $S \times S \times C_p $ from one region in the input and generates a vector of size $1 \times 1 \times C_p $ based on a simple pooling operation, such as {\em max}. Normally, we do not have any learnable parameters in the pooling layers. In this paper, we propose to use a linear orthogonal projection layer, with a weight matrix of size $(S \cdot S \cdot C_p) \times C_p$, to replace the regular pooling layer. The projection matrix is learned as above to ensure orthogonality. We call the linear orthogonal projection layer along with the model layer a HOPE-Pooling layer. In practice, for simplicity, we just use a linear orthogonal projection layer to replace a pooling layer in CNNs, and the convolution layer next to it can be viewed as a model layer. In this way, we can reduce the number of new parameters to be introduced by our formulation. In this case, we still introduce about $S \cdot S \cdot C_p \cdot C_p$ more parameters. To make sure our model is still comparable with the baseline in terms of model size, we only use one HOPE-Pooling layer to replace the first pooling layer in CNNs, which normally has far fewer feature maps (i.e., $C_p$ is quite small), and keep the other regular pooling layers unchanged. Adding more HOPE layers may result in a much bigger model, which may quickly overfit a small training set.
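To make the update rule concrete, the penalty gradient and the combined update of Equations (\ref{eq-grad-U-mat})--(\ref{eq-U-update}) can be sketched in a few lines of numpy code. Our actual implementation is in MatConvNet/Matlab (see Section~\ref{Experiments}); this sketch assumes ${\bf U}$ is stored with one projection vector ${\bf u}_i$ per row: \begin{verbatim}
import numpy as np

def penalty_gradient(U):
    # Gradient of the orthogonal penalty P(U), in the matrix form
    # (D - B) U; the diagonal (j = i) contributions of D and B cancel.
    G = U @ U.T                    # Gram matrix of the rows u_i
    norms = np.sqrt(np.diag(G))    # |u_i|
    N = np.outer(norms, norms)     # |u_i| |u_j|
    D = np.sign(G) / N             # d_ij = sign(u_i . u_j) / (|u_i| |u_j|)
    g = np.abs(G) / N              # g_ij = |u_i . u_j| / (|u_i| |u_j|)
    B = np.diag(g.sum(axis=1) / np.diag(G))  # b_ii = sum_j g_ij / (u_i . u_i)
    return (D - B) @ U

def update_U(U, grad_from_objective, beta, gamma):
    # One SGD step on U with the orthogonal penalty added in.
    return U - gamma * (grad_from_objective + beta * penalty_gradient(U))
\end{verbatim}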
\begin{table*}[htb] \caption{The structure of several CNNs examined in this work} \label{table-CNNs} \centering \begin{tabular}{p{3.5cm}<{\centering}|p{3.5cm}<{\centering}|p{3.5cm}<{\centering}} \toprule Baseline & HOPE-Input & Single HOPE-Pooling \\ \midrule \multicolumn{3}{c}{Input: 32-by-32 images in RGB color channel} \\ \midrule \multirow{4}*{-} & {\bf $3 \times 3$ filter} & \multirow{4}*{-} \\ & {\bf 20 feature maps } \\ & {\bf using orthogonal } \\ & {\bf projection } \\ \midrule \multicolumn{3}{c}{$3 \times 3$ filter, 64 feature maps, batch normalization, ReLU, dropout 0.3} \\ \multicolumn{3}{c}{$3 \times 3$ filter, 64 feature maps, batch normalization, ReLU} \\ \midrule $2 \times 2$ & $2 \times 2$ & {\bf $2 \times 2$ filter} \\ max-pooling & max-pooling & {\bf 64 feature maps} \\ stride = 2 & stride = 2 & {\bf using orthogonal projection, stride = 2} \\ \midrule \multicolumn{3}{c}{$3 \times 3$ filter, 128 feature maps, batch normalization, ReLU, dropout 0.4} \\ \multicolumn{3}{c}{$3 \times 3$ filter, 128 feature maps, batch normalization, ReLU} \\ \midrule $2 \times 2$ & $2 \times 2$ & $2 \times 2$ \\ max-pooling & max-pooling & max-pooling \\ stride = 2 & stride = 2 & stride = 2 \\ \midrule \multicolumn{3}{c}{\{$3 \times 3$ filter, 256 feature maps, batch normalization, ReLU, dropout 0.4\} $\times 2$} \\ \multicolumn{3}{c}{$3 \times 3$ filter, 256 feature maps, batch normalization, ReLU} \\ \midrule $2 \times 2$ & $2 \times 2$ & $2 \times 2$ \\ max-pooling & max-pooling & max-pooling \\ stride = 2 & stride = 2 & stride = 2 \\ \midrule \multicolumn{3}{c}{ \{ $3 \times 3$ filter, 512 feature maps, batch normalization, ReLU, dropout 0.4 \} $\times 2$} \\ \multicolumn{3}{c}{$3 \times 3$ filter, 512 feature maps, batch normalization, ReLU} \\ \midrule $2 \times 2$ & $2 \times 2$ & $2 \times 2$ \\ max-pooling & max-pooling & max-pooling \\ stride = 2 & stride = 2 & stride = 2 \\ \midrule \multicolumn{3}{c}{ \{ $3 \times 3$ filter, 512 feature maps, batch normalization, ReLU, dropout 0.4 \} $\times 2$} \\ \multicolumn{3}{c}{$3 \times 3$ filter, 512 feature maps, batch normalization, ReLU} \\ \midrule $2 \times 2$ & $2 \times 2$ & $2 \times 2$ \\ max-pooling & max-pooling & max-pooling \\ stride = 2 & stride = 2 & stride = 2 \\ \midrule \multicolumn{3}{c}{Fully connected layer, 512 nodes, batch normalization, ReLU, dropout 0.5} \\ \multicolumn{3}{c}{Fully connected layer, 10 nodes, with softmax} \\ \bottomrule \end{tabular} \end{table*} \section{Experiments} \label{Experiments} In this paper, we use two widely used image classification data sets, namely CIFAR-10 and CIFAR-100 \cite{krizhevsky2009learning}, to evaluate the performance of our proposed HOPE-Input and HOPE-Pooling methods. \subsection{Databases} CIFAR-10 and CIFAR-100 are both popular data sets in computer vision. Both data sets contain 50,000 32-by-32 RGB images for training and 10,000 images for validation. The main difference between the two data sets is that CIFAR-10 divides all images into 10 coarse classes, while CIFAR-100 divides them into 100 fine classes. In our experiments, all images are transformed into the YUV color channels. \subsection{Experimental Configurations} In our experiments, we consider several different CNN structures, as specified in detail in Table \ref{table-CNNs}. Firstly, we follow the CNN structure defined by Sergey Zagoruyko as our baseline CNN.\footnote{See https://github.com/szagoruyko/cifar.torch for more information.
According to the website, without using data augmentation, the best performance on the CIFAR-10 test set is 8.7\% in error rate. By using the RGB color channels instead of YUV, our reproduced baseline performance is 8.30\% in this paper.} Then we evaluate the HOPE-Input CNN and HOPE-Pooling CNN as discussed in Section ~\ref{Method}, and compare them with the baseline model. In Table \ref{table-CNNs}, we have provided a detailed description of the structure of the 3 CNNs (baseline, HOPE-Input and HOPE-Pooling) used in our experiments. Moreover, we also consider another configuration that combines HOPE-Input and HOPE-Pooling, i.e., using one HOPE-Input layer and one HOPE-Pooling layer at the same time. To further investigate the performance of the HOPE-Input CNN, we also consider a model configuration called the LIN-Input CNN, which uses the same model structure as the HOPE-Input CNN except that the orthogonal constraint in eq. (\ref{eq-Upenalty}) is NOT applied in training. Similarly, for the HOPE-Pooling CNN, we consider another model configuration, named LIN-Pooling, which uses the same model structure as the HOPE-Pooling CNN but removes the orthogonal constraint in eq. (\ref{eq-Upenalty}). Moreover, the combination of LIN-Input and LIN-Pooling is used as another baseline for comparison. In all experiments, we use mini-batch SGD with a batch size of 100 images to perform 400 epochs of network training. The initial learning rate is $0.06$, and the learning rate is halved after every 25 epochs. We also use a momentum of 0.9 and a weight decay of 0.0005. In batch normalization \cite{ioffe2015batch}, we set $\epsilon = 0.001$. For the HOPE-Input and HOPE-Pooling layers, we use an initial $\beta$ of $0.15$, which is divided by $1.75$ after every 25 epochs. All weights in the CNNs are initialized using the method proposed by He et al.~\cite{he2015delving}. Note that we do not use any data augmentation in this work. \subsection{Learning Speed} We first consider the computational efficiency of the proposed HOPE methods in learning CNNs. Our computing platform includes an Intel Xeon E5-1650 CPU (6 cores), 64 GB of memory and an NVIDIA GeForce TITAN X GPU (12 GB memory). Our method is implemented with MatConvNet \cite{MatConvNet-2014}, a CUDA-based CNN toolbox for Matlab. The learning speeds of all DCNNs are listed in Table ~\ref{table:time}.
\begin{table}[htb]
\caption{The learning speed of different DCNNs.}
\label{table:time}
\centering
\begin{tabular}{c|c}
\toprule
Methods & Learning Speed \\
\midrule
Baseline & 220 images/s \\
LIN-Input & 206 images/s \\
HOPE-Input & 203 images/s \\
LIN-Pooling & 211 images/s \\
HOPE-Pooling & 208 images/s \\
LIN-Input + LIN-Pooling & 195 images/s \\
HOPE-Input + HOPE-Pooling & 190 images/s \\
\bottomrule
\end{tabular}
\end{table}
From Table ~\ref{table:time}, we can see that using the more complicated HOPE layers only slightly slows down the computation of CNNs on GPUs. Moreover, the learning speed of the HOPE methods is similar to that of the corresponding LIN methods, which implies that the computational overhead of the orthogonal projection constraint is negligible in training. \subsection{Performance on CIFAR-10 and CIFAR-100} We use the classification error rate on the validation sets of the selected databases to evaluate the performance of all CNN models.
Besides the 7 CNN configurations mentioned above, we also include several well-known CNN models from previous work for comparison with our methods, including Tree-Pooling \cite{lee2015generalizing}, BinaryConnect \cite{courbariaux2015binaryconnect} (the performance on CIFAR-100 is not provided), Spectral Pooling \cite{rippel2015spectral}, R-CNN \cite{liang2015recurrent}, Fractional Maxpooling \cite{graham2014fractional} (the performance on CIFAR-10 without data augmentation is not provided), ALL-CNN \cite{springenberg2014striving}, Maxout Networks \cite{goodfellow2013maxout} and Network in Network \cite{lin2013network}. From the results summarized in Table \ref{table:performance}, we can see that the proposed HOPE-based CNN models work well on both data sets. The proposed CNN model that combines HOPE-Input and HOPE-Pooling achieves the best performance on both CIFAR-10 and CIFAR-100, which is also the state-of-the-art performance when data augmentation is not used in training. Moreover, we can see that both the HOPE-Input and HOPE-Pooling CNNs consistently outperform the counterpart LIN models that do not use the orthogonal constraints. This implies that the orthogonality introduced by the HOPE methods is quite useful for improving the performance of CNNs in both image classification tasks.
\begin{table*}[htb]
\caption{The classification error rates of all examined CNNs on the validation set of CIFAR-10 and CIFAR-100 (without using data augmentation).}
\label{table:performance}
\centering
\begin{tabular}{c|c|c}
\toprule
 & CIFAR-10 & CIFAR-100 \\
\midrule
Baseline & 8.30\% & 30.71\% \\
LIN-Input & 7.97\% & 30.13\% \\
HOPE-Input & 7.81\% & 29.96\% \\
LIN-Pooling & 8.30\% & 31.85\% \\
HOPE-Pooling & 8.21\% & 30.60\% \\
LIN-Input + LIN-Pooling & 8.22\% & 31.55\% \\
HOPE-Input + HOPE-Pooling & {\bf 7.57\%} & {\bf 29.80\%} \\
\midrule
Tree-Pooling \cite{lee2015generalizing} & 7.62\% & 32.37\% \\
BinaryConnect \cite{courbariaux2015binaryconnect} & 8.27\% & - \\
Spectral Pooling \cite{rippel2015spectral} & 8.60\% & 31.60\% \\
R-CNN \cite{liang2015recurrent} & 8.69\% & 31.75\% \\
F-maxpooling \cite{graham2014fractional} & - & 31.20\% \\
ALL-CNN \cite{springenberg2014striving} & 9.08\% & 33.71\% \\
Maxout \cite{goodfellow2013maxout} & 11.68\% & 34.54\% \\
Network in Network \cite{lin2013network} & 10.41\% & 35.68\% \\
\bottomrule
\end{tabular}
\end{table*}
\section{Conclusions} \label{Conclusion} In this paper, we have proposed several methods to apply the recent HOPE model to CNNs for image classification. We have analyzed the relationship between CNNs and the HOPE model, and found a suitable way to use the HOPE method to replace the convolution and pooling layers in CNNs. Experimental results on the CIFAR-10 and CIFAR-100 data sets have shown that our proposed HOPE methods work well with CNNs, and can yield state-of-the-art classification performance on these two data sets. This study has confirmed that the orthogonal constraints imposed by the HOPE models can significantly improve the performance of CNNs in these image classification tasks. {\small \bibliographystyle{unsrt}
\section{Introduction} Stars are detectable as X-ray sources at several important stages of their lives. Pre-main sequence stars are X-ray sources because of their enhanced magnetic activity \citep{pf05}. Massive OB and Wolf-Rayet stars produce X-rays through shocks in their stellar winds \citep{ber97,gagne05}, and possibly from magnetically-confined plasma close to their stellar surfaces \citep{wc07}. Neutron stars are bright X-ray sources if they are young and still have latent heat from what was once the stellar core \citep{wwn96}, if they accelerate particles in rotating, moderate-strength ($B$$\sim$$10^{12}$ G) fields \citep{gs06}, or if they have extremely strong fields ($B$$\sim$$10^{14}$ G) that decay and accelerate particles \citep{wt06}. White dwarfs, neutron stars, and black holes are bright X-ray sources if they are accreting matter from a binary companion \citep{war95,psa06}, or in principle from the interstellar medium \citep[see, e.g.,][]{per03}. Therefore, X-ray surveys can be used to study the life cycles of stars, particularly their start and end points. Here we present a catalog of X-ray sources detected in {\it Chandra}\ observations toward the inner 2$^\circ$\ by 0.8$^\circ$\ of the Galaxy. The region encompasses about 1\% of the Galactic stellar mass \citep{lzm02}, and possibly up to 10\% of the Galactic population of young, massive stars \citep{mp79,fig04}. Therefore, these data provide a statistically meaningful sample of the Galactic stellar population. Previous catalogs based on {\it Chandra}\ data on the Galactic center have been published by \citet{m-cat} using 630 ks of data taken through 2002 June on the central 17\arcmin$\times$17\arcmin\ around \mbox{Sgr A$^*$}, and by \citet{m-wide} using observations taken through 2005 June on the inner 2$^\circ$$\times$0.8$^\circ$\ of the Galaxy. However, since the publication of these catalogs, a large amount of new data have been obtained. These data increase the number of point sources identified by a factor of 2.5. They also provide much better astrometry for individual X-ray sources. The improvement in astrometry enables the identification of rare objects such as Wolf-Rayet stars, X-ray binaries, and rotation-powered pulsars, through comparisons of our X-ray catalog with radio and infrared data sets (e.g., Mauerhan, J. \mbox{et al.}, in prep). Therefore, we provide here an updated catalog of point sources, which incorporates and supersedes the previous catalogs. We also describe the spatial and luminosity distributions of the X-ray sources. Throughout this paper, we adopt a distance to the Galactic center of $D$=8~kpc \citep{reid93,mcn00}, and an average absorption column of $N_{\rm H}$=$6\times10^{22}$~cm$^{-2}$ \citep{bag03}. \section{Observations\label{sec:obs}} As of 2007 August, the central 2$^\circ$$\times$0.8$^\circ$\ of the Milky Way has been observed with the imaging array of the {\it Chandra}\ Advanced CCD Imaging Spectrometer \citep[ACIS-I;][]{wei02}\footnote{See also http://cxc.harvard.edu/proposer/POG/html/ACIS.html} on numerous occasions. The majority of the new sources in this catalog come from 600 ks of exposure that we obtained in fifteen 40 ks pointings covering $\approx$1$^\circ$\ of the Galactic center. The new data obtained since 2005 also include 370 ks on the central 20 pc around \mbox{Sgr A$^*$}, 100~ks on the Arches cluster \citep{wdl06}, and 100~ks on Sgr C.
We also include sources previously identified in 630 ks of data on the inner 20 pc around \mbox{Sgr A$^*$}\ \citep{m-cat, m-ps}; thirty 12 ks exposures of the 2$^\circ$$\times$0.8$^\circ$\ survey obtained by \citet[][see also Muno \mbox{et al.}\ 2006a]{wgl02}\nocite{m-wide}; and deep pointings toward the Radio Arches \citep[50 ks;][]{lyz04} and Sgr B2 \citep[100 ks;][]{tak02}. The dates, observation identifiers (ObsIds), durations, locations, roll angles, and some values relevant for the astrometry (\S2.1) for each exposure are listed in Table~\ref{tab:obs}. The observations in the table are sorted by right ascension and declination, so that observations near the same point are grouped. The ACIS-I is a set of four 1024-by-1024 pixel CCDs covering a field of view of 17\arcmin\ by 17\arcmin. When placed on-axis at the focal plane of the grazing-incidence X-ray mirrors, the imaging resolution is determined primarily by the pixel size of the CCDs, 0\farcs492. The CCD frames are read out every 3.2~s, which provides the nominal time resolution of the data. The CCDs also measure the energies of incident photons within a calibrated energy band of 0.5--8~keV, with a resolution of 50--300 eV (depending on photon energy and distance from the read-out node). However, in some of the earlier, shallow exposures (ObsIDs 2267 through 2296), an event filter was employed on the satellite that removed X-rays with energies below 1 keV before the data were sent to the ground. The lack of 0.5--1.0 keV photons had a minor impact on our results, because there were only 76 sources that were detected below 2~keV for which the photometry was derived entirely from the ObsIDs 2267 through 2296.\footnote{For these sources, we under-estimate the flux by $\approx$25\%. The soft color is also systematically high. For instance, sources with $N_{\rm H} \approx 10^{21}$ cm$^{-2}$ will have $HR0$$\approx$$-0.5$ using a 0.5--2.0 keV soft band, and $HR0$$\approx$$-0.3$ using a 1.0--2.0 keV soft band.} We omitted ObsID 242 from our analysis, because it was taken with the detector at a cooler temperature (110 K, versus 120 K). A flux image and composite exposure map are displayed in Figure~\ref{fig:obs}, and an adaptively-smoothed three-color image is displayed in Figure~\ref{fig:rgb}. The data were processed using the {\it Chandra}\ Interactive Analysis of Observations (CIAO)\footnote{http://cxc.harvard.edu/ciao/} package. The data were processed as they arrived, so we used CIAO versions 3.3 and 3.4. Information on the detectors was taken from the Calibration Database (CALDB)\footnote{http://cxc.harvard.edu/caldb/} versions 3.2.1 and 3.3.0. The differences between the two versions of the software were too minor to justify re-processing the older portions of the dataset. We only used data from ACIS-I; data from the S array were omitted because it was offset far from the aim point, and the large point spread function on the detector resulted in severe source confusion. We started with the level 1 event files provided by the {\it Chandra}\ X-ray Center (CXC), and reprocessed the data using the tool {\tt acis\_process\_events} in order to remove pixel randomization and apply more recent energy calibration. We then removed events associated with bad pixels, and applied the standard grade filters to the events and good-time filters supplied by the CXC.
We applied {\tt acis\_run\_hotpix} to flag events associated with cosmic rays, and removed them from the event list (we did not run {\tt acis\_detect\_afterglow} because it sometimes removes genuine X-ray events; we did, however, later remove sources that were cosmic-ray afterglows; see \S2.2). We then searched each observation for time intervals when particles encountering the detector caused the background event rate to flare to $\ge$3$\sigma$ above the mean level, and removed them. These background flares were found in 12 observations, and lasted $<$5\% of the duration of each observation. Next, we applied the sub-pixel event repositioning algorithm of \citet{li04}. Finally, if an astrometric correction was available from the CXC for any observation, we applied it at this point, by modifying the header keywords for the event file, and by correcting the columns for the right ascension and declination in the aspect solution provided with the observation. \begin{figure*}[htp] \centerline{\epsfig{file=survey_view.ps,width=0.9\linewidth}} \caption{ Basic results from the survey. The {\it top panel} contains a composite image of the field, in which the counts have been divided by an exposure map to provide an estimate of the 0.5--8.0 keV flux. The {\it middle panel} contains the exposure map for an energy of 4~keV, in units of the product of the effective area times the exposure time. Some holes are visible where we have excluded regions where bright transients and their associated dust-scattering halos were present in some individual observations, because these degraded the sensitivity. The {\it bottom panel} illustrates the locations of point sources in our sample. The regions with the largest exposure have the greatest concentrations of point sources. } \label{fig:obs} \end{figure*} \begin{figure*}[htp] \centerline{\epsfig{file=frame_rgb.ps,width=0.95\linewidth}} \caption{ Three-color image of the survey area. Red is 1--3~keV, green is 3--5~keV, and blue is 5--8~keV. Each band was adaptively smoothed using the CIAO tool {\tt csmooth}, and then normalized using an exposure map. Some artifacts can be seen at the boundaries of chip edges, particularly near where bright, transient X-ray sources appeared. } \label{fig:rgb} \end{figure*} Before proceeding to explain our algorithms for source detection, we would like to explain some minor weaknesses of our approach. Unfortunately, because the data were searched for sources as they arrived, and because the exposures were highly non-uniform across the field, some parameters of our detection algorithm, particularly the detection thresholds, were not kept consistent. To compensate for this, we did two things. First, we verified the reality of each source as part of our photometric algorithm (\S2.2). This should eliminate most spurious sources on the faint end, in a uniform manner. Second, we determined the completeness limits of our survey using Monte Carlo simulations that mimicked our source detection algorithms (\S2.3). This is the best way to establish what portion of our sample is complete. With the experience we have gained, in principle we could develop a more streamlined and straightforward approach to building the initial catalog. However, it would take several months of computer time to reprocess the data, or a similar amount of time rewriting our software to be more efficient. The improvements in the final catalog would be slight, so we decided not to delay releasing our catalog any further.
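The background-flare screening described above can be summarized by the following minimal sketch, which bins the arrival times into a coarse light curve and iteratively clips bins more than $3\sigma$ above the quiescent mean. The bin size and iteration count are arbitrary choices for illustration; the actual screening was performed on light curves within CIAO:
\begin{verbatim}
import numpy as np

def flare_free_intervals(event_times, binsize=250.0, nsigma=3.0):
    # Bin the arrival times into a coarse light curve.
    t0, t1 = event_times.min(), event_times.max()
    edges = np.arange(t0, t1 + binsize, binsize)
    counts, _ = np.histogram(event_times, bins=edges)
    good = np.ones(len(counts), dtype=bool)
    # Iteratively clip bins that flare above the quiescent mean.
    for _ in range(5):
        mu, sigma = counts[good].mean(), counts[good].std()
        good = counts <= mu + nsigma * sigma
    # Return the start/stop times of the surviving (good) bins.
    return edges[:-1][good], edges[1:][good]
\end{verbatim}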
We are making the data products from the following sections available in FITS format from a web site.\footnote{http://www.srl.caltech.edu/gc\_project/xray.html} The catalog itself will also be available with the electronic version of the paper. \subsection{Source Detection and Initial Localization} Source detection and localization were approached iteratively. We first searched for point sources in each observation separately. The locations of the point sources found in the first stage were used to refine the astrometry. Second, the astrometrically-corrected images were combined to search for fainter point sources. Finally, the source lists from the individual observations were merged with those from the combined images. We searched each observation individually for point sources using the wavelet decomposition algorithm {\tt wavdetect} \citep{free02}. We employed the default ``Mexican Hat'' wavelet, and used a sensitivity threshold of $10^{-7}$. This threshold roughly corresponds to the chance of detecting a spurious source in an area corresponding to the point spread function (PSF), if the local background is spatially uniform. For the earlier data, taken before 2006, we used images at three different resolutions: one at 0\farcs5 resolution covering the inner 1024$\times$1024 pixels, one at 1\arcsec\ resolution covering the inner 2048$\times$2048 pixels, and one at 2\arcsec\ resolution covering the entire field. For later observations, we simplified the process and used only two resolutions: 0\farcs5 covering the inner 2048$\times$2048 pixels, and 2\arcsec\ covering the entire field. Using a test field, we confirmed that there was no difference in the number of sources detected using the two techniques; the only difference is that the technique that used three, smaller images was computationally faster. We used wavelet scales that increased by a factor of $\sqrt{2}$, over the range of 1--4 pixels for the 0\farcs5 image, 1--8 pixels for the 1\arcsec\ image, and 1--16 pixels for the 2\arcsec\ image. For each resolution, we made images in three energy bands: 0.5--8.0~keV to cover the full bandpass, 0.5--2.0~keV to provide sensitivity to foreground sources, and 4--8 keV to provide sensitivity to highly-absorbed sources. For each image, a matching exposure map was generated for photons with an energy of 4~keV, so that the wavelet algorithm could keep track of regions with rapidly-varying exposure, such as bad columns and the edges of the CCDs. The lists derived from each image resolution were combined to form master source lists for each energy band. We found that the most accurate positions came from the sources identified in the images with the finest resolution. Therefore, we discarded sources from the lower-resolution images if their separations from sources identified at high resolution were smaller than the radii of the 90\% contour of the PSF. In this way, we produced three lists for each observation, one for each energy band. Next, we used the point sources detected so far to register the absolute astrometry to the Two Micron All-Sky Survey \citep[2MASS;][]{skr06}. The 2MASS frame is consistent with the International Celestial Reference System to within 15 mas. We compared the positions of 2MASS sources to those of X-ray sources detected in the 0.5--2.0 keV band that were identified by {\tt wavdetect} as having $>$3$\sigma$ significance, and identified matches as those with offsets $<$1\arcsec.
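For concreteness, this matching step can be sketched as a simple nearest-neighbour search with a 1\arcsec\ cut. This is an illustrative re-implementation; the function names and the small-angle approximation are ours, and the actual analysis also applied the significance cuts described above:
\begin{verbatim}
import numpy as np

def match_sources(xray_radec, tmass_radec, max_sep_arcsec=1.0):
    # Nearest-neighbour match in the small-angle approximation;
    # positions are (ra, dec) arrays in degrees, and the cos(dec)
    # factor corrects the right-ascension offsets.
    matches = []
    for i, (ra, dec) in enumerate(xray_radec):
        dra = (tmass_radec[:, 0] - ra) * np.cos(np.radians(dec))
        ddec = tmass_radec[:, 1] - dec
        sep = np.hypot(dra, ddec) * 3600.0       # arcsec
        j = np.argmin(sep)
        if sep[j] < max_sep_arcsec:
            matches.append((i, j, dra[j] * 3600.0, ddec[j] * 3600.0))
    return matches   # per-match offsets feed the frame registration
\end{verbatim}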
The offsets between the 2MASS and {\it Chandra}\ frames were computed using a least-chi-squared algorithm. The X-ray sources that we used for astrometry are flagged in Table~\ref{tab:positions} (see \S\ref{sec:tdesc}). For observations longer than 20 ks, we found between 3 and 36 X-ray sources in the soft band that could be associated unambiguously with stars in the 2MASS catalog, and so we used the average offsets between the X-ray and infrared sources to correct the astrometry of the X-ray observations. We evaluated the accuracy of the registration based on the standard deviation in the mean of the offsets of the individual stars. The registration was accurate to 0\farcs06 for the deepest exposures, and to 0\farcs2 for the shallower ones (1$\sigma$). Unfortunately, for exposures shorter than 20~ks, too few X-ray sources were found with 2MASS counterparts to correct the astrometry to better than the default value, 0\farcs5 (1$\sigma$). Once the deepest observation at each point was registered to the 2MASS frame, the shallower observations were registered using the offsets of X-ray sources detected in the 0.5--8.0 keV band in pairs of observations. Between 2 and 759 X-ray sources matched between the deepest and shallower observations, depending upon the exposure time of the shallower observation. The uncertainty in the astrometry of each observation is listed in the last column of Table~\ref{tab:obs}. The composite image and exposure map for our survey are displayed in Figure~\ref{fig:obs}. Having corrected the astrometry for fields that included deep observations, we then combined subsets of the images in order to perform a deeper search for point sources. Two wavelet algorithms were used, on the series of images listed below. First, the tool {\tt wavdetect} \citep{free02} was used to identify point sources in: \begin{itemize} \item Composite images made from all of the observations of \mbox{Sgr A$^*$}. Twelve 1024$\times$1024 images were made, in four resolutions (0\farcs25, 0\farcs5, 1\arcsec, and 2\arcsec) and using three energy bands for each resolution (0.5--8.0~keV, 0.5--2.0~keV, and 4--8~keV). \item Three sets of composite images made from observations of \mbox{Sgr A$^*$}\ in 2002, 2004, and 2005. These images were designed to be sensitive to faint, variable sources. The same image resolutions and energy bands were used as for the composite image of all of the \mbox{Sgr A$^*$}\ data. \item Composite images made from three pointings that were taken with the same roll angle, because the original 40~ks exposure had to be split up to accommodate scheduling constraints (ObsIDs 7038, 7041, and 7042). Three images were made for each aimpoint, one for each of the 0.5--8.0~keV, 0.5--2.0~keV, and 4--8~keV energy bands. Each image was made at 0\farcs5 resolution, and had 2048$\times$2048 pixels. \end{itemize} The parameters used with {\tt wavdetect} were the same as for the individual observations. Second, we used the tools {\tt wvdecomp} and {\tt findpeak} in the {\it zhtools} package written by A.\ Vikhlinin\footnote{{\tt http://hea-www.harvard.edu/RD/zhtools}} to search for faint sources that fell below the {\tt wavdetect} threshold. We searched on wavelet scales of 1--3 pixels, and required that a candidate source be identified with a minimum signal-to-noise of 4.5, corresponding to 16 spurious sources per 2048 by 2048 pixel image. Five iterations of the search procedure were performed.
The tool {\tt wvdecomp} iteratively cleans the image of point sources identified in previous passes through the data, so it is more efficient at separating close pairs of sources. Moreover, unlike {\tt wavdetect}, {\tt wvdecomp} does not use any information about the shape of the point spread function in searching for sources, so it is better at identifying point sources when observations with very different aimpoints have been combined. Therefore, we used {\tt wvdecomp} on composite images generated from all data covering the positions at which deep observations were obtained (i.e., ObsIDs 3392, 4500, 5892, 7034--7048, and 944). Each image was produced with 2048$\times$2048 pixels at 0\farcs5 resolution for the 0.5--8.0 keV band. We then generated a list containing the unique sources, by merging the lists generated by {\tt wavdetect} from individual observations and from combined images, and from the lists generated by {\tt wvdecomp}. We found that almost all the duplicates could be removed by identifying sources with separations smaller than the prescription for positional uncertainties in \citet{brandt01}: for sources offset from the center of each image by $\theta$$<$5\arcmin, the separation was cut at 0\farcs6, whereas for larger offsets it was cut at $0\farcs6 + (\theta - 5.0)/8.75$ ($\theta$ is in arcmin).\footnote{We use a different prescription in \S2.4 for the uncertainties on the positions in the catalog.} For each image, we gave preference to positions from the full-band sources, then from the soft sources, and finally from the hard sources. Across observations, the priority was given to the deeper observations, and to sources detected with {\tt wavdetect} over those detected with {\tt wvdecomp}. We examined the final list visually by comparing it to images of the survey fields, and we removed several hundred sources that were portions of extended, diffuse features \citep[from][]{m-pwn}, and a couple dozen duplicates that were not identified automatically. Finally, two sources were not picked up by the detection algorithms because they were blended with nearby, brighter sources. We added these to our catalog by hand (CXOUGC J174502.8--282505 and J174617.4--281246). At this stage, we considered our source lists to be provisional, both because the search algorithms used non-uniform parameters, and because the large, spatially-variable background was likely to cause our wavelet algorithms to generate a significant number of spurious sources. In order to confirm their validity, we next computed photometry for each provisional source. \begin{figure} \centerline{\epsfig{file=lumdist.ps,width=0.95\linewidth}} \caption{{\it Top panels:} The distributions in net counts from individual sources. No corrections were applied to account for the exposure across the survey, which varies by a factor of 10. Values for the 0.5--2.0 keV band are plotted on the left, and for the 2--8 keV band on the right. {\it Bottom panels:} The distribution of fluxes (\mbox{photons cm$^{-2}$ s$^{-1}$}) from individual sources. The 0.5--2.0 keV~fluxes were derived by dividing the net count rates by the effective area and exposure in the 0.5--2.0 keV band, whereas the 2--8~keV fluxes were computed by dividing the counts into three energy bands (2.0--3.3~keV; 3.3--4.7~keV; and 4.7--8.0~keV), dividing by the respective effective areas and exposures, and summing the result. There are two peaks in each histogram, because the deeper observations were more sensitive to faint sources.
In all panels, the solid lines are used for detections, and the dashed lines are 90\% upper limits derived when a source was detected in one band, but not the other. } \label{fig:lumdist} \end{figure} \subsection{Photometry} We computed aperture photometry for each source using the \anchor{http://www.astro.psu.edu/xray/docs/TARA/ae\_users\_guide.html}{ACIS Extract} package, versions 3.96, 3.101, and 3.128 \citep{broos02, tow03, get05}, along with some custom code. The algorithm proceeded in several steps. First, for each source and each observation, we obtained a model PSF with the CIAO tool {\tt mkpsf}. For most sources, we used a PSF for a fiducial energy of 4.5 keV. However, if a source was only detected with {\tt wavdetect} in the soft band, we used a PSF for an energy of 1.5 keV. To determine a region from which we would extract source counts, we then constructed a polygon enclosing 90\% of the PSF. If the polygons for two sources overlapped in the observations in which the sources were closest to the aim-point, we generated a smaller polygon. The final extraction regions enclosed between 70\% and 90\% of the PSF. Sources for which the PSF fraction was $<$90\% were considered to be confused. Moreover, because the PSF grows rapidly beyond 7\arcmin\ from the aim-point, we also considered sources to be confused if they were located beyond 7\arcmin\ from the aim-point and their PSFs overlapped. Photometry was not computed for observations in which confused sources fell $>$7\arcmin off-axis. Fortunately, this second type of confused source was always located on-axis in another observation, or else it would not have been identified. Finally, for similar reasons, for sources that lay within 7\arcmin\ of \mbox{Sgr A$^*$}\ we only computed photometry from the observations that had \mbox{Sgr A$^*$}\ as the aim point. Second, we extracted source event lists, source spectra, effective area functions, and response matrices for each source in each observation. The detector responses and effective areas were obtained using the CIAO tools {\tt mkacisrmf} and {\tt mkarf}, respectively. For each source, the spectra from all of the relevant observations were summed. The responses and effective areas were averaged, weighted by the exposures in each observation. Third, we extracted background events from circular regions surrounding each point source in each observation, omitting events that fell within circles that circumscribed $\approx$90\% of the PSFs around any point sources. The background regions were chosen to contain $\approx$100 total counts for the wide survey, and $\approx$1000 total counts for the deeper \mbox{Sgr A$^*$}\ field. Fewer than 1\% of the counts in the background region originate from known point sources. For each source, the background spectra from all of the relevant observations were scaled by the integrals of the exposure maps (in units of cm$^2$ s) over the source and background regions, and then summed to create composite background spectra. Fourth, we eliminated spurious sources. We compared the number of source and background counts to estimate the probability that no source was present, based on Poisson statistics \citep{wei07}. If a source had a $>$10\% chance of being spurious, we eliminated it from our catalog. We eliminated 1962 sources in this way. We also eliminated sources in which the majority of events were cosmic ray afterglows.
Specifically, we removed 46 sources because the events associated with the candidate source fell in a single pixel during 5--10 consecutive frames. Our final catalog contains 9017 X-ray sources, and is listed in Table~\ref{tab:positions}. The majority of sources, 7152, were found with {\tt wavdetect}. Of the sources detected with {\tt wavdetect}, 4823 were detected in the full band, 948 in the soft band, and 1381 in the hard band. Another 1865 sources were only detected with {\tt wvdecomp}. In the \mbox{Sgr A$^*$}\ field alone, we found 3441 sources with {\tt wavdetect}, of which 2715 were detected in the full composite image, 275 in 2002, 48 in 2004, 90 in 2005, and 313 in individual observations. An additional 364 were found in the \mbox{Sgr A$^*$}\ field with {\tt wvdecomp}. Fifth, we compared the source and background spectra using the Kolmogorov-Smirnov (KS) statistic, in order to flag potentially-spurious objects that could be variations in the background. Caution should be used when studying sources that resemble the background. For instance, in the central parsec around \mbox{Sgr A$^*$}, there is an over-abundance of faint ($\la$$10^{-6}$ \mbox{photons cm$^{-2}$ s$^{-1}$}), soft point sources that have spectra consistent with that of the background warm plasma (note that almost all of the excess bright sources that we discuss in \S3.1 do have spectra that are distinct from the background). Therefore, we suspect that most of these are $\sim$0.1~pc scale variations in the density of that plasma. Unfortunately, we cannot be certain. Indeed, the spectrum of the bright X-ray source associated with IRS~13 (CXOUGC~J174539.7--290029) resembles the background according to the KS test. If the diffuse background is merely unresolved point sources \citep{wgl02,rev06,rs07}, then most faint point sources should have spectra that resemble the background. Sixth, we computed the net counts in the 0.5--2.0~keV, 2.0--8.0~keV, 2.0--3.3~keV, 3.3--4.7~keV, and 4.7--8.0~keV bands. We estimated the photon flux from each source by dividing the net counts by the exposure time and the average of the effective area function in each band. Table~\ref{tab:phot} lists the 0.5--2.0 keV and the 2--8 keV flux, the latter of which is the sum of the fluxes in the three sub-bands. Figure~\ref{fig:lumdist} displays histograms of the net counts and fluxes in the 0.5--2.0~keV and 2--8~keV bands. Histograms of upper limits are also plotted, for sources that were detected in one band but not the other. \begin{figure} \centerline{\epsfig{file=hr_dist.ps,width=0.9\linewidth}} \caption{Distribution of measured hardness ratios, $(h-s)/(h+s)$, where $h$ and $s$ are the numbers of counts in the higher and lower energy bands, respectively. The {\it top} displays $HR0$, constructed from counts in the 2.0--3.3~keV and 0.5--2.0~keV bands; the {\it middle} displays $HR1$, using counts in the 3.3--4.7~keV and 2.0--3.3~keV bands; the {\it bottom} displays $HR2$, using counts in the 4.7--8.0~keV and 3.3--4.7~keV bands. Foreground sources are defined as those with $HR0$$<$$-0.175$, and are plotted with the dashed line. Galactic center sources have $HR0$$\ge$$-0.175$, and are plotted with a solid line. Most Galactic center sources do not have measured $HR0$, and their $HR1$ is skewed to higher values by absorption.
\label{fig:hrdist} } \end{figure} Finally, using custom code that was not part of ACIS Extract, we computed 90\% uncertainties on the net counts in each band, through a Bayesian analysis of the Poisson statistics, with the simplifying assumption that the uncertainty on the background is negligible \citep{kbn91}. We used the net counts to compute the hardness ratios $(h-s)/(h+s)$, where $h$ and $s$ are the numbers of counts in the higher and lower energy bands, respectively. The resulting hardness ratios are bounded by $-1$ and $+1$. We defined a soft color using counts in the 2.0--3.3~keV and 0.5--2.0~keV bands ($HR0$), a medium color using counts in the 3.3--4.7~keV and 2.0--3.3~keV bands ($HR1$), and a hard color using counts in the 4.7--8.0~keV and 3.3--4.7~keV bands ($HR2$). We calculated uncertainties on the ratios using the 90\% uncertainties on the net counts and Equation~1.31 in Lyons (1991; page 26). The hardness ratios are listed in Table~\ref{tab:phot}, and histograms showing their distributions are displayed in Figure~\ref{fig:hrdist}. The soft color, $HR0$, was used to distinguish foreground sources from objects that were likely to lie near or beyond the Galactic center. We selected foreground X-ray sources as those with soft colors in the range $-1.0$$\le$$HR0$$<$$-0.175$, which corresponds to absorption columns equivalent to $N_{\rm H}$$\la$$4\times10^{22}$~cm$^{-2}$. Most of these should lie within 4 kpc of Earth \citep[e.g.,][]{mar06}. We selected X-ray sources that were located near or beyond the Galactic center as those that either had soft colors $HR0$$\ge$$-0.175$, or were not detected in either of the 0.5--2.0 and 2.0--3.3~keV bands. This X-ray selection corresponds to absorption columns equivalent to $N_{\rm H}$$\ga$$4\times10^{22}$~cm$^{-2}$. We find 2257 foreground X-ray sources, and 6760 sources near or beyond the Galactic center. Foreground and absorbed sources are plotted separately in Figure~\ref{fig:hrdist}. Most absorbed sources do not have measured soft colors. \subsection{Variability} We searched for variability using the arrival times of the events. We searched for three kinds of variations: long-term variations that occurred between observations, short-term variability within individual observations, and periodic variability within individual observations. \subsubsection{Long-term Variability} We searched for variations that occurred between observations by comparing the event arrival times from all of the observations to a constant flux model using the KS statistic. Any source with a $<$0.1\% chance of being described by a constant flux model was considered to vary on long time scales. There were 856 sources that exhibited long-term variability, 137 of which also exhibited short-term variability. Therefore, about 10\% of sources vary on the day-to-month time scales between observations. We characterized these long-term variations by computing the mean photon flux during each observation of a variable source. Table~\ref{tab:longterm} lists the source name, observations in which the largest and smallest fluxes were observed, the values of the largest and smallest fluxes, and the ratios of those values. Figure~\ref{fig:longterm} compares the amplitude of the variations to the maximum flux. In order to exclude measurements with poor signal-to-noise, the largest flux was defined as the measurement with the largest lower limit, and the smallest flux was defined as the measurement with the smallest upper limit.
In most (740) cases, the smallest flux was consistent with zero, and a lower limit to the flux ratio was provided. In 224 cases, the uncertainties in the largest and smallest fluxes overlapped, and the formal lower limit to the ratio was less than 1. The statistics on faint sources with low-amplitude variability tended to be poor, so Table~\ref{tab:longterm} would be best used to identify highly-variable sources for further study. \begin{figure} \centerline{\epsfig{file=long_term.ps,width=0.95\linewidth}} \caption{ Summary of the properties of long-term variables. We plot the ratio of the maximum to minimum fluxes against the maximum flux. Measurements are represented with diamonds, and lower limits with upward-pointing arrows. The largest-amplitude variations necessarily have the largest peak fluxes, because the minimum fluxes generally represent non-detections, and are therefore equivalent to the sensitivity of our observations. } \label{fig:longterm} \end{figure} \subsubsection{Short-term Variability} We searched for variability within each observation by comparing the light curves to constant count rate models using the KS statistic. If the arrival times of events had a $<$0.1\% chance of being described by a uniform distribution, we considered a source to have short-term variability. We identified 294 sources, or 3\% of our sample, as having clear short-term variations. We roughly characterized the nature of the variability by dividing each time series into intervals that were consistent with having constant count rates, using the Bayesian Blocks algorithm of \citet{sca98}. In brief, the algorithm compared the probability that an interval could be described by two different count rates to the null hypothesis that the photons arrived with a single rate. If the ratio of the two probabilities exceeded a user-specified prior odds ratio, then the interval was divided at the time that produced two intervals with the largest calculated likelihood. This process was iterated until no sub-intervals were divided any further. We chose to apply the algorithm in order to describe each variable light curve with the fewest intervals with distinct rates (blocks). We applied three progressively looser odds ratios, successively demanding that the probability for the two-rate model exceed that of the null hypothesis by factors of 1000, 100, and 10 if the larger odds ratio failed to identify a change point. In this way, large flares were described with a few ``blocks'' using a large odds ratio, whereas small-amplitude variations were still characterized using a smaller odds ratio. This approach was deemed necessary in part because the Bayesian interpretation of the odds ratios does not have a good frequentist analog that could be compared to the probabilities returned by the KS test, and in part because the KS test and the Bayesian Blocks tests are most sensitive to slightly different forms of variability. Ultimately, only 60\% of the variable sources identified with the KS test were characterized with more than one block in the Bayesian Blocks algorithm. Despite the mismatch between the two tests, the characteristics of the variable sources identified by both the KS and Bayesian Blocks tests are illustrative.
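To illustrate the core of this change-point procedure, the following sketch recursively splits a list of arrival times wherever a two-rate Poisson model improves on a single-rate model by a specified log-odds threshold. It uses maximum-likelihood rates as a simple stand-in for the Bayesian marginal likelihoods of \citet{sca98}, so it illustrates the logic rather than reproducing our exact implementation:
\begin{verbatim}
import numpy as np

def block_loglike(n, t):
    # Maximum unbinned Poisson log-likelihood of n events in duration t.
    return n * np.log(n / t) - n if n > 0 else 0.0

def split_blocks(times, t0, t1, log_odds=np.log(1000.0)):
    # Recursively split [t0, t1) wherever a two-rate model beats a
    # one-rate model by the given log odds.
    times = np.sort(times[(times >= t0) & (times < t1)])
    n = len(times)
    best_gain, best_t = 0.0, None
    for tc in times[1:]:                 # candidate change points
        n1 = int(np.sum(times < tc))
        gain = (block_loglike(n1, tc - t0)
                + block_loglike(n - n1, t1 - tc)
                - block_loglike(n, t1 - t0))
        if gain > best_gain:
            best_gain, best_t = gain, tc
    if best_t is None or best_gain < log_odds:
        return [(t0, t1, n / (t1 - t0))]   # one constant-rate block
    return (split_blocks(times, t0, best_t, log_odds)
            + split_blocks(times, best_t, t1, log_odds))
\end{verbatim}
Applied with successively smaller thresholds, this reproduces the progressive loosening of the odds ratio described above.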
In Table~\ref{tab:shortterm}, we list some properties of the variable sources: their names, the ObsIDs in which variability occurred, the odds ratio at which the Bayesian blocks algorithm identified a source as variable, the number of blocks used to describe the events, the durations of the brightest portions of the light curves, the minimum and maximum fluxes, and the ratios of the maximum to minimum fluxes. Figure~\ref{fig:shortterm} compares the duration and amplitude of the variability. For 40\% of the variable sources, the minimum flux was consistent with zero, so the ratio represents a lower limit to the variability amplitude. We find that all variations had time scales of $>$10 minutes. The amplitudes ranged from barely-detectable 30\% variations in the flux (CXOUGC J174534.8--290851), to one flare in which the flux increased by a factor of 250 (CXOUGC J174700.7--283205). Foreground sources are over-represented among variable sources --- they compose only 25\% of our entire catalog, but 50\% of the short-term variables --- which is consistent with the expectation that they are nearby K and M dwarf flare stars \citep[e.g.,][]{lay05}. \begin{figure} \centerline{\epsfig{file=short_term.ps,width=0.95\linewidth}} \caption{ Summary of the properties of short-term variables, using parameters returned from the Bayesian Blocks algorithm. We plot the ratio of the maximum to minimum fluxes against the duration of the peak-flux interval in an observation. Measurements are represented with diamonds, and lower limits with upward-pointing arrows. Low-amplitude variations are not represented among the short-duration events, because poor counting statistics prevents us from identifying them. } \label{fig:shortterm} \end{figure} \subsubsection{Periodic Variability} We searched for periodic variability in the brightest sources by adjusting the arrival times of their photons to the Solar System barycenter and computing Fourier periodograms using the Rayleigh statistic \citep{buc83}. The individual X-ray events were recorded with a time resolution of 3.2~s, so the Nyquist frequency was $\approx$0.15~Hz, which represents the limit above which our sensitivity could not be well-characterized. However, we computed the periodogram using a maximum frequency of $\approx$0.2 Hz, to take advantage of the limited sensitivity to higher frequency signals, and to ensure that any observed signal was not an alias. We considered sources that, in individual observations, produced a large enough number of counts ($N_{\gamma}$) that a fully-modulated signal could be detected with 99\% confidence. The power $P_{\rm meas}$ required to ensure that a source had a chance $<$$1-C$ of being produced by white noise can be computed if one knows the number of trials in a search ($N_{\rm trial}$), and is given by inverting $1 - C \approx N_{\rm trial} e^{-P_{\rm meas}}$. Here, $P_{\rm meas}$ is normalized to have a mean value of 1, and the approximation is valid for $P_{\rm meas}$$\gg$1 \citep{rem02}. A count threshold can be determined by noting that, if background photons are negligible, the fractional root-mean-squared amplitude of a sinusoidal signal ($A$) is given by $A \approx (2P_{\rm meas}/N_\gamma)^{1/2}$. A fully-modulated signal has $A$=0.71. After iterating to determine the number of trials corresponding to each count limit, we found that a source with $N_\gamma$=86 could be identified with $C$=0.99 if it produced a fully-modulated signal.
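The Rayleigh periodogram itself is straightforward to sketch. In the snippet below, the power is normalized to a unit mean under white noise, and the helper converts a confidence level and trial count into the detection threshold by inverting the relation above; the frequency grid shown is an arbitrary example rather than our actual search grid:
\begin{verbatim}
import numpy as np

def rayleigh_power(t, freqs):
    # Rayleigh power at each trial frequency, normalized so that
    # pure white noise gives a mean power of 1.
    phases = 2.0 * np.pi * np.outer(freqs, t)
    n = len(t)
    return (np.cos(phases).sum(axis=1) ** 2 +
            np.sin(phases).sum(axis=1) ** 2) / n

def detection_threshold(conf, n_trial):
    # Invert 1 - C = N_trial * exp(-P) for the required power.
    return np.log(n_trial / (1.0 - conf))

# detection_threshold(0.99, 2e7) -> 21.4, as quoted below.
# freqs = np.linspace(1e-4, 0.2, 100000)   # example grid up to 0.2 Hz
\end{verbatim}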
In total, we searched for pulsations in 717 event lists from 256 different sources, which required $2\times10^7$ trials. A single signal that had $C$$>$0.99 given this number of trials must have had $P_{\rm meas}$$>$21.4. However, multiple observations were searched for many sources, so we recorded signals with lower powers and checked whether they also appeared at the same frequency in other observations. We identified two sources with periodic variability at $>$99\% confidence, CXOUGC J174532.7--290550 and CXOUGC J174543.4--285841. We had previously identified both of these by combining 500~ks of exposure over the course of two weeks \citep{m-per}. The other sources in \citet{m-per} were too faint for their periodic variability to be identified in individual observations. We also identified a third source as a good candidate for having periodic variability, CXOUGC J174622.7--285218. Signals from this source were identified with periods of $\approx$1745~s in observation 4500 with $P_{\rm meas}$=10.9 from 763 photons, and in observation 7048 with $P_{\rm meas}$=13.4 from 310 photons (Figure~\ref{fig:fft}). The joint probability that these signals were produced at the same frequency by noise \citep{rem02}, given $N_{\rm trial}$=$2\times10^7$, was only 1.4\%. Periodic signals were not detected from this source in observation 945 because the source fell on a chip edge, nor in observations 2273 and 2276 because their exposures were too short. \begin{figure} \centerline{\epsfig{file=chandra_fft.ps,width=0.95\linewidth}} \caption{ Fourier periodograms for the two observations in which a 1745~s signal was detected from CXOUGC J174622.7--285218. The signal has a joint probability of 1.4\% of resulting from white noise, given $N_{\rm trial}$=$2\times10^7$. The downward-pointing arrows show the fundamental and first two harmonics of the periods with which the satellite pointing was dithered in pitch and yaw. } \label{fig:fft} \end{figure} We refined our initial estimates of the period for CXOUGC J174622.7--285218 for each observation by computing pulse profiles from non-overlapping $10^4$ s intervals, and modeling the differences between the assumed and measured phases using a first-order polynomial. The reference epochs of the pulse maxima for the two observations were 53165.3781(6) and 54145.1644(7) (MJD, Barycentric Dynamical Time). The best-fit periods were 1745$\pm$3 s and 1734$\pm$16 s for observations 4500 and 7048, respectively. The pulse profiles for each observation are displayed in Figure~\ref{fig:prof}. The fractional root-mean-squared amplitudes of the pulsations were 21\% and 32\%, respectively. Given the long period for this source, it is most likely a magnetically-accreting white dwarf \citep{m-per}. \begin{figure} \centerline{\epsfig{file=profiles.ps,width=0.95\linewidth}} \caption{ Pulse profiles for the two observations in which the 1745~s signal was detected from CXOUGC J174622.7--285218. Two identical cycles are displayed in each panel. The profiles are consistent with sinusoids, within their uncertainties. } \label{fig:prof} \end{figure} \begin{figure*}[htp] \centerline{\epsfig{file=sensitivity.ps,width=0.95\linewidth}} \caption{ Map of the limiting flux for our survey.
Sources brighter than the limiting flux at each point have a $>$90\% chance of being detected.} \label{fig:sens} \end{figure*} \begin{figure} \centerline{\epsfig{file=area_curve.ps,width=0.95\linewidth}} \caption{ The area over which we were sensitive to sources of given fluxes ({\it bottom axis}) and luminosities ({\it top axis}, assuming $D$=8~kpc, a $\Gamma$=0.5 power-law spectrum, and $N_{\rm H}$=$6\times10^{22}$~cm$^{-2}$), with 50\% and 90\% confidence. } \label{fig:area} \end{figure} \subsection{Sensitivity} We calculated the sensitivity of our observations using synthetic-star tests, following the basic methods described in \citet{bau04} and Muno \mbox{et al.}\ (2006a; see Wang 2004 for another approach). We generated maps of our sensitivity both for each of the stacked observations (i.e., centered on ObsIDs 3392, 4500, 5892, 7034--7048, and 944), and for a fiducial field with an exposure time of 12 ks for those regions only covered by the shallow exposures of \citet{wgl02}. In brief, for each pointing, we generated a background map by (1) removing events from within a circle circumscribing $\approx$90\% of the energy of the PSF around each detected source, and then (2) filling the ``holes'' in the image with numbers of counts drawn from Poisson distributions with means equal to those of surrounding annuli. We then simulated 100 star fields per pointing. We placed $\approx$5000 point sources at random positions in each background image, with fluxes distributed as $N(>S) \propto S^{-\alpha}$ with a slope $\alpha = 1.5$, and minimum fluxes that would produce 3 counts in a 100~ks exposure. We converted these fluxes to expected values for the numbers of counts using an exposure map. The exposure map was normalized to produce the mean flux-to-counts conversion for X-ray sources located at or beyond the Galactic center \citep[$HR0 \ge -0.175$;][]{m-wide}.\footnote{We note that in \citet{m-wide}, we calculated flux limits from a mono-energetic exposure map that over-estimated the effective area by 50\%, which caused us to report limiting fluxes that were erroneously low.} Then, to account for the Eddington bias, we drew observed numbers of counts from Poisson distributions with mean values equal to the expected counts. Next, we obtained model images of the PSF from the routine {\tt mkpsf}, averaged them when appropriate, and used the composite PSF as the probability distribution to simulate the 2-dimensional image of the counts. These were added to the synthetic exposure. Finally, we searched the synthetic image for point sources using {\tt wavdetect} for the 12 ks exposures, and {\tt wvdecomp} for the stacked observations. By comparing the input and output lists, we estimated the minimum flux at which a source would be detected in 50\% and 90\% of trials over a grid of points covering our survey. We interpolated between these points to make a map of the sensitivity for each image. None of our observations are formally confusion-limited at our completeness limits (Hogg 2001; see also Muno \mbox{et al.}\ 2003). If the diffuse X-ray background consists of unresolved stellar sources, then confusion caused by undetected sources is accounted for naturally by our background maps. In order to produce a global sensitivity map, we combined the sensitivity maps from the above simulations by recording the best sensitivity at each point in the image. The map of 90\% confidence limits is displayed in Figure~\ref{fig:sens}.
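The flux sampling and Eddington-bias steps of these simulations can be sketched as follows. The function names are ours, the numerical values shown are arbitrary illustrations, and the exposure-map value (effective area times exposure time, in cm$^2$ s) would be read from the maps described above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=42)

def draw_fluxes(n_src, s_min, alpha=1.5):
    # Inverse-transform sampling from N(>S) = (S / s_min)**(-alpha),
    # i.e. a power-law number-flux distribution with slope alpha.
    return s_min * rng.uniform(size=n_src) ** (-1.0 / alpha)

def observed_counts(fluxes, expmap_value):
    # Expected counts = flux x (effective area x exposure time);
    # the Poisson draw reproduces the Eddington bias.
    return rng.poisson(fluxes * expmap_value)

# e.g. ~5000 synthetic sources per field, with s_min chosen so that
# the faintest source yields 3 counts in a 100 ks exposure (an
# effective area of 250 cm^2 is assumed purely for illustration).
area_time = 250.0 * 1.0e5                 # cm^2 s
fluxes = draw_fluxes(5000, s_min=3.0 / area_time)
counts = observed_counts(fluxes, area_time)
\end{verbatim}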
The effective area of the survey as a function of limiting photon flux and luminosity is displayed in Figure~\ref{fig:area}. We are sensitive to $\approx$$4\times10^{32}$~\mbox{erg s$^{-1}$}\ (0.5--8.0 keV, assuming $D$=8~kpc) at 90\% confidence over one square degree, and to $\approx$$1\times10^{32}$~\mbox{erg s$^{-1}$}\ over 0.1 square degrees. This is a factor of $\approx$2 improvement over \citet{m-wide}. However, we still find that the majority of X-ray sources are detected at fluxes below our completeness limits. Of 6760 sources that are likely to lie at or beyond the Galactic center ($HR0 \ge -0.175$), only 15\% are brighter than the 90\% completeness limit at the point at which they were detected, and only 40\% are brighter than the 50\% completeness limit. This is caused by two effects. First, 20\% of the sources are detected only in the hard band, whereas our completeness limits are for the full band. Second, the number-flux distribution is steep (\S3.3), such that many of the faint sources are only detected because of positive Poisson fluctuations in their count rates. These maps are used in selecting complete samples of sources for measuring the spatial (\S3.2) and flux (\S3.3) distributions. Sources below our completeness limits are still securely detected, although other sources with similar intrinsic fluxes have been missed. \subsection{Refined Source Positions} Experience with matching X-ray and optical sources as part of the {\it Chandra}\ Deep Fields and Orion Ultradeep projects suggests that the positions of the X-ray sources can be refined with respect to those provided by the wavelet algorithms \citep{alex03,get05}. Therefore, we used the implementations of their techniques in ACIS Extract to refine the positions of our sources. For each source, we made a composite image by combining the event lists from each relevant observation, and then made a matching composite PSF image that we weighted by the values of the exposure maps at the source positions. From this image, we computed two additional estimates for the source position: the mean positions of the events within each source region, and a centroid determined by cross-correlating the PSF and source images. Following \citet{get05}, if a source lay within 5\arcmin\ of the aim point, we used the mean position of the events within the source extraction region. If the source lay beyond 5\arcmin, we used the position determined by cross-correlating the source image and PSF image. However, if the offset of the refined position from the wavelet position was larger than the smallest source extraction radius that we used, we assumed that a nearby source had caused confusion, and retained the wavelet position. Unfortunately, we could not empirically calibrate the uncertainties on our source positions, because even the foreground infrared sources had such a high density that $\approx$50\% of those that fell within 3\arcsec\ of an X-ray source were chance alignments. Therefore, we computed 95\% positional uncertainties using Equation~5 in \citet{hon05},\footnote{This differs from the equation we used for eliminating duplicates, because that step was implemented much earlier in the process of producing the catalog, before we had settled on a final uncertainty estimate.
The difference has no practical impact on the catalog.} which is based on the positions of sources reported by {\tt wavdetect} in simulated observations: \begin{eqnarray} r_{\rm err} & = & 0\farcs25 + \frac{0\farcs1}{\log_{10}(c + 1)} \left[1 + \frac{1}{\log_{10}(c + 1)}\right] \nonumber \\ & + & 0\farcs03 \left[ \frac{\theta}{\log_{10}(c + 2)}\right]^2 + 0\farcs0006 \left[\frac{\theta}{\log_{10}(c + 3)}\right]^4 \label{eq:posunc}. \end{eqnarray} Here, $\theta$ is the offset in arcminutes of the source from the nominal aim point, and $c$ is the net number of counts. For sources detected in composite images, we defined $c$ to be the net counts summed over all observations, and $\theta$ to be the exposure-weighted averages of the sources' offsets from the aim points of their respective observations. For sources detected in individual observations, we defined $c$ and $\theta$ to be the values for the observations in which the sources were identified. If $r_{\rm err}$ is larger than the smallest radius of the region used to extract photometry for the source (the ``source radius''), the uncertainty was set equal to the radius of the extraction region. These sources are marginally detected, and the high background in the Galactic center produces a large tail in the distribution of possible positions. We retained them because they passed all of our other selection criteria. \citet{hon05} established equation~\ref{eq:posunc} by running {\tt wavdetect} on simulated single observations that were generated using a ray-tracing code. Unfortunately, our observations are more complicated. On the one hand, most of the positions are determined from composite images generated from observations with very different aim points. The inclusion of data with large $\theta$ could add uncertainty to our measurements. On the other hand, our positions have been refined compared to the {\tt wavdetect} values, so the uncertainty on some sources could be smaller. Therefore, we view Equation~\ref{eq:posunc} as a compromise. Nonetheless, a comparison of the offsets between 500 foreground X-ray sources and the blue 2MASS sources that are their counterparts (as described in detail in J. Mauerhan \mbox{et al.}, in prep) reveals that the positions in the new catalog are $\approx$60\% better than in \citet{m-cat} and \citet{m-wide}. We also note that because of the way we averaged the PSF, the positions and uncertainties for sources that vary in flux between observations (10\% of our sample) could be mis-estimated. For example, a variable source that was only bright in an off-axis observation would have a larger uncertainty than might be expected if it were also bright during an on-axis observation. We have not evaluated whether systematic offsets in the positions are expected. \subsection{Details of the Tables\label{sec:tdesc}} Table~\ref{tab:positions} contains the locations of the point sources, parameters related to the observations of each source, and information on the data quality. Its columns are as follows:
\noindent (1) Record locators that can be used to cross-correlate with other tables.
\noindent (2) The source names, which are derived from the coordinates of the source based on the IAU format, in which least-significant figures are {\it truncated} (as opposed to rounded). The names should not be used as the locations of the sources.
\noindent (3--4) The right ascensions and declinations of the sources, in degrees (J2000).
\noindent (5) The 95\% uncertainties in the positions (the error circles).
\subsection{Details of the Tables\label{sec:tdesc}} Table~\ref{tab:positions} contains the locations of the point sources, parameters related to the observations of each source, and information on the data quality. Its columns are as follows: \noindent (1) Record locators that can be used to cross-correlate with other tables. \noindent (2) The source names, which are derived from the coordinates of the source based on the IAU format, in which least-significant figures are {\it truncated} (as opposed to rounded). The names should not be used as the locations of the sources. \noindent (3--4) The right ascensions and declinations of the sources, in degrees (J2000). \noindent (5) The 95\% uncertainties in the positions (the error circles). There are 5810 sources with uncertainties $\le$1\arcsec\ (half of which are within 7\arcmin\ of \mbox{Sgr A$^*$}), and 1950 with uncertainties $\le$0\farcs5 (85\% of which are within 7\arcmin\ of \mbox{Sgr A$^*$}). \noindent (6) Flags indicating how the positions were derived. A ``d'' indicates the position is the mean position of the events, a ``c'' indicates it was derived by cross-correlating the image and the PSF, and a ``w'' indicates it was derived from a wavelet algorithm. Sources marked with a ``w'' are likely to be confused with a nearby source, or to lie in a region of high background. \noindent (7) The images in which the sources were identified. The tags ``full'', ``2002'', ``2004'', and ``2005'' indicate a source was found in composite images of the \mbox{Sgr A$^*$}\ field. All other values are the observations in which a source was detected. Two sources added manually are tagged with ``hand''. \noindent (8) Additional information about how the sources were detected. The tag ``full'' refers to any source detected with {\tt wavdetect} in the 0.5--8.0~keV band; ``soft'' sources were detected in the 0.5--2.0~keV band but not the full band; ``hard'' sources were detected in the 4--8~keV band but neither of the other two bands. The tag ``tile'' indicates that the source was detected in a composite 0.5--8.0~keV image with {\tt wvdecomp}. \noindent (9) The offsets ($\theta$) from the aim point, in arcmin. If a source position was estimated from a composite image, $\theta$ is the mean offset weighted by the exposure. If a position was taken from a single observation, $\theta$ is the offset for that observation. \noindent (10) The number of observations used to compute the photometry for each source. \noindent (11) The exposure times in seconds. \noindent (12) The fractions of the PSF enclosed by the source extraction regions. \noindent (13) The fiducial energies of the PSFs used to construct the source extraction regions. \noindent (14) The smallest radius of the extraction region that was used for a source, in arcseconds. This is determined from the observation in which the source was closest to the aim point. It is also an absolute upper bound on the positional uncertainty for a source. \noindent (15) The 50\% completeness limit at the position of the source. Sources brighter than these completeness limits can be used to compute spatial and flux distributions, although the sensitivity map (Fig.~\ref{fig:sens}) is needed to compute the corresponding survey area.
\noindent (16) Flags denoting quality, and other information: ``a'' for sources used to register the astrometry of fields; ``s'' for sources variable on short time scales, as indicated by probabilities of $<$0.1\% that the event arrival times for at least one observation were consistent with a uniform distribution according to the KS test; ``l'' for sources that were variable on long time scales, as indicated by a probability of $<$0.1\% that the fluxes for all observations were consistent with a uniform distribution according to the KS test; ``e'' for sources that may be part of an extended, diffuse feature \citep{m-diff}; ``c'' for sources confused with another nearby source; ``g'' for sources that fell near the edge of a detector in one or more observations; ``b'' for sources for which the source and background spectra have a $>$10\% chance of being drawn from the same distribution according to a KS test; ``x'' for sources for which the 0.5--2.0~keV band photometry is inaccurate because the satellite was programmed to omit photons below 1~keV from the telemetry; and ``p'' for sources that suffered from photon pile-up. Table~\ref{tab:phot} contains the X-ray photometry for each source. It contains the following columns: \noindent (1) The record locators. \noindent (2) The source names. \noindent (3) The log of the probabilities that the source and background spectra are derived from the same distribution, according to a KS test. Large negative values indicate that the source and background spectra are distinct, and therefore that the source is most likely real. \noindent (4) The total numbers of counts in the 0.5--2.0 keV band. \noindent (5) The estimated numbers of background counts in the 0.5--2.0 keV band. \noindent (6) The net numbers of counts in the 0.5--2.0 keV band, and the 90\% lower and upper uncertainties. In the case of non-detections, an upper limit is provided. \noindent (7) The total numbers of counts in the 2--8 keV band. \noindent (8) The estimated numbers of background counts in the 2--8 keV band. \noindent (9) The net numbers of counts in the 2--8 keV band, and the 90\% lower and upper uncertainties. In the case of non-detections, an upper limit is provided. \noindent (10) The fluxes in the 0.5--2.0 keV band, in units of \mbox{photons cm$^{-2}$ s$^{-1}$}. \noindent (11) The fluxes in the 2--8 keV band, in units of \mbox{photons cm$^{-2}$ s$^{-1}$}. \noindent (12) The mean energy of photons in the source region, statistically corrected for the background. \noindent (13) The soft colors and 90\% upper and lower uncertainties. \noindent (14) The medium colors and 90\% upper and lower uncertainties. \noindent (15) The hard colors and 90\% upper and lower uncertainties. These tables were designed to be inclusive, so sources of questionable quality are included. For instance, 134 sources have net numbers of counts in the 0.5--8.0 keV band that are consistent with 0 at the 90\% confidence level. These sources are only detected in a single band and are presumably either very hard or very soft, detected in single observations because they were transients, or detected in stacked observations with {\tt wvdecomp} at marginal significance. We have chosen to include them because they passed the test based on Poisson statistics from \citet{wei07}.
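For concreteness, a minimal Python sketch of the hard color is given below. The normalized-difference form is an assumption on our part (it is the definition commonly used in catalogs of this kind), while the band boundaries and the special values for single-band detections are taken from the caption of Figure~\ref{fig:hit}:
\begin{verbatim}
def hard_color(n_soft, n_hard):
    # Hard color from net counts in the 3.3-4.7 keV (n_soft)
    # and 4.7-8.0 keV (n_hard) bands. Sources detected in only
    # one band are pegged to -1 or +1; sources detected in
    # neither band are assigned -1.1 (plotting convention).
    if n_soft > 0 and n_hard > 0:
        return (n_hard - n_soft) / (n_hard + n_soft)
    if n_soft > 0:
        return -1.0
    if n_hard > 0:
        return +1.0
    return -1.1
\end{verbatim}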
\begin{figure*}[tbhp] \centerline{\epsfig{file=hard_int.ps,width=0.95\linewidth}} \caption{The hard color plotted against the photon flux from each source. Foreground sources are plotted as open red circles, and Galactic center sources as filled blue circles. Sources detected only in the 3.3--4.7 keV band are assigned $HR2$=$-$1; those only detected in the 4.7--8.0 keV band are assigned $HR2$=$+$1, and those detected in neither band are assigned $HR2$=$-$1.1. We also have plotted the colors expected for sources of varying luminosities at a distance of 8 kpc, and absorbed by $6\times10^{22}$ cm$^{-2}$ of interstellar gas and dust. The dotted lines are for power-law spectra, and the solid lines for thermal plasma spectra. } \label{fig:hit} \end{figure*} \section{Results} With a catalog of X-ray sources and associated maps of our sensitivity, it is straightforward to examine the flux and spatial distributions of our sources. We have previously reported these quantities based on the catalogs produced for the central 20 pc around \mbox{Sgr A$^*$}\ \citep{m-cat} and on the wide, shallow survey data that was in the archive as of 2005 June \citep{m-wide}. Here, we derive these quantities for the new catalog, and briefly compare the distributions to recent results from \citet{koy07} on the distribution of diffuse iron emission. \subsection{X-ray Colors and Intensity} In Figure~\ref{fig:hit}, we plot the hard color versus the flux from each source. Foreground sources are indicated with open red circles, and sources at or beyond the Galactic center with filled blue circles. There are 6381 Galactic center sources and 1091 foreground sources with measured hard colors. We have calculated the hardness ratios and photon fluxes that we would expect to get from these energy bands for a variety of spectra and 0.5--8.0 keV luminosities using \program{PIMMS} and \program{XSPEC}. In Figure~\ref{fig:hit}, we plot the colors and fluxes expected for power-law spectra with the dotted lines, and for an optically-thin thermal plasma with the solid lines. We have assumed a distance of 8 kpc and $6\times10^{22}$ cm$^{-2}$ of absorption from interstellar gas and dust. The median hard color for the Galactic center sources is 0.17. For purely interstellar absorption, this corresponds to a $\Gamma$$\approx$0.5 power law. Using a simulated spectrum, we have determined that the photon fluxes can be converted to energy fluxes according to 1~\mbox{photons cm$^{-2}$ s$^{-1}$}~$= 8.7\times10^{-9}$~\mbox{erg cm$^{-2}$ s$^{-1}$}\ (0.5--8.0~keV). The de-absorbed 0.5--8.0 keV flux is approximately 1.7 times larger, so that for a distance $D$=8 kpc, $10^{34}$ \mbox{erg s$^{-1}$}\ equals $9\times10^{-5}$ \mbox{photons cm$^{-2}$ s$^{-1}$}. The large median value of the hard color is inconsistent with that expected from a thermal plasma (of any temperature) attenuated by interstellar gas and dust. However, our earlier study of the spectra of brighter sources suggests that intrinsic absorption is present, and that the underlying spectrum is consistent with a $kT$=7--9~keV thermal plasma \citep{m-ps}. For sources that are intrinsically absorbed, the luminosities will be significantly higher than implied by Figure~\ref{fig:hit}.
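These conversions chain together simply; the minimal sketch below (ours, for illustration) confirms the quoted equivalence between $9\times10^{-5}$~\mbox{photons cm$^{-2}$ s$^{-1}$}\ and $10^{34}$~\mbox{erg s$^{-1}$}:
\begin{verbatim}
import math

KPC_CM = 3.086e21        # cm per kpc
D = 8.0 * KPC_CM         # assumed distance to the Galactic center
E_MEAN = 8.7e-9          # erg per photon (0.5-8.0 keV, Gamma ~ 0.5)
DEABS = 1.7              # de-absorption correction quoted above

photon_flux = 9e-5       # photons cm^-2 s^-1
L = photon_flux * E_MEAN * DEABS * 4.0 * math.pi * D**2
print("%.1e erg/s" % L)  # -> 1.0e+34 erg/s
\end{verbatim}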
\begin{figure*}[htp] \centerline{\epsfig{file=spatial_distribution.ps,width=0.85\linewidth}} \caption{ The spatial distribution of point sources that are securely detected, with fluxes $>$$2\times10^{-6}$ \mbox{photons cm$^{-2}$ s$^{-1}$}\ (0.5--8.0 keV), and that lie near or beyond the Galactic center ($HR0$$>$$-0.175$). In the {\it top panel}, we show the two-dimensional distribution. Over much of the region, we are less sensitive than our nominal limit, so we have indicated these regions with grey. Regions in which we were more sensitive are in white, and the detected sources are indicated with filled black circles. {\it Middle panel:} Histogram of the number of sources per square arcminute, computed as a function of Galactic longitude. The area used to normalize the histogram is derived from the white area in the panel above. The solid line illustrates the model stellar distribution from \citet{lzm02} and \citet{kdf91}, which originally was derived from infrared observations. The model distribution also was computed for the white area in the top panel. {\it Bottom panel:} Same as for the middle panel, except that the source distribution is plotted as a function of Galactic latitude. } \label{fig:londist} \end{figure*} \subsection{Spatial Distribution} We present the spatial distribution of X-ray sources located near or beyond the Galactic center ($HR0$$>$$-0.175$) in Figure~\ref{fig:londist}. We examined only sources brighter than $2\times10^{-6}$ \mbox{photons cm$^{-2}$ s$^{-1}$}, and only included a source if the 50\%-confidence flux limit at its position was less than or equal to $2\times10^{-6}$ \mbox{photons cm$^{-2}$ s$^{-1}$}. This flux limit was chosen as a compromise between the area over which the distribution is derived, which decreases for lower flux limits (Fig.~\ref{fig:sens}), and the number of sources used in the distribution, which tends to increase for lower flux limits (Fig.~\ref{fig:lumdist}). In the top panel of Figure~\ref{fig:londist}, we display the locations of each of the 479 sources that met the flux criteria. The area over which the flux limit is $<$$2\times10^{-6}$ \mbox{photons cm$^{-2}$ s$^{-1}$}\ is displayed in white, and the greyed areas indicate regions of poorer sensitivity. A concentration of X-ray sources is evident near the position of \mbox{Sgr A$^*$}. In the bottom panels of Figure~\ref{fig:londist}, we display histograms of the numbers of sources per unit area, as functions of Galactic longitude and latitude. Only regions of good sensitivity are used. We then compared the spatial distributions to the distribution of stellar mass that has been inferred from infrared observations. Our mass model consists of the young nuclear bulge and cusp and the old Galactic bulge from \citet{lzm02}, and the model for the Galactic disk from \citet[][see Muno \mbox{et al.}\ 2006a for further details]{kdf91}. To make a direct comparison with our unevenly-sampled spatial distributions, we integrated the model for the stellar mass from 6 to 14 kpc along the line of sight at points on a 1\arcmin\ grid covering our survey region, and interpolated the resulting values onto the image. We then summed the values of the integrated mass over areas of good sensitivity, to match the longitude and latitude bins of the observed histogram. Finally, we minimized $\chi^2$ over one parameter to scale the binned mass model to the observed distributions of X-ray sources.
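For a single multiplicative scale factor with Gaussian uncertainties, this minimization is analytic; the sketch below (ours, with hypothetical array names, since the binned histograms are not reproduced here) shows the form of the fit:
\begin{verbatim}
import numpy as np

def best_scale(data, model, sigma):
    # Analytic chi-squared minimum for data ~ s * model,
    # with per-bin uncertainties sigma.
    w = 1.0 / np.asarray(sigma)**2
    s = np.sum(w * data * model) / np.sum(w * model**2)
    chi2 = np.sum(w * (data - s * model)**2)
    return s, chi2

# e.g., s, chi2 = best_scale(src_per_bin, mass_per_bin, err_per_bin)
\end{verbatim}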
We find a best-fit scaling factor of $5\times10^{-7}$ X-ray sources per solar mass for sources brighter than $2\times10^{-6}$ \mbox{photons cm$^{-2}$ s$^{-1}$}\ for both the latitude and longitude distributions. For the longitude distribution, $\chi^2/\nu$=22.4/19, and for the latitude distribution $\chi^2/\nu$=20.6/17. The best-fit models are displayed with solid lines in the bottom panels of Figure~\ref{fig:londist}. The models are acceptable descriptions of the data. However, in the plot as a function of longitude, at the inner few arcminutes around \mbox{Sgr A$^*$}, and just to the east toward the Arches and Quintuplet regions, there is an $\approx$2.8$\sigma$ excess in the number of observed X-ray sources. We find that this excess is also present with similar significance if we choose tighter or looser flux limits between 1 and $5\times10^{-6}$~\mbox{photons cm$^{-2}$ s$^{-1}$}. The model also predicts more sources than observed at $l$=0.5--0.6$^\circ$\ at the 1$\sigma$ level, but this is probably because the Sgr B molecular complex attenuates X-rays from sources behind it. \subsection{Number-Flux Distribution} We computed the number-flux distribution based on the maximum-likelihood algorithm described in \citet{mcj73}, which we modified to use Poisson statistics in the manner described in Appendix B of \citet{m-wide}. We examined three regions that had well-defined flux limits and effective exposure times: the inner 8\arcmin\ around \mbox{Sgr A$^*$}, the 8\arcmin\ around the Arches cluster (excluding the overlap with the \mbox{Sgr A$^*$}\ field), and the portions of the survey covered by the 40 ks pointings taken between 2006 and 2007. We assumed that the number-flux distribution was a single power law over the ranges of fluxes that we measured, $N(>S) = N_0 (S/S_0)^{-\alpha}$. We display the resulting cumulative number-flux distributions in Figure~\ref{fig:lognlogs}, and list the best-fit parameters in Table~\ref{tab:lognlogs}. Our distributions extend a factor of $\approx$2 deeper than in \citet{m-wide}. The fit to the distribution from the \mbox{Sgr A$^*$}\ region is formally poor, because the distribution steepens at low fluxes \citep{m-cat}. However, we do find that the Arches region has a flatter flux distribution ($\alpha$=1.0$\pm$0.3) than either the inner 8\arcmin\ around \mbox{Sgr A$^*$}\ ($\alpha$=1.55$\pm$0.09) or the wide survey field ($\alpha$=1.3$\pm$0.1). The difference is only significant at the 1.4$\sigma$ level. Nonetheless, given that there are also excess sources coincident with the Arches region in the spatial distribution, we suggest that there is a genuine over-abundance of bright X-ray sources in this region of recent star formation.
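In its simplest form (error-free fluxes above a completeness limit), the maximum-likelihood slope estimator for a single power law is a one-line computation; the sketch below is illustrative only and does not reproduce the Poisson modification described above:
\begin{verbatim}
import numpy as np

def ml_slope(fluxes, s_lim):
    # Maximum-likelihood slope of N(>S) = N0 * (S/S_lim)^-alpha
    # for fluxes above a completeness limit s_lim.
    s = np.asarray(fluxes, dtype=float)
    s = s[s >= s_lim]
    alpha = len(s) / np.sum(np.log(s / s_lim))
    return alpha, alpha / np.sqrt(len(s))  # value, ~1-sigma error
\end{verbatim}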
\begin{deluxetable*}{lcccccc}[htp] \tablecolumns{7} \tablewidth{0pc} \tablecaption{Parameters of the $\log N - \log S$ Distribution\label{tab:lognlogs}} \tablehead{ \colhead{Field} & \colhead{$S_{\rm lim}$} & \colhead{Num.} & \colhead{Area} & \colhead{$\alpha$} & \colhead{$N_0$} & \colhead{$P_{\rm KS}$} \\ \colhead{} & \colhead{$10^{-6}$ \mbox{photons cm$^{-2}$ s$^{-1}$}} & \colhead{Sources} & \colhead{(arcmin$^{2}$)} & \colhead{} & \colhead{(arcmin$^{-2}$)} & \colhead{} } \startdata Sgr A* & 0.5 & 323 & 44 & 1.55$\pm$0.09 & 0.41 & 0.00 \\ Arches & 1 & 17 & 22 & 1.0$\pm$0.3 & 0.08 & 0.88 \\ Field & 3 & 92 & 813 & 1.3$\pm$0.1 & 0.02 & 0.88 \enddata \tablecomments{The normalization of the $\log N - \log S$ distribution, $N_0$, is listed for a fiducial flux of $2\times10^{-6}$ \mbox{photons cm$^{-2}$ s$^{-1}$}, to match the spatial distribution in Figure~\ref{fig:londist}. $P_{\rm KS}$ is the probability, under a Kolmogorov-Smirnov test, of obtaining the observed difference between the data and the model distribution if the two were identical; very small values therefore indicate a poor match.} \end{deluxetable*} \begin{figure} \centerline{\epsfig{file=logn_logs.ps,width=0.95\linewidth}} \caption{ The cumulative number of sources as a function of limiting flux, for three regions of interest: the inner 8\arcmin\ radius around \mbox{Sgr A$^*$}, the 8\arcmin\ radius around the Arches cluster (excluding the overlap with the \mbox{Sgr A$^*$}\ region), and the wide survey (excluding the fields around Sgr B, Sgr C, the Arches, and \mbox{Sgr A$^*$}). The solid line indicates the best-fit power law, which we determined from the un-binned distribution. The top axis provides an estimate of the luminosity corresponding to the observed flux. The luminosity is calculated assuming $D$=8~kpc and a mean photon energy of $8.7\times10^{-9}$ erg (corresponding to a $\Gamma$=0.5 power law absorbed by $N_{\rm H}$=$6\times10^{22}$~cm$^{-2}$). } \label{fig:lognlogs} \end{figure} A similar asymmetry has been identified in the flux of diffuse emission from helium-like iron \citep{koy07}. We suggest that both the excess point sources and the excess iron emission are related to the concentration of young stars in this region, the most dramatic manifestations of which are the Arches and Quintuplet clusters \citep[e.g.,][]{fig99}. The iron emission is probably diffuse, hot plasma that forms in shocks where the stellar winds from the clusters impact the ISM \citep{yz02,lyz04,wdl06}. The excess point sources are probably young, OB and Wolf-Rayet stars in binaries \citep[e.g.,][]{mau07}. \section{Discussion} We have presented a catalog of 9017 X-ray sources located in the inner 2$^\circ$\ by 0.8$^\circ$\ around the Galactic center. This increases the number of sources known in the region by a factor of 2.5. For all of the sources, we provide tables listing their positions (Table~\ref{tab:positions}), photometry, and colors (Table~\ref{tab:phot}). Of these sources, 6760 have hard colors that are consistent with high absorption columns, $N_{\rm H}$$\ga$$4\times10^{22}$~cm$^{-2}$, which indicates that they lie at or beyond the Galactic center. In addition, the positions of the X-ray sources in this catalog are more accurate than in earlier versions. This catalog contains 2029 sources with $<$0.5\arcsec\ uncertainties (90\% confidence), and another 3981 with uncertainties between 0.5\arcsec\ and 1\arcsec. This catalog is well-suited for comparisons with multi-wavelength catalogs, in order to search for young stars, high-mass X-ray binaries, and pulsars \citep[e.g.,][]{wll02b,lwl03,mik06,m-ys,mau07}. The luminosity range that we cover, from $10^{31}$ to $10^{34}$~\mbox{erg s$^{-1}$}\ (0.5--8.0 keV; assuming a $\Gamma$=1.5 power law, $N_{\rm H}$=$6\times10^{22}$ cm$^{-2}$, and $D$=8 kpc), is at least an order of magnitude fainter than studies of Local Group galaxies \citep[e.g.,][]{tp04,kil05,plu08}. Consequently, the natures of the sources that we study are also very different. Whereas the detectable stellar population of external galaxies in X-rays is dominated by accreting black holes and neutron stars, most of our sources are probably cataclysmic variables \citep[e.g.,][]{m-wide}. The hardness of the X-ray colors (Fig.~\ref{fig:hit}) suggests that the sources are specifically magnetically-accreting white dwarfs \citep{ei99,m-wide}.
Therefore, the X-ray population probably represents old stars. Indeed, the spatial distribution of sources brighter than $2\times10^{-6}$~\mbox{photons cm$^{-2}$ s$^{-1}$}\ (2--8 keV) traces that of the old stellar population (Fig.~\ref{fig:londist}). This makes the population of X-ray sources in the Galactic center similar to those seen in globular clusters \citep[e.g.,][]{ver97,hei06}. Although the distribution of the majority of the X-ray sources traces that of the old stellar population, we have found 2.8$\sigma$ evidence for an excess of sources in two regions where young, massive stars are forming: in the inner few arcminutes around \mbox{Sgr A$^*$}, and in the region where the Arches and Quintuplet star clusters lie. The excess of sources near these young star clusters also appears in the number of sources as a function of limiting flux, in which relatively more bright X-ray sources are found near the Arches and Quintuplet (Fig.~\ref{fig:lognlogs} and Table~\ref{tab:lognlogs}). In total, these two regions contain a couple of dozen more bright sources than our stellar mass model predicts. We suggest that these excess X-ray sources are part of the young stellar population in these regions \citep{mik06,m-ys,mau07}. In the near future, we will publish additional OB and Wolf-Rayet stars that have been identified through infrared spectroscopy of counterparts to X-ray sources (J. Mauerhan \mbox{et al.}, in prep). \begin{deluxetable*}{llcccl}[htp] \tabletypesize{\scriptsize} \tablecolumns{6} \tablewidth{0pc} \tablecaption{Luminous X-ray Binaries Covered by Our Observations\label{tab:transients}} \tablehead{ \colhead{{\it Chandra}\ name} & \colhead{Common Name} & \colhead{RA} & \colhead{DEC} & \colhead{uncertainty} & \colhead{Reference} \\ \colhead{(CXOUGC J)} & \colhead{} & \multicolumn{2}{c}{(Degrees, J2000)} & \colhead{(arcsec)} & \colhead{} } \startdata 174354.8-294441 & 1E 1740.7-2942 & 265.97864 & $-29.74499$ & 0.5 & \citet{sid99} \\ 174417.2-293943 & AX J1744.3-2940 & 266.07190 & $-29.66234$ & 0.5 & \citet{sid01} \\ 174433.0-284427 & Bursting Pulsar & 266.13788 & $-28.74096$ & 0.5 & \citet{ww02} \\ 174451.6-292042 & KS 1741-293 & 266.21515 & $-29.34522$ & 0.5 & \citet{int97} \\ 174457.4-285021 & XMM J174457-2850.3 & 266.23944 & $-28.83917$ & 0.3 & \citet{sak05} \\ [2pt] 174502.3-285449 & Granat 1741.9-2853 & 266.25983 & $-28.91397$ & 0.4 & \citet{m-grs} \\ 174535.6-290133 & AX J1745.6-2901 & 266.39853 & $-29.02612$ & 0.4 & \citet{mae96} \\ 174535.5-290124 & \nodata & 266.39822 & $-29.02337$ & 0.3 & \citet{m-trans}\\ 174537.1-290104 & 1A 1742-289 & 266.40494 & $-29.01796$ & 0.4 & \citet{dav76} \\ 174538.0-290022 & \nodata & 266.40863 & $-29.00623$ & 0.3 & \citet{m-trans} \\ [2pt] 174540.0-290005 & \nodata & 266.41699 & $-29.00160$ & 0.4 & \citet{m-trans} \\ 174540.0-290030 & \nodata & 266.41684 & $-29.00859$ & 0.3 & \citet{m-trans} \\ 174540.9-290014 & \nodata & 266.42078 & $-29.00398$ & 0.4 & \citet{m-trans} \\ 174553.9-290346 & SWIFT J174553.9-290347 & 266.47467 & $-29.06305$ & 0.4 & \nodata \\ 174554.4-285455 & XMM J174554.4-285456 & 266.47690 & $-28.91533$ & 0.4 & \citet{por05} \\ [2pt] 174621.0-284342 & 1E 1743.1-2843 & 266.58768 & $-28.72868$ & 0.4 & \citet{por03} \\ 174702.5-285259 & SAX J1747.0-2853 & 266.76080 & $-28.88307$ & 0.4 & \citet{wmw02} \\ \nodata & XTE J1748-288 & 267.02108 & $-28.47383$ & 0.6 & \citet{hrm98} \\ \nodata & XMM J174544-2913.0 & 266.43546 & $-29.21683$ & 4.0 & \citet{sak05} \\ \enddata \end{deluxetable*} A small fraction of the X-ray sources should be
accreting black holes and neutron stars. Around 300 such X-ray binaries are known in the Galaxy, about half of which contain low-mass donors that overflow their Roche lobes, and half of which contain high-mass (OB and Wolf-Rayet) stars that donate mass through a stellar wind \citep{liu06,liu07}. These X-ray binaries are most easily identified when they are bright and variable \citep{m-trans}. In total, over the history of X-ray astronomy, 19 X-ray sources in our survey field have been observed to be $>$$10^{34}$~\mbox{erg s$^{-1}$}\ in X-rays, and have varied by at least an order of magnitude in X-ray flux (Table~\ref{tab:transients}). Fifteen of these transient X-ray sources were bright during the time span of our {\it Chandra}\ observations (1A 1742-289 and XTE J1748-288 never entered outburst). Half of them have been discovered in the last 9 years using {\it Chandra}, {\it XMM-Newton}\, or {\it Swift} \citep[e.g.,][]{sak05,por05,m-trans,wij06,ken06}. Surprisingly, despite having obtained 600 ks of new data in 2006 and 2007, we did not detect any new, bright ($>$$10^{34}$~\mbox{erg s$^{-1}$}), transient X-ray sources. This suggests that we have identified all of the X-ray binaries that are active on time scales of a decade. As mentioned in \S\ref{sec:obs}, the tables from this work will be available in the electronic edition of this journal, and additional products will be made available from the authors' web site.\footnote{{\tt http://www.srl.caltech.edu/gc\_project/xray.html}} The data available from the authors' site include FITS versions of all of the images presented in this paper, as well as the averaged event lists, snapshot images, spectra, and calibration files for each source in the catalog. Combined with an increasing amount of multi-wavelength data, this data set can be used to better understand the interactions between stars and the interstellar medium in the Galactic center, and the population of X-ray emitting objects in general. \acknowledgements MPM, RMB, WNB, GCB, PSB, AC, SDH, JCM, QDW, ZW, and FYZ received support from NASA through Chandra Award Number G06-7135 issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics and Space Administration under contract NAS8-03060. TJWL and NEK received funding for basic research in astronomy at the Naval Research Laboratory, which is supported by 6.1 base funding.
\section{Introduction} The unidentified $\gamma$-ray source HESS~J1943+213 is intriguing because of its low Galactic latitude ($-1\fdg29$). It could be the first BL Lac object to be observed through the Galactic plane. Proving on the other hand that HESS~J1943+213 is a pulsar wind nebula (PWN), residing in a supernova remnant (SNR) shell at the outer edge of our Galaxy, could help solve the long-standing problem of the missing Galactic SNRs. With a Galactic supernova rate of one per $30-50$ years \citep{tammann1994}, and a remnant radio-lifetime average of $\geq6\times10^4$ years \citep{frail1994}, there should be $\sim1.6\times10^3$ SNRs at any given time. In reality, the detected SNRs make up only $\sim$18$\%$ of this number \citep{green2014}. HESS~J1943+213 was first discovered as the hard X-ray source \mbox{IGR~J19443+2117} by \cite{landi2009} with {\em INTEGRAL}. \cite{landi2009} identified the source in several wavelength bands. In the X- and $\gamma$-rays and in the infrared (IR; J, H, and K band) it corresponds to the source \mbox{2MASS~J1943562+2118233}, and in the radio at 1.4\,GHz to \mbox{NVSS~J194356+211826}. A power law was fitted to the {\em INTEGRAL} data in the 0.9\,$-$\,100\,keV energy band, which provides evidence for absorption in excess of the Galactic value and a slope typical of the spectral indices of active galactic nuclei (AGN). The source is highly absorbed in the IR J, H, and K bands, which gives an E\textit{(B$-$V)} also in excess of the Galactic value. The absorption in excess of the Galactic value provides evidence for the source's extragalactic nature, and in combination with the X-ray, $\gamma$-ray, and radio data, \citet{landi2009} propose that HESS\,J1943+213 is a radio-quiet AGN. \setcounter{footnote}{0} In order to localize and determine the nature of the source, \mbox{IGR J19443+2117} was followed up in X-rays with {\em Chandra} \citep{tomsick2009}. {\em Chandra}'s higher positional accuracy confirmed the association between the X-ray, IR, and radio sources. An absorbing column density significantly higher than the Galactic value ($\rm{N_{H}/N_{H_2}} = 0.84/0.054$) provided more evidence for the source's extragalactic nature. \setcounter{footnote}{0} In 2011 the source was discovered to also emit at TeV energies \citep{HESS2011,cerruti2011}, as HESS J1943+213. Its detection at very high energies (VHE), together with its flat radio spectrum shown by \cite{HESS2011}, makes it plausible that the source is a PWN. Many unresolved VHE Galactic sources turn out to be PWNe\footnote{TeVCat, an online catalog for TeV Astronomy, \url{http://tevcat.uchicago.edu/}}. Although the source has now been observed at multiple energies and wavelengths, its nature has remained unclear. Two plausible scenarios remain for the nature of this source. It is either a high-frequency-peaked BL Lac (HBL) object, evidenced by its TeV emission and soft VHE spectrum \citep{HESS2011}, or it could be a Galactic PWN \citep[e.g.][]{HESS2011, gabanyi2013}, which is supported by the lack of variability \citep{shahinyan2015b}. \cite{HESS2011} also proposed that the source could be a gamma-ray binary; however, this scenario was quickly discarded because no massive companion was detected to a distance limit of $\sim$25\,kpc. This would place the potential binary outside our Galaxy, making it 100\,$-$\,1000 times brighter in X-rays than any known gamma-ray binary.
In addition, the observed tens-of-arcseconds to $\sim$1-arcminute-scale radio structure \citep{gabanyi2013} cannot be explained by the colliding winds such a binary would produce. The lower distance limit of 16\,kpc derived from HI absorption favours an extragalactic nature \citep{leahy2012}, but is inconclusive, because \cite{vallee2008} shows that the furthest spiral arm of the Milky Way reaches distances greater than 20\,kpc. However, the soft TeV spectrum, $\Gamma=3.1\pm0.5$, is softer than that of all known PWNe \citep{kargaltsev2010} and argues in favour of the blazar hypothesis \citep{HESS2011}. Also, \cite{peter2014} interpret the K-band counterpart to be a massive elliptical galaxy, with only a 10$\%$ chance of it being a star, and found a weak 5.1$\sigma$ detection of the counterpart of HESS~J1943+213 above 1\,GeV in 5 years of {\em Fermi} data. This supports the blazar hypothesis, as most blazars are detected by {\em Fermi} \citep{piner2014}. Recent {\em VERITAS} observations, conducted by \cite{shahinyan2015b}, show no brightness variability; however, BL Lac objects are known for their wide range in variability time-scales and intensities. Despite the evidence for its BL Lac nature, the radio spectral index obtained by \cite{HESS2011} is also compatible with the PWN scenario. In further support, \cite{piner2014} argue that the TeV brightness of the source is two orders of magnitude too low for an HBL object, which would leave the PWN scenario as the only plausible scenario. \subsection{PWN hypothesis} \label{sec:intro_pwn} The PWN hypothesis is strengthened by the re-analysis of archival VLA large-scale HI data \citep[VGPS;][]{stil2006} in \cite{gabanyi2013}. These data revealed the presence of a shell-like feature of $\sim1$\textdegree\ diameter, with the radio/X-ray/TeV point source near its center \citep[see Fig.~3,][]{gabanyi2013}. This shell-like feature can be interpreted as the consequence of a supernova explosion, where the central compact source is the PWN, powered by a young pulsar. The expansion suggests that the supernova explosion occurred $4\,\times\,\rm{10^5}$\,yr ago. Indeed, if one puts the Crab or 3C58 PWNe at the proposed distance of 17\,kpc \citep{gabanyi2013}, they would appear the same size as HESS~J1943+213 seen in the archival VLA data. We then expect the young, energetic pulsar powering the PWN to have a period of 30--300\,ms. At the proposed distance, the dispersion measure (DM) would be of order 500\,pc\,cm$^{-3}$ along this line of sight \citep[from NE2001,][]{cordes2002}. \subsection{BL Lac hypothesis} In 2011, \cite{gabanyi2013} performed EVN (European VLBI Network) observations of the source at 1.6\,GHz and found the radio counterpart of the high-energy source at an offset of $3\farcs75$ from the NVSS catalog coordinates. The recovered flux density was 31$\pm$3\,mJy, which is only one-third of the flux density ($95\pm9$\,mJy) recovered simultaneously with the Westerbork Synthesis Radio Telescope (WSRT). The latter corresponds well to the flux density obtained from archival data of the Very Large Array (VLA) taken at 1.4 GHz on 1985 September 30 (project: AH196), and thus shows a discrepancy in flux density between the separate angular scales. Using these observations as a comparison, we will discuss the proposed BL Lac nature of the source by imaging it in order to further investigate its sub-arcsecond radio structure.\\ In this paper we present new time-domain, high-resolution imaging, and continuum investigations of HESS~J1943+213 (hereafter J1943+213).
In Sect. \ref{sec:observations} we describe the observations and data reduction of all three studies. Section \ref{sec:results} contains our findings. We discuss these results in Sect.~\ref{sec:discussion} before concluding, in Sect.~\ref{sec:conclusion}, on the nature of this intriguing source. \section{Observations} \label{sec:observations} In order to investigate the nature of the source we took a three-pronged approach. To determine if the source is a PWN, we have performed high-time-resolution observations with the Arecibo radio telescope to find the putative pulsar powering the PWN. These will be addressed first. Secondly, we investigate any sub-arcsecond-scale radio structures of the source, for which we have obtained e-MERLIN (electronic Multi-Element Remotely Linked Interferometer Network) observations. Finally, we discuss flux density measurements obtained by the new survey VLITE (VLA Low Band Ionospheric and Transient Experiment, Clarke et al.\ 2016, in preparation). Together with survey catalog flux density measurements, this low-frequency observation at 340\,MHz will provide information on the continuum spectrum, which in turn could help to rule out either the proposed PWN or BL Lac object scenario. \subsection{Pulsar search} Arecibo, with its 305-m diameter, is the largest single-dish telescope in the world. Combined with its broadband observing capabilities, its high instantaneous sensitivity makes it highly suitable for pulsar searches. The source J1943+213 was observed in two frequency bands to improve the chance of detecting the putative pulsar, given the various frequency-dependent effects. If the source is a PWN in a $\sim1$\textdegree\, SNR shell, it is expected to be at a distance of $\sim$17\,kpc \citep{gabanyi2013}. The free electrons in the line of sight create a dispersive time delay in the putative pulsar signal, which is proportional to $f^{-2}$, where $f$ is the observing frequency. This delay is expressed through the dispersion measure (DM), the column density of free electrons along the line of sight, given in $\rm{pc\,cm^{-3}}$. Observing at higher frequencies incurs less dispersive time delay, which increases the signal-to-noise (S/N) ratio of a pulsar signal. Beyond this dispersion, the signal will suffer from scattering, which is proportional to $f^{-4}$. Scattering stretches the pulse shape, also resulting in a reduction of the S/N ratio. These two effects make observing at low frequencies a challenge. In contrast, pulsars are known to have steep radio spectra, with flux densities that mostly peak in the range of 200$-$400\,MHz. As the pulsar signal is stronger towards lower frequencies, observing at lower frequencies is advantageous. It might enable the detection of the pulsar where at higher frequencies it would be too weak. However, taking the above-mentioned effects into account, observing below 1.4\,GHz might smear out most of the signal for a distant, high-DM pulsar.
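To make the trade-off concrete, the dispersive sweep across a band can be estimated with the standard dispersion constant of $\approx$4.149\,ms\,GHz$^2$\,pc$^{-1}$\,cm$^3$; the short sketch below (ours, for illustration) evaluates it for the expected DM of 500\,pc\,cm$^{-3}$:
\begin{verbatim}
K_MS = 4.149  # ms GHz^2 pc^-1 cm^3

def sweep_ms(dm, f_lo_ghz, f_hi_ghz):
    # Total dispersive delay (ms) between the band edges.
    return K_MS * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

# L-wide (1444.1 MHz center, 688 MHz bandwidth), DM = 500:
print(sweep_ms(500.0, 1.1001, 1.7881))  # ~1.1e3 ms, i.e. ~1 s
\end{verbatim}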
\subsubsection{Observations with Arecibo} Arecibo was pointed at the radio counterpart of HESS~J1943+213, \mbox{NVSS J194356+211826} \citep{landi2009}, on 2012 June 1. The source was observed in the L-wide band around 1.44\,GHz and in the S-wide band around 2.85\,GHz with the Arecibo Mock spectrometers in single-pixel mode. The field-of-view (FoV) for the pointing in the L-wide band was $3\farcm1\,\times\,3\farcm5$, and in the S-wide band $1\farcm8\,\times\,2\farcm0$. In the L-wide band we observed for 54 minutes with a large total bandwidth of 688\,MHz. This is more than twice as large as the 300\,MHz bandwidth of the PALFA Survey \citep{knispel2011}, the Arecibo L-band Feed Array 1.4\,GHz survey for radio pulsars, which is currently the most sensitive pulsar survey of the Galactic plane. In the S-wide band we observed using a slightly smaller bandwidth of 494\,MHz for 69 minutes. More observing details can be found in Table~\ref{table:obsspecs_arecibo}. \begin{table} \begin{center} \caption{Observing specifications for the Arecibo pulsar search\label{table:obsspecs_arecibo}} \begin{tabular}{l r r} \tableline\tableline & L-wide & S-wide\\ \tableline Central frequency (MHz) & 1444.1 & 2852.3\\ Total Bandwidth (MHz) & 688 & 493.6\\ Number of Channels & 4096 & 2940\\ Sample time ($\rm{\mu}$s) & 65.45 & 65.48\\ Total time (s) & 3237.6 & 4181.9\\ \tableline \end{tabular} \end{center} \end{table} \subsubsection{Data reduction} The obtained data were streamed to the Cartesius supercomputer at SURFsara\footnote{\url{https://www.surfsara.nl/nl/systems/cartesius}}. There the data were converted from 16-bit to 8-bit samples, and the Mock subbands were combined. The resulting PSRFITS files were further analysed with the pulsar search software PRESTO \citep{ransom2001}. Radio frequency interference (RFI) was masked out of the data. The L-wide data were searched for periodic signals in the DM range from 0 to 1000 $\rm{pc\,cm ^{-3}}$, which is twice the expected DM value (see Sect. \ref{sec:intro_pwn}). The DM range was searched starting with steps of 0.05 $\rm{pc\,cm^{-3}}$ and no down-sampling, up to steps of 0.30 $\rm{pc\,cm^{-3}}$ and a down-sampling factor of 4. The candidate signals were sorted by their S/N ratio and inspected down to a detection limit of 4.1$\sigma$, equivalent to $\chi^2 \sim$ 1.90. These detection limits, as output by PRESTO, signify how much the candidate signal deviates from a straight line. The same DM range was searched in S-wide with steps of 1.00 $\rm{pc\,cm^{-3}}$ and no down-sampling. Here we could afford to search using larger DM steps, because the frequency-dependent smearing is less at higher frequencies. Further analysis was performed following the same steps, where in S-wide all candidate signals were inspected. For every DM step in both bands, we also looked for single pulses of widths between 0.064 and 10\,ms, down to a S/N = 8.
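The chosen DM step sizes can be checked against the sampling time with the same band-edge formula as above; this sketch (ours) shows that the 0.05\,$\rm{pc\,cm^{-3}}$ step in L-wide is well matched to the 65.45\,$\mu$s sampling:
\begin{verbatim}
def residual_smear_ms(d_dm, f_lo_ghz, f_hi_ghz):
    # Smearing (ms) across the band when the trial DM is
    # offset from the true DM by d_dm (pc cm^-3).
    return 4.149 * d_dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

# Worst-case offset of half an L-wide step (0.025 pc cm^-3):
print(residual_smear_ms(0.025, 1.1001, 1.7881) * 1e3)  # ~53 us
# Slightly below the 65.45 us sampling, so finer steps
# would gain little sensitivity.
\end{verbatim}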
\subsection{e-MERLIN radio imaging and data reduction} Radio imaging data were obtained by e-MERLIN, which is a UK-based long-baseline interferometric array using six 25-m dishes with the optional inclusion of the Lovell telescope (76\,m diameter). The total network can reach a resolution of 150 milli-arcseconds (mas) in the L-band (1.5\,GHz). The e-MERLIN observations were obtained at 1.5 and 5\,GHz \citep[project CY1017;][]{gabanyi2015}. The 1.5-GHz observation took place on 2013 December 7 and lasted for 12 hours. From the total observing time, approximately 6\,h was on-source time. Unfortunately, the self-calibration did not provide meaningful solutions in the first few hours of the observation and only $\sim$ 4.3\,h could be used for the imaging of the source. The phase calibrator was J1946+2300. The 5-GHz observation had to be split into three 4.5-h runs because, due to maintenance, the Jodrell Bank MkII telescope could only participate in observations during the night in 2013 October. The three runs were carried out between 2013 October 11 and 14. To cover the missing hour angles, two additional runs were carried out on 2014 June 12 and 13. Except for the last run, when MkII was not involved in the observation, all e-MERLIN telescopes participated. The on-source time was approximately 13\,h. The phase-reference calibrator was J1925+2106. Data reduction was done using the National Radio Astronomy Observatory (NRAO) Astronomical Image Processing System \citep[AIPS,][]{greisen2003}, following the e-MERLIN cookbook version 2.4a. \subsection{VLITE data reduction} The National Radio Astronomy Observatory's Very Large Array (VLA) is a 27-antenna interferometer operating between 56\,MHz and 50\,GHz \citep{perley2011}. We obtained data from a new commensal observing system on the VLA called the VLA Low Band Ionospheric and Transient Experiment (VLITE). This system records data for a 64\,MHz bandwidth of the low-band receiver \citep{clarke2011} centered at 352\,MHz for 10 VLA antennas (Clarke et al.\ 2016, in preparation). The correlator is a custom DiFX correlator \citep{deller2011} operating in real time on the VLITE data stream. Science operations began in November 2014 and VLITE data have been recorded for nearly all pointed VLA observations with primary science programs above 1\,GHz since that time. We searched the VLITE archive and found J1943+213 within the field of view of two separate observations on 2014 December 4. The observations were split across two phase centers for a combined observing time of 11.1 minutes. The VLITE data were calibrated following standard reduction procedures, with each observation processed separately before combining the final images. RFI was excised using automated routines. Additional fixed flags were applied to remove known bright RFI and aliasing in the $360-384$\,MHz portion of the spectrum. Next the data were corrected for delay offsets, followed by an initial round of calibration. Additional flagging was undertaken following that calibration, and a second round of calibration was applied. For our target source, the flux density calibration used 3C286 and 3C48 to set the scale. We have converted measured flux densities to the scale of the other measurements using the known scaling for the flux density calibrators. Following calibration we attempted phase self-calibration of the data, but found too little flux density in the field to improve the phase solutions. Each of the individual pointings was convolved to a matched circular beam (52\arcsec) and then corrected for primary beam attenuation, using a recently derived VLITE primary beam appropriate for the VLA subreflector position of the observations. The two pointings were then combined in the image plane into a final image, which has an rms of 36.2\,mJy\,beam$^{-1}$. \section{Results} \label{sec:results} \subsection{Pulsar search} \begin{figure} \includegraphics[width=0.5\textwidth]{PSR_1953+29_adjusted} \caption{The detected pulse signal of the test pulsar B1953+29, observed in the S-wide band, as output by PRESTO. The left-hand side shows the cumulative pulse profile and its progression in time. The upper right panel shows the pulse signal as a function of frequency, where the white stripe is masked-out radio frequency interference (RFI). The bottom-right panel displays the strength of the pulsar signal as a function of dispersion measure ($\rm{pc \,cm^{-3}}$).\label{fig:testpsr}} \end{figure} The Arecibo instrument and software setup was verified by observing three known, bright pulsars (PSRs B1937+21, B1953+29, B2016+28) in nearby directions in the sky.
All were easily detected (cf.~Fig.~\ref{fig:testpsr}). Our sensitive search of J1943+213 resulted in $\sim$2000 candidate signals in L-wide and $\sim$1500 candidate signals in S-wide. All these candidate signals were inspected by eye, where we focused on a clean pulse profile, as seen in the top left of Fig.~\ref{fig:testpsr}, and a peak in S/N at the detected DM, as seen on the bottom right of the same figure. A peak at a nonzero DM indicates that the signal is dispersed and thus likely of celestial origin. The bottom left graph shows the intensity of the pulse profile as a function of phase and time, which for a real pulsar should look similar to that of the shown test pulsar. No candidate signal down to a detection limit of $\chi^2$=1.90 and 4.1\,$\sigma$ appeared to be a pulsar. For an indication of the thoroughness of these limits, we compare to the PALFA survey, where the ten most recent detections had on average $\chi^2$ = 8 and 23\,$\sigma$. More specifically, the five weakest pulsars in our expected period regime (30\,ms $\le$ P $\le$ 300\,ms) have $\chi^2$= 2.12 -- 4.73 and $\sigma$= 6.1 -- 14.7. These would all have been handily detected. To determine the corresponding flux density limits, we use the modified radiometer equation \citep[Eq. \ref{eq:radiometer}, after ][]{dewey1985}: \begin{equation} \label{eq:radiometer} S_{\rm{min}} = \frac{\rm{S/N}}{\rm{Gain}} \frac{T_{\rm{sys}}} {\sqrt{n_{\rm{pol}}\times \tau \times \rm{BW}}}\sqrt{\frac{W}{P-W}} \end{equation} In our observations we searched down to S/N = 6 in L-wide and S/N = 4 in S-wide. The gain of Arecibo is 10.5\,K\,Jy$^{-1}$ for L-wide and slightly less (9.5\,K\,Jy$^{-1}$) for S-wide. The frequency-dependent system temperature ($T_{\rm{sys}}$) is 25\,K in L-wide and 32\,K in S-wide. In both bands we recorded both polarization directions ($n_{\rm{pol}}=2$). The bandwidth (BW), expressed in Hz, and observing time ($\tau$) in seconds, are given in Table~\ref{table:obsspecs_arecibo}. $W$ and $P$ are the pulse width and rotation period of the pulsar, respectively. Together they provide the duty cycle, $W/P$, of the pulsar signal (i.e. the fraction of time a pulsar is `on'). Because $W$ and $P$ are unknown when searching for a new pulsar, we use the average $W_{\rm{50}}/P=0.071$ duty cycle over all PWN pulsars in the ATNF pulsar database \citep{manchester2005}. We find we were sensitive to signals of 1.90\,$\mu$Jy in L-wide and of 1.45\,$\mu$Jy in S-wide. The pseudo luminosity, $L=Sd^2$, that accounts for the distance, is 0.55\,mJy\,kpc$^2$ (L-wide) and 0.42\,mJy\,kpc$^2$ (S-wide) for the assumed distance of $d$=17\,kpc. This is a factor of 3 to $\sim$300 below the pseudo luminosities of 88$\%$ of all known pulsars in a PWN. In addition to the search for periodic pulsar signals described above, we have also conducted a single-pulse search. Such a search is sensitive to pulsars that emit only irregularly. We identified all 3000 single pulses with S/N ratios above 8, in both bands, and evaluated how their DM versus time signature compared to the shape expected for a pulsar (D. Michilli, \emph{priv.~comm.}). For the best 30 candidates we looked for the dispersed pulse curve in the high time-resolution dynamic spectrum, which would be characteristic of a pulsar. We did not find any such single pulses.
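For reference, Eq.~\ref{eq:radiometer} with the L-wide parameters above can be evaluated directly; the following sketch (ours) reproduces the $\approx$1.9\,$\mu$Jy limit, and the S-wide value follows analogously:
\begin{verbatim}
import math

def s_min_ujy(snr, gain, tsys, bw_hz, tau_s, duty=0.071, npol=2):
    # Minimum detectable flux density (uJy) from the modified
    # radiometer equation, for a duty cycle W/P.
    s = (snr / gain) * tsys / math.sqrt(npol * tau_s * bw_hz)
    return 1e6 * s * math.sqrt(duty / (1.0 - duty))

# L-wide: S/N = 6, 10.5 K/Jy, 25 K, 688 MHz, 3237.6 s
print(s_min_ujy(6, 10.5, 25.0, 688e6, 3237.6))  # ~1.9 uJy
\end{verbatim}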
\subsection{e-MERLIN imaging} \label{sec:results_eMERLIN} \begin{figure*} \includegraphics[width=\textwidth]{gabanyi2015_fig} \caption{e-MERLIN images of HESS~J1943+213. Left-hand side: L-band image. The peak is 19.2\,mJy\,beam$^{-1}$, the beam size is 252\,mas\,$\times$\,117\,mas at a position angle of 26\textdegree. The lowest positive contour is at 0.34\,mJy\,beam$^{-1}$ (7$\sigma$ level); further contours increase by factors of two. Right-hand side: C-band image. The peak is 19.4\,mJy\,beam$^{-1}$, the beam size is 92\,mas\,$\times$\,45\,mas at a position angle of 46\textdegree. The beams are shown in the lower left corner of each plot. The lowest positive contour is at 0.2\,mJy\,beam$^{-1}$ (7$\sigma$ level); further contours increase by factors of two. The dashed contours in both plots indicate the 7$\sigma$ negative contours. \label{fig:e-merlin}} \end{figure*} The e-MERLIN data reduction resulted in an unresolved point source at both frequencies, as shown in Fig.~\ref{fig:e-merlin}. Neither image shows any large-scale feature down to a 7$\sigma$ level (0.34\,mJy\,beam$^{-1}$) in the $23\arcsec\times 23\arcsec$ L-band FoV. Brightness distribution models were fit to the visibilities with Difmap \citep{shepherd1994}, and we found that single, circular, Gaussian components best describe the source at both frequencies. The emission has a flux density of 22.2$\pm$0.7\,mJy at 1.5\,GHz and of 22.4$\pm$0.3\,mJy at 5\,GHz. If we assume that the source did not show variability between the observations taken at the two frequencies, J1943+213 has a flat spectrum. Since the 5-GHz observations were split into several chunks, we were able to compare the source flux density in the different observing runs to check for flux density variability. Specifically, we did self-calibration and imaging using the first 13.5 hours, which were observed on three consecutive days in 2013 October. Separate self-calibration and imaging were done using only the observations performed in 2014 June. We did not detect significant variability between these two epochs. On the other hand, in the L-band J1943+213 was significantly fainter during the e-MERLIN observation in 2013 December compared to the EVN L-band observation in 2011 May \citep{gabanyi2013}. \subsection{Radio continuum spectrum of J1943+213} The radio continuum spectrum of J1943+213 is obtained by combining several radio survey flux density measurements of the source with measurements from observations pointed at the NVSS source, which is the accepted radio counterpart of J1943+213. \subsubsection{Flux density measurements from radio surveys} The VLITE observations detect a source at the NVSS position with almost 8-$\rm{\sigma}$ significance. The flux density at 340\,MHz is 0.23$\pm$0.05\,Jy at the peak and 0.36$\pm$0.07\,Jy integrated. From the fit, the structure appears slightly extended (56\arcsec) compared to the 52\arcsec$\times$52\arcsec\ resolution. The reported extension is only marginally larger than the beam size, thus it is unclear if the source is resolved, and we therefore have chosen to use the \emph{peak} flux density, which is appropriate for an unresolved source. The source is also detected by the re-reduction of the VLA low-frequency Sky Survey \citep[VLSSr,][]{lane2014} at 73.8\,MHz. The source, however, did not end up in the VLSSr catalog due to its 5-$\rm{\sigma}$ cutoff. Careful data reduction shows that the source is detected with a detection significance just below 4\,$\rm{\sigma}$. This sub-5-$\rm{\sigma}$ detection is strengthened by the spatial match (within 23\arcsec) to the NVSS source. The source is unresolved by the $75\arcsec\,\times\,75\arcsec$ beam of VLSSr.
The 73.8-MHz flux density obtained is $0.37\pm0.09$\,Jy, where we have corrected for clean bias and included a 12$\%$ flux density scale uncertainty. Finally, the radio counterpart of J1943+213 is also detected at higher radio frequencies. We find a detection of J1943+213 in the Arcminute Microkelvin Imager Galactic Plane Survey \citep[AMIGPS,][]{perrott2015} at 15.7\,GHz, well within the $\sim$\,5\arcsec\ error circle of AMIGPS. With its 3\arcmin\ resolution, the source is detected as a point source with a flux density of $23.5\pm3$\,mJy. \subsubsection{Radio continuum spectrum} \begin{figure*} \includegraphics[width=\textwidth]{beamshapes_2_VLA_thick_full} \caption{The FWHM of the beams of the observations of HESS J1943+213 with different radio telescopes or interferometers. The corresponding sizes and observed flux densities can be found in Table \ref{table:fluxes}. In the left panel the VLA 1\farcm1\,$\times$\,0\farcm8 structure \citep{gabanyi2013} is overplotted. The instruments corresponding to the right-most panel are unable to probe the extended structure observed by the VLA. \label{fig:radiobeams} } \end{figure*} Our radio flux densities, obtained from radio surveys and pointed observations, are listed in Table \ref{table:fluxes}. The flux density measurements have been brought to the absolute flux density scale using the VLA formula with 1999.2 coefficients, and the WSRT flux density measurements using Perley-Butler time-dependent coefficients. These are sorted by observing band and thereafter by observing epoch, to facilitate the identification of variability in time. To also distinguish between various spatial scales, we list the beam sizes (full width at half-maximum, FWHM). These beam sizes are shown in Fig.~\ref{fig:radiobeams}, centered on the NVSS catalog coordinates of J1943+213. For the sake of clarity the large, highly elongated Nan\c{c}ay Radio Telescope (NRT) beam \citep{HESS2011} is omitted. Also overplotted is the $1\farcm1\,\times\,0\farcm8$ extended radio structure detected by the VLA at 1.4\,GHz \citep{gabanyi2013}. Figure~\ref{fig:radiobeams} shows which observations can be expected to resolve the extended radio structure. Both AMIGPS and VLSSr see J1943+213 as a point source. Given its match to the structure size, the VLITE beam may or may not marginally resolve the source. The observations of J1943+213 can be sorted into two groups based on the structure size each observation was able to probe. Eight observations are able to see the extended structure of J1943+213 (shown in the two left panels of Fig. \ref{fig:radiobeams}). The other group (EVN and e-MERLIN) is only able to observe the mas-scale structures, which we hereafter call the core. These observations were unable to measure small-scale structures larger than 2\arcsec, limited by the maximum angular scale that e-MERLIN can recover in the L-band. The right panel of Fig.~\ref{fig:radiobeams} is a magnification of the left panel by a factor of approximately 700. The EVN and e-MERLIN observations were unable to probe the observed VLA structure. In Fig.~\ref{fig:powerlaw} we plot the obtained flux densities against their observed frequency. We then fit power laws ($f_{\rm{x}} = bx^{\alpha}$, where $b$ is the normalization and $\alpha$ the spectral index) to each of the two above-mentioned groups. Observations sensitive to the sum of the extended structure plus the core follow a power law with index $\alpha = -0.54\pm 0.04$.
Although the small-scale structure appears to be variable in time (see Section \ref{sec:results_eMERLIN}), the flux density variability falls well within the error bars of the larger-scale flux density measurements, and therefore we do not expect to be able to detect variability in the large-scale structure. The smaller-scale observations that resolved out the extended structure show a flat radio spectrum. Although we observed variability between the 2011 EVN observation and the 2013 e-MERLIN observation, we fit a power law to all observing epochs in order to obtain the average spectral index $\alpha = -0.03\pm0.03$. As visible in Fig.~\ref{fig:powerlaw}, this flat core spectrum completely accounts for the AMIGPS flux density; there is no 16-GHz extended-structure emission.
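Such two-component fits are easy to reproduce; the following weighted log--log least-squares sketch (ours, using the values of Table~\ref{table:fluxes}) recovers indices close to the quoted $-0.54$ and $-0.03$:
\begin{verbatim}
import numpy as np

def alpha_fit(f_ghz, s_mjy, err_mjy):
    # Weighted least-squares power-law index in log-log space
    # (weights ~ 1/sigma of log S).
    x, y = np.log10(f_ghz), np.log10(s_mjy)
    sig = np.asarray(err_mjy) / (np.asarray(s_mjy) * np.log(10))
    return np.polyfit(x, y, 1, w=1.0 / sig)[0]

# Extended structure + core (VLSSr ... AMIGPS):
a_tot = alpha_fit([0.0738, 0.340, 1.4, 1.4, 1.4, 1.6, 2.4, 15.7],
                  [370, 229, 91, 102.6, 111, 95, 86, 23.5],
                  [94, 47, 5, 3.6, 20, 9, 14, 3.4])
# Core only (EVN and e-MERLIN):
a_core = alpha_fit([1.6, 1.5, 5.0], [31, 22.7, 22.4], [3, 0.7, 0.3])
print(a_tot, a_core)   # ~ -0.54 and ~ -0.03
\end{verbatim}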
\cite{ravi2010} show that the radio beaming fraction of $\gamma$-ray pulsars is close to unity for the most energetic pulsars and goes down to $\sim$0.5 for lower-energy $\gamma$-ray pulsars. From the ATNF pulsar catalog \citep{manchester2005} we obtain that three-quarters of the pulsars in a PWN emit at high energies (i.e. X-rays and $\gamma$-rays). They are also amongst the most energetic pulsars and are thus expected to have a large beaming fraction. This means they are observable over a large range of lines of sight. Using the observationally deduced beaming-fraction average of $0.75\pm0.25$ and our obtained sensitivity, we can conclude with $0.7\pm0.2$ certainty (the product of the 88$\%$ detectability quoted above and the $0.75\pm0.25$ beaming fraction) that there is no pulsar in this source and that it is therefore not a PWN. In support, we find that the radio continuum emission is best described by two components, whereas the radio synchrotron emission of PWNe is described by a single power-law distribution with typical indices between $-0.3$ and 0 \citep{gaensler2006}. In principle, a two-component radio continuum emission could also be explained by a \emph{composite type} SNR, where a PWN is immersed in a faint SNR shell. This is seen in e.g. the composite SNR G292.0+1.8 \citep{gaensler2003}, where the contrast in surface brightness between the core and the surrounding plateau is one order of magnitude, and the respective spectral indices are $\alpha = -0.05$ and $\alpha = -0.5$. There is, however, a discrepancy in scale: if one were to observe G292.0+1.8 and similar composite SNRs (G0.9+0.1, \citealt{dubner2008}; HESS~J1818-154, \citealt{HESSJ18182014}; or LMC~0540-69.3, \citealt{brantseg2014}) at the assumed distance to J1943+213 of 17\,kpc, they would be a factor of $\sim 10^3$ larger than observed for J1943+213. Also, in the known sources, the SNR/PWN size ratio is 2.2--7.5, whereas for J1943+213 this ratio would be approximately $4\times10^3$. This would imply that the expansion of the SNR has proceeded orders of magnitude further than that of the PWN. Overall it appears highly unlikely that J1943+213 is an SNR of the composite type. \subsection{Blazar hypothesis} \label{sec:discussion_blazar} The compactness of the source and the observed radio flux density variability agree with the proposed BL Lac nature of the source. Observations of J1943+213 taken in 2014 November with the EVN at 1.6\,GHz reveal a complex core-jet morphology (Akiyama et al. 2016). The core brightness-temperature lower limit of $1.8 \times 10^9$\,K is consistent with a BL Lac. Akiyama et al. (2016) recover 42\,mJy of flux density, 10\,mJy higher than found in our 2011 EVN observation, confirming the source variability. Compared to the low-resolution observations, significant amounts of flux density remain missing. The large-scale emission, observed by the VLA in 1985 and confirmed by the WSRT observation during our EVN run in 2011, seems to be resolved out in the EVN and e-MERLIN observations. Compared to the WSRT, archival VLA, NRT and NVSS values, approximately 70\,mJy of flux density is missing at L-band. This cannot be explained by source-intrinsic changes, since the WSRT measurement was obtained simultaneously, as part of our EVN observation; the missing flux density thus has to come from the large-scale emission. BL Lac objects are a sub-class of blazars in which the spectral energy distribution (SED) is dominated by the emission from a relativistic jet pointing close to our line of sight \citep{urry1995}. In the radio we witness highly variable emission, compact on mas scales, just like in flat-spectrum quasars.
In the optical regime, however, a striking difference is the lack of broad emission lines. This is believed to result from the different nature of the accretion flows: quasars have a geometrically thin but optically thick accretion disc and accrete close to critical Eddington rates, while BL Lacs accrete at a much lower rate through a thick accretion disc and are radiatively inefficient \citep{maraschi2003,ghisellini2008}. Therefore, instead of the classical distinction between the two classes based on the equivalent width of their emission lines, a physically more motivated approach is the division by the Eddington-scaled luminosity of the broad-line regions, with a division line $L_{\rm{BLR}}/L_{\rm{Edd}}< 5\times10^{-4}$ \citep{ghisellini2011}. One may then expect that at the lower end of BL Lac accretion rates the SED of the galaxy will not necessarily be dominated by the relativistic jet from the active nucleus, and the radio emission from the Doppler-boosted jet base, optically thick to synchrotron emission in the radio (resulting in the flat spectrum and mas-scale compact structure), may not necessarily dominate over the large-scale radio emission. Indeed, \citet{giroletti2004, giroletti2006} have shown that low-redshift ($z<$0.2) BL Lac objects selected from flux density limited samples in the radio have weak radio cores and a variety of radio morphologies on kpc scales (jets, halos, secondary compact components). The properties of local BL Lacs are in fact found to be similar to their Fanaroff--Riley type I \citep{fanaroff1974} radio galaxy parent population. Further to this, a sample of 42 low-redshift ($z<$0.2) BL Lac objects, selected based on their broad-band properties (with no constraints on their radio and gamma-ray emission), revealed a number of ``non-classical'' BL Lacs that have low source compactness, low core dominance, and/or show no $\gamma$-ray emission and have steep radio spectra \citep{liuzzo2013}. From combining observations of surveys and pointed observations we find that the radio continuum emission of the extended structure follows a steep power law with spectral index $\alpha = -0.54\pm 0.04$. At $\sim$16\,GHz the contribution of the extended structure to the total emission becomes almost negligible, as can be seen in Fig.~\ref{fig:powerlaw}. The measured flux density at $\sim$16\,GHz agrees well with the expected flux density for the flat-spectrum ($\alpha = -0.03\pm0.03$) core. Our obtained index for the extended structure agrees well with the index ($\alpha_R = -0.59\pm0.16$) that \citet{HESS2011} obtained from their NRT observations, but is somewhat steeper than the index of $\alpha_R = -0.39$ obtained from archival observations of the source at positions that differ by 1\farcs7$-$40\arcsec\ from the NVSS catalog coordinates. This can be attributed to the fact that we only take measurements into account for which we can confirm the spatial match to the NVSS source, which might not be the case for some of the nine archival observations used by \citet{HESS2011}. The high-energy spectrum of J1943+213 also corresponds well with the HBL object hypothesis. \citet{peter2014} computed a fit for the spectral energy distribution from radio up to TeV energies. The spectrum can be well described by a synchrotron self-Compton model with a black-body component for the host galaxy \citep[see Fig.~7 in][]{peter2014}. The radio fluxes listed in Table \ref{table:fluxes} correspond well to the model provided by \cite{peter2014}.
When limiting ourselves to the flux densities attributed to the core structure (typically the dominant emission component in HBL objects), we find that these agree even better with the fit than the single NVSS radio point that \cite{peter2014} used. One may conclude that the compact radio emission in HESS J1943+213 might represent another ``non-classical'' BL Lac, related to low-power AGN activity in a low-redshift galaxy. We note however that this would still be an extreme case, with the lowest core dominance ever witnessed in a BL Lac object (only $\sim30\%$ of the flux density in the core). Other differences are that the extent of the extended radio emission is larger ($\sim$1\arcmin; but the redshift, and therefore the linear size, is not known), and that no connection has yet been found between the mas-scale core and the diffuse radio emission, in spite of our attempt to reveal it with e-MERLIN, which is sensitive to structures between $\sim$100\,mas and 2\arcsec. \citet{peter2014} gave a possible redshift interval for the source of $0.03\leq z\leq 0.45$. Using these values and assuming a standard flat $\Lambda$CDM cosmology ($H_0=67.3\,\rm{km\,s}^{-1}\,\rm{Mpc}^{-1}, \rm{\Omega_{m}=0.315}$, \citealt{planck2014}), the extended 1-arcmin structure seen in the archival VLA image can be translated into a linear size of 38\,kpc $-$ 358\,kpc. Additionally, we can set a lower size limit for the extended feature of 2\,arcsec from our L-band e-MERLIN observation. This translates into 1.3\,kpc $-$ 11.9\,kpc. Thus the extended structure of J1943+213 responsible for the $\sim$70\,mJy flux density missing from the EVN observation should be between 1.3\,kpc and 36\,kpc in linear size if the source is close, at a redshift of $z=0.03$, or between 11.9\,kpc and 349\,kpc in linear size if the source is more distant, at a redshift of $z=0.45$. However, since the source is at low Galactic latitude ($-1\fdg29$), a chance alignment along the line of sight between a Galactic non-thermal source (our ``large-scale'' structure) and the compact, presumably extragalactic source cannot be discarded; in that case HESS~J1943+213 would be a core-dominated BL Lac object without extended structure. \section{Conclusion} \label{sec:conclusion} In order to classify HESS J1943+213, we have presented imaging and time-domain observations. Our non-detection of a pulsar in the time-domain observations allows us to conclude with $\sim$70$\%$ probability that there is no pulsar in this source. Together with the two components with different power-law radio spectra that we obtain from the radio continuum flux densities of the source, which are unlikely to represent a PWN immersed in a SNR shell, and previous arguments against the PWN scenario, such as the overly soft X-ray spectrum, we conclude that the source is not a PWN. The HBL object classification is further strengthened by our new e-MERLIN imaging observations and the radio continuum spectrum we obtain from flux density measurements of surveys (e.g. VLITE and VLSSr) and pointed observations. From the continuum spectrum and our observations with different angular resolutions we find that the object is best described by two structures: an extended structure with a somewhat steep spectrum ($\alpha=-0.54\pm 0.04$) and a flat-spectrum core with $\alpha=-0.03\pm0.03$. Such a structure is common to BL Lac objects. Quite striking, however, is the large 70\% flux density fraction originating from the extended structure, unseen in typical HBL objects.
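As a parenthetical cross-check of the linear sizes quoted in Section~\ref{sec:discussion_blazar}: the angular-to-linear size conversion under the adopted cosmology can be reproduced in a few lines (a minimal sketch, our own illustration, assuming the \texttt{astropy} package):
\begin{verbatim}
# Cross-check (assuming astropy) of the angular-to-linear size
# conversion for the adopted flat LambdaCDM cosmology
# (H0 = 67.3 km/s/Mpc, Omega_m = 0.315; planck2014).
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=67.3, Om0=0.315)
for z in (0.03, 0.45):
    scale = cosmo.kpc_proper_per_arcmin(z)      # kpc per arcmin
    print(z, scale, 2 * scale.to(u.kpc / u.arcsec) * u.arcsec)
# 1 arcmin corresponds to ~38 kpc (z=0.03) and ~358 kpc (z=0.45);
# 2 arcsec to ~1.3 kpc and ~11.9 kpc, as quoted in the text.
\end{verbatim}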
We conclude that HESS J1943+213 is most likely a non-classical high-frequency peaked BL Lac object. Alternatively, since the source is at low Galactic latitude, we cannot rule out that the compact emission is of extragalactic origin while the extended emission comes from the Galaxy along the same line of sight and is physically unrelated to the compact component. \section*{Acknowledgements} We thank Daniele Michilli for providing us the sifting algorithms for the single-pulse search, Adam Deller for interesting discussions, and Pierre E. Belles from the e-MERLIN staff for extensive support and help in data reduction. e-MERLIN is a National Facility operated by the University of Manchester at Jodrell Bank Observatory on behalf of the UK Science and Technology Facilities Council (STFC). The Arecibo Observatory is operated by SRI International under a cooperative agreement with the National Science Foundation (AST-1100968), and in alliance with Ana G. M\'endez-Universidad Metropolitana, and the Universities Space Research Association. This work made use of data from the VLA Low-band Ionospheric and Transient Experiment (VLITE). Construction and installation of VLITE was supported by Naval Research Laboratory (NRL) Sustainment Restoration and Maintenance funding. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 617199 (ALERT) and n. 283393 (RadioNet3); from the Netherlands Research School for Astronomy (NOVA4-ARTS); and from the Hungarian Scientific Research Fund (OTKA K104539). Basic research in radio astronomy at the Naval Research Laboratory is funded by 6.1 Base funding. GD and EG are members of the CIC (CONICET, Argentina) and acknowledge support from ANPCyT and CONICET funding. Part of this work was carried out on the Dutch national e-infrastructure with the support of the SURF Cooperative. Computing time was provided by NWO Physical Sciences. \bibliographystyle{yahapj}
\section{Introduction} The notion of quantum entanglement plays a key role in the current study of quantum information and quantum computation theory. There are two main criteria to distinguish entanglement from separable states: The PPT criterion \cite{choi-ppt, peres} tells us that the partial transpose of a separable state is positive, that is, positive semi-definite. The converse is not true in general by a work of Woronowicz \cite{woronowicz}, who gave an example of a $2\otimes 4$ PPT entangled state. Such examples were also given in \cite{choi-ppt,stormer82} for the $3\otimes 3$ case, in the early eighties. Another complete criterion was given by the Horodeckis \cite{horo-1} using positive linear maps between matrix algebras, and this was formulated as the notion of entanglement witnesses \cite{terhal}. This is equivalent to the duality theory \cite{eom-kye} between positivity of linear maps and separability of block matrices, through the Jamio\l kowski-Choi isomorphism \cite{choi75-10,jami}. Through this isomorphism, an entanglement witness is just a positive linear map which is not completely positive. We refer to \cite{ssz,ZB} for systematic approaches to the duality using the JC isomorphism. For a linear map $\phi$ from the $C^*$-algebra $M_m$ of all $m\times m$ matrices into $M_n$, the Choi matrix $C_\phi$ of $\phi$ is given by $$ C_\phi=\sum_{i,j=1}^m e_{ij}\ot \phi(e_{ij})\in M_m\ot M_n, $$ where $e_{ij}=| i\ran\lan j| $ denotes the usual matrix units in $M_m$. The correspondence $\phi\mapsto C_\phi$ is called the JC isomorphism. It is known that $\phi$ is positive if and only if $C_\phi$ is block-positive, and $\phi$ is completely positive if and only if $C_\phi$ is positive. For a linear map $\phi:M_m\to M_n$ and a block matrix $A\in M_m\otimes M_n$, we define the bilinear pairing by $$ \lan A,\phi\ran=\tr(AC_\phi^\ttt). $$ It turns out that $A$ is separable if and only if $\lan A,\phi\ran \ge 0$ for every positive map $\phi$, and $A$ is of PPT if and only if $\lan A,\phi\ran \ge 0$ for every decomposable positive map $\phi$. Therefore, every entangled state $A$ is detected by a positive linear map $\phi$ in the sense that $\lan A,\phi\ran<0$, and every PPT entangled state is detected by an indecomposable positive linear map. A positive linear map is said to be an optimal entanglement witness if it detects a maximal set of entanglement, and an optimal PPTES witness if it detects a maximal set of PPT entanglement, as was introduced in \cite{lew00}. See also \cite{ha_kye_opt_ind} for the terminology. For a given entangled state, it is easy to find an entanglement witness detecting it, as was suggested in \cite{lew00}. However, it is not at all clear how to construct an optimal entanglement witness detecting a given entangled state. The primary purpose of this note is to construct optimal PPTES witnesses which detect the PPT entangled edge states constructed in \cite{kye_osaka}. These states are the first examples of two-qutrit PPT entangled edge states of type $(6,8)$, whose existence had been a long standing question \cite{sbl}. We also suggest a method to check the optimality of an entanglement witness without the spanning property.
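The definitions above are easy to realise numerically. The following is a minimal sketch (our own illustration, not part of the original constructions; it assumes the \texttt{numpy} package), using the transpose map on $M_2$, the standard example of a positive map which is not completely positive:
\begin{verbatim}
# Minimal numerical illustration of C_phi = sum_ij e_ij (x) phi(e_ij)
# and of the pairing <A, phi> = Tr(A C_phi^t), for the transpose map.
import numpy as np

def choi(phi, m):
    e = np.eye(m)
    blocks = [[phi(np.outer(e[i], e[j])) for j in range(m)]
              for i in range(m)]
    return np.block(blocks)

C = choi(lambda X: X.T, 2)           # Choi matrix of the transpose map
print(np.linalg.eigvalsh(C))         # [-1, 1, 1, 1]: C is not positive,
                                     # so the transpose map is not CP
pairing = lambda A, C: np.trace(A @ C.T)
z = np.kron([1., 2.], [3., -1.])     # a product vector x (x) y
print(pairing(np.outer(z, z), C))    # = 1 >= 0: C is block-positive,
                                     # reflecting positivity of the map
\end{verbatim}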
For nonnegative real numbers $a,b,c$ and $-\pi\le\theta\le\pi$, we consider the map $\Phi[a,b,c;\theta]$ between $M_3$ defined by $$ \Phi[a,b,c;\theta](X)= \begin{pmatrix} ax_{11}+bx_{22}+cx_{33} & -e^{i\theta}x_{12} & -e^{-i\theta}x_{13} \\ -e^{-i\theta}x_{21} & cx_{11}+ax_{22}+bx_{33} & -e^{i\theta}x_{23} \\ -e^{i\theta}x_{31} & -e^{-i\theta}x_{32} & bx_{11}+cx_{22}+ax_{33} \end{pmatrix}, $$ for $X\in M_3$. Note that the Choi matrix $C_\Phi$ of the map $\Phi[a,b,c;\theta]$ is given by $$ W[a,b,c;\theta]=\left( \begin{array}{ccccccccccc} a &\cdot &\cdot &\cdot &-e^{i\theta} &\cdot &\cdot &\cdot &-e^{-i\theta} \\ \cdot &c &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\ \cdot &\cdot &b &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\ \cdot &\cdot &\cdot &b &\cdot &\cdot &\cdot &\cdot &\cdot \\ -e^{-i\theta} &\cdot &\cdot &\cdot &a &\cdot &\cdot &\cdot &-e^{i\theta} \\ \cdot &\cdot &\cdot &\cdot &\cdot &c &\cdot &\cdot &\cdot \\ \cdot &\cdot &\cdot &\cdot &\cdot &\cdot &c &\cdot &\cdot \\ \cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &b &\cdot \\ -e^{i\theta} &\cdot &\cdot &\cdot &-e^{-i\theta} &\cdot &\cdot &\cdot &a \end{array} \right). $$ Maps of the form $\Phi[a,b,c;0]$ and their variants have been investigated by many authors in various contexts, as summarized in \cite{ha_kye_opt_ind}. We just note here that $W[a,b,c;0]$ is separable if and only if it is of PPT \cite{kye_osaka}. On the other hand, many interesting examples of PPT states are of the form $W[a,b,c;\theta]$. For example, the PPT entangled states \cite{stormer82} considered in the early eighties are just $W[1,b,\frac 1b;\pi]$, which turn out to be PPT entangled edge states of type $(6,7)$. These states were reconstructed systematically from indecomposable positive linear maps, together with other types of PPT entangled edge states \cite{ha-kye-2}. The PPT entangled edge states of type $(6,8)$ constructed in \cite{kye_osaka} are given by $W[e^{i\theta}+e^{-i\theta},b,\frac 1b;\theta]$ for $-\frac\pi 3<\theta<\frac\pi 3$ and $\theta\neq 0$. Recently, the authors \cite{ha_kye_geom} analyzed $W[a,b,c;\pi]$ to understand the boundary structures between separability and inseparability among PPT states. For indecomposable positive linear maps, we have to be very careful in using the term \lq optimal\rq\ as was noticed in \cite{ha_kye_opt_ind}. We denote by $\mathbb P_1$ the convex cone of all positive linear maps. Recall that a positive map $\phi$ is an optimal (respectively co-optimal) entanglement witness if and only if the smallest face of $\mathbb P_1$ determined by $\phi$ has no completely positive (respectively completely copositive) map. We also note that $\phi$ has the spanning property (respectively co-spanning property) if and only if the smallest exposed face of $\mathbb P_1$ determined by $\phi$ has no completely positive (respectively completely copositive) map. We say that $\phi$ is bi-optimal if it is both optimal and co-optimal. The term bi-spanning is defined similarly. After we give conditions on the parameters $a,b,c$ and $\theta$ under which the map $\Phi[a,b,c;\theta]$ is a positive linear map in the next section, we characterize for each fixed $\theta$ the facial structure of the $3$-dimensional convex body representing the positivity in Section 3. From these facial structures, it is clear that some of the positive maps are not optimal and/or not co-optimal.
We examine the spanning and co-spanning properties of the map $\Phi[a,b,c;\theta]$ in Section 4, and check the various notions of optimality for all cases in Section 5. To do this, we suggest a more efficient method to check the optimality of an entanglement witness when it does not have the spanning property. In Section 6, we find optimal entanglement witnesses which detect the PPT entangled edge states \cite{kye_osaka} of type $(6,8)$. We conclude this note by reporting that our constructions give counter-examples to the SPA conjecture \cite{korbicz}. \section{Positivity} To begin with, we first find the conditions for complete positivity and complete copositivity. We note that the map $\Phi[a,b,c;\theta]$ is completely positive if and only if $W[a,b,c;\theta]$ is positive if and only if the following $3\times 3$ matrix $$ P[a,\theta]= \left( \begin{matrix} a & -e^{i\theta} & -e^{-i\theta}\\ -e^{-i\theta} & a & -e^{i\theta}\\ -e^{i\theta} & -e^{-i\theta} & a \end{matrix} \right) $$ is positive. We mention again that \lq positivity\rq\ of matrices means positive semi-definiteness throughout this note. We see that the polynomial $$ \begin{aligned} \det P[a,\theta] &=a^3-3a-(e^{3i\theta}+e^{-3i\theta})\\ &=[a-(e^{i\theta}+e^{-i\theta})]\,[a^2+a(e^{i\theta}+e^{-i\theta})+(e^{2i\theta}+e^{-2i\theta}-1)] \end{aligned} $$ has the following three real zeroes: $$ q_{(\theta-\frac 23 \pi)}=e^{i(\theta-\frac 23 \pi)}+e^{-i(\theta-\frac23 \pi)},\quad q_\theta=e^{i\theta}+e^{-i\theta},\quad q_{(\theta+\frac 23 \pi)}=e^{i(\theta+\frac 23 \pi)}+e^{-i(\theta+\frac 23 \pi)}. $$ We put $$ p_\theta=\max\{ q_{(\theta-\frac 23 \pi)}, q_\theta, q_{(\theta+\frac 23 \pi)}\}. $$ \begin{figure}[h!] \begin{center} \includegraphics[scale=0.7]{figure_1.eps} \end{center} \caption{The graph of $y=p_{\theta}$. } \end{figure} Then we see that $1\le p_\theta\le 2$ for each $\theta$, and that $P[a,\theta]$ is positive if and only if \begin{equation}\label{cp} a\ge p_\theta, \end{equation} if and only if $\Phi[a,b,c;\theta]$ is completely positive. We note that $p_\theta=2$ if and only if $\theta=0,\pm\frac {2}3\pi$, and $p_\theta=1$ if and only if $\theta=\pm \frac \pi 3,\pm\pi$. It is easy to see that the map $\Phi[a,b,c;\theta]$ is completely copositive if and only if \begin{equation}\label{ccp} bc\ge 1. \end{equation} \begin{theorem} The map $\Phi[a,b,c;\theta]$ is completely positive if and only if the condition {\rm (\ref{cp})} holds, and completely copositive if and only if the condition {\rm (\ref{ccp})} holds. \end{theorem} In order to get a necessary condition for the positivity of $\Phi[a,b,c;\theta]$, we note that a linear map $\phi$ is positive if and only if $\lan zz^*,\phi\ran\ge 0$ for every product vector $z=x\otimes y\in\mathbb C^m\ot\mathbb C^n$. Here, $z$ is considered as a column vector, and so $zz^*$ belongs to $M_m\otimes M_n$. We write $$ z=x\otimes y=( x_1y_1, x_1y_2, x_1y_3\,;\, x_2y_1, x_2y_2, x_2y_3\,;\, x_3y_1, x_3y_2, x_3y_3)^{\rm t}.
$$ By a direct calculation, we see that the pairing $\lan zz^*, \Phi[a,b,c;\theta]\ran$ is equal to $$ \begin{aligned} &a(|x_1y_1|^2+|x_2y_2|^2+|x_3y_3|^2) + b(|x_1y_3|^2+|x_2y_1|^2+|x_3y_2|^2) + c(|x_1y_2|^2+|x_2y_3|^2+|x_3y_1|^2)\\ &-e^{i\theta} x_1 y_1 \bar x_2 \bar y_2 -e^{-i\theta} \bar x_1 \bar y_1 x_2 y_2 -e^{i\theta} x_2 y_2 \bar x_3 \bar y_3 -e^{-i\theta} \bar x_2 \bar y_2 x_3 y_3 -e^{i\theta} x_3 y_3 \bar x_1 \bar y_1 -e^{-i\theta} \bar x_3 \bar y_3 x_1 y_1.\\ \end{aligned} $$ From now on, we suppose that $\Phi[a,b,c;\theta]$ is positive, and insert the product vectors $$ (1,1,1)^{\rm t}\ot(1,1,1)^{\rm t},\quad (e^{\frac 23\pi i},1,1)^{\rm t}\ot(1,1,1)^{\rm t},\quad (e^{-\frac 23\pi i},1,1)^{\rm t}\ot(1,1,1)^{\rm t} $$ into the above quantity, to get the following necessary condition \begin{equation}\label{p1} a+b+c\ge p_\theta. \end{equation} We also take the product vectors \[ (\sqrt te^{-i\theta},t,0)^{\rm t}\ot (\sqrt t,1,0)^{\rm t},\quad (\sqrt te^{-i\theta},t,0)^{\rm t}\ot (\sqrt t,e^{\frac 23\pi i},0)^{\rm t},\quad (\sqrt te^{-i\theta},t,0)^{\rm t}\ot (\sqrt t,e^{-\frac 23\pi i},0)^{\rm t} \] for $t\ge 0$, to get the condition $2at^2+ct+bt^3\ge 2t^2$ for each $t\ge 0$, which holds if and only if \begin{equation}\label{p2} a\le 1 \Longrightarrow\ bc\ge (1-a)^2. \end{equation} Therefore, we get the necessary conditions (\ref{p1}) and (\ref{p2}) for the positivity of $\Phi[a,b,c;\theta]$. Now, we proceed to show that the two conditions (\ref{p1}) and (\ref{p2}) are sufficient for positivity. Note that $\Phi[a,b,c;\theta]$ is positive if and only if the matrix \begin{equation}\label{mat} \begin{pmatrix} a|x|^2+b|y|^2+c|z|^2 & -e^{i\theta}x\bar y & -e^{-i\theta}x\bar z \\ -e^{-i\theta}y\bar x & c|x|^2+a|y|^2+b|z|^2 & -e^{i\theta}y\bar z \\ -e^{i\theta}z\bar x & -e^{-i\theta}z\bar y & b|x|^2+c|y|^2+a|z|^2 \end{pmatrix} \end{equation} is positive semi-definite for every $(x,y,z)\in\mathbb C^3$. We first consider the determinant $$ \begin{aligned} (a|x|^2+b|y|^2+c|z|^2)(c|x|^2+a|y|^2+b|z|^2)(b|x|^2+c|y|^2+a|z|^2) -(e^{3i\theta}+e^{-3i\theta})|xyz|^2\\ -(a|x|^2+b|y|^2+c|z|^2)|yz|^2-(c|x|^2+a|y|^2+b|z|^2)|zx|^2 -(b|x|^2+c|y|^2+a|z|^2)|xy|^2. \end{aligned} $$ We may replace $|x|^2, |y|^2$ and $|z|^2$ by nonnegative $x,y$ and $z$ to get $$ \begin{aligned} F(x,y,z):=(ax+by+cz)(cx+ay+bz)(bx+cy+az) -(e^{3i\theta}+e^{-3i\theta})xyz\\ -(ax+by+cz)yz-(cx+ay+bz)zx -(bx+cy+az)xy. \end{aligned} $$ First of all, we check that all the $2\times 2$ principal minors are nonnegative. For example, the principal $2\times2$ minor obtained by deleting the first row and column is $$ M_1:=(cx+ay+bz)(bx+cy+az)-yz, $$ and we have $$ M_1\ge \frac1{by+cz}F(0,y,z) = (ay+bz)(cy+az)-yz =ac y^2 +(a^2+bc-1)yz+abz^2. $$ This is nonnegative when $a\ge 1$. If $0\le a\le 1$, then we use the condition (\ref{p2}) to see easily that this quadratic form is nonnegative for all $y,z\ge 0$. In the same way, we see that all the $2\times2$ principal minors are nonnegative, and that $F(x,y,z)\ge 0$ whenever one of $x,y$ or $z$ is zero. Now, we show that $F(x,y,z)\ge 0$ on the region $\{(x,y,z): x,y,z>0\}$.
First, we note that all of $\dfrac{\partial F}{\partial x}$, $\dfrac{\partial F}{\partial y}$ and $\dfrac{\partial F}{\partial z}$ are quadratic forms associated with the following symmetric matrices: $$ \left(\begin{matrix}P&R&Q\\R&Q&S\\Q&S&R\end{matrix}\right),\qquad \left(\begin{matrix}R&Q&S\\Q&P&R\\S&R&Q\end{matrix}\right),\qquad \left(\begin{matrix}Q&S&R\\S&R&Q\\R&Q&P\end{matrix}\right), $$ where $$ \begin{aligned} P&=3abc,\\ Q&=a^2c+b^2a+c^2b-c,\\ R&=a^2b+b^2c+c^2a-b,\\ 2S&=a^3+b^3+c^3+3abc-3a-(e^{3i\theta}+e^{-3i\theta}). \end{aligned} $$ That is, $\dfrac{\partial F}{\partial x}(x,y,z)$ is expressed by \[ \dfrac{\partial F}{\partial x}(x,y,z)= \begin{pmatrix} x & y & z \end{pmatrix} \begin{pmatrix} P&R&Q\\R&Q&S\\Q&S&R \end{pmatrix} \begin{pmatrix} x\\y\\z\end{pmatrix}, \] for example. In the case of $a\ge 1$, we have $$ P+Q+R\ge 3bc+(b^2+c^2)+bc(b+c)\ge 0, $$ and the equality holds if and only if $b=c=0$. In the case of $0\le a <1$, we have $bc\ge (1-a)^2$ by (\ref{p2}). Hence, it follows that $$ \begin{aligned} P+Q+R &=abc+(b+c)(a^2+a(b+c)+bc-1)\\ &\ge abc+(b+c)(a^2+2a(1-a)+(1-a)^2-1)=abc, \end{aligned} $$ and the equality holds if and only if $a=0,\,bc=1$. Consequently, we have $P+Q+R\ge 0$ in all cases. First, we consider the case of $P+Q+R=0$, from which we have the following two cases: \begin{itemize} \item $a=0,\, bc=1$, \item $a\ge 1,\, b=c=0$. \end{itemize} In the first case, we already know that the map is completely copositive. In the second case, we have $a\ge p_{\theta}$ by (\ref{p1}), and so the map is completely positive. Therefore, if $P+Q+R=0$ then we see that $\Phi[a,b,c;\theta]$ is positive. Now, we assume that $P+Q+R>0$. In this case, we have $abc\neq 0$ except for the following two cases: \begin{itemize} \item $ab\neq 0$ and $c=0$, \item $ac\neq 0$ and $b=0$. \end{itemize} (The remaining degenerate case $a=0$ with $bc>1$ is already settled, since the condition (\ref{p2}) then gives a completely copositive map, as above.) For the case of $ab\neq 0$ and $c=0$, we have $a\ge 1$ from the condition \eqref{p2}, and so \begin{equation*} \begin{aligned} F(x,y,z) =&b(a^2-1)(x^2 y+y^2 z+z^2 x)+ab^2(x^2 z+y^2 x+z^2 y)\\ &\phantom{ZZZZZZZZZZZZZ}+(a^3+b^3-3a-(e^{3i\theta}+e^{-3i\theta}))xyz\\ \ge &(3b(a^2-1)+3ab^2+a^3+b^3-3a-(e^{3i\theta}+e^{-3i\theta}))xyz\\ =&((a+b)^3-3(a+b)-(e^{3i\theta}+e^{-3i\theta}))xyz, \end{aligned} \end{equation*} which is nonnegative by the condition \eqref{p1}. Similarly, one can show that $F(x,y,z)$ is nonnegative for the case of $ac\neq 0$ and $b=0$. Now, we consider the case of $abc\neq 0$. In this case, the coefficients of $x^3,y^3$ and $z^3$ in the polynomial $F$ are positive, and so there exists a sufficiently large cube $K=\{(x,y,z): 0\le x,y,z\le M\}$ such that $F(x,y,z)\ge 0$ outside of $K$. Furthermore, we already know that $F(x,y,z)\ge 0$ if $xyz=0$. Therefore, it suffices to show that the local minima of $F$ in the region $\{(x,y,z): x,y,z>0\}$ are nonnegative. If $(x,y,z)$ is a nontrivial common zero of $\dfrac{\partial F}{\partial x}$, $\dfrac{\partial F}{\partial y}$ and $\dfrac{\partial F}{\partial z}$, then it is also a nontrivial solution of the homogeneous quadratic equation whose coefficient matrix is given by \begin{equation}\label{quadratic_form} \left(\begin{matrix}P+Q+R&Q+R+S&Q+R+S\\Q+R+S&P+Q+R&Q+R+S\\Q+R+S&Q+R+S&P+Q+R\end{matrix}\right).
\end{equation} This means that a nontrivial common solution $(x,y,z)$ of $\dfrac{\partial F}{\partial x}$, $\dfrac{\partial F}{\partial y}$ and $\dfrac{\partial F}{\partial z}$ satisfies \begin{equation}\label{expansion} \begin{aligned} &\begin{pmatrix} x & y & z\end{pmatrix} \begin{pmatrix}P+Q+R&Q+R+S&Q+R+S\\Q+R+S&P+Q+R&Q+R+S\\Q+R+S&Q+R+S&P+Q+R\end{pmatrix} \begin{pmatrix} x\\y\\z \end{pmatrix}\\ =& (P+Q+R)(x^2+y^2+z^2)+2(Q+R+S)(xy+yz+zx)=0. \end{aligned} \end{equation} If $P=S$, then the common solutions lie on the plane $x+y+z=0$, and so there is no nonzero common solution in the region $\{(x,y,z):x,y,z>0\}$. Therefore, we may assume $P\neq S$. In this case, a $2\times 2$ minor is nonzero, and so the rank of the matrix~\eqref{quadratic_form} is at least $2$. We also note that the determinant of the matrix~\eqref{quadratic_form} is $$ (P-S)^2(3Q+3R+P+2S)=(P-S)^2[(a+b+c)^3-3(a+b+c)-(e^{3i\theta}+e^{-3i\theta})]\ge 0 $$ by the condition (\ref{p1}). If $P> S$, then we see that the above matrix is positive semi-definite, and so it must be singular with rank two. Therefore, a common solution must belong to the $1$-dimensional kernel space of the matrix~\eqref{quadratic_form}. Consequently, all common solutions are of the form $(x,x,x)$. We now consider the case $P<S$. In this case, we see that a common solution satisfies $$ \begin{aligned} & (P+Q+R)(x^2+y^2+z^2)+2(Q+R+S)(xy+yz+zx)\\ =&(P+Q+R)(x+y+z)^2+2(S-P)(xy+yz+zx)=0, \end{aligned} $$ which is impossible in the region $\{(x,y,z):x,y,z>0\}$. Summing up, we see that if $F$ takes a local minimum at $(x,y,z)$ with $x,y,z>0$, then $x=y=z$. We note that $$ \begin{aligned} \frac 1{x^3}F(x,x,x) &= a^3 + b^3 + c^3 + 3(a^2b+a^2c+b^2c+b^2a+c^2a+c^2b+2abc)\\ &\phantom{ZZZZZZZZ}-(e^{3i\theta}+e^{-3i\theta})-3(a+b+c)\\ &=(a+b+c)^3-3(a+b+c)-(e^{3i\theta}+e^{-3i\theta}), \end{aligned} $$ which is nonnegative by (\ref{p1}) when $x\neq 0$. This completes the proof of the following: \begin{theorem} The map $\Phi[a,b,c;\theta]$ is positive if and only if both conditions {\rm (\ref{p1})} and {\rm (\ref{p2})} hold. \end{theorem} \section{Facial structures} For each $\theta$, we denote by $\Gamma^\theta$ the $3$-dimensional convex body determined by (\ref{p1}) and (\ref{p2}). The facial structures of the convex body $\Gamma^0$ have been analyzed in \cite{ha_kye_opt_ind} for the case of $\theta=0$. The facial structure of $\Gamma^\theta$ is similar to that of $\Gamma^0$, up to several differences. We first consider the case $1<p_\theta\le 2$. In this case, the convex body $\Gamma^\theta$ has the following four $2$-dimensional faces: \begin{itemize} \item $f^\theta_{\rm ab}=\{(a,b,c): c=0,\ a+b\ge p_\theta,\ a\ge 1\}$, \item $f^\theta_{\rm ac}=\{(a,b,c): b=0,\ a+c\ge p_\theta,\ a\ge 1\}$, \item $f^\theta_{\rm bc}=\{(a,b,c): a=0,\ bc\ge 1\}$, \item $f^\theta_{\rm abc}=\{(a,b,c): a+b+c=p_\theta,\ 0\le a\le 1\Longrightarrow bc\ge (1-a)^2\}$. \end{itemize} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.8]{figure_2.eps} \end{center} \caption{Picture of the face $f^\theta_{\rm abc}$ in the plane $a+b+c=p_{\theta}$.} \end{figure} In the case of $p_\theta=1$, the face $f^\theta_{\rm abc}$ shrinks to a single point $(1,0,0)$.
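As an aside, the positivity criterion of the theorem above can be probed numerically. The following is a minimal sketch (our own illustration with ad hoc parameter values, assuming the \texttt{numpy} package); it evaluates the pairing against the product vector used in the proof of the necessity of (\ref{p2}):
\begin{verbatim}
# Probe of the criterion: Phi[a,b,c;theta] is positive iff (p1) and (p2).
# With the pairing <zz*, Phi> = Tr(zz* C^t), i.e. z^T W z-bar in
# components, the product vector
#   z_t = (sqrt(t) e^{-i th}, t, 0) (x) (sqrt(t), 1, 0)
# pairs with W[a,b,c;theta] to 2at^2 + bt^3 + ct - 2t^2 (cf. Section 2).
import numpy as np

def W(a, b, c, th):
    M = np.diag([a, c, b, b, a, c, c, b, a]).astype(complex)
    for i, j in [(0, 4), (4, 8), (8, 0)]:
        M[i, j], M[j, i] = -np.exp(1j * th), -np.exp(-1j * th)
    return M

th, a, b, t = 0.5, 0.5, 1.2, 0.4
z = np.kron([np.sqrt(t) * np.exp(-1j * th), t, 0], [np.sqrt(t), 1, 0])
for c in ((1 - a)**2 / b, 0.2 * (1 - a)**2 / b):   # (p2) tight / violated
    val = (z @ W(a, b, c, th) @ z.conj()).real     # = Tr(zz* W^t)
    print(val, np.isclose(val, 2*a*t**2 + b*t**3 + c*t - 2*t**2))
# First c: val >= 0 (here a+b+c > p_theta, so the map is positive);
# second c: val < 0, exhibiting non-positivity when (p2) fails.
\end{verbatim}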
In order to figure out the shape of the face $f^\theta_{\rm abc}$, we modify the parametrization in \cite{ha+kye_indec-witness} and put \begin{equation}\label{para} \begin{aligned} a_\theta(t)&=(p_\theta-1)\cdot\dfrac{(1-t)^2}{1-t+t^2}+(2-p_\theta)=1-\dfrac {(p_\theta-1)t}{1-t+t^2},\\ b_\theta(t)&=(p_\theta-1)\cdot\dfrac{t^2}{1-t+t^2},\\ c_\theta(t)&=(p_\theta-1)\cdot\dfrac{1}{1-t+t^2}, \end{aligned} \end{equation} for $0< t<\infty$. Then we have $$ a_\theta(t)+b_\theta(t)+c_\theta(t)=p_\theta,\qquad 0\le a_\theta(t)\le 1,\qquad b_\theta(t)c_\theta(t)=(1-a_\theta(t))^2. $$ If $p_\theta=2$, then this face touches the face $f^\theta_{\rm bc}$ at the point $(0,1,1)$, which gives rise to a completely copositive map. On the other hand, if $1<p_\theta<2$, then this face does not touch the face $f^\theta_{\rm bc}$. The convex body $\Gamma^\theta$ also has the following $1$-dimensional faces: \begin{itemize} \item $e^\theta_{\rm a}=\{(a,0,0): a\ge p_\theta\}$, \item $e^\theta_{\rm b}=\{(1,b,0): b\ge p_{\theta}-1\}$, \item $e^\theta_{\rm c}=\{(1,0,c): c\ge p_{\theta}-1\}$, \item $e^\theta_{\rm ab}=\{(a,b,0): a+b=p_\theta,\, 1\le a \le p_\theta\}$, \item $e^\theta_{\rm ac}=\{(a,0,c): a+c=p_\theta,\, 1\le a \le p_\theta\}$, \item $\displaystyle{ e^\theta_t=\left\{\left(1-s,st,\frac st\right): \dfrac{(p_\theta-1)t}{1-t+t^2}\le s\le 1\right\} }$ for $t>0$. \end{itemize} We note that $e^\theta_t$ is the line segment from the point $(a_\theta(t),b_\theta(t),c_\theta(t))$ to the point $(0,t,\frac 1t)$, and lies on the surface $bc= (1-a)^2$ for $0\le a < 1$. We also note that $e^\theta_1$ shrinks to a single point $(0,1,1)$ if $p_\theta=2$. It remains to list the $0$-dimensional faces as follows: \begin{itemize} \item $v_{(p_\theta,0,0)}$, \item $v_{(1,0,p_\theta-1)},\ v_{(1,p_\theta-1,0)}$, \item $v_{(a_\theta(t),b_\theta(t),c_\theta(t))}$ for $t>0$, \item $v_{(0,t,1/t)}$ for $t>0$. \end{itemize} If $p_\theta=1$, then all of the following faces $$ f^\theta_{\rm abc},\quad e^\theta_{\rm ab},\quad e^\theta_{\rm ac}, \quad v_{(p_\theta,0,0)},\quad v_{(1,0,p_\theta-1)},\quad v_{(1,p_\theta-1,0)},\quad v_{(a_\theta(t),b_\theta(t),c_\theta(t))} $$ shrink to the single point $(1,0,0)$. Furthermore, the face $e^\theta_t$ connects the two points $(1,0,0)$ and $(0,t,\frac 1t)$, which represent completely positive and completely copositive maps, respectively. Therefore, every positive linear map $\Phi[a,b,c;\theta]$ is decomposable in this case. Since we are interested in indecomposable cases, we assume $p_\theta>1$ throughout this note. By exactly the same argument as in \cite{ha_kye_opt_ind}, we have the following: \begin{itemize} \item Interior points of $f^\theta_{\rm ab}$, $f^\theta_{\rm ac}$, $f^\theta_{\rm bc}$, $e^\theta_{\rm a}$, $e^\theta_{\rm b}$ and $e^\theta_{\rm c}$ are neither optimal nor co-optimal. \item Interior points of $f^\theta_{\rm abc}$, $e^\theta_{\rm ab}$, $e^\theta_{\rm ac}$, $v_{(p_\theta,0,0)}$ are not optimal. \item Interior points of $e^\theta_t$ and $v_{(0,t,1/t)}$ are not co-optimal. \item $v_{(1,0,p_\theta-1)}$ and $v_{(1,p_\theta-1,0)}$ do not have the spanning property. \end{itemize} We recall that if two positive maps $\phi_1$ and $\phi_2$ determine the same smallest face containing them, then they are interior points of this common face, and share the above properties related to optimality. \begin{figure}[h!]
\includegraphics[scale=0.43]{conv_1.eps}\, \includegraphics[scale=0.43]{conv_p.eps}\, \includegraphics[scale=0.43]{conv_2.eps}\, \caption{Figures of the convex bodies for $p_{\theta}=1$, $1<p_{\theta}<2$ and $p_{\theta}=2$.} \end{figure} \section{Spanning Properties} In this section, we determine which positive linear maps have the spanning property and/or the co-spanning property. We remind the reader that we are assuming $p_\theta>1$. We also assume that $p_\theta<2$, since the case of $\theta=0$ was already considered in \cite{ha_kye_opt_ind}. We note that the spanning property (respectively, the co-spanning property) implies optimality (respectively, co-optimality). By the discussion of the previous section, it remains to consider the following cases: \begin{enumerate} \item[(i)] $0< a\le 1,\ bc=(1-a)^2,\ a+b+c> p_\theta$, \item[(ii)] $2-p_\theta\le a\le 1,\ bc=(1-a)^2,\ a+b+c= p_\theta$, \item[(iii)] $a=0, \ bc= 1$, \item[(iv)] $1<a\le p_\theta,\ a+b+c= p_\theta$. \end{enumerate} We recall \cite{kye_ritsu} that $\phi\in\mathbb P_1$ has the spanning property if and only if the set $$ P[\phi]:=\{z=\xi\ot\eta:\lan zz^*,\phi\ran=0\} $$ spans the whole space $\mathbb C^m\ot\mathbb C^n$, and that $$ \lan zz^*,\phi\ran =\lan \xi\xi^*\ot \eta\eta^*,\phi\ran =\tr(\phi(\xi\xi^*)\bar \eta\bar \eta^*) =(\phi(\xi\xi^*)\bar \eta|\bar \eta). $$ Therefore, we see that $\xi\ot\eta\in P[\phi]$ if and only if $\phi(\xi\xi^*)\bar\eta=0$. In order to determine the set $P[\Phi[a,b,c;\theta]]$, we first find the vectors $(x,y,z)\in\mathbb C^3$ for which the matrix (\ref{mat}) is singular. In other words, we look for $(x,y,z)$ for which $F(x,y,z)=0$. The only possibility of $F(x,y,z)=0$ with nonzero $x,y,z$ is $F(x,x,x)=0$, and this happens only when the equality holds in (\ref{p1}). Now, we consider the case (i). In this case, we see that $F(x,y,z)=0$ holds only if $xyz=0$. We first consider the case $z=0$, for which we have $$ F(x,y,0) =(acx^2+(a^2+bc-1)xy+aby^2)(bx+cy) =a(\sqrt cx-\sqrt by)^2(bx+cy). $$ Therefore, the matrix (\ref{mat}) is singular with $z=0$ if and only if, up to a positive scalar multiple, $(x,y,0)=(b^{1/4}\alpha,c^{1/4}\beta,0)$ with complex numbers $\alpha,\beta$ of modulus one. In this case, (\ref{mat}) is given by $$ \left(\begin{matrix} \sqrt b &-e^{i\theta}\alpha\bar\beta\sqrt{1-a} &0\\ -e^{-i\theta}\bar\alpha\beta\sqrt{1-a}&\sqrt c&0\\ 0&0&b\sqrt b+c\sqrt c \end{matrix}\right), $$ and the kernel is spanned by $(\alpha e^{i\theta}\sqrt{1-a},\beta\sqrt b,0)$. In the same way, we see that $z$ belongs to $P[\Phi[a,b,c;\theta]]$ if and only if $z$ is one of the following: \begin{equation}\label{prod} \begin{aligned} z_1[\alpha,\beta]=&(b^{1/4}\alpha,c^{1/4}\beta,0)^{\rm t}\otimes (\bar{\alpha} e^{-i\theta}\sqrt{1-a},\bar{\beta}\sqrt b,0)^{\rm t},\\ z_2[\alpha,\beta]=&(c^{1/4}\beta,0,b^{1/4}\alpha)^{\rm t}\otimes (\bar{\beta}\sqrt b,0,\bar{\alpha} e^{-i\theta}\sqrt{1-a})^{\rm t},\\ z_3[\alpha,\beta]=&(0,b^{1/4} \alpha,c^{1/4}\beta)^{\rm t}\otimes (0,\bar{\alpha} e^{-i\theta}\sqrt{1-a},\bar{\beta}\sqrt b)^{\rm t}, \end{aligned} \end{equation} with complex numbers $\alpha,\,\beta$ of modulus one. It is clear that these vectors do not span the whole space if $a=1$, which implies $bc=0$ in this case. We consider the case $0<a<1$. We take $\beta_1=1,\, \beta_2=-1$ and $\beta_3=i$, and consider the $9\times 9$ matrix $M$ whose columns are the nine vectors $z_k[1,\beta_\ell]$ for $k,\,\ell=1,2,3$.
Then the determinant of $M$ is given by $$ |\det M|=|64b^{\frac 92}c^{\frac 94}e^{-3i \theta} i (1+e^{-3i\theta})|, $$ which is nonzero, since $\theta\neq\pm\frac\pi 3,\pm\pi$, and since $a<1$ implies that $bc\neq 0$. Therefore, we conclude that $\Phi[a,b,c;\theta]$ has the spanning property if and only if $a<1$ in the case (i). It is clear from the facial structures in the previous section that it does not have the co-spanning property. Now, we consider the case (ii). First of all, we note that the product vectors in (\ref{prod}) already belong to $P[\Phi[a,b,c;\theta]]$, and so we see that $\Phi[a,b,c;\theta]$ has the spanning property if $a<1$. We will see in the next section that $\Phi[1,0,p_\theta-1;\theta]$ and $\Phi[1,p_\theta-1,0;\theta]$ are optimal, but do not have the spanning property. We look for other product vectors in $P[\Phi[a,b,c;\theta]]$ to determine whether the maps have the co-spanning property. If $x=(x_1,x_2,x_3)^{\rm t}$ with $|x_1|=|x_2|=|x_3|$, then $\Phi[a,b,c;\theta](xx^*)$ is given by \[ \begin{pmatrix} |x_1|^2 p_{\theta} & -e^{i\theta} x_1\bar{x}_2 & -e^{-i\theta}x_1\bar{x}_3\\ -e^{-i\theta}x_2\bar{x}_1 & |x_1|^2 p_{\theta} & -e^{i\theta} x_2\bar{x}_3\\ -e^{i\theta}x_3\bar{x}_1 & -e^{-i\theta}x_3 \bar{x}_2& |x_1|^2 p_{\theta} \end{pmatrix}, \] for which \begin{itemize} \item $(x_1,x_2e^{i\frac 23\pi },x_3e^{-i\frac 23\pi })^{\rm t}$ is a kernel vector if $-\pi \le \theta \le -\frac \pi 3$. \item $(x_1,x_2,x_3)^{\rm t}$ is a kernel vector if $-\frac \pi 3\le\theta\le\frac \pi 3$. \item $(x_1,x_2e^{-i\frac 23\pi },x_3e^{i\frac 23\pi })^{\rm t}$ is a kernel vector if $\frac \pi 3\le\theta\le\pi$. \end{itemize} Therefore, we see that $z\in P[\Phi[a,b,c;\theta]]$ if and only if $z$ is either one of the vectors in \eqref{prod} or of one of the following forms: \begin{equation}\label{prod2} \begin{aligned} w[\alpha,\beta,\gamma]=& (\alpha,\beta,\gamma)^{\rm t}\otimes (\bar{\alpha} ,\bar{\beta}e^{-i \frac{2}3\pi},\bar{\gamma}e^{i \frac{2}3\pi})^{\rm t} \quad \text{ if }{\textstyle -\pi < \theta < -\frac \pi 3}, \\ w[\alpha,\beta,\gamma]=& (\alpha,\beta,\gamma)^{\rm t}\otimes (\bar{\alpha},\bar{\beta},\bar{\gamma})^{\rm t} \quad \text{ if }{\textstyle -\frac \pi 3<\theta<\frac \pi 3},\\ w[\alpha,\beta,\gamma]=& (\alpha,\beta,\gamma)^{\rm t}\otimes (\bar{\alpha} ,\bar{\beta}e^{i \frac{2}3\pi},\bar{\gamma}e^{-i \frac{2}3\pi})^{\rm t} \quad \text{ if }{\textstyle \frac \pi 3<\theta<\pi}, \\ \end{aligned} \end{equation} with $|\alpha|=|\beta|=|\gamma|$. Now, we take the following product vectors in $P[\Phi[a,b,c;\theta]]$: \begin{equation}\label{cond2_prod1} z_1[1,1],\,z_1[1,-1],\,z_2[1,1],\,z_2[1,-1],\,z_3[1,1],\,z_3[1,-1],\,w[1,1,1],\,w[1,-1,1],\,w[1,i,-i]. \end{equation} We consider the $9\times 9$ matrix $M$ whose columns are the partial conjugates of the above nine vectors; then the determinant is given as follows: \begin{equation*} |\det M|=\begin{cases} 16\sqrt{2}b^{\frac 94}c^{\frac 34}|(\sqrt{b}-\sqrt{c}e^{i(\theta+\frac 23 \pi)})^3(1+e^{3i\theta})| & \text{ if\ \ } {\textstyle -\pi < \theta < -\frac \pi 3},\\ 16\sqrt{2}b^{\frac 94}c^{\frac 34}|(\sqrt{b}-\sqrt{c}e^{i\theta})^3(1+e^{3i\theta})| & \text{ if \ \ } {\textstyle -\frac \pi 3<\theta<\frac \pi 3},\\ 16\sqrt{2}b^{\frac 94}c^{\frac 34}|(\sqrt{b}-\sqrt{c}e^{i(\theta-\frac 23 \pi)})^3(1+e^{3i\theta})| & \text{ if \ \ } {\textstyle \frac \pi 3<\theta<\pi}.\\ \end{cases} \end{equation*} We note that $\det M=0$ implies either $b=c$ and $\theta=0,\pm\frac 23\pi$, or $\theta=\pm\frac\pi3,\pm\pi$; neither is possible by the assumption $1<p_{\theta}<2$.
Therefore, the partial conjugates of the product vectors in \eqref{cond2_prod1} span the whole space $\mathbb C^3\otimes \mathbb C^3$, and $W[a,b,c;\theta]$ has the co-spanning property in the case (ii). Now, we consider the case (iii). In this case, we note that $\Phi[a,b,c;\theta]$ is completely copositive, and so it never satisfies the co-spanning property. Note that the matrix \eqref{mat} is given by \[ \begin{pmatrix} b|y|^2 & -e^{i\theta} x \bar y & 0\\ -e^{-i\theta} y\bar x & c|x|^2 & 0 \\ 0 & 0 & b|x|^2+c|y|^2 \end{pmatrix}, \] and the kernel is spanned by $(x,e^{-i\theta} yb,0)^{\rm t}$. Therefore, the following vectors $$ \begin{aligned} w_1[\alpha,\beta]=&(\alpha,\beta,0)^{\rm t}\otimes (\bar{\alpha},e^{i\theta}\bar{\beta} b,0)^{\rm t},\\ w_2[\alpha,\beta]=&(\beta,0,\alpha)^{\rm t}\otimes (e^{i\theta}\bar{\beta}b,0,\bar{\alpha})^{\rm t},\\ w_3[\alpha,\beta]=&(0,\alpha,\beta)^{\rm t}\otimes (0,\bar{\alpha},e^{i\theta}\bar{\beta}b)^{\rm t} \end{aligned} $$ belong to $P[\Phi[a,b,c;\theta]]$. We see that the set of the following vectors \[ w_1[1,1],\,w_1[1,-1],\,w_1[1,i],\,w_2[1,1],\,w_2[1,-1],\,w_2[1,i],\,w_3[1,1],\,w_3[1,-1],\,w_3[1,i] \] spans the whole space $\mathbb C^3\otimes \mathbb C^3$, because the determinant of the $9\times 9$ matrix $M$ whose columns are the above vectors is given by \[ |\det(M)|=64b^3 |1+b^3 e^{3i\theta}|, \] which is nonzero by the assumption $p_\theta>1$. Therefore, we see that the map $\Phi[a,b,c;\theta]$ has the spanning property in the case (iii). It remains to consider the case (iv). In this case, the maps never have the spanning property, since $\Phi[p_\theta,0,0;\theta]$ is completely positive. We consider interior points of the $2$-dimensional face $f^\theta_{\rm abc}$ on the plane $a+b+c=p_\theta$. In this case, the only possible product vectors in $P[\Phi[a,b,c;\theta]]$ are of the form (\ref{prod2}). It is clear that their partial conjugates do not span the whole space, and so these maps do not have the co-spanning property. In the next section, we will see that they are not co-optimal. Finally, we consider the line segment $e^\theta_{\rm ac}$ between the two points $(p_\theta,0,0)$ and $(1,0,p_\theta-1)$. We note that the smallest exposed face $F$ containing $\Phi[1,0,p_\theta-1;\theta]$ is bigger than $e^\theta_{\rm ac}$. We have already shown that $\Phi[1,0,p_\theta-1;\theta]$ has the co-spanning property, and so $F$ has no completely copositive map. See \cite{choi_kye} for the case of $\theta=0$. This shows that the line segment $e^\theta_{\rm ac}$ has the co-spanning property. \begin{theorem}\label{thm:spanning} Suppose that the map $\Phi[a,b,c;\theta]$ is positive, and $1<p_\theta<2$. Then we have the following: \begin{enumerate} \item[(i)] $\Phi[a,b,c;\theta]$ has the spanning property if and only if $$ 0\le a<1,\quad bc=(1-a)^2. $$ \item[(ii)] $\Phi[a,b,c;\theta]$ has the co-spanning property if and only if $$ 2-p_\theta\le a\le 1,\quad bc=(1-a)^2,\quad a+b+c= p_\theta $$ holds or $$ 1\le a\le p_\theta,\quad bc=0,\quad a+b+c= p_\theta. $$ \end{enumerate} \end{theorem} We summarize the results in terms of faces for $1<p_\theta<2$ as follows: \begin{itemize} \item $e^\theta_t$, $v_{(a_\theta(t),b_\theta(t),c_\theta(t))}$ and $v_{(0,t,1/t)}$ have the spanning property. \item $v_{(1,0,p_\theta-1)}$ and $v_{(1,p_\theta-1,0)}$ do not have the spanning property. It remains to be checked whether they are optimal or not.
\item $e^\theta_{\rm ab}$, $e^\theta_{\rm ac}$, $v_{(p_\theta,0,0)}$, $v_{(1,0,p_\theta-1)}$, $v_{(1,p_\theta-1,0)}$ and $v_{(a_\theta(t),b_\theta(t),c_\theta(t))}$ have the co-spanning property. \item $f^\theta_{\rm abc}$ does not have the co-spanning property. It remains to be checked whether it is co-optimal or not. \end{itemize} \section{Optimality without the spanning property} It remains to check the optimality of $\Phi[1,0,p_\theta-1;\theta]$ and $\Phi[1,p_\theta-1,0;\theta]$, and the co-optimality of interior points of the face $f^\theta_{\rm abc}$. We recall that $\Phi[1,0,1;0]$ and $\Phi[1,1,0;0]$, which are usually called the Choi maps, are extremal when $\theta=0$ by \cite{choi-lam}, and so they are optimal. In order to check the optimality of a positive map $\phi$, we first find all the extremal completely positive maps $\phi_V$ in the smallest exposed face determined by $\phi$, and check whether they belong to the smallest face determined by $\phi$. Recall that $\phi_V$ is the completely positive map given by $$ \phi_V(X)=V^*XV,\qquad X\in M_m, $$ where $V$ is an $m\times n$ matrix. We also recall \cite{kye_ritsu} that $\phi_V$ belongs to the smallest exposed face determined by $\phi$ if and only if $V$ is orthogonal to $P[\phi]$, when we identify an $m\times n$ matrix with a vector in $\mathbb C^m\otimes \mathbb C^n$. We proceed to show that $\Phi[1,p_\theta-1,0;\theta]$ is optimal. We first consider the case $-\frac\pi 3<\theta<\frac\pi 3$. In this case, $P[\Phi[1,p_\theta-1,0;\theta]]$ consists of $$ e_1\ot e_2,\quad e_2\ot e_3,\quad e_3\ot e_1,\quad (\alpha,\beta,\gamma)^{\rm t}\otimes (\bar{\alpha},\bar{\beta},\bar{\gamma})^{\rm t}, $$ with complex numbers $\alpha,\beta$ and $\gamma$ of modulus one, by (\ref{prod}) and (\ref{prod2}). Every vector orthogonal to all of these vectors is of the form $$ (\xi,0,0\,;\, 0,\eta,0\,;\, 0,0,\zeta)^\ttt, \qquad \xi+\eta+\zeta=0, $$ and the Choi matrix of the completely positive map associated with this vector is given by $$ V[\xi,\eta,\zeta] =\left( \begin{array}{ccccccccccc} |\xi|^2 &\cdot &\cdot &\cdot &\xi\bar\eta &\cdot &\cdot &\cdot &\xi\bar\zeta \\ \cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\ \cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\ \cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\ \eta\bar\xi &\cdot &\cdot &\cdot &|\eta|^2 &\cdot &\cdot &\cdot &\eta\bar\zeta \\ \cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\ \cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\ \cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\ \zeta\bar\xi &\cdot &\cdot &\cdot &\zeta\bar\eta &\cdot &\cdot &\cdot &|\zeta|^2 \end{array} \right). $$ In order to show the optimality of $\Phi[1,p_\theta-1,0;\theta]$, we show that if $$ W[1,p_\theta-1,0;\theta]-pV[\xi,\eta,\zeta] $$ is block-positive for some $p>0$ with $\xi+\eta+\zeta=0$, then $\xi=\eta=\zeta=0$. First, we take the product vectors $z_t=(\sqrt t e^{-i\theta},t,0)^{\rm t}\otimes (\sqrt t,1,0)^{\rm t}$ for $t>0$; then we have \[ \lan z_tz_t^*,W[1,p_\theta-1,0;\theta]- pV[\xi,\eta,\zeta]\ran =t^2\left( t(p_{\theta}-1)-p|\xi e^{-i\theta}+\eta|^2\right)\ge 0 \] for all $t>0$ if and only if $\eta =-e^{-i\theta} \xi$. This implies $\zeta=(e^{-i\theta}-1)\xi$ from the relation $\xi+\eta+\zeta=0$. Now, we take the product vectors $w_t=(0,\sqrt t e^{-i\theta},t)^{\rm t}\otimes (0,\sqrt t,1)^{\rm t}$ for $t>0$.
Then we have \[ \lan w_tw_t^*,W[1,p_\theta-1,0;\theta]- pV[\xi,-e^{-i\theta}\xi,(e^{-i\theta}-1)\xi]\ran =t^2\left( t(p_{\theta}-1)-p|\xi|^2(1-2\cos \theta)^2\right)\ge 0 \] for all $t>0$ if and only if $|\xi|^2(1-2\cos\theta)^2=0$. Since $1-2\cos \theta\neq 0$ for $|\theta|<\pi/3$, we conclude that $\xi=0$. Consequently, we have $\xi=\eta=\zeta=0$, and this completes the proof of the optimality of $\Phi[1,p_\theta-1,0;\theta]$. For the other ranges of $\theta$, a similar argument applies. Now, we show that an interior point $\Phi[1,b,c;\theta]$, with $b+c=p_\theta-1$, of the face $f^\theta_{\rm abc}$ is not co-optimal. In the case of $\theta=0$, this is clear. We first consider the case $-\frac\pi 3<\theta<\frac\pi 3\, (\theta\neq 0)$. To see this, we consider the completely copositive linear map whose Choi matrix is given by $$ W[0,1,1;0] =\left( \begin{array}{ccccccccccc} \cdot &\cdot &\cdot &\cdot &-1 &\cdot &\cdot &\cdot &-1 \\ \cdot &1 &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\ \cdot &\cdot &1 &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\ \cdot &\cdot &\cdot &1 &\cdot &\cdot &\cdot &\cdot &\cdot \\ -1 &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &-1 \\ \cdot &\cdot &\cdot &\cdot &\cdot &1 &\cdot &\cdot &\cdot \\ \cdot &\cdot &\cdot &\cdot &\cdot &\cdot &1 &\cdot &\cdot \\ \cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &1 &\cdot \\ -1 &\cdot &\cdot &\cdot &-1 &\cdot &\cdot &\cdot &\cdot \end{array} \right), $$ and look for a small positive number $p>0$ such that $W[1,b,c;\theta]-pW[0,1,1;0]$ is block-positive. We note that $$ W[1,b,c;\theta]-pW[0,1,1;0] =|e^{i\theta}-p|W\left[\frac 1{|e^{i\theta}-p|},\frac{b-p}{|e^{i\theta}-p|},\frac{c-p}{|e^{i\theta}-p|};\theta^\prime\right], $$ where $\theta^\prime$ is the argument of $e^{i\theta}-p$. First, we take the positive number $t_0$ such that $e^{i\theta}-t_0=|e^{i\theta}-t_0|e^{\pm i \frac{\pi}3}$, and then pick a positive number $p<t_0$. \begin{figure}[h!]\label{fig:find_p} \includegraphics[scale=0.6]{figure_3.eps}\, \caption{The argument of $e^{i\theta}-p$ is less than that of $e^{i\theta}-t_0$ for $0<\theta<\pi/3$.} \end{figure} Now, it is clear that $|e^{i\theta}-p|<1$, and thus the condition~\eqref{p2} is automatically satisfied. We also see that the argument $\theta^\prime$ of $e^{i\theta}-p$ satisfies the condition $|\theta^\prime |<\frac {\pi}3$. Then we have \[ \frac {1+b-p+c-p}{|e^{i\theta}-p|}=\frac {p_{\theta}-2p}{|e^{i\theta}-p|} =\frac {(e^{i\theta}-p)+(e^{-i\theta}-p)}{|e^{i\theta}-p|}=\frac{|e^{i\theta}-p|(e^{i\theta^\prime}+e^{-i\theta^\prime})}{|e^{i\theta}-p|} =p_{\theta^\prime}. \] Therefore, the condition~\eqref{p1} is also satisfied, and so these interior points of the face $f^\theta_{\rm abc}$ are not co-optimal. The other cases for $\theta$ are similar. We summarize the results in Table~1. \begin{table}[h!] \begin{tabular}{ccccccccccc} \hline\hline & & &\multicolumn{3}{c}{(Co-)Spanning property} & & &\multicolumn{3}{c}{(Co-)Optimality}\\\cline{4-6}\cline{9-11} Faces & & &Span. & Co-span. &Bi-span.& & &Opt. &Co-opt.
&Bi-opt.\\\hline $ f_{\rm abc}^{\theta},f_{\rm ab}^{\theta}, f_{\rm ac}^{\theta}, f_{\rm bc}^{\theta}, e_{\rm a}^{\theta}, e_{\rm b}^{\theta},e_{\rm c}^{\theta}$ & & &N&N&N& & &N&N&N\\ $e_{\rm ab}^{\theta},e_{\rm ac}^{\theta},v_{(p_{\theta},0,0)}$ & & &N&Y&N& & &N&Y&N\\ $e_t^{\theta}, v_{(0,t,1/t)}$ & & &Y&N&N& & &Y&N&N\\ $v_{(1,0,p_{\theta}-1)},v_{(1,p_{\theta}-1,0)}$ & & &N&Y&N& & &Y&Y&Y\\ $v_{(a(t),b(t),c(t))}$ & & &Y&Y&Y& & &Y&Y&Y\\ \hline\hline \end{tabular}\caption{Summary of the (co-)optimality and (co-)spanning properties for the faces of the convex body $\Gamma^{\theta}$.} \end{table} We note that bi-optimality automatically implies indecomposability, and so we see that $v_{(1,0,p_\theta-1)}$, $v_{(1,p_\theta-1,0)}$ and $v_{(a_\theta(t),b_\theta(t),c_\theta(t))}$ give rise to indecomposable maps. This can also be seen directly. We note that $$ \lan W[p_{\pi-\theta}, t,{\textstyle\frac 1t};\pi-\theta], \Phi[a,b,c;\theta]\ran =3( ap_{\pi-\theta}+bt+\frac ct -2). $$ Assume $bc=(1-a)^2$ and take $t=\sqrt{\frac cb}$, to get $$ \lan W[p_{\pi-\theta}, t,\textstyle\frac 1t;\pi-\theta], \Phi[a,b,c;\theta]\ran =3(ap_{\pi-\theta}+2\sqrt {bc}-2)=3a(p_{\pi-\theta}-2). $$ We note that $p_{\pi-\theta}<2$ if and only if $\theta\neq \pm\frac\pi 3,\pm\pi$. Therefore, we see that $\Phi[a,b,c;\theta]$ is an indecomposable positive map whenever the condition $$ 0<a\le 1,\quad a+b+c\ge p_\theta,\quad bc=(1-a)^2,\quad \theta\neq \pm\frac\pi 3,\pm\pi $$ holds. Recall that we have already seen that positivity of $\Phi[a,b,c;\theta]$ implies decomposability in the case of $\theta= \pm\frac\pi 3,\pm\pi$, for which $p_\theta=1$. \section{Detecting PPT edge states} We proceed to find an optimal entanglement witness which detects the PPT entangled edge state \cite{kye_osaka} $W[p_\theta,b,\frac 1b;\theta]$ for $-\frac \pi 3<\theta<\frac\pi 3$ with $\theta\neq 0$ and $b>0$. We note that if we put $z=(1,0,0\,;\, 0,1,0\,;\, 0,0,1)^\ttt$ and $$ \begin{aligned} w_1&=(0,\sqrt b,0\,;\, \textstyle\frac 1{\sqrt b}e^{i\theta},0,0\,;0,0,0)^\ttt,\\ w_2&=(0,0,0\,;\, 0,0,\sqrt b\,;0,\textstyle\frac 1{\sqrt b}e^{i\theta},0)^\ttt,\\ w_3&=(0,0,\textstyle\frac 1{\sqrt b}e^{i\theta}\,;\, 0,0,0\,;\,\sqrt b,0,0)^\ttt,\\ \end{aligned} $$ then we see that $$ \lan zz^*,W[p_\theta,b,\textstyle\frac 1b;\theta]\ran= \lan w_iw_i^*,W[p_\theta,b,\textstyle\frac 1b;\theta]^\Gamma\ran=0. $$ Therefore, the most natural candidate is $$ W=\left( \begin{array}{ccccccccccc} 1-\alpha &\cdot &\cdot &\cdot &1+e^{-i\theta} &\cdot &\cdot &\cdot &1+e^{i\theta} \\ \cdot &b-\beta &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\ \cdot &\cdot &\frac 1b-\gamma &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot \\ \cdot &\cdot &\cdot &\frac 1b-\gamma &\cdot &\cdot &\cdot &\cdot &\cdot \\ 1+e^{i\theta} &\cdot &\cdot &\cdot &1-\alpha &\cdot &\cdot &\cdot &1+e^{-i\theta} \\ \cdot &\cdot &\cdot &\cdot &\cdot &b-\beta &\cdot &\cdot &\cdot \\ \cdot &\cdot &\cdot &\cdot &\cdot &\cdot &b-\beta &\cdot &\cdot \\ \cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot &\frac 1b-\gamma &\cdot \\ 1+e^{-i\theta} &\cdot &\cdot &\cdot &1+e^{i\theta} &\cdot &\cdot &\cdot &1-\alpha \end{array} \right), $$ which is equal to $$ 2\cos\frac\theta 2 W\left[\dfrac{1-\alpha}{2\cos\frac \theta 2},\ \dfrac{\frac 1b-\gamma}{2\cos\frac \theta 2},\ \dfrac{b-\beta}{2\cos\frac \theta 2};\ \pi-\frac \theta 2\right] $$ since $1+e^{-i\theta}=2\cos\frac\theta 2 e^{-\frac{i\theta}2}$.
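The negativity of the pairing computed at the beginning of the next paragraph is easy to confirm numerically. The following is a minimal sketch (our own illustration, assuming the \texttt{numpy} package; the values of $\alpha,\beta,\gamma$ here are ad hoc and are not yet subject to the optimality constraints derived below):
\begin{verbatim}
# Check that <W[p,b,1/b;th], W_cand>/3 = -p*alpha - beta/b - b*gamma < 0,
# using <A, phi> = Tr(A C_phi^t) = sum_ij A_ij C_ij for Hermitian A, C.
import numpy as np

def Wmat(a, b, c, th, off):        # off(t): entry at the -e^{i th} slots
    M = np.diag([a, c, b, b, a, c, c, b, a]).astype(complex)
    for i, j in [(0, 4), (4, 8), (8, 0)]:
        M[i, j], M[j, i] = off(th), off(-th)
    return M

th, b = 0.4, 1.5
p = max(2 * np.cos(th + k * 2 * np.pi / 3) for k in (-1, 0, 1))
al, be, ga = 0.05, 0.1, 0.1        # ad hoc positive alpha, beta, gamma

state   = Wmat(p, b, 1/b, th, lambda t: -np.exp(1j * t))
witness = Wmat(1 - al, 1/b - ga, b - be, th, lambda t: 1 + np.exp(-1j * t))

print(np.sum(state * witness).real / 3, -p * al - be / b - b * ga)
# The two printed numbers agree and are negative; for W to be a
# legitimate witness, alpha, beta, gamma must in addition satisfy
# the constraints derived below.
\end{verbatim}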
To begin with, we note that $$ \begin{aligned} \frac 13\langle W[p_\theta,b,\frac 1b;\theta], W\rangle &=p_\theta(1-\alpha)+\frac 1b (b-\beta)+b (\frac 1b-\gamma)-p_\theta-2\\ &=-p_\theta\alpha-\frac\beta b-b\gamma<0. \end{aligned} $$ We search for $\alpha,\beta,\gamma>0$ such that $W$ is bi-optimal. To do this, we look for $\alpha,\beta$ and $\gamma$ satisfying the conditions \[ \begin{aligned} &2\cos\frac{\theta}2 (2-p_{\pi-\frac{\theta}2})<1-\alpha<2\cos\frac{\theta}2,\quad b-\beta>0,\quad \frac 1b-\gamma>0,\\ &(1-\alpha)+(b-\beta)+(\frac1b-\gamma)=2p_{\pi-\frac\theta 2}\cos\frac \theta 2,\\ &(b-\beta)(\frac 1b-\gamma)=\left(2\cos\frac\theta 2-(1-\alpha)\right)^2 \end{aligned} \] from Theorem~\ref{thm:spanning}. For simplicity, we put $$ t:=\cos\frac{\theta}2,\quad \tilde \alpha=1-\alpha,\quad \tilde \beta=b-\beta,\quad \tilde \gamma=\frac 1 b-\gamma. $$ Then we have $\frac {\sqrt3}2<t<1$ and $p_{\pi-\frac{\theta}2}=t+\sqrt{3(1-t^2)}$. Now, the problem is reduced to finding $\tilde \alpha,\tilde \beta$ and $\tilde \gamma$ satisfying the conditions \begin{align} \label{cond:oew1}&2t(2-t-\sqrt{3(1-t^2)})<\tilde \alpha <2t,\\ \label{cond:oew2}&\tilde \beta+\tilde \gamma = 2t(t+\sqrt{3(1-t^2)})-\tilde\alpha,\quad \tilde \beta \tilde \gamma=(2t-\tilde\alpha)^2,\quad \tilde \beta>0,\quad \tilde \gamma>0. \end{align} It is easy to see that \[ 1<t+\sqrt{3(1-t^2)}<\sqrt 3, \] for $\frac{\sqrt 3}2<t<1$. Therefore, if we choose $\tilde \alpha$ satisfying the condition~\eqref{cond:oew1} for each $\frac {\sqrt3}2<t<1$, then we see that \[ \tilde \beta +\tilde \gamma >0\ \text{ and }\ \tilde \beta \tilde \gamma>0 \] in \eqref{cond:oew2}, and \[ (\tilde \beta+\tilde \gamma)^2-4\tilde \beta\tilde \gamma = -[\tilde \alpha -2t(2-t-\sqrt{3(1-t^2)})][3\tilde \alpha-2t(2+t+\sqrt{3(1-t^2)})]\ge 0. \] Consequently, we can find $\tilde \beta$ and $\tilde \gamma$ as the positive roots of the quadratic equation \begin{equation}\label{cond:oew3} x^2 -\left [2t(t+\sqrt{3(1-t^2)})-\tilde \alpha\right] x+(2t-\tilde \alpha)^2=0. \end{equation} This completes the proof of the following: \begin{theorem} For each $0<|\theta|<\frac{\pi}3$ and $b>0$, let $\tilde \alpha$ be a positive number satisfying the condition~\eqref{cond:oew1} with $t=\cos\frac{\theta}2$, and let $\tilde \beta$ and $\tilde \gamma$ be the roots of the quadratic equation~\eqref{cond:oew3}. Then \[ \frac{2\cos(\theta/2)}{3(\tilde \alpha+\tilde \beta+\tilde \gamma)}W\left[\frac{\tilde \alpha}{2\cos(\theta/2)}, \frac{\tilde \beta}{2\cos(\theta/2)},\frac{\tilde \gamma}{2\cos(\theta/2)};\pi-\frac{\theta}2\right] \] is an optimal PPTES witness which detects $W[p_{\theta},b,\frac1 b;\theta]$. \end{theorem} We note that a general method to construct entanglement witnesses detecting a given entangled state had been suggested in \cite{lew00}. If we follow this method for $W[p_\theta,b,\frac 1b;\theta]$, then we get the above $W$ with $\alpha=\beta=\gamma$. However, it turns out that this method does not give an {\sl optimal} PPTES witness in general. In fact, one can show that if $W$ is an optimal PPTES witness with positive $\alpha=\beta=\gamma$, then we have the restrictions $$ b+\frac 1b \le 2-\sqrt{3}+\sqrt{6\sqrt{3}-6} \quad {\text{\rm and}}\quad \cos\frac{\theta}2\le\frac 18 (3+\sqrt{21}). $$ That is why we consider the above $W$ with different $\alpha,\beta$ and $\gamma$, for full generality. \section{Conclusion} We determined the positivity of a four-parameter family of linear maps of Choi type involving complex entries.
We also determined their optimality, co-optimality, spanning property and co-spanning property. In this way, we found parameterized examples of indecomposable positive linear maps with the bi-spanning property. They are optimal PPTES witnesses, which are \lq nd-OWE\rq s in the sense of \cite{lew00}. Optimality is not easy to determine for a given positive linear map, because the whole facial structure of the convex cone $\mathbb P_1$ consisting of all positive maps is not known. The spanning property is stronger than optimality and relatively easy to check. We suggest a general method to check the optimality of a positive map $\phi$: we first find all extremal completely positive maps in the smallest exposed face of $\mathbb P_1$ containing $\phi$, and check whether they belong to the smallest face containing $\phi$. The optimal PPTES witnesses we constructed detect two-qutrit PPT entangled edge states of type $(6,8)$ in \cite{kye_osaka}, whose existence had been a long-standing question \cite{sbl}. We report here one interesting byproduct of our construction. Our constructions give counterexamples to the conjecture \cite{korbicz} regarding structural physical approximations (SPA), which claims that the SPA of an optimal entanglement witness is separable. Several authors \cite{aug_bae,chru_pyt,qi} have recently checked various kinds of entanglement witnesses to support the conjecture. In the forthcoming paper \cite{ha_kye_spa}, the authors will consider the SPA conjecture in a systematic way. We introduce the notions of positive type and copositive type for entanglement witnesses, depending on the distances to the positive part and the copositive part. We will show that if the SPA of an entanglement witness is separable then it must be of copositive type, and so the SPA conjecture is meaningful only for witnesses of copositive type. Our construction in this paper shows that the SPA conjecture does not hold even in the case of copositive type.
\section{Introduction} \label{Sect01} Imminent earthquake prediction is a difficult problem. Only long-term (years to decades) and medium-term (months to years) predictions are regarded as possible at present, while the prospects for imminent/short-term prediction are commonly thought to be negative \footnote{Approaches seismologists have used to investigate earthquakes include research on seismicity patterns, crustal movements, ground water levels in wells, radon or hydrogen gas emissions, changes of seismic wave velocities, electromagnetic fields (seismo-electromagnetics), large-scale changes in soil temperature, changes in ion concentration in the ionosphere, and so on.}. See \href{http://en.wikipedia.org/wiki/Earthquake_prediction}{\it{Wikipedia: earthquake prediction}}, and Refs.\cite{MainNature1997,MainNature1999,GellerScienc1997,BBCnews2008,EqReview1,EqReview2}. The conjecture and research proposal of this paper are offered as a suggestion and a first attempt only. This paper is arranged as follows. The conjecture will be outlined in \S\ref{Sect02}, of which the central idea is \textit{vertical fast air emission/absorption causing cloud patterns}. This idea is inspired by an observation of strange patterns on shaving foam, described in \S\ref{Sect03}. Discussions on geological deformations of crustal rock strata and the induced vertical fast air movement will be presented in \S\ref{Sect04}. This air movement may lead to the formation of \textit{earthquake cloud} patterns, hence a literature survey of earthquake clouds, including some proposed explanations, is presented in \S\ref{Sect05}. Potential evidence for the conjecture, as well as the induced side effects, will be listed in \S\ref{Sect06}. An experiment designed to test the conjecture will be given in \S\ref{Sect07}. Finally, the paper will be summarized in \S\ref{Sect10}, followed by some remarks in \S\ref{Sect11}. \section{Conjecture and research proposal} \label{Sect02} A conjecture is made on imminent/short-term earthquake prediction based on cloud patterns caused by fast air emission/absorption between ground and sky: \begin{enumerate} \itemsep=2pt \parsep=1pt \parskip=0pt \item An immense volume of air (or gas) fills the gaps and cracks among crustal rock strata. Geological deformations could either squeeze this air out, or produce more room in the rock strata gaps to absorb air from outside. \item It is a reasonable hypothesis that earthquakes (or at least a part of them) are three-stage events, from energy preparation to release \cite{BreakingVideos}: \begin{enumerate} \itemsep=2pt \parsep=1pt \parskip=0pt \item \textsf{Long-term preparation (years prior to an earthquake)}: Gradual geologic motions of tectonic plates push rock strata to undergo geologic deformations, such as bending, compression, etc. Seismological energy is accumulated in this stage. Geologic deformations become more and more severe when close to the earthquake, accompanied by the occurrence of more and more minor breakings. \item \textsf{\textbf{Eve of event} (hours to days prior to the earthquake)}: This short stage is the final preparation period. The occurrence of one or two medium-size breakings makes the rock strata suffer \textit{a series of drastic geologic deformations} within a short time, which will trigger the eventual major breaking and collapses. \item \textsf{Occurrence of event (disastrous moment)}: Major breaking and collapses take place, and the seismological energy is released. \end{enumerate} \item The \textsf{eve of event} stage is crucial for imminent earthquake prediction.
I conjecture that those \textit{drastic deformations of rock strata} will cause a huge air pressure difference between the two sides of the Earth's soil surface, because the thick soil surface covers the rock strata and acts as an obstruction to air release (see Fig.\ref{Fig-05} below). Consequently, vertical high-speed air emission and absorption can be produced between the ground and sky. In view of the observation of \S\ref{Sect03}, I conjecture that this air movement will lead to the formation of unusual cloud patterns on interfaces between atmosphere levels. This provides a possible origin for the folk-called \textit{earthquake cloud}. \item This air emission/absorption is vertical and drastic, different from the horizontal and moderate meteorological movements of the atmosphere, hence its induced cloud patterns are expected to be different from meteorological cloud patterns\footnote{Vertical air movements may also appear in extreme weather situations. However, since earthquakes and extreme weather are both small-probability events, the probability of their co-existence is even smaller.}. Therefore, earthquake clouds appearing hours/days before an earthquake could be a candidate for imminent/short-term earthquake prediction, with support from physics. \item Furthermore, it is thought that different magnitudes, strengths and velocities of air emission/absorption could produce different cloud patterns. Hence the patterns provide a way to reveal more quantitative information on the location, magnitude and strength of the geologic deformation of rock strata, and hence of the impending earthquake. \end{enumerate} \noindent The conjecture leads to the following \textbf{research proposal}, which is the study of an inverse problem: \begin{itemize} \itemsep=2pt \parsep=1pt \parskip=0pt \item[$\triangleright$] \textit{Step 1 (Cloud patterns):} Distinguishing the cloud patterns of earthquakes from the cloud patterns of meteorological movements of the atmosphere, and further recognizing different sorts of patterns of earthquake clouds. \item[$\triangleright$]\textit{Step 2 (Air movement):} Estimating the magnitude, strength and velocity of vertical air emission and absorption --- establishing workable mathematical/mechanical models (a toy estimate is sketched after this list). \item[$\triangleright$]\textit{Step 3 (Geologic deformations):} Estimating the locations, magnitude and strength of geologic deformations of rock strata, with the aid of long- and medium-term predictions of seismology. \item[$\triangleright$]\textit{Step 4 (Final purpose):} Imminently predicting earthquakes. \end{itemize}
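As a first, deliberately crude orientation for Step 2, the emission velocity can be bounded with Bernoulli's relation $v=\sqrt{2\Delta p/\rho}$; the overpressures in the sketch below are assumed values of my own, not measurements, and the estimate ignores the resistance of the soil and tunnels, so it is an upper bound only:

\begin{verbatim}
# Back-of-the-envelope bound on the emitted air speed for a given
# overpressure dp across the soil surface (toy model only).
import math

rho = 1.2          # air density near the ground [kg/m^3]
for dp in (100.0, 1000.0, 10000.0):   # assumed overpressures [Pa]
    v = math.sqrt(2.0 * dp / rho)     # Bernoulli: v = sqrt(2*dp/rho)
    print(f"dp = {dp:7.0f} Pa  ->  v ~ {v:5.1f} m/s")
\end{verbatim}

\section{Patterns appearing on shaving foam} \label{Sect03} The idea of \textit{air-emission causing patterns} is inspired by the following observation of strange patterns appearing on shaving foam when the foam is sprayed out from an aerosol can. See Fig.\ref{Fig-01}. Before that observation I took for granted that the sprayed foam should always be smooth, as in Fig.\ref{Fig-02}; however, it was not the case. Instead, in some situations, such as ``\textit{half-filled can plus shaking the can 4 or 5 times}'', patterns appeared on the sprayed foam. See Figs.\ref{Fig-01} and \ref{Fig-03}. \vfill\begin{figure}[h] \centering \includegraphics[width=0.55\textwidth]{Fig-01} \caption{Shaving foam sprayed out from an aerosol can. \textit{Left}: The sample used was a half-filled can of \textit{Gillette Lemon Lime}, which was shaken 4 or 5 times before spraying.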
\textit{Right}: The foam was sprayed out from the can and patterns appeared on the foam surface.} \label{Fig-01} \end{figure} \vfill\begin{figure}[h] \centering \includegraphics[width=0.17\textwidth]{Fig-02} \caption{Smooth foam containing no patterns.} \label{Fig-02} \end{figure} \vfill\begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{Fig-03} \caption{More patterns on shaving foam.} \label{Fig-03} \end{figure} \noindent In Figs.\ref{Fig-01} and \ref{Fig-03} different patterns are shown, including eye-like and rose-like ones. I attribute the formation of these patterns to the fast aerosol spray. This spray was caused by the pressure difference between the inside and outside of the aerosol can, while the pattern formation was governed by some fluid dynamical mechanism that is still unknown \footnote{For instance, a possible mechanism is that the fast spray causes oscillations and waves in the aerosol which form the patterns at the interface between the aerosol and the outer air, just like sound waves in air which are able to cause ripples when travelling over a smooth water surface.\label{FTNote}}. See Fig.\ref{Fig-04} below. \begin{figure}[h] \centering \includegraphics[width=0.85\textwidth]{Fig-04} \caption{Aerosol is sprayed rapidly out from a can. During the spray process, the interface between the aerosol and the outer air is strongly pushed forward by the aerosol, where some unknown fluid dynamical mechanism causes the appearance of strange patterns on the interface.} \label{Fig-04} \end{figure} \vfill \section{Fast air emission/absorption due to geological deformations} \label{Sect04} The phenomena observed above could occur on clouds prior to an earthquake (see Fig.\ref{Fig-05} below). As mentioned in \S\ref{Sect02}, many an earthquake has a crucial short stage immediately before the major breaking moment, where drastic geological deformations take place. The thick soil surface covering the rock strata causes a huge air pressure difference between the two sides of the soil surface, which leads to vertical fast air emission and absorption. The bigger the deformations are, the more violent the air emission and absorption are. \vfill \newpage \begin{figure}[H] \includegraphics[width=1.03\textwidth,height=0.90\textheight]{Fig-05} \caption{Three stages of an earthquake event. The second stage, \textit{eve of event}, is crucial for imminent earthquake prediction, where vertical fast air emission and absorption could form unusual cloud patterns on interfaces between atmosphere levels.} \label{Fig-05} \end{figure} \section{Literature survey of earthquake cloud} \label{Sect05} \vspace*{-8mm} \begin{figure}[H] \centering \includegraphics[width=1.0\textwidth]{Fig-06} \caption{Photos of the so-called earthquake clouds from different angles of view. (Meteorological satellite images are from Ref.\cite{TerraResLAPark}.) } \label{Fig-06} \end{figure} Earthquake clouds have not been commonly accepted by the scientific community as a sign of impending earthquakes \cite{WikiEqCloud}. There are diametrically opposite opinions: some people treat the simultaneous occurrence of unusual-looking clouds and an earthquake as a coincidence, while some people believe the strange-looking clouds in Fig.\ref{Fig-06} are associated with seismic events. Earthquake clouds have been observed and studied for years, based on which some earthquake predictions have been made. For instance, a Chinese folk-scientist named Z.
Shou claimed that he had made dozens of earthquake predictions based on cloud patterns in satellite images, with a 68\% accuracy \cite{ShouZ2005,ShouZwebsite}. He identified five types of earthquake cloud --- line-shaped, feather-shaped, lantern-shaped clouds, etc. --- and claimed that the appearance of any one of these clouds indicates an impending earthquake to occur within several hours to 103 days (30 days on average). Some explanations for earthquake clouds are the following \cite{WikiEqCloud}: \begin{itemize} \item[$\triangleright$] \textit{Heat-flow paradox} \textit{Claim}: Deformations and motions of rock strata cause rock friction, and hence produce a vast amount of thermal energy. This energy heats water to 1500$^{\circ}$C such that hot vapor obtains an updraft to form earthquake clouds. This explanation is held by Z. Shou. \textit{Deficiency}: If this were true, such hot vapor would have been noticed by human beings, but that is not the case. In addition, geophysicists have conducted experiments and pointed out that this heating is only 4$^{\circ}$C or so, not enough to produce earthquake clouds \cite{TerraResLAPark}. \item[$\triangleright$] \textit{Effect of piezoelectricity} \textit{Claim}: Piezoelectricity occurring inside the Earth causes a local variation of the geomagnetic field, which leads to variation in electromagnetic fields in the sky and thus forms earthquake clouds. \textit{Deficiency}: This explanation is unconvincing. Piezoelectricity is a kind of electromagnetic effect, which is far too weak to affect the motions of the atmosphere. Moreover, if the piezoelectricity were strong enough to drive motions of air, then severe electromagnetic noise would have been created, disrupting communications and destroying electronic devices. But that is apparently not the case. \end{itemize} \noindent From my point of view, only explanations based mostly on mechanical reasons are acceptable. My explanation is the following, associated with the conjecture of \S\ref{Sect02}: \begin{quote} An earthquake event has a final preparation stage immediately before the major breaking moment, where the drastic geological deformations cause vertical fast air movements which produce earthquake clouds. Some unknown mechanism of fluid dynamics takes effect in this process, similar to that of the footnote on Page \pageref{FTNote}. I.e., fast air movements produce oscillations and (sound) waves in the atmosphere, which form patterns when they propagate to the interfaces between atmosphere levels. The sound waves produced by the air movement could be ultra-, acoustic or infrasound. \end{quote} \vfill \section{Potential evidence and side effects} \label{Sect06} Vertical fast air emission/absorption via the Earth's soil surface, either by rushing through caves and tunnels or by penetrating through soil (as shown in Fig.\ref{Fig-05}), could have potential evidence and induced side effects: \begin{itemize} \item[$\triangleright$] \textit{Gas emission prior to volcanic eruption} It is instructive to refer to the phenomenon of gas emission before volcanic eruptions, which is an example of vertical fast air emission associated with geological motions of the rock strata of tectonic plates. \item[$\triangleright$] \textit{Sound effect} Air emitted through caves, tunnels or soil may produce sound waves of high, intermediate or low frequency, as a horn does when air passes through it. Hence ultra-, acoustic or infrasound could be heard by human ears or devices before earthquake events.
\footnote{This sound effect might be different from the so-called phenomenon of \textit{earthquake sound}, which takes place seconds or minutes ahead of the earthquake occurrence.} \item[$\triangleright$] \textit{Light effect} High-speed air movement may cause friction among ionized air masses, which results in the phenomenon of lightning. This could be an origin for the so-called \textit{earthquake light}. \item[$\triangleright$] \textit{Atmospheric ionization} According to Wadatsumi (Okayama University of Science, Japan) \cite{Wadatsumi}, the ionization of atmospheric aerosol rises remarkably prior to an earthquake, which results in the color of the sky turning red. Human beings may feel uncomfortable and depressed in such a gas emitted from inside the Earth. \item[$\triangleright$] \textit{Release of radon gas} Radon (\textsf{Rn}, atomic number 86) is a radioactive, colorless, odorless, tasteless noble gas, which is thought to be released from fault zones prior to earth slipping. Researchers have investigated changes in groundwater radon concentrations for earthquake prediction. See the work of the research group led by G. Charpak (Nobel laureate in physics, 1992) in \cite{PhysicsWorldNews}. \item[$\triangleright$] \textit{Anomalous animal behaviors} Animals, especially burrowing and underground-living ones, will probably be disturbed by the bad smell or even toxic gas emitted from inside the Earth. \end{itemize} \vfill \section{Design of experiment to test vertical fast air movement} \label{Sect07} An experiment is designed to test the conjecture of \S\ref{Sect02} and \S\ref{Sect04}: \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{Fig-07} \caption{Experiment designed to observe patterns on atmosphere interfaces formed by air emission/absorption. In this experiment sponge is used to simulate the Earth soil surface, and straws to simulate caves and tunnels.} \label{Fig-07} \end{figure} \begin{itemize} \itemsep=5pt \parsep=2pt \parskip=2pt \item[$\triangleright$]\textit{Upper part of Fig.\ref{Fig-07} --- air penetrating through soil} This set-up is divided into three modules: \begin{itemize} \itemsep=0pt \parsep=5pt \parskip=2pt \item \textit{Middle --- Air-gate module}: It is an empty quarter-cylinder, which has two entries marked as Gates \textbf{A} and \textbf{B}. This module contains a gas marked as Atmosphere 1. A standing heavy door is placed in this module, which can fall down by rotating about a hinge. This door serves as an air piston. \item \textit{Left --- Air absorption module}: It is connected to the Air-gate module via Gate \textbf{A}. This module contains not only Atmosphere 1, but also another gas marked as Atmosphere 2, located on top of Atmosphere 1. Atmosphere 2 has a lower density. These two Atmospheres should be carefully chosen, such that they do not mix with each other and have a clear interface on which the desired patterns can be demonstrated. In the middle of this module a layer of sponge is placed to play the role of the Earth soil surface. \item \textit{Right --- Air emission module}: It is connected to the Air-gate module via Gate \textbf{B}. This module also contains Atmospheres 1 and 2, with the latter on top of the former, to form a clear interface. In the middle of this module there is also a sponge. \end{itemize} In the Middle module, when the piston door falls down, air is rapidly absorbed from the Left module via Gate A.
At the same time, in the Left module, the part of Atmosphere 1 above the sponge is drawn down to penetrate through the sponge. During this process, \textit{patterns are expected to appear on the interface between Atmospheres 1 and 2 in the Left module}. Similarly, when the piston door of the Middle module falls down, air is rapidly pumped out from the Middle to the Right module via Gate B. At the same time, in the Right module, the part of Atmosphere 1 beneath the sponge is pushed up to penetrate through the sponge, and \textit{patterns are expected to appear on the interface between Atmospheres 1 and 2 in the Right module}. The purpose of this experiment is to observe the patterns formed on the interfaces, in the presence of the obstructive sponge which simulates the Earth soil surface. \item[$\triangleright$]\textit{Lower part of Fig.\ref{Fig-07} --- air spraying through caves and tunnels or penetrating through soil} This set-up is almost the same as the upper part. The only difference is that several straws are placed in the sponge, for the purpose of simulating the caves and tunnels of the soil surface. They provide another air passageway in addition to the sponge, such that air can either rush through the straws or penetrate through the sponge. It is expected that different patterns could be obtained on the interfaces between Atmospheres 1 and 2 in the Left and Right modules. \end{itemize} \noindent In this experiment we can choose different-sized Air-gate modules, to produce air movements with different magnitudes, strengths and velocities. We expect to obtain different patterns on the interfaces. Moreover, different choices of Atmospheres 1 and 2 are expected to bring extra alteration to the patterns obtained. \vfill \section{Summary} \label{Sect10} In this paper a conjecture is made on imminent earthquake prediction based on cloud patterns. In \S\ref{Sect02} the contents of the conjecture are outlined. In \S\ref{Sect03} an observation of the strange patterns appearing on shaving foam is presented. In \S\ref{Sect04} it is illustrated that drastic geological deformations of rock strata, taking place immediately (hours/days) before an earthquake, may cause fast air emission/absorption through the Earth soil surface, vertically between ground and sky. Inspired by the observation of \S\ref{Sect03}, it is conjectured that this fast air movement may produce unusual cloud patterns at interfaces between atmosphere levels. Different from the horizontal and moderate meteorological air movement, this air movement is vertical and drastic, hence the cloud patterns it causes are expected to differ from meteorological cloud patterns. This provides a possible origin for the so-called \textit{earthquake cloud}. Recognition of different earthquake cloud patterns could provide a practical way to estimate the magnitude, strength and location of geological deformations of rock strata, and hence a physical method for imminent earthquake prediction. In \S\ref{Sect05} a literature survey and explanations for earthquake clouds are presented. In \S\ref{Sect06} potential evidence of the vertical fast air emission/absorption and some induced side effects are listed. Finally, in \S\ref{Sect07} an experiment is designed to test the conjecture. \vfill \section{Remarks} \label{Sect11} \begin{itemize} \item[$\triangleright$] In \S\ref{Sect03} an observation of the strange patterns on shaving foam has been shown.
The sample used was \textit{Gillette Lemon Lime}, and the experimental condition was ``\textit{half-filled can plus 4--5 shakes}''. Another sample, \textit{Gillette Sensitive Skin} foam, has been tried as well, for which it was found that the condition producing patterns is different and the patterns obtained are less clear. This implies that the ingredients of the aerosol may somehow affect the patterns formed. \item[$\triangleright$] The personal research areas of the author are \textit{topological fluid mechanics} and \textit{topological quantum field theory}, far from the topic of this paper. This paper stems from my personal interest; it is an attempt to share ideas with colleagues. There is not much mathematics in this paper, and the ideas could be incorrect. \end{itemize}
\section{Introduction} The 4 m International Liquid Mirror Telescope (ILMT), which will observe in the Time Delayed Integration (TDI) mode, is expected to be commissioned soon on the Aryabhatta Research Institute of Observational Sciences (ARIES) site in Devasthal, India (Surdej et al. 2018). The ILMT will be repeatedly scanning the sky within a narrow stripe of width $\sim$27'. The positions of celestial objects in the sky change with time as the sky moves across the fixed detector surface due to the rotation of the Earth around the polar axis. The TDI mode of the charge-coupled device (CCD) mounted at the ILMT helps to track the stars by shifting the electronic charges at the rate at which the target source drifts across the detector (Surdej et al. 2018), and the position of each object in the observed image comes out in pixel units. To convert the observations from the pixel coordinate system to the world coordinate system ($\alpha$, $\delta$) we need to carry out an astrometric calibration of the ILMT fields. For that, we choose quasars as astrometric standards because of their negligible proper motions (PM) and trigonometric parallaxes. The number of quasars we know as of today has significantly increased since their first identification about six decades ago (Schmidt 1963). This increase is mainly due to large surveys such as the Two-degree Field (2dF) QSO survey (Croom et al. 2004) in the southern sky and the Sloan Digital Sky Survey (SDSS) in the northern sky (Abolfathi et al. 2018, P\^aris et al. 2018). Quasars known from different surveys are also gathered and put together in the form of catalogues, through several releases by V\'eron-Cetty \& V\'eron (2006, 2010) and the Million Quasars (Milliquas) catalogue by Flesch (2017). These catalogues have quasars from different origins, with different accuracies in their optical positions. However, these catalogues provide neither the errors associated with the equatorial coordinate positions nor other information such as parallax and PM. Therefore, quasars taken from these catalogues cannot be directly used as sources to carry out astrometric calibration. The main motivation of the present work is therefore to construct a catalogue of quasars that will be used to carry out the astrometric calibration of the ILMT fields by including (i) more precise positions of the sources with uncertainties, (ii) additional important information such as parallax and PM, and (iii) the photometry of the objects. We describe the data used in this work in Section 2. The procedures followed to make the quasar catalogue and the outcome are presented in Section 3. Applications of the catalogue are discussed in Section 4, followed by the Summary in the final Section. \begin{figure} \includegraphics[scale=0.7]{sepn.pdf} \caption{Distribution of the angular separation between the quasars in the Million Quasars catalogue and their counterparts in {\it Gaia}-DR2.} \label{fig:fig-1} \end{figure} \section{Data used} To select a catalogue of quasars suitable for the astrometric calibration of the ILMT observations we need to consider all quasars we know as of today. These can come from a wide variety of surveys carried out at different wavelengths such as optical, infrared, radio, etc. One such quasar catalogue suited for our purpose is the Milliquas catalogue. This is the largest compilation of quasars we have as of today. This catalogue contains about 1998464 quasars taken from all the quasar surveys available in the literature.
The majority of quasars in the Milliquas catalogue comes from the SDSS, one of the most ambitious sky surveys, covering more than a quarter of the sky in the northern hemisphere in five optical filters. Other quasars included in Milliquas are from the NBCKDE, NBCKDE-v3 (Richards et al. 2009, 2015), XDQSO (Bovy et al. 2011; Myers et al. 2015), AllWISE and Peters photometric quasar catalogues (Peters et al. 2015), as well as quasars from all-sky radio/X-ray surveys. As the Milliquas catalogue is a compilation of various quasar surveys, it has varied uncertainties in the equatorial coordinates. For astrometric calibration, one needs to have quasars with precise positions. Precise positions of celestial sources are provided by the survey presently being carried out by the European Space Agency {\it Gaia} mission. {\it Gaia}-DR2 (Lindegren et al. 2018, Gaia Collaboration et al. 2018) contains data from the all-sky astrometric and photometric survey conducted by {\it Gaia} and provides accurate positions for about 1.7 billion sources, with PM and parallax measurements for about 1.3 billion sources (Marrese et al. 2019). Therefore, to get accurate positions for the quasars in the Milliquas catalogue we used the precise and homogeneous measurements from {\it Gaia}-DR2. \begin{table*} \caption{The ILMT Quasar (IQ) catalogue. The details of each column of this table are given in Table 2. The full catalogue is available in the electronic version of the article.} \label{tab:table-1} \small \setlength{\tabcolsep}{4pt} \begin{tabular}{lccccccccccl} \hline ID-1 & RA & RA-ERR & DEC & DEC-ERR & z & ... & ... & PM-DEC & PM-DECERR & EPSILON & D \\ (1)* & (2)* & (3)* & (4)* & (5)* & (6) & ... & ... & (16)* & (17)* & (18) & (19) \\ \hline 2.85518e+18 & 0.06120 & 0.54785 & 29.23513 & 0.28529 & 1.90 & ... & ... & 1.31259 & 0.50398 & 0.00 & 0.00 \\ 2.85525e+18 & 0.07183 & 0.22781 & 29.50171 & 0.15339 & 1.40 & ... & ... & 0.27730 & 0.26071 & 0.00 & 0.00 \\ 2.85525e+18 & 0.10806 & 0.62917 & 29.50235 & 0.56513 & 2.51 & ... & ...
& -1.02912 & 1.06549 & 1.44 & 0.87 \\ \hline \multicolumn{12}{l}{\footnotesize{ *The values are given up to 5 decimal places; the original values retrieved from the {\it Gaia}-DR2 catalogue are given in the IQ catalogue}} \\ \multicolumn{12}{l}{\footnotesize{ available in the electronic version of the article.}}\\ \end{tabular} \end{table*} \begin{table*} \caption{Column information of the ILMT Quasar (IQ) catalogue.} \label{tab:table-2} \small \setlength{\tabcolsep}{3pt} \begin{tabular}{llccl} \hline Number & Column Name & Format & Unit & Description \\ \hline 1 & ID-1 & String & & Object name as given in {\it Gaia}-DR2 \\ 2 & RA & Double & degree & Right Ascension (J2000) \\ 3 & RA-ERR & Double & $mas$ & Error in Right Ascension retrieved from {\it Gaia}-DR2 \\ 4 & DEC & Double & degree & Declination (J2000) \\ 5 & DEC-ERR & Double & $mas$ & Error in Declination retrieved from {\it Gaia}-DR2 \\ 6 & z & Double & & Redshift \\ 7 & ID-2 & String & & Object ID in the Milliquas catalogue \\ 8 & TYPE & String & & Classification of the object \\ 9 & PROB & Double & & Probability that the object is a quasar$^\bullet$ \\ 10 & MAG & Double & & {\it Gaia} G-band magnitude \\ 11 & MAG-ERR & Double & & Error in {\it Gaia} G-band magnitude \\ 12 & PLX & Double & $mas$ & Parallax \\ 13 & PLX-ERR & Double & $mas$ & Error in parallax \\ 14 & PM-RA & Double & $mas \, yr^{-1}$ & Proper motion in RA \\ 15 & PM-RAERR & Double & $mas \, yr^{-1}$ & Error in proper motion in RA \\ 16 & PM-DEC & Double & $mas \, yr^{-1}$ & Proper motion in DEC \\ 17 & PM-DECERR & Double & $mas \, yr^{-1}$ & Error in proper motion in DEC \\ 18 & EPSILON & Double & & Astrometric excess noise \\ 19 & D & Double & & Significance of excess noise \\ \hline \multicolumn{5}{l}{\footnotesize{ $^\bullet$ The details on how the probability is assigned to each quasar can be found in Flesch (2015).}} \\ \end{tabular} \end{table*} \section{Methods followed and the resulting catalogue} It is known that quasars represent quasi-ideal astrometric reference sources over the celestial sphere because of their negligible PM, and they are thus suitable candidates for carrying out the astrometric calibration (Souchay et al. 2015) of the ILMT survey. We therefore aim to gather accurate positions, PM and trigonometric parallaxes for all quasars available in the Milliquas catalogue from the {\it Gaia}-DR2 database and then select a sub-set of them for ILMT use. To calculate the absolute or resultant PM $\mu$ we used the relation given by Varshni (1982) \begin{gather} \mu = (\mu_{\alpha}^2\cos^2\delta + \mu_{\delta}^2)^{1/2} \label{eq:pm_eq} \end{gather} where $\alpha$ and $\delta$ are the right ascension (RA) and declination (DEC), respectively. We collected $\mu_{\alpha}\cos\delta$ and $\mu_{\delta}$ values from the {\it Gaia}-DR2 database. The error in $\mu$ was calculated using the standard error propagation method.
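As an illustration, the following Python sketch (ours) computes $\mu$ and its propagated uncertainty from Equation 1; the column naming follows the {\it Gaia} archive convention, where \verb|pmra| already includes the $\cos\delta$ factor, and the correlation between $\mu_{\alpha}\cos\delta$ and $\mu_{\delta}$ is neglected:

\begin{verbatim}
# Resultant proper motion and its uncertainty from Equation 1.
import numpy as np

def total_pm(pmra, pmdec, pmra_err, pmdec_err):
    """Return mu and sigma_mu in mas/yr, neglecting the pmra-pmdec
    correlation term of the full covariance matrix."""
    mu = np.hypot(pmra, pmdec)
    mu_err = np.sqrt((pmra * pmra_err)**2 + (pmdec * pmdec_err)**2) / mu
    return mu, mu_err

mu, mu_err = total_pm(1.2, -0.8, 0.3, 0.4)
print(mu, mu_err)   # ~1.44 mas/yr with its 1-sigma error
\end{verbatim}

To arrive at a separate list of quasars for the ILMT field of view (FoV), we followed these steps: \begin{enumerate} \item We cross-correlated nearly 2 million objects in the Milliquas catalogue with {\it Gaia}-DR2, with an angular proximity of less than 2". We used a 2" angular separation because a large fraction of the objects in the Milliquas catalogue are from the SDSS, which has imaging data with seeing less than 2" (Ross et al. 2011). By cross-correlating the Milliquas catalogue with {\it Gaia}-DR2, we arrived at a sample of 1235600 objects spanning a range of redshifts up to $z$ = 6.4.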
The distribution of the angular separation between the position in the Milliquas catalogue and the position in {\it Gaia}-DR2 for the matched objects is shown in Fig. 1. The distribution has a range between 0 and 1.97", with a mean of 0.15" and a standard deviation of 0.18". About $99.8\%$ of the objects are matched within 1". The distributions of $z$ and G-band brightness of these objects are shown in Fig. 2. \item Among the many parameters provided by {\it Gaia}-DR2, two parameters that are relevant for quasar target selection are the astrometric excess noise ($\epsilon_i$) and its significance, the astrometric excess noise significance (D). The excess noise $\epsilon_i$ quantifies the disagreement between the observations and the best-fitting standard astrometric model adopted by {\it Gaia} (Lindegren et al. 2012). A value $\epsilon_i$ = 0 implies that the source is astrometrically well behaved, and a value $\epsilon_i >$ 0 indicates that the residuals are statistically larger than expected. In practice, however, some sources may not behave exactly according to the adopted astrometric model. Therefore, $\epsilon_i$ is complemented by its significance D (Lindegren et al. 2012). If D $\leq 2$, $\epsilon_i$ is probably not significant and the source may be astrometrically well-behaved even if $\epsilon_i$ is large\footnote{https://gea.esac.esa.int/archive/documentation/GDR2/Gaia$\_$archive/\\chap$\_$datamodel/sec$\_$dm$\_$main$\_$tables/ssec$\_$dm$\_$gaia$\_$source.html}. Therefore, we only selected sources with D $\leq$ 2 from {\it Gaia}-DR2. This yielded a total of 1047747 quasars covering the whole sky. For these quasars, we calculated the PM using Equation 1. The distribution of their PM is shown in Fig. 3. From this figure, it is evident that except for a few objects (about $0.25\%$) most of them have PM less than 20 $mas ~yr^{-1}$, with a mean value and standard deviation of $1.808 \, mas \, yr^{-1}$ and $2.878 \, mas \, yr^{-1}$, respectively. \item From the list of quasars obtained at step 2 above, we made a separate catalogue of quasars for the ILMT stripe. The Devasthal observatory, where the ILMT is being installed, is located at a latitude near $29^{\circ} 22' 26"$ (Surdej et al. 2018). The width of the ILMT FoV is $\sim$27'. However, the ILMT sky at zenith will change with time due to precession, as shown in Fig. 4. It has been found that if we take a $\sim34'$ wide stripe instead of $\sim27'$, the effect of precession during the next 10 years will be taken into account. Since {\it Gaia} has a limiting G-band magnitude of 21\footnote{https://www.cosmos.esa.int/web/gaia/dr2}, we selected from the sample of 1047747 quasars obtained in step 2 only those having a declination ($\delta$) in the range $29.09^{\circ} \leq \delta \leq 29.66^{\circ}$ and G-mag $\leq$ 21, since only these will be accessible for observations with the ILMT. Using the above criteria, we arrived at a sample of 6904 quasars. For slightly less than $2\%$ of these, we do not have the redshift information in the Milliquas catalogue. Excluding those, we arrived at a final catalogue of 6755 quasars available within the ILMT stripe.
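These selection cuts amount to a few boolean masks on the cross-matched table. A minimal sketch (ours; the {\it Gaia} columns keep their archive names, while the table and redshift column names are hypothetical) is:

\begin{verbatim}
# Selection cuts of steps 2-3 applied to the cross-matched sample;
# 'cat' stands for the matched table (e.g. an astropy Table).
import numpy as np

def ilmt_selection(cat):
    well_behaved = cat["astrometric_excess_noise_sig"] <= 2.0  # step 2
    in_stripe = (cat["dec"] >= 29.09) & (cat["dec"] <= 29.66)  # step 3
    bright = cat["phot_g_mean_mag"] <= 21.0                    # step 3
    has_z = np.isfinite(cat["z"])                              # step 3
    return cat[well_behaved & in_stripe & bright & has_z]
\end{verbatim}

\item A plot of the PM of these objects as a function of their G-band brightness is shown in Fig. 5. From this figure, it is evident that the majority of the quasars have PM $<$ 20 $mas \, yr^{-1}$. We only found 17 quasars in this list with PM $>$ 20 $mas \, yr^{-1}$, and all of them are fainter than 19.5 mag in the G-band.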
The nature of these 17 objects could not be ascertained due to the lack of optical spectra for them. Therefore, we removed those 17 quasars from our list and arrived at a final sample of 6738 quasars that will be visible with the ILMT and could be used as astrometric calibrators. Varshni (1982) claimed the existence of high-PM quasars, namely PHL 1033, LB 8956 and LB 8991, with PM values of 0.049 $\pm$ 0.013, 0.061 $\pm$ 0.018 and 0.050 $\pm$ 0.018 $arcsec \, yr^{-1}$ respectively. We checked the PM of these objects in {\it Gaia}-DR2 and found PM values of 0.121 $\pm$ 0.435, 0.188 $\pm$ 0.151 and 0.056 $\pm$ 0.072 $mas ~yr^{-1}$ for PHL 1033, LB 8956 and LB 8991, respectively. This, along with the observations in Fig. 5, points to quasars having PM $<$ 20 $mas \, yr^{-1}$. \item The distributions of the redshifts, G-band magnitudes and parallaxes of the ILMT quasars are illustrated in Fig. 6. They span redshifts up to $z$ = 4.9. Their distribution in the galactic coordinate system is shown in Fig. 7. The sample catalogue and the description of its columns are given in Table 1 and Table 2, respectively. The full catalogue is available in the electronic version of the present article. \end{enumerate} \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.65]{all_distfn.pdf} \end{center} \caption{Distribution of redshift (top) and {\it Gaia} G-band magnitude (bottom) of the quasars selected from the Million Quasars catalogue.} \label{fig:fig-2} \end{figure} \begin{figure}[!ht] \includegraphics[scale=0.65]{pmall_dist1n.pdf} \caption{Distribution of the PM for quasars in the Million Quasars catalogue with D $\leq$ 2. There are about 2615 quasars with PM $>$ 20 $mas \, yr^{-1}$; their PM distribution is shown in the small box on the same figure.} \label{fig:fig-3} \end{figure} \begin{figure}[!ht] \includegraphics[scale=0.5]{precession_finaln.pdf} \caption{(Top) Polar plot showing the astrometric deviation of the ILMT stripe between the 2019 and 2029 epochs due to precession. (Bottom) Deviation in astrometry between the 2019 and 2029 epochs, considering that the original ILMT stripe is rectangular in 2019 (blue lines); pink lines represent the same in 2029.} \label{fig:fig-4} \end{figure} \begin{figure}[!ht] \hspace*{-0.3cm}\includegraphics[scale=0.6]{pm_ilmtn.pdf} \caption{Proper motion versus G-mag of the quasars in the ILMT stripe.} \label{fig:fig-5} \end{figure} \begin{figure}[!ht] \includegraphics[scale=0.8]{ilmt_distn.pdf} \caption{Distributions of the redshift, G-band magnitude and absolute parallax of the selected quasars in the ILMT stripe.} \label{fig:fig-6} \end{figure} \begin{figure}[!ht] \includegraphics[scale=0.52]{galacticn.pdf} \caption{Sky distribution of the selected ILMT quasars in the galactic coordinate system. The real surface density of quasars is not considered in this plot.} \label{fig:fig-7} \end{figure} \section{Applications of the catalogue} The ILMT will be continuously scanning the sky passing over the zenith. Such observations will be of interest for a wide range of astrophysical applications, such as the detection of many extragalactic objects like supernovae, galaxy clusters, active galactic nuclei (AGN)/quasars, gravitationally lensed systems, etc. (Surdej et al. 2018). Also, as the zenith region of the sky will be repeatedly scanned by the ILMT, the accumulated observations will be very useful for photometric variability studies of different types of celestial sources.
As we have already arrived at a catalogue of 6738 quasars that will be covered by the ILMT, we describe below some of the potential applications of this quasar catalogue. \begin{figure}[!ht] \includegraphics[width=0.5\textwidth]{imag_vs_centroidn.png} \caption{Estimation of the $1\sigma$ uncertainty in the astrometric position of point sources with different magnitudes in the ILMT CCD images.} \label{fig:fig-8} \end{figure} \begin{figure}[!ht] \includegraphics[width=0.5\textwidth]{alpha_distn.pdf} \caption{Distribution of the selected ILMT quasars in RA.} \label{fig:fig-9} \end{figure} \subsection{Astrometric calibration of the ILMT field} The main application of this catalogue of quasars is to calibrate the ILMT observations in the world coordinate system. As this catalogue has accurate positions from {\it Gaia}-DR2, with errors in the positions of the order of a few $mas$, using these quasars we expect to achieve sub-arcsec astrometric accuracy in the ILMT survey. We performed a Monte Carlo simulation to estimate the astrometric accuracy of the survey. Given the pixel scale of 0.4" and the median seeing at the Devasthal observatory of 1.1" (Sagar et al. 2000; Sagar, Kumar \& Omar 2019 and references therein), several synthetic CCD frames were generated having a circular Gaussian point spread function at random locations, corresponding to different SDSS $i'$ magnitudes and assuming a single scan (i.e. an exposure time of 102 seconds), as demonstrated by Kumar et al. (2018). The SEP package (Barbary 2016), a Python implementation of Source Extractor, was then used to estimate the centroid of each synthetic source. The $1\sigma$ accuracy in estimating the centroid of a point source having the limiting $i'$ magnitude of 21.4 mag with the 4-m ILMT was found to be 0.09" (see Fig. 8). The distribution of the ILMT quasars in RA is shown in Fig. 9. This figure indicates that the ILMT quasars cover the entire range of RA; however, for RA between 3 and 6 hr and between 19 and 21 hr the numbers of quasars in the ILMT field are around 60 and 20 per hour angle, respectively, much lower than in the other RA ranges. In these ranges, we have to fall back on a separate database of astrometric standards such as the Tycho-2 catalogue (Hog et al. 2000), which has an astrometric uncertainty of 0.06" per coordinate and an average star density of $\sim$180 per hour angle. Hence, after adding this uncertainty in quadrature to our estimate, the $1\sigma$ positional accuracy for the faintest stars detectable by the ILMT will be degraded to 0.11" in the aforementioned RA ranges.
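The essence of this Monte Carlo simulation can be sketched in a few lines of Python using the SEP package; the frame size, source flux and noise level below are placeholders of our own, not the values used to produce Fig. 8:

\begin{verbatim}
# Toy version of the centroiding Monte Carlo: inject one Gaussian
# point source per frame, recover it with SEP, record the offsets.
import numpy as np
import sep

rng = np.random.default_rng(0)
pixscale, fwhm = 0.4, 1.1                   # arcsec/pix, seeing FWHM
sigma_pix = fwhm / pixscale / 2.355
offsets = []
for _ in range(200):                        # number of MC realisations
    x0, y0 = rng.uniform(20, 44, size=2)    # true sub-pixel position
    yy, xx = np.mgrid[0:64, 0:64]
    flux = 2000.0                           # placeholder source counts
    img = flux * np.exp(-((xx - x0)**2 + (yy - y0)**2)
                        / (2 * sigma_pix**2))
    img += rng.normal(0.0, 10.0, img.shape) # placeholder sky noise
    img = np.ascontiguousarray(img, dtype=np.float32)
    objs = sep.extract(img, 5.0, err=10.0)  # 5-sigma detection
    if len(objs) == 1:
        offsets.append((objs["x"][0] - x0) * pixscale)
        offsets.append((objs["y"][0] - y0) * pixscale)
print("1-sigma accuracy per coordinate: %.3f arcsec" % np.std(offsets))
\end{verbatim}

\subsection{Quasar variability} Optical flux variations in quasars have been known since their discovery. Quasars have been studied for optical flux variations over a range of time scales from minutes to days (Wagner \& Witzel 1995, Ulrich et al. 1997). Most of the available studies are limited by the time resolution of the observations, manifested as gaps in the data. The 4K$\times$4K CCD camera mounted on the ILMT can operate over the 4000 to 11000 $\AA$ spectral range in three different SDSS equivalent filters $g^{\prime}$, $r^{\prime}$ and $i^{\prime}$. The typical exposure time for a single frame is $\sim$ 104 s (Surdej et al. 2018). Only one of those filters will be used throughout a single night. The observing strategy of the ILMT will enable one to collect good quality data for most of the 6738 quasars that we arrived at in this work primarily for astrometric calibration.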
Therefore, the quasars catalogued in this work can be studied for optical flux variability in different optical bands as well as for colour variability. When more epochs of observations become available from the ILMT in the future, new candidate quasars can also be discovered based on colour-colour diagrams (Richards et al. 2002) as well as on optical variability characteristics (Graham et al. 2016). \subsection{Variability of lensed quasars} Gravitational lensing, the deflection of light by a foreground intervening compact object (galaxy, cluster, etc.), constitutes a powerful tool that finds applications in many astrophysical areas. Gravitational lensing of distant quasars leads to the formation of multiply imaged quasars (Narayan \& Bartelmann 1999; Ehlers \& Schneider 1992). For such lensed quasars that show photometric variations, it is possible to measure time delays between the lensed quasar images by cross-correlating their light curves; these delays can in turn be used to determine the Hubble-Lema\^itre constant $H_{0}$ (Refsdal 1964, 1966a) and to help constrain the dark energy equation of state (Kochanek \& Schechter 2004). To date, time delays are known for about 24 lensed quasars, ranging from a few days to a few years (Rathna Kumar et al. 2015). Measuring such time delays requires long-term monitoring of lensed quasars. In the catalogue of quasars arrived at in this work we have identified only one gravitationally lensed quasar, namely J1251+295 ($\alpha_{2000}$ = 12:51:07.57, $\delta_{2000}$ = 29:35:40.50), which has 4 lensed images with a maximum angular separation of $\sim1.8"$ and can be easily resolved with the ILMT (the median seeing at the Devasthal site is of the order of 1.1"). The ILMT will be able to provide good light curves for this source and many others. Moreover, the ILMT is also expected to detect about 50 new multiply imaged quasars (Surdej et al. 2018), which opens up the possibility of deriving more time delays among lensed quasars.
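For illustration, the basic cross-correlation delay estimate can be sketched as follows on synthetic, evenly sampled light curves (ours; real light curves are irregularly sampled and require dedicated methods such as those of Rathna Kumar et al. 2015):

\begin{verbatim}
# Toy time-delay estimate: image B is a shifted, noisier copy of
# image A; the delay is recovered as the lag maximising the
# cross-correlation of the mean-subtracted light curves.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 400, 1.0)                  # days, even sampling
signal = (np.sin(2 * np.pi * t / 90)
          + 0.3 * np.sin(2 * np.pi * t / 17))
true_delay = 25                             # days
lc_a = signal + rng.normal(0, 0.05, t.size)
lc_b = np.roll(signal, true_delay) + rng.normal(0, 0.05, t.size)

a = lc_a - lc_a.mean()
b = lc_b - lc_b.mean()
lags = np.arange(-60, 61)
cc = [np.sum(a * np.roll(b, -lag)) for lag in lags]
print("recovered delay: %d days" % lags[np.argmax(cc)])   # ~25
\end{verbatim}

\section{Summary} This work aimed at arriving at a catalogue of quasars that could be used to calibrate the ILMT observations in the world coordinate system. The details are summarized below. \begin{enumerate} \item By cross-correlating the Milliquas catalogue with {\it Gaia}-DR2, and imposing the condition that matched sources have an astrometric excess noise significance D $\leq$ 2, we arrived at a sample of 1047747 quasars over the whole sky. Of these, 6755 quasars are available in the ILMT stripe. \item An investigation of the distribution of proper motion for these 6755 quasars has revealed 17 sources ($\sim$0.3\%) with a PM greater than 20 $mas \, yr^{-1}$. This confirms that the remaining quasars in the ILMT stripe have PM less than 20 $mas \, yr^{-1}$. As the nature of these 17 objects could not be ascertained due to the lack of optical spectra, they were excluded from our list. \item Our final quasar catalogue for the ILMT contains 6738 quasars. Of these, as per the Milliquas catalogue, 2405 are spectroscopically confirmed type I quasars with broad emission lines, 3 are AGN, 7 are BL Lac objects, 1 is a Type II AGN, and 4322 objects were selected through photometric techniques with a probability $>$ 90\% of being quasars. This information has been incorporated in the 8th column of our catalogue (see Table 2). The catalogue made available in this work, in addition to its use for astrometric calibration, can also serve as a large sample for quasar variability studies.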
\item We expect to achieve an astrometric accuracy of better than 0.1" in the ILMT survey by using our proposed quasar catalogue. \end{enumerate} \section*{Acknowledgements} We thank the referee for her/his critical review of our manuscript. AKM and RS thank the National Academy of Sciences, India, for a research grant. AKM specially acknowledges the Universit\'e de Li\`ege for providing the scholarship ``Erasmus + International Credit Mobility''. AKM, CSS and RS acknowledge the Alexander von Humboldt Foundation, Germany, for the award of the Group linkage long-term research program between IIA, Bengaluru, and the European Southern Observatory, Garching, Germany. AKM and RS are thankful to the Director, IIA, for providing institutional infrastructural support during this work. This research has used data from the SDSS and from {\it Gaia}, operated by the European Space Agency. \vspace{-1em} \begin{theunbibliography}{} \vspace{-1.5em} \bibitem{latexcompanion} Abolfathi B., et al., 2018, ApJS, 235, 42 \bibitem{latexcompanion} Bovy J., et al., 2011, ApJ, 729, 141 \bibitem{latexcompanion} Barbary K., 2016, Journal of Open Source Software, 1, 58 \bibitem{latexcompanion} Croom S., et al., 2004, in M\'ujica R., Maiolino R., eds, Multiwavelength AGN Surveys. pp 57--62, doi:10.1142/9789812702432 0015 \bibitem{latexcompanion} Ehlers J., Schneider P., 1992, Gravitational Lensing. p. 1, doi:10.1007/3-540-56180-3 1 \bibitem{latexcompanion} Flesch E. W., 2015, Publ. Astron. Soc. Australia, 32, e010 \bibitem{latexcompanion} Flesch E. W., 2017, VizieR Online Data Catalog, p. VII/280 \bibitem{latexcompanion} Gaia Collaboration, Mignard F., Klioner S., Lindegren L., Hernandez J., Bastian U., Bombrun A., 2018, A\&A, 616, A14 \bibitem{latexcompanion} Graham M., Djorgovski S. G., Stern D., Drake A. J., Mahabal A. A., Glikman E., 2016, in American Astronomical Society \bibitem{latexcompanion} Hog E., et al., 2000, The Tycho-2 Catalogue of the 2.5 Million Brightest Stars, Naval Observatory, Washington DC \bibitem{latexcompanion} Kochanek C. S., Schechter P. L., 2004, in Freedman W. L., ed., Measuring and Modeling the Universe. p. 117 (arXiv:astroph/0306040) \bibitem{latexcompanion} Kumar B., Pandey K. L., Pandey S. B., Hickson P., Borra E. F., Anupama G. C., Surdej J., 2018, MNRAS, 476, 2075 \bibitem{latexcompanion} Lindegren L., et al., 2018, A\&A, 616, A2 \bibitem{latexcompanion} Lindegren L., Lammers U., Hobbs D., O'Mullane W., Bastian U., Hern\'andez J., 2012, A\&A, 538, A78 \bibitem{latexcompanion} Marrese P. M., Marinoni S., Fabrizio M., Altavilla G., 2019, A\&A, 621, A144 \bibitem{latexcompanion} Myers A. D., et al., 2015, ApJS, 221, 27 \bibitem{latexcompanion} Narayan R., Bartelmann M., 1999, in Dekel A., Ostriker J. P., eds, Formation of Structure in the Universe. p. 360 \bibitem{latexcompanion} P\^aris I., et al., 2018, A\&A, 613, A51 \bibitem{latexcompanion} Peters C. M., et al., 2015, ApJ, 811, 95 \bibitem{latexcompanion} Rathna Kumar S., Stalin C. S., Prabhu T. P., 2015, A\&A, 580, A38 \bibitem{latexcompanion} Refsdal S., 1964, MNRAS, 128, 307 \bibitem{latexcompanion} Refsdal S., 1966a, MNRAS, 132, 101 \bibitem{latexcompanion} Richards G. T., et al., 2002, AJ, 123, 2945 \bibitem{latexcompanion} Richards G.
T., et al., 2009, ApJS, 180, 67 \bibitem{latexcompanion} Richards G. T., et al., 2015, VizieR Online Data Catalog, p. J/ApJS/219/39 \bibitem{latexcompanion} Ross A. J., et al., 2011, MNRAS, 417, 1350 \bibitem{latexcompanion} Sagar R., et al., 2000, A\&AS, 114, 349 \bibitem{latexcompanion} Sagar R., Kumar B., Omar A., 2019, Current Science, 117, 365 \bibitem{latexcompanion} Schmidt M., 1963, Nature, 197, 1040 \bibitem{latexcompanion} Souchay J., et al., 2015, A\&A, 583, A75 \bibitem{latexcompanion} Surdej J., et al., 2018, Bulletin de la Societe Royale des Sciences de Liege, 87, 68 \bibitem{latexcompanion} Ulrich M., Maraschi L., Urry C. M., 1997, ARA\&A, 35, 445 \bibitem{latexcompanion} Varshni Y. P., 1982, SScT, 521, 532 \bibitem{latexcompanion} V\'eron-Cetty M.-P., V\'eron P., 2006, A\&A, 455, 773 \bibitem{latexcompanion} V\'eron-Cetty M.-P., V\'eron P., 2010, A\&A, 518, A10 \bibitem{latexcompanion} Wagner S. J., Witzel A., 1995, ARA\&A, 33, 163 \end{theunbibliography} \clearpage \end{document}
\section{Introduction} \label{sect:intro} Magnetic Resonance (MR) images are obtained by means of a technique based on magnetic fields, where frequencies in the range of radio waves (8--130 MHz) are used. This technique obtains medical images with the highest level of detail available so far. In recent years, MR images have become essential to obtain quality images from any part of the human body, since they provide both functional and morphological information on anatomy and pathological processes, with a spatial resolution and contrast much higher than those obtained by means of other techniques for medical image acquisition. Concerning lumbar pathologies, MR imaging is the preferred imaging modality among radiologists and physicians specialized in the lumbar spine and the spine in general. Thanks to MR images they can find disorders in nerve structures, vertebrae, intervertebral discs, muscles and ligaments with much more precision than ever \citep{roudsari2010lumbar}. Manual inspection and analysis carried out by human experts (typically radiologists) is the most common methodology to extract information from MR images. Visual inspection is carried out slice by slice in order to determine the location, size and pattern of multiple clinical findings in the lumbar structures, which can be either normal or pathological. Manual inspection of slices depends strongly on the experience of each expert, so the variability due to the different criteria of experts is a challenge that cannot be ignored \citep{carrino2009lumbar, berg2012reliability}. Radiologists, even those with great experience, need a lot of time to perform the visual inspection of images, so this is a very slow task as well as a tedious and repetitive one. In fact, the excess of information to be processed visually causes fatigue and loss of attention, which leads radiologists to miss some obvious nuances because of the ``temporary blindness due to workload excess'' \citep{konstantinou2012visual}. Current progress in Artificial Intelligence (AI) and its application to medical imaging is providing new and more sophisticated algorithms based on Machine Learning (ML) techniques. These new algorithms are complementary to the existing ones in some cases, but in general they perform much better, because most of the existing ones are knowledge-based (they do not learn from data). The new algorithms are much more robust at detecting the lumbar structures (i.e., vertebrae, intervertebral discs, nerves, blood vessels, muscles and other tissues) and represent a significant reduction in the workload of radiologists and traumatologists \citep{coulon2002quantification,van2005semi,de2014robust,de2015automatic}. In the context of AI, automatic semantic segmentation is currently the most widely used technique \citep{litjens2017survey}. This technique classifies each individual pixel of an image into one of several classes or categories; each class or category corresponds to a type of real-world object to be detected. In recent years, Convolutional Neural Networks (CNNs) have come to be considered the best ML technique to address semantic segmentation tasks. However, CNNs require a very large number of manually annotated images to properly estimate the values of the millions of weights corresponding to all the layers of any CNN topology designed by a Deep Learning (DL) expert. The robustness and precision of any classifier based on CNNs strongly depend on the number of samples available to train the weights of the CNN.
So, the challenge in all projects addressing the task of semantic segmentation is the availability of large enough datasets of medical images. In order to have a minimum number of samples to train models, a manual segmentation procedure was designed in this work, where both T1w and T2w MR image types were used to manually adjust the boundaries between structural elements and tissues. Subsection \ref{image:labels:ground:truth} provides more detail about both MR image types. The main objective of this study is to use a limited dataset of MR images to reach an accurate and efficient segmentation of the structures and tissues of the lumbar region by means of individually optimized CNNs or ensembles of several CNNs; all the topologies used were based on the original U-Net architecture, i.e., they are variants of the U-Net. \medskip This paper is organised as follows: Section \ref{sect:sota} reviews the state of the art and references other works related to the automatic semantic segmentation of medical images. Section \ref{sect:resources} provides details about the resources used, where Subsection \ref{sect:dataset} describes the dataset used in this work, and Subsection \ref{sect:soft:and:hw} provides details of the hardware infrastructure and software toolkits. Section \ref{sect:methodology} describes the block types used in this work to design CNN topologies as variants of the original U-Net architecture. Section \ref{sect:experiments} describes the experiments carried out in this work. Sections \ref{sect:results} and \ref{sect:discussion} present and discuss the results, respectively. Finally, Section \ref{sect:conclusions} concludes by taking into account the defined objectives and outlines possible future work. \section{Related work} \label{sect:sota} Fully Convolutional Networks (FCNs) are one of the topologies of Deep Neural Networks (DNNs) successfully used for semantic segmentation \citep{long2015fully}. FCNs come from the adaptation of CNNs used for image classification, and generate a map of spatial labels as output. FCNs are compared with AlexNet \citep{krizhevsky2017imagenet}, VGG16 \citep{simonyan2014very} and GoogLeNet \citep{szegedy2015going} in \cite{long2015fully}. The topology known as FCN-8, which comes from an adaptation of VGG16, was the one that obtained the best results on the 2012 PASCAL VOC segmentation challenge \citep{everingham2010pascal}. Nevertheless, FCNs present an important limitation when coping with semantic segmentation: the fixed size of the receptive field cannot deal with objects of different sizes, with the effect that such objects are fragmented or misclassified. Furthermore, relevant details of the objects are lost because the deconvolution process is too coarse \citep{noh2015learning}. New approaches arose to overcome the limitations of FCNs. A subset of the new approaches derives from FCNs and uses deep deconvolution. Both SegNet \citep{badrinarayanan2015segnet, badrinarayanan2017segnet} and DeConvnet \citep{noh2015learning} belong to this subset. SegNet is an autoencoder based on convolutional layers, where each layer in the encoder branch is paired with a layer in the decoder branch, in the sense that their shapes are the same. The \emph{softmax} activation function is used at the output of the last layer of the decoder branch. The addition of deeper encoder-decoder layer pairs provides a greater spatial context, which leads to smoother predictions and better accuracy as more pairs are added.
The performance potential of SegNet is shown in \cite{al2019boundary}, where a methodology is proposed to detect lumbar spinal stenosis in axial MR images by means of semantic segmentation combined with boundary delimitation.

The network architecture that is currently obtaining the best results is the U-Net \citep{ronneberger2015u}. This is an encoder-decoder architecture whose main feature is the merging of layers by concatenating the features of the layers at the same depth level; these concatenations are known as skip connections. U-Net has been used successfully for the semantic segmentation of liver \citep{christ2016automatic}, kidney \citep{cciccek20163d}, skin lesions \citep{lin2017skin}, prostate \citep{yu2017volumetric}, retinal blood vessels \citep{xiao2018weighted}, eye iris \citep{lian2018attention} and brain structures \citep{roy2018quicknat} in medical images, and in general for other types of tissues in different medical imaging datasets \citep{ibtehaz2020multiresunet}.

This work is an extension of our previous one, focused on the task of segmenting MR sagittal images to delineate structural elements of the anatomy of the lumbar region \citep{saenz2020semsegspinal}. There, we analysed some variations of the U-Net architecture by using (a) convolutional blocks \citep{simonyan2014very, ronneberger2015u}, (b) spatial attention models \citep{schlemper2019attention}, (c) deep supervision \citep{zeng20173d, goubran2020hippocampal} and (d) multi-kernels at input \citep{szegedy2015going}; the last one is based on a naive version of the Inception architecture \citep{szegedy2015going}. The integration of these block types improved the performance of the original U-Net architecture. However, not all the topologies designed by combining different block types obtained good results, due to the limited size of the dataset available when the experimentation was carried out. In our previous work we used manually annotated MR slices from $75$ patients; in this work we use slices from $181$ patients.

In order to improve the results obtained by classifiers operating alone, a widely used strategy is the use of ensembles of classifiers, that is, combinations of predictive models with similar but different features. In an ensemble, the predictions of several classifiers are combined to reduce the variance, under the assumption that the errors made by one classifier differ from those made by the others \citep{goodfellow2016deep}. In general, the prediction accuracy of an ensemble is better than the accuracy of each single classifier used in the ensemble \citep{bishop1995neural}. A comparative study of the performance of four strategies to combine the output of classifiers within ensembles for image recognition tasks is presented in \cite{ju2018relative}. The four strategies are ``Unweighted Average'' \citep{breiman2001random}, ``Majority Voting'', ``Bayes Optimal Classifier'' and ``Stacked Generalization'' \citep{wolpert1992stacked,van2007super}. The study presents experiments in which distinct network structures with different control points were used, and analyses the problem of overfitting, a typical problem of neural networks, and its impact on ensembles. Other approaches using ensembles in semantic segmentation tasks are based on transfer learning, where networks trained with datasets different from the one of the target task are retrained \citep{nigam2018ensemble}, or are based on ``Stacked U-Nets'' trained in two stages.
In this last case, ensembles of classifiers are used to detect morphological changes in the cell nucleus from the automatic segmentation of both nuclei regions and regions of overlapping nuclei \citep{kong2020nuclear}. The relevance of ensembles has led to works in which model compression techniques are applied to achieve real-time prediction performance in production environments \citep{holliday2017speedup}.

In this work, we propose new network topologies derived from the U-Net architecture which improve the topologies we presented in our previous work \citep{saenz2020semsegspinal}. The results presented here were obtained using both individual networks and ensembles; the proposed ensembles combine distinct network topologies. The dataset used to obtain these results is an extension of the one used in our previous work, to which manually segmented MR images from additional patients have been added.

\section{Resources}
\label{sect:resources}
Figure \ref{fig:modularArch} schematically shows the sequence of steps followed in this work. In the first step, the lumbar spine MR imaging dataset was selected, processed and partitioned into two subsets: one for training and validation, corresponding to $80\%$ of the patients, and another for testing, with images from the remaining $20\%$ of the patients. In turn, the first subset was partitioned into two subsets: one to train the models ($53\%$ of the entire dataset, referred to as the training subset) and the other to adjust the hyperparameters according to the results obtained ($27\%$ of the entire dataset, referred to as the validation subset). This partitioning of the largest subset was repeated three times in order to obtain three pairs of training and validation subsets, so that all the models could be evaluated with a $3$-fold cross-validation procedure.

In the second step, the modular framework was designed, from which distinct network topologies derived from the U-Net architecture can be easily defined; each derived topology is the result of combining several complementary and interchangeable blocks. Finally, the design and evaluation of the distinct topologies was carried out in the third and last step, where different configurations of ensembles were also evaluated.

It can be observed that all the variants derived from the U-Net architecture have two branches. The descending branch plays the role of an encoder, whereas the ascending one acts as a decoder. Both branches have four levels in all the variants tested in this work, and they are connected by a bottleneck block at the deepest level. The classification block is connected to the top layer of the decoder branch and includes the output layer. Predictions from the best variants were combined using Ensemble Learning techniques \citep{perrone1992networks,bishop1995neural,goodfellow2016deep}. Results of both individual networks and ensembles are presented in Section \ref{sect:results}, and the different ensembling strategies are detailed in Subsection \ref{subsect:ensembles}.
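As an illustration of this patient-level partitioning, the following minimal sketch builds the held-out test set and the three training/validation pairs; the function and variable names are illustrative only and do not correspond to the actual project code. Note that $\sfrac{2}{3}$ and $\sfrac{1}{3}$ of the $80\%$ yield the $53\%$ and $27\%$ figures mentioned above.

\begin{verbatim}
import random

def split_patients(patient_ids, seed=0):
    # Hold out ~20% of the patients for testing and build three
    # training/validation pairs from the remaining ~80%.
    rng = random.Random(seed)
    ids = list(patient_ids)
    rng.shuffle(ids)
    n_test = round(0.20 * len(ids))
    test, rest = ids[:n_test], ids[n_test:]
    folds = [rest[i::3] for i in range(3)]  # three disjoint folds
    pairs = []
    for k in range(3):
        val = folds[k]  # one fold (~27% of all patients) validates
        train = [p for j, f in enumerate(folds) if j != k for p in f]
        pairs.append((train, val))  # ~53% of all patients train
    return pairs, test
\end{verbatim}

Because the split is done over patient identifiers rather than over images, all the slices (and patches) of one patient always end up in the same subset.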
\begin{figure*}[t]
\centerline{\includegraphics[]{schematicWork.pdf}}
\caption{Steps taken in this work: a) data preparation and manual segmentation to create the ground-truth metadata, b) design of the modular framework to easily define U-Net variants, and c) evaluation of individual networks and ensembles to create more sophisticated models by combining different topologies.}
\label{fig:modularArch}
\end{figure*}

\subsection{Lumbar Spine MR Imaging Dataset}
\label{sect:dataset}
The MIDAS dataset is a large collection of Magnetic Resonance (MR) images corresponding to the lumbar spine. This dataset is one of the main outcomes of the homonymous project Massive Image Data Anatomy of the Spine (MIDAS). All the images from the same scanning session are accompanied by the report generated by the radiologist who performed the scan. In numbers, the MIDAS dataset contains more than 23,600 studies with a total of more than 124,800 MR images. All the studies and images correspond to patients who presented lumbar pathologies during 2015 and 2016, and were attended in the Public Health System of the Valencian Region. The public use of the MIDAS dataset was approved by the Ethics committee DGSP-CSISP Nº 20190503/12 once all the data (images, DICOM metadata and reports from radiologists) were properly anonymised by the \emph{``Banco de Im\'{a}genes M\'{e}dicas de la Comunidad Valenciana''} (BIMCV) \citep{de2014bimcv} (\url{https://bimcv.cipf.es/bimcv-projects/project-midas/}). Data management and organisation, including the process of data curation, was done by following the standard Medical Imaging Data Structure (MIDS) \citep{saborit2020medical}.

The dataset used in this work is a subset of the MIDAS dataset, where all the selected images were converted from the DICOM format to NIfTI, and the reports, together with other metadata, were stored using the JSON format. The hierarchical organisation of the NIfTI and JSON files follows the same tree structure of MIDS, where all the images of a particular scan are located in the same directory, and the directories of all the sessions belonging to one patient are in the same directory of a higher level.

\subsubsection{Selection and preparation of the dataset}
\label{sect:dataset:preprocessing}

\begin{table}
\begin{center}
\caption{\label{table:demographic}Demographic statistics of the 181 patients whose scans were used in this work}
\begin{tabular}{|l|r|r|r|r|}
\hline
 & \textbf{Mean} & \textbf{Std} & \textbf{Min} & \textbf{Max} \\ \hline
Age (years) & $53$ & $16.5$ & $9$ & $88$ \\ \hline
Weight (kg) & $74.1$ & $14.6$ & $29$ & $120$ \\ \hline
\end{tabular}
\end{center}
\end{table}

The ground-truth for the task of semantic segmentation was generated by manually segmenting a subset of the MIDAS dataset obtained by randomly selecting the studies corresponding to 181 patients. Each study contains several scanning sessions, and each session several MR images. The age of the selected patients ranges from 9 to 88 years, with an average of 53 years, and an unbalanced gender distribution of 105 women and 76 men. Table \ref{table:demographic} provides some statistics of the dataset used in this work to carry out all the experiments. The studies used in this work were selected according to the following criteria:
\begin{itemize}
\item Lumbar vertebrae must be included, together with other adjacent anatomical elements, in particular the upper sacral bones.
\item Each scan must have both types of sagittal MR images (T1w and T2w) available, because both will be jointly used as input to the models.
\item The T1w and T2w images of each study must fulfil predefined quality requirements in terms of brightness and noise.
\item Selected patients must not have undergone lumbar surgery.
\end{itemize}

Due to the different scan devices used (distinct manufacturers and different models), the MR images were acquired with different settings, but the magnetic field intensity was $1.5$ Tesla in all cases. Table \ref{tab:scan:settings} lists the ranges of values of the relevant configuration parameters according to the metadata accompanying each MR image.

\begin{table}
\begin{center}
\caption{\label{tab:scan:settings}Ranges of values of the most relevant configuration parameters of the scan devices}
\begin{tabular}{|l|c|c|}
\hline
View Plane Types & \multicolumn{2}{c|}{Sagittal} \\ \hline
Sequence Types & T1-weighted & T2-weighted \\ \hline
\begin{tabular}[c]{@{}l@{}}Repetition Time \\ ($ms$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}300.0 to \\ 764.38\end{tabular} & \begin{tabular}[c]{@{}c@{}}2000.0 to \\ 10172.214\end{tabular} \\ \hline
Echo Time ($ms$) & 6.824 to 17.424 & 84.544 to 145.0 \\ \hline
\begin{tabular}[c]{@{}l@{}}Spacing Between \\ Slices ($mm$)\end{tabular} & 3.6 to 6.0 & 3.6 to 6.0 \\ \hline
\begin{tabular}[c]{@{}l@{}}Imaging Frequency \\ ($MHz$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}42.568 to \\ 127.745\end{tabular} & \begin{tabular}[c]{@{}c@{}}42.568 to \\ 127.745\end{tabular} \\ \hline
Echo Train Length & 2.0 to 10.0 & 13.0 to 36.0 \\ \hline
Flip Angle & 80.0 to 160.0 & 90.0 to 170.0 \\ \hline
Height ($px$) & 320.0 to 800.0 & 320.0 to 1024.0 \\ \hline
Width ($px$) & 320.0 to 800.0 & 320.0 to 1024.0 \\ \hline
Pixel Spacing ($mm$) & 0.4688 to 1.0 & 0.3704 to 1.0 \\ \hline
Echo Number & 0.0 to 1.0 & 0.0 to 1.0 \\ \hline
\end{tabular}
\end{center}
\end{table}

The sagittal T1- and T2-weighted slices of each scanning session were aligned at the pixel level using the FLIRT functionality \citep{jenkinson2001global, jenkinson2002improved} of the FSL toolkit \citep{jenkinson2012fsl}. The input to the neural networks for each single slice is a 3D tensor of $H \times W \times 2$, where $H$ and $W$ are the height (rows) and the width (columns) of the image in pixels, and $2$ is the number of channels. Channel 0 corresponds to the T2-weighted image and channel 1 to the T1-weighted image. Once aligned, all the pixels of both channels (T1w and T2w) are normalised to zero mean and unit variance; normalisation is carried out for each channel independently. There are a total of 1,572 MR images in our dataset corresponding to different slices of the lumbar spine area. Most of the slices have an image resolution of $512 \times 512$ pixels. The number of slices per scanning session ranges from 8 to 14.

\subsubsection{Image Labels and Ground-Truth Metadata}
\label{image:labels:ground:truth}
The ground-truth metadata for the task of semantic segmentation in this work consists of bit masks generated from the manual segmentation carried out by two radiologists with high expertise in skeletal muscle pathologies. As mentioned above, the input to the neural networks is composed of T1- and T2-weighted slices aligned at the pixel level. Sagittal T2-weighted images are characterised by highlighting fat and water within the tissues, and are used by radiologists to distinguish the anatomical silhouette of the different structural elements of the lumbar region.
Sagittal T1-weighted images highlight fat tissue, and are used in cases where radiologists have doubts regarding some anatomical structures or findings, e.g., spinal cavity, epidural fat or radicular cysts. Figure \ref{fig:example:of:image} shows an example of two different slices from T1- and T2-weighted sagittal images and their semantic segmentation with the labels corresponding to the 11 target classes plus the background. The output used to train the neural networks is a stacked 3D tensor containing one bit mask per class. In other words, the ground-truth masks are tensors of $H \times W \times 12$, with 12 values per pixel, all set to 0 except the one corresponding to the class of the pixel, which is set to 1. Figure \ref{fig:example:of:image} represents each class with a different colour.

\begin{figure*}[t]
\centerline{\includegraphics[]{labels.pdf}}
\caption{Example of two different slices with the corresponding bit masks merged in a single coloured MR image using one different colour per class. From left to right: T1- and T2-weighted MR images, ground-truth semantic segmentation, and labels summary.}
\label{fig:example:of:image}
\end{figure*}

\subsubsection{Patch Extraction}
\label{subsect:patches}
As indicated in Subsection \ref{sect:dataset}, image acquisition was done with different settings and different image sizes. The dimensions of the input samples are relevant when using neural networks, because the height and width in pixels must be fixed at the network input. One possible strategy, adopted in many works, is to resize all the images to a fixed size. The strategy followed in this work is different: given an image of $H \times W$ pixels, where both $H$ and $W$ can vary from 320 to 1024, squared fragments of fixed size $D \times D$ were extracted by shifting approximately $S$ pixels horizontally and vertically. That is, given an input sample, i.e., a 3D tensor with dimensions $H \times W \times 2$, it is split into overlapping patches of size $D \times D \times 2$ extracted using a stride of $S \times S$. We selected $D = 256$ and $S = 192$ based on experimental results from our previous work \citep{saenz2020semsegspinal}; these values of $D$ and $S$ yield a balance between efficiency and accuracy.

In order to prepare the training and evaluation samples, the same patch-extraction process was applied to the input images and to the corresponding ground-truth masks. As already mentioned above, the ground-truth masks are generated from the manual segmentation. Table \ref{table:summary} summarises the figures of the dataset, detailing the number of images per partition, the available 2D slices and the resulting squared fragments or patches. The sets of patients of the different partitions are disjoint, i.e., all the 2D images (and patches) from one patient are in the same partition. Figure \ref{fig:preprocessing} shows the image preprocessing steps followed in this work and the resulting patches, as explained in Subsection \ref{sect:dataset:preprocessing}.
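The per-channel normalisation and the patch extraction just described can be sketched as follows; this is an illustrative NumPy fragment (the names and the border handling are our own simplification, not the project code):

\begin{verbatim}
import numpy as np

def zscore(image):
    # Normalise each channel (T2w, T1w) independently to zero mean
    # and unit variance.
    m = image.mean(axis=(0, 1), keepdims=True)
    s = image.std(axis=(0, 1), keepdims=True)
    return (image - m) / s

def extract_patches(tensor, D=256, S=192):
    # Split an H x W x C tensor into overlapping D x D patches with
    # stride S; the last patch of each axis is aligned to the image
    # border, so the shift is only approximately S there and every
    # pixel is covered. The same function is applied to the
    # ground-truth masks.
    H, W = tensor.shape[:2]
    rows = sorted({min(r, H - D) for r in range(0, H - D + S, S)})
    cols = sorted({min(c, W - D) for c in range(0, W - D + S, S)})
    return [((r, c), tensor[r:r + D, c:c + D])
            for r in rows for c in cols]
\end{verbatim}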
\begin{table}
\begin{center}
\caption{\label{table:summary}Dataset used for training and testing, in figures}
\begin{tabular}{|l|r|r|r|}
\hline
 & \textbf{Train \&} & & \\
 & \textbf{Validation} & \textbf{Test} & \textbf{Total} \\ \hline
MR images T2w and T1w & $148$ & $33$ & $181$ \\ \hline
Images 2D & $1,176$ & $396$ & $1,572$ \\ \hline
Patches 256$\times$256 & $18,147$ & $4,113$ & $22,260$ \\ \hline
\end{tabular}
\end{center}
\end{table}

\begin{figure*}[t]
\centerline{\includegraphics[]{preprocessing.pdf}}
\caption{Image preprocessing steps: (a) linear image registration, where the sagittal T1-weighted slice is aligned with the T2-weighted one, (b) both planes (T1- and T2-weighted) are normalised using the Z-score procedure, (c) both 2D slices are joined into a 3D tensor of $H \times W \times 2$, and (d) each 3D tensor and its corresponding ground-truth mask are split into overlapping patches of $256 \times 256$ pixels.}
\label{fig:preprocessing}
\end{figure*}

\subsection{Software and Hardware}
\label{sect:soft:and:hw}
The proposed network topologies were implemented using the TensorFlow \citep{abadi2016tensorflow} and Keras \citep{keras} toolkits. The linear (affine) image transformations were done using FLIRT \citep{jenkinson2001global,jenkinson2002improved} from the FSL software \citep{jenkinson2012fsl}. The ground-truth masks were manually segmented using the ITK-SNAP software \citep{py06nimg}. Training and evaluation were run on the high-performance computing infrastructure Artemisa of the \emph{``Instituto de F\'{i}sica Corpuscular''} (\url{https://artemisa.ific.uv.es}), formed by 20 worker nodes, each equipped with 2 Intel(R) Xeon(R) Gold 6248 CPUs at 2.50 GHz (20 cores each), 384 GB of ECC DDR4 memory at 2933 MHz, and one Tesla Volta V100 PCIe GPU.

\section{Methodology}
\label{sect:methodology}
\subsection{Topologies based on the U-Net architecture}
\label{sect:variants}
In this work, different topologies were designed from the U-Net architecture; the original U-Net architecture was used to obtain the baseline results. To do this, we defined a set of distinct interchangeable block types which are strategically combined to form the encoder and decoder branches. Some of the topologies presented here use, in the decoder branch, block types different from the ones used in the encoder branch; other topologies use the same block type in both branches. Figure \ref{fig:unet_Blocks} illustrates an example of a variant of the U-Net architecture and the distinct block types used in different parts of the topology. The next subsections explain all the block types used in this work.

\begin{figure*}[t]
\centerline{\includegraphics[]{unetBlocks.pdf}}
\caption{Example of how the proposed topologies based on the U-Net architecture (referred to with the identifier U1 in this document) are built from complementary block types:
%
(a) multi-kernels at input (M),
%
(b) three types of convolutional blocks (U-Net (U), VGG16 (V) and Dense Block (Q)), where U and Q are used in both encoder and decoder branches while V is used only in the encoder branch,
%
(c) Attention Gates (AG) replacing the skip connections between encoder and decoder branches with the purpose of fusing and selecting relevant features at each level between both branches,
%
and (d) Deep Supervision (DS), illustrated in Figure \ref{fig:ds:blocks}.}
\label{fig:unet_Blocks}
\end{figure*}

\begin{figure*}[h]
\centerline{\includegraphics[]{DSBlocks.pdf}}
\caption{Deep supervision block types.
%
DS.v1 and DS.v2 are used as alternatives to enhance the connections between encoder and decoder branches.
%
DS.v3 is used to enrich the input to the classification block; the output of the convolutional block at each level of the decoder branch is combined, by means of an element-wise sum, with the supervised signals coming from the previous level of the decoder branch.}
\label{fig:ds:blocks}
\end{figure*}

\subsubsection{Convolutional Block}
Three types of convolutional blocks were tested in this work: (i) the typical block used in the original U-Net \citep{ronneberger2015u}, which consists of two convolutional layers preceding a batch normalisation layer that is followed by an activation layer using the Rectified Linear Unit (ReLU); the kernel size of both convolutional layers is $3\times 3$. (ii) The convolutional block of VGG16 \citep{simonyan2014very}, composed of two or three convolutional layers with a $3\times 3$ kernel and followed by an activation layer with the Parametric Rectified Linear Unit (PReLU). And (iii) the convolutional dense block \citep{roy2018quicknat}, consisting of three convolutional layers. Each convolutional layer of this block type is preceded by a pair of consecutive layers: a batch normalisation layer followed by a ReLU activation layer. The kernel sizes of these three convolutional layers are, respectively, $5\times 5$, $3\times 3$ and $1\times 1$, and the number of channels of the three layers is set to 64. The input to the second layer is the concatenation of the input to the first layer and the output of the first layer; the input to the third layer is the concatenation of the input to the first layer, the output of the first layer and the output of the second layer. \cite{huang2017densely} refers to this type of connections as dense connections.

As Figure \ref{fig:unet_Blocks} shows, the number of filters (or channels) per block is given by the parameter $m$ at the first (or top) level of the encoder branch (i.e., the descending path); $m$ is multiplied by 2 when descending from one level to the next one, except in the case of the convolutional dense block type, which was set to 64 channels at all levels. Analogously, $m$ is divided by 2 when ascending from one level to the next one in the decoder branch (i.e., the ascending path).

\subsubsection{Multi-kernels at Input}
\label{subsubsect:multikernels}
In four of the proposed topologies, the input layer is connected to a multilevel feature extractor rather than to a single convolutional block. The multilevel feature extractor consists of four independent convolutional blocks with different kernel sizes: $1\times 1$, $3\times 3$, $5\times 5$ and $7\times 7$. The outputs of the four convolutional blocks are concatenated before entering the encoder branch, in order to extract spatial features at different scales. This is a variant of the naive version of the Inception network \citep{szegedy2015going}.

\subsubsection{Encoder Branch}
The encoder branch is made up of four consecutive convolutional blocks. Each block is followed by a two-dimensional max pooling layer with kernel and stride sizes equal to $2\times 2$ to shrink the feature maps to $\sfrac{1}{4}$ of their size (rows and columns divided by 2 each), while maintaining the depth (number of channels).
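To make the preceding descriptions concrete, the following Keras sketch shows the U-Net-style convolutional block, the multi-kernel input stage and one encoder level; it is an illustrative simplification with our own function names, not the actual implementation used in this work.

\begin{verbatim}
from tensorflow.keras import layers

def unet_conv_block(x, m):
    # Block type (i): two 3x3 convolutions, then batch
    # normalisation and a ReLU activation.
    x = layers.Conv2D(m, 3, padding="same")(x)
    x = layers.Conv2D(m, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

def multi_kernel_input(x, m):
    # Naive-Inception-style input stage: four parallel convolutions
    # with kernel sizes 1, 3, 5 and 7, concatenated channel-wise.
    branches = [layers.Conv2D(m, k, padding="same",
                              activation="relu")(x)
                for k in (1, 3, 5, 7)]
    return layers.Concatenate()(branches)

def encoder_level(x, m):
    # One encoder level: convolutional block plus 2x2 max pooling;
    # the un-pooled output is kept for the skip connection.
    skip = unet_conv_block(x, m)
    return layers.MaxPooling2D(pool_size=2, strides=2)(skip), skip
\end{verbatim}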
\subsubsection{Feature Fusion}
Three feature-fusion strategies were tested in this work:
\begin{enumerate}[(i)]
\item The skip connections used in the original U-Net architecture to connect the blocks at the same level of the encoder and decoder branches. The feature maps $C_n$ from level $n$ in the encoder branch are concatenated with the feature maps $T_{n+1}$ coming from the previous level in the decoder branch. This can be seen in Figure \ref{fig:unet_Blocks}, where $S_n = C_n$ and $D_n = concat(S_n, transposed\_conv(T_{n+1}))$ is the input to the convolutional block at level $n$ in the decoder branch. The bottleneck output is the special case $T_5 = C_5$.

\item Deep Supervision (DS). The underlying idea of deep supervision is to provide a complementary feature-map flow in the decoder branch. We use three versions: DS.v1 and DS.v2 are variants of deep supervision that generate complementary input to the convolutional blocks at each level of the decoder branch, while DS.v3 takes the outputs of the convolutional blocks of the decoder branch to generate a complementary input to the classification block. Deep supervision was introduced by \cite{lee2015deeply} to perform semantic discrimination at different scales in the intermediate layers, and also as a strategy to mitigate the gradient vanishing problem, as shown by \cite{szegedy2015going} in GoogLeNet and by \cite{sun2015deepid3} and \cite{shen2019object} in DeepID3.
\medskip

What is proposed in this work as DS.v1 (graphically illustrated in Figure \ref{fig:ds:blocks}) is a deep supervision block that replaces the skip connections between encoder and decoder branches. Block type DS.v1 is similar to the block used in DeepID3 by \cite{sun2015deepid3}, \cite{zeng20173d} and \cite{shen2019object} for the same purpose. In more detail, at each level $n$ of the encoder branch, including the bottleneck, the convolutional block generates a feature map, referred to as $C_n$.
%
The output tensor at the bottleneck level, i.e., the feature map used to start the decoder branch, is referred to as $C_5$ in Figure \ref{fig:ds:blocks}. When deep supervision is used, all the $C_n$ are transformed by a convolutional layer with a $1\times 1$ kernel and $m$ channels, where $m$ is the original number of channels at the first level of the encoder branch, before being combined with the ``supervised signal'' $S_{n+1}$ coming from the previous level. In DS.v1, the supervised signals are computed as $S_n = conv_{1 \times 1}(C_n) + up\_sampling(S_{n+1})$, with the special case $S_5 = conv_{1 \times 1}(C_5)$. Each $S_n$ is concatenated with $transposed\_conv(T_{n+1})$, i.e., the output of the convolutional block of the previous level in the decoder branch, $T_{n+1}$, is transformed by a transposed convolution before being concatenated with $S_n$ to obtain the input to the convolutional block at level $n$ of the decoder branch: $D_n = concat(S_n, transposed\_conv(T_{n+1}))$, as in the case of the original U-Net described above.
\medskip

A second deep supervision block type, referred to as DS.v2 and also illustrated in Figure \ref{fig:ds:blocks}, is used between the encoder and decoder branches. The output of the DS.v2 block at each level is downsampled by a max pooling layer with kernel and stride sizes equal to $2\times 2$, in order to shrink the feature maps to $\sfrac{1}{4}$ of their size (rows and columns divided by 2 each), while keeping the depth (number of channels) untouched.
The output of a DS.v2 block at one level is combined with the output of the DS.v2 block coming from the level above: $S_n = conv_{1 \times 1}(C_n) + max\_pool(S_{n-1})$, with the special case of the top level of both branches, where $S_1 = conv_{1 \times 1}(C_1)$.
\medskip

One additional deep supervision block type, referred to as DS.v3, was used to enrich the input to the classification block. Figure \ref{fig:ds:blocks} illustrates how the outputs of the convolutional blocks at each level of the decoder branch, $T_n$, are combined with the ``supervised signals'' coming from the previous level, $Z_{n+1}$. The supervised signals are upsampled to reach the same size as $T_n$ in order to compute the element-wise sum: $Z_n = conv_{1 \times 1}(T_n) + up\_sampling(Z_{n+1})$, $Z_1$ being in this case the input to the classification block.
%
The DS.v3 block type was also used in our previous work \citep{saenz2020semsegspinal} for the same purpose, where it was referred to as DS.
%
\item Attention Gate (AG). In three of the topologies proposed in this work, the skip connections between the encoder and decoder branches are replaced by a spatial attention model known as Attention Gate (AG) \citep{schlemper2019attention}. The purpose of the AG is to fuse and select relevant features at each level between both branches (a schematic sketch of this computation is given after this list).
%
Thanks to this, the relevant features automatically selected by the AG from the encoder branch are provided to the corresponding level of the decoder branch. With this strategy, the different levels of the decoder branch can use the relevant features extracted at the paired level of the encoder branch for the progressive reconstruction of the output mask.
%
AGs only retain relevant features from the encoder branch, which are concatenated with the feature maps obtained as output of each level in the decoder branch.
%
The feature maps from the encoder and decoder branches are transformed individually by a single convolutional layer with a $1\times 1$ kernel, then combined with an element-wise add operator and passed through a ReLU activation layer followed by another $1\times 1$ convolutional layer that, in turn, is followed by a sigmoid activation layer.
%
The sigmoid output values, within the range $[0, 1]$, act as a 2D mask used to filter the feature map coming from the respective level of the encoder branch.
%
Then, both the AG output $S_n$ and the feature map from the previous level of the decoder, $T_{n+1}$, are concatenated to connect the blocks at the same level; as explained previously, $D_n = concat(S_n, transposed\_conv(T_{n+1}))$.
%
The transposed convolution resizes $T_{n+1}$ to the same size as $S_n$. Transposed convolutional layers are represented by orange arrows in Figure \ref{fig:unet_Blocks}.
\end{enumerate}
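The AG computation just described can be sketched in Keras as follows; this is an illustrative fragment with our own naming, and it assumes the decoder feature map has already been resized (e.g., by a transposed convolution) to the spatial size of the encoder feature map:

\begin{verbatim}
from tensorflow.keras import layers

def attention_gate(enc, dec, m):
    # 1x1 convolutions on both feature maps, element-wise addition,
    # ReLU, a second 1x1 convolution and a sigmoid; the resulting
    # one-channel mask in [0, 1] re-weights the encoder features.
    a = layers.Conv2D(m, 1)(enc)
    b = layers.Conv2D(m, 1)(dec)
    g = layers.Activation("relu")(layers.Add()([a, b]))
    mask = layers.Activation("sigmoid")(layers.Conv2D(1, 1)(g))
    return enc * mask  # S_n, later concatenated with the decoder map
\end{verbatim}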
\subsubsection{Bottleneck}
The bottleneck is a convolutional block that performs feature estimation at an additional depth level and is the main union point between the encoder and decoder branches.

\subsubsection{Decoder Branch}
The decoder branch consists of a set of four consecutive convolutional blocks, each one preceded by a feature-fusion block, in such a way that each level of the decoder branch uses the set of relevant features obtained by fusing ($a$) the output of the paired convolutional block in the encoder branch with ($b$) the output of the transposed convolutional layer of the previous level of the decoder branch. Transposed convolutional layers are better at reconstructing the spatial dimension of the feature maps in the decoder branch than interpolation by an upsampling layer followed by a normal convolution. They can learn a set of weights that can be used to progressively reconstruct the original inputs; in this work, however, we use them to generate masks for semantic segmentation. The use of transposed convolutional layers is very important when the task includes the segmentation of very small structural elements.

\subsubsection{Classification Block}
The output generated by the last level of the decoder branch, or by the last level of the deep supervision block (DS.v3) when applicable, is used as input to the classification block. This block consists of one convolutional layer with a $1\times 1$ kernel and as many channels as classes into which each single pixel is classified; in our case, the number of classes is 12. The \emph{softmax} activation function was used at the output layer of all the topologies tested in this work. In this case, the output values can be considered as \emph{a posteriori} probabilities normalised over the predicted output classes. That is, for every pixel of the output mask, each class is weighted by a score in the range $[0, 1]$, and the scores of all the classes of a single pixel sum up to $1$. Accordingly, the ground-truth masks used to train the networks have 12 channels, in such a way that each single pixel is represented by a one-hot vector of length 12: for each pixel of the ground-truth mask, only one of the channels is set to 1.

\subsection{Ensembles}
\label{subsect:ensembles}
In addition to the experiments with individual networks, every one of the topologies proposed in this work as variants of the U-Net architecture for the semantic segmentation task was used in ensembles of several networks. The outputs of several networks corresponding to different topologies are combined to form a classifier that is an ensemble of classifiers. From each topology, the network that obtained the best results was selected, i.e., the one adjusted with the best combination of hyperparameter values. When used in ensembles, the outputs of the single classifiers were combined following two distinct approaches: model averaging and stacking model. Figure \ref{fig:ensembles} illustrates both approaches.

\begin{figure*}[t]
\centerline{\includegraphics[]{ensembleModels.pdf}}
\caption{Block diagram of the methods tested in this work to compute the output of ensembles. Top: model averaging. Bottom: stacking model.}
\label{fig:ensembles}
\end{figure*}

\subsubsection{Model Averaging}
Model averaging is a technique where $R$ models contribute equally to the output of the ensemble, i.e., the prediction provided by the ensemble is the combination of the predictions of the single models. Two strategies can be used to merge the output of several models:
\begin{equation}
\text{Arithmetic Mean: } \overline{Z}=\frac{1}{R}\sum_{r=1}^{R}Z_{r}
\label{eqn:arithmetic:c}
\end{equation}
\begin{equation}
\text{Geometric Mean: } \overline{Z}=\sqrt[R]{\prod_{r=1}^{R}Z_{r}}
\label{eqn:geometric:c}
\end{equation}
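Both merging strategies are straightforward to express over the stacked per-pixel score maps of the $R$ models; the following NumPy fragment is an illustrative sketch (the names are ours):

\begin{verbatim}
import numpy as np

def average_predictions(probs, mode="arithmetic"):
    # probs: array of shape (R, H, W, 12) with the softmax outputs
    # of the R models; returns one (H, W, 12) score map, following
    # the arithmetic and geometric mean equations above.
    if mode == "arithmetic":
        return probs.mean(axis=0)
    return np.prod(probs, axis=0) ** (1.0 / probs.shape[0])
\end{verbatim}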
\subsubsection{Stacking Model}
\label{subsect:Ensemble:Stacking}
Stacking models learn to combine the predictions of $R$ single models in order to achieve a better overall prediction. An ensemble following the stacking model is implemented in three stages: (a) \emph{layer merging}, (b) \emph{meta-learner}, and (c) \emph{prediction}. The first stage, \emph{layer merging}, takes as input a list of tensors and returns a unique tensor that can be the result of concatenating, averaging or adding them. The tensors to be merged come from every single model in the ensemble. They can be the normalised output values, i.e., the output of the \emph{softmax}, or the tensors used as input to the classification block, i.e., the outputs generated by the last level of the decoder branch or by DS.v3 when applicable. In the second stage, a dense layer with a ReLU activation function plays the role of \emph{meta-learner}. The last stage, \emph{prediction}, consists of a dense layer with the \emph{softmax} activation function.

\subsection{Image Reconstruction and Pixel-Level Labelling}
\label{subsect:Binarisation}
The $P$ patches corresponding to an original 2D slice of size $H \times W$ are placed back in their corresponding positions. Each pixel of the reconstructed mask can belong to 1, 2 or 4 patches. In the case of overlapping (i.e., 2 or 4 patches), the score of each target class per pixel is calculated as the arithmetic mean of the scores of the respective pixel in the overlapping patches. Then, each single pixel is labelled with one class according to one of the following two methods.

\subsubsection{Maximum A Posteriori Probability Estimate (MAP)}
The output of the \emph{softmax} activation function in the classification block is a vector of normalised scores, $y^{m,n} \in \mathbb{R}^{12}$, for each single pixel $X^{m,n}$, where $X$ refers to the input image. The element $y^{m,n}_{c}$ is the confidence of the network that pixel $X^{m,n}$ belongs to class $c$. According to the MAP criterion, each single pixel is assigned to the class $c^*$ with the highest score, i.e., $c^* = \argmax_c\{y^{m,n}_{c}\}$.

\subsubsection{Threshold Optimisation (TH)}
In this work, we used a naive adaptation of the \emph{threshold optimisation} strategy explained in \cite{NIPS2016_Lepora}. A threshold per target class was tuned using the validation subsets of the three partitions created to carry out the $3$-fold cross-validation procedure; Section \ref{sect:resources} explains how the dataset was partitioned. The threshold of each class was adjusted by finding the maximum value of the IoU metric for thresholds ranging from 0.05 to 0.95 in steps of 0.05. Each single pixel at the output is assigned to the class with the highest score generated by the \emph{softmax} activation function if such score is greater than or equal to the threshold of such class. Otherwise, the score of the next best-scoring class is checked, until a class whose score is greater than or equal to its respective threshold is found; classes are checked in descending order of their scores. The pixel is assigned to the background class if this process ends unsuccessfully. The suffix MAP or TH is appended to the identifier of each experiment to indicate the method used for labelling each single pixel.
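The two labelling criteria can be summarised with the following illustrative NumPy sketch; it is our own simplification, and in particular we read the TH procedure as scanning only the target classes, with the background as fallback:

\begin{verbatim}
import numpy as np

def label_map(scores):
    # MAP criterion: per pixel, the class with the highest score.
    # scores: (H, W, 12) reconstructed map, after averaging the
    # overlapping patches; class 0 is the background.
    return scores.argmax(axis=-1)

def label_th(scores, thresholds, background=0):
    # TH criterion: per pixel, scan the classes in descending score
    # order and keep the first one whose score reaches its tuned
    # threshold; fall back to the background class otherwise.
    H, W, C = scores.shape
    labels = np.full((H, W), background, dtype=np.int64)
    order = np.argsort(-scores, axis=-1)
    for h in range(H):
        for w in range(W):
            for c in order[h, w]:
                if c != background and scores[h, w, c] >= thresholds[c]:
                    labels[h, w] = c
                    break
    return labels
\end{verbatim}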
\section{Experiments and Implementation}
\label{sect:experiments}
The dataset used in this work was extracted from the MIDAS corpus referenced in Subsection \ref{sect:dataset}. The MR images come from scanning sessions corresponding to 181 different patients, and each scanning session has a different number of slices. How the dataset was partitioned into training, validation and test subsets is explained in Section \ref{sect:resources}. However, let us emphasise here that all the generated subsets are disjoint at the patient level, i.e., it is guaranteed that no 2D images from the same patient appear in different subsets. Table \ref{table:summary} summarises the figures of the dataset.

The experiments for each evaluated network topology or ensemble were carried out following the same $3$-fold cross-validation procedure. As explained in Section \ref{sect:resources}, $80\%$ of the patients were used for training and validation and $20\%$ for testing. In turn, the $80\%$ of patients for training and validation was split into three different partitions to perform the $3$-fold cross-validation procedure. In each cross-validation iteration, the images from $\sfrac{2}{3}$ and $\sfrac{1}{3}$ of the patients were used for training and validating, respectively. The reported results were obtained with the test subset as the average of the results obtained by the three model versions (one per cross-validation iteration). The reported results were computed after labelling each single pixel with both the MAP and the TH criteria (see Subsection \ref{subsect:Binarisation}).

\subsection{Data Augmentation}
\label{subsect:DataAugmentation}
In order to mitigate the overfitting problem, the training data were randomly modified by combining several 2D image transformations: ($a$) random rotation up to $\pm 20$ degrees, ($b$) zoom in/out by a factor randomly selected from $0.5$ to $1.5$, ($c$) random shift in both axes up to $10\%$ of the height and width, and ($d$) horizontal flip according to a Bernoulli probability distribution with $p=\sfrac{1}{2}$.
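These transformations can be realised, for instance, with the image preprocessing utilities of Keras; the sketch below is illustrative only (details such as the interpolation applied to the masks are simplified), and the same random transform must be applied to an input patch and to its ground-truth mask:

\begin{verbatim}
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=20,       # (a) rotation up to +/- 20 degrees
    zoom_range=(0.5, 1.5),   # (b) zoom factor between 0.5 and 1.5
    width_shift_range=0.10,  # (c) shift up to 10% of the width...
    height_shift_range=0.10, #     ...and 10% of the height
    horizontal_flip=True,    # (d) flip with probability 1/2
)

def augment_pair(image, mask):
    # Draw one random transform and apply it consistently to the
    # image patch and to its ground-truth mask.
    params = augmenter.get_random_transform(image.shape)
    return (augmenter.apply_transform(image, params),
            augmenter.apply_transform(mask, params))
\end{verbatim}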
\subsection{Model hyper-parameters}

\begin{table*}[t]
\begin{center}
\caption{\label{table:cnn:settings}Parameter settings of the CNN topologies. The network IDs are also used in Table \ref{table:cnn:ensembles} and Table \ref{tab:results:Best:2}. DS.v2 is only used in topology UDD2}
\begin{tabular}{|l|l|l|l|l|}
\hline
\textbf{ID} & \textbf{Configuration} & \textbf{Optimiser} & \textbf{Learning Rate} & \textbf{Act-Conv} \\ \hline
UDD2 & U-Net + DS.v3 + DS.v2 & Adam & $0.00033$ & ReLU \\
UMDD & U-Net + multi-kernel + DS.v3 + DS.v1 & Adam & $0.00033$ & ReLU \\
UDD & U-Net + DS.v3 + DS.v1 & Adam & $0.00033$ & ReLU \\
UQD & U-Net + DenseBlock + DS.v3 & Adam & $0.00033$ & ReLU \\
UVDD & U-Net + VGG16 + DS.v3 + DS.v1 & Adam & $0.00033$ & PReLU \\
UVMD & U-Net + VGG16 + multi-kernel + DS.v3 & Adam & $0.00033$ & ReLU \\
UAMD & U-Net + attGate + multi-kernel + DS.v3 & Adam & $0.00033$ & ReLU \\
UMD & U-Net + multi-kernel + DS.v3 & Adam & $0.00033$ & ReLU \\
UAD & U-Net + attGate + DS.v3 & RMSprop & $0.001$ & ReLU \\
UD & U-Net + DS.v3 & Adam & $0.00033$ & ReLU \\
UA & U-Net + attGate & Adam & $0.00033$ & ReLU \\
U1 & U-Net & Adadelta & $1.0$ & ReLU \\
FCN & FCN8 & Adam & $0.00033$ & ReLU \\ \hline
\end{tabular}
\end{center}
\end{table*}

All the proposed topologies but one are variations of the U-Net architecture. Let us identify each complementary block with a letter in order to compose the network identifiers:
\begin{description}
\setlength{\itemindent}{-16pt}
\setlength{\labelsep}{8pt}
%
\item[A] Attention Gates replacing the skip connections.
%
\item[D] Deep Supervision between the encoder and decoder branches to replace the skip connections (DS.v1 and DS.v2), and between the convolutional blocks of the decoder branch to provide an alternative input to the classification block (DS.v3).
%
\item[M] A previous step added right after the input, just before the first block of the encoder branch. This step is defined by several convolutional layers with different kernel sizes whose outputs are concatenated (see Subsection \ref{subsubsect:multikernels}).
%
\item[V] Use of VGG16-like convolutional blocks in the encoder branch (i.e., the descending path). These convolutional blocks are also connected with the convolutional blocks of the decoder branch.
%
\item[U] The typical convolutional block used in the original U-Net.
%
\item[Q] Convolutional blocks with dense connections (Dense Block) replacing the U-Net convolutional blocks.
\end{description}

Table \ref{table:cnn:settings} shows the combination of configuration parameters that obtained the best results for each network topology. All the topologies listed in Table \ref{table:cnn:settings} were trained and evaluated with different combinations of optimiser, learning rate and activation function of the hidden convolutional layers (ReLU or PReLU), and with the same initial number of channels, fixed to $64$. In all cases, the activation function of the output layer was the \emph{softmax}, and the categorical cross entropy was used as the loss function. Only the results of a few topologies and ensembles are reported in this document; the results of all the listed topologies are reported in the \nameref{sect:SupplementaryMaterial}. For the sake of brevity, other designed topologies and combinations of configuration parameters which obtained poor results have been excluded too. The two variants that include VGG16 do not use transfer learning, i.e., the weights of the VGG16 blocks were estimated from scratch. In other words, transfer learning is not used in any of the designed and evaluated topologies. The standard U-Net and the FCN were evaluated to obtain baseline results.

\subsection{Model training}
All the variations designed from the U-Net architecture were trained for 300 epochs using the training subset of each of the $3$-fold cross-validation iterations. The best version of each model at each cross-validation iteration corresponds to the weight values of the epoch in which the model obtained the highest accuracy on the validation subset.

\subsection{Ensembles}
\label{subsect:Ensembles}
In addition to training and evaluating the individual semantic segmentation models designed as variations of the U-Net architecture, a set of ensembles was created by grouping from 4 to 13 models. Table \ref{table:cnn:ensembles} shows all the ensembles used in this work, where it can be observed that the FCN network was only used in ensembles $E8$ and $E13$.

\begin{table}[t]
\begin{center}
\caption{\label{table:cnn:ensembles}Short names of the ensembles used in this work and the identifiers of the networks that constitute each ensemble}
\begin{tabular}{|l|p{60mm}|}
\hline
\textbf{Ensemble Id} & \textbf{Networks (IDs) Included} \\ \hline
$E4$ & UAD UMD UQD UDD \\ \hline
$E5$ & UD UAD UMD UAMD UDD2 \\ \hline
$E6$ & UD UAD UMD UAMD UVMD UVDD \\ \hline
$E7$ & UD UAD UMD UAMD UVMD UQD UDD2 \\ \hline
$E8$ & FCN UD UAD UMD UAMD UVMD UQD UDD2 \\ \hline
$E9$ & UD UAD UMD UAMD UVMD UVDD UQD UDD UMDD \\ \hline
$E10$ & UD UAD UMD UAMD UVMD UVDD UQD UDD UMDD UDD2 \\ \hline
$E11$ & UA UD UAD UMD UAMD UVMD UVDD UQD UDD UMDD UDD2 \\ \hline
$E12$ & U1 UA UD UAD UMD UAMD UVMD UVDD UQD UDD UMDD UDD2 \\ \hline
$E13$ & FCN U1 UA UD UAD UMD UAMD UVMD UVDD UQD UDD UMDD UDD2 \\ \hline
\end{tabular}
\end{center}
\end{table}

A dual evaluation was performed to compare the two strategies used in the ensembles: model averaging and stacking model.
Additionally, in the case of model averaging, the results with the arithmetic mean \eqref{eqn:arithmetic:c} and the geometric mean \eqref{eqn:geometric:c} were compared. Figure \ref{fig:ensembles} shows the schemes followed in both the model averaging and the stacking model techniques.

Let $R$ be the number of models in an ensemble, let $y_r \in \mathbb{R}^{12}$ be the output of model $r$ for each single pixel, with one score $y_{r,c}$ per class (our semantic segmentation task targets $12$ classes), and let $y \in \mathbb{R}^{12}$ be the output of the ensemble per pixel. As all the models use the \emph{softmax} activation function in the output layer, their outputs are normalised and sum up to 1, i.e., $\sum_{c} y_{r,c} = 1$ and $\sum_{c} y_{c} = 1$. This is why $y_r$ and $y$ can be considered as vectors of posterior probabilities, which we also refer to as vectors of normalised scores. The model averaging technique computes the score of each class $y_c$ as either the arithmetic mean \eqref{eqn:arithmetic:c} or the geometric mean \eqref{eqn:geometric:c} of $y_{r,c}, \forall r \in [1..R]$.

\begin{table*}[t]
\begin{center}
\caption{\label{table:cnn:stacking}Parameter settings of the best performing stacking models}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Stacking model \\ ID\end{tabular}}}} & \multicolumn{5}{c|}{\textbf{Configuration}} \\ \cline{2-6}
\multicolumn{1}{|c|}{} & \textbf{Input} & \textbf{Merging Layers} & \textbf{Meta-learner} & \textbf{Optimiser} & \textbf{Learning Rate} \\ \hline
NAD & Normalised & Average & Dense Layer & Adam & 0.00033 \\ \hline
TCD & Tensor & Concatenate & Dense Layer & Adam & 0.00033 \\ \hline
\end{tabular}
\end{center}
\end{table*}

On the other hand, the stacking model technique was used with two different approaches to prepare the input to the layer-merging stage: (a) the output of the \emph{softmax} activation layer of each model $r$ in the ensemble, i.e., the vector $y_r$, and (b) the 64-channel tensor at the input of the classification block, i.e., the output generated by the last level of the decoder branch, or by the last level of the deep supervision block (DS.v3) when applicable. The combination of the inputs in the layer-merging stage can be done by concatenation, averaging or adding. Once the inputs to the ensemble are ready, the two dense layers of the stacking model are trained (see Figure \ref{fig:ensembles}). The output of the ensemble is also one vector of normalised scores per pixel, $y \in \mathbb{R}^{12}$.

Table \ref{table:cnn:stacking} shows the input formats and layer configurations of the best performing ensembles based on the stacking-model assembling technique. Ensemble configurations are identified by a three-letter acronym. The first letter identifies the input type: \textbf{N} and \textbf{T}, which stand for normalised scores (\emph{softmax} output) and 64-channel tensors, respectively. The second letter indicates the layer-merging operator: averaging (\textbf{A}) or concatenation (\textbf{C}). The addition operator was also used in the whole experimentation; however, its results are not presented here because they were too poor. The third letter corresponds to the type of meta-learner used; in this case, only dense layers were used, so the third letter is fixed to \textbf{D}.
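As an illustration of an NAD-style configuration, the following Keras sketch merges the softmax outputs of the frozen base models by averaging, followed by the Dense+ReLU meta-learner and the Dense+softmax prediction stage; the input shape and the meta-learner width are illustrative choices, not the exact values used in this work:

\begin{verbatim}
from tensorflow.keras import layers, Model

def stacking_head(base_models):
    # base_models: list of trained Keras models, typically frozen
    # beforehand with m.trainable = False.
    inp = layers.Input(shape=(256, 256, 2))
    outs = [m(inp) for m in base_models]  # each: (256, 256, 12)
    merged = layers.Average()(outs)       # (a) layer merging
    hidden = layers.Dense(64, activation="relu")(merged)  # (b) meta-learner
    out = layers.Dense(12, activation="softmax")(hidden)  # (c) prediction
    return Model(inp, out)
\end{verbatim}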
Ensembles based on the stacking model were trained for 50 epochs using the same data-augmentation transformations used to train each single network (see Subsection \ref{subsect:DataAugmentation}), and following the $3$-fold cross-validation procedure with the same partitions of the dataset. The best version of each stacking model at each cross-validation iteration corresponds to the weight values of the epoch in which the stacking model obtained the highest accuracy on the validation subset.

In both assembling strategies, namely model averaging and stacking model, the output masks corresponding to the $256 \times 256$ patches are combined to generate a single mask per original slice (medical image) in order to evaluate the quality of the automatic semantic segmentation. According to the procedure followed to generate the patches from one slice, each single pixel of the reconstructed mask can belong to one, two or four patches. In the case of two or four patches, the arithmetic mean is used to compute the score of each class within the vector of scores of each single pixel. The vector corresponding to each single pixel of the reconstructed mask is used to assign the pixel to one of the 12 classes by either the maximum \emph{a posteriori} probability (MAP) criterion or the threshold optimisation (TH) strategy (see Subsection \ref{subsect:Binarisation}). Both labelling criteria, MAP and TH, were tested for all the single networks and ensembles.

\subsection{Evaluation Metrics}
The \emph{Intersection over Union} (IoU) \citep{long2015fully} was used as the metric to compare the performance of the evaluated network architectures. IoU is a variant of the Jaccard index that quantifies the overlap between the ground-truth mask and the predicted mask. The IoU for each individual class $c$ is defined as follows:
\begin{equation}
IoU_{c} = \frac{m_{cc}}{t_{c} + m_{c} - m_{cc}}
\label{eqn:iou:c}
\end{equation}
where $m_{cc}$ is the number of pixels of class $c$ correctly predicted by the model as class $c$, $t_{c}$ is the total number of pixels of class $c$ according to the ground-truth, and $m_{c}$ is the total number of pixels assigned to class $c$ by the model. The global metric reported in the results is the average over all target classes, i.e., all the classes except the background class. The averaged IoU is computed according to the following formula:
\begin{equation}
IoU = \frac{1}{|C^{*}|} \sum_{c \in C^{*}} IoU_{c}
\label{eqn:iou}
\end{equation}
where $C^{*}$ is the set of classes excluding the background class, i.e., the set of target classes which correspond to each one of the structural elements to be detected and delimited.
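The two equations above admit a direct reference implementation; the following NumPy sketch is illustrative (the names are ours):

\begin{verbatim}
import numpy as np

def iou_per_class(pred, truth, n_classes=12):
    # pred, truth: (H, W) label maps; returns IoU_c for each class.
    ious = []
    for c in range(n_classes):
        m_cc = np.sum((pred == c) & (truth == c))  # correct pixels
        t_c = np.sum(truth == c)   # ground-truth pixels of class c
        m_c = np.sum(pred == c)    # pixels predicted as class c
        denom = t_c + m_c - m_cc
        ious.append(m_cc / denom if denom > 0 else float("nan"))
    return ious

def mean_iou(ious, background=0):
    # Average over the target classes, excluding the background.
    return float(np.nanmean([v for c, v in enumerate(ious)
                             if c != background]))
\end{verbatim}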
\section{Results}
\label{sect:results}
\begin{table*}[t]
\begin{center}
\caption{\label{tab:results:Best:2} Performance of the automatic semantic segmentation generated by several network topologies and ensembles. Some ensembles performed better using model averaging and others using the stacking model. The Intersection over Union (IoU) was the metric used to evaluate the performance on the 12 classes, computed using equation \eqref{eqn:iou:c}. The average with and without the background class was computed using equation \eqref{eqn:iou} -- the background is not a target class. Ensemble $E13$ obtained good results with both the arithmetic mean and the geometric mean, and ensemble $E10$ with both the MAP and TH labelling criteria}
\resizebox{1.0\textwidth}{!}{%
\begin{tabular}{|c|l|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c}{} & \multicolumn{5}{|c}{} & \multicolumn{6}{|c|}{\textbf{Best performing ensembles}} \\ \cline{3-13}
\multicolumn{2}{|c|}{} & \multicolumn{3}{c|}{\textbf{Baseline}} & \multicolumn{2}{c|}{\textbf{Best Variant}} & \multicolumn{2}{c|}{\textbf{Model Averaging}} & \multicolumn{4}{c|}{\textbf{Stacking Model}} \\ \cline{3-13}
\multicolumn{2}{|c|}{} & FCN & U1 & U1 & UMD & UMD & $E13$ & $E13$ & $E10$ & $E10$ & $E11$ & $E12$ \\
\multicolumn{2}{|c|}{\textbf{Class}} & & & & & & Arith & Geo & TCD & TCD & NAD & NAD \\ \cline{1-2}
\textbf{\#} & Id & TH & MAP & TH & MAP & TH & MAP & MAP & MAP & TH & MAP & TH \\ \hline
0 & \textbf{Background} & $91.8\%$ & $92.2\%$ & $92.3\%$ & $92.2\%$ & $92.2\%$ & $92.6\%$ & $\boldsymbol{92.6\%}$ & $92.4\%$ & $92.5\%$ & $\boldsymbol{92.6\%}$ & $\boldsymbol{92.6\%}$ \\
1 & \textbf{Vert} & $84.1\%$ & $86.0\%$ & $86.2\%$ & $86.1\%$ & $86.3\%$ & $86.8\%$ & $86.9\%$ & $86.6\%$ & $86.7\%$ & $86.9\%$ & $\boldsymbol{87.0\%}$ \\
2 & \textbf{Sacrum} & $81.0\%$ & $84.1\%$ & $84.3\%$ & $84.4\%$ & $84.8\%$ & $85.2\%$ & $85.3\%$ & $84.8\%$ & $85.0\%$ & $85.1\%$ & $\boldsymbol{85.4\%}$ \\
3 & \textbf{Int-Disc} & $86.9\%$ & $88.7\%$ & $88.9\%$ & $88.9\%$ & $89.1\%$ & $89.4\%$ & $89.4\%$ & $89.1\%$ & $89.3\%$ & $89.4\%$ & $\boldsymbol{89.5\%}$ \\
4 & \textbf{Spinal-Cavity} & $72.6\%$ & $75.5\%$ & $75.8\%$ & $75.9\%$ & $76.1\%$ & $76.8\%$ & $76.8\%$ & $76.1\%$ & $76.5\%$ & $76.5\%$ & $\boldsymbol{77.0\%}$ \\
5 & \textbf{SCT} & $91.8\%$ & $92.5\%$ & $92.6\%$ & $92.6\%$ & $92.6\%$ & $93.0\%$ & $93.0\%$ & $92.8\%$ & $92.9\%$ & $93.0\%$ & $\boldsymbol{93.1\%}$ \\
6 & \textbf{Epi-Fat} & $54.6\%$ & $58.0\%$ & $58.3\%$ & $58.5\%$ & $58.9\%$ & $\boldsymbol{60.0\%}$ & $\boldsymbol{60.0\%}$ & $59.1\%$ & $59.4\%$ & $59.6\%$ & $\boldsymbol{60.0\%}$ \\
7 & \textbf{IM-Fat} & $61.1\%$ & $63.8\%$ &
$64.0\%$ & $64.2\%$ & $64.6\%$ & $65.5\%$ & $65.5\%$ & $64.8\%$ & $65.1\%$ & $65.4\%$ & $\boldsymbol{65.7\%}$ \\ 8 & \textbf{Rper-Fat} & $69.3\%$ & $70.8\%$ & $70.8\%$ & $70.5\%$ & $70.6\%$ & $\boldsymbol{72.0\%}$ & $\boldsymbol{72.0\%}$ & $71.6\%$ & $71.6\%$ & $71.9\%$ & $\boldsymbol{72.0\%}$ \\ 9 & \textbf{Nerve-Root} & $45.6\%$ & $50.9\%$ & $51.8\%$ & $51.6\%$ & $52.3\%$ & $53.1\%$ & $53.1\%$ & $52.0\%$ & $52.6\%$ & $52.9\%$ & $\boldsymbol{53.3\%}$ \\ 10 & \textbf{Blood-Vessels} & $58.7\%$ & $60.8\%$ & $61.3\%$ & $60.9\%$ & $61.3\%$ & $63.0\%$ & $63.0\%$ & $62.3\%$ & $62.6\%$ & $63.1\%$ & $\boldsymbol{63.3\%}$ \\ 11 & \textbf{Muscle} & $79.4\%$ & $80.8\%$ & $81.1\%$ & $81.0\%$ & $81.2\%$ & $81.9\%$ & $81.9\%$ & $81.4\%$ & $81.6\%$ & $81.9\%$ & $\boldsymbol{82.0\%}$ \\ \hline \multicolumn{2}{|l|}{\textbf{IoU} without Bg.} & $71.4\%$ & $73.8\%$ & $74.1\%$ & $74.0\%$ & $74.3\%$ & $75.2\%$ & $75.2\%$ & $74.6\%$ & $74.8\%$ & $75.1\%$ & $\boldsymbol{75.3\%}$\\ \multicolumn{2}{|l|}{\textbf{IoU} with Bg.} & $73.1\%$ & $75.3\%$ & $75.6\%$ & $75.6\%$ & $75.8\%$ & $76.6\%$ & $76.6\%$ & $76.1\%$ & $76.3\%$ & $76.5\%$ & $\boldsymbol{76.7\%}$\\ \hline \end{tabular} } \end{center} \end{table*} \begin{figure*}[t \centerline{\includegraphics[]{qualitativeResults.pdf} \caption{\label{fig:results:qualitative}Comparison of the qualitative results of the best performing topology (UMD+TH) and the best performing ensemble ($E12$+NAD+TH) with the baseline network architecture (U1+TH). A zoomed view shows a posterior protrusion of the L1-L2 disc (green - superior) and a marked L2-L3 disc space narrowing (green - inferior). Additionally, the vertebral endplates are affected by Modic changes. This example demonstrates the high quality of the semantic segmentation obtained despite the variability in morphology and signal of the vertebral elements due to the evolution of the pathologies.} \end{figure*} The problem addressed in this work is the automatic semantic segmentation of lumbar spine MR images using CNNs, both single networks and combining the segmentations generated by several networks within ensembles. The goal of the task is to detect and delimit regions in images corresponding to 12 different classes: 11 target classes plus the background. Two criteria described in Subsection \ref{subsect:Binarisation} were used to label each single pixel into one of the target classes. The first criterion is based on the \emph{Maximum a Posteriori Probability Estimate} (MAP). Each single pixel at output is assigned to the class with the highest score generated by the \emph{softmax} activation function. The second criterion is based on a naive adaptation of \emph{threshold optimisation} (TH). A threshold per target class was tuned using the validation subset to compute the value of the IoU metric for different thresholds. The threshold used for each target class was the one which obtained the best performance. Let us refresh how topologies presented and evaluated in this work were designed. Figure \ref{fig:unet_Blocks} shows a diagram of U-Net architecture (U1), used as baseline, and the complementary blocks used to enhance it. All the topologies but the ones used as baseline were designed as variations from the U-Net by strategically using one or more of the complementary blocks. Table \ref{table:cnn:settings} provides the list of topologies tested in this work and their respective configuration parameters. 
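Before turning to the results, the two labelling criteria recalled above can be made concrete with a minimal sketch showing how a per-pixel label map could be obtained from the softmax score maps under each criterion. It is an illustration only: the variable names are hypothetical, the tie-breaking rule of the TH criterion is one plausible reading of the description above, and this is not the code used in the experiments.

\begin{verbatim}
import numpy as np

def label_map_MAP(scores):
    # MAP criterion: each pixel is assigned to the class with the
    # highest softmax score. scores has shape (H, W, n_classes).
    return np.argmax(scores, axis=-1)

def label_map_TH(scores, thresholds, background=0):
    # TH criterion (sketch): each class score is gated by its own
    # threshold, tuned beforehand on the validation subset; among
    # the classes that pass, the highest-scoring one wins, and
    # pixels passing no threshold fall back to the background.
    passed = scores >= thresholds            # (H, W, C) mask
    gated = np.where(passed, scores, -np.inf)
    labels = np.argmax(gated, axis=-1)
    labels[~passed.any(axis=-1)] = background
    return labels
\end{verbatim}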
For the sake of brevity, only the results of a few of them are presented in this document, namely those which obtained the highest accuracies: one variant of the single networks and four ensembles. The network architectures U1 and FCN correspond to the standard U-Net \citep{ronneberger2015u} and FCN8 \citep{long2015fully} architectures; the results obtained with these two networks were used as the baseline against which the proposed variations were compared. Table \ref{table:cnn:ensembles} shows the evaluated ensembles, constituted by grouping different topologies designed as variations from the U-Net architecture. The listed ensembles are made up of 4 to 13 of the designed network topologies. The FCN architecture is only used in two ensembles, $E8$ and $E13$, for comparison purposes.

Table \ref{tab:results:Best:2} shows the Intersection over Union (IoU) per class, computed according to \eqref{eqn:iou:c}, and the averaged IoU, calculated according to \eqref{eqn:iou}, for just one topology of single networks, the one which obtained the best results, and for the four ensembles that performed best. The results of topologies FCN and U1 are used as the baseline. The averaged IoU including the background class is only shown for informational purposes. The best result for each one of the classes has been highlighted in bold. More specifically, the results of U1, UMD and $E10$ are reported in two columns to show the effect of the two labelling criteria used in this work, namely MAP and TH. TH slightly improves the results of MAP in practically all classes, but especially in the case of the class \emph{Nerve-Root}, precisely the most difficult one to detect. In the particular case of ensemble $E13$, the two columns show that virtually no differences can be observed between using the arithmetic mean and the geometric mean; only the classes \emph{Vert} and \emph{Sacrum} show a negligible difference in favor of the geometric mean. This reflects that all the topologies combined in this ensemble perform very similarly and, as expected and previously noted, that the use of ensembles leads to more robust and stable semantic segmentations, which is in line with the observed reduction of the variance of the results among the cross-validation iterations. The topology UMD obtained the best results of all the variants tested in this work, outperforming the baseline U-Net architecture (U1) for all classes under both labelling criteria. The ensemble $E12$+NAD+TH obtained the best overall results. Let us remark that the TH labelling criterion performed better than the MAP criterion in all the performed experiments; but, as discussed later, the differences are not statistically significant.

Figure \ref{fig:results:qualitative} illustrates three examples of predicted masks: one from the best performing topology (UMD+TH) and another from the best performing ensemble ($E12$+NAD+TH), which can be compared with the mask of the baseline architecture (U1+TH). The corresponding T1-weighted and T2-weighted slices used as input to the model and the ground-truth mask are also shown in Figure \ref{fig:results:qualitative}.
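Since the IoU metric of equations \eqref{eqn:iou:c} and \eqref{eqn:iou} is used throughout this section, a minimal sketch of its computation may be useful. It assumes the standard definition of the per-class IoU (intersection of the predicted and ground-truth masks of a class over their union) and a plain average over the target classes, which is the usual reading of these equations; variable names are illustrative only.

\begin{verbatim}
import numpy as np

def iou_per_class(pred, gt, n_classes=12):
    # Per-class Intersection over Union:
    # IoU_c = |pred==c AND gt==c| / |pred==c OR gt==c|
    ious = np.full(n_classes, np.nan)
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious[c] = inter / union
    return ious

def mean_iou(ious, with_background=False):
    # Averaged IoU; class 0 (background) is excluded
    # unless explicitly requested.
    return np.nanmean(ious if with_background else ious[1:])
\end{verbatim}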
\begin{figure*}[t]
\centerline{\includegraphics[]{Box-plot_IoU-EnsVsStruct.pdf}}
\caption{\label{fig:box_plot_models}Box plot of the Intersection over Union scores per class, $IoU_{c}$, comparing UMD+TH (the best variation from the U-Net architecture) with the best ensembles and with the two architectures whose results are used as the baseline. Statistical significance ($p < 0.05$) with respect to UMD+TH, according to the Wilcoxon signed-rank test, is indicated by the star symbol ($*$).}
\end{figure*}

Figure \ref{fig:box_plot_models} shows the box plot of the metric $IoU_{c}$, i.e., the Intersection over Union score per class, comparing the topology derived from the U-Net architecture that obtained the best results (UMD+TH) with the best ensembles and with the two architectures whose results were used as the baseline. 33 MR images from the test subset (split into a total of 396 2D overlapping patches of size $256 \times 256$) were used for obtaining the classification results represented in the box plots. Additionally, the Wilcoxon signed-rank test was carried out with the same classification results. The null hypothesis $H_0$, which in this case can be expressed as \emph{the mean of the difference of each $IoU_{c}$ is zero}, is rejected in some cases using $0.05$ as the threshold for the $p$-value. The results of two models are considered significantly different when the $p$-value is less than this threshold. The reference model used for computing the differences was UMD+TH. Figure \ref{fig:box_plot_models} highlights the models that performed differently with respect to the model UMD+TH according to the Wilcoxon signed-rank test; models are highlighted by means of the star symbol ($*$), independently for each target class.

Three observations can be extracted from the Wilcoxon signed-rank test. The first observation is that no significant differences in performance between UMD+TH and UMD+MAP exist. Therefore, as a preliminary conclusion, it can be said that, based on the test subset used in this work, the TH labelling criterion does not contribute significant improvements with respect to the MAP criterion. However, it should be highlighted that the TH labelling criterion depends on adjusting the threshold of each class using a different subset than the test subset. For all the topologies evaluated in this work, the validation subset was used to adjust the class-dependent thresholds; it could happen that for other datasets this strategy does not lead to the optimal thresholds. The second observation is that UMD+TH performs better than the baseline models: in seven out of 12 target classes UMD+TH performs better than U1+TH, and in all target classes UMD+TH outperforms FCN+TH. The third, and most important, observation is that the ensembles $E10$+TCD+TH and $E12$+NAD+TH performed significantly better than UMD+TH for all target classes.

\begin{figure*}[t]
\centerline{\includegraphics[]{comparingModelEnsembles.pdf}}
\caption{$IoU$ metric comparing the model averaging and stacking model assembling techniques versus the number of networks in each ensemble.}
\label{fig:comparingEnsembles}
\end{figure*}

Figure \ref{fig:comparingEnsembles} shows a comparison of the two assembling techniques used in this work, model averaging and stacking model. In the case of model averaging, both ways of computing the output of the ensemble from the outputs of the components were considered, the arithmetic mean and the geometric mean.
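As an illustration of the model averaging technique just mentioned, the following sketch combines the softmax outputs of the ensemble components with either the arithmetic or the geometric mean; either labelling criterion (MAP or TH) can then be applied to the combined score map. The epsilon guard and the final renormalisation are implementation choices of this sketch, not necessarily those of the evaluated ensembles.

\begin{verbatim}
import numpy as np

def ensemble_average(score_maps, geometric=False):
    # score_maps: (n_models, H, W, n_classes) softmax outputs
    if geometric:
        # geometric mean, computed in log space; the small
        # epsilon avoids log(0) for saturated scores
        combined = np.exp(np.log(score_maps + 1e-12).mean(axis=0))
    else:
        combined = score_maps.mean(axis=0)
    # renormalise so the class scores of each pixel sum to one
    return combined / combined.sum(axis=-1, keepdims=True)
\end{verbatim}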
In the case of the stacking model technique, two layer-merging strategies are considered: averaging and concatenation. Averaging uses the vector of normalised scores at the output of the \emph{softmax}; concatenation uses the input tensors to the classification block.

The first observation from Figure \ref{fig:comparingEnsembles} is that the model averaging assembling technique is more robust than the stacking model technique to the variance of the predictions of the networks that constitute the ensemble. No significant differences between the use of the arithmetic mean and the geometric mean are observed. As already mentioned above, the high similarity between both ways of computing the mean confirms that all the topologies combined in the ensembles perform very similarly. A second observation from Figure \ref{fig:comparingEnsembles} is that, in the case of the stacking model assembling technique, the ensembles including the FCN topology, $E8$ and $E13$, show a significant performance drop. Comparing the $E12$ and $E13$ results for the configuration NAD+TH, it can be observed that the addition of the FCN topology significantly deteriorates the performance.

In summary, the fact that the variants of the U-Net architecture, and thus the proposed ensembles, outperform the proposed baseline in practically all classes is a fruitful result of this work. The proposed approach demonstrates high performance in the segmentation of clinically relevant structures (mainly discs, vertebrae and spinal canal), despite the variability in the quality and provenance of the MR scans.

\section{Discussion}
\label{sect:discussion}

Data and metadata play a crucial role in this work. Collecting data was a large and important task that consisted of (i) centralizing MR images coming from different hospitals with their corresponding reports generated by radiologists, (ii) revising the quality of the images of each session to decide which ones are valid for this work, and (iii) anonymizing both images and reports. Manually generating the ground-truth mask for each single image was the most challenging and tedious task. In fact, as explained in Subsection \ref{sect:dataset} and summarized in Table \ref{table:summary}, only 1,572 images from 181 patients were manually segmented and used in this work. The ground-truth masks are the product of the manual semantic segmentation of the images to delimit the 11 target classes plus the background from the anatomical components of the lumbar region visible in sagittal T1w and T2w MR images. Each pixel of the ground-truth masks is assigned to one and only one of the 12 classes.

As already mentioned in previous sections of this document, this work is focused on the lumbar region, with the aim of automatically delimiting anatomical structures and tissues from sagittal MR images; these images come from scanning sessions acquired in different hospitals of the Valencian region and can correspond to different pathologies. Databases on medical imaging related to lumbar spine segmentation are scarce; for instance, the work on multiclass segmentation of \citep{al2019boundary} developed its own dataset and used the SegNet architecture \citep{badrinarayanan2017segnet} to detect lumbar stenosis by segmenting axial MR images into four regions of interest.

\subsection{Medical perspective}

In this work, a specific procedure based on single CNNs and ensembles of CNNs was designed to semantically segment structures and tissues of the lumbar region.
The procedure performs a multiclass segmentation with promising results in those structures which are relevant from the clinical point of view: \emph{vertebrae, intervertebral discs, spinal cavity, muscle, subcutaneous cellular tissue} and \emph{intra-muscular fat}. However, the segmentation of other relevant structures like \emph{nerve root} and \emph{epidural fat} was more difficult (nerve roots appear in sagittal slices as very small structures at the level of the intervertebral foramen). For these two structures, the highest $IoU_{c}$ values obtained with the ensemble $E12$+NAD+TH, the one that performed best, are $53.3\%$ and $60.0\%$ respectively, values of $IoU$ that are very low in comparison with those of the other structures. The quality of the segmentation strongly depends on the size of the object to be detected. In order to mitigate this problem, both intradural and extradural nerve roots are considered as one single class, the target class \emph{Nerve-root}. Despite this decision, most errors concerning the class \emph{Nerve-root} are false negatives, i.e., pixels corresponding to this class are mislabelled as one of the others.

One of the strategies to cope with the problem of the small size of some of the objects to be detected was the use of multi-kernels, in such a way that the image at the input layer is processed with receptive fields of different sizes; the outputs of the convolutional layers with different kernel sizes applied to the input layer are stacked together by concatenation. The topologies UMD, UMDD, UVMD and UAMD use multi-kernels. In \cite{jiang2021coronary}, this multiresolution and multiscale strategy was used in the coronary vessel segmentation task, obtaining promising results against 20 state-of-the-art visual segmentation methods on a benchmark X-ray coronary angiography database.

Analysing other works devoted to the semantic segmentation of brain images \citep{roy2018quicknat}, it can be said that the structural complexity of the lumbar spine is comparable with the complexity of the brain. There is a high number of structural elements in both cases, the morphology of which changes significantly between the slices of the same scanning session. The number of slices in scanning sessions of the brain is much higher, in such a way that it is possible to consider all the images from a scan as a 3D object and rescale it to an isotropic space with a resolution such that each pixel of the 2D images represents an area of around $1\,mm^2$. It is not possible to do similar transformations with the images available for this work, because the number of sagittal slices is much lower and not all scanning sessions have a similar number of slices, i.e., the variance of the distance between sagittal slices is too high for this purpose. Additionally, the variations observed in the available scans of the spine, which are due to ageing and different pathologies, are far more numerous than those observed in the available scans of the brain. Patients with different brain and neurological pathologies usually present much more similar patterns than patients with distinct spine pathologies. A good example is the wide range of variations due to the degeneration of the intervertebral discs, a common finding in both symptomatic and asymptomatic individuals \citep{tehranzadeh2000lumbar, lundon2001structure, benoist2005natural}.

\subsection{Limitations}

The following limitations represented important challenges to carrying out the work described here.
\begin{enumerate}[a)]
\item MR images were acquired using distinct models of scanning devices from different manufacturers, which in addition were not calibrated in exactly the same way; hence, the acquisition parameters were not homogeneous. In order to minimize the impact of the variability of some of the configuration parameters, all the images used in this work were selected to ensure that these parameters are within the ranges presented in Table \ref{tab:scan:settings}. Despite the parameter variability, the quality of the automatic semantic segmentation confirms the robustness of the models proposed in this work, and their potential to be used by clinicians.
\item Low image quality due to intrinsic factors of the scanning devices, such as sensitivity.
\item Overlapping and ambiguous elements, about which even medical experts have doubts as to which class they should be assigned. Therefore, extensive expertise is required to carry out the manual semantic segmentation due to the complexity of the anatomical structures. The ground-truth metadata was generated by two radiologists. Because of the radiologists' lack of time, the manual segmentation of the images from each scanning session was carried out by only one of them; therefore it was not possible to compare different manual segmentations of the same images provided by different radiologists. On average, a radiologist took from five to eight hours to segment the 12 slices that, on average, come from a scanning session.
\item The models proposed in this work were not configured to deal with patterns corresponding to tissues and findings not included in the training data, as is the case of tumors and cysts. All the elements found during the manual segmentation that do not belong to any of the target classes were assigned to the background class.
\end{enumerate}

\section{Conclusions and Future Works}
\label{sect:conclusions}

This work addressed the problem of segmenting sagittal MR images corresponding to the lumbar spine with 11 target classes, each of which corresponds to one structural element of the anatomy of the lumbar region. One additional class, referred to as the background class in this work, was used to help the neural networks distinguish the regions of the image not corresponding to any of the anatomical structures of interest. Eleven different network topologies were designed as variations from the U-Net architecture to address the problem. These topologies were evaluated both individually and combined in ensembles.

In light of the results reported in this work, it can be stated that the main objective defined in Section \ref{sect:intro} has been achieved. Several of the topologies and ensembles of neural networks proposed in this work outperformed both network architectures, the FCN and the original U-Net, used as the baseline. In particular, the results of the topology UMD and of the ensembles E10+TCD+TH and E12+NAD+TH are significantly better than the results of the baseline architectures according to the Wilcoxon signed-rank test. Moreover, these two ensembles also performed significantly better than the topology UMD according to the same Wilcoxon test.

The use of complementary blocks to enhance the original U-Net architecture improved its performance. The block types used in this work are deep supervision, spatial attention using attention gates, multi-kernels at the input, and the VGG16 topology for the encoder branch. However, the combination of all the complementary block types did not obtain the best result.
Most variants that included deep supervision in the decoding branch improved the baseline. The results of all the individual topologies tested are reported in the \nameref{sect:SupplementaryMaterial}. Regarding ensembles, all the combinations of topologies, trained with the predictions of the individual topologies and following the 3-fold cross-validation procedure with the same partitions of the dataset, performed better on the validation subset than any of the individual topologies. The ensembles based on the model-averaging assembling technique proved to be more robust to the variance of the network predictions than the ensembles based on the stacking-model technique. In the particular case of the ensembles based on the model-averaging technique, the results using the geometric mean were slightly better than those obtained using the arithmetic mean, but the Wilcoxon signed-rank test showed that this improvement was not statistically significant. Nevertheless, as mentioned above, the two ensembles that obtained the best overall results are based on the stacking model technique.

Intervertebral discs and vertebrae are easier to detect due to the homogeneity of their textures and their morphology. In future work, we will concentrate our efforts on the most difficult target classes in order to improve the quality of the automatic semantic segmentation. Nerve roots, epidural fat, intramuscular fat and blood vessels are the most challenging classes due to the heterogeneity of their morphology and their textures. Furthermore, nerve roots do not appear in the slices with the same frequency as other anatomical structures. It is well known that an imbalance in the number of samples of the different target classes in the training subset makes the less frequent classes much more difficult to detect, because the model cannot observe enough samples (2D images in this case) containing regions of such classes. Imbalance, together with the heterogeneity of textures and morphologies, makes some classes especially difficult to detect accurately.

\section*{CRediT authorship contribution statement}
\textbf{J. J. S\'{a}enz-Gamboa:} Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing - original draft, review, editing \& final version, Visualization.
\textbf{J. Domenech:} Conceptualization, Methodology, Investigation, Resources, Data curation, review.
\textbf{A. Alonso-Manjarr\'{e}s:} Conceptualization, Methodology, Investigation, Data curation, review.
\textbf{J. A. G\'{o}mez:} Conceptualization, Methodology, Formal analysis, Investigation, Supervision, Funding acquisition, Writing - original draft, review, editing \& final version.
\textbf{M. Iglesia-Vay\'{a}:} Conceptualization, Methodology, Formal analysis, Resources, Investigation, Supervision, Funding acquisition, Writing - original draft, review, editing \& final version.

\section*{Acknowledgments}
This work was partially supported by the Regional Ministry of Health of the Valencian Region, under the MIDAS project from BIMCV--\emph{Generalitat Valenciana}, under grant agreement ACIF/2018/285, and by the DeepHealth project, ``Deep-Learning and HPC to Boost Biomedical Applications for Health'', which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825111.
The authors thank the Bioinformatics and Biostatistics Unit of the Principe Felipe Research Center (CIPF) for providing access to the cluster co-funded by European Regional Development Funds (FEDER) in the Valencian Community 2014-2020, and the Biomedical Imaging Mixed Unit of the \emph{Fundació per al Foment de la Investigació Sanitaria i biomedica} (FISABIO) for providing access to the cluster openmind, also co-funded by FEDER in the Valencian Community 2014-2020.

\section*{Supplementary Materials}
\label{sect:SupplementaryMaterial}
Supplementary material associated with this article can be found, in the online version, at \url{pending-to-be-assigned}
\section{Introduction} Since the AdS/CFT correspondence was proposed\cite{juan}, an important question has been how it can be read to determine properties of the bulk theory from CFT quantities. There has been an enormous amount of investigation of various aspects of the boundary behavior that are implied by different features of the bulk theory, but there has been comparatively little investigation of the problem of extracting bulk behavior from the boundary theory. This problem, which might be referred to as ``decoding the hologram,'' has remained challenging. Of course, one of the purported miracles of the correspondence is precisely related to this question, that of how a higher-dimensional theory could be fully encoded in the lower-dimensional one. One of the challenges is to find sufficiently sharply refined boundary quantities that would be sensitive to fine-grained bulk detail. We will particularly focus on the question of whether and how the bulk S-matrix could be extracted from the boundary theory. Proposals for a prescription to do so from boundary correlators were outlined in \cite{Polchinski:1999ry,Susskind:1998vk}. However, there are considerable subtleties in implementing such proposals, described for example in \cite{FSS}. In particular, generic boundary data that might be used to specify incoming states, corresponding to non-normalizable behavior of states near the boundary, produces divergences that obscure physics at scales short compared to the AdS scale, $R$. This suggests that one consider more specialized boundary sources. In order to localize at scales smaller than $R$ in the bulk, one expects to need a construction of bulk wavepackets that do so. However, in order to avoid the divergences from the non-normalizable behavior, one expects that such data should have compact support on the boundary. The present paper will propose a class of sources and corresponding bulk wavepackets, which appear to strike an optimal balance between these criteria. Specifically, they have compact boundary support and are also taken to have high-frequency modulation in order to maximize localization properties. We then use these wavepackets to see how bulk S-matrix elements in the plane wave limit can be extracted from a candidate class of boundary correlators, in a certain scaling limit.\footnote{This scaling limit is closely related to the proposal in \cite{Polchinski:1999ry}. } This can happen only if the correlators have a certain singularity structure, which, we will explain, is necessary for producing the correct bulk kinematics. This singularity is only visible in the lorentzian regime and can be reached by a specific analytic continuation of the euclidean correlator, similar to the one considered in \cite{Cornalba:2006xk,Cornalba:2006xm,Cornalba:2007zb,Cornalba:2007fs,Cornalba:2008qf} in the context of the Regge limit of CFT correlators and eikonal scattering in AdS. While the correlators we consider are those arising via the GKPW prescription \cite{GKP,Witten} from the bulk supergravity, and thus do not arise directly from a boundary CFT, this construction provides an important test in principle, elucidating necessary CFT structure to encode certain local bulk dynamics. Thus, we suggest it supplies a piece of the answer to the question of how one might ``decode the hologram.'' In outline, the next section will give our explicit boundary sources and will investigate some bulk properties of the corresponding wavepackets. 
Next, section three, first from the bulk perspective, explains how such wavepackets could be tuned via an appropriate scaling limit to extract S-matrix elements for plane waves, from a simultaneous $R\rightarrow\infty$ and large-wavepacket limit. Then, a corresponding discussion is given on the boundary side, where one finds that the bulk kinematics of the momentum-conserving delta function can be encoded in a certain boundary singularity structure and that, for CFTs with this structure, the reduced transition matrix element could be read off from the coefficient of the singularity. Finally, section four shows that certain proposed CFT correlators, derived in other works from bulk supergravity, reproduce the correct singularity structure and, via our prescription, the correct reduced transition matrix elements. Two appendices contain technical details.

\section{Wavepackets in AdS}
\label{wavepacket}

Our starting point will be to specify boundary data that constructs wavepackets, whose scattering we will then study. Our goal is to have these wavepackets sufficiently localized that they scatter only within a region small as compared to the AdS radius.

\subsection{Geometry and coordinates}

We begin by reviewing some basics of AdS geometry.\footnote{See \cite{FSS,Penedones:2007ns} for more description of the geometry and the relation to the embedding space.} AdS$_{d+1}$ may be thought of as a hyperboloid $(X^M)^2=-R^2$ in $\mathbb{R}^{2,d}$. AdS$_2$ embedded in $\mathbb{R}^{2,1}$ is shown in figure \ref{hyperboloid}. The point $X_0=(R,0,0,...)$, which we will take as the point at which our wavepackets will intersect, is also shown.

\begin{figure}
\centering
\includegraphics[width=11cm]{globaltime}
\caption{AdS$_2$ is shown in blue. The revolution axis of the hyperboloid corresponds to the spacelike direction of $\mathbb{R}^{2,1}$ and the transverse plane is timelike. Global time is the angular coordinate in this plane. The point $X_0$ is a reference point in AdS$_2$. The null momenta $k_1$ and $k_2$ live in the tangent space to AdS at $X_0$. The boundary sources are supported in the neighborhood of the boundary points $P_i$. On the right, we show the universal cover of AdS$_2$ conformally compactified. }\label{hyperboloid}
\end{figure}

For present purposes we will work on the cover of AdS. We will parameterize this in terms of global coordinates $(\tau,\rho,\mathbf{e})$, with $\mathbf{e}$ a $d$-dimensional unit vector on $S^{d-1}$; these are related to embedding coordinates by
\begin{equation}
X=\frac{R}{\cos\rho}(\cos\tau,\sin\tau,\sin\rho\,\mathbf{e}).
\label{globalcoords}
\end{equation}
Then the metric takes the form
\begin{equation}
\label{globalmet}
ds^2={R^2\over \cos^2\rho}\left(-d\tau^2 + d\rho^2 + \sin^2\rho\, d\mathbf{e}^2\right)\ .
\end{equation}
Points on the boundary of AdS, $\rho\rightarrow \pi/2$, naturally map to the corresponding null rays in the embedding space,
\begin{equation}
\label{boundcoords}
P=(\cos\tau,\sin\tau,\mathbf{e})\ .
\end{equation}
The flat space limit of the vicinity of the point $x_0$, which corresponds to $X_0$, is taken by defining coordinates $t=R\tau$, $r=R\rho$. Then, the metric (\ref{globalmet}) becomes
\begin{equation}
ds^2=\frac{1}{\cos^2\frac{r}{R}}\left[ -d t^2+ dr^2 +R^2 \sin^2 \frac{r}{R} d\mathbf{e}^2 \right]\ .
\end{equation}
For $t,r\ll R$, this approximates the flat metric.
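To make the last statement explicit, expanding this metric in powers of $r/R$ gives
\begin{equation}
ds^2=\left(1+\frac{r^2}{R^2}+\dots\right)\left[ -dt^2+dr^2+r^2\left(1-\frac{r^2}{3R^2}+\dots\right) d\mathbf{e}^2 \right]\ ,
\end{equation}
so that curvature corrections enter only at relative order $(r/R)^2$.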
The corresponding neighborhood in the embedding space is given by \begin{equation} X\approx (R,t,\mathbf{x}) \ ,\label{centralcoords} \end{equation} where $\mathbf{x}=r\mathbf{e} \in \mathbb{R}^d$. Thus AdS$_{d+1}$ is well-approximated by its tangent space $\mathbb{M}^{d+1}$, which is simply the subspace of $ \mathbb{R}^{2,d} $ orthogonal to $ X_0=(R,0,\dots,0)$. \subsection{Wavepackets} We wish to describe boundary sources $\Phi$ that construct bulk wavepackets $\Psi$ that have appropriate properties, such as localization, {\it etc.} These will be related through the bulk-boundary propagator, \begin{equation} G_{B\partial}(b,x)= {C_\Delta \over R^{(d-1)/2}} {1\over (-2P\cdot X/R+i\epsilon)^{\Delta}}\ , \end{equation} where $b=(\tau,\mathbf{e})$ is a boundary point, the product in the denominator is formed between the embedding-space quantities corresponding to $b$ and $x$, via (\ref{globalcoords}) and (\ref{boundcoords}), $\Delta$ is the conformal dimension, and the $i\epsilon$ prescription is that appropriate to the neighborhood of the point $x_0$.\footnote{For more discussion of the $i\epsilon$ prescription on the universal cover of AdS see \cite{Dullemond:1984bc,Penedones:2007ns}.} In our conventions, which include powers of $R$ to yield appropriate bulk dimensions for bosonic fields, the constant $C_{\Delta}$ is given by \begin{equation} C_{\Delta} =\frac{\Gamma(\Delta)}{2 \pi^{\frac{d}{2}}\Gamma\left(\Delta-\frac{d}{2}+1\right)}\ . \end{equation} The bulk wavepacket will be given by \begin{equation} \label{bulkboundary} \Psi(x)=\int_{\partial AdS} d\tau d\mathbf{e} \Phi(b)G_{B\partial}(b,x)\ . \end{equation} It is important that the boundary source $\Phi$ be a smooth, compactly supported function. The requirement of compact support will become necessary when considering multiple boundary sources, as there are divergences when sources overlap, as emphasized in \cite{FSS}. From the AdS side, this can be understood by noting that if there are multiple overlapping non-normalizable modes, integrals over bulk interaction points will not converge. It is also possible to understand the divergence from the CFT perspective, where it arises when two CFT operators approach the same point. The smoothness requirement is useful in ensuring the approximations made later in the paper are well controlled, forcing the Fourier transform $\hat{\Phi}$ to fall faster than any power at high frequencies.\footnote{A smooth, compactly supported function is an example of a Schwartz function. The Fourier transform acts as an endomorphism on the space of Schwartz functions \cite{Reed:1980}.} We shall localize our source around the boundary point $b_0=(\tau_0,\mathbf{e}_0)$. We also introduce explicit frequency dependence, in order to produce a bulk wavepacket with frequencies near $\omega$. Thus, we consider a boundary source of the form \begin{equation} \Phi_{(\omega,\tau_0,\mathbf{e}_0)} (b)= e^{-i\omega R(\tau-\tau_0)} L\left(\frac{\tau-\tau_0}{\Delta\tau}\right) L\left(\frac{\theta}{\Delta\theta} \right) \label{source} \end{equation} where $L$ is a $C^\infty$ function with $L(0)=1$ and with compact support of width $\sim 1$ about the origin, \begin{equation} \cos\theta = \mathbf{e}\cdot\mathbf{e}_0\ , \end{equation} and $\Delta \tau$ and $\Delta \theta$ thus give the widths of the wavepacket on the boundary. In order to achieve the desired localization, we take \begin{equation} \frac{1}{\omega R} \ll \Delta\tau, \Delta\theta \ll 1\ . 
\label{boundcond}
\end{equation}

\subsection{Wave packet in the interaction region}
\label{wavepackint}

We next examine the form of the bulk wavepacket corresponding to the source (\ref{source}) in the vicinity of the point $x_0$, which we would like to be the interaction region between such wavepackets. In order that the wavepacket pass through this region, we take $\tau_0=-\pi/2$. Near $x_0$, and for small ${\tilde \tau} =\tau+\pi/2$, (\ref{boundcoords}) and (\ref{centralcoords}) give
\begin{equation}
-P\cdot X\approx R\cos\tau + t\sin\tau - \mathbf{e}\cdot \mathbf{x} \approx R{\tilde \tau} -t - \mathbf{e}\cdot \mathbf{x}\ .
\end{equation}
Substituting this into equation (\ref{bulkboundary}), we find
\begin{equation}
\Psi_{(\omega,\mathbf{e}_0)}(x)\approx C_{\Delta} R^{\Delta -(d-1)/2} \int d{\tilde \tau} \int d\mathbf{e} {e^{-i\omega R{\tilde \tau}} L\left(\frac{{\tilde \tau}}{\Delta\tau}\right) L\left(\frac{\theta}{\Delta\theta} \right)\over \left(2(R{\tilde \tau} -t - \mathbf{e}\cdot \mathbf{x}) +i\epsilon\right)^\Delta }\ ,
\end{equation}
and the substitution $\chi = \omega(R{\tilde \tau} -t-\mathbf{e}\cdot \mathbf{x})$ gives
\begin{eqnarray}
\Psi_{(\omega,\mathbf{e}_0)}(x) &\approx& C_{\Delta} \omega^{\Delta-1}R^{\Delta -(d+1)/2} e^{-i\omega t}\notag \\
&&\int d\chi \int d\mathbf{e} {e^{-i\chi-i \omega \mathbf{e}\cdot\mathbf{x} } L\left(\frac{\chi+\omega(t + \mathbf{e}\cdot \mathbf{x})}{\omega R\Delta\tau}\right) L\left(\frac{\theta}{\Delta\theta} \right)\over (2\chi +i\epsilon)^\Delta} \ .\notag\\
\end{eqnarray}
For large $\omega R \Delta \tau$, the dominant contribution to the integral over $\chi$ comes from the singularity at $\chi= 0$; subleading contributions are given by the series expansion in $\chi$ of the source near the singularity, and hence are suppressed by powers of $\frac{1}{\omega R\Delta\tau}$. Thus,
\begin{equation}
\label{bulkpack}
\Psi_{(\omega,\mathbf{e}_0)}(x)\approx \mathcal{D}_\Delta e^{-i\omega t} \omega^{\Delta-1}R^{\Delta -(d+1)/2} \int d\mathbf{e} e^{-i\omega\mathbf{e}\cdot\mathbf{x}} L\left(\frac{t +\mathbf{e}\cdot \mathbf{x} }{R\Delta\tau}\right)L\left( {\theta \over \Delta \theta} \right)
\end{equation}
where
\begin{equation}
\label{DDelta}
\mathcal{D}_\Delta=\frac{2\pi C_{\Delta} e^{-i\pi\Delta/2}}{2^\Delta\Gamma(\Delta)}.
\end{equation}
For $t,r\ll R\Delta \tau$, the first $L\approx1$, and the integral over angles gives the Fourier transform of the angular source. Thus, in the limit of small $\Delta \theta$, we find
\begin{equation}
\Psi_{(\omega,\mathbf{e}_0)}(x)\approx \Psi_{(\omega,\mathbf{e}_0)}(0)e^{ik\cdot x }
\label{planewave}
\end{equation}
where $k=\omega(1,-\mathbf{e}_0)$. The coefficient $\Psi_{(\omega,\mathbf{e}_0)}(0)$ is easily evaluated, to give
\begin{equation}
\Psi_{(\omega,\mathbf{e}_0)}(0)= \mathcal{D}_\Delta \omega^{\Delta-1}R^{\Delta -(d+1)/2} \int d\mathbf{e} L\left( {\theta \over \Delta \theta} \right)\ .
\end{equation}
In the limit of small $\Delta \theta$,
\begin{equation}
\int d\mathbf{e} L\left( {\theta \over \Delta \theta} \right) = (\Delta\theta)^{d-1} {\tilde L}_{d-1}\ ,
\end{equation}
with
\begin{equation}
{\tilde L}_{d-1} = \int d^{d-1}\kappa L(\kappa) = \Omega_{d-2}\int_0^\infty \kappa^{d-2} d\kappa L(\kappa) \
\end{equation}
and $\Omega_{d-2}$ the volume of the unit $S^{d-2}$.
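For reference, $\Omega_n$ here denotes the standard volume of the unit $S^n$,
\begin{equation}
\Omega_n=\frac{2\pi^{(n+1)/2}}{\Gamma\left(\frac{n+1}{2}\right)}\ ,
\end{equation}
so that, in particular, $\Omega_{d-2}=2\pi^{(d-1)/2}/\Gamma\left(\frac{d-1}{2}\right)$.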
Outside of the small $\Delta\theta$ limit, we see that the function $\Psi_{(\omega,\mathbf{e}_0)}(x)$ is a wavepacket with characteristic widths given by \begin{equation} \label{bulksize} \Delta t = R\Delta \tau\quad,\quad \Delta x_\perp = {1\over \omega \Delta \theta} \end{equation} in the longitudinal direction $t+\mathbf{e}_0\cdot \mathbf{x}$ and in the transverse directions, respectively. If we wish to have a wave packet that looks approximately like a plane wave near $x_0$, but is well localized at short distances as compared to $R$, we require \begin{equation} \label{bulkcond} \frac{1}{\omega}\ll \Delta t, \Delta x_\perp \ll R\ , \end{equation} which, using (\ref{bulksize}), is equivalent to (\ref{boundcond}). \section{Flat space S-matrix elements from CFT correlators} \label{flatS} Our goal will be to establish a relation between the CFT correlators and elements of the S-matrix of the dual string theory in the flat (Minkowski) limit. We will do so via a limiting procedure similar to that proposed in \cite{Polchinski:1999ry,Susskind:1998vk}. Specifically, we will scatter four of the wavepackets we have described, in the vicinity of the point $x_0$, adjusting them such that their typical widths remain less than the AdS radius. This is then expected to confine the interactions to the (almost) flat neighborhood of this point and, under the limit of infinite AdS radius, allows one to extract flat space S-matrix elements. In particular, we shall focus on $2\to 2$ elastic scattering with $\Delta_3=\Delta_1$ and $\Delta_4=\Delta_2$. Since this construction is based on choosing specific boundary sources integrated against a CFT correlation function, this then exhibits the appropriate limit of such correlators to be taken if bulk S-matrix elements are to be extracted. We emphasize that we will ultimately work with correlation functions that are derived from the bulk supergravity (or string) Feynman rules, as in \cite{GKP,Witten}. By showing how to isolate the needed behavior from such correlators that have a local bulk origin, we thus provide a test that can be applied to a true boundary conformal field theory, such as ${\cal N}=4$ super-Yang Mills, to see whether it has the appropriate structure to correspond to a local bulk theory. \subsection{Bulk construction} We work in the vicinity of $x_0$, approximately parameterized by (\ref{centralcoords}). We will use wavepackets as given by (\ref{source}), with $(d+1)$-dimensional momenta \begin{equation} \label{STk} k_i = (\omega_i, \mathbf{k}_i)\ . \end{equation} Our convention is that all momenta flow {\it into} the corresponding diagram, and in particular, $\omega_{3,4}$ are negative. It will also be useful to define the corresponding (inward pointing) unit vectors, \begin{equation} \label{Sk} \mathbf{k}_i = |\omega_i| { \mathbf{ \hat k}_i}\ . \end{equation} We thus take wavepackets (\ref{source}) with $\tau_{1,2}=-\pi/2$, and $\tau_{3,4}=\pi/2$, defined in terms of the angles \begin{equation} \label{thetadef} \cos\theta_i = -\mathbf{e}_i \cdot { \mathbf{ \hat k}_i}\ . \end{equation} These produce bulk wavefunctions $\Psi_{k_i}(x)$ with behavior as described in section \ref{wavepackint}. The scattering amplitude between these wave functions reads \begin{equation} \label{Corrfcn} \int_{{\rm AdS}} \prod_{i=1}^4 d x_i \Psi_{k_i}(x_i) G(x_1,\dots,x_4) \end{equation} where $G(x_1,\dots,x_4)$ is the amputated bulk Green's function. We shall take the flat space limit and the plane wave limit together, as in \cite{Polchinski:1999ry}. 
Specifically, we introduce a dimensionless scaling parameter $\eta$ and define \begin{equation} \label{scalelim} R= \eta^2 {\hat R} \ ,\ \ \ \ \ \ \ \ \Delta \tau = \eta^{-1} \widehat{\Delta \tau} \ , \ \ \ \ \ \ \ \ \Delta \theta =\eta^{-1}\widehat{\Delta \theta}\ . \end{equation} We then take the limit of large $\eta$, holding the $\omega_i$ and hatted quantities fixed. The conditions (\ref{boundcond}) and (\ref{bulkcond}) are automatically satisfied due to the strong ordering $1 \ll \eta \ll \eta^2 $. In this limit, the curvature corrections become small because the range of the wave packets scales with $\eta$ and the AdS radius of curvature scales with $\eta^2$. In the flat region, the wavepackets take the form \begin{equation} \Psi_{k_i}(x)\approx e^{ik_i\cdot x} F_i(x)\ , \end{equation} where the envelope $F(x)$ is given by (\ref{bulkpack}), and becomes nearly constant. In the absence of IR divergences or other subtleties, we thus expect that in (\ref{Corrfcn}) we can replace the AdS Green function $G$ by the corresponding flat-space Green function, which we write in the form \begin{equation} G(x_1,\dots,x_4)=i \int_{\mathbb{M}^{d+1}} \prod_{i=1}^4 \frac{dk'_i}{(2\pi)^{d+1}} e^{-i k'_i\cdot x_i} \mathcal{M}(k'_1, \dots, k'_4 )\ . \end{equation} The scattering amplitude (\ref{Corrfcn}) then becomes \begin{equation} \int_{\mathbb{M}^{d+1}}\prod_{i=1}^4 \frac{d k'_i}{(2\pi)^{d+1}} \hat{F}_i(k_i-k_i') \mathcal{M}(k_1',k_2',k_3',k_4') \label{FMeqn} \end{equation} where $\hat{F}_i$ is the Fourier transform of $F_i$. In the limit $\eta \to \infty$, the support of $\hat{F}_i$ gets localized at $k_i-k_i'\sim 1/\eta \to 0$. In particular, one finds that \begin{equation} \hat{F}_i(k_i-k_i')\rightarrow (2\pi)^{d+1}\delta^{d+1}(k_i-k'_i) \Psi_{k_i}(0)\ . \label{Flim} \end{equation} Of course, $ \mathcal{M}$ is directly related to the flat S-matrix, modulo the usual subtleties of LSZ, etc. In particular, the S-matrix has the form ${\cal S} = 1+i {\cal T}$, and for two-particle scattering between plane-wave states, \begin{equation} \langle k_3,k_4| {\cal T}|k_1,k_2\rangle = (2\pi)^{d+1} \delta^{d+1}\left(\sum_i k_i\right) T(s,t)\ , \end{equation} where one typically defines the Mandelstam invariants \begin{equation} \label{mandelstam} s= -(k_1+k_2)^2\quad,\quad t= -(k_1+k_3)^2\quad,\quad u=-(k_1+k_4)^2\ , \end{equation} and $s+t+u=0$ for massless particles. The scattering angle $\Theta$ is given by \begin{equation} \sin^2 \frac{\Theta}{2} =-\frac{t}{s} \ ,\ \ \ \ \ \ \ \ \ \ \cos^2 \frac{\Theta}{2} =-\frac{u}{s}\ , \end{equation} and $s$ is the square of the center-of-mass energy. Specifically, there is a direct contribution corresponding to the ``one'' in $\cal S$, which would arise from disconnected diagrams, and connected diagrams produce $T$. Focussing on the latter, we expect ${\cal T} = {\cal M}$, and thus combining (\ref{FMeqn}), (\ref{Flim}), that \begin{equation} i (2\pi)^{d+1} \delta^{d+1} \left( \sum k_i \right)T(s,t) = \lim_{\eta \to \infty} \int\prod_{i=1}^4 db_i {\Phi_{k_i} (b_i) \over\Psi_{k_i}(0)} A_{CFT}(b_1,\cdots,b_4)\,. \label{mainformula} \end{equation} Thus, we expect to be able to derive such elements of the S-matrix, corresponding to plane wave external states, from this limiting procedure. \subsection{CFT construction} \label{CFTconstruction} The goal of this subsection is to see this procedure work, directly at the level of the CFT. The reason for this is two-fold. 
First, it may strike one that we could have been incautious in the limiting procedures of the preceding subsection. Secondly, the formula (\ref{mainformula}) has on its left hand side basic features characteristic of a bulk local theory. We would like to understand how these are reproduced by the CFT quantities, on the right hand side. In fact, this could be viewed as providing a non-trivial test for CFTs, to diagnose whether they could correspond to a local (or approximately local) bulk theory, and moreover provide an important part of the key to decoding bulk local behavior from CFT correlators. Specifically, combining (\ref{source}) and (\ref{mainformula}), together with the scaling limit (\ref{scalelim}), our conjecture is that the S-matrix elements are given by \begin{eqnarray} i (2\pi)^{d+1} \delta^{d+1} \left( \sum k_i \right)T(s,t) &=&\lim_{\eta\rightarrow\infty} \int \prod_{i=1}^4\Bigl[ db_i N_i e^{-i\omega_i {\hat R}(\tau_i-\tau_{i0} )\eta^2}\notag\\ && L\left({\eta (\tau_i-\tau_{i0} )\over \widehat{\Delta \tau} } \right) L\left({\eta\theta_i\over \widehat{\Delta \theta}}\right) \Bigr] A_{CFT}(b_i)\ , \label{mainformulaa} \end{eqnarray} where \begin{equation} N_i = {1\over \eta^{2(\Delta_i-d)}} {1\over \mathcal{D}_{\Delta_i} \tilde{L}_{d-1}|\omega_i|^{\Delta_i -1} {\hat R}^{\Delta_i -(d +1)/2} \widehat{\Delta \theta}^{d-1}}\ . \end{equation} We wish to see how, for a given CFT, the expression on the right hand side produces the left hand side. The form of the CFT four-point function is highly constrained by conformal invariance. Let $P_i = P(b_i)$ be given by (\ref{boundcoords}); then \begin{equation} \label{ACFTdef} A_{CFT}(b_i)= \frac{C_{\Delta_1} C_{\Delta_2} } {(-2 P_1 \cdot P_3+i\epsilon)^{\Delta_1} (- 2 P_2\cdot P_4 +i\epsilon)^{\Delta_2} } \mathcal{A} (z,\bar{z})\, , \end{equation} where we chose the normalization so that $\mathcal{A}=1$ corresponds to the disconnected contribution and $z$ and $\bar{z}$ are defined in terms of the cross ratios \begin{equation} z\bar{z}=\frac{(P_{1} \cdot P_{3})(P_{2} \cdot P_{4})} {(P_{1} \cdot P_{2})(P_{3} \cdot P_{4})}\,, \end{equation} and \begin{equation} (1-z)(1-\bar{z})=\frac{(P_{1} \cdot P_{4})(P_{2} \cdot P_{3})} {(P_{1} \cdot P_{2})(P_{3} \cdot P_{4})}\,. \end{equation} A first check of (\ref{mainformula}) is that the RHS is zero for generic $k_i$'s not summing to zero. In this case, we are probing the four point function at generic values of the cross ratios where the function is regular. Therefore, the $\tau_i$ integrals over the boundary will generate a Fourier transform of a smooth compact support function. In the limit $\eta \to \infty$, the frequency scales with $\eta^2$ and the width scales with $1/\eta$. Thus the final result is zero as expected. We expect the needed delta function to arise from singular behavior of $\mathcal{A} (z,\bar{z})$. The wavepackets in (\ref{mainformulaa}) force the $b_i$ to be in the vicinity of $(\tau_{i0}, - \mathbf{\hat{k}}_i)$, as seen from equations (\ref{STk}-\ref{thetadef}). To exhibit the singular behavior, we let $\tau_{i0}=-\pi/2$ for $i=1,2$, and $\tau_{i0}=\pi/2$ for $i=3,4$. 
Let us introduce new parameters $\rho$, $\sigma$ through $z=\sigma e^{-\rho}$ and $\bar{z}=\sigma e^{\rho}$; these can be shown to be given by
\begin{equation}
\sigma^2=\frac{(P_{1} \cdot P_{3})(P_{2} \cdot P_{4})} {(P_{1} \cdot P_{2})(P_{3} \cdot P_{4})}\,,
\end{equation}
and
\begin{equation}
\label{rhodef}
\sinh^2 \rho= \frac{{\rm Det}(P_{i} \cdot P_{j})} { 4 (P_{1} \cdot P_{3})(P_{2} \cdot P_{4})(P_{1} \cdot P_{2})(P_{3} \cdot P_{4})}\ .
\end{equation}
We recognize in this last expression the Gram determinant of the null vectors $P_i$. For momentum-conserving $k$'s, we then find that $P(\tau_{i0}, - \mathbf{\hat{k}}_i)$ yield $\rho=0$ -- this follows from linear dependence of the vectors $P_i$. We conclude that, for momentum-conserving $k$'s, the $\eta \to \infty$ limit in (\ref{mainformula}) is probing the $\bar{z}\approx z$ region of the correlator.

The reduced amplitude $\mathcal{A} (z,\bar{z})$ is in general a dimensionless function of the cross ratios. We have found that, to produce the correct bulk structure, $\mathcal{A}$ must diverge in the kinematical limit $\bar{z} \to z$. We shall focus on describing tree level interactions in $(d+1)$-dimensional spacetime controlled by the coupling constant $g$. Then, the amplitude $\mathcal{A}$ will be proportional to the dimensionless factor $g^2 R^{5-d-2j}$, where $j$ is fixed by dimensional analysis. This interaction corresponds to a flat space matrix element of the form $i g^2 s^{j-1} \mathcal{M}(\Theta)$, where $\Theta$ is the scattering angle. In the particular case where the external scalars are minimally coupled to an exchanged particle, $j$ is the spin of the exchanged particle. Thus, we expect an amplitude that behaves as
\begin{equation}
\mathcal{A} (z,\bar{z})\approx g^2 R^{5-d-2j} \frac{\mathcal{F}(\sigma)}{(-\rho^2)^{\beta}}
\label{limA}
\end{equation}
in the vicinity of $z\approx \bar{z}$. For now, we assume this generic power law divergence but, below, we shall be able to fix the exponent $\beta$ by requiring appropriate scaling. Moreover, in section \ref{examples} we will examine some specific amplitudes (computed via the bulk supergravity) and confirm that they exhibit such singularities. We will also examine the kinematical origin of the singularity at $\rho=0$ later in this section.

The full delta function on momenta follows from the detailed structure of this singularity. We begin by defining new variables ${\hat \tau}_i= (\tau_i -\tau_{i0}) \eta^2$, in terms of which the RHS of (\ref{mainformulaa}) becomes
\begin{equation}
{\rm RHS}= \lim_{\eta\rightarrow\infty}{1\over \eta^8} \int \prod_{i=1}^4 d{\hat \tau}_i d\mathbf{e}_i N_i e^{-i\omega_i{\hat R} {\hat\tau}_i} L\left({{\hat \tau}_i\over\eta \widehat{\Delta \tau} } \right)L\left({\eta\theta_i\over \widehat{\Delta \theta}}\right) A_{CFT}(\tau_{i0}+{ {\hat \tau}_i\over \eta^2},\mathbf{e}_i)\ .
\end{equation}
In the limit $\eta\rightarrow\infty$, the integral becomes very peaked in $\theta_i$, {\rm i.e.} at the points $\mathbf{e}_i = -\mathbf{ \hat k}_i$. Moreover, the distribution $L({{\hat \tau}_i/\eta \widehat{\Delta \tau} })$ becomes very flat, as compared to the variation in the exponential. We thus replace it by its value at zero, $L(0)=1$.
The result is that
\begin{equation}
\label{intermexp}
{\rm RHS}= \lim_{\eta\rightarrow\infty}{{\cal L}\over \eta^8} \int\prod_i d{\hat \tau}_i e^{-i\sum_i \omega_i {\hat R} {\hat\tau}_i} A_{CFT}(\tau_{i0}+{ {\hat \tau}_i\over \eta^2}, -\mathbf{ \hat k}_i)\
\end{equation}
where
\begin{equation}
{\cal L}= \prod_i N_i \left(\frac{\widehat{\Delta \theta}}{\eta}\right)^{d-1}{\tilde L}_{d-1}= \prod_i {1\over \eta^{2\Delta_i-d-1}} {1\over \mathcal{D}_{\Delta_i} |\omega_i|^{\Delta_i -1} {\hat R}^{\Delta_i -(d +1)/2} }\ .
\end{equation}
A non-vanishing result comes from the singularity at ${\hat \tau}_i=0$, which for momentum-conserving $k_i$ produces the singularity at $\rho=0$. The delta function follows by examining perturbations of this singularity as the momenta are varied away from conserved values.

We first examine the contributions of the Gram determinant in (\ref{rhodef}). With $b_i=(\tau_{0i}+{\hat \tau}_i/\eta^2,-\mathbf{ \hat k}_i)$, we find via (\ref{boundcoords})
\begin{equation}
P_i\cdot P_j = \pm\left( {1\over 2} {{\hat\tau}_{ij}^2\over \eta^4 } + {k_i\cdot k_j\over \omega_i\omega_j}\right) +{\cal O}[({\hat\tau}/\eta^2)^4]
\end{equation}
with ${\hat\tau}_{ij} = {\hat\tau}_i -{\hat\tau}_j$, and with plus sign for $(i,j)=(1,2)$ or $(3,4)$, and minus otherwise. Thus, the determinant yields
\begin{equation}
\det(P_i\cdot P_j) = \det\left({k_i\cdot k_j\over \omega_i\omega_j}\right) + {\cal O}({\hat \tau}^2)\ .
\end{equation}
While it should be possible to derive expressions in a general frame, we find it convenient to pick a particular frame to evaluate the quantities entering the correlator. We do this using the isometry group of AdS, $SO(d,2)$. This contracts to the flat Poincar\'e group, so that such transformations can be used to pick particular Lorentz frames. In making coordinate choices, we also note that $A_{CFT}$ is invariant under translations of $\tau$. This means that we can take ${\hat \tau}_i\rightarrow {\hat \tau}_i -{\hat \tau}_1$, and eliminate ${\hat \tau}_1$ from the correlator. The integral over ${\hat \tau}_1$ then gives $2\pi\delta({\hat R} \sum_i\omega_i)$. Then, let us choose $\mathbf{ \hat k}_2 = -\mathbf{ \hat k}_1$, as part of going to the center of mass frame. Moreover, in general $\mathbf{ \hat k}_1$ and $\mathbf{ \hat k}_3$ define a plane; let $\mathbf{ \hat k}_{4,\perp}$ be the projection of $\mathbf{ \hat k}_4$ perpendicular to that plane. Also, define $\cos\vartheta_3= -\mathbf{ \hat k}_3\cdot\mathbf{ \hat k}_1$ and $\cos\vartheta_4= -\mathbf{ \hat k}_4\cdot\mathbf{ \hat k}_2$. Then, one can check
\begin{equation}
\label{Pijexp}
\frac{1}{4}\det(P_i\cdot P_j) = {\bar \tau}^2/\eta^4 - \mathbf{ \hat k}_{4,\perp}^2 \sin^2\vartheta_3 \left[1+ {\cal O}({\hat \tau}^2/\eta^4)\right] + {\cal O}[({\hat\tau}/\eta^2)^4]
\end{equation}
with
\begin{equation}
{\bar \tau} = \left(\frac{{\hat \tau}_2}{2} - {\hat \tau}_4\right) \sin\vartheta_3 + \frac{{\hat \tau}_2}{2}\sin(\vartheta_3-\vartheta_4) + \left(\frac{{\hat \tau}_2}{2}-{\hat \tau}_3\right)\sin \vartheta_4\ .
\label{taubar}
\end{equation}
The other $P_i\cdot P_j$ terms in both $\rho$, eq.~(\ref{rhodef}), and $A_{CFT}$, eq.~(\ref{ACFTdef}), can likewise be expanded about $(\tau_{0i}, -\mathbf{ \hat k}_i)$, but subleading terms enter the final expression at the same order as the neglected terms in (\ref{Pijexp}), and in particular their ${\hat \tau}$ dependence contributes subleading corrections to the singularity.
Thus, (\ref{intermexp}) becomes \begin{equation} \label{Intermexp} {\rm RHS} = 2\pi\delta({\hat R}\sum_i\omega_i){\cal B}\lim_{\eta\rightarrow\infty}{{\cal L}\over \eta^8} ({\hat R} \eta^2)^{5-d-2j} \int {d{\hat \tau}_2 d{\hat \tau}_3d{\hat \tau}_4 e^{-i{\hat R} (\omega_2{\hat \tau}_2+\omega_3{\hat \tau}_3+\omega_4{\hat \tau}_4)}\over( \mathbf{ \hat k}_{4,\perp}^2 \sin^2\vartheta_3 - {\bar \tau}^2/\eta^4 + \cdots)^\beta} \end{equation} with \begin{eqnarray} {\cal B} &=& g^2 e^{-i\pi (\Delta_1+\Delta_2)} \frac{C_{\Delta_1}C_{\Delta_2}}{ 2^{ \Delta_1+\Delta_2}} {\cal F}(\sigma) \\ &&\left({-k_1\cdot k_3\over \omega_1\omega_3}\right)^{\beta-\Delta_1} \left({-k_2\cdot k_4\over \omega_2\omega_4}\right)^{\beta-\Delta_2} \left({-k_1\cdot k_2\over \omega_1\omega_2}\right)^{\beta} \left({-k_3\cdot k_4\over \omega_3\omega_4}\right)^{\beta} \ .\nonumber \end{eqnarray} We can now change integration variables from ${\hat \tau}_2$ to ${\bar \tau}$. The leading singularity is then independent of ${\hat \tau}_3$, ${\hat \tau}_4$, and thus integrals over these give delta functions. These, together with the energy-conserving delta function, enforce conservation of energy and momentum in the plane defined by $\mathbf{ \hat k}_1$, $\mathbf{ \hat k}_3$. In particular, in the center-of-mass frame, $\omega_1=\omega_2$, we thus find $\vartheta_3=\vartheta_4 =\Theta$. Collecting all powers of $\eta$ in (\ref{Intermexp}), together with the integral over $\bar \tau$, gives an expression of the form \begin{equation} \lim_{\eta\rightarrow\infty} \eta^{2(2\beta-2\Delta_1-2\Delta_2-2j+d+3)} \int d{\bar \tau} {e^{-i{\hat R} \omega_2 {\bar\tau}/(\sin\Theta)}\over (\eta^4 \mathbf{ \hat k}_{4,\perp}^2 \sin^2\Theta - {\bar\tau}^2 )^\beta } \end{equation} As shown in appendix \ref{Nbeta}, this produces the appropriate delta function on transverse momenta precisely if \begin{eqnarray} \beta= \Delta_1+\Delta_2+j -5/2\, ,\label{beta} \end{eqnarray} and as long as $2\beta >d-2$. We will check in specific examples that this relation holds. Then, \begin{equation} \lim_{\eta\rightarrow\infty} \eta^{2(d-2)} \int d{\bar \tau} {e^{-i{\hat R} \omega_2 {\bar\tau}/(\sin\Theta)}\over (\eta^4 \mathbf{ \hat k}_{4,\perp}^2 \sin^2\Theta - {\bar\tau}^2)^\beta }= {({\hat R} \omega_2)^{2\beta-d+1} {\cal N}_\beta\over (\sin\Theta)^{2\beta-1}} \delta^{d-2}(\mathbf{ \hat k}_{4\perp}) \end{equation} where the coefficient ${\cal N}_\beta$ is derived in appendix \ref{Nbeta}: \begin{equation} \mathcal{N}_\beta=\frac{\pi ^{\frac{d+1}{2}} }{2^{2\beta-d}\Gamma \left( \beta \right) \Gamma\left( \frac{2\beta+3-d}{2} \right)}\ . \end{equation} Combining the various factors, we then find \begin{equation} {\rm RHS}= (2\pi)^{d+1} \delta^{d+1}(\sum_i k_i) \mathcal{K}\, g^2 s^{j-1} \Big(\frac{-t}{s}\Big) ^{j-2} \Big(\frac{-u}{s}\Big) ^{3-j-\Delta_1-\Delta_2} \mathcal{F}\Big(\frac{-t}{s}\Big) \ , \end{equation} where we have rewritten quantities in terms of the Mandelstam parameters (\ref{mandelstam}) and \begin{eqnarray} \mathcal{K}& =& \frac{e^{- i \pi (\Delta_1 + \Delta_2)} \mathcal{N}_\beta C_{\Delta_1} C_{\Delta_2}}{ 2(2\pi)^{d-2} \mathcal{D}_{\Delta_1}^2\mathcal{D}_{\Delta_2}^2 }\ . \end{eqnarray} Thus, with appropriate singularity at $z={\bar z}$, the conformal field theory can reproduce the proper bulk kinematical structure. 
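As a simple numerical illustration of (\ref{beta}), consider external operators with $\Delta_1=\Delta_2=4$ in $d=4$ and the exchange of a spin-two particle, $j=2$; then
\begin{equation}
\beta=4+4+2-\frac{5}{2}=\frac{15}{2}\ ,\qquad 2\beta=15>d-2=2\ ,
\end{equation}
so the condition for the delta function on the transverse momenta is comfortably satisfied.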
Finally, comparing the two sides of (\ref{mainformulaa}) yields a general proposal for the form of the bulk reduced transition matrix element, in terms of the coefficient of the singularity: \begin{equation} iT(s,t)=\mathcal{K}\, g^2 s^{j-1} \Big(\frac{-t}{s}\Big) ^{j-2} \Big(\frac{-u}{s}\Big) ^{3-j-\Delta_1-\Delta_2} \mathcal{F}\Big(\frac{-t}{s}\Big) \ , \label{result} \end{equation} where $\cal F$ was defined in (\ref{limA}), and the constant $\cal K$ is given by \begin{eqnarray} \mathcal{K} =\frac{\pi^{\frac{d-3}{2}}\Gamma \left(\Delta_1 \right) \Gamma \left(\Delta_2 \right) \Gamma \left(\Delta_1 - \frac{d}{2} + 1 \right) \Gamma \left(\Delta_2 - \frac{d}{2} + 1 \right) } {4^{j-2} \Gamma \left(\Delta_1 +\Delta_2+j- \frac{5}{2} \right) \Gamma \left(\Delta_1 +\Delta_2+j-1- \frac{d}{2} \right)}\ . \end{eqnarray} \subsection{Boundary kinematics of the singularity} Clearly the singularity at $z={\bar z}$ is an essential feature of the boundary CFT, if it is going to reproduce the full bulk energy-momentum conservation. In this subsection we investigate more closely the limit in which it is produced; then in the next section we will examine explicit correlators that exhibit this singularity. The causal relations between the boundary points used in the previous section were the following: points 1 and 2, as well as points 3 and 4, were spacelike related, and points 3 and 4 lie inside the future lightcone of both points 1 and 2. With these causal relations, the singularity is expected at $z={\bar z}$. From the bulk point of view, it is natural to consider the space of all points in AdS that are null related to the four boundary points $b_i=(\tau_i,\mathbf{e}_i)$, \begin{equation} X \in \mathbb{R}^ {2,d}\ ,\ \ \ \ \ \ X\cdot P_i=0\ ,\ \ \ \ \ \ X^2=-R^2\ . \end{equation} In general, this is an empty set. Indeed, the conditions $X\cdot P_i=0$ for generic $P_i$ imply $X\in \mathbb{R}^ {d-2}$, which is incompatible with $X^2=-R^2$. Furthermore, the same statement applies to boundary points. However, the condition of equal cross ratios $z=\bar{z}$ was seen to be equivalent to \begin{equation} {\rm Det} \, P_i\cdot P_j =0 \ , \label{det=0} \end{equation} where the determinant is taken over the indices $i,j=1,\dots,4$. This condition means that the four points $ P_i \in \mathbb{R}^ {2,d}$ are either linearly dependent or they generate a null 4-dimensional submanifold. In the latter case, it is convenient to write \begin{equation} \mathbb{R}^ {2,d}=\mathbb{R}^ {d-3} \times \mathbb{R}_ {\bar N} \times \left( \mathbb{R}_ {N} \times \mathbb{M}^ {3} \right)\ , \end{equation} where $\mathbb{R}_ {{\bar{N}}} \times \mathbb{R}_ {N}=\mathbb{M}^2 $ is a split along two null directions $N, {\bar N}$, and the factor in brackets represents the submanifold generated by the $P_i$'s. For $X$ to be lightlike related to all external points $P_i$, we need $X \in \mathbb{R}^ {d-3} \times \mathbb{R}_ {N}$, but this is incompatible with $X^2=-R^2$. We conclude that there are no bulk points lightlike related to all external points. Moreover, only the boundary point $N \in \mathbb{R}_N$ is lightlike related to all external points. \begin{figure} \centering \includegraphics[height=6cm]{cylinder2} \caption{Sketch of the boundary points configuration in AdS$_{d+1}$ for the Lorentzian kinematical condition of equal cross ratios $z=\bar{z}$. Here, all such points are lightlike related to the bulk point $x_0$.
} \label{CFTkin} \end{figure} In the degenerate case that we consider, where the external points are linearly dependent, the space orthogonal to all of them is $\mathbb{M}^ {d-1}$. Then, the condition $X^2=-R^2$ defines a $(d-2)$-dimensional hyperboloid. The space of all boundary points that are null related to the four points $P_i$ consists of a $(d-3)$-sphere, which is the boundary of the $(d-2)$-dimensional hyperboloid in the bulk. In the particular case of $d=2$ there is no null 4-dimensional submanifold of $\mathbb{R}^{2,2}$. Therefore, condition (\ref{det=0}) implies that the $P_i$'s are linearly dependent and we fall in the degenerate case described in the previous paragraph. Then, the $(d-2)$-dimensional hyperboloid in the bulk consists of a single point, which we can take to be our reference point $x_0$, as shown in figure \ref{CFTkin}. Unfortunately, we are unable to draw the more general higher dimensional cases where there are boundary points null related to all external points. We conclude that the divergence of the four point function when $\bar{z} \to z$ would arise when there is a boundary point that is null related to all the four external points of the correlation function. \section{Examples} \label{examples} We shall now illustrate the appearance of the $z={\bar z}$ singularity and the application of our main result (\ref{result}) in some particular examples. More precisely, we shall consider several explicit boundary four point functions (originally derived via Euclidean bulk supergravity tree computations), study their $\bar{z} \to z$ limit, and extract from this the corresponding bulk reduced transition matrix elements. These will be found to have precisely the correct form corresponding to the tree level interaction in flat space. \subsection{Analytic continuation} \label{analy} We first describe the analytic continuation necessary to go from Euclidean correlators to the Lorentzian ones that we require. In the Euclidean regime, $z$ and ${\bar z}$ are indeed complex conjugate. We find the Lorentzian correlators by following the complex paths of $z$ and $\bar{z}$ generated by the appropriate Wick rotation. The continuation path is described by the Wick rotation of AdS global time $\tau \to -i \tau e^{i \alpha}$ where $\alpha=0$ is the Euclidean regime and $\alpha=\frac{\pi}{2}$ is the Lorentzian one. The formula (\ref{boundcoords}) then gives \begin{equation} P\rightarrow(\cos(-i \tau e^{i \alpha}),\sin(-i \tau e^{i \alpha}),\mathbf{e})\ . \end{equation} With the kinematics described in section \ref{flatS}, this then yields the continuations \begin{equation} \label{cplxpaths} z=\cos^2 \frac{\Theta-i\pi e^{i \alpha}}{2} \ , \hspace{30pt}\bar{z}=\cos^2 \frac{\Theta+i\pi e^{i \alpha}}{2}\ . \end{equation} \begin{figure} \centering \includegraphics[width=7cm]{Wickrotation} \caption{ Complex paths $z(\alpha)$ and $\bar{z}(\alpha)$ starting from the Euclidean regime at $\alpha=0$ to the Lorentzian one at $\alpha=\frac{\pi}{2}$, for the particular scattering angle $\Theta=1$. }\label{Wickrotation} \end{figure} In general the four point function is a multivalued function with branch points at $z,\bar{z}=0,1,\infty$. Therefore, it is important to evaluate the four point function on the appropriate Riemann sheet. The standard choice for the Euclidean four point function is to choose the branch cuts along the positive real axis. From figure \ref{Wickrotation} we see that under the Wick rotation $\bar{z}$ crosses this branch cut.
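Since the examples below hinge on this continuation, it may be useful to trace the paths (\ref{cplxpaths}) concretely. The following minimal numerical sketch (an illustration only, assuming the numpy library; the value $\Theta=1$ matches figure \ref{Wickrotation}) follows $z(\alpha)$ and $\bar{z}(\alpha)$ from $\alpha=0$ to $\alpha=\frac{\pi}{2}$ and flags sign changes of the imaginary part that occur at positive real part, i.e. crossings of the branch cuts along the positive real axis:

```python
import numpy as np

# Trace the continuation paths z(alpha), zbar(alpha) of eq. (cplxpaths)
# for Theta = 1, from the Euclidean (alpha=0) to the Lorentzian
# (alpha=pi/2) regime, and locate crossings of the positive real axis.
Theta = 1.0
alphas = np.linspace(0.0, np.pi / 2, 2001)
z = np.cos((Theta - 1j * np.pi * np.exp(1j * alphas)) / 2) ** 2
zbar = np.cos((Theta + 1j * np.pi * np.exp(1j * alphas)) / 2) ** 2

def crossings(w):
    # indices where Im(w) changes sign while Re(w) > 0
    s = np.sign(w.imag)
    idx = np.where(s[:-1] * s[1:] < 0)[0]
    return [i for i in idx if w.real[i] > 0]

print("z    crossings of positive real axis:", crossings(z))     # expect none
print("zbar crossings of positive real axis:", crossings(zbar))  # expect one
```

Running this, $z$ exhibits no such crossing (its imaginary part changes sign only at negative real part), while $\bar{z}$ crosses once, in accordance with the discussion above.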
It is also important that $z$ approaches the real axis from below and $\bar{z}$ from above \begin{equation} z\to z-i\epsilon\ ,\hspace{25pt} \bar{z}\to \bar{z} +i\epsilon\ . \end{equation} \subsection{Contact interactions} We start by considering a contact interaction with coupling $g^2$ between our two scalar fields produced by a quartic vertex in AdS$_{d+1}$. The coupling $g^2$ has length dimension $d-3$ and therefore corresponds to $j=1$ in (\ref{limA}). Moreover, the tree level Witten diagram is simply given by \begin{equation} A_{CFT}(b_i)= g^2 R^{3-d} \pi^{\frac{d}{2}}C_{\Delta_1}^2 C_{\Delta_2}^2 D_{\Delta_1 \Delta_2 \Delta_1 \Delta_2}(P_i)\ , \end{equation} where $D_{\Delta_i}$ is the standard D-function reviewed in appendix \ref{Dfunctions}. Using equation (\ref{DtoDbar}) we find the reduced amplitude \begin{equation} \mathcal{A}(z,\bar{z})= g^2 R^{3-d} \frac{\pi^{\frac{d}{2}} C_{\Delta_1} C_{\Delta_2} \Gamma\left(\Delta_1+\Delta_2-\frac{d}{2}\right)} {2\Gamma^2(\Delta_1) \Gamma^2(\Delta_2)} \bar{D}_{\Delta_1 \Delta_2 \Delta_1 \Delta_2}(u,v)\ , \end{equation} where $u$ and $v$ are defined in terms of $z$ and $\bar z$ in eq.~(\ref{uvdef}). When $\Delta_1$ and $\Delta_2$ are positive integers, we can determine the small $\rho$ behavior of the $\bar{D}$-function using the techniques explained in appendix \ref{Dfunctions}. We obtain \begin{equation} \bar{D}_{\Delta_1\, \Delta_2\, \Delta_1\,\Delta_2 }(u,v)\approx 2i \pi^{\frac{3}{2}} \Gamma\left(\Delta_1+\Delta_2-\frac{3}{2}\right) \frac{\sigma (1-\sigma)^{\Delta_1+\Delta_2-2}}{(-\rho^2)^{\beta}} \ , \end{equation} with \begin{equation} \beta=\Delta_1+\Delta_2-\frac{3}{2}\ , \end{equation} in agreement with the prediction (\ref{beta}), and which gives \begin{equation} \mathcal{F}(\sigma)=i\frac{ \pi^{\frac{d+3}{2}} C_{\Delta_1} C_{\Delta_2} \Gamma\left(\Delta_1+\Delta_2-\frac{d}{2}\right)\Gamma\left(\Delta_1+\Delta_2-\frac{3}{2}\right) } {\Gamma^2(\Delta_1) \Gamma^2(\Delta_2)} \sigma (1-\sigma)^{\Delta_1+\Delta_2-2}\ . \end{equation} Inserting this expression into (\ref{result}) we obtain the simple reduced transition matrix element \begin{equation} T(s,t)=g^2\ , \end{equation} as expected for a contact interaction. \subsection{Scalar exchange} We now consider scalar exchanges in AdS$_{d+1}$, which correspond to $j=0$. As explained in \cite{D'Hoker:1999pj, D'Hoker:1999ni}, the associated four point function can be reduced to a finite sum of D-functions if $2\Delta_1$ minus the conformal dimension of the t-channel exchanged scalar is a positive even integer. We shall consider this particular case. In appendix \ref{Dfunctions}, we show that different D-functions have different singular behavior at $\rho=0$. In particular, the singularity at $\rho=0$ gets stronger as the sum of the indices of the D-function increases. Therefore, in the sum of D-functions obtained in \cite{D'Hoker:1999ni} it is enough to keep \begin{equation} A(b_i)\approx g^2 R^{5-d}\frac{\pi^{\frac{d}{2}} C_{\Delta_1}^2C_{\Delta_2}^2}{4(\Delta_1-1)^2 }\frac{1}{(-2P_1\cdot P_3)} D_{\Delta_1-1\, \Delta_2 \, \Delta_1-1\, \Delta_2}(P_i)\ , \end{equation} which gives \begin{equation} \mathcal{A}(z,\bar{z})=g^2 R^{5-d}\frac{\pi^{\frac{d}{2}} C_{\Delta_1}C_{\Delta_2}\Gamma\left(\Delta_1+\Delta_2-1-\frac{d}{2}\right)}{8\Gamma^2(\Delta_1) \Gamma^2(\Delta_2) } \bar{D}_{\Delta_1-1\, \Delta_2 \, \Delta_1-1\, \Delta_2}(u,v)\ .
\end{equation} Using again the result \begin{equation} \bar{D}_{\Delta_1-1\, \Delta_2\, \Delta_1-1\,\Delta_2 }(u,v)\approx 2i \pi^{\frac{3}{2}} \Gamma\left(\Delta_1+\Delta_2-\frac{5}{2}\right) \frac{\sigma (1-\sigma)^{\Delta_1+\Delta_2-3}}{(-\rho^2)^{\Delta_1+\Delta_2-\frac{5}{2}}} \ , \end{equation} we confirm the predicted power of the singularity at $\rho=0$ and obtain \begin{equation} \mathcal{F}(\sigma)=i \frac{\pi^{\frac{d+3}{2}} C_{\Delta_1}C_{\Delta_2}\Gamma\left(\Delta_1+\Delta_2-1-\frac{d}{2}\right) \Gamma\left(\Delta_1+\Delta_2-\frac{5}{2}\right) }{4\Gamma^2(\Delta_1) \Gamma^2(\Delta_2) } \sigma (1-\sigma)^{\Delta_1+\Delta_2-3}\ . \end{equation} The prescription (\ref{result}) then gives \begin{equation} T(s,t)= \frac{g^2}{ -t }\ , \end{equation} which agrees with the expected flat space result. \subsection{Graviton exchange} In \cite{D'Hoker:1999pj,D'Hoker:1999jp} the contribution to the four point function of $\Delta=4$ scalar operators from t-channel graviton exchange in AdS$_5$ was determined. In our conventions, the result reads\footnote{Since in our conventions $\mathcal{A}=1$ corresponds to the disconnected contribution, the normalization can be read off directly from equation (C.11) of \cite{D'Hoker:1999jp}. } \begin{align} \mathcal{A}=&\frac{2 G_5 }{3 \pi R^3} \left[ 45 \bar{D}_{4 4 4 4} -4 \bar{D}_{1 4 1 4} - 20 \bar{D}_{2 4 2 4} - 23 \bar{D}_{3 4 3 4} \right. \\ &\hspace{1cm} \left. + 15\frac{2-z-\bar{z}}{z\bar{z}}\bar{D}_{4 5 4 5}+\frac{2-z-\bar{z}+z\bar{z}}{z\bar{z}} \left(12 \bar{D}_{2 5 2 5} + 20 \bar{D}_{3 5 3 5} \right) \right] \ ,\nonumber \end{align} where $G_5$ is the 5-dimensional Newton constant. The leading singularity as $\rho \to 0$ comes from \begin{equation} \bar{D}_{4 5 4 5 }\left( u,v \right) \approx 2i \pi^{\frac{3}{2}} \Gamma\left(\frac{15}{2} \right) \frac{\sigma (1-\sigma)^{7}}{(-\rho^2)^{\frac{15}{2}}}\ , \end{equation} computed in appendix \ref{Dfunctions}. This gives \begin{equation} \mathcal{A}\approx i G_5 R^{-3} 40 \sqrt{\pi}\, \Gamma\left(\frac{15}{2}\right) \frac{(1-\sigma)^{8}}{\sigma} \frac{1}{(-\rho^2)^{\frac{15}{2}}}\ , \end{equation} which has the predicted form (\ref{limA}). Using the value of the constant \begin{equation} \mathcal{K}=\frac{ \sqrt{\pi} }{5\Gamma\left(\frac{15}{2} \right) } \end{equation} and our main result (\ref{result}), one obtains the matrix element \begin{equation} T(s,t) = 8 \pi G_5 s \,\frac{1-\sigma}{\sigma} = 8 \pi G_5 \frac{ s^2+t s }{-t}\ . \end{equation} This agrees with the matrix element found in \cite{Barker:1966zz} for t-channel graviton exchange between minimally coupled massless scalars. \section{Conclusion and open questions} Since the AdS/CFT correspondence was first proposed, an important open question has been how to ``decode the hologram,'' that is, read off local bulk physics, particularly on scales short as compared to the AdS scale, from the boundary theory. In this paper, we have suggested a partial answer to this question, for certain S-matrix elements. In particular, we have argued that if the boundary CFT has a particular singularity structure, (\ref{limA}), with a characteristic leading behavior at $z={\bar z}$, then this suffices to produce important kinematical structure, in particular the {\it bulk} momentum conserving delta function. Moreover, where the CFT does have such a singularity, the coefficient function of the singularity is expected to provide the reduced transition matrix element, as seen in (\ref{result}).
Moreover, we have seen this construction in operation, in the examples of section \ref{examples}. There, we explicitly found that for certain ``CFT correlators,'' we could indeed reproduce the expected $T$-matrix elements. The reason quotes have been added to this last statement is that the correlators we have considered are, of course, correlators computed from the bulk supergravity, and not derived directly from an actual boundary conformal field theory. This construction thus explains how such information {\it could} be encoded in and extracted from actual CFT correlators. A very important question for the future is whether correlators computed from actual boundary CFTs have the appropriate structure. Thus, the present construction provides an important test for CFTs, which can be used to determine whether they encode properties of a bulk local theory. In some respects this seems a non-trivial test, as it requires a very precise fine-grained structure exhibited in the $\eta\rightarrow\infty$ limit, and so, in correlators, in the $z\rightarrow {\bar z}$ limit, probing very short scales. It will be interesting to see in what cases such structure is produced in bona-fide conformal field theories. While we view this as an important test for CFTs, it is not a complete one. For example, the T-matrix of a bulk theory that is at least approximately local on scales long as compared to the string or Planck scale is expected to have certain other properties, such as characteristic growth at high energies. Moreover, a complete reconstruction of the S-matrix would require that one can recover other S-matrix elements, for example outside the plane-wave limit \cite{MGSG}, and for multi-particle processes. Other related investigations include studying processes with external particles with spin, and examining the structure of loop and string amplitudes. The inclusion of string and loop effects introduces additional parameters in the correlator, namely $\ell_s/R$ and $\ell_{Pl}/R$. We then expect the nature of the $z=\bar{z}$ singularity of the correlator to change as $z-\bar{z}$ becomes smaller relative to these parameters. However, since the singularity has encoded the overall momentum-conserving delta function, one expects aspects of the structure we found in this paper to remain valid for amplitudes at higher orders, or even non-perturbatively. In short, these methods suggest a way that candidate CFTs could be probed for anticipated bulk structures. Given a candidate CFT, one might investigate the behavior of its correlators for $z\approx \bar{z}$ to see whether they have the correct structure to encode various bulk phenomena, such as loop effects, string excitations, and small black holes. \vskip .15in \noindent{\bf Acknowledgements} We wish to thank L. Cornalba, M. Costa, T. Okuda, E. Witten, and especially J. Polchinski for discussions. MG and SBG gratefully acknowledge the kind hospitality of the CERN theory group, where part of this work was carried out. The work of MG and SBG was supported in part by the U.S. Dept. of Energy under Contract DE-FG02-91ER40618, and by grant RFPI-06-18 from the Foundational Questions Institute (fqxi.org). MG is supported by a Marie Curie Early Stage Research Training Fellowship of the European Community's Sixth Framework Programme under contract number MEST-2005-020238-EUROTHEPHY. JP is funded by the FCT fellowship SFRH/BPD/34052/2006, partially by the grant CERN/FP/83508/2008, and supported in part by the National Science Foundation under Grant No. NSFPHY05-51164.
\appendices \section{The transverse delta function} \label{Nbeta} In this section, we verify the formula used in section \ref{CFTconstruction} for the transverse delta function, \begin{equation} {\cal N}_\beta \delta^{n}({\vec \kappa})= \lim_{\eta\rightarrow\infty} \int d\nu e^{-i\nu} {\eta^{2n}\over \left[\eta^4\kappa^2 -(\nu+i\epsilon)^2\right]^\beta} \ . \end{equation} The $i\epsilon$ prescription was obtained from the Wick rotation of AdS global time explained in section \ref{analy}. In particular, we take $\tau \to \tau(1-i\epsilon)$ which gives ${\hat \tau}_2 \to {\hat \tau}_2 +i\epsilon$, ${\hat \tau}_3 \to {\hat \tau}_3 -i\epsilon$ and ${\hat \tau}_4 \to {\hat \tau}_4-i\epsilon$. Equation (\ref{taubar}) then gives the final prescription $\bar{\tau} \to \bar{\tau} +i\epsilon$. First, we note that for $\kappa^2\neq0$, the function vanishes in the limit, as long as $2\beta>n$. Next, let us compute the integral of this expression over $n$-dimensional $\kappa$ space. We begin with \begin{equation} {\cal N}_\beta = \lim_{\eta\rightarrow\infty}\int d^n\kappa \int d\nu e^{-i\nu} {\eta^{2n} \over \left[\eta^4\kappa^2 -(\nu+i\epsilon)^2\right]^\beta}\ . \end{equation} The quantity $\eta$ scales out trivially. Then, we can rewrite \begin{equation} {\cal N}_\beta = \int d^n\kappa \int_0^\infty d\nu \left[ {e^{-i\nu}\over (\kappa^2 -\nu^2 -i\epsilon)^\beta} + c.c.\right] \end{equation} where {\it c.c.} denotes the complex conjugate. The denominator can be exponentiated by the Schwinger trick, to yield \begin{equation} {\cal N}_\beta = {1\over \Gamma(\beta)} \int d^n\kappa \int_0^\infty d\nu e^{-i\nu} \int_0^\infty id\zeta (i\zeta)^{\beta-1} e^{-i\zeta(\kappa^2 -\nu^2 -i\epsilon)} + c.c.\ \end{equation} Then, one does the Gaussian integral over $\kappa$ to find \begin{equation} {\cal N}_\beta = {\pi^{n/2}\over \Gamma(\beta)} \int_0^\infty id\zeta (i\zeta)^{\beta-n/2-1} \int_0^\infty d\nu e^{i\zeta\nu^2 -i\nu -\epsilon\zeta} + c.c.\ \end{equation} We can now rotate $\nu \to e^{-i\frac{\pi}{2}}\nu$ and $\zeta \to e^{i\frac{3\pi}{2}}\zeta$, \begin{eqnarray} \mathcal{N}_\beta &=& -i e^{i\pi (2\beta-n)} \frac{\pi^{\frac{n}{2}}}{\Gamma(\beta)} \int_0^\infty d\nu \int_0^\infty d\zeta \zeta^{\beta-1-\frac{n}{2}} e^{-\zeta \nu^2 - \nu } +c.c.\\ &=& -i e^{i\pi (2\beta-n)} \frac{\pi^{\frac{n}{2}} \Gamma\left(\beta-\frac{n}{2}\right)}{\Gamma(\beta)} \int_0^\infty d\nu \nu^{n-2\beta} e^{ - \nu } +c.c.\\ &=& -i e^{i\pi (2\beta-n)} \frac{\pi^{\frac{n}{2}} \Gamma\left(\beta-\frac{n}{2}\right)\Gamma\left(n+1-2\beta\right)}{\Gamma(\beta)} +c.c.\\ &=& 2\sin\pi (2\beta-n) \frac{\pi^{\frac{n}{2}} \Gamma\left(\beta-\frac{n}{2}\right)\Gamma\left(n+1-2\beta\right)}{\Gamma(\beta)} \\ &=& \frac{2\pi^{\frac{n+2}{2}} \Gamma\left(\beta-\frac{n}{2}\right)}{\Gamma(\beta)\Gamma\left(2\beta-n\right)} \\ &=& \frac{\pi^{\frac{n+3}{2}} }{2^{2\beta-n-2}\Gamma(\beta)\Gamma\left(\beta -\frac{n-1}{2}\right)}\ . \end{eqnarray} \section{D--functions} \label{Dfunctions} \subsection{Basics} The D--functions are defined as integrals over hyperbolic space \cite{D'Hoker:1999pj,Dolan:2000ut}, \begin{equation} D_{\Delta_{i}}^{d}\left( P_{i}\right) =\pi^{-\frac{d}{2}} \int_{H_{d+1}} dX \,{\textstyle\prod\nolimits_{i}} \,\left( -2X\cdot P_{i}\right)^{-\Delta_{i}}\ , \end{equation} where the points $P_{i}$ are future directed null vectors of the embedding space $\mathbb{M}^{d+2}$ of hyperbolic space $H_{d+1}$ and we set $R=1$.
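The final steps above, a reflection-formula step followed by a Legendre duplication step, can be spot-checked numerically. Here is a minimal sketch (purely an illustrative check, assuming the mpmath library; $n$ denotes the transverse dimension, $n=d-2$ in the main text) comparing the last three closed forms at a few generic values of $(n,\beta)$:

```python
import mpmath as mp

# Spot-check the last steps of the N_beta computation: the reflection-
# and duplication-formula manipulations relating the three closed forms.
def form1(beta, n):  # sin/Gamma form; n+1-2*beta must avoid the Gamma poles
    return 2 * mp.sin(mp.pi * (2 * beta - n)) * mp.pi ** (n / 2.0) \
        * mp.gamma(beta - n / 2.0) * mp.gamma(n + 1 - 2 * beta) / mp.gamma(beta)

def form2(beta, n):  # after the reflection formula
    return 2 * mp.pi ** ((n + 2) / 2.0) * mp.gamma(beta - n / 2.0) \
        / (mp.gamma(beta) * mp.gamma(2 * beta - n))

def form3(beta, n):  # after the Legendre duplication formula
    return mp.pi ** ((n + 3) / 2.0) \
        / (2 ** (2 * beta - n - 2) * mp.gamma(beta) * mp.gamma(beta - (n - 1) / 2.0))

for n, beta in [(1, 1.3), (2, 2.7), (3, 3.1)]:
    print(n, beta, form1(beta, n), form2(beta, n), form3(beta, n))
```

All three columns agree to machine precision, as they should.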
Introducing Schwinger parameters, one can derive the following integral representation \begin{align} D_{\Delta_{i}}^{d}\left( P_{i}\right) & = \frac{\Gamma\left( \Delta- \frac{d}{2}\right) }{ {\textstyle\prod\nolimits_{i}} \Gamma\left( \Delta_{i}\right) }\int_0^\infty {\textstyle\prod\nolimits_{i}}\, dt_{i}\,t_{i}^{\Delta_{i}-1}~e^{-\frac{1}{2}\sum_{i,j}t_{i} t_{j}~P_{ij}} \label{Dfunc} \end{align} where $P_{ij}=-2P_{i}\cdot P_{j}\ge 0$ and $\Delta=\frac{1}{2}{\textstyle\sum\nolimits_{i}}\Delta_{i}$. The D--functions are invariant under Lorentz transformations of $\mathbb{M}^{d+2}$ and are homogeneous functions of $P_i$ with weight $-\Delta_i$. Therefore they can be reduced to functions of the invariant cross ratios, \begin{equation} \frac{P_{ij}P_{kl} }{P_{ik} P_{jl}}\ . \end{equation} with $i\neq j\neq k\neq l$. In particular, the four-point function can be written as \begin{equation} D_{\Delta_{i}}^{d}\left( P_{i}\right) = \frac{\Gamma\left( \Delta-\frac{d}{2}\right) }{2 {\textstyle\prod\nolimits_{i}} \Gamma\left( \Delta_{i}\right) } \frac{ \left(\frac{ P_{14}}{ P_{13}P_{34}}\right)^{\frac{\Delta_3-\Delta_1}{2}} \left(\frac{ P_{13}}{ P_{14}P_{34}}\right)^{\frac{\Delta_4-\Delta_2}{2}}}{ P_{13}^{\Delta_1} P_{24}^{\Delta_2} } \bar{D}_{\Delta_{i}}\left( u,v \right) \label{DtoDbar} \end{equation} where $\bar{D}_{\Delta_{i}}$ is a function of the conformally invariant cross ratios \begin{equation} \label{uvdef} u=\frac{ P_{12}P_{34}}{P_{13} P_{24}}=\frac{1}{z\bar{z}}\ ,\hspace{15pt} v=\frac{P_{14} P_{23}}{P_{13} P_{24}}=\frac{(1-z)(1-\bar{z})}{z\bar{z}}\ . \end{equation} This function satisfies the following relations \cite{Dolan:2000ut} \begin{align} \bar{D}_{\Delta_1\, \Delta_2\, \Delta_3\,\Delta_4 }\left( u,v \right) &= -\partial_u \bar{D}_{\Delta_1-1\, \Delta_2-1\,\Delta_3\,\Delta_4}\left( u,v \right) \nonumber \\ &= -\partial_v \bar{D}_{\Delta_1\, \Delta_2-1\,\Delta_3-1\,\Delta_4}\left( u,v \right) \nonumber \\ &= \bar{D}_{\Delta_3\, \Delta_2\,\Delta_1\,\Delta_4}\left( v,u \right) \label{recrel} \\ &= u^{\Delta_3+ \Delta_4- \Delta } \bar{D}_{\Delta_4\, \Delta_3\,\Delta_2\,\Delta_1}\left( u,v \right) \nonumber \\ &= v^{\Delta_4- \Delta } \bar{D}_{\Delta_2\, \Delta_1\,\Delta_3\,\Delta_4}\left( u/v,1/v \right) \nonumber \end{align} Finally, we recall \cite{Dolan:2000uw,Dolan:2000ut} that the function $\bar{D}_{1111}$ can be written explicitly as \begin{eqnarray} \bar{D}_{1111} &=&\frac{ z\bar{z}}{ z -\bar{z}}\left[2 {\rm Li_2}(z) -2 {\rm Li_2}(\bar{z}) +\log(z\bar{z}) \log\frac{1-z}{1-\bar{z}}\right]\ . \label{expD1111} \end{eqnarray} \subsection{Singular limit} We shall now consider the singular limit, $z \to \bar{z}$, of some D-functions which we used in the main text. We start by studying the function $\bar{D}_{1111}$. From the explicit expression (\ref{expD1111}) it is clear that $\bar{D}_{1111}$ is regular when $z \to \bar{z}$. This happens because the expression in square brackets in (\ref{expD1111}) vanishes as $z \to \bar{z}$ and cancels the explicit pole in front. However, after the analytic continuation of figure \ref{Wickrotation} the function $\bar{D}_{1111}$ has a real singularity.
To see this, let us place all the branch cuts of the expression in square brackets in (\ref{expD1111}) along the positive real axis, \begin{equation} 2 {\rm Li_2}(z) -2 {\rm Li_2}(\bar{z}) +\left(\log(-z)+\log(-\bar{z})\right)\left( \log(1-z)-\log(1-\bar{z})\right)\ .\label{expsqbra} \end{equation} Under the analytic continuation of figure \ref{Wickrotation}, $z$ does not cross any branch cut and $\bar{z}$ crosses all branch cuts, yielding the discontinuities \begin{align} \log(-\bar{z}) &\to \log(-\bar{z}) +2\pi i\\ \log(1-\bar{z}) &\to \log(1-\bar{z}) +2\pi i\\ {\rm Li_2}(\bar{z}) &\to {\rm Li_2}(\bar{z}) -2\pi i \log(\bar{z}) \end{align} This turns (\ref{expsqbra}) into \begin{eqnarray} &&2 {\rm Li_2}(z) -2 {\rm Li_2}(\bar{z}) + 4\pi i \log(\bar{z}) \label{expsqbralor} \\ &+&\left(\log(-z)+\log(-\bar{z})+2\pi i \right)\left( \log(1-z)-\log(1-\bar{z}) -2\pi i\right) \nonumber \end{eqnarray} Following figure \ref{Wickrotation} we now take the $\rho \to 0$ limit in the form \begin{equation} z\to \sigma e^{-\rho} -i\epsilon \ , \ \ \ \ \ \ \ \ \ \ \bar{z}\to \sigma e^{\rho} +i\epsilon \ . \end{equation} This drastically simplifies (\ref{expsqbralor}) to $4\pi^2$ and gives the small $\rho$ behavior \begin{equation} \bar{D}_{1111} \approx -\frac{2\pi^2 \sigma}{\rho}\ . \end{equation} It is now very easy to determine the small $\rho$ behavior of other D--functions with positive and integer $\Delta$ and $\Delta_i$. We just need to use the recursion relations (\ref{recrel}) and \begin{align} \partial_u&=\frac{ z\bar{z}}{ z -\bar{z}}\left[ z(1-z)\partial_z -\bar{z}(1-\bar{z})\partial_{\bar{z}}\right]\\ &=-\frac{\sigma^3}{2}\partial_\sigma +\frac{\sigma(1-\sigma \cosh \rho)}{2 \sinh\rho} \partial_\rho \approx \frac{\sigma(1-\sigma)}{2 \rho} \partial_\rho\ ,\\ \partial_v&=\frac{ z\bar{z}}{ z -\bar{z}}\left[\bar{z} \partial_{\bar{z}} - z\partial_z \right] =-\frac{\sigma}{2 \sinh\rho} \partial_\rho \approx - \frac{\sigma}{2 \rho} \partial_\rho\ . \end{align} For example, \begin{align} \bar{D}_{4 5 4 5 }\left( u,v \right) &= (-\partial_u)^3 \bar{D}_{1 2 4 5 }\left( u,v \right)\\ &= (-\partial_u)^3 u^3 \bar{D}_{5 4 2 1 }\left( u,v \right)\\ &= (-\partial_u)^3 u^3 (-\partial_u)^3 \bar{D}_{2 1 2 1 }\left( u,v \right)\\ &= (-\partial_u)^3 u^3 (-\partial_u)^3 v^{-2} \bar{D}_{1 2 2 1 }\left( u/v,1/v \right)\\ &= (-\partial_u)^3 u^3 (-\partial_u)^3 v^{-2} \left[ -\partial_v \bar{D}_{1 1 1 1 }\left( u,v \right) \right]_{u\to \frac{ u}{v}, v\to \frac{1}{v}}\ , \end{align} which in the singular limit reduces to \begin{align} \bar{D}_{4 5 4 5 }&\approx (1-\sigma)^2\sigma^4 \left(-\frac{1}{2 \rho} \partial_\rho\right)^6 \left[ -\frac{\sigma}{2 \rho} \partial_\rho \frac{2\pi^2 \sigma}{\rho} \right]_{\sigma \to1-\sigma, \rho^2 \to \frac{\sigma^2 \rho^2}{(\sigma-1)^2}} \\ & \approx \pi^2 (1-\sigma)^7\sigma \left(-\frac{1}{2 \rho} \partial_\rho\right)^6 \frac{ 1}{(\rho^2)^{\frac{3}{2}}} \\ & \approx 2 \pi^{\frac{3}{2}} \Gamma\left(\frac{15}{2}\right) \frac{ (1-\sigma)^7\sigma}{(\rho^2)^{\frac{15}{2}}} \ . \end{align} The recursion relations (\ref{recrel}) either preserve $\Delta$ or increase it by one unit when taking derivatives $\partial_u$ or $\partial_v$. On the other hand, both $\partial_u$ and $\partial_v$ contribute a factor of $\rho^{-2}$ to the singularity at $\rho=0$. Therefore, a D--function with positive and integer $\Delta$ and $\Delta_i$ has the following small $\rho$ behavior, \begin{equation} \bar{D}_{\Delta_1\, \Delta_2\, \Delta_3\,\Delta_4 } \sim \rho^{3-2\Delta}\ . 
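The limiting value $4\pi^2$ of (\ref{expsqbralor}) and the resulting behavior of $\bar{D}_{1111}$ can be checked directly. The sketch below (an illustration only, assuming the mpmath library; the values of $\sigma$, $\rho$ and $\epsilon$ are arbitrary small-parameter choices) evaluates the continued bracket on the principal branches and compares $\frac{z\bar z}{z-\bar z}$ times it with $-2\pi^2\sigma/\rho$:

```python
import mpmath as mp

# Numerically check that the continued bracket (expsqbralor) -> 4*pi^2 and
# that Dbar_1111 ~ -2*pi^2*sigma/rho for small rho, with
# z = sigma*exp(-rho) - i*eps and zbar = sigma*exp(rho) + i*eps.
mp.mp.dps = 30
sigma, rho, eps = mp.mpf("0.3"), mp.mpf("1e-4"), mp.mpf("1e-20")
z = sigma * mp.exp(-rho) - 1j * eps
zb = sigma * mp.exp(rho) + 1j * eps

bracket = (2 * mp.polylog(2, z) - 2 * mp.polylog(2, zb) + 4j * mp.pi * mp.log(zb)
           + (mp.log(-z) + mp.log(-zb) + 2j * mp.pi)
           * (mp.log(1 - z) - mp.log(1 - zb) - 2j * mp.pi))
D1111 = z * zb / (z - zb) * bracket

print(bracket)                        # approx 4*pi^2 = 39.478...
print(D1111)                          # approx -2*pi^2*sigma/rho
print(-2 * mp.pi ** 2 * sigma / rho)  # = -59217.6...
```

Decreasing $\rho$ improves the agreement, consistent with the corrections being of relative order $\rho$.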
\end{equation} Another particular example is \begin{align} \bar{D}_{\Delta_1\, \Delta_1\, \Delta_2\,\Delta_2 } &= (-\partial_u)^{\Delta_1-1} u^{\Delta_2-1} (-\partial_u)^{\Delta_2-1} \bar{D}_{1 1 1 1 }\left( u,v \right)\\ &\approx -2 \pi^{\frac{3}{2}} \Gamma\left(\Delta_1+\Delta_2-\frac{3}{2}\right) \frac{\sigma^{\Delta_1-\Delta_2+1} (1-\sigma)^{\Delta_1+\Delta_2-2}}{(\rho^2)^{\Delta_1+\Delta_2-\frac{3}{2}}} \end{align} for integer $\Delta_1$ and $\Delta_2$. Using the relation \begin{equation} \bar{D}_{\Delta_1\, \Delta_2\, \Delta_1\,\Delta_2 }(u,v)= v^{-\Delta_1} \bar{D}_{\Delta_1\, \Delta_1\, \Delta_2\,\Delta_2 }(1/v,u/v)\ , \end{equation} we find \begin{equation} \bar{D}_{\Delta_1\, \Delta_2\, \Delta_1\,\Delta_2 }(u,v)\approx -2 \pi^{\frac{3}{2}} \Gamma\left(\Delta_1+\Delta_2-\frac{3}{2}\right) \frac{\sigma (1-\sigma)^{\Delta_1+\Delta_2-2}}{(\rho^2)^{\Delta_1+\Delta_2-\frac{3}{2}}} \ .
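The general formula above can also be checked symbolically, at leading order in the singularity, by iterating the asymptotic operator $-\partial_u \approx -\frac{\sigma(1-\sigma)}{2\rho}\partial_\rho$ on the small-$\rho$ form of $\bar{D}_{1111}$ and using $u=1/(z\bar z)=\sigma^{-2}$; the $\partial_\sigma$ piece of the exact $\partial_u$ only contributes subleading powers of $\rho$ and is dropped. A short sketch (assuming the sympy library, and shown here for the illustrative choice $\Delta_1=3$, $\Delta_2=2$):

```python
import sympy as sp

# Leading-order check of the small-rho formula for Dbar_{D1 D1 D2 D2}:
# iterate the asymptotic operator -d/du ~ -(sigma*(1-sigma)/(2*rho)) d/drho
# on Dbar_1111 ~ -2*pi^2*sigma/rho, with u ~ sigma**(-2) at leading order.
sigma, rho = sp.symbols("sigma rho", positive=True)

def op(f):  # asymptotic form of -d/du, keeping only the leading singularity
    return -sigma * (1 - sigma) / (2 * rho) * sp.diff(f, rho)

def dbar_asym(D1, D2):
    f = -2 * sp.pi**2 * sigma / rho              # Dbar_1111 small-rho limit
    for _ in range(D2 - 1):
        f = op(f)
    f = sp.expand(sigma ** (-2 * (D2 - 1)) * f)  # u**(D2-1) ~ sigma**(-2(D2-1))
    for _ in range(D1 - 1):
        f = op(f)
    return sp.simplify(f)

D1, D2 = 3, 2
claimed = sp.simplify(-2 * sp.pi**sp.Rational(3, 2)
                      * sp.gamma(D1 + D2 - sp.Rational(3, 2))
                      * sigma**(D1 - D2 + 1) * (1 - sigma)**(D1 + D2 - 2)
                      / rho**(2 * (D1 + D2) - 3))
print(sp.simplify(dbar_asym(D1, D2) - claimed))  # -> 0
```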
\section{Introduction} The $q$-entropy has been one of the simplest in functional form, and probably the most studied, nonadditive entropic functional in the Statistical Physics literature during the last thirty years. Initially it was introduced in Information Theory and Statistics, and after it was re-discovered by C. Tsallis \cite{T1}, it has been developed and brought to prominence ever since, for its potential applications to Statistical Physics, as well as to a variety of other fields not necessarily related to Physics at all \cite{T-book}. Despite the potentially wide range of its applicability, the dynamical foundations of the $q$-entropy remain quite obscure to this date.\\ If one is convinced that phase space hyperbolicity \cite{BP}, in the sense of Dynamical Systems \cite{KH}, lies at the heart of the success of the Boltzmann/Gibbs/Shannon (BGS) entropic functional, a natural question arises as to what is the corresponding dynamical origin of the $q$-entropy. An answer to this question has proved to be elusive, so far. An easier to address, and somewhat related, question has to do with the validity, or even existence, of the $q$-entropy for systems described by variables taking values in a continuous set, as opposed to discrete sets of variables. Some controversy has arisen related to this issue during the last decade \cite{Abe1, Andresen, Abe2, BOT, LB, BL, PR, OB1, OA, OB2}. We presented a possible resolution in \cite{CK}, among other recent proposals. The present paper can be considered as a continuation along the lines of these works \cite{Abe1, Andresen, Abe2, BOT, LB, BL, PR, OB1, OA, OB2, CK}. \\ The present work relies very heavily on, and provides some physical context and interpretation of, the results of \cite{LYZ0}. The results of \cite{LYZ0} can be seen in the wider context of landmark results of the dual $L_p-$ Brunn-Minkowski theory and the associated star-shaped bodies, which were introduced by E. Lutwak \cite{Lut1, Lut2, Lut3, Lut4}, and further developed jointly by E. Lutwak, D. Yang, G. Zhang \cite{LYZ1, LYZ2, LYZ3, LYZ4}, and collaborators \cite{HLYZ}.\\ In Section 2, we present the $L_p$ relative entropic functional of Lutwak-Yang-Zhang (LYZ) and some of its properties. We claim this functional may contain a good candidate for the sought-after relative $q$-entropy for continuous systems and explain the reasons why. In Section 3, we present the generalized Gaussians which are the extremizing distributions of the LYZ functional and some of their properties. In Section 4, we comment on the Fisher information, its generalization, and their potential physical implications. In Section 5, we present a more general speculation about the dynamical foundations of the $q$-entropy via coarse-graining, the potential importance of the duality among $L_p-$ Brunn-Minkowski theories in Convex Geometry for $q$-entropies, and their conjectured possible invariance under changes of the non-extensive parameter \ $q$.\\ \section{Some relative entropy functionals} To set up the notation, we consider the Boltzmann/Gibbs/Shannon (BGS) functional form for the entropy of discrete outcomes \ $i \in I$ \ with corresponding probabilities \ $p_i$ \ to be \begin{equation} \mathcal{S}_{BGS}[\{p_i\}] \ = \ - k_B \sum_{i\in I} \ p_i \log ( p_i ) \end{equation} where \ $k_B$ \ is Boltzmann's constant.
The obvious extrapolation of the BGS entropy to continuous distributions is \begin{equation} \mathcal{S}_{BGS}[\rho ] \ = \ - k_B \int_X \rho(x) \log(\rho(x)) \ dvol_X \end{equation} where \begin{equation} \rho(x)\ = \ \frac{d\mu}{dvol_X} \end{equation} is the Radon-Nikodym derivative of a chosen measure \ $\mu$, \ usually resulting after a process of coarse-graining, with respect to the volume measure \ $dvol_X$ \ of the ``phase space" \ $X$, \ which is usually a Riemannian manifold or more generally, a metric measure space. In the latter case, we assume that \ $X$ \ is initially endowed with a reference measure \ $\nu$ \ and that the effective/coarse-grained measure \ $\mu$ \ is absolutely continuous with respect to\ $\nu$, \ so \ $\rho$ \ exists $\nu$-almost everywhere on \ $X$. \\ Given two distributions \ $\rho_1, \rho_2: X \rightarrow [0,1]$, with respect to a reference measure \ $d\nu$, which may or may not be the volume of \ $X$, \ their relative BGS entropy, or Kullback-Leibler divergence \ $\mathcal{D}_1 [\rho_1|\!|\rho_2]$ \ is defined \cite{CT} as \begin{equation} \mathcal{D}_1 [\rho_1|\!| \rho_2] \ = \ \int_X \rho_1(x) \ \log \left( \frac{\rho_1(x)}{\rho_2(x)} \right) \ d\nu \end{equation} where we have arranged the units so that \ $k_B=1$ \ for brevity. As is well-known, the Kullback-Leibler divergence \ $\mathcal{D}_1[\rho_1|\!|\rho_2]$, \ even though it is not a metric, provides a way of measuring the discrepancy/difference between the densities \ $\rho_1$ \ and \ $\rho_2$. \ Interpreting the BGS entropy in a relative context, as a version of the Kullback-Leibler divergence, solves the issue of the lack of the diffeomorphism (reparametrization) invariance of \ $\mathcal{S}_{BGS}$, \ hence it allows the BGS entropy for continuous systems to potentially have physical meaning, from a formal viewpoint, something that is ultimately positively confirmed by its experimentally tested predictions.\\ With the above notation, the R\'{e}nyi entropy of order \ $\alpha\geq 0, \ \alpha \neq 1$ \ is defined for a discrete set of outcomes as \begin{equation} \widetilde{\mathcal{S}}_\alpha [\{p_i\}] \ = \ \frac{1}{1-\alpha} \log \left( \sum_{i\in I} p_i^\alpha \right) \end{equation} One can readily check that \begin{equation} \lim_{\alpha\rightarrow 1} \widetilde{\mathcal{S}}_\alpha = \mathcal{S}_{BGS} \end{equation} For continuous probability distributions, with the above notation, the naive extension of the R\'{e}nyi entropy is \begin{equation} \widetilde{\mathcal{S}}_\alpha [\rho] \ = \ \frac{1}{1-\alpha} \log \left( \int_X [\rho(x)]^\alpha \ d\nu \right) \end{equation} and the relative R\'{e}nyi entropy, in the continuous case, is \begin{equation} \widetilde{\mathcal{D}}_\alpha [\rho_1|\!|\rho_2] \ = \ \frac{1}{\alpha -1} \log \left( \int_X \rho_1(x) \left[\frac{\rho_1(x)}{\rho_2(x)}\right]^{\alpha -1} \ d\nu \right) \end{equation} in analogy with the Kullback-Leibler divergence (4).\\ The $q$-entropy was initially introduced in \cite{HC, Vaj, Dar}, was more recently re-discovered in Statistical Physics and was proposed as an appropriate entropy for physical systems for which the BGS entropy may not be applicable in \cite{T1}.
It is given, for discrete outcomes, by \begin{equation} \mathcal{S}_q [\{p_i \}] \ = \ \frac{1}{q-1} \left(1- \sum_{i\in I}p_i^q \right) \end{equation} One can easily verify that \begin{equation} \lim_{q\rightarrow 1} \mathcal{S}_q \ = \ \mathcal{S}_{BGS} \end{equation} For continuous probabilities with density \ $\rho$ \ on a metric measure space \ $X$ \ with reference measure \ $d\nu$, \ the naive extension of (9) is \begin{equation} \mathcal{S}_q [\rho ] \ = \ \frac{1}{q-1} \left(1- \int_X [\rho(x)]^q \ d\nu \right) \end{equation} Initially \cite{T1} it was assumed that the entropic/non-extensive parameter \ $q\in\mathbb{R}$, \ with a later proposal \cite{WW} for extending its domain to \ $q\in\mathbb{C}$. \ A careful treatment of the possible values of \ $q$ \ that allow functional invertibility, as well as other desirable properties of the equilibrium distributions, called ``$q$-exponentials", resulting from a variational optimization of \ $\mathcal{S}_q$, \ which effectively encode the conjectured physical behaviors described by the $q$-entropic functional, \ was presented in \cite{OB10}. \\ In all of the above expressions the reference measure is taken to be the volume \ $dvol_X$ \ of \ $X$.\ The controversy regarding the suitability, or possibility of extending the $q$-entropy for continuous systems \cite{Abe1, Andresen, Abe2, BOT, LB, BL, PR, OB1, OA, OB2}, is related to the relative version of (11) which, following (4),(8) is taken, by most authors working on this issue, to be \begin{equation} \mathcal{D}_q [\rho_1|\!|\rho_2] \ = \ \frac{1}{q-1} \left(1 - \ \int_X \left[ \frac{\rho_1(x)}{\rho_2(x)} \right]^q \ dvol_X \right) \end{equation} A somewhat different approach to such a relative $q$-entropy was proposed in \cite{CK} based on generalized operations induced by \ $\mathcal{S}_q$. \ However, even the approach of \cite{CK} essentially relies on the functional form (12). \\ To move forward, we observe that in the R\'{e}nyi entropy \ $\widetilde{\mathcal{S}}_\alpha$ \ there are two parts of interest. One of them is its logarithmic behavior, something that distinguishes it from the $q$-entropy and bears a strong similarity to the functional form of the BGS entropy \ $\mathcal{S}_{BGS}$. \ A second point is the existence of the ``bias parameter" \ $\alpha$ \ entering in a power-law manner, strongly resembling the form of the $q$-entropy \ $\mathcal{S}_q$. \ We can disentangle these two behaviors by re-writing (7) as \begin{equation} \widetilde{\mathcal{S}}_\alpha [\rho] \ = \ \log \left(\int_X [\rho (x)]^\alpha \ dvol_X \right)^\frac{1}{1-\alpha} \end{equation} and (8) as \begin{equation} \widetilde{\mathcal{D}}_\alpha [\rho_1|\!|\rho_2] \ = \ \log \left( \int_X \rho_1(x) \left[\frac{\rho_1(x)}{\rho_2(x)}\right]^{\alpha -1}\ dvol_X \right)^\frac{1}{\alpha -1} \end{equation} The resemblance between the functional forms of (11), (13) is obvious. What separates these two functional forms is the presence of the logarithm in (13), and the additive unit in (11). The latter may be important for proper convexity and for normalization purposes of the entropic functional, but it is far less important than the term next to it in (11). Other than that, the two parentheses in (11), (13) contain the same functional form. For this reason, in the sequel, we will deal with this common term of the R\'{e}nyi and the $q$-entropies, in an attempt to find an expression for the most important part of the relative $q$-entropy, along the lines of a functional resembling the content of the parentheses of (14).
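Both limits (6) and (10) are elementary to verify numerically for a discrete distribution. The following minimal sketch (an illustration only, assuming the numpy library; the four-outcome distribution is an arbitrary choice) evaluates the BGS, R\'{e}nyi and $q$-entropies, with $k_B=1$, as the parameter approaches $1$:

```python
import numpy as np

# Check numerically that both the Renyi entropy (5) and the q-entropy (9)
# reduce to the BGS entropy (1) (with k_B = 1) as alpha, q -> 1.
p = np.array([0.5, 0.25, 0.125, 0.125])

def S_BGS(p):
    return -np.sum(p * np.log(p))

def S_renyi(p, a):
    return np.log(np.sum(p ** a)) / (1 - a)

def S_q(p, q):
    return (1 - np.sum(p ** q)) / (q - 1)

print(S_BGS(p))                      # 1.2130... (= 1.75 * log 2)
for eps in [1e-2, 1e-4, 1e-6]:
    print(S_renyi(p, 1 + eps), S_q(p, 1 + eps))
```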
\\ To proceed, we will recall the definition of the $p$-norm of a function \ $f: X\rightarrow \mathbb{R}$ \ on the measure space \ $(X, vol_X)$ \ \begin{equation} \Vert f \Vert_p \ = \ \left( \int_X |f(x)|^p \ dvol_X \right)^\frac{1}{p}, \ \ \ p\geq 1 \end{equation} For \ $0<p<1$ \ this is only a quasi-norm since the triangle inequality is not obeyed, but this will not be an impediment in any of the arguments that follow. Moreover, one defines \begin{equation} \Vert f \Vert_\infty \ = \ \mathrm{ess}\sup \{ |f(x)|, \ x \in X \} \end{equation} where \ ess sup \ stands for the essential supremum of the function \ $f$. \ Given this standard notation, one can modify the definition inside the parentheses of (14), by normalizing \ $\rho_1, \ \rho_2$ in an \ $L_p$ \ rather than \ $L_1$ \ sense. \\ We will confine ourselves to dealing with functions having as domain the set of reals \ $\mathbb{R}$, \ for simplicity. In a general analytic context this is clearly a very strong simplification. However, when it comes to the convexity properties of interest in this work, and due to the localization technique (``needle decomposition") \cite{PW, GroMil, KLS, Klartag}, especially for a Riemannian space with a Ricci curvature uniformly bounded from below, considering a foliation by geodesics of the underlying space amounts to essentially reducing convexity arguments from \ $X$ \ down to \ $\mathbb{R}$. \ Hence, considering aspects of the convex behavior of an entropy functional for functions defined over \ $\mathbb{R}$ \ instead of over \ $X$ \ may not be such a huge loss of generality as it may appear at first glance.\\ Taking into account the considerations of the above paragraph, one could propose that instead of the argument of the logarithm of (14), one can consider \begin{equation} \mathcal{N}_\lambda [\rho_1 \Vert \rho_2 ] \ = \ \left\{ \int_\mathbb{R} \ [\rho_1(x)]^\lambda \left[ \frac{\rho_1(x)}{\rho_2(x)} \right]^{1-\lambda} dx \right\}^\frac{1}{1-\lambda} \frac{\Vert \rho_2\Vert_\lambda}{\left( \Vert \rho_1\Vert_\lambda\right)^\frac{1}{1-\lambda}} \end{equation} where the parameter \ $\lambda$ \ is employed, instead of \ $\alpha$ \ as in (14), in order to align our notation with that of \cite{LYZ0}. \ A major difference between (14) and (17) is that \ $\rho_1(x)$ \ is linear in the first factor of the integrand of (14), whereas it has a power-law dependence (raised to the power $\lambda$) in (17).
Writing explicitly the norms of the functions, we get the LYZ functional for the relative entropy \begin{equation} \mathcal{N}_\lambda [\rho_1\Vert \rho_2] \ = \ \frac{\left( \int_\mathbb{R} \rho_1(x) [\rho_2(x)]^{\lambda-1} dx \right)^\frac{1}{1-\lambda} \left( \int_\mathbb{R} [\rho_2(x)]^\lambda dx \right)^\frac{1}{\lambda}}{\left(\int_\mathbb{R} [\rho_1(x)]^\lambda dx \right)^\frac{1}{\lambda(1-\lambda)}}, \hspace{10mm} \lambda \neq 1 \end{equation} and \begin{equation} \mathcal{N}_\lambda [\rho_1 \Vert \rho_2] \ = \ \exp(\mathcal{D}_1 [\rho_1\Vert \rho_2]), \hspace{10mm} \lambda = 1 \end{equation} In (17), (18) the measure of integration is the Lebesgue measure of \ $\mathbb{R}$.\ One can define the \ $L_\lambda$ normalized relative R\'{e}nyi entropy by \begin{equation} \mathcal{R}_\lambda [\rho_1\Vert \rho_2] \ = \ \log \mathcal{N}_\lambda [\rho_1\Vert \rho_2] \end{equation} It should be noticed that (20) is different from the argument of the logarithm of (14), so \ $\mathcal{R}_\lambda$ \ is not a simple variation of the conventionally defined relative R\'{e}nyi entropy \ $\widetilde{\mathcal{D}}_\alpha$ \ (14), even up to a logarithm, but a totally different functional altogether.\\ In a similar spirit, the relative $q$-entropy can be inferred from \ $\mathcal{N}_\lambda $ \ (17) by omitting the overall exponent \ $1/(1-\lambda )$ \ in the first factor of (17) and normalizing the probability distributions in an \ $L_q$ \ sense: \begin{equation} \mathcal{T}_q [\rho_1\Vert \rho_2] \ = \ \frac{1}{q-1} \left\{ 1 - \ \left( \int_\mathbb{R} \ [\rho_1(x)]^{2-q} \left[ \frac{\rho_1(x)}{\rho_2(x)} \right]^{q-1} dx \right) \frac{(\Vert \rho_2\Vert_q)^{q-1}}{ \Vert \rho_1\Vert_q} \right\} \end{equation} which, in turn, gives \begin{equation} \mathcal{T}_q [\rho_1\Vert \rho_2] \ = \ \frac{1}{q-1} \left\{ 1- \frac{\left( \int_\mathbb{R} \rho_1(x) [\rho_2(x)]^{1-q} dx \right) \left( \int_\mathbb{R} [\rho_2(x)]^q dx \right)^\frac{q-1}{q}}{\left(\int_\mathbb{R} [\rho_1(x)]^q dx \right)^\frac{1}{q}} \right\} \end{equation} What we have done in order to formulate (17), hence (20), and in the same spirit (21), (22) is to effectively renormalize the initial probability distributions \ $\rho_1, \ \rho_2$ \ in an \ $L_q$ \ sense, to the effective probabilities \begin{equation} \overline{\rho}_i \ = \ \frac{\rho_i}{\Vert \rho_i \Vert_q}, \hspace{10mm} i=1,2 \end{equation} This is an occasion where the effective probability distributions \ $\overline{\rho}$ \ are ``escort distributions" \cite{T-book}, whose appearance is rather mysterious, on dynamical grounds at least, and somewhat controversial even today \cite{BOB}. \\ One can recover the discrete form of the $q$-entropy (9), from (21) as follows: as a first step, consider the ``reference probability" distribution \ $\rho_2(x)$ \ to be the uniform one on the compact subset of \ $\mathbb{R}$ \ we are dealing with. If the support of such a probability distribution is all of \ $\mathbb{R}$ \ then one has to implement a regularization procedure by confining oneself to probabilities having compact support (``putting the system in a box") and then taking a weak limit, as is frequently done in Quantum Physics, for instance. This way (21) reduces to (11). As a second step, choose a discrete subset of \ $\mathbb{R}$ \ as the support of the probability distribution \ $\rho_1(x)$, \ which then becomes essentially a sum of Dirac delta functions, up to normalization, and one recovers (9).
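As a consistency check, the passage from (21) to (22) amounts to the algebraic identity $[\rho_1]^{2-q}[\rho_1/\rho_2]^{q-1} = \rho_1 \rho_2^{1-q}$, and the two forms can also be compared by direct quadrature. A minimal sketch (an illustration only, assuming the scipy library; the two Gaussian densities and the value of $q$ are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

# Check numerically that the two forms (21) and (22) of the relative
# q-entropy T_q agree, for two sample densities on the real line.
q = 1.7
r1 = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # N(0,1)
r2 = lambda x: np.exp(-x**2 / 8) / np.sqrt(8 * np.pi)   # N(0,4)

I = lambda f: quad(f, -np.inf, np.inf)[0]
norm_q = lambda r: I(lambda x: r(x) ** q) ** (1 / q)    # L_q norm (15)

# form (21)
T21 = (1 - I(lambda x: r1(x) ** (2 - q) * (r1(x) / r2(x)) ** (q - 1))
       * norm_q(r2) ** (q - 1) / norm_q(r1)) / (q - 1)
# form (22)
T22 = (1 - I(lambda x: r1(x) * r2(x) ** (1 - q))
       * I(lambda x: r2(x) ** q) ** ((q - 1) / q)
       / I(lambda x: r1(x) ** q) ** (1 / q)) / (q - 1)
print(T21, T22)   # the two values coincide to quadrature accuracy
```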
It is evident, as is true most of the time, that the transition from the discrete to the continuous case and vice versa is not unique, but that this process involves some judicious choices usually dictated by additional physical input and consistency requirements in taking the appropriate limits. \\ The message that one should take from the convex geometric considerations in this work seems to be: we do really have to use the renormalized ``escort distributions" rather than the ``usual" probability distributions, if we want our results to be more ``natural" and compatible with Convex Geometric considerations, and also with the viewpoint of Information Theory. We believe that a major theoretical challenge is to find the dynamical reasons for the appearance of these ``escort distributions" starting from the Lagrangian or the Hamiltonian, ``microscopic", description of the systems of many degrees of freedom under consideration. We will consider a local, covariant, geometric quantity that may describe such a dynamical behavior in a near-future work \cite{NK8}. \\ \section{Generalized Gaussians as entropy maximizers} Given that the functional (17), (18) is the core of equations (20) and (21), we now turn our attention to describing its extremal distributions and some key inequalities that it satisfies, following \cite{LYZ0}. To begin with, one defines the $p$-th moment of the probability \ $\rho: \mathbb{R} \rightarrow [0,1]$ \ as \begin{equation} \mathfrak{m}_p [\rho] \ = \ \int_\mathbb{R} |x|^p \rho(x) \ dx, \hspace{10mm} p\in (0, +\infty ) \end{equation} as long as this integral exists. The $p$-th deviation \ $\mathfrak{s}_p$, for \ $p\in [0,+\infty]$, \ is defined as \begin{equation} \mathfrak{s}_p [\rho ] \ = \ \left\{ \begin{array}{ll} \left( \mathfrak{m}_p[\rho] \right)^\frac{1}{p} & \mathrm{if} \ \ p\in (0, +\infty) \\ & \\ \exp \left( \int_\mathbb{R} \rho(x) \log|x| \ dx\right) & \mathrm{if} \ \ p=0\\ & \\ \mathrm{ess}\sup\{|x|: \rho(x)>0 \} & \mathrm{if} \ \ p=+\infty \end{array} \right. \end{equation} under the assumption that the above expressions exist and are finite. Given the symbol \begin{equation} x_+ \ = \ \max\{x, 0\}, \hspace{10mm} x\in\mathbb{R} \end{equation} recalling the definition of Euler's Gamma function \begin{equation} \Gamma (x) \ = \ \int_0^\infty z^{x-1}e^{-z} \ dz \end{equation} and given the Beta function \ $B(x,y)$ \ \begin{equation} B(x,y) \ = \ \int_0^1 z^{x-1}(1-z)^{y-1} \ dz \end{equation} for \ $x>0, \ y>0$, \ one readily finds a relation between the Gamma and the Beta functions \begin{equation} B(x,y) \ = \ \frac{\Gamma(x) \ \Gamma(y)}{\Gamma(x+y)} \end{equation} With this notation, the definitions of the generalized Gaussians \ $\mathcal{G}: \mathbb{R} \rightarrow [0, +\infty)$ \ in an \ $L_p$ \ sense, for \ $p\in [0, +\infty]$ \ and for \ $a>1-p$, \ are as follows: \begin{equation} \mathcal{G}(x) \ = \ \left\{ \begin{array}{ll} c_{p,a} \left(1+(1-a)|x|^p\right)_+ ^\frac{1}{a-1} & \mathrm{if} \ \ a\neq 1\\ & \\ c_{p,1} \ \exp (-|x|^p) & \mathrm{if} \ \ a=1 \end{array} \right. \end{equation} for \ $p\in (0, +\infty)$, \ with the normalization constant \ $c_{p,a}$ \ straightforwardly calculated to be \begin{equation} c_{p,a} \ = \ \left\{ \begin{array}{ll} \frac{p(a-1)^\frac{1}{p}}{2 B(\frac{1}{p}, \frac{a}{a-1})} & \mathrm{if} \ \ a>1\\ & \\ \frac{p}{2\Gamma (\frac{1}{p})} & \mathrm{if} \ \ a=1\\ & \\ \frac{p(1-a)^\frac{1}{p}}{2 B(\frac{1}{p}, \frac{1}{1-a}-\frac{1}{p})} & \mathrm{if} \ \ a<1 \end{array} \right.
\end{equation} For \ $p=0$ \ and \ $a>1$, \ the definition is \begin{equation} \mathcal{G}(x) \ = \ c_{0,a} (-\log |x|)_+ ^\frac{1}{a-1} \end{equation} for almost all \ $x\in\mathbb{R}$, \ with \begin{equation} c_{0,a} \ = \ \frac{1}{2 \Gamma\left(\frac{a}{a-1}\right)} \end{equation} For \ $p=+\infty$ \ and \ $a >0$ \begin{equation} \mathcal{G}(x) \ = \ \frac{1}{2}, \hspace{10mm} -1\leq x \leq 1 \end{equation} with \ $\mathcal{G}(x)=0$ \ everywhere else on \ $\mathbb{R}$, \ and where we assume, for notational consistency in the sequel, that \ $c_{\infty, a} = \frac{1}{2}$. \ Moreover, we consider the re-scaled generalized Gaussians \begin{equation} \mathcal{G}_t (x) \ = \ \frac{1}{t} \ \mathcal{G}\left(\frac{x}{t}\right), \hspace{10mm} t>0 \end{equation} A physical significance of such generalized Gaussians was established when it was proved in \cite{Baren} that they are the self-similar solutions of the porous medium equation. The relation between the porous medium equation and the $q$-entropy has been advocated by many authors such as \cite{PP, K, S, FD2, MMPL, LMMP, CN, NCR, T-book, SCN, RNC}. In \cite{NK9} we tried to explore its possible significance for the dynamical underpinnings of the $q$-entropy. We also brought forth the significance of the developments that were initiated by the viewpoint of \cite{Otto} for systems described by the $q$-entropy, and subsequent developments in the theory of metric measure spaces \cite{Villani, Ambrosio}.\\ At this point one can define the ``absolute" analogue of the relative entropic functional (17), by \begin{equation} \mathcal{N}_\lambda [\rho] \ = \ \left\{ \begin{array}{ll} \left( \int_\mathbb{R} [\rho(x)]^\lambda \ dx \right)^\frac{1}{1-\lambda} & \mathrm{if} \ \ \lambda \neq 1\\ & \\ \exp \left( -\int_\mathbb{R} \rho(x) \log\rho(x) \ dx \right) & \mathrm{if} \ \ \lambda = 1 \end{array} \right. \end{equation} which, of course, is nothing else than the exponentials of the R\'{e}nyi entropy (7) and of the Boltzmann/Gibbs/Shannon entropy (2) respectively. For the generalized Gaussians (30), (32), (34) straightforward calculations give for their $p$-th deviation: for \ $0<p<+\infty$ \ and for \ $a>\frac{1}{1+p}$ \ \begin{equation} \mathfrak{s}_p [\mathcal{G}] \ =\ \frac{1}{(a-1+ap)^\frac{1}{p}} \end{equation} For \ $p=0$ \ and for \ $a>1$, \ one gets \begin{equation} \mathfrak{s}_0 [\mathcal{G}] \ = \ \exp \left( - \frac{a}{a-1} \right) \end{equation} and for \ $p=+\infty$ \begin{equation} \mathfrak{s}_\infty [\mathcal{G}] \ = \ 1 \end{equation} The absolute entropies (36) for these generalized Gaussians are, for \ $p\in (0,+\infty)$ \ and for \ $a>\frac{1}{1+p}$ \begin{equation} \mathcal{N}_a [\mathcal{G}] \ = \ \left\{ \begin{array}{ll} \frac{1}{c_{p,a}} (\frac{ap}{a-1+ap})^\frac{1}{1-a} & \mathrm{if} \ \ a\neq 1\\ & \\ \frac{1}{c_{p,1}} e^\frac{1}{p} & \mathrm{if} \ \ a=1 \end{array} \right. \end{equation} For \ $p=0$ \ and \ $a>1$ \ one gets \begin{equation} \mathcal{N}_a [\mathcal{G}] \ = \ \frac{1}{c_{0,a}} \left(\frac{a}{a-1} \right)^\frac{1}{1-a} \end{equation} and finally, for \ $p=+\infty$ \ and \ $a>0$ \begin{equation} \mathcal{N}_a [\mathcal{G}] \ = \ 2 \end{equation} All of the above notation and results set the stage for the extremizing statement that follows.
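Before turning to it, the normalization constant (31) and the deviation formula (37) can be checked by direct quadrature in the compactly supported case $a>1$. A minimal numerical sketch (an illustration only, assuming the scipy library; the values $p=2$, $a=3/2$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as B

# Quadrature check of the normalization c_{p,a} in (31) and of the
# p-th deviation s_p[G] = (a-1+a*p)**(-1/p) in (37), for a > 1.
p, a = 2.0, 1.5
c = p * (a - 1) ** (1 / p) / (2 * B(1 / p, a / (a - 1)))
X = (a - 1) ** (-1 / p)                      # edge of the compact support
G = lambda x: c * (1 - (a - 1) * np.abs(x) ** p) ** (1 / (a - 1))

print(quad(G, -X, X)[0])                               # -> 1 (normalization)
m_p = quad(lambda x: np.abs(x) ** p * G(x), -X, X)[0]  # p-th moment (24)
print(m_p ** (1 / p), (a - 1 + a * p) ** (-1 / p))     # both ~ 0.5345
```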
One of the ways that characterizes the ordinary Gaussians and connects them with the BGS entropy is the following extremizing property: among all the probability distributions with a given, finite, second moment, the Gaussian is the unique probability distribution that maximizes the BGS entropy. It is remarkable that a similar property is proved in \cite{LYZ0} for the generalized Gaussians and \ $\mathcal{N}_\lambda$. \ The exact statement is as follows: Let \ $\rho$ \ be a probability density function in \ $\mathbb{R}$. \ If \ $p\in [0, +\infty]$, \ $a>\frac{1}{1+p}$ \ and both \ $\mathfrak{s}_p[\rho]$ \ and \ $\mathcal{N}_a [\rho]$ \ are finite, then \begin{equation} \frac{\mathfrak{s}_p[\rho]}{\mathcal{N}_a[\rho]} \ \geq \ \frac{\mathfrak{s}_p[\mathcal{G}]}{\mathcal{N}_a[\mathcal{G}]} \end{equation} Equality in (43) holds if and only if \ $\rho (x) = \mathcal{G}_t(x)$ \ for some \ $t\in (0,+\infty)$. \ For a generalization of this theorem, see \cite{LYZ5}. Inequality (43) can also be interpreted as a Sobolev-type inequality, so it may not come as a surprise that further relations exist between (43) and the Sobolev, the log-Sobolev and Gagliardo-Nirenberg inequalities \cite{DD, CENV, Villani}. We have, however, been unable to find any use of these inequalities for the case of the $q$-entropy that may appear to have any physical significance, certainly not for the questions addressed in this work.\\ Before closing this section, one may also wish to notice that the \ $p=1$ \ special case of the generalized Gaussians (30) is nothing else than the $q$-exponential functions which appear as the equilibrium distributions, upon maximizing the $q$-entropy under the usually employed constraints reflecting the classical ensembles in Statistical Mechanics \cite{T-book}. Hence (43) can be interpreted as a formalization and extension of the well-known statements about the role of $q$-exponentials in the part of ``nonextensive" Statistical Mechanics employing the $q$-entropy \cite{T-book}. \\ \section{Fisher information, the Cram\'{e}r-Rao inequality and generalizations} The Fisher information, and its associated Riemannian metric, have been fundamental concepts in Statistics since \cite{Rao, Jeffreys}.
If we consider a probability distribution \ $\rho(x; \xi)$, \ where \ $x\in\mathbb{R}$, for simplicity and in order to continue the arguments given above, and \ $\xi\in\mathbb{R}$, \ then one defines the Fisher information as the expectation value \ $\mathbb{E}$ \ given by \begin{equation} \mathcal{F}(\xi) \ = \ \mathbb{E} \left[ \left( \frac{\partial \log\rho (x; \xi)}{\partial \xi}\right)^2 \right] \end{equation} or, in other words, \begin{equation} \mathcal{F}(\xi) \ = \ \int_\mathbb{R} \left(\frac{\partial \log\rho(x;\xi)}{\partial \xi}\right)^2 \rho(x; \xi) \ dx \end{equation} The multi-dimensional parameter space generalization of (44), (45) for \ $\xi\in \mathbb{R}^n$, \ or more generally for \ $\xi = (\xi^1, \ldots, \xi^n) \in\mathcal{M}$, \ where \ $\mathcal{M}$ \ is assumed to be the parameter space which is an $n$-dimensional Riemannian manifold is \begin{equation} \mathcal{F}_{ij}(\xi) \ = \ \mathbb{E} \left[\frac{\partial\log\rho(x;\xi)}{\partial \xi^i} \cdot \frac{\partial\log\rho(x;\xi)}{\partial \xi^j}\right], \hspace{10mm} i,j = 1, \ldots, n \end{equation} or, in other words, \begin{equation} \mathcal{F}_{ij}(\xi) \ = \ \int_\mathbb{R} \frac{\partial\log\rho(x;\xi)}{\partial \xi^i} \cdot \frac{\partial\log\rho(x;\xi)}{\partial \xi^j} \ \rho(x;\xi) dx \end{equation} or, after integration by parts, one gets the quadratic form on \ $\mathcal{M}$ \begin{equation} \mathcal{F}_{ij}(\xi) \ = \ \mathbb{E} \left[ - \frac{\partial^2\log\rho(x;\xi)}{\partial \xi^i\partial \xi^j} \right] \end{equation} We can also re-express (46) as \begin{equation} \mathcal{F}_{ij}(\xi) \ = \ 4 \int_\mathbb{R} \frac{\partial\sqrt{\rho(x;\xi)}}{\partial\xi^i} \cdot \frac{\partial\sqrt{\rho(x;\xi)}}{\partial\xi^j} \ dx \end{equation} As for any mathematical concepts, one can wonder what their significance would be, if any, for Statistical Mechanics in general, or for the questions addressed in this work in particular. We will not comment on the former question. However, for the case of determining a relative $q$-entropy, and its possible dynamical underpinnings expressed by coarse-graining through convex bodies in the phase space of a system of many degrees of freedom described by a Hamiltonian, one can say the following. \\ First, one can see that the Fisher information (44), (45), or the Fisher metric in general (46), (47), is the Hessian (48) of the Kullback-Leibler divergence (4). Moreover, due to (49), it is a positive semi-definite quadratic form and as such, it can be seen as providing a Riemannian metric on the parameter space \ $\mathcal{M}$. \ This is the fundamental point of a long line of investigations coming under the title ``Information Geometry" whose scope far exceeds its potential applicability in Statistical Mechanics \cite{Amari}. The viewpoint of these investigations may prove to be important though for $q$-entropy purposes. One can see the Fisher information, or the Fisher metric, as analytical and geometric structures on the space of probability measures of the parameter space \ $\mathcal{M}$. \ A similar space, the Monge-Kantorovich-Rubinstein-Vaserstein, or in the more widely used terminology the Wasserstein space, has attracted considerable attention recently \cite{Villani, Ambrosio}. We claimed in \cite{NK9} that the Wasserstein space may be important for exploring the dynamical foundations of the $q$-entropy.
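As an elementary numerical illustration of (44), (45) and of the Hessian form (48), one may take the Gaussian location family of mean $\xi$ and fixed width $s$, for which the Fisher information equals $1/s^2$. A minimal sketch (an illustration only, assuming the scipy library; the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# Elementary check of (44)-(45) and the Hessian form (48): for the
# location family rho(x; xi) = N(xi, s^2) one has F(xi) = 1/s^2.
s, xi = 1.5, 0.3
rho = lambda x: np.exp(-(x - xi) ** 2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)
dlog = lambda x: (x - xi) / s**2                  # d(log rho)/d(xi)
F_score = quad(lambda x: dlog(x) ** 2 * rho(x), -np.inf, np.inf)[0]  # eq. (45)
F_hess = quad(lambda x: (1 / s**2) * rho(x), -np.inf, np.inf)[0]     # eq. (48)
print(F_score, F_hess, 1 / s**2)                  # all 0.4444...
```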
So, in a sense, an investigation about the foundations of the $q$-entropy, via the Fisher metric or via any other means, also provides a dynamical foundation for the parts of Information Geometry that may be related to an underlying Hamiltonian evolution of a system of many degrees of freedom. \\ Second, according to Chentsov's theorem \cite{Cen1, Cen2}, the Fisher information metric (46) is the unique Riemannian metric on the parameter space \ $\mathcal{M}$ \ which is invariant under sufficient statistics. The theorem \cite{Cen1, Cen2} is applicable for finite sample spaces. For infinite sample spaces see \cite{AJLS}. Sufficient statistics is a stronger requirement than mere reparametrization invariance, which all geometric structures should obey. Without going into any details, as they can be found in the literature \cite{Amari, KV} and are not required in the sequel, sufficient statistics refers to mappings between sample spaces that preserve all information about a random variable. The Fisher metric can be straightforwardly checked to be invariant under sufficient statistics. However, it is much harder to prove that it is the only metric invariant under sufficient statistics. One can hardly over-emphasize the usefulness of Riemannian metrics in modelling physical systems, especially if such metrics have desirable features rendering them unique for modelling such systems.\\ One sees that the functional form of the Fisher information (44) involves a logarithm, hence it may be somehow related to the BGS entropy or the Kullback-Leibler divergence in their Information theoretical applications. A question that can be posed is whether it is possible to extend the definition of the Fisher information to an analogous quantity more closely related to the R\'{e}nyi-/$q$-entropy functionals (17), (21). A proposal for such a generalized Fisher information was also provided in \cite{LYZ0}. Let \ $p\in [1, +\infty]$ \ and \ $a\in\mathbb{R}$. \ Then define the $(p, a)$-th Fisher information \ $\mathcal{F}_{p,a}[\rho]$ \ of a probability density \ $\rho$ \ as follows: For \ $p\in(1, +\infty)$, \ let \ $p^\ast \in (1, +\infty)$ \ denote its harmonic/convex conjugate, namely \begin{equation} \frac{1}{p} + \frac{1}{p^\ast} \ = \ 1 \end{equation} Define \begin{equation} \mathcal{F}_{p, a}[\rho] \ = \ \left\{ \int_\mathbb{R} \Big| [\rho(x)]^{a-2} \ \frac{d\rho(x)}{dx}\Big|^{p^\ast} \rho(x) \ dx \right\} ^\frac{1}{p^\ast a} \end{equation} as long as the integral above is finite. For \ $p=1$ \ \begin{equation} \mathcal{F}_{p,a} [\rho] \ = \ \mathrm{ess} \sup \left\{ \Big| [\rho(x)]^{a-2} \ \frac{d\rho(x)}{dx}\Big|, \ \ x\in \mathrm{supp} \rho \subset \mathbb{R} \right\} \end{equation} where \ $\mathrm{supp}$ \ stands for ``support of", under the assumption that \ $\rho$ \ is absolutely continuous, and that the essential supremum in (52) is finite. For the case \ $p=+\infty $ \ one defines \begin{equation} \mathcal{F}_{p,a}[\rho] \ = \ \inf_{K\in P(\mathbb{R})} \left\{ \sum_{k\in K} \Bigg| \frac{[\rho(x_k)]^a}{a} - \frac{[\rho(x_{k-1})]^a}{a} \Bigg| \right\} \end{equation} where the index set \ $K$ \ indicates a partition of \ $\mathbb{R}$ \ and \ $P(\mathbb{R})$ \ indicates all possible partitions of \ $\mathbb{R}$.
Definition (53) expresses the total variation of \ $\rho^a / a$, \ so we assume that \ $\rho^a$ \ has bounded variation in order for it to make sense.\\  The calculation of the $(p,a)$-th Fisher information for the generalized Gaussians (30), (32), (33) is straightforward and gives, for \ $p\in[1, +\infty]$ \ and \ $a>\frac{1}{1+p}$, \begin{equation}      \mathcal{F}_{p,a} [\mathcal{G}] \ = \ \left\{                 \begin{array}{ll}                      c_{p,a}^\frac{a-1}{a}  p^\frac{1}{a} \left(a-1+ap \right)^{-\frac{p-1}{pa}}  &  \mathrm{if} \ \  p < +\infty\\                           &  \\                      \frac{2^\frac{1-a}{a}}{a^\frac{1}{a}} & \mathrm{if} \ \ p=+\infty                 \end{array}       \right. \end{equation} It may be worth observing at this point that \begin{equation}    \mathcal{N}_a [\mathcal{G}] \ = \ a \ \mathfrak{s}_p[\mathcal{G}] \ (\mathcal{F}_{p,a}[\mathcal{G}])^a \end{equation} Stam's inequality \cite{Stam} provides an alternative characterization of the Gaussians as extremizing distributions. It states that among all probability distributions with the same Fisher information, Gaussians are the unique distributions that minimize the BGS entropy. Stam's theorem was generalized to the $(p,a)$-th Fisher information, the absolute entropy \ $\mathcal{N}_a$, \ and the generalized Gaussians in \cite{LYZ0} as follows. Let \ $p\in [1, +\infty]$, \ $a > \frac{1}{1+p}$ \ and let \ $\rho$ \ be a probability distribution on \ $\mathbb{R}$. \ For finite $p$, \ $\rho$ \ is assumed to be absolutely continuous, as in the definition of the $(p,a)$-th Fisher information. Analogously, for \ $p=+\infty$, \ $\rho^a$ \ is assumed to have bounded variation. If both \ $\mathcal{N}_a[\rho]$ \ and \ $ \mathcal{F}_{p,a}[\rho]$ \ are finite, then \begin{equation}    \mathcal{N}_a[\rho] \ \mathcal{F}_{p,a}[\rho] \ \geq \ \mathcal{N}_a [\mathcal{G}] \ \mathcal{F}_{p,a} [\mathcal{G}] \end{equation} Equality holds in (56) if and only if there exist a \ $t>0$ \ and an \ $x_0\in\mathbb{R}$ \ such that \ $\rho(x) = \mathcal{G}_t (x-x_0)$, \ for all $x\in\mathbb{R}$. This is another extremizing characterization of the generalized Gaussians, after (43). \\  An important relation involving the Fisher information is the Cram\'{e}r-Rao inequality \cite{CT}, a fundamental inequality in statistical inference. Given an unbiased estimator, the Cram\'{e}r-Rao inequality provides a lower bound for its variance in terms of the Fisher information in the case of a scalar/single parameter. It provides a lower bound on the estimator's covariance in terms of the Fisher metric (matrix) in the multi-dimensional/vector parameter case. Without providing too many details, as they can be readily found in the literature \cite{CT, KV}, consider a probability distribution \ $\rho(x;\xi)$ \ which depends on a single parameter $\xi$. Then the variance of an unbiased estimator $\hat{\Xi}$ of $\xi$ obeys the Cram\'{e}r-Rao inequality \begin{equation}     \mathrm{var}(\hat{\Xi}) \ \geq \ \frac{1}{\mathcal{F}(\xi)} \end{equation} where \ $\mathcal{F}(\xi)$ \ stands for the Fisher information (44). If \ $\hat{\Xi}$ \ is an unbiased estimator of a vector-valued parameter $\xi = (\xi^1, \ldots, \xi^n)$, \ $\xi\in\mathbb{R}^n$, \ the Cram\'{e}r-Rao inequality generalizes as \begin{equation}     \mathrm{cov}(\hat{\Xi}) \ \geq \ \mathcal{F}^{-1} \end{equation} where \ $\mathrm{cov}$ \ stands for the covariance matrix of the estimator \ $\hat{\Xi}$, \ and \ $\mathcal{F}$ \ is the matrix form of the Fisher metric (46).
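To make (57) concrete in a standard case, consider \ $N$ \ independent samples \ $x_1, \ldots, x_N$ \ drawn from a Gaussian of unknown mean \ $\xi$ \ and known variance \ $\sigma^2$. \ By additivity of the Fisher information over independent samples, the joint distribution carries \ $\mathcal{F}(\xi) = N/\sigma^2$, \ while the sample mean \ $\hat{\Xi} = \frac{1}{N}\sum_{k=1}^N x_k$ \ is an unbiased estimator of \ $\xi$ \ with \[ \mathrm{var}(\hat{\Xi}) \ = \ \frac{\sigma^2}{N} \ = \ \frac{1}{\mathcal{F}(\xi)} \] so the bound (57) is saturated; estimators attaining it are called efficient.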
The generalization to the case of the $(p,a)$-th Fisher information (51), (52), (53) and the generalized Gaussians goes as follows \cite{LYZ0}: Let \ $p\in[1, +\infty]$, \ $a > \frac{1}{1+p}$. \ If both \ $\mathfrak{s}_p[\rho]$ \ and \ $ \mathcal{F}_{p,a}[\rho]$ \ are finite for a probability density \ $\rho$, \ then \begin{equation}     \mathfrak{s}_p [\rho] \ \mathcal{F}_{p,a} [\rho] \ \geq \ \mathfrak{s}_p[\mathcal{G}] \ \mathcal{F}_{p,a}[\mathcal{G}]  \end{equation} Equality holds in (59) if and only if \ $\rho = \mathcal{G}_t$, \ for some \ $t>0$. \ In (59), if $p$ is finite, then \ $\rho$ \ is assumed to be absolutely continuous, and if \ $p=+\infty$ \ then \ $\rho^a$ \ is assumed to have bounded variation. This is a third extremal characterization of the generalized Gaussians, after (43), (56).\\ We see that the definitions of the relative divergence (17), the absolute entropy (36) and the corresponding generalized Gaussians (30), (32), (34), (35) give us generalizations of classical inequalities that are well-known in Information Theory. Definitions such as (17) and subsequently (20), (21) may be reasonable, and in particular (21) may be a good guess for an expression of the relative $q$-entropy. The relations of these functionals with convex geometry, which we did not elaborate upon in this work but which can be found in the references, are an element that increases our confidence in the relevance of these expressions for Physics. Ultimately, only the calculation of such functionals in particular physical models, the quantities stemming from them, and how these compare with experimental data will be the judge of the usefulness, if any, of such expressions in Physics.\\  \section{Conclusions and outlook}  In this work, we presented a proposal about the ``essential" functional form (17) of a relative $q$-entropy (21), (22), a functional form which is also shared, up to a power and a logarithm, with a variation of the relative R\'{e}nyi entropy (20). This form appears in the fundamental work of Lutwak-Yang-Zhang, which relies on ideas of the $L_p$-Brunn-Minkowski theory and its dual. We also stated the extremal distributions of this functional form, an $L_p$ form of the Fisher information, and the related form of a generalized Cram\'{e}r-Rao inequality. We attempted to point out the significance of these constructions, as judged from the viewpoint of the $q$-entropy, and their possible relevance for the part of Statistical Mechanics based on the $q$-entropy. Perhaps mirroring the precedence of the construction of the $q$-entropy in Information Theory, before its re-discovery by C. Tsallis in \cite{T1}, and its further development in some parts of Physics during the last three decades, the relative $q$-entropy functional (21), (22) may prove to be of interest in Statistical Mechanics and Complexity Theory in the future.\\ The present work takes a considerably different path from the recent \cite{BCP, RPL, Rast, ZSLL, Italians, ZY, WLi, ShiHan}, which refer mostly to relative $q$-entropies for quantum systems, in an attempt to quantify coherence measures. The difference is not just superficial: indeed, one could attempt to extend the LYZ functional to Quantum Physics in the usual way, by replacing the probability distributions by appropriately defined pseudo-differential operators acting on Hilbert spaces of states of such quantum systems.
Whether such a naive substitution works is a totally different matter altogether, as one faces, once more, the notorious ``operator ordering problem" which plagues all attempts at quantization starting from classical models, but is especially acute in attempting to quantize General Relativity in a background-independent and non-perturbative manner.\\  The dual $L_p$-Brunn-Minkowski theory and its relations with the better known $L_p$-Brunn-Minkowski theory have the potential of providing a partial understanding of the dynamical foundations of the $q$-entropy, at least as they pertain to coarse-graining \cite{DeGos1, DeGos2, DeGos3, DeGos4, NK10, Gos0}. The idea of such an application of the Brunn-Minkowski theory to entropy functionals is not really new. However, this geometric viewpoint has not been advocated, much less appreciated, in Physics yet, as far as we know. Elements of such a viewpoint can be traced to the relations between Information Theory and the Brunn-Minkowski theory, as can be seen in \cite{CC, DCT, CT}, for instance.\\ Based, in part, on the present work, one could conjecture that coarse-graining with convex polyhedra in an $L_p$ sense in phase space is a reason for choosing to describe the collective behavior of a Hamiltonian system of many degrees of freedom by using the $q$-entropy. Dually, one could employ in such a coarse-grained description star-shaped bodies in an $L_p$ sense in phase space, as the latter are the fundamental objects of the dual $L_p$-Brunn-Minkowski theory. Such coarse-graining procedures have, ideally, to be somehow related to the dynamical foundations of the theory. Whether they have anything to do with Physics can ultimately only be decided by comparing their implications with the results of experiments.\\ One step in this general direction of coarse-graining was taken by the introduction of ``quantum blobs" \cite{Gos1}, which are phase space volumes invariant only under linear, rather than the fully nonlinear, symplectic maps of the phase space. The proposal for the existence and use of ``quantum blobs" is firmly rooted in the symplectic non-squeezing theorem \cite{Gr} and, more generally, in the existence and properties of symplectic capacities \cite{DS, GosL}. These results provide some genuinely 2-dimensional restrictions to deformations, under symplectic maps, of phase space volumes. This rigidity of symplectic maps, however, only applies to projections of phase space volumes on symplectic 2-planes of the phase space and does not hold for sections of phase space volumes \cite{AM}. It may be worth mentioning at this point that the implications for Statistical Mechanics, if any, of the distinction between symplectic and volume-preserving maps are currently largely unknown \cite{Gos2, NK12, NK13}. Addressing this question may have far-reaching consequences for a better understanding of the foundations of Statistical Mechanics, especially as it applies to ``complex systems" or to systems out of equilibrium.\\  In the same isoperimetric/Sobolev inequality extremal spirit presented in this work, recall that the Viterbo conjecture \cite{Vit, AAKO}, which essentially claims that the Euclidean ball has the maximum symplectic capacity, for all symplectic capacities, among all convex sets of a given volume in the standard symplectic space \ $\mathbb{R}^{2n}$, \ is true for $L_p$ balls. However, notice that it is violated for convex sets which are sections of star-shaped bodies.
Hence sections and projections seem to behave quite differently from a symplectic viewpoint, unlike their familiar correspondence via polar duality encountered in Functional Analysis and in Convex Geometry. \\ In its most straightforward interpretation, the duality between the Brunn-Minkowski and the dual Brunn-Minkowski theories rests on the replacement of concepts involving projections by concepts involving sections of convex and of star-shaped bodies. Hence a naive use of the ``quantum blobs", or of their nonlinear symplectic analogues if such structures could be reasonably defined, may not provide the most appropriate objects for a form of coarse-graining of phase space which behaves well under the duality between the $L_p$-Brunn-Minkowski and the dual $L_p$-Brunn-Minkowski theories. Such a duality, if it exists, may be used in establishing and explaining a suspected invariance of the $q$-entropy under what appears to be a set of M\"{o}bius-like transformations of the nonextensive parameter $q$ \cite{NK11}. \\  It should also be noticed that ``quantum blobs" are essentially Riemannian constructions \cite{Gos1}. As such, they need to be generalized in order to incorporate the $q$-exponentials and the generalized $L_p$ Gaussians which arise as extremal distributions of the above proposed relative $q$-entropy functionals (17), (20), (21). It is, however, not clear to us how to define such essentially $L_p$ generalizations of the ``quantum blobs", if this is possible at all. Moreover, we are not certain which invariance requirements one would have to impose to derive such structures, and how such requirements could be justified on dynamical grounds for, at least, the Hamiltonian systems of many degrees of freedom which are relevant to Statistical Mechanics. We believe that it may be worth pursuing and further developing some of these ideas in the future.\\ \vspace{5mm}  \noindent{\bf Acknowledgements:} \ We are grateful to the referees whose constructive criticism helped improve the clarity of the exposition of this work.\\  \vspace{3mm}
\section{Introduction} \label{sec_intro} The mechanism of hadronization, \textit{i.e.}, the conversion of quarks and gluons produced in hadronic or electromagnetic reactions into colorless hadrons, is a non-perturbative problem that is presently not calculable within the theory of strong interactions, Quantum Chromodynamics (QCD). For partons produced at large transverse momentum, $p_t$, the factorization theorem of QCD allows one to treat the hadronization process via a so-called fragmentation function, which is universal and can, in principle, be determined empirically. At low momentum this scheme is no longer applicable and other hadronization mechanisms become relevant. The quark coalescence model (QCM) has provided a phenomenologically successful framework to understand several non-perturbative features of hadron production in hadronic collisions. In elementary ($p$-$N$ and $\pi$-$N$) reactions, flavor asymmetries in kaon and charmed hadron spectra have been associated with the recombination of produced strange and charm quarks with valence (and sea) quarks in target and projectile~\cite{Das:1977cp,Braaten:2002yt,Rapp:2003wn}. In heavy-ion reactions at the Super Proton Synchrotron (SPS)~\cite{Biro:1994mp} and the Relativistic Heavy-Ion Collider (RHIC), recombination of quarks from a thermalized Quark-Gluon Plasma (QGP)~\cite{Hwa:2002tu,Greco:2003mm,Fries:2003kq,Molnar:2003ff,Lin:2004en, Zimanyi:2005nn,Miao:2007cm,Ravagli:2007xx,Ayala:2007cp,Cassing:2008sv} gives a simple and intuitive explanation of several unexpected features in the observed hadron spectra, most notably the large baryon-to-meson ratio and the rather universal constituent quark number scaling (CQNS) of the elliptic flow coefficient, $v_{2,h}(p_T) \equiv n_q v_{2,q}(p_T/n_q)$ ($n_q$: number of valence quarks in hadron $h$, $p_T$: transverse momentum of $h$). This scaling relation implies that the momenta of the coalescing quarks are collinear, and its implementation in QCMs usually restricts their applicability to sufficiently large momenta so that the associated non-conservation of energy in the hadron formation process is small. Since at high $p_T$ parton fragmentation is expected to take over, the typical range of applicability of QCMs is at intermediate momenta, $2\;\mathrm{GeV} \lesssim p_T \lesssim 6\;\mathrm{GeV}$. In our recent work~\cite{Ravagli:2007xx}, we have suggested a reinterpretation of quark coalescence in terms of hadronic resonance formation, implemented via $q+\bar q \to M$ scattering into a Boltzmann equation ($M$: meson). Energy conservation is obeyed by utilizing hadronic reaction rates, along with detailed balance, based on pertinent spectral functions. In addition, we have shown that this approach correctly recovers the thermal equilibrium limit, which enabled a more controlled extension of the coalescence mechanism to low $p_T$ and allowed us to make contact with the phenomenologically successful hydrodynamic description of bulk matter at RHIC. Another aspect that has evaded a satisfactory explanation in QCMs at RHIC is the question of space-momentum correlations in the underlying (thermal) quark distribution functions (see, e.g., Ref.~\cite{Fries:2008hs} for a recent critical review). In hydrodynamic models the elliptic flow of produced hadrons is a collective effect that implies a definite correlation between the particle's momentum and its spatial position in the fireball, \textit{i.e.}, a (locally thermalized) fluid cell moving in a specific direction preferentially emits hadrons in that same direction.
Such a correlation is neglected within the so-called ``factorized'' implementation of the parton $v_{2,q}(p_T)$ which does not carry any spatial dependence (and is therefore identical regardless of the quark's position inside the fireball). While this approximation straightforwardly recovers the empirical constituent-quark number scaling (CQNS) of the hadron elliptic flow, it is at variance with the hydrodynamic description of $v_2$ as a collective expansion effect. Previous attempts to incorporate space-momentum correlations into QCMs have found the empirically observed CQNS to be rather fragile~\cite{Pratt:2004zq,Molnar:2004rr,Greco:2005jk}. Part of the problem is the construction of a realistic transition from the thermal to the kinetic regime of the underlying parton phase-space distribution functions, as characterized by the ``saturation'' (leveling off) of the empirical parton $v_2$ at about $p_t\simeq 1\, \mathrm{GeV}$. In Ref.~\cite{Pratt:2004zq} several elliptic ``deformations'' of a thermal blast-wave parameterization have been considered, motivated by different plausible realizations of $v_2$. While some features could be ruled out as being incompatible with the empirical CQNS, other assumptions did not spoil the latter. In Ref.~\cite{Greco:2005jk} a reduction of the boost velocity at higher momenta was introduced, entailing a violation of CQNS at the 20\% level. It therefore seems that purely phenomenological prescriptions of space-momentum correlations and associated $v_2$ did not arrive at a conclusive interpretation of the key features underlying the quark distribution functions. In Ref.~\cite{Molnar:2004rr}, based on numerical transport simulations, it was even argued that rather delicate cancellations must be at work to obtain CQNS for the thermal components, thus raising doubts about the robustness of the coalescence approach. In view of the broad empirical applicability of CQNS across different centralities, system sizes and collision energies~\cite{Adare:2006ti,Abelev:2007qg}, such an interpretation would be difficult to reconcile with experiment. In the present paper we adopt a microscopic approach to compute quark distributions in four-dimensional phase space (transverse position and momentum) by employing relativistic Langevin simulations for strange and charm quarks within an expanding thermal QGP background. In a strict sense, the underlying Fokker-Planck equation is applicable for a diffusive treatment of heavy and/or high-momentum quarks, \textit{i.e.}, in a regime where the momentum transfers from the heat bath are small. Our simulations for low-momentum ($p_t\lesssim 1\,\mathrm{GeV}$) strange quarks are thus at the boundary of applicability of a Fokker-Planck treatment and may be considered as extrapolations thereof. The Langevin approach has the attractive feature that it naturally encodes the transition from a thermal to a kinetic regime. In particular, when simulating heavy-quark (HQ) motion in an expanding QGP fireball for non-central collisions, this transition reflects itself in a saturation of the elliptic flow~\cite{vanHees:2005wb,Moore:2004tg}, a key ingredient to CQNS in light-hadron spectra observed at RHIC. It turns out that Langevin simulations preserve the $v_2$ saturation feature when applied to strange quarks (with thermal masses of $\sim$$0.5\,\mathrm{GeV}$).
We will therefore investigate whether the resulting quark distribution functions, evolved to the hadronization transition and injected into our resonance recombination approach, allow for a better (microscopic) understanding of space-momentum correlations in the coalescence process. In view of the rather delicate dependence of CQNS on these correlations (as discussed above), a realistic treatment of the kinematics in the hadron formation process is mandatory, including energy-momentum conservation, non-collinear kinematics, and a well defined equilibrium limit. The recombination approach developed in Ref.~\cite{Ravagli:2007xx} satisfies these requirements. In addition to this, the QGP evolution and subsequent hadronization are linked via the ansatz that resonances play an essential role in hot QCD matter around $T_c$. This scenario is consistent with effective potential models where a non-perturbative description of the strongly coupled QGP (sQGP) is realized via bound~\cite{Shuryak:2004tx} and/or resonance~\cite{Mannarelli:2005pz,vanHees:2007me} states of deconfined partons. Recent lattice QCD computations support the picture of various (light and strange) hadronic states surviving up to temperatures of $\sim$1.5-2~$T_c$~\cite{Karsch:2003jg,Asakawa:2003nw}. Our article is organized as follows. In Sec.~\ref{sec_boltz} we review our earlier developed model~\cite{Ravagli:2007xx} for resonance hadronization based on the Boltzmann equation. In Sec.~\ref{sec_parton} we elaborate the computation of the phase-space distributions of quarks obtained from Langevin simulations of an expanding QGP fireball at RHIC. In Sec.~\ref{sec_meson} we discuss the numerical results for the $v_2$ coefficients of $\phi$ and $J/\psi$ mesons within our model, and analyze their properties in terms of CQNS in both transverse momentum and transverse kinetic energy, $K_T$. Sec.~\ref{sec_concl} contains our conclusions. \section{Recombination from the Boltzmann Equation} \label{sec_boltz} Following Ref.~\cite{Ravagli:2007xx}, our description of hadronization at the critical temperature, $T_c$, is based on the Boltzmann equation using resonance quark-antiquark cross sections to compute meson spectra in terms of underlying anti-/quark phase-space distributions, $f_{q,\bar{q}}$ (baryons could be treated in a similar way, \textit{e.g.}, in a two-step process using subsequent quark-quark and quark-diquark interactions; in the present paper, we will focus on mesons). The meson phase-space distribution, $F_M$, is determined by the equation \begin{equation} \label{boltz} \left( \frac{\partial}{\partial t} +\vec{v}\cdot\vec{\nabla} \right) F_M(t,\vec x,\vec p) =-\frac{\Gamma}{\gamma_p} \, F_M(t,\vec x,\vec p)+\beta(\vec x,\vec p) \ , \end{equation} where $\vec p$ and $\vec x$ denote three-momentum and position of the meson, $M$, and $\vec{v}=\vec p/E_M(p)$ is its velocity ($m$, $E_M(p)$=$\sqrt{m^2+\vec p^2}$: meson mass and energy). The total meson width, $\Gamma$, is assumed to be saturated by the coupling to quark-antiquark states, $M\leftrightharpoons q + \bar q$, and taken to be constant, with the factor $\gamma_p=E_M(p)/m$ accounting for Lorentz time dilation, see also Ref.~\cite{Miao:2007cm}. Integrating over the fireball volume leads to the momentum-distribution function of the meson, $f_M$, and the pertinent transport equation \begin{equation} \label{boltz-mom} f_M(t,\vec p)=\int \mathrm{d}^3 x \, F_M(t,\vec x,\vec p), \quad \frac{\partial}{\partial t} f_M(t,\vec p) =-\frac{\Gamma}{\gamma_p} \, f_M(t,\vec p)+g(\vec p).
\end{equation} The drift term vanishes upon integration over $\vec x$ since it is a total divergence: $\vec{v} \cdot \vec{\nabla} f_M(t,\vec x,\vec p)=\vec{\nabla} \cdot [\vec{v} f_M(t,\vec x,\vec p)]$. The relation of the gain term, $g(\vec p)$, to the underlying microscopic interaction is given by \begin{equation} \label{gain} g(\vec p)=\int \mathrm{d}^3 x \beta(\vec x,\vec p)= \int \frac{\mathrm{d}^3 p_1 \mathrm{d}^3 p_2}{(2 \pi)^6} \int \mathrm{d}^3 x \ f_q(\vec x,\vec p_1) \ f_{\bar{q}}(\vec x,\vec p_2) \ \sigma(s) \ v_{\mathrm{rel}}(\vec p_1,\vec p_2) \ \delta^{(3)}(\vec p-\vec p_1-\vec p_2) \end{equation} with $\sigma(s)$ the cross section for the process $q+\bar{q}\to M$ at center-of-mass (CM) energy squared, $s=(p_1^{(4)}+p_2^{(4)})^2$, where $p_{1,2}^{(4)}$ are the four-momenta of quark and antiquark. The quark phase-space distribution functions are normalized as $N_{q,\bar{q}}=\int \frac{\mathrm{d}^3 x \;\mathrm{d}^3 p}{(2\pi)^3} f_{q,\bar{q}}(\vec x,\vec p)$. Throughout this paper, quarks will be assumed to be zero-width quasi-particles with an effective mass $m_q$ (which contains both thermal and bare contributions). The classical nature of the Boltzmann equation warrants the use of classical distribution functions for all the particles, and we assume zero chemical potentials for all quark species. The cross section is approximated by a relativistic Breit-Wigner form, \begin{equation} \label{cross} \sigma(s)=g_{\sigma}\frac{4\pi}{k^2} \frac{(\Gamma m)^2}{(s-m^2)^2+(\Gamma m)^2} \ , \end{equation} where $g_{\sigma}=g_M/(g_q g_{\bar{q}})$ is a statistical weight given in terms of the spin (-color) degeneracy, $g_M$ ($g_{q,\bar{q}}$), of the meson (anti-/quark), and $k$ denotes the quark three-momentum in the CM frame. With $M\leftrightharpoons q +\bar q$ being the only channel, it follows that $\Gamma_{\mathrm{in}}=\Gamma_{\mathrm{out}}=\Gamma$. Detailed balance requires the same $\Gamma$ in the loss term on the right-hand side of Eq.~(\ref{boltz}), thus ensuring the correct equilibrium limit with $\tau=1/\Gamma$ the pertinent relaxation time. This formulation conserves four-momentum and applies to all resonances $M$ with masses above the $q\bar q$ threshold, \textit{i.e.}, for a positive $Q$ value, \begin{equation} \label{masscondition} Q= m - (m_q+m_{\bar q}) \gtrsim 0 . \end{equation} If the $2\rightarrow 1$ channel proceeds too far off-shell, \textit{i.e.}, $Q<0$ and $\Gamma < |Q|$ (\textit{e.g.}, for pions), other processes need to be considered, \textit{e.g.}, $q+\bar{q}\rightarrow M+g$ (which, in principle, is possible in the present framework by implementing the respective cross sections). We note that the majority of the observed pions are believed to emanate from resonance decays ($\rho$, $\Delta$, $a_1$ etc.); in addition, hydrodynamic calculations suggest that the elliptic flow in heavy-ion collisions at RHIC does not change much after hadronization~\cite{Kolb:2003dz}. We also note that in the absence of a confining interaction individual quarks and antiquarks remain a part of the heat bath. 
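To illustrate how Eqs.~(\ref{gain}) and (\ref{cross}) can be evaluated in practice, the following is a minimal Monte-Carlo sketch in Python (for orientation only; it is not the code used for the results below): quark and antiquark momenta are sampled from isotropic thermal distributions, each pair is weighted by $\sigma(s)\,v_{\mathrm{rel}}$, and the modulus of the pair momentum is histogrammed. The parameter values are the illustrative $\phi$-meson ones quoted in Fig.~\ref{ptequil}; statistical weights, the spatial integration (an overall volume factor for homogeneous distributions) and the absolute normalization are omitted.
\begin{verbatim}
import numpy as np

# Monte-Carlo sketch of the gain term g(p): sample thermal (Boltzmann)
# quark/antiquark momenta, weight each pair by sigma(s)*v_rel and
# histogram the pair momentum p = p1 + p2.  Illustrative values only.

m_q, m_M, Gam, T = 0.45, 1.02, 0.05, 0.180   # masses, width, T [GeV]
N = 500000
rng = np.random.default_rng(1)

def sample_thermal(m, n):
    """Rejection-sample 3-momenta from exp(-E/T) in a box |p_i| < 3 GeV."""
    out = np.empty((0, 3))
    while len(out) < n:
        p = rng.uniform(-3.0, 3.0, size=(n, 3))
        E = np.sqrt(m**2 + np.sum(p**2, axis=1))
        out = np.vstack([out, p[rng.uniform(size=n) < np.exp(-(E - m)/T)]])
    return out[:n]

p1, p2 = sample_thermal(m_q, N), sample_thermal(m_q, N)
E1 = np.sqrt(m_q**2 + np.sum(p1**2, axis=1))
E2 = np.sqrt(m_q**2 + np.sum(p2**2, axis=1))

dot = E1*E2 - np.sum(p1*p2, axis=1)          # four-product p1.p2
s = 2.0*m_q**2 + 2.0*dot                     # CM energy squared
v_rel = np.sqrt(np.maximum(dot**2 - m_q**4, 0.0))/(E1*E2)  # Moller velocity

k2 = s/4.0 - m_q**2                          # CM momentum squared
sigma = (4*np.pi/np.maximum(k2, 1e-12)) \
        * (Gam*m_M)**2/((s - m_M**2)**2 + (Gam*m_M)**2)    # Breit-Wigner

pM = np.sqrt(np.sum((p1 + p2)**2, axis=1))   # meson momentum modulus
g_hist, edges = np.histogram(pM, bins=40, range=(0.0, 3.0),
                             weights=sigma*v_rel)
\end{verbatim}
Supplying each entry in addition with the factor $\gamma_p/\Gamma$ yields the equilibrium meson spectrum discussed below.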
\begin{figure}[!t] \begin{center} \includegraphics[width=0.4\textwidth]{pt_blastwave.eps} \end{center} \caption{(Color online) $p_T$ spectra for $\phi$ mesons using the Boltzmann recombination equation in the equilibrium limit, Eq.~(\ref{eqboltzappr}), based on blast-wave input distributions for strange quarks (solid line), compared to $\phi$ spectra directly obtained from a blast-wave expression with the same fireball parameters (temperature $T=180 \, \mathrm{MeV}$, radial expansion surface velocity $\beta_0=0.55c$). The $\phi$-meson resonance parameters are $m_{\phi}=1.02 \, \mathrm{GeV}$, $\Gamma_\phi=50 \, \mathrm{MeV}$, and the $s$-quark mass is $m_s=0.45 \, \mathrm{GeV}$.} \label{ptequil} \end{figure} The equilibrium limit is readily recovered within our approach by imposing the stationarity condition, \begin{equation} \label{eqcond} \frac{\partial}{\partial t} f_M(t,\vec p)=0 \ . \end{equation} Then (\ref{boltz-mom}) is immediately solved by \begin{equation} \label{eqboltzappr} f_M^{\mathrm{eq}}(\vec p)=\frac{\gamma_p}{\Gamma} g(\vec p) \ , \end{equation} which represents the large-time limit of the Boltzmann equation and is the expression that comes closest to the conventional QCM approximation. For hadronization times smaller than or comparable to the relaxation time, $\tau$, the equilibrium limit will not be reached and an explicitly time-dependent solution is in order. We have verified numerically that Eq.~(\ref{eqboltzappr}) accurately recovers the standard thermal Boltzmann distribution for a meson $M$ at temperature $T$, if the constraint of a positive $Q$ value is satisfied (for negative $Q$ the $2 \to 1$ channel is inoperative), as illustrated in Fig.~\ref{ptequil} for the case of the $\phi$ meson: when using thermal quark input distributions with radial flow (blast-wave model), the computed equilibrium expression (\ref{eqboltzappr}) is in excellent agreement with the same blast-wave expression (identical temperature and flow profile) directly applied at the meson level. This reiterates the close connection between equilibration and energy conservation in the approach of Ref.~\cite{Ravagli:2007xx}, providing a significant improvement over previous QCMs. \section{Partonic Spectrum} \label{sec_parton} While the blast-wave model provides a convenient description of the thermal component of empirical hadron (and/or parton) spectra, the transition to the kinetic, and eventually hard-scattering, regime is more involved, especially with regard to phase-space correlations within QCMs, as discussed in the Introduction. In an attempt to generate realistic quark input distributions for meson formation processes in the vicinity of $T_c$, we here adopt a Fokker-Planck approach for test particles evolving in a thermally expanding QGP background. The latter is parameterized with guidance from hydrodynamic models for central and semicentral Au-Au collisions at RHIC, implementing empirical values (\textit{i.e.}, adjusted to experiment) for bulk matter properties such as total entropy, radial and elliptic flow. The elliptic flow is parameterized in terms of a flow profile, whose direction at each transverse position is perpendicular to confocal elliptic isobars. Its magnitude is chosen to increase linearly with the distance from the center, with an average boundary value of $\beta_0=0.55c$ at the end of the mixed phase of the fireball evolution, \textit{i.e.}, at the hadronization time.
The acceleration is adjusted so that the fireball at hadronization is approximately circular, with bulk $v_{2,q} \simeq 5.5\%$. This model has been employed before in the context of heavy-quark (HQ), \textit{i.e.}, charm and bottom spectra at RHIC~\cite{vanHees:2005wb}, and good agreement with hydrodynamic simulations has been found~\cite{Moore:2004tg} for the same HQ diffusion coefficient. HQ observables, which at present are semileptonic single-electron decay spectra~\cite{Adare:2006nq,Abelev:2006db}, exhibit an unexpectedly large suppression and elliptic flow which cannot be understood within perturbative QCD (pQCD) including both radiative and elastic scattering. The key microscopic ingredients in Ref.~\cite{vanHees:2005wb} are resonant heavy-light quark interactions mediated via effective (broad) $D$ and $B$ mesons in the QGP~\cite{vanHees:2004gq}, inspired by the findings of thermal lattice QCD. In connection with a ``conventional'' coalescence afterburner~\cite{Greco:2003vf} at $T_c$, the predictions for single-electron suppression and $v_2$ turned out to be in fair agreement with data~\cite{Adare:2006nq,Abelev:2006db}. In more recent work~\cite{vanHees:2007me}, the effective interactions have been replaced by in-medium $T$-matrices based on finite-temperature HQ potentials extracted from lattice QCD; these calculations not only confirmed the interaction strength generated by heavy-light resonances in an essentially parameter-free way, but also identified pre-hadronic meson and diquark channels as the most relevant ones. This, in turn, provides a direct link between two main discoveries at RHIC, namely the strongly interacting nature of the sQGP and quark coalescence from a collective partonic source. In the present paper, we build upon the above findings by extending the Fokker-Planck approach to strange ($s$) quarks. While its applicability criterion, $m_t\gg q \sim T$ ($m_t=\sqrt{m^2+p_t^2}$: transverse mass, $q$: momentum transfer in a typical scattering), seems to be only marginally satisfied for momenta $p_t\lesssim 1\, \mathrm{GeV}$ (at least in the early phases of the QGP evolution), we note that most of the fireball evolution occurs for temperatures close to $T_c$. At higher $p_t$ (or $m_t$) one enters the kinetic regime where the Fokker-Planck treatment becomes reliable again. With these limits properly satisfied, one may hope that the Langevin simulations also accomplish a reasonable description of the low-$p_t$ regime ($p_t \lesssim 1\,\mathrm{GeV}$) for strange quarks, including realistic space-momentum correlations, which is one of the main objectives in our work. It remains to specify the interaction strength of the strange- and charm-quark species. Here we take further guidance from phenomenology by requiring that the final quark $v_2(p_t)$ exhibits the characteristic saturation (or maximum) value of $\sim$$7.5\%$. Our baseline interaction for the stochastic Langevin force is elastic pQCD scattering with a rather large value of $\alpha_s=0.4$ (which can be thought of as containing radiative and/or parts of non-perturbative contributions). However, additional non-perturbative interactions are necessary to achieve a sufficiently large $v_2$. As in Refs.~\cite{vanHees:2004gq,vanHees:2005wb} we associate these with mesonic resonance states with an interaction strength controlled by the resonance width (with a larger width implying stronger coupling).
For $s$ quarks ($m_s=0.45\, \mathrm{GeV}$) the ``heavy-light'' resonances require a width of $\Gamma_{s\bar q} \simeq 0.3\, \mathrm{GeV}$, and $\Gamma_{c\bar q} \simeq 0.6\, \mathrm{GeV}$ for $c$ quarks ($m_c=1.5\, \mathrm{GeV}$), which is compatible with Refs.~\cite{vanHees:2004gq,vanHees:2005wb}\footnote{We recall~\cite{Ravagli:2007xx} that the only requirement on the quark and meson masses is that the latter are above the two-quark threshold; within this restriction variations in the mass and (positive) $Q$ values have little effect on the recombination process. In fact, as we will see below, CQNS emerges approximately independent of the quark mass.}. This hierarchy is qualitatively consistent with the general expectation that resonance/bound-state formation is suppressed with decreasing constituent mass (and is also borne out by the microscopic $T$-matrix calculations for $c$ and $b$ quarks in Ref.~\cite{vanHees:2007me}). Finally, we have to specify the initial quark distributions. For $c$ quarks we use the initial spectra as constructed in Ref.~\cite{vanHees:2005wb}, so as to reproduce $D$-meson and semileptonic electron spectra in $p$-$p$ and d-Au collisions. A similar procedure is adopted for strange quarks: we parameterize the quark spectra as a superposition of exponential and power-law spectra in a way that experimental kaon spectra in $200\,\mathrm{GeV}$ $p$-$p$ collisions are properly reproduced (using $\delta$-function fragmentation into kaons at half the parent-quark momentum; as usual in QCMs, the role of gluons is suppressed). In $AA$ collisions, the exponential ``soft'' part is then scaled with the number of participants, $N_{\mathrm{part}}$, and the power-law ``hard'' component with the number of collisions, $N_{\mathrm{coll}}$, for a given centrality. With the interaction strengths and initial conditions fixed, the quark phase-space distributions in semi-/central Au-Au collisions are predicted from the Langevin simulations at the end of the QGP (mixed) phase without further adjustments, and serve as an input for the meson formation processes as described in the previous Section. The framework developed here, \textit{i.e.}, QGP evolution with resonance rescattering and recombination at $T_c$, will be referred to as a ``Resonance Recombination Model'' (RRM). The quark phase-space distributions resulting from the Langevin approach embody strong correlations between spatial and momentum variables. \textit{E.g.}, quarks at high $p_t$ tend to be located in the outer layers of the fireball with a preferential alignment of the momentum and position vector directions (\textit{i.e.}, the quark momentum tends to point ``outward''). Likewise, the collective (radial and elliptic) flow, which implies a well-defined (hydro-like) correlation between the position of the fluid cell and its radial motion, imprints this correlation on the (partially) thermalized components of the Langevin-generated quark spectra. We recall again that the often used factorized implementation of the $v_2$ coefficient in coalescence models completely ignores these rather elementary dependencies. The proper implementation of the differential phase-space information, carried by the quark distribution functions (which is essential for a realistic discussion of hadronic $v_2$-scaling properties), into the hadronization formalism requires a few technical remarks.
Since thermalization in the longitudinal direction is somewhat controversial, we assume the quark distributions to be homogeneous in the spatial $z$-coordinate and flat in rapidity. This leaves four independent transverse variables for each particle, which we choose in azimuthal form, $(p_t,\phi_p,r_t,\phi_r)$, corresponding to the distribution ${\mathrm{d} N_q}/{\mathrm{d}^2 p_t \; \mathrm{d}^2 r_t}$. This 4-D phase space is then divided into finite bins (with a maximum value of $p_t^{\mathrm{max}} \simeq 5 \;\mathrm{GeV}$). For each simulated test quark, its final location and momentum are sorted into this grid. To warrant a (statistically) sufficiently smooth behavior of the computed meson observables, a sample of $\sim$$10^8$ test particles is needed. Finally, an interpolation algorithm has been devised for converting the discretized distribution back into a continuous function, to be plugged into the hadronization formula. The algorithm recovers the periodicity properties of the two angular variables, and converges to an arbitrary sampled function in the limit of a large number of grid points. A suitable grid dimension corresponding to the above variables amounts to, \textit{e.g.}, $(13, 96, 14, 12)$, where the large number of points in $\phi_p$ is dictated by the large sensitivity in the determination of the elliptic flow coefficient, $v_2(p_t)$; a minimal sketch of this discretization and interpolation procedure is given at the end of this paragraph. We furthermore have to specify how to treat partons that escape the fireball prematurely, \textit{i.e.}, before the end of the QGP/mixed phase is reached. Clearly, these partons preferentially carry a high $p_t$, and, upon exiting the fireball, could undergo a hadronization mechanism different from coalescence, such as fragmentation. Since our fireball is isotropic, the transition from the QGP to the vacuum is a sharp one and it would be unrealistic to coalesce the exiting quark with a thermal distribution at a temperature above $T_c$, or at a time much later than the exit time (when the fireball has cooled down to $T_c$). We therefore decide to include in our hadronization framework only those partons which remain inside the fireball throughout the entire QGP evolution. This leads to an underestimation of the high-momentum part of the hadronic spectra. A comprehensive calculation for quantitative comparison to experiment also at high $p_T$ would require the treatment of the exiting partons (hadronized with either fragmentation or coalescence). Similarly, the hadronic $v_2$ we compute only reflects the partons within the QGP fireball at the end of its lifetime (which, however, do include non-thermal components from the Langevin simulation, in addition to the thermalized part of the spectrum). The generated spectrum, which originally represents a probability distribution, requires a suitable normalization. Since the empirical light and strange hadron spectra are consistent with chemical equilibrium close to the expected phase boundary, we assume this to apply at the quark level as well, at the critical temperature $T_c=180 \, \mathrm{MeV}$ of our fireball evolution. The fireball volume has been adjusted to match the total entropy of the fireball to the experimental hadronic final state multiplicities at $T=180\, \mathrm{MeV}$ at given collision centrality, \textit{e.g.}, $V_{\mathrm{FB}}\simeq1200\, \mathrm{fm}^3$ for semicentral Au-Au collisions (note that one fireball covers approximately $1.8$ units in rapidity).
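As an aside, the discretization and interpolation procedure described above may be sketched as follows (an illustration of the idea, not the actual implementation; the radial range and all other specifics beyond the grid dimensions quoted in the text are placeholder assumptions).
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Histogram test-quark coordinates (pt, phi_p, rt, phi_r) on a 4-D grid
# and interpolate back to a continuous density, padding the two angular
# axes cyclically so the interpolant respects their 2*pi periodicity.

bins  = (13, 96, 14, 12)                        # (pt, phi_p, rt, phi_r)
edges = [np.linspace(0, 5.0, bins[0] + 1),      # pt [GeV], pt_max = 5 GeV
         np.linspace(0, 2*np.pi, bins[1] + 1),  # phi_p
         np.linspace(0, 10.0, bins[2] + 1),     # rt [fm] (placeholder)
         np.linspace(0, 2*np.pi, bins[3] + 1)]  # phi_r

def bin_particles(samples):
    """samples: (N, 4) array of final (pt, phi_p, rt, phi_r) per quark."""
    H, _ = np.histogramdd(samples, bins=edges)
    return H / samples.shape[0]                 # probability per cell

def continuous_density(H):
    """Linear interpolation of the binned density, periodic in the angles."""
    c = [0.5*(e[1:] + e[:-1]) for e in edges]   # cell centers
    H = np.concatenate([H[:, -1:], H, H[:, :1]], axis=1)      # wrap phi_p
    H = np.concatenate([H[..., -1:], H, H[..., :1]], axis=3)  # wrap phi_r
    for ax, nb in ((1, bins[1]), (3, bins[3])):
        d = 2*np.pi/nb
        c[ax] = np.concatenate([[c[ax][0] - d], c[ax], [c[ax][-1] + d]])
    return RegularGridInterpolator(tuple(c), H, bounds_error=False,
                                   fill_value=0.0)
\end{verbatim}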
For charm quarks we augment the chemical equilibrium number by a fugacity factor, $\gamma_c \simeq 5$~\cite{Grandchamp:2003uw}, to match their number to the expected hard production in primordial nucleon-nucleon collisions (binary collision scaling; in Ref.~\cite{Ravagli:2007xx} $\gamma_c\simeq 8$ at $T_c=170 \, \mathrm{MeV}$ leads to the same number of $c\bar c$ pairs in the fireball). We note, however, that the overall normalization has little impact on our main considerations of space-momentum correlations and $v_2$ systematics. Let us finally specify the assumptions on the rapidity ($y$) distributions. As mentioned above, for both charm and strange quarks we employ a step function as \begin{equation} \frac{\mathrm{d} N}{\mathrm{d} y}=\left . \frac{\mathrm{d} N}{\mathrm{d} y} \right|_{y=0} \theta \left ( \frac{\Delta y}{2}-|y| \right) \ , \end{equation} where the parameter $\Delta y$ depends on the quark mass and has been adjusted to recover approximately the full width at half maximum of a thermal $y$ spectrum (amounting to $\Delta y(s)=1.3$ and $\Delta y(c)=0.8$ in connection with the quark masses quoted above). \begin{figure}[!t] \includegraphics[width=0.4\textwidth]{pt_charm_blastwave180_beta0.55.eps} \hspace{1.0cm} \includegraphics[width=0.4\textwidth]{pt_strange_blastwave180_beta0.55.eps} \caption{(Color online) Quark $p_t$ distributions resulting from relativistic Langevin simulations of an expanding elliptic QGP fireball at the end of a QGP (mixed) phase at a temperature of $T=180\, \mathrm{MeV}$ and average radial surface expansion velocity, $\erw{\beta_0} \simeq 0.48c$. The numerically computed spectra (long-dashed lines) are compared to the initial spectra (solid lines) and to a blast-wave parameterization for particles with the same mass and the same fireball conditions ($T=180\, \mathrm{MeV}$ and the same flow field as used for the background medium in the Langevin simulation; short-dashed lines). Left and right panels correspond to charm ($m_c=1.5\, \mathrm{GeV}$) and strange ($m_s=0.45\, \mathrm{GeV}$) quarks, respectively.} \label{fig_ptquark} \end{figure} Fig.~\ref{fig_ptquark} summarizes the results of the Langevin simulations for the $p_t$ probability distributions (integrated over spatial coordinates) for $c$ and $s$ quarks, compared to (normalized) blast-wave spectra for $T=180 \, \mathrm{MeV}$, using the same flow field as in the Langevin simulation. The average surface-expansion velocity is $\erw{\beta_0} \simeq 0.48c$. At low $p_t$ the spectra approach the equilibrium limit, as is to be expected since the background medium for the Langevin simulation is assumed to be fully equilibrated at all $p_t$, as in a hydrodynamic calculation, and the stochastic process has been realized so as to guarantee the correct equilibrium limit (including the adjustment of the (longitudinal) diffusion coefficient according to Einstein's fluctuation-dissipation relation~\cite{vanHees:2005wb}). As discussed above, the interaction strength has been chosen to recover the empirically observed maximum elliptic flow of $v_{2,q}^{\mathrm{max}} \simeq 7$-$8\%$. Note, however, that the assumption of a fully thermalized background medium implies rather large $v_2$ values at high $p_t$, while the phase-space density of thermal partons is rather small. A full treatment of this problem would require solving a selfconsistency problem, where the parton spectra in the background fireball evolution also exhibit a saturation of the elliptic flow, $v_2(p_t)$. This is beyond the scope of the present paper.
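For definiteness, a single (pre-point) update step of the type underlying these simulations may be sketched as follows; the constant drag coefficient is a mere placeholder for the pQCD+resonance transport coefficients discussed above, the noise is taken isotropic for simplicity (the actual simulation fixes the longitudinal diffusion coefficient via Einstein's relation~\cite{vanHees:2005wb}), and discretization-scheme subtleties are glossed over.
\begin{verbatim}
import numpy as np

# One Langevin step for a test quark in the local fluid rest frame:
#   dp = -A p dt + sqrt(2 D dt) xi,   D = A E T  (Einstein relation)
# Units: momenta/energies in GeV, T in GeV, A in c/fm, dt in fm/c.

def langevin_step(p, m, T, A, dt, rng):
    """Advance the quark 3-momentum p by one time step dt."""
    E = np.sqrt(m**2 + p @ p)          # quark energy
    D = A * E * T                      # momentum-diffusion coefficient
    xi = rng.normal(size=3)            # Gaussian white noise
    return p - A * p * dt + np.sqrt(2.0 * D * dt) * xi

# example: evolve a strange quark (placeholder drag A = 0.2 c/fm)
rng = np.random.default_rng(0)
p = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    p = langevin_step(p, m=0.45, T=0.180, A=0.2, dt=0.02, rng=rng)
\end{verbatim}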
\section{Meson $p_T$ Spectra and $v_2$ Systematics} \label{sec_meson} We now combine the ingredients of our resonance recombination model (RRM) by implementing the quark spectra computed from Langevin simulations in the previous Section (\ref{sec_parton}) with the Boltzmann-based hadronization formalism of Sec.~\ref{sec_boltz}, evaluated in the stationary (equilibrium) limit according to Eq.~(\ref{eqboltzappr}). The key issue we address is how the properties of the input (non-observable) quark spectra reflect themselves in the (observable) meson spectra, in particular whether the space-momentum correlations generated in the Langevin simulations can be consistent with the empirically observed CQNS, which, in turn, opens a window on the quark spectra at hadronization. It remains to specify the masses and widths of hadrons in the recombination process. In line with our restriction to mesons located above the quark-antiquark threshold (due to the limitation to $2 \to 1$ processes) we consider $s$-$\bar{s}$, $c$-$\bar{c}$ coalescence with resonance masses corresponding to the vacuum values for $\phi$ ($1.02\,\mathrm{GeV}$) and $J/\psi$ ($3.1\,\mathrm{GeV}$) mesons; in connection with the quark masses as given above this implies similar $Q$ values of $0.1$-$0.12 \, \mathrm{GeV}$. The (total) meson widths are chosen of comparable magnitude, \textit{i.e.}, $\Gamma_{\phi}=0.05 \, \mathrm{GeV}$ and $\Gamma_{J/\psi}=0.1\,\mathrm{GeV}$. As elaborated in Ref.~\cite{Ravagli:2007xx}, the numerical results, especially for the meson $v_2$, are rather insensitive to variations in the meson width as long as $Q$ is positive and substantially smaller (not smaller) than the resonance mass (width). In Fig.~\ref{ptmeson} we display our RRM results for $p_T$ spectra of $J/\psi$ and $\phi$ mesons, including available RHIC data. Overall, the spectra largely agree with those computed in our previous work (Fig.~1 in Ref.~\cite{Ravagli:2007xx}), where the input spectra were solely based on a blast-wave parameterization. This is not surprising, since the quark spectra employed in the present work show a rather large degree of thermalization, up to momenta of $p_t=1.5$-$2 \, \mathrm{GeV}$ for strange and charm quarks, cf.~Fig.~\ref{fig_ptquark}. Consequently, the $\phi$-meson spectra shown here are somewhat harder than in Ref.~\cite{Ravagli:2007xx} beyond $p_T\simeq 3\, \mathrm{GeV}$ due to the presence of the kinetic (hard) components in the quark spectra resulting from the Langevin evolution. The computed spectra for the $J/\psi$ are quite reminiscent of earlier blast-wave based results~\cite{Greco:2003vf,Andronic:2006ky,Zhao:2007hh}. Note, however, that in Ref.~\cite{Zhao:2007hh} the recombination (blast wave) contribution only amounts to about $50\%$ of the total $J/\psi$ yield, significantly less than in the present paper (this is sensitive to the total open-charm cross section, which is not very well determined yet).
\begin{figure}[!t] \includegraphics[width=0.4\textwidth]{pt_Jpsi_gam600.eps} \hspace{1.0cm} \includegraphics[width=0.4\textwidth]{pt_phi_newsampling_gam150_dy1.3.eps} \caption{(Color online) Meson $p_T$ spectra from quark-antiquark coalescence in central $\sqrt{s_{NN}}=200 \, \mathrm{GeV}$ Au-Au collisions computed within the resonance recombination model for $J/\psi$ (left panel) and $\phi$ (right panel); experimental data are from Refs.~\cite{Adare:2006ns} and \cite{Adler:2004hv,Adams:2004ux,Abelev:2007rw}, respectively.} \label{ptmeson} \end{figure} The RRM results for the meson $v_2(p_T)$ are summarized in Fig.~\ref{v2meson} (solid lines) and compared to the underlying quark $v_2$ \emph{scaled} to meson variables in the conventional (empirical) way as $v_{2}^{\mathrm{scaled}}(p_T)\equiv 2v_{2,q}(p_T/2)$. We find that for both the $J/\psi$ and $\phi$ the agreement is rather impressive, within a few percent relative deviation. The wiggles at the quark level are, to a large extent, driven by the finite grid sampling due to a step width of $400 \, \mathrm{MeV}$ (the statistical error inherent to the Langevin simulation is smaller than that, using $10^8$ test particles). The convolution of two quark distributions results in much smoother curves at the meson level (due to the fitting procedure of the quark input). We are thus able to approximately recover CQNS in a microscopic calculation with the full information on space-momentum correlations, characteristic for hydrodynamic expansion at low $p_t$ and a kinetic regime at higher $p_t$. \begin{figure}[!t] \includegraphics[width=0.4\textwidth]{v2_JPsi_noescape_hvh600_100M.eps} \hspace{1.0cm} \includegraphics[width=0.4\textwidth]{v2_phi_noescape_hvh150_100M.eps} \caption{(Color online) Scaled quark $v_2^{\rm scaled}$ (dashed lines) and meson $v_{2,M}$ (solid lines) coefficients as a function of meson $p_T$ for $J/\psi$ ($Q=0.1\, \mathrm{GeV}$, $\Gamma=0.1\, \mathrm{GeV}$; left panel) and $\phi$ ($Q=0.12\, \mathrm{GeV}$, $\Gamma=0.05\, \mathrm{GeV}$; right panel) mesons.} \label{v2meson} \end{figure} Another potential source for scaling violations is given by flavor (or mass) dependencies at the quark level (rather than in the coalescence process). In Fig.~\ref{v2pT} we compare the elliptic flow of different quarks (left panel) and mesons (right panel) with each other. The left panel confirms that the $c$-quark $v_2$ deviates significantly (up to $\sim$$20\%$ at low $p_t$) from the strange-quark $v_2$. At high $p_t$, the strange-quark $v_2$ is a bit high, which, in principle, could be readjusted by a somewhat reduced strength of the resonance interactions in the Langevin simulations. At the meson level, the differences are similar. We have verified that comparable deviations persist when reducing the strange quark $v_2$. \begin{figure}[!t] \includegraphics[width=0.4\textwidth]{v2_quarks_pT_100M.eps} \hspace{1.5cm} \includegraphics[width=0.4\textwidth]{v2_mesons_pT_100M.eps} \caption{(Color online) Elliptic flow coefficient, $v_2$, as a function of transverse momentum for $c$ and $s$ quarks (left panel), as well as $J/\psi$ and $\phi$ mesons resulting from quark recombination (right panel), in semicentral $\sqrt{s_{NN}}=200\, \mathrm{GeV}$ Au-Au collisions.} \label{v2pT} \end{figure} Finally, we address the question of CQNS with respect to transverse kinetic energy, $K_T=m_T-m$, rather than transverse momentum, $p_T$.
Such a scaling has recently been highlighted by the PHENIX~\cite{Adare:2006ti} and STAR~\cite{Abelev:2007qg} collaborations and seems to be very well satisfied by all available RHIC data, across centralities, collision energies and collision systems, after a geometric correction for the nuclear overlap. $K_T$ scaling has also been found to result from certain classes of hydrodynamic solutions~\cite{Csanad:2005gv}, and has therefore been argued to reflect a collectively expanding thermalized system of partons~\cite{Lacey:2006pn}. From the point of view of quark coalescence, the problem of reconciling CQNS with quark distribution functions that carry space-momentum correlations (as implied by hydrodynamic expansion) persists. In Fig.~\ref{v2KET} we display the RRM results for $v_{2,q}$ and $v_{2,M}$ for the two different flavors as a function of $K_{t,T}$. We find that the quark input distributions from the QGP Langevin simulations indeed share a rather universal behavior up to $K_t \simeq3\,\mathrm{GeV}$, encompassing both the quasi-equilibrium regime at low energies and the kinetic regime at intermediate energies characterized by a leveling off at $K_t\gtrsim 1\, \mathrm{GeV}$, cf.~left panel of Fig.~\ref{v2KET}. We recall that the only adjusted input to this result is the common maximum value of the individual quark elliptic flow at about $7$-$8\%$ (as suggested by the empirical CQNS deduced from experiment), controlled by the nonperturbative interaction strength in the stochastic Langevin force (again, a fine tuning for the $s$-quark would improve the agreement at higher $K_t$). The approximate universality at the quark level is nicely preserved at the meson level as a result of our Boltzmann-based recombination formalism, see right panel of Fig.~\ref{v2KET}. This is quite a remarkable result in view of the underlying space-momentum correlations in our approach, and it has not been achieved before in this form. \begin{center} \begin{figure}[!t] \includegraphics[width=0.4\textwidth]{v2_quarks_KET_100M.eps} \hspace{1.5cm} \includegraphics[width=0.4\textwidth]{v2_mesons_KET_100M.eps} \caption{(Color online) Elliptic flow coefficient, $v_2$, as a function of transverse kinetic energy, $K_{t,T}$, for strange and charm quarks (left panel), as well as for $\phi$ and $J/\psi$ mesons (right panel).} \label{v2KET} \end{figure} \end{center} \section{Conclusions} \label{sec_concl} In the present paper we have extended our previously formulated quark coalescence formalism, utilizing resonance interactions within a Boltzmann equation, by implementing microscopic quark phase-space distributions generated via Langevin simulations of an expanding thermal QGP fireball for Au-Au collisions at RHIC. In this way we could combine the merits of our recombination approach (energy conservation and a proper equilibrium limit) with those of realistic quark distributions, which in particular encode the transition from a thermal regime at low $p_t$ to a kinetic one at intermediate $p_t$. The latter feature is especially important as it produces the leveling-off of the elliptic flow, a key characteristic of observed hadron spectra and a crucial prerequisite to test any kind of quark scaling behavior. The (constituent) quark-mass and meson parameters were fixed at rather standard values, and we have constrained ourselves to mesons which are reliably calculable in our $2\to1$ recombination setup (\textit{i.e.}, for positive $Q$ values).
The only real adjustment concerned the interaction strength in the Langevin process, so as to reproduce the empirical maximum value for the quark $v_2$ (with the QGP fireball parameters tuned to empirical values of radial and elliptic flow, as in earlier applications to, \textit{e.g.}, heavy-quark observables). These interactions have been modeled via (meson) resonances in the QGP, which we identified with the states formed in the coalescence process at $T_c$, leading to the notion of a ``Resonance Recombination Model'' (RRM). Since the Fokker-Planck approach as an expansion of the Boltzmann equation is strictly valid for sufficiently massive and/or high-momentum particles, we restricted ourselves to ``heavy'' flavors, i.e., charm and strange quarks. At low $p_t$, the latter are at the borderline of applicability of a Fokker-Planck framework. Our main finding is that within this rather generic set-up, largely based on first principles augmented by a concrete realization of the strongly interacting QGP, the constituent quark scaling of the meson elliptic flow emerges rather naturally, \emph{including} space-momentum correlations characteristic for a collectively expanding source. The scaling holds for individual mesons, but appears to be rather universal in quark and meson flavor (mass), especially when applied in (transverse) kinetic energy rather than momentum, which is in line with recent experimental findings. By overcoming some of the limitations of previous (more schematic) coalescence models, and by achieving the first robust implementation of realistic (microscopically computed) phase-space distribution functions of quarks, our formalism could provide a useful tool to better understand systematics of RHIC data, most notably the interplay of a thermal and kinetic regime in connection with phase-space properties of the partonic fireball as viewed through the hadronization process. Clearly, a formidable list of open issues persists, including the extension to light quarks, the role of gluons and of deeply bound hadronic states (possibly requiring additional formation processes), more realistic spectral functions of mesons and quarks and their interactions (both around $T_c$ and above), a selfconsistent treatment of thermal and kinetic components (possibly requiring full parton transport), a systematic classification of viable parton phase-space distributions, hadronic reinteractions, etc. Progress has already been made on a number of these aspects, but a comprehensive approach remains a challenging task. \begin{acknowledgments} We thank T.~Hahn for insightful clarifications about the CUBA multi-dimensional integration package~\cite{Hahn:2004fe} which has been used in this work, and R.~J.~Fries for valuable discussions. This work was supported in part by a U.S. National Science Foundation CAREER award under grant no. PHY-0449489 and by the A.-v.-Humboldt Foundation (through a Bessel Research Award). \end{acknowledgments}
\section{Introduction} In array processing, the covariance matrix $\mathbf{R}$ of the data is widely used in the main applications, such as filtering~\cite{ReMaBr74,Wa94}, radar/sonar detection~\cite{ScFr94} or localization~\cite{Sc86,RoKa89}. However, when the disturbance in the data is composed of the sum of a Low Rank~(LR) correlated noise and a White Gaussian Noise~(WGN), the covariance matrix is often replaced by the projector onto the LR noise subspace $\mathbf{\Pi}_{\mathrm{c}}$~\cite{KiTu94,Ha96,GiJo02,RaLiGe04}. In practice, the projector onto the LR noise subspace (and the covariance matrix) is generally unknown and an estimate is consequently required to perform the different processing steps. This estimation procedure is based on the so-called secondary data, assumed to be independent and to share the same distribution. Then, the true projector is replaced by the estimated one in order to obtain an adaptive processing. An important issue is then to derive the theoretical performance of the adaptive processing as a function of the number of secondary data $K$. The processing based on the covariance matrix has been widely studied and has led to many theoretical results in filtering~\cite{ReMaBr74} and detection~\cite{Ke86,RoFuKeNi92,KrScMc01,BeSc06}. For example, for classical adaptive processing, $K=2m$ secondary data (where $m$ is the data size) are required to ensure good performance of the adaptive filtering, i.e. a 3dB loss of the output Signal to Interference plus Noise Ratio (SINR) compared to optimal filtering~\cite{ReMaBr74}. For LR processing, some results have been obtained, especially in filtering~\cite{KiTu94,Ha97,PeHaAyGoRe00,GiFo13} and localization~\cite{KrFoPr92}. Similarly, in LR filtering, the number $K$ of secondary data required to ensure good performance of the adaptive filtering is equal to $2r$ (where $r$ is the rank of the LR noise subspace)~\cite{KiTu94,Ha97}. These last results are obtained from the theoretical study of the SINR loss. More precisely, in~\cite{Ha97,GiFo13}, the derivation of the theoretical results is based on the hypothesis that the steering vector is orthogonal to the LR noise subspace. Nevertheless, even if the result seems to be close to the simulated one when the hypothesis is no longer valid~\cite{GiFoPaOv14}, it is impossible with the traditional techniques of~\cite{Ha97,GiFo13} to obtain a theoretical performance as a function of the distance between the steering vector and the LR noise subspace. Since, in practice, this dependence is essential to predict the performance of the adaptive filtering, we propose in this paper to derive the theoretical SINR loss, for a disturbance composed of a LR noise and a WGN, as a function of $K$ and the distance between the steering vector and the LR noise subspace. The proposed approach is based on the study of the SINR loss structure. The SINR loss (resp. LR SINR loss) is composed of a \textit{simple} Quadratic Form (QF) in the numerator, $\bm{s}_1^H \hat{\mathbf{R}}^{-1}\bm{s}_2$ (resp. $\bm{s}_1^H \hat{\mathbf{\Pi}}^{\bot}_{\mathrm{c}}\bm{s}_2$), and a \textit{structured} QF in the denominator, $\bm{s}_1^H \hat{\mathbf{R}}^{-1}\mathbf{R}\hat{\mathbf{R}}^{-1}\bm{s}_2$ (resp. $\bm{s}_1^H \hat{\mathbf{\Pi}}^{\bot}_{\mathrm{c}}\mathbf{R}\hat{\mathbf{\Pi}}^{\bot}_{\mathrm{c}}\bm{s}_2$). In recent years, the \textit{simple} QFs (numerator) have been broadly studied~\cite{Me08,Me08bis,VaLoMe12,CoHa13} using Random Matrix Theory (RMT) tools, contrary to the \textit{structured} QFs (denominator).
RMT tools have also been used in array processing to improve the MUSIC algorithm~\cite{MeLa08,CoPaSi14} and in matched subspace detection~\cite{NaSi10,AsNa13} where the rank $r$ is unknown. The principle is to examine the spectral behavior of $\mathbf{\hat{R}}$ through RMT in order to obtain convergences, performances and asymptotic distributions when $K$ tends to infinity and when both the data size $m$ and $K$ tend to infinity at the same rate, i.e. $m/K\rightarrow c\in\left]0,+\infty\right)$, for different models of $\mathbf{\hat{R}}$ of the observed data as in~\cite{Me08,Me08bis,MeLa08},~\cite{CoHa13} and~\cite{VaLoMe12}. Therefore, inspired by these works, we propose in this paper to study the convergences of the \textit{structured} QFs $\bm{s}_1^H \hat{\mathbf{R}}^{-1}\mathbf{R}\hat{\mathbf{R}}^{-1}\bm{s}_2$ (resp. $\bm{s}_1^H \hat{\mathbf{\Pi}}^{\bot}_{\mathrm{c}}\mathbf{R}\hat{\mathbf{\Pi}}^{\bot}_{\mathrm{c}}\bm{s}_2$): 1) when $K\rightarrow\infty$ with a fixed $m$ and 2) when $m,K\rightarrow\infty$ at the same rate, under the most appropriate model for our data and with the rank assumed to be known. From~\cite{CoPaGiLe14,CoPaGiLe15}, the \textit{spiked} model has proved to be the most appropriate one to our knowledge. This model, introduced by~\cite{Jo01} (also studied in~\cite{BeNa11,Pa07} from an eigenvector point of view), considers that the multiplicity of the eigenvalues corresponding to the signal (the LR noise in our case) is fixed for all $m$, and it leads to the SPIKE-MUSIC estimator~\cite{HaLoMeNaVa13} of $\bm{s}_1^H\bm{\hat{\Pi}}\bm{s}_2$. The new results are then validated through numerical simulations. From these new theoretical convergences, the paper derives the convergence of the SINR loss for both adaptive filters (the classical and the LR one). The new theoretical SINR losses depend on the number of secondary data $K$ but also on the distance between the steering vector and the LR noise subspace. This work is partially related to those of~\cite{TaTaPe10,TaTaPe13} and~\cite{YuRuMc13}, which use RMT tools to derive the theoretical SINR loss in a full rank context (previously referred to as classical). Finally, these theoretical SINR losses are validated in a jamming application context where the purpose is to detect a target with a Uniform Linear Antenna (ULA) composed of $m$ sensors despite the presence of jamming. The response of the jamming is composed of signals similar to the target response. This problem is very similar to the well-known Space Time Adaptive Processing (STAP) introduced in~\cite{Wa94}. Results show the interest of our approach with respect to other theoretical results~\cite{KiTu94,Ha97,PeHaAyGoRe00,GiFo13}, in particular when the target is close to the jammer. The paper is organized as follows. Section~\ref{sec:pb_statement} presents the received data model, the adaptive filters and the corresponding SINR losses. Section~\ref{sec:RMT} summarizes the existing studies on the \textit{simple} QFs $\bm{s}_1^H\mathbf{\hat{R}}^{-1}\bm{s}_2$ and $\bm{s}_1^H\mathbf{\hat{\Pi}}\bm{s}_2$, and exposes the covariance matrix model, the \textit{spiked} model. Section~\ref{sec:NewCVResults} gives the theoretical contribution of the paper with the convergences of the \textit{structured} QFs $\bm{s}_1^H\mathbf{\hat{\Pi}}_{\mathrm{c}}^\bot\mathbf{B}\mathbf{\hat{\Pi}}_{\mathrm{c}}^\bot\bm{s}_2$ and $\bm{s}_1^H\mathbf{\hat{\Pi}}_{\mathrm{c}}^\bot\mathbf{R}\mathbf{\hat{\Pi}}_{\mathrm{c}}^\bot\bm{s}_2$ and the convergences of the SINR losses.
The results are finally applied to a jamming application in Section~\ref{sec:simu}.\\ \indent \textit{Notations:} The following conventions are adopted. An italic letter stands for a scalar quantity, boldface lowercase (uppercase) characters stand for vectors (matrices) and $(.)^H$ stands for the conjugate transpose. $\mathbf{I}_{N}$ is the $N\times N$ identity matrix, $\mathrm{tr}(.)$ denotes the trace operator and $\mathrm{diag}(.)$ denotes the diagonalization operator, such that $(\mathbf{A})_{i,i}=(\mathrm{diag}(\mathbf{a}))_{i,i}=(\mathbf{a})_{i}$ and $(\mathbf{A})_{i,j}=0$ if $i\neq j$. $\#\left\lbrace\mathcal{A}\right\rbrace $ denotes the cardinality of the set $\mathcal{A}$. $[\![a,b]\!]$ is the set defined by $\left\lbrace x\in\mathbb{Z}:a\leqslant x\leqslant b,\forall(a,b)\in\mathbb{Z}^2\right\rbrace$. $\boldsymbol{\mathbbm{O}}_{n\times N}$ is an $n\times N$ matrix full of 0. The abbreviations iid and a.s. stand for \textit{independent and identically distributed} and \textit{almost surely} respectively. \section{Problem statement}\label{sec:pb_statement} \indent The aim is to filter the received observation vector $\boldsymbol{x}\in\mathbb{C}^{m\times 1}$ in order to whiten the noise without mitigating a possible complex signal of interest $\boldsymbol{d}$ (typically a target in radar processing). In this paper, $\boldsymbol{d}$ is a target response and is equal to $\alpha\boldsymbol{a}(\bm{\Theta})$, where $\alpha$ is an unknown complex deterministic parameter (generally corresponding to the target amplitude), $\boldsymbol{a}(\bm{\Theta})$ is the steering vector and $\bm{\Theta}$ is an unknown deterministic vector containing the different parameters of the target (e.g. the localization, the velocity, the Angle of Arrival (AoA), etc.). In the remainder of the article, in order to simplify the notations, $\bm{\Theta}$ will be omitted from the steering vector, which will simply be denoted $\boldsymbol{a}$. When necessary, the original notation will be used.\\ \indent This section first introduces the data model. Then, the filters, the adaptive filters and the corresponding SINR loss, the quantity characterizing their performance, will be defined. \subsection{Data model}\label{subsec:data_model} \indent The observation vector can be written as: \begin{eqnarray} \boldsymbol{x}=\boldsymbol{d}+\boldsymbol{c}+\boldsymbol{b} \label{eq:probdetectLR} \end{eqnarray} \noindent where $\boldsymbol{c}+\boldsymbol{b}$ is the noise that has to be whitened. $\boldsymbol{b}\in\mathbb{C}^{m\times 1}\sim\mathcal{CN}(\mathbf{0},\sigma^2\mathbf{I}_m)$ is an Additive WGN (AWGN) and $\boldsymbol{c}\in\mathbb{C}^{m\times 1}$ is a LR Gaussian noise modeled by a zero-mean complex Gaussian vector with a normalized covariance matrix $\mathbf{C}$ ($\mathrm{tr}(\mathbf{C}) = m$), i.e. $\boldsymbol{c}\sim \mathcal{CN}(\mathbf{0},\mathbf{C})$. Consequently, the covariance matrix of the noise $\boldsymbol{c}+\boldsymbol{b}$ can be written as $\mathbf{R}=\mathbf{C}+\sigma^2\mathbf{I}_m\in\mathbb{C}^{m\times m}$. Moreover, considering a LR Gaussian noise, one has $\mathrm{rank}\left( \mathbf{C} \right) = r \ll m $ and hence the eigendecomposition of $\mathbf{C}$ is: \begin{eqnarray} \mathbf{C} = \sum_{i=1}^r \gamma_i \boldsymbol{u}_i\boldsymbol{u}_i^H \label{eq:SVDC} \end{eqnarray} where $\gamma_i$ and $\boldsymbol{u}_i$, $i\in[\![1,r]\!]$, are respectively the non-zero eigenvalues and the associated eigenvectors of $\mathbf{C}$, unknown in practice.
This leads to: \begin{eqnarray} \mathbf{R}=\sum_{i=1}^m\lambda_i\boldsymbol{u}_i\boldsymbol{u}_i^{H} \label{eq:R} \end{eqnarray} \noindent where $\lambda_i$ and $\boldsymbol{u}_i$, $i\in[\![1,m]\!]$, are respectively the eigenvalues and the associated eigenvectors of $\mathbf{R}$ with $\lambda_1 = \gamma_1+\sigma^2>\cdots>\lambda_r=\gamma_r+\sigma^2>\lambda_{r+1}=\cdots=\lambda_m=\sigma^2$. Then, the projector onto the LR Gaussian noise subspace $\boldsymbol{\Pi}_\mathrm{c}$ and the projector onto the subspace orthogonal to the LR Gaussian noise subspace $\boldsymbol{\Pi}_\mathrm{c}^{\bot}$ are defined as follows: \begin{eqnarray} \begin{cases} \boldsymbol{\Pi}_\mathrm{c} = \sum_{i=1}^r\boldsymbol{u}_i\boldsymbol{u}_i^{H}\\ \boldsymbol{\Pi}_\mathrm{c}^{\bot} = \mathbf{I}_{m} - \boldsymbol{\Pi}_\mathrm{c}=\sum_{i=r+1}^{m} {\boldsymbol{u}_i\boldsymbol{u}_i^{H}} \end{cases}\label{eq:defprojectors} \end{eqnarray} \indent However, in practice, the covariance matrix $\mathbf{R}$ of the noise is unknown. Consequently, it is traditionally estimated with the Sample Covariance Matrix (SCM), which is computed from $K$ iid secondary data $\boldsymbol{x}_k=\boldsymbol{c}_k+\boldsymbol{b}_k$, $k\in[\![1,K]\!]$, and can be written as: \begin{eqnarray} \hat{\mathbf{R}}=\frac{1}{K} \sum_{k=1}^{K} \boldsymbol{x}_k \boldsymbol{x}_k^{H}= \sum_{i=1}^m \hat{\lambda}_i \hat{\boldsymbol{u}}_i \hat{\boldsymbol{u}}_i^H \label{eq:Rscm} \end{eqnarray} \noindent where $\hat{\lambda}_i$ and $\hat{\boldsymbol{u}}_i$, $i\in[\![1,m]\!]$, are respectively the eigenvalues and the eigenvectors of $\hat{\mathbf{R}}$ with $\hat{\lambda}_1 \geqslant\hat{\lambda}_2\geqslant\cdots\geqslant\hat{\lambda}_m$, $\boldsymbol{c}_k\sim\mathcal{CN}(\mathbf{0},\mathbf{C})$ and $\boldsymbol{b}_k\sim\mathcal{CN}(\mathbf{0},\sigma^2\mathbf{I}_m)$. Finally, the projectors estimated from the SCM are: \begin{eqnarray} \begin{cases} \hat{\boldsymbol{\Pi}}_{\mathrm{c}}=\sum_{i=1}^r {\hat{\boldsymbol{u}}_i\hat{\boldsymbol{u}}_i^{H}} \\ \hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot} =\mathrm{\textbf{I}}_{m} - \hat{\boldsymbol{\Pi}}_{\mathrm{c}}= \sum_{i=r+1}^{m} {\hat{\boldsymbol{u}}_i\hat{\boldsymbol{u}}_i^{H}} \end{cases} \label{eq:PIcSCM} \end{eqnarray}
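\indent For concreteness, the construction above can be mirrored numerically. The following Python/NumPy sketch is purely illustrative (the dimensions and the eigenvalues of $\mathbf{C}$ are arbitrary choices, not the simulation parameters of Sec.~\ref{sec:simu}): it draws $K$ secondary data, forms the SCM of Eq.(\ref{eq:Rscm}) and the estimated projectors of Eq.(\ref{eq:PIcSCM}).
\begin{verbatim}
# Illustrative sketch of the data model: K secondary data x_k = c_k + b_k,
# SCM and estimated projectors. Dimensions/eigenvalues are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
m, K, r, sigma2 = 20, 40, 3, 1.0
gammas = np.array([12.0, 5.0, 3.0])          # tr(C) = 12 + 5 + 3 = 20 = m
U = np.linalg.qr(rng.standard_normal((m, r))
                 + 1j * rng.standard_normal((m, r)))[0]
Csqrt = U * np.sqrt(gammas)                  # C = Csqrt @ Csqrt^H, rank r
R = Csqrt @ Csqrt.conj().T + sigma2 * np.eye(m)   # true covariance, Eq. (R)
c = Csqrt @ (rng.standard_normal((r, K))
             + 1j * rng.standard_normal((r, K))) / np.sqrt(2.0)
b = np.sqrt(sigma2 / 2.0) * (rng.standard_normal((m, K))
                             + 1j * rng.standard_normal((m, K)))
X = c + b                                    # m x K secondary data
R_hat = X @ X.conj().T / K                   # SCM
lam_hat, u_hat = np.linalg.eigh(R_hat)
lam_hat, u_hat = lam_hat[::-1], u_hat[:, ::-1]    # decreasing order
Pi_c_hat = u_hat[:, :r] @ u_hat[:, :r].conj().T   # estimated projector
Pi_c_perp_hat = np.eye(m) - Pi_c_hat
\end{verbatim}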
\subsection{Adaptive filters} \indent A filtering preprocessing is first applied to the observation vector $\boldsymbol{x}$ (Eq.(\ref{eq:probdetectLR})) with the filter $\boldsymbol{w}$ in order to whiten the received signal $p=\boldsymbol{w}^H\boldsymbol{x}$. The filter maximizing the SINR is given by: \begin{eqnarray} \boldsymbol{w}_\mathrm{opt}=\mathbf{R}^{-1}\boldsymbol{a} \label{eq:wopt} \end{eqnarray} \noindent Since, in practice, the covariance matrix $\mathbf{R}$ of the noise is unknown, the estimated optimal filter, or adaptive filter (sub-optimal), is: \begin{eqnarray} \boldsymbol{\hat{w}}=\mathbf{\hat{R}}^{-1}\boldsymbol{a} \label{eq:wSCM} \end{eqnarray} \indent In the case where one wants to benefit from the LR structure of the noise, one should use the optimal LR filter, based on the fact that $\boldsymbol{\Pi}_\mathrm{c}^{\bot}$ is, up to a scale factor, a good approximation of $\mathbf{R}^{-1}$ for a strong LR noise, which is defined by~\cite{KiTu94}: \begin{eqnarray} \boldsymbol{w}_\mathrm{LR}=\boldsymbol{\Pi}_\mathrm{c}^{\bot}\boldsymbol{a} \label{eq:wLRopt} \end{eqnarray} \noindent As, in practice, the projector is not known and is estimated from the SCM, the corresponding adaptive filter (sub-optimal) is: \begin{eqnarray} \boldsymbol{\hat{w}}_\mathrm{LR}=\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\boldsymbol{a} \label{eq:wLRSCM} \end{eqnarray} \subsection{SINR Loss} In order to characterize the performance of the estimated filters, the SINR loss compares the SINR at the output of the filter to the maximum SINR: \small \begin{eqnarray} \hat{\rho}&=&\frac{SINR_{out}}{SINR_{max}}=\frac{\vert\boldsymbol{\hat{w}}^H\boldsymbol{d}\vert^2}{(\boldsymbol{\hat{w}}^H\mathbf{R}\boldsymbol{\hat{w}})(\boldsymbol{d}^H\mathbf{R}^{-1}\boldsymbol{d})}\\ &=&\frac{\vert\boldsymbol{a}^H\mathbf{\hat{R}}^{-1}\boldsymbol{a}\vert^2}{(\boldsymbol{a}^H\mathbf{\hat{R}}^{-1}\mathbf{R}\mathbf{\hat{R}}^{-1}\boldsymbol{a})(\boldsymbol{a}^H\mathbf{R}^{-1}\boldsymbol{a})} \label{eq:SNRLoss_wSCM} \end{eqnarray}\normalsize \noindent If $\boldsymbol{\hat{w}}=\boldsymbol{w}_{\mathrm{opt}}$, the SINR loss is maximum and is equal \mbox{to 1.} When we consider the LR structure of the noise, the theoretical SINR loss can be written as:\small \begin{eqnarray} \rho_{\mathrm{LR}} &=&\frac{\vert\boldsymbol{w}_\mathrm{LR}^H\boldsymbol{d}\vert^2}{(\boldsymbol{w}_\mathrm{LR}^H\mathbf{R}\boldsymbol{w}_\mathrm{LR})(\boldsymbol{d}^H\mathbf{R}^{-1}\boldsymbol{d})}\\ &=&\frac{\vert\boldsymbol{a}^H\boldsymbol{\Pi}_\mathrm{c}^{\bot}\boldsymbol{a}\vert^2}{(\boldsymbol{a}^H\boldsymbol{\Pi}_\mathrm{c}^{\bot}\mathbf{R}\boldsymbol{\Pi}_\mathrm{c}^{\bot}\boldsymbol{a})(\boldsymbol{a}^H\mathbf{R}^{-1}\boldsymbol{a})} \label{eq:SNRLoss_wLRopt} \end{eqnarray}\normalsize \noindent Finally, the SINR loss corresponding to the adaptive filter in Eq.(\ref{eq:wLRSCM}) is defined from Eq.(\ref{eq:SNRLoss_wLRopt}) as: \begin{eqnarray} \hat{\rho}_{\mathrm{LR}}=\rho_{\mathrm{LR}}\vert_{\boldsymbol{\Pi}_\mathrm{c}^{\bot}= \hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}} \label{eq:SNRLoss_wLRSCM} \end{eqnarray} \indent Since we are interested in the performance of the filters, we would like to obtain the theoretical behavior of the SINR losses. Some asymptotic studies of the SINR loss in a LR Gaussian context have already been carried out~\cite{Ha97,GiFo13}. In~\cite{Ha97,GiFo13}, the theoretical result is derived under the assumption that the LR noise is orthogonal to the steering vector and, in this case,~\cite{GiFo13} obtained an approximation of the expectation of the SINR loss $\hat{\rho}_\mathrm{LR}$. However, this assumption is not always verified and is restrictive in real cases. We consequently propose to relax it and to study the convergence of the SINR loss using RMT tools, through the study of the numerators and denominators. Indeed, one can already note that the numerators are \textit{simple} QFs whose convergences have been widely considered in RMT. However, the denominators contain more elaborate QFs which have not been tackled in RMT yet and which will be the object of Sec.~\ref{sec:NewCVResults}.
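\indent Under the same illustrative conventions as before, the SINR losses defined above can be evaluated as follows (a sketch only; \texttt{R}, \texttt{R\_hat} and \texttt{Pi\_c\_perp\_hat} are assumed to be built as in the previous snippet).
\begin{verbatim}
# Illustrative sketch: the SINR losses rho_hat and rho_LR_hat for a
# given steering vector a (inputs assumed from the previous sketch).
import numpy as np

def sinr_loss(W_inv, R, a):
    # rho_hat with W_inv = inv(R_hat); rho = 1 for W_inv = inv(R)
    w = W_inv @ a
    num = np.abs(a.conj() @ w) ** 2
    den = (w.conj() @ R @ w).real * (a.conj() @ np.linalg.solve(R, a)).real
    return num / den

def lr_sinr_loss(Pi_perp, R, a):
    # rho_LR_hat with the estimated projector; the clairvoyant rho_LR
    # is obtained by passing the true projector instead
    w = Pi_perp @ a
    num = np.abs(a.conj() @ w) ** 2
    den = (w.conj() @ R @ w).real * (a.conj() @ np.linalg.solve(R, a)).real
    return num / den

# e.g. rho_hat = sinr_loss(np.linalg.inv(R_hat), R, a)
#      rho_LR_hat = lr_sinr_loss(Pi_c_perp_hat, R, a)
\end{verbatim}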
\section{Random matrix theory tools}\label{sec:RMT} \indent This section is dedicated to the introduction of classical RMT results for the study of the convergence of QFs. This theory and the convergences are based on the behavior of the eigenvalues of the SCM when $m,K\rightarrow\infty$ at the same rate, i.e. $m/K\rightarrow c\in\left] 0,+\infty\right)$. In order to simplify the notations, we will abusively write $c=m/K$.\\ \indent The useful tools for the study of the eigenvalue behavior and the assumptions underlying the different convergences are first presented. Secondly, the section exposes the data model, the \textit{spiked} model~\cite{CoHa13}. Finally, the useful convergences of \textit{simple} QFs ($\bm{s}_1^H\mathbf{\hat{R}}^{-1}\bm{s}_2$, $\bm{s}_1^H\bm{\hat{\Pi}}\bm{s}_2$) are introduced. \subsection{Preliminaries} The asymptotic behavior of the eigenvalues when \mbox{$m,K\rightarrow\infty$} at the same rate is described through the convergence of their associated empirical Cumulative Distribution Function (CDF) $\hat{F}_m(x)$ or their empirical Probability Density Function (PDF) $\hat{f}_m(x)$\footnote{One can show that under (\textbf{As1},\textbf{As3}-\textbf{As5}) described later, $\hat{f}_m(x)$ a.s. converges towards a nonrandom PDF $f(x)$ with a compact support.}. The asymptotic PDF will allow us to characterize the studied data model. The empirical CDF of the sample eigenvalues of $\mathbf{\hat{R}}$ can be \mbox{defined as:} \begin{eqnarray} \hat{F}_m(x)=\frac{1}{m}\#\left\lbrace k:\hat{\lambda}_k\leqslant x\right\rbrace \label{eq:Fhat} \end{eqnarray} \noindent However, in practice, the direct asymptotic characterization of $\hat{F}_m(x)$ is too difficult. Consequently, one prefers to study the convergence of the Stieltjes transform ($\mathcal{ST}\left[ \cdot\right]$) of $\hat{F}_m(x)$: \begin{eqnarray} \hat{b}_m(z)&=&\mathcal{ST}\left[ \hat{F}_m(x)\right]=\int_{\mathbb{R}}\dfrac{1}{x -z}d\hat{F}_m(x)\\ &=&\dfrac{1}{m}\sum_{i=1}^m\dfrac{1}{\hat{\lambda}_i -z}=\dfrac{1}{m}\mathrm{tr}\left[(\mathbf{\hat{R}}-z\mathbf{I}_m)^{-1}\right] \label{eq:ST_Fhat} \end{eqnarray} \noindent with $z\in\mathbb{C}^+\equiv\lbrace z\in\mathbb{C} :\Im[z]>0\rbrace$, which almost surely converges to $\bar{b}_m(z)$. It is interesting to note that the PDF can thus be retrieved from the Stieltjes transform of its CDF: \begin{eqnarray} \hat{f}_m(x)=\underset{\Im\left[z\right]\rightarrow 0 }{\mathrm{lim}}\frac{1}{\pi}\Im \left[\hat{b}_m(z)\right] \label{eq:fhat} \end{eqnarray} \noindent with $x\in\mathbb{R}$. In other words, the characterization of $\hat{f}_m(x)$ (resp. $f_m(x)$) can be obtained from $\hat{b}_m(z)$ (resp. $\bar{b}_m(z)$).
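\indent These two transforms are straightforward to evaluate from the sample eigenvalues, as the following illustrative sketch shows (the smoothing parameter \texttt{eps} is an arbitrary choice).
\begin{verbatim}
# Illustrative sketch: empirical Stieltjes transform and PDF recovery
# from the SCM eigenvalues lam_hat.
import numpy as np

def stieltjes(lam_hat, z):
    # b_hat_m(z) = (1/m) * sum_i 1 / (lambda_hat_i - z), Im(z) > 0
    return np.mean(1.0 / (lam_hat - z))

def density_from_stieltjes(lam_hat, x, eps=1e-3):
    # f_hat_m(x) ~ (1/pi) Im[b_hat_m(x + i*eps)] for small eps > 0
    return stieltjes(lam_hat, x + 1j * eps).imag / np.pi
\end{verbatim}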
Then, to prove the convergences, we assume the following standard hypotheses. \begin{itemize}[\labelsep =0.2cm] \setlength{\itemindent}{0.3cm} \item[(\textbf{As1})] $\mathbf{R}$ has uniformly bounded spectral norm $\forall m\in\mathbb{N}^*$, i.e. \mbox{$\forall i\in[\![1,m]\!]$}, $\lambda_i < \infty$. \item[(\textbf{As2})] The vectors $\boldsymbol{s}_1$, $\boldsymbol{s}_2\in\mathbb{C}^{m\times 1}$ used in the QFs (here $\boldsymbol{a}(\bm{\Theta})$ and $\boldsymbol{x}$) have uniformly bounded Euclidean norm \mbox{$\forall m\in\mathbb{N}^*$}. \item[(\textbf{As3})] Let $\mathbf{Y}\in\mathbb{C}^{m\times K}$ have iid entries $y_{ij}$ with zero mean and unit variance, absolutely continuous and with $\mathbb{E}[|y_{ij}|^8]<\infty$. \item[(\textbf{As4})] Let $\mathbf{Y}\in\mathbb{C}^{m\times K}$ be defined as in (\textbf{As3}); its distribution is then invariant by left multiplication by a deterministic unitary matrix. Moreover, the empirical PDF of the eigenvalues of $\frac{1}{K}\mathbf{Y}\mathbf{Y}^H$ a.s. converges to the Mar\u{c}enko-Pastur distribution~\cite{MaPa67} with support $[(1-\sqrt{c})^2,(1+\sqrt{c})^2]$. \item[(\textbf{As5})] The maximum (resp. minimum) eigenvalue of $\frac{1}{K}\mathbf{Y}\mathbf{Y}^H$ a.s. tends to $(1+\sqrt{c})^2$ (resp. to $(1-\sqrt{c})^2$). \end{itemize} \subsection{Covariance matrix models and convergence of eigenvalues}\label{subsec:RMTmodels} \indent We first expose the considered data model and, then, the eigenvalue behavior of the SCM. The SCM can be written as $\mathbf{\hat{R}}=\frac{1}{K}\mathbf{X}\mathbf{X}^H$ with: \begin{eqnarray} \mathbf{X}=\mathbf{R}^{1/2}\mathbf{Y}=(\mathbf{I}_m+\mathbf{P})^{1/2}\mathbf{Y} \label{eq:X_spike} \end{eqnarray} \noindent where $\mathbf{X}=[\boldsymbol{x}_1,\cdots,\boldsymbol{x}_K]$. The iid entries of $\mathbf{Y}$ follow the $\mathcal{CN}(0,1)$ distribution according to our data model in Sec.~\ref{sec:pb_statement}. The complex normal distribution being a particular case of the distributions defined in (\textbf{As3}), the entries of $\mathbf{Y}$ consequently satisfy it, and the forthcoming convergences hold in the more general case defined by (\textbf{As3}). $\mathbf{R}^{1/2}$ is the $m\times m$ Hermitian positive definite square root of the true covariance matrix. The matrix $\mathbf{P}$ is the rank $r$ perturbation matrix and can be eigendecomposed as $\mathbf{P}=\mathbf{U}\bm{\Omega}\mathbf{U}^H =\sum_{i=1}^{\bar{M}}\omega_i\mathbf{U}_i\mathbf{U}_i^H$ with: \begin{eqnarray} \bm{\Omega}=\begin{bmatrix} \omega_1 \mathbf{I}_{\mathcal{K}_1} & & \\ &\ddots & \\ & & \omega_{\bar{M}}\mathbf{I}_{\mathcal{K}_{\bar{M}}} \end{bmatrix} \label{eq:Omega} \end{eqnarray} \noindent where $\mathbf{U}=[\mathbf{U}_1\cdots\mathbf{U}_{\bar{M}}]$ and $\bar{M}$ is the number of distinct eigenvalues of $\mathbf{R}$. Moreover, $\mathbf{U}_i\in\mathbb{C}^{m\times \mathcal{K}_i}$ where $\mathcal{K}_i$ is the multiplicity of $\omega_i$. Hence, the covariance matrix (\ref{eq:R}) can be rewritten as: \begin{eqnarray} \mathbf{R}=\sum_{i=1}^{\bar{M}}\lambda_i\mathbf{U}_i\mathbf{U}_i^H \label{eq:Rgmusic} \end{eqnarray} \noindent where $\lambda_i$, of multiplicity $\mathcal{K}_i$, and $\mathbf{U}_i$ are respectively the eigenvalues and the associated subspaces (concatenation of the $\mathcal{K}_i$ eigenvectors associated to $\lambda_i$) of $\mathbf{R}$, with $\lambda_1=1+\omega_1>\cdots>\lambda_ {\bar{M}}=1+\omega_{\bar{M}}>0$ and $\sum_{i=1}^{\bar{M}}\mathcal{K}_i=m$. The properties of the \textit{spiked} model are the following: \begin{itemize} \item $\exists n\in[\![1,\bar{M}]\!]$ such that $\omega_n=0$. \item The multiplicity $\mathcal{K}_i$ is fixed $\forall i\in[\![1,\bar{M}]\!]\backslash n$ and does not increase with $m$, i.e.
$\mathcal{K}_i/m\!\!\!\!\underset{m,K\rightarrow\infty}{\longrightarrow}\!\!\!\!0^+$, $\forall i\!\in\![\![1,\bar{M}]\!]\backslash n$. \end{itemize} \indent Consequently, we have $\mathrm{rank}(\mathbf{\Omega})=\sum_{i\in[\![1,\bar{M}]\!]\backslash n}\mathcal{K}_i=r$ and $\mathcal{K}_n=m-r$. In other words, the model specifies that only a few eigenvalues are non-unit (they do not contribute to the noise unit-eigenvalues) and that their multiplicities are fixed. Consequently, $\lambda_n=1$ is the eigenvalue of $\mathbf{R}$ corresponding to the white noise and the others correspond to the rank $r$ perturbation.\\ \indent In our case (see Sec.~\ref{sec:pb_statement}), we recall that the covariance matrix $\mathbf{R}$ can be written as in Eq.(\ref{eq:R}) and Eq.(\ref{eq:Rgmusic}). More specifically, the noise component $\boldsymbol{b}$ corresponds to the white noise and its eigenvalue is $\lambda_{\bar{M}}=1$ since, for simplicity purposes, we set $\sigma^2=1$. The $r$ eigenvalues of the LR noise component $\boldsymbol{c}$ are strictly higher than 1. Thus, $\bar{M}=r+1$, $\lambda_1=1+\omega_1>\cdots>\lambda_{\bar{M}-1}=1+\omega_{\bar{M}-1}>\lambda_{\bar{M}}=1$, $\mathcal{K}_i=1$ is the multiplicity of $\lambda_i$, $\forall i\in[\![1,r]\!]$, and $\mathcal{K}_{\bar{M}}=m-r$ is the multiplicity of $\lambda_{\bar{M}}$:\small \begin{eqnarray} \mathbf{R}\!=\!\lambda_{\bar{M}} \mathbf{U}_{\bar{M}}\mathbf{U}_{\bar{M}}^H\!+\!\!\sum_{i=1}^{\bar{M}-1}\lambda_i \mathbf{U}_i\mathbf{U}_i^H\!\!=\!\mathbf{U}_{r+1}\mathbf{U}_{r+1}^H\!+\!\!\sum_{i=1}^{r}\lambda_i \boldsymbol{u}_i\boldsymbol{u}_i^H\label{eq:SCM_spike} \end{eqnarray} \normalsize \indent This model leads to a specific asymptotic eigenvalue PDF of $\hat{\mathbf{R}}$, as detailed hereafter. The convergence of the eigenvalues is addressed through the convergence of the Stieltjes transform of their CDF. The \textit{spiked} model was introduced by Johnstone~\cite{Jo01} and the corresponding eigenvalue behavior of $\hat{\mathbf{R}}$ was studied in~\cite{BaikSi06}. In order to derive it,~\cite{BaikSi06} exploited the specific expression given in Eq.(\ref{eq:X_spike}). Then,~\cite{CoHa13} introduced the final assumption (the \textit{separation condition}) under which the following convergences hold. \begin{itemize}[\labelsep =0.2cm] \setlength{\itemindent}{0.55cm} \item[(\textbf{As6.S})] The eigenvalues of $\mathbf{P}$ satisfy the \textit{separation condition}, i.e. $\vert\omega_i\vert >\sqrt{c}$ for all $i\in[\![1,\bar{M}]\!]\backslash n$ ($i\in[\![1,r]\!]$ in our case). \end{itemize} \noindent Thus, under (\textbf{As1}-\textbf{As5}, \textbf{As6.S}), we have: \begin{eqnarray} \hat{f}_m(x)\longrightarrow f(x) \end{eqnarray} \noindent where $f(x)$ is the Mar\u{c}enko-Pastur law: \begin{eqnarray} f(x)=\begin{cases}\left(1-\frac{1}{c}\right)\delta(x),\quad\;\;\text{ if } x=0\text{ and }c>1\\ \dfrac{1}{2\pi cx}\sqrt{(\lambda_- -x)(x -\lambda_+)},\\ \qquad\qquad\qquad\qquad\:\:\!\text{if } x \in]\lambda_-,\lambda_+[\\ 0, \qquad\qquad\qquad\quad\:\:\text{otherwise} \end{cases} \end{eqnarray} \noindent with $\lambda_-=(1-\sqrt{c})^2$ and $\lambda_+=(1+\sqrt{c})^2$.
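\indent The following sketch evaluates the continuous part of this law (the atom of mass $1-1/c$ at $x=0$ for $c>1$ is deliberately not represented).
\begin{verbatim}
# Illustrative sketch: continuous part of the Marcenko-Pastur density.
import numpy as np

def marcenko_pastur_pdf(x, c):
    lam_m, lam_p = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = (x > lam_m) & (x < lam_p)
    out[inside] = np.sqrt((x[inside] - lam_m) * (lam_p - x[inside])) \
        / (2.0 * np.pi * c * x[inside])
    return out
\end{verbatim}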
However, it is essential to note that, for all $i\in[\![1,\bar{M}]\!]\backslash n$: \begin{eqnarray} \hat{\lambda}_{j\in\mathcal{M}_i}\overset{\mathrm{a.s.}}{\underset{m,K\rightarrow\infty}{\longrightarrow}}\tau_i=1+\omega_i+c\frac{1+\omega_i}{\omega_i}\label{eq:rho} \end{eqnarray} \noindent where $\mathcal{M}_i$ is the set of indexes corresponding to the $i$-th distinct eigenvalue of $\mathbf{R}$ (for example $\mathcal{M}_{r+1}=\left\lbrace r+1,\cdots,m\right\rbrace $ for $\lambda_{r+1}$). Two representations of $\hat{f}_m(x)$, for two different values of $c$ and a sufficiently large $m$, are shown in Fig.~\ref{Fig:ddpSPIKE_c01} when the eigenvalues of $\mathbf{R}$ are 1, 2, 3, and 7 with the same multiplicity, the eigenvalue 1 being the noise eigenvalue. One can observe that saying that (\textbf{As6.S}) is verified is equivalent to saying that $\tau_{n-1}>\lambda_+$ and $\tau_{n+1}<\lambda_-$. In other words, all the sample eigenvalues corresponding to the non-unit eigenvalues of $\mathbf{R}$ converge to a value $\tau_i$ which lies outside the support of the Mar\u{c}enko-Pastur law (the \enquote{asymptotic} PDF of the \enquote{unit} sample eigenvalues). As an illustration, one can notice in Fig.~\ref{Fig:ddpSPIKE_c01} that, for $\hat f_m(x)$ plotted with $c=0.1$, the \textit{separation condition} is verified ($\omega_1=6$, $\omega_2=2$ and $\omega_3=1$ are greater than $\sqrt{c}\simeq 0.316$) and the three non-unit eigenvalues are represented on the PDF, outside the support of the Mar\u{c}enko-Pastur law, by their respective limits $\tau_1=7.116$, $\tau_2=3.15$ and $\tau_3=2.2$. On the contrary, for $\hat{f}_m(x)$ plotted with $c=1.5$, only the two greatest eigenvalues are represented on the PDF by their respective limits $\tau_1=8.75$ and $\tau_2=5.25$, while the \textit{separation condition} is not verified for the eigenvalue $\lambda_3=2$ ($\omega_3=1<\sqrt{c}\simeq 1.225$). In this case, the sample eigenvalues corresponding to the eigenvalue $\lambda_3=2$ belong to the support of the Mar\u{c}enko-Pastur law. \begin{figure}[h!] \centering \includegraphics[scale=0.72]{ddpspikev3.pdf} \caption{PDF of the eigenvalues of the SCM with the \textit{spiked} model when the eigenvalues of $\mathbf{R}$ are 1, 2, 3, and 7 with the same multiplicity, where 1 is the noise eigenvalue.} \label{Fig:ddpSPIKE_c01} \end{figure}
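\indent The limits $\tau_i$ and the \textit{separation condition} behind Fig.~\ref{Fig:ddpSPIKE_c01} can be checked directly; the following sketch reproduces the values quoted above.
\begin{verbatim}
# Illustrative sketch: spike limits tau_i of the convergence above and
# the separation condition (As6.S).
import numpy as np

def spike_limits(omegas, c):
    omegas = np.asarray(omegas, dtype=float)
    taus = 1.0 + omegas + c * (1.0 + omegas) / omegas
    return taus, np.abs(omegas) > np.sqrt(c)

print(spike_limits([6.0, 2.0, 1.0], c=0.1))
# taus ~ (7.117, 3.15, 2.2); all three spikes are separated
print(spike_limits([6.0, 2.0, 1.0], c=1.5))
# taus ~ (8.75, 5.25, 5.0), but the third spike fails the condition,
# so hat-lambda_3 sticks to the Marcenko-Pastur bulk instead of tau_3
\end{verbatim}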
\subsection{Convergence of simple quadratic forms} \indent Here, we compare the convergence of two QFs in two convergence regimes: when \mbox{$K\rightarrow\infty$} with a fixed $m$, and when $m,K\rightarrow\infty$ at the same rate. \\ \indent We first present the useful convergences of \textit{simple} QFs that are functions of $\mathbf{\hat{R}}$. It is well known that, due to the strong law of large numbers, when \mbox{$K\rightarrow\infty$} with a fixed $m$, $\mathbf{\hat{R}}\rightarrow\mathbf{R}$ a.s.~\cite{Bi95}. Thus, \begin{eqnarray} \boldsymbol{s}_1^H\mathbf{\hat{R}}^{-1}\boldsymbol{s}_2 \underset{\underset{m<\infty}{\small{K\rightarrow\infty}}}{\overset{\text{a.s.}}{\longrightarrow}} \boldsymbol{s}_1^H \mathbf{R}^{-1}\boldsymbol{s}_2 \label{eq:CV_FQR} \end{eqnarray} \noindent Moreover, when $m,K\rightarrow\infty$ at the same rate~\cite{Gi98,Me08}: \begin{eqnarray} \boldsymbol{s}_1^H\mathbf{\hat{R}}^{-1}\boldsymbol{s}_2 \underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\text{a.s.}}{\longrightarrow}} \left(1-c\right)^{-1}\boldsymbol{s}_1^H \mathbf{R}^{-1}\boldsymbol{s}_2 \label{eq:CV_FQR_RMT} \end{eqnarray} \indent The useful convergences of \textit{simple} QFs that are functions of $\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}$ are then presented. As $\mathbf{\hat{R}}\rightarrow\mathbf{R}$ a.s. when \mbox{$K\rightarrow\infty$} with a fixed $m$, $\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\rightarrow\boldsymbol{\Pi}_{\mathrm{c}}^{\bot}$ a.s.~\cite{Me08} in the same convergence regime. Thus: \begin{eqnarray} \boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\boldsymbol{s}_2 \:\underset{\underset{m<\infty}{\small{K\rightarrow\infty}}}{\overset{\text{a.s.}}{\longrightarrow}} \,\boldsymbol{s}_1^H\boldsymbol{\Pi}_{\mathrm{c}}^{\bot}\boldsymbol{s}_2\label{eq:CV_FQsimple_Kinf} \end{eqnarray} \indent For the large dimensional regime ($m,K\rightarrow\infty$ at the same rate), the convergences are presented under (\textbf{As1}-\textbf{As5}) and the \textit{separation condition} (\textbf{As6.S}). \cite{CoHa13} showed that, $\forall i\in[\![1,\bar{M}-1]\!]$: \begin{eqnarray} \boldsymbol{s}_1^H\mathbf{\hat{U}}_i\mathbf{\hat{U}}_i^H\boldsymbol{s}_2 \underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\text{a.s.}}{\longrightarrow}} \dfrac{1-c\omega_i^{-2}}{1+c\omega_i^{-1}}\boldsymbol{s}_1^H\mathbf{U}_i\mathbf{U}_i^H\boldsymbol{s}_2 \end{eqnarray} \noindent with $\omega_i=\lambda_i-1$, where $\lambda_i$ is the $i$-th distinct eigenvalue of $\mathbf{R}$. Let $\chi_i=\dfrac{1-c\omega_i^{-2}}{1+c\omega_i^{-1}}$. Thus, using the relationship \begin{eqnarray} \hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}=\mathbf{I}_m-\sum_{i=1}^{\bar{M}-1}\mathbf{\hat{U}}_i\mathbf{\hat{U}}_i^H=\mathbf{I}_m-\sum_{i=1}^{r}\boldsymbol{\hat{u}}_i\boldsymbol{\hat{u}}_i^H \end{eqnarray} one can deduce that, with the \textit{spiked} model and in the large dimensional regime: \begin{eqnarray} \boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\boldsymbol{s}_2 \underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\text{a.s.}}{\longrightarrow}} \boldsymbol{s}_1^H\bar{\boldsymbol{\Pi}}_{\mathrm{c,S}}^{\bot}\boldsymbol{s}_2 \label{eq:CV_LRFQ_SpikeSCM} \end{eqnarray} \noindent with $\bar{\boldsymbol{\Pi}}_{\mathrm{c,S}}^{\bot}=\sum_{i=1}^{m} \psi_i\boldsymbol{u}_i\boldsymbol{u}_i^H$ and \begin{eqnarray} \psi_i=\begin{cases}1,\;\,\qquad\quad \mathrm{if}\;i>r\\ 1-\chi_i,\;\quad \mathrm{if}\;i\leqslant r \end{cases}\label{eq:PiOrthS} \end{eqnarray} \indent Consequently, $\boldsymbol{s}_1^H\hat{\mathbf{R}}^{-1}\boldsymbol{s}_2$ is consistent, up to a known factor, in the two convergence regimes and, although $\boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\boldsymbol{s}_2$ is consistent when $K\rightarrow\infty$ with a fixed $m$, it is no longer consistent in the regime of interest, i.e. when both $m,K\rightarrow\infty$ at the same rate.
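\indent In practice, the deterministic equivalent $\bar{\boldsymbol{\Pi}}_{\mathrm{c,S}}^{\bot}$ is straightforward to form once the true spectrum is known, as in the following illustrative sketch.
\begin{verbatim}
# Illustrative sketch: deterministic equivalent Pi_bar, built from the
# true eigenvectors U (m x m, decreasing eigenvalue order) and the true
# eigenvalues lambdas of R under the spiked model.
import numpy as np

def pi_perp_bar(U, lambdas, r, c):
    m = U.shape[0]
    omega = lambdas[:r] - 1.0                     # omega_i = lambda_i - 1
    chi = (1.0 - c / omega**2) / (1.0 + c / omega)
    psi = np.ones(m)
    psi[:r] = 1.0 - chi
    return (U * psi) @ U.conj().T                 # sum_i psi_i u_i u_i^H
\end{verbatim}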
\section{New convergence results}\label{sec:NewCVResults} \subsection{Convergence of structured quadratic forms} \indent In this section, the convergence of the \textit{structured} QF that is a function of $\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}$ is analyzed and results in Proposition 1.\\ \indent\textit{\textbf{Proposition 1:}} Let $\mathbf{B}$ be an $m\times m$ deterministic complex matrix with a uniformly bounded spectral norm for all $m$. Then, under (\textbf{As1}-\textbf{As5}, \textbf{As6.S}) and the \textit{spiked} model, \begin{eqnarray} \begin{array}{l} \boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\mathbf{B}\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\boldsymbol{s}_2\underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\text{a.s.}}{\longrightarrow}} \boldsymbol{s}_1^H\bar{\boldsymbol{\Pi}}_{\mathrm{c,S}}^{\bot}\mathbf{B}\bar{\boldsymbol{\Pi}}_{\mathrm{c,S}}^{\bot}\boldsymbol{s}_2 \end{array}\label{eq:CV_LRFQ2_SpikeSCM} \end{eqnarray} where $\bar{\boldsymbol{\Pi}}_{\mathrm{c,S}}^{\bot}=\sum_{i=1}^{m} \psi_i\boldsymbol{u}_i\boldsymbol{u}_i^H$ with $\psi_i$ defined by Eq.(\ref{eq:PiOrthS}). \begin{flushright} \vspace{-0.3cm}$\blacksquare$ \end{flushright} \indent \textit{Proof:} See Appendix.\\\\ \noindent Moreover, one can remark that if $\mathbf{B}=\mathbf{R}$, where $\mathbf{R}$ is the covariance matrix defined in Eq.(\ref{eq:R}), the following convergence holds: \begin{eqnarray} \begin{array}{l} \boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\mathbf{R}\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\boldsymbol{s}_2\underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\text{a.s.}}{\longrightarrow}} \boldsymbol{s}_1^H\bar{\boldsymbol{\Pi}}_{\mathrm{c,S}}^{\bot}\mathbf{R}\bar{\boldsymbol{\Pi}}_{\mathrm{c,S}}^{\bot}\boldsymbol{s}_2 \end{array} \end{eqnarray} \noindent A visualization of the convergence of Eq.(\ref{eq:CV_LRFQ2_SpikeSCM}) in terms of Mean Squared Error (MSE) can be found in Fig.~\ref{Fig:lemma3} when $m,K\rightarrow\infty$ at a fixed ratio. It is compared to the MSE corresponding to the following convergence when $K\rightarrow\infty$ with a fixed $m$: \begin{eqnarray} \begin{array}{l} \boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\mathbf{B}\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\boldsymbol{s}_2\underset{\underset{m<\infty}{\small{K\rightarrow\infty}}}{\overset{\text{a.s.}}{\longrightarrow}} \boldsymbol{s}_1^H\boldsymbol{\Pi}_{\mathrm{c}}^{\bot}\mathbf{B}\boldsymbol{\Pi}_{\mathrm{c}}^{\bot}\boldsymbol{s}_2 \end{array}\label{eq:CV_LRFQ2_SCM} \end{eqnarray} \begin{figure}[h!] \centering \includegraphics[scale=0.6]{lemma3.pdf} \caption{MSE over $10^3$ iterations corresponding to Eq.(\ref{eq:CV_LRFQ2_SpikeSCM}) and Eq.(\ref{eq:CV_LRFQ2_SCM}) when the eigenvalues of $\mathbf{R}$ are 1, 21, 31, and 71 with the multiplicity $m-3$, 1, 1 and 1 respectively, $c=0.1$, $\boldsymbol{s}_1=\boldsymbol{s}_2$ are steering vectors of the LR noise component $\boldsymbol{c}$ and $\mathbf{B}=\mathbf{R}$.} \label{Fig:lemma3} \end{figure}
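\indent A minimal Monte-Carlo check of Proposition 1, in the spirit of Fig.~\ref{Fig:lemma3} (same eigenvalues, but a smaller $m$ and fewer runs than in the figure, and with $\boldsymbol{s}_1=\boldsymbol{s}_2$ taken as the principal eigenvector for simplicity), can be written as follows.
\begin{verbatim}
# Monte-Carlo sketch of Proposition 1 with B = R; illustrative settings.
import numpy as np

rng = np.random.default_rng(1)
m, c, r = 60, 0.1, 3
K = int(m / c)
lambdas = np.concatenate([[71.0, 31.0, 21.0], np.ones(m - r)])
U = np.linalg.qr(rng.standard_normal((m, m))
                 + 1j * rng.standard_normal((m, m)))[0]
R = (U * lambdas) @ U.conj().T
s = U[:, 0]                                      # in the LR subspace

omega = lambdas[:r] - 1.0
chi = (1.0 - c / omega**2) / (1.0 + c / omega)
psi = np.ones(m)
psi[:r] = 1.0 - chi
Pi_bar = (U * psi) @ U.conj().T                  # deterministic equivalent
limit = (s.conj() @ Pi_bar @ R @ Pi_bar @ s).real

Rsqrt = (U * np.sqrt(lambdas)) @ U.conj().T
mse = 0.0
for _ in range(200):
    Y = (rng.standard_normal((m, K))
         + 1j * rng.standard_normal((m, K))) / np.sqrt(2.0)
    R_hat = Rsqrt @ (Y @ Y.conj().T / K) @ Rsqrt
    lam, u = np.linalg.eigh(R_hat)               # ascending order
    Pi_hat = np.eye(m) - u[:, -r:] @ u[:, -r:].conj().T
    val = (s.conj() @ Pi_hat @ R @ Pi_hat @ s).real
    mse += (val - limit) ** 2 / 200.0
print(mse)                                       # small, as Prop. 1 predicts
\end{verbatim}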
\subsection{Convergence of SINR losses} \indent We now provide the convergences of the estimated SINR losses, using the convergences presented above together with the following one. We recall that, as $\hat{\mathbf{R}}\rightarrow\mathbf{R}$ a.s. when $K\rightarrow\infty$ with a fixed $m$, one has: \begin{eqnarray} \boldsymbol{s}_1^H\mathbf{\hat{R}}^{-1}\mathbf{R}\mathbf{\hat{R}}^{-1}\boldsymbol{s}_2 \underset{\underset{m<\infty}{\small{K\rightarrow\infty}}}{\overset{\text{a.s.}}{\longrightarrow}} \boldsymbol{s}_1^H \mathbf{R}^{-1}\boldsymbol{s}_2\label{eq:CV_FQR2} \end{eqnarray} Hence, when $K\rightarrow\infty$ with a fixed $m$, using Eq.(\ref{eq:CV_FQR}), Eq.(\ref{eq:CV_FQR2}) and the continuous mapping theorem~\cite{Bi95}: \begin{eqnarray} \hat{\rho}\underset{\underset{m<\infty}{\small{K\rightarrow\infty}}}{\overset{\text{a.s.}}{\longrightarrow}}\frac{\vert\boldsymbol{a}^H\mathbf{R}^{-1}\boldsymbol{a}\vert^2}{(\boldsymbol{a}^H\mathbf{R}^{-1}\boldsymbol{a})(\boldsymbol{a}^H\mathbf{R}^{-1}\boldsymbol{a})}=1 \end{eqnarray} And, under (\textbf{As1}-\textbf{As5}), when $m,K\rightarrow\infty$ at the same rate, from~\cite{TaTaPe10}, we have: \begin{eqnarray}\small \hat{\rho}\underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\text{a.s.}}{\longrightarrow}}\!\!\frac{(1-c)\vert\boldsymbol{a}^H\mathbf{R}^{-1}\boldsymbol{a}\vert^2}{(\boldsymbol{a}^H\mathbf{R}^{-1}\boldsymbol{a})(\boldsymbol{a}^H\mathbf{R}^{-1}\boldsymbol{a})}=1-c \label{eq:SNRLossopt} \end{eqnarray}\normalsize \noindent Thus, the estimated SINR loss $\hat{\rho}$ is consistent when $K\rightarrow\infty$ with $m$ fixed and, when $m,K\rightarrow\infty$ at the same rate, it is consistent up to the additive constant $c$. Consequently, RMT cannot help us to improve the estimation of the theoretical SINR loss.\\ \indent For the SINR loss corresponding to the adaptive LR filter, when $K\rightarrow\infty$ with a fixed $m$, using Eq.(\ref{eq:CV_FQsimple_Kinf}), Eq.(\ref{eq:CV_LRFQ2_SCM}) and the continuous mapping theorem, we have: \begin{eqnarray} \hat{\rho}_{\mathrm{LR}}\underset{\underset{m<\infty}{\small{K\rightarrow\infty}}}{\overset{\text{a.s.}}{\longrightarrow}}\rho_{\mathrm{LR}} \label{eq:CV_SNRLoss_Kinf} \end{eqnarray} \noindent where $\rho_{\mathrm{LR}}$ is defined by Eq.(\ref{eq:SNRLoss_wLRopt}). When $m,K\rightarrow\infty$ at the same rate, we obtain the following convergence: \begin{eqnarray} \hat{\rho}_{\mathrm{LR}}\underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\text{a.s.}}{\longrightarrow}}\bar{\rho}_{\mathrm{LR}}^{(\mathrm{S})}=\rho_{\mathrm{LR}}\vert_{\boldsymbol{\Pi}_\mathrm{c}^{\bot}= \boldsymbol{\bar{\Pi}}_{\mathrm{c},\mathrm{S}}^{\bot}}\neq\rho_{\mathrm{LR}} \label{eq:CV_SNRLoss_Spike} \end{eqnarray} \noindent where Eq.(\ref{eq:CV_LRFQ_SpikeSCM}), Proposition 1 and the continuous mapping theorem were used to prove Eq.(\ref{eq:CV_SNRLoss_Spike}). One can observe that, although the traditional estimator of $\rho_{\mathrm{LR}}$ is consistent when $K\rightarrow\infty$ with a fixed $m$, it is no longer consistent when $m,K\rightarrow\infty$ at the same rate. It is also important to underline that the new convergence result leads to a more precise approximation of $\hat{\rho}_{\mathrm{LR}}$ than previous works~\cite{GiFo13}. Indeed,~\cite{GiFo13} proposes an approximation depending on $K$ only, whereas the approximation proposed here depends on $K$ (and of course on $c$) as well as on the parameter $\bm{\Theta}$.
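\indent The limit $\bar{\rho}_{\mathrm{LR}}^{(\mathrm{S})}$ is explicit once the spectrum of $\mathbf{R}$ is known; an illustrative sketch is given below (inputs as in the previous snippets).
\begin{verbatim}
# Illustrative sketch: the limit rho_bar_LR^(S), obtained by plugging
# the deterministic equivalent Pi_bar into the LR SINR loss.
import numpy as np

def rho_lr_bar(U, lambdas, r, c, a):
    m = U.shape[0]
    omega = lambdas[:r] - 1.0
    chi = (1.0 - c / omega**2) / (1.0 + c / omega)
    psi = np.ones(m)
    psi[:r] = 1.0 - chi
    Pi_bar = (U * psi) @ U.conj().T
    R = (U * lambdas) @ U.conj().T
    w = Pi_bar @ a
    num = np.abs(a.conj() @ w) ** 2
    den = (w.conj() @ R @ w).real * (a.conj() @ np.linalg.solve(R, a)).real
    return num / den
\end{verbatim}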
\section{Simulations}\label{sec:simu} \subsection{Parameters} \indent As an illustration of the interest of applying RMT to filtering, the jamming application is chosen. The purpose of this application is to detect a target with a ULA composed of $m$ sensors despite the presence of jamming. The response of the jamming, $\boldsymbol{c}$, is composed of signals similar to the target response. In this section, except for the convergences when $m,K\rightarrow\infty$ at the same rate $c$, we choose $m=100$ in order to have a large data dimension. Even if, in some basic array processing applications, this number could seem large, it has actually become standard in many applications such as STAP~\cite{Wa94}, MIMO applications~\cite{LiSt09,TsVi05}, MIMO-STAP~\cite{LiSt09}, etc. Here, $\bm{\Theta}=\theta$ where $\theta$ is the AoA. The jamming is composed of three synthetic targets with AoAs $-20^\circ$, $0^\circ$ and $20^\circ$ and wavelength $l_0=0.667$m. Thus, the jamming (LR noise) has a rank $r=3$. Then, the AWGN $\boldsymbol{b}$ power is $\sigma^2=1$. Finally, the theoretical covariance matrix of the total noise can be written as $\mathbf{R}=\frac{JNR}{\mathrm{tr}(\bm{\Lambda})}\mathbf{U}\bm{\Lambda}\mathbf{U}^H+\sigma^2\mathbf{I}_m$ with $\bm{\Lambda}=\mathrm{diag([6,2,1])}$ and where $JNR$ is the jamming to noise ratio. $\frac{JNR}{\mathrm{tr}(\bm{\Lambda})}$ is fixed at $10$dB except for Fig.~\ref{Fig:separationSPIKE_rBrennan}.\\ \indent In order to validate the \textit{spiked} model as the covariance matrix model, we visualize in Fig.~\ref{Fig:ddp_STAP} a zoom of the experimental PDF of the eigenvalues of our data without target, over $5\times 10^4$ Monte-Carlo iterations. We observe a Mar\u{c}enko-Pastur law around 1 (eigenvalues of the white noise) and Gaussian distributions for the eigenvalues of the jamming, which is consistent with the CLT for the \textit{spiked} model proved in~\cite{CoHa13}. The \textit{spiked} model is consequently relevant for our data model.\\ \begin{figure}[h!] \centering \includegraphics[scale=0.6]{ddp_brouillage2.pdf} \caption{Zoom of the experimental PDF of jamming plus noise data with $c=0.2$ and $\frac{JNR}{\mathrm{tr}(\bm{\Lambda})}=10$dB.} \label{Fig:ddp_STAP} \end{figure} \indent Moreover, in order to verify that the \textit{spiked} model is realistic in terms of the \textit{separation condition}, Fig.~\ref{Fig:separationSPIKE_rBrennan} shows ($\omega_r-\sqrt{c}$) as a function of $\frac{JNR}{\mathrm{tr}(\bm{\Lambda})}$ in dB. This figure is the same for all $m$ and $K$ at a fixed ratio. We recall that, in order to satisfy the \textit{separation condition}, one should have $\omega_r-\sqrt{c}>0$. Consequently, we observe that it is satisfied for $\frac{JNR}{\mathrm{tr}(\bm{\Lambda})}>4$dB for most values of $c$, even $c>2$. Indeed, in practice, if the $\frac{JNR}{\mathrm{tr}(\bm{\Lambda})}$ is lower, the jamming will not have any effect on the performance. \begin{figure}[h!] \centering \includegraphics[scale=0.6]{CritereSeparationSPIKE_rBrennan.pdf} \caption{\textit{Separation condition} ($\omega_r-\sqrt{c}$) of the \textit{spiked} model for the lowest non-unit eigenvalue as a function of the ratio $\frac{JNR}{\mathrm{tr}(\bm{\Lambda})}$ in dB.} \label{Fig:separationSPIKE_rBrennan} \end{figure}
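\indent As an indication, the jamming-plus-noise covariance matrix used in this section can be reproduced as follows. This sketch assumes a half-wavelength inter-sensor spacing (the paper only specifies $l_0$) and uses the jammer steering vectors in place of the exact eigenvectors $\mathbf{U}$, a reasonable approximation here since the three AoAs are well separated.
\begin{verbatim}
# Illustrative sketch of R = (JNR/tr(Lambda)) U Lambda U^H + I.
# Half-wavelength spacing and steering-vectors-as-eigenvectors are
# simplifying assumptions, not the paper's exact construction.
import numpy as np

def steering(m, theta_deg):
    k = np.arange(m)
    return np.exp(1j * np.pi * k * np.sin(np.deg2rad(theta_deg))) \
        / np.sqrt(m)

m = 100
coeff = 10.0 ** (10.0 / 10.0)        # JNR/tr(Lambda) = 10 dB
A = np.column_stack([steering(m, t) for t in (-20.0, 0.0, 20.0)])
Lam = np.diag([6.0, 2.0, 1.0])
R = coeff * (A @ Lam @ A.conj().T) + np.eye(m)   # sigma^2 = 1
\end{verbatim}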
\subsection{Performance of filters} \indent We now observe the performance of the filters through the SINR loss. We are first interested in the validation of the convergence of $\hat{\rho}_{\mathrm{LR}}$ in Eq.(\ref{eq:CV_SNRLoss_Spike}) as $m,K\rightarrow\infty$ at the same rate. This convergence is validated and presented in Fig.~\ref{Fig:CV_SNRLoss_mKinf} in terms of MSE over $10^3$ realizations with $c=3$, for an AoA of the target $\theta=50^\circ$ and an AoA of the jamming of $20^\circ$. \begin{figure}[h!] \centering \includegraphics[scale=0.6]{CV_SNRLoss_mKinf_v3.pdf} \caption{MSE corresponding to Eq.(\ref{eq:CV_SNRLoss_Kinf}) and Eq.(\ref{eq:CV_SNRLoss_Spike}) when $m,K\rightarrow\infty$ at a fixed ratio $c=3$ and $\frac{JNR}{\mathrm{tr}(\bm{\Lambda})}=10$dB.} \label{Fig:CV_SNRLoss_mKinf} \end{figure}\\ \indent Fig.~\ref{Fig:SINRLoss_Kinf_theta205_avec} shows the visualization of Eq.(\ref{eq:SNRLoss_wLRopt}) (red line with squares), Eq.(\ref{eq:SNRLoss_wLRSCM}) (blue dashed line), the right side of the convergence in Eq.(\ref{eq:CV_SNRLoss_Spike}) (green line with circles) and the approximation $\mathbb{E}[\hat{\rho}_\mathrm{LR}]\simeq 1-\frac{r}{K}$ introduced by~\cite{GiFo13} (black line) as a function of $K$ when the target is close to the jamming, i.e. $\theta=20.5^\circ$. We observe that the \textit{spiked} model and RMT help us to obtain a better estimation of $\mathbb{E}[\hat{\rho}_\mathrm{LR}]$ than the estimation $\mathbb{E}[\hat{\rho}_\mathrm{LR}]\simeq 1-\frac{r}{K}$, as the curve of $\bar{\rho}_\mathrm{LR}^{(\mathrm{S})}$ has the same behavior as the curve of $\hat{\rho}_\mathrm{LR}$. Then, similarly, the same equations are visualized as a function of $\theta$ in Fig.~\ref{Fig:SNRloss_theta_CNR10_K2r} with $K=2r$. We observe that, unlike the estimation $1-r/K$, RMT with the \textit{spiked} model allows us to obtain a better estimation of $\mathbb{E}[\hat{\rho}_\mathrm{LR}]$ as a function of $\theta$ and consequently a better approximation of its behavior. Thus, it makes it possible to predict the value of the parameter $\theta$ corresponding to the performance breakdown (here around $21.1^\circ$). \begin{figure}[h!] \centering \includegraphics[scale=0.63]{SINRLoss_Kinf_theta205_avec.pdf} \caption{Visualization of Eq.(\ref{eq:SNRLoss_wLRopt}) (red line with squares), Eq.(\ref{eq:SNRLoss_wLRSCM}) (blue dashed line), the right side of the convergence in Eq.(\ref{eq:CV_SNRLoss_Spike}) (green line with circles) and the traditional estimation of $\mathbb{E}[\hat{\rho}_\mathrm{LR}]$ (black line) as a function of $K$ (over $10^3$ realizations) with $\frac{JNR}{\mathrm{tr}(\bm{\Lambda})}=10$dB, $m=100$ and $\theta=20.5^\circ$.} \label{Fig:SINRLoss_Kinf_theta205_avec} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.63]{SNRloss_theta_CNR10_K2r.pdf} \caption{Visualization of Eq.(\ref{eq:SNRLoss_wLRopt}) (red line with squares), Eq.(\ref{eq:SNRLoss_wLRSCM}) (blue dashed line), the right side of the convergence in Eq.(\ref{eq:CV_SNRLoss_Spike}) (green line with circles) and the traditional estimation of $\mathbb{E}[\hat{\rho}_\mathrm{LR}]$ (black line) as a function of $\theta$ (over $10^3$ realizations) with $\frac{JNR}{\mathrm{tr}(\bm{\Lambda})}=10$dB, $m=100$ and $K=2r$.} \label{Fig:SNRloss_theta_CNR10_K2r} \end{figure} \section{Conclusion} \indent In this paper, we proposed new results in random matrix theory with a specific covariance matrix model fitted to our data model: the \textit{spiked} model. Based on this, we studied the convergence of the traditional estimators of the SINR loss in their full rank and low rank versions, when the number of secondary data $K\rightarrow\infty$ with a fixed data dimension $m$ and when $m,K\rightarrow\infty$ at the same rate $c=m/K$. We observed that the full rank version is consistent in the two regimes. However, the low rank version is consistent when $K\rightarrow\infty$ with a fixed $m$ but is not consistent when $m,K\rightarrow\infty$ at the same rate $c$. Finally, we applied these results to a jamming application.
We first observed that the experimental probability density function of the eigenvalues of the covariance matrix of the jamming data is consistent with the probability density function of the \textit{spiked} model. Then, we validated the convergence of the SINR loss in its low rank version and we observed that random matrix theory, and more precisely the \textit{spiked} model, evaluates the asymptotic performance of the low rank SINR loss corresponding to the adaptive LR filter better than previous works, especially when the steering vector parameter is close to that of the jamming. Moreover, it makes it possible to predict the steering vector parameter value corresponding to the performance breakdown. \section{Appendix} \indent The proof is organized as follows. We first develop the \textit{structured} QF as a sum of \textit{simple} QFs and base \textit{structured} QFs (Subsec.~\ref{subsec:A.1}). Secondly, we formulate the base \textit{structured} QF as a complex integral (Subsec.~\ref{subsec:A.2}) and split it into several integrals (Subsec.~\ref{subsec:A.3}). Then, we determine the deterministic complex integral equivalent of the base \textit{structured} QF (Subsec.~\ref{subsec:A.4}) and its formal expression (Subsec.~\ref{subsec:A.5}). Finally, we use this result to determine the convergence of the \textit{structured} QF in the large dimensional regime (Subsec.~\ref{subsec:A.6}). The convergence regime in the Appendix, unless otherwise specified, is $m,K\rightarrow\infty$ at a fixed ratio $c$. \subsection{Development of the \textit{structured} QF}\label{subsec:A.1} \indent Let $\boldsymbol{s}_1$ and $\boldsymbol{s}_2$ be two deterministic complex vectors and $\mathbf{B}$ be an $m\times m$ deterministic complex matrix with uniformly bounded spectral norm for all $m$. In order to obtain the convergence of the \textit{structured} QF $\boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\mathbf{B}\hat{\boldsymbol{\Pi}}_ {\mathrm{c}}^{\bot} \boldsymbol{s}_2$, one can rewrite, using the notations of Eq.(\ref{eq:SCM_spike}) and the \textit{spiked} model, $\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}=\hat{\boldsymbol{\Pi}}_{r+1}=\hat{\mathbf{U}}_{r+1}\hat{\mathbf{U}}_{r+1}^H=\mathbf{I}_m-\sum_{i=1}^r\hat{\boldsymbol{\Pi}}_i$ where $\hat{\mathbf{U}}_{r+1}=[\hat{\boldsymbol{u}}_{r+1},\cdots,\hat{\boldsymbol{u}}_m]$, $\hat{\boldsymbol{\Pi}}_i=\hat{\boldsymbol{u}}_i\hat{\boldsymbol{u}}_i^H$, $\forall i\in[\![1,r]\!]$, and $\hat{\boldsymbol{u}}_i$ are the eigenvectors of the SCM. We recall that $r$ is fixed for all $m$, i.e. $r/m\rightarrow 0^+$. Thus, one can develop the \textit{structured} QF as:\small \begin{eqnarray} \!\!\!\!\!\!\boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\mathbf{B}\hat{\boldsymbol{\Pi}}_ {\mathrm{c}}^{\bot} \boldsymbol{s}_2\!\!\!\!\!\! &=&\!\!\!\!\!\!\boldsymbol{s}_1^H\left( \mathbf{I}_m-\sum_{i=1}^r\hat{\boldsymbol{\Pi}}_i\right) \mathbf{B}\left(\mathbf{I}_m-\sum_{i=1}^r\hat{\boldsymbol{\Pi}}_i \right) \boldsymbol{s}_2\\ &=&\!\!\!\!\!\!\boldsymbol{s}_1^H\mathbf{B}\boldsymbol{s}_2\! -\!\boldsymbol{s}_1^H\sum_{i=1}^r\hat{\boldsymbol{\Pi}}_i\mathbf{B}\boldsymbol{s}_2\! -\!\boldsymbol{s}_1^H\mathbf{B}\sum_{i=1}^r\hat{\boldsymbol{\Pi}}_i\boldsymbol{s}_2 \nonumber\\ &&+\boldsymbol{s}_1^H\sum_{i=1}^r\hat{\boldsymbol{\Pi}}_i\mathbf{B}\sum_{i=1}^r\hat{\boldsymbol{\Pi}}_i\boldsymbol{s}_2\\ &=&\!\!\!\!\!\!\boldsymbol{s}_1^H\mathbf{B}\boldsymbol{s}_2\!
-\!\sum_{i=1}^r\left(\boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_i\mathbf{B}\boldsymbol{s}_2+\boldsymbol{s}_1^H\mathbf{B}\hat{\boldsymbol{\Pi}}_i\boldsymbol{s}_2\right)\nonumber\\ &&\!\!\!\!\!\! +\!\!\sum_{j_1=1}^r\!\boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{j_1}\mathbf{B}\hat{\boldsymbol{\Pi}}_{j_1}\boldsymbol{s}_2\! +\!\!\!\!\sum_{\underset{j_1\neq j_2}{j_1,j_2=1}}^r\!\!\!\!\boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{j_1}\mathbf{B}\hat{\boldsymbol{\Pi}}_{j_2}\boldsymbol{s}_2\label{eq:FQtot} \end{eqnarray}\normalsize \subsection{Formulation of the base \textit{structured} QF as a complex integral}\label{subsec:A.2} \indent Remarking that Eq.(\ref{eq:FQtot}) is a sum of \textit{simple} QFs and of base \textit{structured} QFs, we first focus on the convergence of the base \textit{structured} QF $\hat{\eta}(j_1,j_2)=\boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{j_1}\mathbf{B}\hat{\boldsymbol{\Pi}}_{j_2}\boldsymbol{s}_2$, $\left\lbrace j_1,j_2\right\rbrace \in[\![1,r]\!]^2 $. Let us now formulate the base \textit{structured} QF as a complex integral.\\ \indent \textit{\textbf{Proposition 2:}} Let $\mathbf{B}$ be an $m\times m$ deterministic complex matrix with a uniformly bounded spectral norm for all $m$. Then, under (\textbf{As1}-\textbf{As5}, \textbf{As6.S}) and the \textit{spiked} model, $\forall j_1,j_2\in[\![1,r+1]\!]$, if $\hat{\eta}(j_1,j_2)=\boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{j_1}\mathbf{B}\hat{\boldsymbol{\Pi}}_{j_2}\boldsymbol{s}_2$:\small \begin{eqnarray} \hat{\eta}(j_1,j_2)&=&\dfrac{1}{(2i\pi)^2}\oint_{\mathcal{C}_{j_1}^-}\oint_{\mathcal{C}_{j_2}^-}\boldsymbol{s}_1^H\left(\hat{\mathbf{R}}-z_1\mathbf{I}_m\right)^{-1}\nonumber\\ &&\times\mathbf{B}\left(\hat{\mathbf{R}}-z_2\mathbf{I}_m\right)^{-1}\boldsymbol{s}_2dz_1dz_2\label{Eq:cauchy_integral} \end{eqnarray}\normalsize \begin{flushright} \vspace{-0.3cm}$\blacksquare$ \end{flushright} \indent \textit{Proof:} If $j_1\neq j_2$, it can easily be shown that $\hat{\eta}(j_1,j_2)$ can be expressed as the following Cauchy integral:\small \begin{equation} A\! =\!\frac{1}{(2i\pi)^2}\!\oint_{\mathcal{C}_{j_1}^-}\!\!\oint_{\mathcal{C}_{j_2}^-}\!\!\!\boldsymbol{s}_1^H\!(\hat{\mathbf{R}}-z_1\mathbf{I}_m)^{-1}\mathbf{B}(\hat{\mathbf{R}}-z_2\mathbf{I}_m)^{-1}\boldsymbol{s}_2dz_1dz_2\label{Eq:IntCurv} \end{equation}\normalsize \noindent where $\mathcal{C}_j^-$ is a negatively oriented contour encompassing the eigenvalues of $\hat{\mathbf{R}}$ corresponding to the $j$-th eigenvalue of $\mathbf{R}$, and $z_1$ and $z_2$ are independent variables. Indeed, let $\mathbf{G}(z_k)= (\hat{\mathbf{R}}-z_k\mathbf{I}_m)^{-1}=(\frac{1}{K}\mathbf{X}\mathbf{X}^H-z_k\mathbf{I}_m)^{-1}$ with $k\in\left\lbrace 1,2\right\rbrace $. Thus:\small \begin{eqnarray} \!\!\!\!\!\! A\!\!\!\!\! &=&\!\!\!\!\!\dfrac{1}{(2i\pi)^2}\!\oint_{\mathcal{C}_{j_1}^-}\!\!\oint_{\mathcal{C}_{j_2}^-}\!\!\!\boldsymbol{s}_1^H\mathbf{G}(z_1)\mathbf{B}(\hat{\mathbf{R}}-z_2\mathbf{I}_m)^{-1}\!\boldsymbol{s}_2dz_1dz_2\\ &=&\!\!\!\!\!\frac{1}{(2i\pi)^2}\!\oint_{\mathcal{C}_{j_1}^-}\!\!\!\oint_{\mathcal{C}_{j_2}^-}\!\!\!\! \boldsymbol{s}_1^H\mathbf{G}(z_1)\mathbf{B}\!\left(\sum_{n=1}^{m}\!\hat{\lambda}_n\hat{\boldsymbol{u}}_n\hat{\boldsymbol{u}}_n^H\! -\! z_2\mathbf{I}_m\!\right)^{-1}\nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times\boldsymbol{s}_2dz_2dz_1 \\ &=&\!\!\!\!\!
\frac{1}{(2i\pi)^2}\oint_{\mathcal{C}_{j_1}^-}\oint_{\mathcal{C}_{j_2}^-} \sum_{n=1}^{m}\frac{\boldsymbol{s}_1^H\mathbf{G}(z_1)\mathbf{B}\hat{\boldsymbol{u}}_n\hat{\boldsymbol{u}}_n^H\boldsymbol{s}_2}{\hat{\lambda}_n-z_2}dz_2dz_1 \\ &=&\!\!\!\!\! \frac{1}{2i\pi}\oint_{\mathcal{C}_{j_1}^-}\sum_{n=1}^{m}\frac{1}{2i\pi}\oint_{\mathcal{C}_{j_2}^-} f_n^{(2)}(z_2)dz_2dz_1 \end{eqnarray}\normalsize\\ \noindent where $f_n^{(2)}(z_2)=\frac{\boldsymbol{s}_1^H\mathbf{G}(z_1)\mathbf{B}\hat{\boldsymbol{u}}_n\hat{\boldsymbol{u}}_n^H\boldsymbol{s}_2}{\hat{\lambda}_n-z_2}$. From the expression of $f_n^{(2)}(z_2)$, one observes that $f_n^{(2)}(z_2)$ has a single simple pole $\hat{\lambda}_n$, which is encompassed by $\mathcal{C}_{j_2}^-$ for the indexes $n\in\mathcal{M}_{j_2}$, where $\mathcal{M}_{j_2}$ is the set of indexes corresponding to the $j_2$-th eigenvalue of $\mathbf{R}$. Consequently, from complex analysis:\small \begin{eqnarray} A&=& \frac{1}{2i\pi}\oint_{\mathcal{C}_{j_1}^-}\sum_{n\in\mathcal{M}_{j_2}}\frac{1}{2i\pi}\oint_{\mathcal{C}_{j_2}^-} \frac{\boldsymbol{s}_1^H\mathbf{G}(z_1)\mathbf{B}\hat{\boldsymbol{u}}_{n}\hat{\boldsymbol{u}}_{n}^H\boldsymbol{s}_2}{\hat{\lambda}_n-z_2}dz_2dz_1\nonumber\\ \\ &=&\frac{1}{2i\pi}\oint_{\mathcal{C}_{j_1}^-}\sum_{n\in\mathcal{M}_{j_2}}\frac{1}{2i\pi}\oint_{\mathcal{C}_{j_2}^-} f^{(2)}_{n}(z_2)dz_2dz_1 \\ &=& \frac{1}{2i\pi}\oint_{\mathcal{C}_{j_1}^-}\sum_{n\in\mathcal{M}_{j_2}}\left[ -\mathrm{Res}\left(f^{(2)}_{n}(z_2),\hat{\lambda}_{n}\right)\right] dz_1 \end{eqnarray}\normalsize where $\mathrm{Res}\left(f^{(2)}_{n}(z_2),\hat{\lambda}_{n}\right)$ is the residue of $f^{(2)}_{n}(z_2)$ at $\hat{\lambda}_n$. Thus, using the residue theorem and residue calculus:\small \begin{eqnarray} \!\!\!\!\!\!\!\! A\!\!\!\!\! &=&\!\!\!\!\!\frac{1}{2i\pi}\oint_{\mathcal{C}_{j_1}^-}\sum_{n\in\mathcal{M}_{j_2}}\left[- \underset{z_2\rightarrow\hat{\lambda}_n}{\mathrm{lim}}(z_2-\hat{\lambda}_n)f^{(2)}_{n}(z_2)\right] dz_1\\ &=&\!\!\!\!\! \frac{1}{2i\pi}\!\oint_{\mathcal{C}_{j_1}^-}\!\sum_{n\in\mathcal{M}_{j_2}}\!\!\!\underset{z_2\rightarrow\hat{\lambda}_n}{\mathrm{lim}}\!\!\!\left\lbrace (\hat{\lambda}_n-z_2)\frac{\boldsymbol{s}_1^H\mathbf{G}(z_1)\mathbf{B}\hat{\boldsymbol{u}}_{n}\hat{\boldsymbol{u}}_{n}^H\boldsymbol{s}_2}{\hat{\lambda}_n-z_2}\right\rbrace dz_1\nonumber \\ &&\\ &=&\!\!\!\!\! \frac{1}{2i\pi}\oint_{\mathcal{C}_{j_1}^-}\sum_{n\in\mathcal{M}_{j_2}}\underset{z_2\rightarrow\hat{\lambda}_n}{\mathrm{lim}}\left\lbrace\boldsymbol{s}_1^H\mathbf{G}(z_1)\mathbf{B}\hat{\boldsymbol{u}}_{n}\hat{\boldsymbol{u}}_{n}^H\boldsymbol{s}_2\right\rbrace dz_1 \end{eqnarray} \begin{eqnarray} \!\!\!\!\!\!\!\! A\!\!\!\!\!&=&\!\!\!\!\! \frac{1}{2i\pi}\oint_{\mathcal{C}_{j_1}^-}\boldsymbol{s}_1^H\mathbf{G}(z_1)\mathbf{B}\sum_{n\in\mathcal{M}_{j_2}}\hat{\boldsymbol{u}}_{n}\hat{\boldsymbol{u}}_{n}^H\boldsymbol{s}_2dz_1\\ &=&\!\!\!\!\! \frac{1}{2i\pi}\oint_{\mathcal{C}_{j_1}^-}\boldsymbol{s}_1^H\left(\hat{\mathbf{R}}-z_1\mathbf{I}_m\right)^{-1}\mathbf{B}\hat{\bm{\Pi}}_{j_2}\boldsymbol{s}_2dz_1\\ &=&\!\!\!\!\! \frac{1}{2i\pi}\oint_{\mathcal{C}_{j_1}^-}\boldsymbol{s}_1^H\left(\sum_{n=1}^{m}\hat{\lambda}_n\hat{\boldsymbol{u}}_{n}\hat{\boldsymbol{u}}_{n}^H-z_1\mathbf{I}_m\right)^{-1}\mathbf{B}\hat{\bm{\Pi}}_{j_2}\boldsymbol{s}_2dz_1\\ &=&\!\!\!\!\!
\frac{1}{2i\pi}\oint_{\mathcal{C}_{j_1}^-}\sum_{n=1}^{m}\frac{\boldsymbol{s}_1^H\hat{\boldsymbol{u}}_{n}\hat{\boldsymbol{u}}_{n}^H\mathbf{B}\hat{\bm{\Pi}}_{j_2}\boldsymbol{s}_2}{\hat{\lambda}_n-z_1}dz_1\\ &=&\!\!\!\!\!\sum_{n=1}^{m}\frac{1}{2i\pi}\oint_{\mathcal{C}_{j_1}^-}f^{(1)}_n(z_1)dz_1 \end{eqnarray}\normalsize where $f_n^{(1)}(z_1)=\frac{\boldsymbol{s}_1^H\hat{\boldsymbol{u}}_{n}\hat{\boldsymbol{u}}_{n}^H\mathbf{B}\hat{\bm{\Pi}}_{j_2}\boldsymbol{s}_2}{\hat{\lambda}_n-z_1}$. Similarly, $f_n^{(1)}(z_1)$ has a single simple pole $\hat{\lambda}_n$, which is encompassed by $\mathcal{C}_{j_1}^-$ for the indexes $n\in\mathcal{M}_{j_1}$. Thus:\small \begin{eqnarray} A&=& \sum_{n\in\mathcal{M}_{j_1}}\frac{1}{2i\pi}\oint_{\mathcal{C}_{j_1}^-}f^{(1)}_{n}(z_1)dz_1\\ &=& -\sum_{n\in\mathcal{M}_{j_1}}\mathrm{Res}\left(f^{(1)}_{n}(z_1),\hat{\lambda}_{n}\right)\\ &=&-\sum_{n\in\mathcal{M}_{j_1}} \underset{z_1\rightarrow\hat{\lambda}_{n}}{\mathrm{lim}}(z_1-\hat{\lambda}_{n})f^{(1)}_{n}(z_1)\\ &=& \sum_{n\in\mathcal{M}_{j_1}}\underset{z_1\rightarrow\hat{\lambda}_{n}}{\mathrm{lim}}\left\lbrace (\hat{\lambda}_{n}-z_1)\frac{\boldsymbol{s}_1^H\hat{\boldsymbol{u}}_{n}\hat{\boldsymbol{u}}_{n}^H\mathbf{B}\hat{\bm{\Pi}}_{j_2}\boldsymbol{s}_2}{\hat{\lambda}_n-z_1}\right\rbrace \\ &=& \sum_{n\in\mathcal{M}_{j_1}}\underset{z_1\rightarrow\hat{\lambda}_{n}}{\mathrm{lim}}\left\lbrace \boldsymbol{s}_1^H\hat{\boldsymbol{u}}_{n}\hat{\boldsymbol{u}}_{n}^H\mathbf{B}\hat{\bm{\Pi}}_{j_2}\boldsymbol{s}_2\right\rbrace\\ &=&\boldsymbol{s}_1^H\sum_{n\in\mathcal{M}_{j_1}}\hat{\boldsymbol{u}}_{n}\hat{\boldsymbol{u}}_{n}^H\mathbf{B}\hat{\bm{\Pi}}_{j_2}\boldsymbol{s}_2=\boldsymbol{s}_1^H\hat{\bm{\Pi}}_{j_1}\mathbf{B}\hat{\bm{\Pi}}_{j_2}\boldsymbol{s}_2 \end{eqnarray}\normalsize Consequently, $\hat{\eta}(j_1,j_2)=A$ for $j_1\neq j_2$.\\ \indent Then, if $j_1=j_2=j$ and using the same arguments as previously, one has:\small \begin{eqnarray} \boldsymbol{s}_1^H\hat{\bm{\Pi}}_{j}\mathbf{B}\hat{\bm{\Pi}}_{j}\boldsymbol{s}_2=\dfrac{1}{2i\pi}\oint_{\mathcal{C}_{j}^-}\boldsymbol{s}_1^H\sum_{n=1}^m\dfrac{\hat{\boldsymbol{u}}_{n}\hat{\boldsymbol{u}}_{n}^H\mathbf{B}\hat{\boldsymbol{u}}_{n}\hat{\boldsymbol{u}}_{n}^H}{\hat{\lambda}_n-z}\boldsymbol{s}_2dz\label{eq:FQstruct_1int} \end{eqnarray}\normalsize However, the remainder of the proof is based on the fact that the resolvent $\mathbf{G}(z)$ of the SCM appears in the complex integral, which is not the case in the previous equation. Consequently, noticing that:\small \begin{eqnarray} g(\hat{\bm{\Pi}}_{j})=\dfrac{1}{2i\pi}\oint_{\mathcal{C}_{j}^-}\sum_{n=1}^m\dfrac{g(\hat{\bm{\Pi}}_{n})}{\hat{\lambda}_n-z}dz \end{eqnarray}\normalsize where $g(.)$ is a functional, Eq.(\ref{eq:FQstruct_1int}) is equivalent to Eq.(\ref{Eq:cauchy_integral}). As a consequence, $\forall j_1,j_2\in[\![1,r+1]\!]$:\small \begin{eqnarray} \hat{\eta}(j_1,j_2)&=&\dfrac{1}{(2i\pi)^2}\oint_{\mathcal{C}_{j_1}^-}\oint_{\mathcal{C}_{j_2}^-}\boldsymbol{s}_1^H\left(\hat{\mathbf{R}}-z_1\mathbf{I}_m\right)^{-1}\nonumber\\ &&\times\mathbf{B}\left(\hat{\mathbf{R}}-z_2\mathbf{I}_m\right)^{-1}\boldsymbol{s}_2dz_1dz_2 \end{eqnarray}\normalsize
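\indent Although it plays no role in the proof, the Cauchy-integral representation of Proposition 2 can be verified numerically on a small example, discretizing two negatively oriented circles that each enclose a single sample eigenvalue.
\begin{verbatim}
# Numerical sanity check (illustration only) of the Cauchy-integral
# representation: the discretized double contour integral recovers
# s1^H Pi_hat_{j1} B Pi_hat_{j2} s2 for simple eigenvalues.
import numpy as np

rng = np.random.default_rng(2)
m = 6
W = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
R_hat = W @ W.conj().T / m
lam, u = np.linalg.eigh(R_hat)                  # ascending, generically simple
B = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
s1 = rng.standard_normal(m) + 1j * rng.standard_normal(m)
s2 = rng.standard_normal(m) + 1j * rng.standard_normal(m)
j1, j2 = 5, 3

def circle(center, radius, n=400):
    # negatively oriented circle: points z and increments dz
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(-1j * t)
    dz = -1j * radius * np.exp(-1j * t) * (2.0 * np.pi / n)
    return z, dz

rad = 0.25 * np.min(np.diff(lam))               # one eigenvalue per circle
z1s, dz1s = circle(lam[j1], rad)
z2s, dz2s = circle(lam[j2], rad)
G = lambda z: np.linalg.inv(R_hat - z * np.eye(m))
G2s2 = np.array([G(z) @ s2 for z in z2s])       # precompute G(z2) s2
acc = 0.0 + 0.0j
for z1, dz1 in zip(z1s, dz1s):
    v1 = s1.conj() @ G(z1) @ B
    acc += dz1 * np.sum(dz2s * (G2s2 @ v1))
integral = acc / (2j * np.pi) ** 2
direct = (s1.conj() @ u[:, j1]) * (u[:, j1].conj() @ B @ u[:, j2]) \
         * (u[:, j2].conj() @ s2)
print(abs(integral - direct))                   # ~ 0 up to quadrature error
\end{verbatim}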
\subsection{Development of the complex integral}\label{subsec:A.3} \indent Next, one wants to split the previous line integral into several line integrals, some of which will tend to 0. Thus, from~\cite{CoHa13}, with $k\in\left\lbrace 1,2\right\rbrace $, one can write:\small \begin{eqnarray} (\hat{\mathbf{R}}-z_k\mathbf{I}_m)^{-1}\!\!\!\! &=&\!\!\!\! (\mathbf{I}_m+\mathbf{P})^{-1/2}\left[\mathbf{Q}(z_k)-z_k\mathbf{Q}(z_k)\mathbf{U}\right.\nonumber\\ \!\!\!\! &&\!\!\!\!\left.\times\mathbf{\hat{H}}(z_k)^{-1}\boldsymbol{\Omega}(\mathbf{I}_m+\boldsymbol{\Omega})^{-1}\mathbf{U}^H\mathbf{Q}(z_k) \right] \nonumber\\ \!\!\!\! &&\!\!\!\! \times(\mathbf{I}_m+\mathbf{P})^{-1/2}\label{Eq:resolvant} \end{eqnarray}\normalsize \noindent with\small \begin{eqnarray} \mathbf{Q}(z_k)&=&(\tfrac{1}{K}\mathbf{Y}\mathbf{Y}^H-z_k\mathbf{I}_m)^{-1}\\ \mathbf{\hat{H}}(z_k)&=&\mathbf{I}_m+z_k\boldsymbol{\Omega}(\mathbf{I}_m+\boldsymbol{\Omega})^{-1}\mathbf{U}^H\mathbf{Q}(z_k)\mathbf{U} \end{eqnarray}\normalsize \noindent Then, replacing $(\hat{\mathbf{R}}-z_k\mathbf{I}_m)^{-1}$ by Eq.(\ref{Eq:resolvant}) in Eq.(\ref{Eq:cauchy_integral}) and expanding the result, one obtains:\small \begin{eqnarray} \hat{\eta}(j_1,j_2)\!\!\!\! &=& \dfrac{1}{(2i\pi)^2}\oint_{\mathcal{C}_{j_1}^-}\oint_{\mathcal{C}_{j_2}^-}\boldsymbol{s}_1^H \mathbf{E}(z_1)\mathbf{B}\mathbf{E}(z_2)\boldsymbol{s}_2dz_1dz_2\nonumber\\ &-&\!\!\!\!\dfrac{1}{(2i\pi)^2}\oint_{\mathcal{C}_{j_1}^-}\oint_{\mathcal{C}_{j_2}^-}\left[ \mathbf{\hat{e}}_1^H(z_1) \mathbf{\hat{H}}(z_1)^{-1}\mathbf{\hat{C}}_2(z_1)\mathbf{B}\right]\nonumber\\ && \qquad\qquad\qquad\qquad\qquad\quad\times \mathbf{E}(z_2)\boldsymbol{s}_2dz_1dz_2\nonumber\\ &-&\!\!\!\!\dfrac{1}{(2i\pi)^2}\oint_{\mathcal{C}_{j_1}^-}\oint_{\mathcal{C}_{j_2}^-}\boldsymbol{s}_1^H\mathbf{E}(z_1) \left[\mathbf{B}\mathbf{\hat{C}}_1^H(z_2)\mathbf{\hat{H}}(z_2)^{-1}\right.\nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\quad\times\left.\mathbf{\hat{e}}_2(z_2) \right]dz_1dz_2\nonumber\\ &+&\!\!\!\!\dfrac{1}{(2i\pi)^2}\oint_{\mathcal{C}_{j_1}^-}\oint_{\mathcal{C}_{j_2}^-}\mathbf{\hat{e}}_1^H(z_1) \mathbf{\hat{H}}(z_1)^{-1}\mathbf{\hat{C}}_2(z_1)\mathbf{B}\nonumber\\ &&\qquad\quad\times\mathbf{\hat{C}}_1^H(z_2)\mathbf{\hat{H}}(z_2)^{-1}\mathbf{\hat{e}}_2(z_2)dz_1dz_2\\ &=& D_1-D_2-D_3+D_4 \end{eqnarray}\normalsize \noindent with\small \begin{eqnarray} \!\!\!\! \mathbf{E}(z)\!\!\!\! &=&\!\!\!\!(\mathbf{I}_m+\mathbf{P})^{-1/2}\mathbf{Q}(z)(\mathbf{I}_m+\mathbf{P})^{-1/2}\\ \!\!\!\!\mathbf{\hat{e}}_1^H(z)\!\!\!\! &=&\!\!\!\! \boldsymbol{s}_1^H(\mathbf{I}_m+\mathbf{P})^{-1/2}z\mathbf{Q}(z)\mathbf{U}\\ \!\!\!\!\mathbf{\hat{C}}_2(z)\!\!\!\! &=&\!\!\!\! \boldsymbol{\Omega}(\mathbf{I}_m+\boldsymbol{\Omega})^{-1}\mathbf{U}^H\mathbf{Q}(z) (\mathbf{I}_m+\mathbf{P})^{-1/2}\\ \!\!\!\!\mathbf{\hat{C}}_1^H(z)\!\!\!\! &=&\!\!\!\! (\mathbf{I}_m+\mathbf{P})^{-1/2}z\mathbf{Q}(z)\mathbf{U}\\ \!\!\!\!\mathbf{\hat{e}}_2(z)\!\!\!\! &=&\!\!\!\! \boldsymbol{\Omega}(\mathbf{I}_m+\boldsymbol{\Omega})^{-1}\mathbf{U}^H\mathbf{Q}(z) (\mathbf{I}_m+\mathbf{P})^{-1/2}\boldsymbol{s}_2 \end{eqnarray}\normalsize \subsection{Determination of the deterministic complex integral equivalent}\label{subsec:A.4} \indent The convergence of the terms $D_1$ to $D_4$ now has to be studied. Some of them tend to 0 and the remaining terms tend to a deterministic integral equivalent.\\ \indent \textit{\textbf{Proposition 3:}} Let $\mathbf{B}$ be an $m\times m$ deterministic complex matrix with a uniformly bounded spectral norm for all $m$. Then, under (\textbf{As1}-\textbf{As5}, \textbf{As6.S}) and the \textit{spiked} model, $\forall j_1,j_2\in[\![1,r+1]\!]$, $\hat{\eta}(j_1,j_2)- \eta(j_1,j_2)\overset{\mathrm{a.s.}}{\longrightarrow}0$ with\small \begin{eqnarray} \eta(j_1,j_2)\!\!\!\!
&=&\!\!\!\!\dfrac{1}{(2i\pi)^2}\oint_{\gamma_{j_1}^-}\oint_{\gamma_{j_2}^-}\mathbf{e}_1^H(z_1) \mathbf{H}(z_1)^{-1}\mathbf{C}_2(z_1)\nonumber\\ &&\quad\times \mathbf{B}\mathbf{C}_1^H(z_2)\mathbf{H}(z_2)^{-1}\mathbf{e}_2(z_2)dz_1dz_2\label{eq:eta} \end{eqnarray}\normalsize \noindent where $\gamma_{j}^-$ is a deterministic negatively oriented circle only enclosing $\tau_j$ (cf. Eq.(\ref{eq:rho})) and\footnotesize \begin{eqnarray} \mathbf{H}(z)&=&\mathbf{I}_m+z\bar{b}_m(z)\boldsymbol{\Omega}(\mathbf{I}_m+\boldsymbol{\Omega})^{-1}\\ \mathbf{e}_1^H(z)&=&z\bar{b}_m(z)\boldsymbol{s}_1^H(\mathbf{I}_m+\mathbf{P})^{-1/2}\mathbf{U}\\ \mathbf{C}_2(z)&=&\bar{b}_m(z)\boldsymbol{\Omega} (\mathbf{I}_m+\boldsymbol{\Omega})^{-1}\mathbf{U}^H (\mathbf{I}_m+\mathbf{P})^{-1/2} \\ \mathbf{C}_1^H(z)&=&z\bar{b}_m(z) (\mathbf{I}_m+\mathbf{P})^{-1/2}\mathbf{U} \\ \mathbf{e}_2(z)&=&\bar{b}_m(z) \boldsymbol{\Omega}(\mathbf{I}_m+\boldsymbol{\Omega})^{-1}\mathbf{U}^H (\mathbf{I}_m+\mathbf{P})^{-1/2}\boldsymbol{s}_2 \end{eqnarray}\normalsize \begin{flushright} \vspace{-0.3cm}$\blacksquare$ \end{flushright} \indent \textit{Proof:} We first recall that we are interested in the indexes $j_1,j_2\in[\![1,r]\!]$. Then, the function $\mathbf{E}(z)$ in $D_1$, $D_2$ and $D_3$ can be rewritten as: \begin{eqnarray} \mathbf{E}(z)=\left(\hat{\mathbf{R}}-z(\mathbf{I}_m+\mathbf{P})\right)^{-1}=\sum_{n=1}^m\dfrac{\hat{\boldsymbol{u}}_n\hat{\boldsymbol{u}}_n^H}{\hat{\lambda}_n-z(1+\omega_n)} \end{eqnarray} Thus, $\mathbf{E}(z_1)$ (resp. $\mathbf{E}(z_2)$) has simple poles at $\frac{\hat{\lambda}_n}{1+\omega_n}$, with $\frac{\hat{\lambda}_n}{1+\omega_n}\neq \hat{\lambda}_n$ when $\omega_n\neq 0$, i.e. for $n\in[\![1,r]\!]$ ((\textbf{As5, As6.S}) are verified and $\hat{f}(x)\rightarrow f(x)$ with probability one for all large $m,K$ at a fixed ratio $c$). As a consequence, $\forall j_1,j_2\in[\![1,r]\!]$, $\mathcal{C}_{j_1}^-$ (resp. $\mathcal{C}_{j_2}^-$) does not encompass any pole of $\mathbf{E}(z_1)$ (resp. $\mathbf{E}(z_2)$). Thus, $D_1=D_2=D_3=0$ and: \begin{eqnarray} \hat{\eta}(j_1,j_2)\!\!\!\! &=&\!\!\!\!\dfrac{1}{(2i\pi)^2}\oint_{\mathcal{C}_{j_1}^-}\oint_{\mathcal{C}_{j_2}^-}\mathbf{\hat{e}}_1^H(z_1) \mathbf{\hat{H}}(z_1)^{-1}\mathbf{\hat{C}}_2(z_1)\mathbf{B}\nonumber\\ &&\qquad\times\mathbf{\hat{C}}_1^H(z_2)\mathbf{\hat{H}}(z_2)^{-1}\mathbf{\hat{e}}_2(z_2)dz_1dz_2\label{eq:D4} \end{eqnarray} \indent We then determine a deterministic equivalent of Eq.(\ref{eq:D4}), i.e. we study its convergence in the large dimensional regime, using Lemma 5 of~\cite{HaLoMeNaVa13}: \begin{eqnarray} \underset{z\in\mathcal{C}}{\mathrm{sup}}\Vert\mathbf{U}^H(\mathbf{Q}(z)-\bar{b}_m(z)\mathbf{I}_m)\mathbf{U}\Vert\underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\mathrm{a.s.}}{\longrightarrow}}0\label{eq:lemma5} \end{eqnarray} \noindent with $\mathcal{C}$ a closed contour of $\mathbb{C}$. Indeed, one can notice that: \small \begin{eqnarray} \!\!\!\! \mathbf{\hat{H}}(z)\!\!\!\! &=&\!\!\!\!\mathbf{I}_m+z\boldsymbol{\Omega}(\mathbf{I}_m+\boldsymbol{\Omega})^{-1}[\mathbf{U}^H\mathbf{Q}(z)\mathbf{U}]\\ \!\!\!\!\mathbf{\hat{e}}_1^H(z)\!\!\!\! &=&\!\!\!\! \boldsymbol{s}_1^H(\mathbf{I}_m+\mathbf{P})^{-1/2}z\mathbf{U}[\mathbf{U}^H\mathbf{Q}(z)\mathbf{U}]\\ \!\!\!\!\mathbf{\hat{C}}_2(z)\!\!\!\! &=&\!\!\!\! \boldsymbol{\Omega}(\mathbf{I}_m+\boldsymbol{\Omega})^{-1}[\mathbf{U}^H\mathbf{Q}(z)\mathbf{U}]\mathbf{U}^H (\mathbf{I}_m+\mathbf{P})^{-1/2}\\ \!\!\!\!\mathbf{\hat{C}}_1^H(z)\!\!\!\! &=&\!\!\!\!
(\mathbf{I}_m+\mathbf{P})^{-1/2}z\mathbf{U}[\mathbf{U}^H\mathbf{Q}(z)\mathbf{U}]\\ \!\!\!\!\mathbf{\hat{e}}_2(z)\!\!\!\! &=&\!\!\!\! \boldsymbol{\Omega}(\mathbf{I}_m+\boldsymbol{\Omega})^{-1}[\mathbf{U}^H\mathbf{Q}(z) \mathbf{U}]\mathbf{U}^H(\mathbf{I}_m+\mathbf{P})^{-1/2}\boldsymbol{s}_2 \end{eqnarray}\normalsize \noindent Thus, from Eq.(\ref{eq:lemma5}), one obtains: \footnotesize \begin{eqnarray} \!\!\!\!\!\!\!\!\mathbf{\hat{H}}(z)\!\!\!\!\!\!\!\!\!\!\!\! &\underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\mathrm{a.s.}}{\longrightarrow}}&\!\!\!\!\!\!\!\!\!\!\!\! \mathbf{H}(z)=\mathbf{I}_m+z\bar{b}_m(z)\boldsymbol{\Omega}(\mathbf{I}_m+\boldsymbol{\Omega})^{-1}\\ \!\!\!\!\!\!\!\!\mathbf{\hat{e}}_1^H(z)\!\!\!\!\!\!\!\!\!\!\!\! &\underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\mathrm{a.s.}}{\longrightarrow}}&\!\!\!\!\!\!\!\!\!\!\!\!\mathbf{e}_1^H(z)=z\bar{b}_m(z)\boldsymbol{s}_1^H(\mathbf{I}_m+\mathbf{P})^{-1/2}\mathbf{U}\\ \!\!\!\!\!\!\!\!\mathbf{\hat{C}}_2(z)\!\!\!\!\!\!\!\!\!\!\!\! &\underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\mathrm{a.s.}}{\longrightarrow}}&\!\!\!\!\!\!\!\!\!\!\!\! \mathbf{C}_2(z)=\bar{b}_m(z)\boldsymbol{\Omega} (\mathbf{I}_m+\boldsymbol{\Omega})^{-1}\mathbf{U}^H (\mathbf{I}_m+\mathbf{P})^{-1/2} \\ \!\!\!\!\!\!\!\!\mathbf{\hat{C}}_1^H(z)\!\!\!\!\!\!\!\!\!\!\!\! &\underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\mathrm{a.s.}}{\longrightarrow}}&\!\!\!\!\!\!\!\!\!\!\!\! \mathbf{C}_1^H(z)=z\bar{b}_m(z) (\mathbf{I}_m+\mathbf{P})^{-1/2}\mathbf{U} \\ \!\!\!\!\!\!\!\!\mathbf{\hat{e}}_2(z)\!\!\!\!\!\!\!\!\!\!\!\! &\underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\mathrm{a.s.}}{\longrightarrow}}&\!\!\!\!\!\!\!\!\!\!\!\! \mathbf{e}_2(z)=\bar{b}_m(z) \boldsymbol{\Omega}(\mathbf{I}_m+\boldsymbol{\Omega})^{-1}\mathbf{U}^H (\mathbf{I}_m+\mathbf{P})^{-1/2}\boldsymbol{s}_2 \end{eqnarray}\normalsize \noindent As a result, $\hat{\eta}(j_1,j_2)- \eta(j_1,j_2)\overset{\mathrm{a.s.}}{\longrightarrow}0$ with\small \begin{eqnarray} \eta(j_1,j_2)\!\!\!\! &=&\!\!\!\!\dfrac{1}{(2i\pi)^2}\oint_{\gamma_{j_1}^-}\oint_{\gamma_{j_2}^-}\mathbf{e}_1^H(z_1) \mathbf{H}(z_1)^{-1}\mathbf{C}_2(z_1)\nonumber\\ &&\quad\times \mathbf{B}\mathbf{C}_1^H(z_2)\mathbf{H}(z_2)^{-1}\mathbf{e}_2(z_2)dz_1dz_2 \end{eqnarray}\normalsize \noindent where $\gamma_{j}^-$ is a deterministic negatively oriented circle only enclosing $\tau_j$ (cf. Eq.(\ref{eq:rho})). \subsection{Determination of the expression of the deterministic equivalent}\label{subsec:A.5} \indent Let us now find the expression of the deterministic equivalent $\eta(j_1,j_2)$ as a function of the eigenvalues and eigenvectors of the covariance matrix $\mathbf{R}$.\\ \indent \textit{\textbf{Proposition 4:}} Let $\mathbf{B}$ be an $m\times m$ deterministic complex matrix with a uniformly bounded spectral norm for all $m$. Then, under (\textbf{As1}-\textbf{As5}, \textbf{As6.S}) and the \textit{spiked} model, \small \begin{eqnarray} \eta(j_1,j_2)=\chi_{j_1}\chi_{j_2}\boldsymbol{s}_1^H\bm{\Pi}_{j_1}\mathbf{B}\bm{\Pi}_{j_2} \boldsymbol{s}_2 \label{eq:FQ3} \end{eqnarray}\normalsize with $\chi_j=\frac{1-c\omega_j^{-2}}{1+c\omega_j^{-1}}$ and $\left\lbrace j_1,j_2\right\rbrace\in[\![1,r]\!]^2$.
\begin{flushright} \vspace{-0.3cm}$\blacksquare$ \end{flushright} \indent \textit{Proof:} We first rewrite Eq.(\ref{eq:eta}) as: \small \begin{eqnarray} \eta(j_1,j_2) =\dfrac{1}{2i\pi}\oint_{\gamma_{j_2}^-}\mathbf{g}\times \mathbf{B}\mathbf{C}_1^H(z_2)\mathbf{H}(z_2)^{-1}\mathbf{e}_2(z_2)dz_2\label{eq:eta2} \end{eqnarray}\normalsize with\small \begin{eqnarray} \mathbf{g}=\dfrac{1}{2i\pi}\oint_{\gamma_{j_1}^-}\mathbf{e}_1^H(z_1) \mathbf{H}(z_1)^{-1}\mathbf{C}_2(z_1)dz_1 \end{eqnarray}\normalsize \noindent in order to first determine $\mathbf{g}$.\\ \noindent We recall that, in our case, $\omega_1>\cdots>\omega_r>\omega_{r+1}=0$. After an eigendecomposition of $\mathbf{e}_1^H(z_1)$ and $\mathbf{C}_2(z_1)$, and noticing from~\cite{CoHa13} that:\small \begin{eqnarray} \!\! \mathbf{H}(z)^{-1}\!\!\!\!\!\! &=&\!\!\!\!\!\!\mathrm{diag}\left(\tfrac{1}{1+z\bar{b}_m(z)\frac{\omega_1}{1+\omega_1}},\cdots,\tfrac{1}{1+z\bar{b}_m(z)\frac{\omega_{r+1}}{1+\omega_{r+1}}}\right)\\ \!\!\!\!\!\! &=&\!\!\!\!\!\!\sum_{l=1}^{r+1}\dfrac{1}{1+z\bar{b}_m(z)\frac{\omega_l}{1+\omega_l}}\boldsymbol{\mathcal{I}}_l \label{eq:H} \end{eqnarray}\normalsize \noindent with\small \begin{eqnarray} \boldsymbol{\mathcal{I}}_l=\begin{bmatrix} \boldsymbol{\mathbbm{O}}_{\mathcal{K}_1+\ldots +\mathcal{K}_{l-1}} && \\ &\mathbf{I}_{\mathcal{K}_l}& \\ && \boldsymbol{\mathbbm{O}}_{\mathcal{K}_{l+1}+\ldots +\mathcal{K}_{r+1}} \end{bmatrix}\in \mathbb{C}^{m\times m} \end{eqnarray}\normalsize \noindent one obtains:\small \begin{eqnarray} \mathbf{e}_1^H(z_1) \mathbf{H}(z_1)^{-1}\mathbf{C}_2(z_1)= \boldsymbol{s}_1^H\sum_{l=1}^{r+1}\tfrac{\omega_l \bm{\Pi}_l}{(1+\omega_l)^2}\tfrac{z_1\bar{b}_m^2(z_1)}{1+z_1\bar{b}_m(z_1)\frac{\omega_l}{1+\omega_l}} \end{eqnarray}\normalsize Thus,\small \begin{eqnarray} \mathbf{g}&=&\dfrac{1}{2i\pi}\oint_{\gamma_{j_1}^-}\boldsymbol{s}_1^H\sum_{l=1}^{r+1}\tfrac{\omega_l \bm{\Pi}_l}{(1+\omega_l)^2}\tfrac{z_1\bar{b}_m^2(z_1)}{1+z_1\bar{b}_m(z_1)\frac{\omega_l}{1+\omega_l}}dz_1\\ &=&\sum_{l=1}^{r+1}\tfrac{\omega_l }{(1+\omega_l)^2}\boldsymbol{s}_1^H\bm{\Pi}_l\dfrac{1}{2i\pi}\oint_{\gamma_{j_1}^-}\tfrac{z_1\bar{b}_m^2(z_1)}{1+z_1\bar{b}_m(z_1)\frac{\omega_l}{1+\omega_l}}dz_1\\ &=&\sum_{l=1}^{r+1}\tfrac{1}{1+\omega_l}\boldsymbol{s}_1^H\bm{\Pi}_l\dfrac{1}{2i\pi}\oint_{\gamma_{j_1}^-}\tfrac{z_1\bar{b}_m^2(z_1)}{\frac{1+\omega_l}{\omega_l}+z_1\bar{b}_m(z_1)}dz_1 \end{eqnarray}\normalsize From~\cite{CoHa13}, within $\gamma_{j_1}^-$, the equation $\frac{1+\omega_l}{\omega_l}+z_1\bar{b}_m(z_1)=0$ holds only for $l=j_1$, at $z_1=\tau_{j_1}$, with $z_1\bar{b}_m^2(z_1)\vert_{z_1=\tau_{j_1}}\neq 0$, $j_1\in[\![1,r]\!]$. Hence, inside $\gamma_{j_1}^-$, $\tfrac{z_1\bar{b}_m^2(z_1)}{\frac{1+\omega_l}{\omega_l}+z_1\bar{b}_m(z_1)}$ has a single simple pole at $\tau_{j_1}$, obtained for $l=j_1$.
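All the residue evaluations in this appendix boil down to the Cauchy integral identity for a simple pole enclosed by a negatively oriented circle, namely $\frac{1}{2i\pi}\oint_{\mathcal{C}^-}\frac{f(z)}{\lambda-z}\,dz=f(\lambda)$ for $f$ holomorphic inside $\mathcal{C}^-$. The following minimal numerical sketch (Python; the test function and numerical values are hypothetical, not taken from the derivation) checks this identity by direct quadrature:

\begin{verbatim}
import numpy as np

# Check (1/2i*pi) oint_{C^-} f(z)/(lam - z) dz = f(lam), where C^- is a
# negatively (clockwise) oriented circle enclosing only the pole z = lam.
lam, r = 2.0, 0.5
f = lambda z: z * np.exp(-z)            # hypothetical smooth test function
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
z = lam + r * np.exp(-1j * theta)       # clockwise parameterization
dz = -1j * r * np.exp(-1j * theta)      # dz/dtheta along the contour
val = np.mean(f(z) / (lam - z) * dz) * 2.0 * np.pi / (2j * np.pi)
print(val.real, f(lam))                 # both approximately 0.27067
\end{verbatim}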
As a consequence, with $h(z)=z\bar{b}_m(z)$,\small \begin{eqnarray} \mathbf{g}&=&\tfrac{1}{1+\omega_{j_1}}\boldsymbol{s}_1^H\bm{\Pi}_{j_1}\dfrac{1}{2i\pi}\oint_{\gamma_{j_1}^-}\tfrac{z_1\bar{b}_m^2(z_1)}{\frac{1+\omega_{j_1}}{\omega_{j_1}}+z_1\bar{b}_m(z_1)}dz_1\\ &=&\tfrac{1}{1+\omega_{j_1}}\boldsymbol{s}_1^H\bm{\Pi}_{j_1}\left[-\mathrm{Res}\left(\dfrac{h(z_1)\bar{b}_m(z_1)}{\frac{1+\omega_{j_1}}{\omega_{j_1}}+h(z_1)},\tau_{j_1}\right) \right] \\ &=&\tfrac{1}{1+\omega_{j_1}}\boldsymbol{s}_1^H\bm{\Pi}_{j_1}\left[-\dfrac{h(\tau_{j_1})\bar{b}_m(\tau_{j_1})}{\left.\left( \frac{1+\omega_{j_1}}{\omega_{j_1}}+h(z_1)\right)'\right\vert_{z_1=\tau_{j_1}}}\right]\\ &=&\tfrac{1}{1+\omega_{j_1}}\boldsymbol{s}_1^H\bm{\Pi}_{j_1}\left[-\dfrac{h(\tau_{j_1})\bar{b}_m(\tau_{j_1})}{h'(\tau_{j_1})}\right] \end{eqnarray}\normalsize by residue calculus, where $(.)'=\frac{d(.)}{dz}$. Then, observing that $\frac{1}{1+\omega_{j_1}}=\frac{1+h(\tau_{j_1})}{h(\tau_{j_1})}$ as $\frac{1+\omega_{j_1}}{\omega_{j_1}}+h(\tau_{j_1})=0$, one finally has\small \begin{eqnarray} \mathbf{g}&=&-\boldsymbol{s}_1^H\bm{\Pi}_{j_1}\frac{1+h(\tau_{j_1})}{h(\tau_{j_1})}\dfrac{h(\tau_{j_1})\bar{b}_m(\tau_{j_1})}{h'(\tau_{j_1})}\\ &=&-\boldsymbol{s}_1^H\bm{\Pi}_{j_1}\dfrac{(1+h(\tau_{j_1}))\bar{b}_m(\tau_{j_1})}{h'(\tau_{j_1})} \end{eqnarray}\normalsize Then, similarly, we write Eq.(\ref{eq:eta2}) as:\small \begin{eqnarray} \eta(j_1,j_2) =\mathbf{g} \mathbf{B}\times\mathbf{\tilde{g}} \end{eqnarray}\normalsize with\small \begin{eqnarray} \mathbf{\tilde{g}}=\dfrac{1}{2i\pi}\oint_{\gamma_{j_2}^-}\mathbf{C}_1^H(z_2)\mathbf{H}(z_2)^{-1}\mathbf{e}_2(z_2)dz_2 \end{eqnarray}\normalsize \noindent Thus, proceeding as for $\mathbf{g}$, one deduces that: \small \begin{eqnarray} \mathbf{\tilde{g}}=-\dfrac{(1+h(\tau_{j_2}))\bar{b}_m(\tau_{j_2})}{h'(\tau_{j_2})}\bm{\Pi}_{j_2}\boldsymbol{s}_2 \end{eqnarray}\normalsize As a result:\small \begin{eqnarray} \eta(j_1,j_2)=\xi(\tau_{j_1})\xi(\tau_{j_2})\boldsymbol{s}_1^H\bm{\Pi}_{j_1}\mathbf{B}\bm{\Pi}_{j_2}\boldsymbol{s}_2\label{Eq:3} \end{eqnarray}\normalsize \noindent with\small \begin{eqnarray} \xi(\tau_{j})=\dfrac{(1+h(\tau_j))\bar{b}_m(\tau_j)}{h'(\tau_j)} \end{eqnarray}\normalsize \indent Finally, the last step consists in expressing $\xi(\tau_{j})$ as a function of $\omega_j$. Using Corollary 2 from~\cite{CoHa13}, one expresses $\xi(\tau_{j})$ as:\small \begin{eqnarray} \xi(\tau_{j})=\chi_j=\frac{1-c\omega_j^{-2}}{1+c\omega_j^{-1}} \end{eqnarray}\normalsize As a consequence,\small \begin{eqnarray} \hat{\eta}(j_1,j_2)\underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\mathrm{a.s.}}{\longrightarrow}}\eta(j_1,j_2)=\chi_{j_1}\chi_{j_2}\boldsymbol{s}_1^H\bm{\Pi}_{j_1}\mathbf{B}\bm{\Pi}_{j_2} \boldsymbol{s}_2 \end{eqnarray}\normalsize \noindent with $\left\lbrace j_1,j_2\right\rbrace\in[\![1,r]\!]^2$.
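The constant $\chi_j$ is the classical \textit{spiked}-model projection loss: for a single spike it coincides with the almost sure limit of the alignment $\vert\boldsymbol{u}_1^H\hat{\boldsymbol{u}}_1\vert^2$, i.e. the simple QF of Eq.(\ref{eq:FQ1}) below with $\boldsymbol{s}_1=\boldsymbol{s}_2=\boldsymbol{u}_1$ and $\mathbf{B}=\mathbf{I}_m$. A minimal Monte Carlo sketch of this limit (Python; the parameters are hypothetical: complex Gaussian data, one spike $\omega=4$, ratio $c=1/2$):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, K, omega = 200, 400, 4.0                  # c = m/K = 0.5, one spike
c = m / K
u = rng.standard_normal(m) + 1j * rng.standard_normal(m)
u /= np.linalg.norm(u)                       # true spike eigenvector
# R = I_m + omega u u^H, so R^{1/2} = I_m + (sqrt(1+omega) - 1) u u^H
Rh = np.eye(m) + (np.sqrt(1.0 + omega) - 1.0) * np.outer(u, u.conj())
Y = (rng.standard_normal((m, K))
     + 1j * rng.standard_normal((m, K))) / np.sqrt(2.0)
SCM = (Rh @ Y) @ (Rh @ Y).conj().T / K       # sample covariance matrix
u_hat = np.linalg.eigh(SCM)[1][:, -1]        # sample spike eigenvector
chi = (1.0 - c / omega**2) / (1.0 + c / omega)
print(abs(u.conj() @ u_hat)**2, chi)         # both approximately 0.861
\end{verbatim}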
\subsection{Convergence of the \textit{structured} QF}\label{subsec:A.6} \indent From the development of the \textit{structured} QF, we recall that the convergences of the \textit{simple} QFs $\boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_i\mathbf{B}\boldsymbol{s}_2$ and $\boldsymbol{s}_1^H\mathbf{B}\hat{\boldsymbol{\Pi}}_i\boldsymbol{s}_2$, $\forall i\in[\![1,r]\!]$, can be easily determined from~\cite{CoHa13}:\small \begin{eqnarray} \boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_i\mathbf{B}\boldsymbol{s}_2\underset{\underset{m/K\rightarrow c<\infty}{m,K\rightarrow\infty}}{\overset{\text{a.s}}{\longrightarrow}}\chi_i\boldsymbol{s}_1^H\boldsymbol{\Pi}_i\mathbf{B}\boldsymbol{s}_2\label{eq:FQ1}\\ \boldsymbol{s}_1^H\mathbf{B}\hat{\boldsymbol{\Pi}}_i\boldsymbol{s}_2\underset{\underset{m/K\rightarrow c<\infty}{m,K\rightarrow\infty}}{\overset{\text{a.s}}{\longrightarrow}}\chi_i\boldsymbol{s}_1^H\mathbf{B}\boldsymbol{\Pi}_i\boldsymbol{s}_2 \label{eq:FQ2} \end{eqnarray}\normalsize \noindent where $\chi_i$ is defined as in Section III.C.\\ \indent Then, also using Eq.(\ref{eq:FQ3}) in Eq.(\ref{eq:FQtot}), one obtains:\small \begin{eqnarray} \!\!\!\!\!\!\boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\mathbf{B}\hat{\boldsymbol{\Pi}}_ {\mathrm{c}}^{\bot} \boldsymbol{s}_2\!\!\!\!\!\! &\underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\mathrm{a.s.}}{\longrightarrow}}&\!\!\!\!\!\!\boldsymbol{s}_1^H\mathbf{B}\boldsymbol{s}_2\nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! -\!\sum_{i=1}^r\left(\boldsymbol{s}_1^H\chi_i\boldsymbol{\Pi}_i\mathbf{B}\boldsymbol{s}_2+\boldsymbol{s}_1^H\mathbf{B}\chi_i\boldsymbol{\Pi}_i\boldsymbol{s}_2\right)\nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! +\!\!\sum_{j_1=1}^r\!\boldsymbol{s}_1^H\chi_{j_1}\boldsymbol{\Pi}_{j_1}\mathbf{B}\chi_{j_1}\boldsymbol{\Pi}_{j_1}\boldsymbol{s}_2\nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! +\!\!\!\!\sum_{\underset{j_1\neq j_2}{j_1,j_2=1}}^r\!\!\!\!\boldsymbol{s}_1^H\chi_{j_1}\boldsymbol{\Pi}_{j_1}\mathbf{B}\chi_{j_2}\boldsymbol{\Pi}_{j_2}\boldsymbol{s}_2 \end{eqnarray}\normalsize or equivalently\small \begin{eqnarray} &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\mathbf{B}\hat{\boldsymbol{\Pi}}_ {\mathrm{c}}^{\bot} \boldsymbol{s}_2\underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\mathrm{a.s.}}{\longrightarrow}}\nonumber\\ &&\qquad\boldsymbol{s}_1^H\left[ \mathbf{I}_m-\sum_{i=1}^r\chi_i\boldsymbol{\Pi}_i\right] \mathbf{B}\left[\mathbf{I}_m-\sum_{i=1}^r\chi_i\boldsymbol{\Pi}_i \right] \boldsymbol{s}_2 \end{eqnarray}\normalsize that is,\small \begin{eqnarray} \boldsymbol{s}_1^H\hat{\boldsymbol{\Pi}}_{\mathrm{c}}^{\bot}\mathbf{B}\hat{\boldsymbol{\Pi}}_ {\mathrm{c}}^{\bot} \boldsymbol{s}_2\underset{\underset{m/K \to c<\infty}{\small{m,K\rightarrow\infty}}}{\overset{\mathrm{a.s.}}{\longrightarrow}}\boldsymbol{s}_1^H\bar{\boldsymbol{\Pi}}_{\mathrm{c,S}}^{\bot} \mathbf{B}\bar{\boldsymbol{\Pi}}_{\mathrm{c,S}}^{\bot}\boldsymbol{s}_2 \end{eqnarray}\normalsize \noindent with $\bar{\boldsymbol{\Pi}}_{\mathrm{c,S}}^{\bot}=\sum_{i=1}^{m} \psi_i\boldsymbol{u}_i\boldsymbol{u}_i^H$ and\small \begin{eqnarray} \psi_i=\begin{cases}1,\;\,\qquad\quad \mathrm{if}\;i>r\\ 1-\chi_i,\;\quad \mathrm{if}\;i\leqslant r \end{cases} \end{eqnarray}\normalsize \bibliographystyle{IEEEtran}
\subsection*{Tight Second Law} In this work, we derive a generalized formula which incorporates both of those microscopic effects and makes the inequality tight. We reveal that for an arbitrary protocol $\hat \rho_S \otimes \hat \tau_B \otimes \hat \rho_W \to \hat U \hat \rho_S \otimes \hat \tau_B \otimes \hat \rho_W \hat U^\dag$, where $\hat U$ is an energy-conserving and translationally invariant unitary \cite{Alhambra2016}, the tight Second Law can be written in the form: \begin{equation} \label{tight_second_law} W \le R(\hat \sigma_S \otimes \hat \tau_B) \end{equation} where $\hat \tau_B = \mathcal{Z}_B^{-1} e^{-\beta \hat H_B}$ is the Gibbs state of the bath and $R(\hat \rho)$ is the ergotropy of the state $\hat \rho$ \cite{Allahverdyan2004}: \begin{equation} R(\hat \rho) := \max_{\hat U - \text{unitary}} \Tr[(\hat H - \hat U^\dag \hat H \hat U) \hat \rho], \end{equation} i.e. the maximal energy extracted via an arbitrary unitary channel $\hat \rho \to \hat U \hat \rho \hat U^\dag$ with a fixed Hamiltonian $\hat H$ (for the quantity $R(\hat \sigma_S \otimes \hat \tau_B)$, the fixed Hamiltonian is equal to $\hat H_S + \hat H_B$). Further, $\hat \sigma_S$ is the so-called \emph{control-marginal state} \cite{Lobejko2020}, which for initial product states, i.e. $\hat \rho_{SW} = \hat \rho_S \otimes \hat \rho_W$, can be represented as: \begin{equation} \label{sigma} \hat \sigma_S = \int dt \ p(t) \ \hat U_t \hat \rho_S \hat U_t^\dag \end{equation} where $p(t) = \Tr[\hat \rho_W \dyad{t}_W]$, $\hat U_t = e^{-i \hat H_S t}$ and $\ket t_W$ is a canonically conjugate `time state' with respect to the energy states $\ket \varepsilon_W$. Although in this paper we only consider product states, we stress that the definition of the control-marginal operator $\hat \sigma_S$ can be generalized to an arbitrary correlated state $\hat \rho_{SW}$, such that inequality \eqref{tight_second_law} remains valid (see Appendix). Inequality \eqref{tight_second_law} reveals that the optimal work done on a weight via energy-conserving unitary dynamics is equal to the ergotropy of the composite state $\hat \sigma_S \otimes \hat \tau_B$. On the other hand, the concept of ergotropy as the maximal extractable work arises from cyclic non-autonomous protocols of closed quantum systems (with implicit work reservoirs) \cite{Allahverdyan2004}, an intensively studied area of so-called `quantum batteries' \cite{Alicki2013, Hovhannisyan2013, Giorgi2015, Binder2015, Perarnau2015, Campaioli2017}. This establishes an important connection between those two frameworks; at the same time, however, it emphasizes the fundamental difference between them, namely, the replacement of the marginal state $\hat \rho_S$ by the control-marginal state $\hat \sigma_S$. As presented below, this change significantly affects the work extraction from quantum coherences and shows that the non-autonomous framework is just a special example of the weight dynamics. \subsection*{Work extraction from coherences} Firstly, one should notice that the optimal work $W_{max} = R(\hat \sigma_S \otimes \hat \tau_B)$ depends implicitly on the state of the weight $\hat \rho_W$ through the control-marginal state $\hat \sigma_S$. We point out, however, that the channel \eqref{sigma} only affects the off-diagonal elements of the density matrix $\hat \rho_S$, i.e. if $[\hat \rho_S, \hat H_S] = 0$ then $\hat \sigma_S = \hat \rho_S$.
It follows that for quasi-classical diagonal states the optimal work extraction protocol does not depend on the weight state at all. Nevertheless, if we consider a coherent state $\hat \rho_S$, this is no longer true, i.e. in general $\hat \sigma_S \neq \hat \rho_S$, and the optimal value of work $W_{max}$ indirectly depends on the state of the weight (and especially on its amount of coherences). Since the channel \eqref{sigma} is a mixture of unitaries, we can define the non-negative quantity: \begin{equation} \label{locked_ergotropy} \Delta_L(\hat \rho_S, \hat \rho_W, \hat \tau_B) := R(\hat \rho_S \otimes \hat \tau_B) - R(\hat \sigma_S \otimes \hat \tau_B) \ge 0, \end{equation} and then the Second Law can be expressed in the form: \begin{equation} \label{tight_with_locked} W \le R(\hat \rho_S \otimes \hat \tau_B) - \Delta_L(\hat \rho_S, \hat \rho_W, \hat \tau_B). \end{equation} Here, we call $\Delta_L(\hat \rho_S, \hat \rho_W, \hat \tau_B)$ the \emph{locked energy}, i.e. a quantum thermodynamic resource that is bound in coherences and cannot be extracted as work via the weight state $\hat \rho_W$. In this way, we can introduce the concept of the \emph{ideal weight}, i.e. an energy storage system that is capable of the full work extraction from coherences, with $\Delta_L = 0$. In particular, this is the case if the state of the weight tends to the time state, i.e. $\hat \rho_W \to \dyad{t}_W$, such that we have $\hat \sigma_S \to \hat U_t \hat \rho_S \hat U_t^\dag$ and $\Delta_L \to 0$. The time state of the weight is an extreme and idealized example of a system with an `infinite amount of coherence' (see e.g. \cite{Korzekwa2016, Aberg2013}), and in this sense it is able to perform a unitary transformation on the subsystem and achieve the optimal work extraction. In the opposite limit, where the work storage tends to an energy eigenstate, i.e. $\hat \rho_W \to \dyad{\varepsilon}_W$, the control-marginal state loses all of its coherences, such that $\hat \sigma_S \to D[\hat \rho_S]$ (where $D[\cdot]$ is the dephasing channel in the energy basis), and hence $\Delta_L$ is maximal. The fact that $\hat \sigma_S = D[\hat \rho_S]$ for incoherent states of the work reservoirs was previously observed and called `work-locking' \cite{Korzekwa2016}. In that work, the authors discuss only diagonal states of the energy storage, and the work extraction from coherences is analyzed via an additional ancillary system, a source of coherence, acting as a catalyst. Here, on the contrary, we allow coherent states of the weight (i.e. a fully quantum energy storage) and reveal how this can `unlock' the extracted work. \subsection*{Ergotropy vs. Free energy} Inequality \eqref{tight_with_locked} expresses the tight form of the Second Law for coherent quantum systems and finite-size heat baths, which separately includes the locked energy $\Delta_L(\hat \rho_S, \hat \rho_W, \hat \tau_B)$ and the optimal work, given by the ergotropy $R(\hat \rho_S \otimes \hat \tau_B)$ (instead of the free energy). Those quantities depend on the heat bath equilibrium state $\hat \tau_B$, which is defined both by the temperature $T$ and the Hamiltonian $\hat H_B$. On the contrary, the only information about the heat bath included in the Second Law given by Eq. \eqref{second_law}, formulated solely in terms of the free energy, is the temperature $T$. As a consequence, this ignorance of the microscopic details of the heat bath makes the inequality in general not tight.
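Both the ergotropy terms above and the locked energy are straightforward to evaluate numerically, since the ergotropy admits the closed passive-state form of \cite{Allahverdyan2004}: the populations of $\hat \rho$, sorted decreasingly, are paired with the energies of $\hat H$, sorted increasingly. A minimal sketch (Python; the qubit example is only an illustration):

\begin{verbatim}
import numpy as np

def ergotropy(rho, H):
    # R(rho) = Tr[H rho] - Tr[H rho_passive]; the passive state carries the
    # populations of rho, sorted decreasingly, on the energy eigenstates
    # of H, sorted increasingly.
    p = np.sort(np.linalg.eigvalsh(rho))[::-1]
    e = np.sort(np.linalg.eigvalsh(H))
    return float(np.real(np.trace(rho @ H))) - float(p @ e)

# example: a qubit in |+><+| with H = w|1><1| has ergotropy w/2
w = 1.0
print(ergotropy(0.5 * np.ones((2, 2)), np.diag([0.0, w])))   # 0.5
\end{verbatim}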
Now, we would like to state a general relation between ergotropy and free energy for quantum systems coupled to the heat bath. We independently prove that for an arbitrary quantum state $\hat \rho_S$ with Hamiltonian $\hat H_S$ and an arbitrary Gibbs state $\hat \tau_B$ with Hamiltonian $\hat H_B$ the following inequality holds: \begin{equation} \label{ergotropy_vs_free_energy} R(\hat \rho_S \otimes \hat \tau_B) \le F(\hat \rho_S) - F(\hat \tau_S), \end{equation} where $\hat \tau_S$ is the Gibbs state according to the Hamiltonian $\hat H_S$. In this formula, both the Gibbs states and the free energy are defined with respect to the same, arbitrary temperature $T$. As is seen, the right hand side of the inequality \eqref{ergotropy_vs_free_energy} does not depend on the Hamiltonian $\hat H_B$, which reveals that free energy is indeed the ultimate thermodynamic bound valid for all possible heat baths (see Eq. \eqref{second_law}). According to the result of Skrzypczyk \textit{et al.} \cite{Skrzypczyk2014}, the bound can be reached for infinite heat baths, i.e. in the thermodynamic limit. This relation proves the Second Law of Thermodynamics for any framework with the optimal extracted work identified as the ergotropy (e.g. for the unitary transformations of states), and shows that ergotropy on its own is a generalization of free energy for finite-size heat baths. Further, by this formula, we are able to derive the thermodynamic limit of the locked energy $\Delta_L$. Basically, if $R(\hat \rho_S \otimes \hat \tau_B) = F(\hat \rho_S) - F(\hat \tau_S)$ holds for the heat bath and an arbitrary density matrix $\hat \rho_S$, then \begin{equation} \label{locked_ergotropy_bound} \Delta_L(\hat \rho_S, \hat \rho_W, \hat \tau_B) = T \left[ S(\hat \rho_S) - S(\hat \sigma_S) \right], \end{equation} i.e. the locked energy in coherences is equal to the difference of entropies between the state $\hat \rho_S$ and the control-marginal state $\hat \sigma_S$, multiplied by the bath temperature $T$. Equation \eqref{locked_ergotropy_bound} gives us an interesting formula for how the quantum energy storage is able to extract work from coherences of a system in contact with the macroscopic heat bath. However, we would like to emphasize that this formula is not the upper bound of the locked energy $\Delta_L$ (as the free energy was for the optimal work), but rather its thermodynamic limit. In the next paragraph we provide a numerical simulation of a particular example where the locked energy for a finite-size bath can be bigger than the value given by Eq. \eqref{locked_ergotropy_bound}; moreover, it can even be non-monotonic with respect to the growing size of the heat bath. \begin{figure}[t] \centering \includegraphics[width = 0.45\textwidth] {fig.png} \caption{\emph{Optimal work and locked energy.} (a) The graph presents how the optimal work, given by the ergotropy $R(\hat \rho_S \otimes \hat \tau_B^{(N)})$, and the locked energy $\Delta_L(\hat \rho_S, \hat \rho_W, \hat \tau_B^{(N)})$ depend on the number of qubits in the heat bath $N$ for different values of the scaled standard deviation $\sigma/\omega$ of the Gaussian state of the weight. Horizontal lines correspond to the thermodynamic limits (with $N \to \infty$) given by the free energy $F(\hat \rho_S)-F(\hat \tau_S)$ and entropy $T[S(\hat \rho_S) - S(\hat \sigma_S)]$ differences.
(b) The vanishing of the locked energy for different sizes of the heat bath $N$ with respect to the parameter $\sigma/ \omega$.} \label{fig:free_energy_vs_ergotropy} \end{figure} \subsection*{Example} Let us now consider a particular example in order to illustrate how the finite-size bath and the state of the weight affect the work extraction process. We would like to concentrate on a system $S$ given by a qubit in the coherent `plus state', i.e. $\hat \rho_S = \dyad{+}_S$, where $\ket{+}_S = \frac{1}{\sqrt{2}} (\ket{0}_S + \ket{1}_S)$, and with Hamiltonian $\hat H_S = \omega \dyad{1}_S$. Next, as a model of the bath we take a collection of qubits with different energy gaps, namely the bath Hamiltonian is given by: \begin{equation} \label{bath_hamiltonian} \hat H_B^{(N)} = \sum_{k=1}^N \omega_k \dyad{1_k}_B \end{equation} where $\omega_k = T \log[\frac{1-k\delta}{k\delta}]$ and $\delta = \mathcal{Z}^{-1}_S e^{-\beta \omega}/N$. The choice of the heat bath is dictated by its property that in the limit of an infinite number of qubits a saturation of inequality \eqref{ergotropy_vs_free_energy} is achieved \cite{Skrzypczyk2014}. Finally, we take the weight in a pure state given by a Gaussian superposition of energy states, i.e. $\hat \rho_W = \dyad{\psi}_W$ such that \begin{equation} \label{weight_state} \ket{\psi}_W = (2\pi \sigma^2)^{-1/4} \int d\varepsilon \ e^{-\frac{\varepsilon^2}{4\sigma^2}} \ket{\varepsilon}_W, \end{equation} where the vector is solely parameterized by the standard deviation $\sigma$. Within this model we numerically calculate the optimal work \begin{equation} W_{max} = R \left(\dyad{+}_S \otimes \hat \tau_B^{(N)} \right) \end{equation} for baths $\hat \tau_B^{(N)}$ with different numbers of qubits $N$ (cf. Eq. \eqref{bath_hamiltonian}). The results are presented in Fig.~1(a). In particular, in the graph we plot the ultimate bound given by the difference of free energies $F(\hat \rho_S) - F(\hat \tau_S)$ (see inequality \eqref{ergotropy_vs_free_energy}) and reveal how the optimal work $W_{max}$ converges to this limit with an increasing number of qubits. We emphasize that the ergotropy $R (\hat \rho_S \otimes \hat \tau_B^{(N)})$ is an increasing function with respect to the growing size of the heat bath $N$. Next, we calculate the locked energy, namely \begin{equation} \Delta_L = R \left(\dyad{+}_S \otimes \hat \tau_B^{(N)} \right) - R \left(\hat \xi_S \otimes \hat \tau_B^{(N)} \right) - \frac{\omega \gamma}{2}, \end{equation} where $\gamma = \exp[-\frac{\omega^2}{8\sigma^2}]$ and \begin{equation} \hat \xi_S = \frac{1}{2}(1+\gamma) \dyad{0}_S + \frac{1}{2} (1-\gamma) \dyad{1}_S. \end{equation} It is seen that the Gaussian model of the weight affects the locked energy only via the single parameter $\sigma / \omega$, i.e. the ratio between the standard deviation of the work reservoir wave packet $\sigma$ and the energy gap of the qubit $\omega$. In analogy to the optimal work, in Fig.~1(a) we analyze how the locked energy depends on the number of qubits in the bath $N$ and compare it to the thermodynamic limit given by $T[S(\hat \rho_S) - S(\hat \sigma_S)]$ \eqref{locked_ergotropy_bound}. An intriguing observation is that, while the ergotropy is an increasing function of the size of the heat bath, the locked energy is not. It is observed that for some values of $\sigma / \omega$ it can be non-monotonic with respect to the number of qubits $N$, i.e. adding a qubit can either increase or decrease the locked energy.
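A minimal end-to-end sketch of this computation (Python; the values $T=\omega=1$, $N=6$, $\sigma/\omega=1/2$ are hypothetical; the bath qubits are independent, so the spectrum of $\hat H_B^{(N)}$ is the Kronecker sum of the single-qubit energies, and the ergotropy routine is the passive-state formula sketched earlier):

\begin{verbatim}
import numpy as np

def ergotropy(rho, H):
    p = np.sort(np.linalg.eigvalsh(rho))[::-1]
    e = np.sort(np.linalg.eigvalsh(H))
    return float(np.real(np.trace(rho @ H))) - float(p @ e)

T, w, N = 1.0, 1.0, 6
ZS = 1.0 + np.exp(-w / T)
delta = (np.exp(-w / T) / ZS) / N
eB = np.zeros(1)                     # bath energies: Kronecker sum over qubits
for k in range(1, N + 1):
    wk = T * np.log((1.0 - k * delta) / (k * delta))
    eB = np.concatenate([eB, eB + wk])
pB = np.exp(-eB / T); pB /= pB.sum() # Gibbs populations of the bath

HS = np.diag([0.0, w])
rhoS = 0.5 * np.ones((2, 2))         # |+><+|
H = np.kron(HS, np.eye(len(eB))) + np.kron(np.eye(2), np.diag(eB))
Wmax = ergotropy(np.kron(rhoS, np.diag(pB)), H)
print(Wmax, w / 2 + T * np.log(ZS))  # ergotropy vs. free-energy bound

sigma = 0.5 * w                      # Gaussian weight of width sigma
g = np.exp(-w**2 / (8.0 * sigma**2))
xiS = np.diag([(1.0 + g) / 2.0, (1.0 - g) / 2.0])
DL = Wmax - ergotropy(np.kron(xiS, np.diag(pB)), H) - w * g / 2.0
print(DL)                            # locked energy
\end{verbatim}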
Further, in Fig.~1(b) we present how quickly the locked energy vanishes with increasing value of the ratio $\sigma / \omega$. Two interesting features are observed here. First, for high values of $\sigma / \omega$ the locked energy increases with the size of the bath $N$; however, this ordering changes for low values and becomes non-monotonic. Secondly, for low values we observe a \emph{plateau}, i.e. the locked energy stays almost constant with the growing width of the weight wave packet. Notice that in the limit $\sigma \to 0$ the state of the weight tends to an energy state, with the maximal locked energy, and in the limit $\sigma \to \infty$ it tends to the time state, for which the locked energy vanishes. \subsection*{Conclusions} We recognized ergotropy as the proper resource for the work extraction process with an explicit energy storage given by the translationally invariant weight. The ergotropy on its own can be defined as the optimal work extracted from closed systems driven by time-dependent and cyclic Hamiltonians, which proves an important connection between those two frameworks. Nevertheless, we stress that there is no full equivalence between them, since models with an implicit energy storage do not involve the concept of the \emph{locked energy}, i.e. the part coming from coherences that contributes to the ergotropy (or free energy) but cannot be extracted as work. Indeed, one of the main differences between classical and quantum thermodynamics is that quantum systems are able to perform work via coherences. However, here we reveal that this is only possible if the work reservoir has coherences as well, and the locked energy naturally emerges if we treat it explicitly. In other words, for the quantum process of work extraction it is crucial that the weight is both an energy reservoir and a reservoir of coherences. Consequently, we provide a quantitative definition of the \emph{ideal work reservoir}, i.e. the energy storage that is capable of the full work extraction from coherences, which is indeed the case in the non-autonomous approach. Furthermore, we analyze the ergotropy of a non-equilibrium quantum system in contact with an arbitrary finite-size heat bath. In the light of the resource theory, such a Gibbs state of the bath is treated as costless, i.e. it can be freely attached to, and discarded from, the system. Due to the non-additivity of the ergotropy, the state of the heat bath activates the non-equilibrium state of the system, and consequently both of them form the entire thermodynamic resource, given by the total ergotropy. This can be simply interpreted as the maximal work that can be extracted from such a quantum state. Moreover, one of the most important results of this work is the establishment of a general relation between the ergotropy and the free energy for systems coupled to the heat bath, which provides a bridge between microscopic and macroscopic thermodynamics. We show that the total ergotropy of the quantum system and finite-size heat bath is indeed a generalization of the non-equilibrium free energy, and it converges to the latter in the thermodynamic limit. Finally, the relation between ergotropy and free energy leads us to the thermodynamic limit of the locked energy. This provides an interesting formula, expressed in terms of the von Neumann entropy, that on one side is fully quantum, since it refers to the extraction of work from coherences (i.e.
it requires coherent states of the system and of the energy storage alike), while, on the other side, it involves the classical notion of the macroscopic heat bath. \begin{acknowledgements} The author thanks Micha{\l} Horodecki, Pawe{\l} Mazurek, Tony Short and Patryk Lipka-Bartosik for helpful and inspiring discussions. This research was supported by the National Science Centre, Poland, through grant SONATINA 2 2018/28/C/ST2/00364. \end{acknowledgements} \newpage
\section{Introduction} Let $Q\equiv (q_k)$ be a fixed sequence of positive integers, $q_k>1$, let $\Theta_k$ be the sequence of sets $\Theta_k\equiv\{0,1,\dots ,q_k-1\}$, and let $\varepsilon_k\in\Theta_k$. The Cantor series expansion \begin{equation} \label{eq: Cantor series} \frac{\varepsilon_1}{q_1}+\frac{\varepsilon_2}{q_1q_2}+\dots +\frac{\varepsilon_k}{q_1q_2\dots q_k}+\dots \end{equation} of $x\in [0,1]$ was first studied by G. Cantor in \cite{Cantor1}. It is easy to see that the Cantor series expansion is the $b$-ary expansion $$ \frac{\alpha_1}{b}+\frac{\alpha_2}{b^2}+\dots+\frac{\alpha_n}{b^n}+\dots $$ of numbers from the closed interval $[0,1]$ whenever the condition $q_k=b$ holds for all positive integers $k$. Here $b$ is a fixed positive integer, $b>1$, and $\alpha_n\in\{0,1,\dots , b-1\}$. By $x=\Delta^Q _{\varepsilon_1\varepsilon_2\ldots\varepsilon_k\ldots}$ we denote a number $x\in [0,1]$ represented by series \eqref{eq: Cantor series}. This notation is called \emph{the representation of $x$ by Cantor series \eqref{eq: Cantor series}.} We note that certain numbers from $[0,1]$ have two different representations by Cantor series \eqref{eq: Cantor series}, i.e., $$ \Delta^Q _{\varepsilon_1\varepsilon_2\ldots\varepsilon_{m-1}\varepsilon_m000\ldots}=\Delta^Q _{\varepsilon_1\varepsilon_2\ldots\varepsilon_{m-1}[\varepsilon_m-1][q_{m+1}-1][q_{m+2}-1]\ldots}=\sum^{m} _{i=1}{\frac{\varepsilon_i}{q_1q_2\dots q_i}}. $$ Such numbers are called \emph{$Q$-rational}. The other numbers in $[0,1]$ are called \emph{$Q$-irrational}. Let $c_1,c_2,\dots, c_m$ be an ordered tuple of integers such that $c_i\in\{0,1,\dots, q_i-1\}$ for $i=\overline{1,m}$. \emph{A cylinder $\Delta^Q _{c_1c_2...c_m}$ of rank $m$ with base $c_1c_2\ldots c_m$} is a set of the form $$ \Delta^Q _{c_1c_2...c_m}\equiv\{x: x=\Delta^Q _{c_1c_2...c_m\varepsilon_{m+1}\varepsilon_{m+2}\ldots\varepsilon_{m+k}\ldots}\}. $$ That is, any cylinder $\Delta^Q _{c_1c_2...c_m}$ is a closed interval of the form $$ \left[\Delta^Q _{c_1c_2...c_m000\ldots}, \Delta^Q _{c_1c_2...c_m[q_{m+1}-1][q_{m+2}-1][q_{m+3}-1]...}\right]. $$ Define \emph{the shift operator $\sigma$ of expansion \eqref{eq: Cantor series}} by the rule $$ \sigma(x)=\sigma\left(\Delta^Q _{\varepsilon_1\varepsilon_2\ldots\varepsilon_k\ldots}\right)=\sum^{\infty} _{k=2}{\frac{\varepsilon_k}{q_2q_3\dots q_k}}=q_1\Delta^{Q} _{0\varepsilon_2\ldots\varepsilon_k\ldots}. $$ It is easy to see that \begin{equation*} \label{eq: Cantor series 2} \begin{split} \sigma^n(x) &=\sigma^n\left(\Delta^Q _{\varepsilon_1\varepsilon_2\ldots\varepsilon_k\ldots}\right)\\ & =\sum^{\infty} _{k=n+1}{\frac{\varepsilon_k}{q_{n+1}q_{n+2}\dots q_k}}=q_1\dots q_n\Delta^{Q} _{\underbrace{0\ldots 0}_{n}\varepsilon_{n+1}\varepsilon_{n+2}\ldots}. \end{split} \end{equation*} Therefore, \begin{equation} \label{eq: Cantor series 3} x=\sum^{n} _{i=1}{\frac{\varepsilon_i}{q_1q_2\dots q_i}}+\frac{1}{q_1q_2\dots q_n}\sigma^n(x). \end{equation} Note that, in the paper \cite{Serbenyuk2017}, the notion of the shift operator of an alternating Cantor series is studied in detail. In 1864, G.~Cantor introduced the problem of representations of rational numbers by series \eqref{eq: Cantor series} (see \cite{Cantor1}). More information about this problem can be found in \cite{SA17}. In the present article, we consider the case of positive Cantor series. In the next articles, the cases of alternating and sign-variable Cantor series, as well as certain applications of the techniques used here, will be considered by the author.
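The identity \eqref{eq: Cantor series 3} can be checked mechanically on finite truncations of series \eqref{eq: Cantor series}; the following small sketch (Python; the choice of $(q_k)$ and of the digits is only an illustration) verifies it with exact rational arithmetic:

\begin{verbatim}
from fractions import Fraction

q = [3, 5, 7, 9, 11]        # q_k = 2k + 1 (cf. the example of Section 2)
e = [1, 2, 3, 4, 5]         # digits, e_k in {0, ..., q_k - 1}

def value(qs, es):
    # partial sum  e_1/q_1 + e_2/(q_1 q_2) + ...
    x, Q = Fraction(0), 1
    for qk, ek in zip(qs, es):
        Q *= qk
        x += Fraction(ek, Q)
    return x

x, n = value(q, e), 2
Qn = q[0] * q[1]            # q_1 q_2
# x = sum_{i<=n} e_i/(q_1...q_i) + sigma^n(x)/(q_1...q_n)
assert x == value(q[:n], e[:n]) + value(q[n:], e[n:]) / Qn
print(x)                    # 5197/10395, a truncation of the expansion of 1/2
\end{verbatim}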
\section{Representations of certain rational numbers} Using the main statements from \cite{Cantor1, S13, Ser2017} (the paper \cite{S13} is \cite{Ser2017} translated into English), we get the following. \begin{proposition} A rational number $x=\frac{p}{r}$, where $p<r$ and $(p,r)=1$, has a finite expansion by positive Cantor series whenever there exists a number $k_0$ such that the condition $q_1q_2\cdots q_{k_0}\equiv 0 \pmod{r}$ holds. \end{proposition} \begin{proposition} There exist certain sequences $(q_k)$ such that all rational numbers represented in terms of the corresponding Cantor series have finite expansions. \end{proposition} For example, such representations are the following: $$ x=\Delta^{(2k)} _{\varepsilon_1\varepsilon_2...\varepsilon_k...}\equiv\sum^{\infty} _{k=1}{\frac{\varepsilon_k}{2\cdot 4\cdot 6\cdot \ldots \cdot 2k}},~\text{where}~\varepsilon_k\in\{0,1,\dots,2k-1\}; $$ $$ x=\Delta^{(k+1)!} _{\varepsilon_1\varepsilon_2...\varepsilon_k...}\equiv\sum^{\infty} _{k=1}{\frac{\varepsilon_k}{2\cdot 3\cdot 4\cdot \ldots \cdot (k+1)}},~\text{where}~\varepsilon_k\in\{0,1,\dots,k\}. $$ \begin{proposition} Suppose that the sequence $(q_k)$ is periodic. Then a number $x$ is rational if and only if the representation $\Delta^Q _{\varepsilon_1\varepsilon_2\ldots\varepsilon_k\ldots}$ of $x$ is periodic (i.e., the sequence $(\varepsilon_k)$ is periodic). \end{proposition} \begin{proposition} The following is true: $$ \frac{1}{w}=\sum^{\infty} _{k=1}{\frac{\varepsilon^{'} _{k}}{q_1q_2\cdots q_k}}, $$ where $w$ is a certain positive integer such that $w\varepsilon^{'} _{k}=q_k-1$ holds for all positive integers $k$. \end{proposition} This follows from the equality $1=\Delta^Q _{[q_1-1][q_2-1]...[q_k-1]...}$. \begin{proposition} Let $n_0$ be a fixed positive integer, $q_0=\min_{n>n_0}{q_n}$, and $\varepsilon_0$ be the numerator of the fraction $\frac{\varepsilon_{n_0+k}}{q_1q_2...q_{n_0}q_{n_0+1}...q_{n_0+k}}$ in expansion \eqref{eq: Cantor series} of $x$ provided that $q_{n_0+k}=q_0$. Then $\sigma^n(x)=const$ for all $n\ge n_0$ if and only if the condition $\frac{q_n-1}{q_0-1}\varepsilon_0=\varepsilon_n\in\mathbb Z_0$ holds for any $n>n_0$. \end{proposition} \begin{proposition} A number $x$ represented by expansion \eqref{eq: Cantor series} is rational if and only if there exists a subsequence $(n_k)$ of positive integers such that for all $k=1,2,\dots ,$ the following conditions are true: \begin{itemize} \item $$ \frac{\lambda_k}{\mu_k}= \frac{\varepsilon_{n_k+1}q_{n_k+2}\dots q_{n_{k+1}}+\varepsilon_{n_k+2}q_{n_k+3}\dots q_{n_{k+1}}+\dots +\varepsilon_{n_{k+1}-1}q_{n_{k+1}}+\varepsilon_{n_{k+1}}}{q_{n_k+1}q_{n_k+2}\dots q_{n_{k+1}}-1}=const; $$ \item $\lambda_k=\frac{\mu_k}{\mu}\lambda$, where $\mu=\min_{k\in\mathbb N}{\mu_k}$ and $\lambda$ is the number in the numerator of the fraction whose denominator equals $(\mu_1+1)(\mu_2+1)\dots (\mu+1)$ from sum \eqref{eq: Cantor series 6}. \end{itemize} Here $$ x=\sum^{\infty} _{k=1}{\frac{\varepsilon_k}{q_1q_2\dots q_k}}=\sum^{n_1} _{j=1}{\frac{\varepsilon_j}{q_1q_2\dots q_j}}+\frac{1}{q_1q_2\dots q_{n_1}}x^{'}, $$ $$ x^{'}=\sum^{\infty} _{k=1}{\frac{\varepsilon_{n_k+1}q_{n_k+2}q_{n_k+3}\dots q_{n_{k+1}}+\varepsilon_{n_k+2}q_{n_k+3}\dots q_{n_{k+1}}+\dots+\varepsilon_{n_{k+1}-1}q_{n_{k+1}}+\varepsilon_{n_{k+1}}}{(q_{n_1+1}\dots q_{n_2})(q_{n_2+1}\dots q_{n_3})\dots (q_{n_k+1}\dots q_{n_{k+1}})}} $$ \begin{equation} \label{eq: Cantor series 6} =\sum^{\infty} _{k=1}{\frac{\lambda_k}{(\mu_1+1)\dots (\mu_k+1)}}.
\end{equation} \end{proposition} For example, $$ x=\Delta^{(2k+1)} _{123...k...}\equiv\frac{1}{3}+\frac{2}{3\cdot 5}+\frac{3}{3\cdot 5\cdot 7}+\dots +\frac{k}{3\cdot 5\cdot \ldots \cdot (2k+1)}+\dots =\frac{1}{2}. $$ Here $x=\sigma^n(x)=\frac{1}{2}$ for all $n$, where $n=0,1,2, \dots$. The last-mentioned sum is useful for modeling rational numbers of the type $\frac{A}{B}$, where $B\equiv 0 \pmod{2}$. In particular, $$ \frac{1}{6}=\Delta^{(2k+1)} _{0234...k...},~\frac{5}{6}=\Delta^{(2k+1)} _{2234...k...}. $$ \section{The main results} Let $\frac{p}{r}$ be a fixed number, where $(p,r)=1$, $p<r$, and $p\in\mathbb N, r\in\mathbb N$. Here $\mathbb N$ is the set of all positive integers. Then $$ \frac{p}{r}=\sum^{\infty} _{n=1}{\frac{\delta_n}{q_1q_2\cdots q_n}}. $$ \begin{remark*} We use the notation $\delta_n$ for the known $n$th digit in the representation of a number $x$ by a Cantor series and $\varepsilon_n$ for the unknown $n$th digit. In addition, since $ x\in \Delta^Q _{c_1c_2...c_m}$ but $\Delta^Q _{c_1c_2...c_m[q_{m+1}-1][q_{m+2}-1]...}=\Delta^Q _{c_1c_2...c_{m-1}[c_m+1]000...}$, we assume that $$ \Delta^Q _{c_1c_2...c_{m-1}[c_m]000...}\le x<\Delta^Q _{c_1c_2...c_{m-1}[c_m+1]000...}. $$ \end{remark*} It is easy to see that $$ \frac{p}{r}\in\Delta^Q _{\varepsilon_1}=\left[\Delta^Q _{\varepsilon_1000...}, \Delta^Q _{\varepsilon_1[q_2-1][q_3-1]...}\right]=\left[\frac{\varepsilon_1}{q_1},\frac{\varepsilon_1+1}{q_1}\right]. $$ That is, $$ \frac{\varepsilon_1}{q_1}\le \frac{p}{r}< \frac{\varepsilon_1+1}{q_1}, $$ $$ \varepsilon_1r\le pq_1<r(\varepsilon_1+1), $$ $$ \varepsilon_1\le \frac{pq_1}{r}<\varepsilon_1+1. $$ So, $$ \varepsilon_1=\left[\frac{p}{r}q_1\right]\equiv \delta_1, $$ where $[x]$ is the integer part of $x$. Now we get $$ \frac{p}{r}\in \Delta^Q _{\delta_1\varepsilon_2}=\left[\frac{q_2\delta_1+\varepsilon_2}{q_1q_2},\frac{q_2\delta_1+\varepsilon_2+1}{q_1q_2}\right]. $$ Whence, $$ \frac{q_2\delta_1+\varepsilon_2}{q_1q_2}\le \frac{p}{r}< \frac{q_2\delta_1+\varepsilon_2+1}{q_1q_2}, $$ $$ \varepsilon_2\le \frac{pq_1q_2-rq_2\delta_1}{r}<\varepsilon_2+1. $$ So, $$ \varepsilon_2=\left[\frac{pq_1q_2-rq_2\delta_1}{r}\right]\equiv\delta_2. $$ In the third step, we have $$ \frac{p}{r}\in \Delta^Q _{\delta_1\delta_2\varepsilon_3}=\left[\frac{\delta_1q_2q_3+\delta_2q_3+\varepsilon_3}{q_1q_2q_3},\frac{\delta_1q_2q_3+\delta_2q_3+\varepsilon_3+1}{q_1q_2q_3}\right] $$ and $$ \varepsilon_3=\left[\frac{pq_1q_2q_3-r(\delta_1q_2q_3+\delta_2q_3)}{r}\right]\equiv\delta_3. $$ In the $n$th step, we obtain $$ \frac{p}{r}\in \Delta^Q _{\delta_1\delta_2...\delta_{n-1}\varepsilon_n}=\left[\sum^{n-1} _{i=1}{\frac{\delta_i}{q_1q_2\cdots q_i}}+\frac{\varepsilon_n}{q_1q_2\cdots q_n},\sum^{n-1} _{i=1}{\frac{\delta_i}{q_1q_2\cdots q_i}}+\frac{\varepsilon_n+1}{q_1q_2\cdots q_n}\right] $$ $$ =\left[\frac{\delta_1q_2q_3\cdots q_n+\delta_2q_3q_4\cdots q_n+\dots+\delta_{n-1}q_n+\varepsilon_n}{q_1q_2\cdots q_n},\frac{\delta_1q_2q_3\cdots q_n+\dots+\delta_{n-1}q_n+\varepsilon_n+1}{q_1q_2\cdots q_n}\right]. $$ Let $\varsigma_n$ denote the sum $\delta_1q_2q_3\cdots q_n+\delta_2q_3q_4\cdots q_n+\dots+\delta_{n-1}q_n$. Then $$ \frac{\varsigma_n+\varepsilon_n}{q_1q_2\cdots q_n}\le \frac{p}{r}<\frac{\varsigma_n+\varepsilon_n+1}{q_1q_2\cdots q_n}, $$ $$ \varepsilon_n\le\frac{pq_1q_2\cdots q_n-r\varsigma_n}{r}<\varepsilon_n+1. $$ Denoting $\Delta_n=pq_1q_2\cdots q_n-r\varsigma_n$, we get $$ \varepsilon_n=\left[\frac{\Delta_n}{r}\right]\equiv \delta_n. $$ So, the following statement is true.
\begin{lemma} Let $x\in (0,1)$ be a rational number represented by series~\eqref{eq: Cantor series}. If $x=\frac{p}{r}=\Delta^Q _{\delta_1\delta_2...\delta_n...}$, then the equality \begin{equation} \label{eq: delta 1} \delta_n=\left[\frac{\Delta_n}{r}\right] \end{equation} holds for all $n\in\mathbb N$, where $$ \Delta_n=pq_1q_2\cdots q_n-r(\delta_1q_2q_3\cdots q_n+\delta_2q_3q_4\cdots q_n+\dots+\delta_{n-1}q_n). $$ \end{lemma} Also, for $n\ge 2$ the condition $\varsigma_n=\varsigma_{n-1}q_n+\delta_{n-1}q_n$ holds and $$ \Delta_n=q_n(\Delta_{n-1}-r\delta_{n-1}). $$ \begin{lemma} Let $x\in (0,1)$ be a rational number represented by series~\eqref{eq: Cantor series}. If $x=\frac{p}{r}=\Delta^Q _{\delta_1\delta_2...\delta_n...}$, then the equality $$ \delta_n=\left[\frac{q_n(\Delta_{n-1}-r\delta_{n-1})}{r}\right] $$ holds for all $1<n\in\mathbb N$, where $\Delta_1=pq_1$ and $\delta_1=\left[\frac{\Delta_1}{r}\right]$. \end{lemma} Suppose that the following sequence of conditions is true: $$ \delta_1=\left[\frac{p}{r}q_1\right],~~~\delta_2=\left[\frac{pq_1q_2-rq_2\delta_1}{r}\right],~~~\delta_3=\left[\frac{pq_1q_2q_3-r(\delta_1q_2q_3+\delta_2q_3)}{r}\right], \dots $$ $$ \dots , \delta_n=\left[\frac{pq_1q_2\cdots q_n}{r}-(\delta_1q_2q_3\cdots q_n+\delta_2q_3q_4\cdots q_n+\dots+\delta_{n-1}q_n)\right]=\left[\frac{\Delta_n}{r}\right], \dots . $$ It follows from equality \eqref{eq: Cantor series 3} that $$ x=\frac{p}{r}=\frac{\delta_1q_2q_3\cdots q_n+\delta_2q_3q_4\cdots q_n+\dots+\delta_{n-1}q_n+\delta_n}{q_1q_2\cdots q_n}+\frac{\sigma^n\left(\frac{p}{r}\right)}{q_1q_2\cdots q_n}, $$ $$ \sigma^n\left(\frac{p}{r}\right)=\frac{\Delta_n-r\delta_n}{r}=\frac{\Delta_n}{r}-\delta_n $$ and $$ \delta_n=\frac{\Delta_n}{r}-\sigma^n\left(\frac{p}{r}\right). $$ From the last-mentioned relationship and relationship \eqref{eq: delta 1} it follows that $$ 0\le \delta_n=\left[\frac{\Delta_n}{r}\right]=\frac{\Delta_n}{r}-\sigma^n\left(\frac{p}{r}\right)=\left[\frac{\Delta_n}{r}-\sigma^n\left(\frac{p}{r}\right)\right]. $$ That is, $$ \sigma^n\left(\frac{p}{r}\right)=\left\{\frac{\Delta_n}{r}\right\}, $$ where $\{a\}$ is the fractional part of $a$. \begin{remark*} Clearly, $$ 0 \le \sigma^n\left(x\right)\le 1 $$ for an arbitrary $x\in [0,1]$. However, for any $Q$-rational number $x=\Delta^Q _{\delta_1\delta_2...\delta_{n-1}\delta_n000...}=\Delta^Q _{\delta_1\delta_2...\delta_{n-1}[\delta_n-1][q_{n+1}-1][q_{n+2}-1][q_{n+3}-1]...}$ the following conditions hold: $$ \sigma^n\left(x\right)=\sigma^n\left(\Delta^Q _{\delta_1\delta_2...\delta_{n-1}\delta_n000...}\right)=\sum^{\infty} _{k=n+1}{\frac{0}{q_{n+1}q_{n+2}\cdots q_{k}}}=0, $$ $$ \sigma^n\left(x\right)=\sigma^n\left(\Delta^Q _{\delta_1\delta_2...\delta_{n-1}[\delta_n-1][q_{n+1}-1][q_{n+2}-1][q_{n+3}-1]...}\right)=\sum^{\infty} _{k=n+1}{\frac{q_k-1}{q_{n+1}q_{n+2}\cdots q_{k}}}=1. $$ Since the condition $\sigma^n\left(x\right)=1$ holds only for the last-mentioned representations of $Q$-rational numbers $x$, we use only the first representation of $Q$-rational numbers $x$; for these numbers, the condition $\sigma^n\left(x\right)=0$ holds. \end{remark*} In addition, note that $$ \varsigma_n=\frac{pq_1q_2\cdots q_n-\Delta_n}{r}.
$$ Whence, for an arbitrary $n\in\mathbb N$, $$ x=\sum^{n} _{k=1}{\frac{\delta_k}{q_1q_2\cdots q_k}}+\frac{\sigma^n\left(x\right)}{q_1q_2\cdots q_n} $$ $$ =\frac{\delta_1q_2q_3\cdots q_n+\delta_2q_3q_4\cdots q_n+\dots +\delta_{n-1}q_n+\delta_n}{q_1q_2\cdots q_n}+\frac{\sigma^n\left(x\right)}{q_1q_2\cdots q_n} $$ $$ =\frac{\varsigma_n+\delta_n}{q_1q_2\cdots q_n}+\frac{\left\{\frac{\Delta_n}{r}\right\}}{q_1q_2\cdots q_n}=\frac{\frac{pq_1q_2\cdots q_n-\Delta_n}{r}+\delta_n}{q_1q_2\cdots q_n}+\frac{\frac{\Delta_n}{r}-\left[\frac{\Delta_n}{r}\right]}{q_1q_2\cdots q_n} $$ $$ =\frac{pq_1q_2\cdots q_n-\Delta_n+r\delta_n}{rq_1q_2\cdots q_n}+\frac{\frac{\Delta_n}{r}-\delta_n}{q_1q_2\cdots q_n}=\frac{p}{r}. $$ So, we have the following statement. \begin{theorem*} A number $x=\Delta^Q _{\delta_1\delta_2...\delta_n...} \in (0,1)$ is a rational number $\frac{p}{r}$, where $p,r\in\mathbb N, (p,r)=1$, and $p<r$, if and only if the condition $$ \delta_n=\left[\frac{q_n(\Delta_{n-1}-r\delta_{n-1})}{r}\right] $$ holds for all $1<n\in\mathbb N$, where $\Delta_1=pq_1$, $\delta_1=\left[\frac{\Delta_1}{r}\right]$, and $[a]$ is the integer part of $a$. \end{theorem*} Let us consider certain examples. Suppose $$ x=\Delta^{(2n+1)} _{\varepsilon_1\varepsilon_2...\varepsilon_n...}=\sum^{\infty} _{n=1}{\frac{\varepsilon_n}{3\cdot 5\cdot 7 \cdot \ldots \cdot (2n+1)}}. $$ In this representation, the expansions of rational numbers with an even denominator are complicated: $$ \frac{1}{4}=\Delta^{(2n+1)} _{035229[11]4...}, $$ $$ \frac{3}{8}=\Delta^{(2n+1)} _{104341967...}. $$ The author's investigations of representations of rational numbers by Cantor series may be useful for approaching the ``P vs NP'' problem (see http://www.claymath.org/millennium-problems/p-vs-np-problem and www.claymath.org/sites/default/files/pvsnp.pdf for a description of this problem). The author's next articles will be devoted to such investigations.
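The digit formula of the Theorem is effective: starting from $\Delta_1=pq_1$, each digit is obtained by one integer division. A short sketch (Python) reproducing the expansions above:

\begin{verbatim}
def cantor_digits(p, r, q, n_digits):
    # Delta_1 = p q_1; delta_n = [q_n (Delta_{n-1} - r delta_{n-1}) / r]
    digits = []
    D = p * q(1)
    d = D // r
    digits.append(d)
    for n in range(2, n_digits + 1):
        D = q(n) * (D - r * d)
        d = D // r
        digits.append(d)
    return digits

q = lambda n: 2 * n + 1
print(cantor_digits(1, 4, q, 8))    # [0, 3, 5, 2, 2, 9, 11, 4]
print(cantor_digits(3, 8, q, 9))    # [1, 0, 4, 3, 4, 1, 9, 6, 7]
print(cantor_digits(1, 2, q, 5))    # [1, 2, 3, 4, 5], i.e. 1/2
\end{verbatim}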
\section{Introduction} We believe that Non Archimedean Mathematics (NAM), namely, mathematics based on Non Archimedean Fields, is very interesting, very rich and, in many circumstances, allows one to construct models of the physical world in a more elegant and simple way. In the years around 1900, NAM was investigated by prominent mathematicians such as David Hilbert and Tullio Levi-Civita, but then it was forgotten until the '60s, when Abraham Robinson presented his Non Standard Analysis (NSA). We refer to Ehrlich \cite{el06} for a historical analysis of these facts and to Keisler \cite{keisler76} for a very clear exposition of NSA. In this paper we apply the general ideas of NAM and some of the techniques of NSA to a new notion of generalized functions which we have called \textbf{ultrafunctions}. Ultrafunctions are a particular class of functions based on a superreal field $\mathbb{R}^{\ast }\supset \mathbb{R}$. More exactly, to any continuous function $f:\mathbb{R}^{N}\rightarrow \mathbb{R}$, we associate in a canonical way an ultrafunction $f_{\Phi }:\left( \mathbb{R}^{\ast }\right) ^{N}\rightarrow \mathbb{R}^{\ast }$ which extends $f$; but the ultrafunctions are much more numerous than the functions, and among them we can find solutions of functional equations which do not have any solutions among the real functions or the distributions. Now we itemize some of the peculiar properties of the ultrafunctions: \begin{itemize} \item the space of ultrafunctions is larger than the space of distributions, namely, to every distribution $T,$ we can associate in a canonical way an ultrafunction $T_{\Phi }$ (cf. section \ref{UD}); \item similarly to the distributions, the ultrafunctions are motivated by the need of having generalized solutions; however, while the distributions are no longer functions, the ultrafunctions are still functions, even if they have larger domain and range; \item unlike the distributions, the space of ultrafunctions is suitable for nonlinear problems; in fact, any operator $F$ defined for a reasonable class of functions can be extended to the ultrafunctions; for example, in the framework of ultrafunctions $\delta ^{2}$ makes sense (here $\delta $ is the Dirac measure seen as an ultrafunction); \item if a problem has a unique classical solution $u,$ then $u_{\Phi }$ is the only solution in the space of ultrafunctions; \item the main strategy to prove the existence of generalized solutions in the space of ultrafunctions is relatively simple; it is just a variant of the Faedo-Galerkin method. \end{itemize} This paper is organized as follows. In Section \ref{OT} we introduce NAM via the notion of $\Lambda $-limit. This approach is quite different from the usual approach to NAM via NSA. It follows a line developed in \cite{benci95}, \cite{benci99}, \cite{BDN2003} and \cite{BHW}. In this section, we introduce all the notions necessary to understand the rest of the paper, but we omit details and most of the proofs. In sections \ref{dvb} and \ref{u}, we introduce the notion of ultrafunction, and the last three sections are devoted to applications. The applications are chosen as examples to show the potentiality of the theory and possible directions of study; they are not an exhaustive study of the topics treated there. \bigskip Before ending the introduction, we want to emphasize the differences between our approach to NAM and the approach of most people working in Nonstandard Analysis: there are two main differences, one in the aims and one in the methods.
Let us examine the difference in the aims. We think that infinitesimal and infinite numbers should not be considered just as entities living in a parallel universe (the nonstandard universe) which are only a tool to prove some statement relative to our universe (the standard universe), but rather that they should be considered mathematical entities which have the same status as the others and can be used to build models as any other mathematical entity. Actually, the advantages of a theory which includes infinitesimals rely more on the possibility of building new models than on the proving techniques. Our papers \cite{BGG} and \cite{BHW}, as well as this one, are inspired by this principle. As far as the methods are concerned, we introduce a non-Archimedean field via a new notion of limit (see section \ref{OL}). Moreover, we make a very limited use of logic: the transfer principle (or Leibniz Principle) is given by Th. \ref{limit} and it is not necessary to introduce a formal language. We think that this approach is closer to the way of thinking of the applied mathematician. \subsection{Notation} Let $\Omega $\ be a subset of $\mathbb{R}^{N}$: then \begin{itemize} \item $\mathcal{F}\left( \Omega ,E\right) $ denotes the set of all the functions defined in $\Omega $ with values in $E;$ \item $\mathcal{C}\left( \Omega \right) $ denotes the set of real continuous functions defined on $\Omega ;$ \item $\mathcal{C}_{0}\left( \overline{\Omega }\right) $ denotes the set of real continuous functions on $\overline{\Omega }$ which vanish on $\partial \Omega ;$ \item $\mathcal{C}^{k}\left( \Omega \right) $ denotes the set of functions defined on $\Omega \subset \mathbb{R}^{N}$ which have continuous derivatives up to the order $k;$ \item $\mathcal{C}_{0}^{k}\left( \overline{\Omega }\right) =\mathcal{C}^{k}\left( \overline{\Omega }\right) \cap \mathcal{C}_{0}\left( \overline{\Omega }\right) ;$ \item $\mathcal{D}\left( \Omega \right) $ denotes the set of the infinitely differentiable functions with compact support defined on $\Omega \subset \mathbb{R}^{N};$ $\mathcal{D}^{\prime }\left( \Omega \right) $ denotes the topological dual of $\mathcal{D}\left( \Omega \right) $, namely the set of distributions on $\Omega ;$ \item $\mathcal{S}\left( \Omega \right) $ denotes the Schwartz space and $\mathcal{S}^{\prime }\left( \Omega \right) $ the set of tempered distributions; \item $\mathcal{E}\left( \Omega \right) =\mathcal{C}^{\infty }\left( \Omega \right) $ denotes the set of the infinitely differentiable functions; $\mathcal{E}^{\prime }\left( \Omega \right) $ denotes the topological dual of $\mathcal{E}\left( \Omega \right) $, namely the set of distributions with compact support in $\Omega ;$ \item $H^{1}(\Omega )$ is the usual Sobolev space defined as the set of functions $u\in L^{2}\left( \Omega \right) $ such that $\nabla u\in L^{2}\left( \Omega \right) ;$ \item $H_{0}^{1}(\Omega )$ is the closure of $\mathcal{D}\left( \Omega \right) $ in $H^{1}(\Omega );$ \item $H^{-1}(\Omega )$ is the topological dual of $H_{0}^{1}(\Omega ).$ \end{itemize} \section{$\Lambda $-theory\label{OT}} As we have already remarked in the introduction, $\Lambda $-theory can be considered as a variant of nonstandard analysis. It can be introduced via the notion of $\Lambda $-limit, and it can be easily used for the problems which we will consider in this paper. \subsection{Non Archimedean Fields} In this section, we will give the basic definitions relative to non-Archimedean fields and some of the basic facts.
$\mathbb{F}$ will denote an ordered field. The elements of $\mathbb{F}$ will be called numbers. Clearly $\mathbb{F}$ contains (a set isomorphic to) the rational numbers. \begin{definition} Let $\mathbb{F}$ be an ordered field. Let $\xi \in \mathbb{F}$. We say that: \begin{itemize} \item $\xi $ is infinitesimal if, for all $n\in \mathbb{N}$, $|\xi |<\frac{1}{n}$; \item $\xi $ is finite if there exists $n\in \mathbb{N}$ such that $|\xi |<n$; \item $\xi $ is infinite if, for all $n\in \mathbb{N}$, $|\xi |>n$ (equivalently, if $\xi $ is not finite). \end{itemize} \end{definition} \begin{definition} An ordered field $\mathbb{K}$ is called non-Archimedean if it contains an infinitesimal $\xi \neq 0$. \end{definition} It is easily seen that the inverse of a nonzero infinitesimal number is infinite, and the inverse of an infinite number is infinitesimal. Clearly, all infinitesimal numbers are finite. \begin{definition} A superreal field is an ordered field $\mathbb{K}$ that properly extends $\mathbb{R}$. \end{definition} It is easy to show that any superreal field contains infinitesimal and infinite numbers. Thanks to infinitesimal numbers, in the superreal fields we can formalize a new notion of \textquotedblleft closeness\textquotedblright . \begin{definition} \label{def infinite closeness} We say that two numbers $\xi $ and $\zeta \in \mathbb{K}$ are infinitely close if $\xi -\zeta $ is infinitesimal. In this case, we will write $\xi \sim \zeta $. \end{definition} It is easy to see that the relation \textquotedblleft $\sim $\textquotedblright\ of infinite closeness is an equivalence relation. \begin{theorem} If $\mathbb{K}$ is a superreal field, every finite number $\xi \in \mathbb{K} $ is infinitely close to a unique real number $r\sim \xi $, called the \textbf{shadow} or the \textbf{standard part} of $\xi $. We will write $r=sh(\xi )$. If $\xi \in \mathbb{K}$ is a positive (negative) infinite number, then we put $sh(\xi )=+\infty $ ($sh(\xi )=-\infty $). \end{theorem} We can also consider the relation of \textquotedblleft finite closeness\textquotedblright : \begin{equation*} \xi \sim _{f}\zeta \mathrm{\ if\ and\ only\ if\ }\xi -\zeta \mathrm{\ is\ finite.} \end{equation*} It is readily seen that also $\sim _{f}$ is an equivalence relation. In the literature, the equivalence classes relative to the two relations of closeness $\sim $ and $\sim _{f}$ are called monads and galaxies, respectively. \begin{definition} \label{def monad} The monad of a number $\xi $ is the set of all numbers that are infinitely close to it: \begin{equation*} \mathfrak{mon}(\xi )=\{\zeta \in \mathbb{K}:\xi \sim \zeta \} \end{equation*} The galaxy of a number $\xi $ is the set of all numbers that are finitely close to it: \begin{equation*} \mathfrak{gal}(\xi )=\{\zeta \in \mathbb{K}:\xi \sim _{f}\zeta \} \end{equation*} \end{definition} So, $\mathfrak{mon}(0)$ is the set of all infinitesimal numbers in $\mathbb{K}$ and $\mathfrak{gal}(0)$ is the set of all finite numbers. \subsection{The $\Lambda $-limit\label{OL}} $\mathcal{U}$ will denote our \textquotedblleft mathematical universe\textquotedblright .
For our applications a good choice of $\mathcal{U}$ is given by the superstructure on $\mathbb{R}$:
\begin{equation*}
\mathcal{U}=\dbigcup_{n=0}^{\infty }\mathcal{U}_{n}
\end{equation*}
where $\mathcal{U}_{n}$ is defined by induction as follows:
\begin{eqnarray*}
\mathcal{U}_{0} &=&\mathbb{R} \\
\mathcal{U}_{n+1} &=&\mathcal{U}_{n}\cup \mathcal{P}\left( \mathcal{U}_{n}\right)
\end{eqnarray*}
Here $\mathcal{P}\left( E\right) $ denotes the power set of $E.$ If we identify ordered pairs with Kuratowski pairs, and functions and relations with their graphs, clearly $\mathcal{U}$ contains almost all the mathematical objects needed in mathematics.

Given the universe $\mathcal{U}$, we denote by $\Lambda $ the family of finite subsets of $\mathcal{U}.$ Clearly $\left( \Lambda ,\subset \right) $ is a directed set and, as usual, a function $\varphi :\Lambda \rightarrow E$ will be called a \textit{net} (with values in $E$).

\bigskip

{\Large Axioms of the }$\Lambda ${\Large -limit}

\begin{itemize}
\item \textsf{(}$\Lambda $-\textsf{1)}\ \textbf{Existence Axiom.}\ \textit{There is a superreal field} $\mathbb{K}\supset \mathbb{R}$ \textit{such that for every net }$\varphi :\Lambda \rightarrow \mathbb{R}$\textit{\ there exists a unique element }$L\in \mathbb{K}$\textit{\ called the} \textquotedblleft $\Lambda $-limit" \textit{of}\emph{\ }$\varphi .$ \textit{The} $\Lambda $-\textit{limit will be denoted by}
\begin{equation*}
L=\lim_{\lambda \uparrow \mathcal{U}}\varphi (\lambda )\ \ \textit{or}\ \ \ L=\lim_{\lambda \in \Lambda }\varphi (\lambda ).
\end{equation*}
\textit{Moreover we assume that every}\emph{\ }$\xi \in \mathbb{K}$\textit{\ is the }$\Lambda $-limit\textit{\ of some net}\emph{\ }$\varphi :\Lambda \rightarrow \mathbb{R}$\emph{.}

\item ($\Lambda $-2)\ \textbf{Real numbers axiom}. \textit{If }$\varphi (\lambda )$\textit{\ is eventually constant, namely} $\exists \lambda _{0}\in \Lambda :\ \forall \lambda \supset \lambda _{0},\ \varphi (\lambda )=r,$ \textit{then}
\begin{equation*}
\lim_{\lambda \uparrow \mathcal{U}}\varphi (\lambda )=r.
\end{equation*}

\item ($\Lambda $-3)\ \textbf{Sum and product Axiom}.\ \textit{For all }$\varphi ,\psi :\Lambda \rightarrow \mathbb{R}$\emph{:}
\begin{eqnarray*}
\lim_{\lambda \uparrow \mathcal{U}}\varphi (\lambda )+\lim_{\lambda \uparrow \mathcal{U}}\psi (\lambda ) &=&\lim_{\lambda \uparrow \mathcal{U}}\left( \varphi (\lambda )+\psi (\lambda )\right) \\
\lim_{\lambda \uparrow \mathcal{U}}\varphi (\lambda )\cdot \lim_{\lambda \uparrow \mathcal{U}}\psi (\lambda ) &=&\lim_{\lambda \uparrow \mathcal{U}}\left( \varphi (\lambda )\cdot \psi (\lambda )\right)
\end{eqnarray*}
\end{itemize}

\bigskip

\begin{theorem}
The axioms ($\Lambda $-1), ($\Lambda $-2), ($\Lambda $-3) are consistent.
\end{theorem}

\textbf{Proof. }In order to prove the consistency of these axioms, it is sufficient to construct a model. Let us consider the algebra $\mathcal{F}\left( \Lambda ,\mathbb{R}\right) $ of the real functions defined on $\Lambda $ and set
\begin{equation*}
\mathfrak{I}_{0}=\left\{ \varphi \in \mathcal{F}\left( \Lambda ,\mathbb{R}\right) \ |\ \varphi (\lambda )\ \text{is\ eventually}\ 0\right\} .
\end{equation*}
It is easy to check that $\mathfrak{I}_{0}$ is an ideal in the algebra $\mathcal{F}\left( \Lambda ,\mathbb{R}\right) .$ By the Krull-Zorn Theorem, every ideal is contained in a maximal ideal.
Let $\mathfrak{I}$ be a maximal ideal containing $\mathfrak{I}_{0}.$ We set
\begin{equation*}
\mathbb{K}:=\frac{\mathcal{F}\left( \Lambda ,\mathbb{R}\right) }{\cong _{\mathfrak{I}}}
\end{equation*}
where the equivalence relation $\cong _{\mathfrak{I}}$ is defined as follows:
\begin{equation*}
\varphi \cong _{\mathfrak{I}}\psi :\Leftrightarrow \varphi -\psi \in \mathfrak{I}.
\end{equation*}
It is easy to check that $\mathbb{K}$ is an ordered field and that $\mathbb{R}\subset \mathbb{K}$ if we identify $r\in \mathbb{R}$ with the equivalence class $\left[ r\right] _{\cong _{\mathfrak{I}}}.$ Finally, we can define the $\Lambda $-limit as
\begin{equation*}
\lim_{\lambda \uparrow \mathcal{U}}\varphi (\lambda )=\left[ \varphi \right] _{\cong _{\mathfrak{I}}}.
\end{equation*}
Now, it is immediate to check that the $\Lambda $-limit satisfies ($\Lambda $-1), ($\Lambda $-2), ($\Lambda $-3). $\square $

\bigskip

Now we want to define the $\Lambda $-limit of any bounded net of mathematical objects in $\mathcal{U}$ (a net $\varphi :\Lambda \rightarrow \mathcal{U}$ is called bounded if there exists $n$ such that $\forall \lambda \in \Lambda ,\ \varphi (\lambda )\in \mathcal{U}_{n}$). To do this, consider a net
\begin{equation}
\varphi :\Lambda \rightarrow \mathcal{U}_{n}  \label{net}
\end{equation}
We will define $\lim_{\lambda \uparrow \mathcal{U}}\varphi (\lambda )$ by induction on $n$. For $n=0,$ $\lim_{\lambda \uparrow \mathcal{U}}\varphi (\lambda )$ is defined by the axioms \textsf{(}$\Lambda $-\textsf{1)}, ($\Lambda $-2), ($\Lambda $-3); so by induction we may assume that the limit is defined for $n-1$ and we define it for the net (\ref{net}) as follows:
\begin{equation*}
\lim_{\lambda \uparrow \mathcal{U}}\varphi (\lambda )=\left\{ \lim_{\lambda \uparrow \mathcal{U}}\psi (\lambda )\ |\ \psi :\Lambda \rightarrow \mathcal{U}_{n-1},\ \forall \lambda \in \Lambda ,\ \psi (\lambda )\in \varphi (\lambda )\right\}
\end{equation*}

\begin{definition}
A mathematical entity (number, set, function or relation) which is the $\Lambda $-limit of a net is called \textbf{internal}.
\end{definition}

\bigskip

If $E\in \mathcal{U}$ and $\varphi :\Lambda \cap \mathcal{P}\left( E\right) \rightarrow \mathcal{U}_{n},$ then we will use the following notation:
\begin{equation*}
\lim_{\lambda \uparrow E}\varphi (\lambda )=\lim_{\mu \uparrow \mathcal{U}}\varphi (\mu \cap E).
\end{equation*}

\bigskip

\subsection{Natural extensions of sets and functions}

\begin{definition}
The \textbf{natural extension} of a set $E\subset \mathbb{R}$ is given by
\begin{equation*}
E^{\ast }:=\lim_{\lambda \uparrow \mathcal{U}}c_{E}(\lambda )=\left\{ \lim_{\lambda \uparrow \mathcal{U}}\psi (\lambda )\ |\ \psi (\lambda )\in E\right\}
\end{equation*}
where $c_{E}(\lambda )$ is the net identically equal to $E$.
\end{definition}

Using the above definition, we have that
\begin{equation*}
\mathbb{K}=\mathbb{R}^{\ast }.
\end{equation*}
In this context a function $f$ can be identified with its graph; then the natural extension of a function is well defined. Moreover we have the following result:

\begin{theorem}
The \textbf{natural extension} of a function
\begin{equation*}
f:E\rightarrow F
\end{equation*}
is a function
\begin{equation*}
f^{\ast }:E^{\ast }\rightarrow F^{\ast };
\end{equation*}
moreover, for every $\varphi :\Lambda \cap \mathcal{P}\left( E\right) \rightarrow E,$ we have that
\begin{equation*}
\lim_{\lambda \uparrow \mathcal{U}}\ f(\varphi (\lambda ))=f^{\ast }\left( \lim_{\lambda \uparrow \mathcal{U}}\varphi (\lambda )\right) .
\end{equation*}
\end{theorem}

When dealing with functions, if the domain of the function is clear from the context, sometimes the "$\ast $" will be omitted. For example, if $\eta \in \mathbb{R}^{\ast }$ is an infinitesimal, then clearly $e^{\eta }$ is a short way to write $\exp ^{\ast }(\eta ).$

The following theorem is a fundamental tool in using the $\Lambda $-limit:

\begin{theorem}
\label{limit}\textbf{(Leibnitz Principle)} Let $\mathcal{R}$ be a relation in $\mathcal{U}_{n}$ for some $n\geq 0$ and let $\varphi ,\psi \in \mathcal{F}\left( \Lambda ,\mathcal{U}_{n}\right) $. If
\begin{equation*}
\forall \lambda \in \Lambda ,\ \varphi (\lambda )\mathcal{R}\psi (\lambda )
\end{equation*}
then
\begin{equation*}
\left( \underset{\lambda \uparrow \mathcal{U}}{\lim }\varphi (\lambda )\right) \mathcal{R}^{\ast }\left( \underset{\lambda \uparrow \mathcal{U}}{\lim }\psi (\lambda )\right)
\end{equation*}
\end{theorem}

\bigskip

\begin{remark}
Notice that, in the above theorem, the relations "$=$" and "$\in $" do not change their "meaning", namely "$=^{\ast }$" and "$\in ^{\ast }$" have the same interpretation as "$=$" and "$\in $".
\end{remark}

\begin{definition}
An internal set is called \textbf{hyperfinite} if it is the $\Lambda $-limit of finite sets.
\end{definition}

All the internal finite sets are hyperfinite, but there are hyperfinite sets which are not finite. For example, the set
\begin{equation*}
\mathbb{R}^{\circ }:=\ \underset{\lambda \uparrow \mathcal{U}}{\lim }\left( \mathbb{R}\cap \lambda \right)
\end{equation*}
is not finite. The hyperfinite sets are very important since they inherit many properties of finite sets via Th. \ref{limit}. For example, $\mathbb{R}^{\circ }$ has a maximum and a minimum, and every internal function
\begin{equation*}
f:\mathbb{R}^{\circ }\rightarrow \mathbb{R}^{\ast }
\end{equation*}
has a maximum and a minimum as well.

Also, it is possible to add the elements of a hyperfinite set of numbers or vectors. Let
\begin{equation*}
A:=\ \underset{\lambda \uparrow \mathcal{U}}{\lim }A_{\lambda }
\end{equation*}
be a hyperfinite set; then, the hyperfinite sum is defined as follows:
\begin{equation*}
\sum_{a\in A}a=\ \underset{\lambda \uparrow \mathcal{U}}{\lim }\sum_{a\in A_{\lambda }}a.
\end{equation*}
In particular, if $A_{\lambda }=\left\{ a_{1}(\lambda ),...,a_{\beta (\lambda )}(\lambda )\right\} $ with $\beta (\lambda )\in \mathbb{N},$ then, setting
\begin{equation*}
\beta =\ \underset{\lambda \uparrow \mathcal{U}}{\lim }\ \beta (\lambda )\in \mathbb{N}^{\ast },
\end{equation*}
we use the notation
\begin{equation*}
\sum_{j=1}^{\beta }a_{j}=\ \underset{\lambda \uparrow \mathcal{U}}{\lim }\sum_{j=1}^{\beta (\lambda )}a_{j}(\lambda ).
\end{equation*}

\subsection{Qualified sets}

Also, if $Q\subset \Lambda $ and $\varphi :\Lambda \rightarrow \mathcal{U}_{n}$, the following notation is quite useful:
\begin{equation*}
\lim_{\lambda \in Q}\varphi (\lambda )=\lim_{\lambda \uparrow \mathcal{U}}\widetilde{\varphi }(\lambda )
\end{equation*}
where
\begin{equation*}
\widetilde{\varphi }(\lambda )=\left\{
\begin{array}{cc}
\varphi (\lambda ) & \text{for}\ \ \lambda \in Q \\
\varnothing & \text{for}\ \ \lambda \notin Q
\end{array}
\right.
\end{equation*}
We use this notation to introduce the notion of qualified set:

\begin{definition}
\label{qua}We say that a set $Q\subset \Lambda $ is qualified if, for every bounded net $\varphi ,$ we have that
\begin{equation*}
\lim_{\lambda \uparrow \mathcal{U}}\varphi (\lambda )=\lim_{\lambda \in Q}\varphi (\lambda ).
\end{equation*}
\end{definition}

By the above definition, we have that the $\Lambda $-limit of a net $\varphi $ depends only on the values that $\varphi $ takes on a qualified set. It is easy to see that (nontrivial) qualified sets exist. For example, by ($\Lambda $-2), we can deduce that, for every $\lambda _{0}\in \Lambda ,$ the set
\begin{equation*}
Q\left( \lambda _{0}\right) :=\left\{ \lambda \in \Lambda \ |\ \lambda _{0}\subseteq \lambda \right\}
\end{equation*}
is qualified.

In this paper, we will use the notion of qualified set via the following theorem:

\begin{theorem}
\label{billo}Let $\mathcal{R}$ be a relation in $\mathcal{U}_{n}$ for some $n\geq 0$ and let $\varphi ,\psi \in \mathcal{F}\left( \Lambda ,\mathcal{U}_{n}\right) $. Then the following statements are equivalent:

\begin{itemize}
\item there exists a qualified set $Q$ such that
\begin{equation*}
\forall \lambda \in Q,\ \varphi (\lambda )\mathcal{R}\psi (\lambda );
\end{equation*}

\item we have
\begin{equation*}
\left( \underset{\lambda \uparrow \mathcal{U}}{\lim }\varphi (\lambda )\right) \mathcal{R}^{\ast }\left( \underset{\lambda \uparrow \mathcal{U}}{\lim }\psi (\lambda )\right) .
\end{equation*}
\end{itemize}
\end{theorem}

\textbf{Proof}: It is an immediate consequence of Th. \ref{limit} and the definition of qualified set. $\square $

\section{The abstract theory\label{dvb}}

In this section we will present a method to extend any vector space $V$ to a larger vector space $\mathcal{B}\left[ V\right] $ of hyperfinite dimension. In the next section we will apply this method to functional vector spaces.

\subsection{Definition of ultravectors\label{du}}

\begin{definition}
\label{UV} Let $H$ be a separable real (or complex) Hilbert space with scalar product $(\cdot \ ,\cdot )$ and let $V\subset H$ be a dense subspace. We assume that $H\in \mathcal{U}$ and we set
\begin{equation*}
\mathcal{B}\left[ V\right] :=\ \underset{\lambda \uparrow V}{\lim }\ V_{\lambda }
\end{equation*}
where
\begin{equation*}
V_{\lambda }:=Sp\left( \lambda \right)
\end{equation*}
is the span of $\lambda $. $\mathcal{B}\left[ V\right] $ is called the space of ultravectors based on $V.$
\end{definition}

In order to simplify the notation, sometimes we will set $V_{\mathcal{B}}=\mathcal{B}\left[ V\right] .$ Notice that $V_{\mathcal{B}}$ is a vector space of hyperfinite dimension $\beta \in \mathbb{N}^{\ast }$, where $\beta $ is defined as follows:
\begin{equation*}
\beta =\dim ^{\ast }(V_{\mathcal{B}})=\ \underset{\lambda \uparrow V}{\lim }\left( \dim V_{\lambda }\right) .
\end{equation*}

Let $f\in V;$ if we identify $f$ and $f^{\ast },$ we have that $V\subset V_{\mathcal{B}}$. Now let
\begin{equation}
\Phi :H^{\ast }\rightarrow V_{\mathcal{B}}  \label{prog}
\end{equation}
be the orthogonal projector. Then, to every vector $f\in H,$ we can associate the ultravector $\Phi f\in V_{\mathcal{B}}.$ If $\left\{ e_{j}\right\} _{j\leq \beta }$ is an orthonormal basis for $V_{\mathcal{B}},$ then
\begin{equation}
\Phi f=\sum_{j=1}^{\beta }(f,e_{j})e_{j}.  \label{fi}
\end{equation}

Let $V^{\prime }$ denote the dual of $V,$ namely, $V^{\prime }$ is the family of linear functionals $T$ on $V.$

\begin{definition}
\label{dd}For any $T\in V^{\prime },$ we denote by $\Phi T$ the only vector in $V_{\mathcal{B}}$ such that
\begin{equation*}
\forall v\in V_{\mathcal{B}},\ (\Phi T,v)=\left\langle T^{\ast },v\right\rangle ;
\end{equation*}
$\Phi T$ is called a \textbf{dual} ultravector.
Using the orthonormal basis $\left\{ e_{j}\right\} _{j\leq \beta }$, we have that
\begin{equation}
\Phi T=\sum_{j=1}^{\beta }(\Phi T,e_{j})e_{j}=\sum_{j=1}^{\beta }\left\langle T^{\ast },e_{j}\right\rangle e_{j}.  \label{phi}
\end{equation}
\end{definition}

Notice that, if we identify $H$ with a subset of $V^{\prime },$ the operator $\Phi $ defined by (\ref{phi}) is an extension of the operator (\ref{fi}); hence we have denoted them with the same symbol. From our previous discussion, the space of ultravectors $V_{\mathcal{B}}$ contains three types of vectors:

\begin{itemize}
\item standard ultravectors: $u\in V_{\mathcal{B}}$ is called \textbf{standard} if $u\in V$ (or, to be more precise, if there exists $f\in V$ such that $u=f^{\ast }$);

\item dual ultravectors: $u\in V_{\mathcal{B}}$ is called a \textbf{dual} ultravector if $u=\Phi T$ for some $T\in V^{\prime }$;

\item proper ultravectors: $u\in V_{\mathcal{B}}$ is called a \textbf{proper} ultravector if it is not a dual ultravector.
\end{itemize}

The ultravectors which are not standard will be called \textbf{ideal}.

\subsection{Extension of operators\label{EO}}

\begin{definition}
\label{CE}Given an operator $F:D\rightarrow V^{\prime },$ $D\subset V,$ the map
\begin{equation*}
F_{\mathbf{\Phi }}:V_{\mathcal{B}}\cap D^{\ast }\rightarrow V_{\mathcal{B}}
\end{equation*}
defined by
\begin{equation}
F_{\mathbf{\Phi }}=\Phi \circ F^{\ast }
\end{equation}
is called the \textbf{canonical extension} of $F.$
\end{definition}

By the definition of $F_{\mathbf{\Phi }}$, if $u\in V_{\mathcal{B}}\cap D^{\ast },$ we have that
\begin{equation}
\forall v\in V_{\mathcal{B}},\ (F_{\mathbf{\Phi }}\left( u\right) ,v)=\left\langle F^{\ast }\left( u\right) ,v\right\rangle .  \label{bellina}
\end{equation}
Using an orthonormal basis $\left\{ e_{j}\right\} _{j\leq \beta }$ for $V_{\mathcal{B}},$ we have
\begin{equation*}
F_{\mathbf{\Phi }}\left( u\right) =\sum_{j=1}^{\beta }\left\langle F^{\ast }(u),e_{j}\right\rangle e_{j}.
\end{equation*}
If we identify $H$ with its dual and we take $F:V\cap D\rightarrow H,$ then equation (\ref{bellina}) becomes
\begin{equation}
\forall v\in V_{\mathcal{B}},\ (F_{\mathbf{\Phi }}\left( u\right) ,v)=\left( F^{\ast }\left( u\right) ,v\right) .  \label{belloccia}
\end{equation}

\section{The ultrafunctions\label{u}}

\bigskip

\subsection{Definition}

\bigskip

\begin{definition}
Let $\Omega $ be a set in $\mathbb{R}^{N}$, and let $V\left( \Omega \right) $ be a vector space such that $\mathcal{D}(\Omega )\subseteq V\left( \Omega \right) \subseteq \mathcal{C}(\Omega )\cap L^{2}(\Omega ).$ Then any function
\begin{equation*}
u\in \mathcal{B}\left[ V\left( \Omega \right) \right]
\end{equation*}
is called an ultrafunction.
\end{definition}

So the ultrafunctions are $\Lambda $-limits of continuous functions in $V_{\lambda }\left( \Omega \right) :=Sp\left( \lambda \cap V\left( \Omega \right) \right) $ and hence they are internal functions
\begin{equation*}
u:\Omega ^{\ast }\rightarrow \mathbb{C}^{\ast }.
\end{equation*}

\begin{remark}
If $V\left( \Omega \right) $ is a Sobolev space such as $H^{1}\left( \Omega \right) ,$ then the elements of $V\left( \Omega \right) $ are not functions but equivalence classes of functions, so also the elements of $\mathcal{B}\left[ V\left( \Omega \right) \right] $ are equivalence classes of functions. In order to avoid this unpleasant fact, in the definition of ultrafunctions, we have assumed $V\left( \Omega \right) \subset \mathcal{C}(\Omega )$.
Moreover, this choice has also another motivation: as we will see in the applications, if we approach a problem via the ultrafunctions, we do not need Sobolev spaces (even if we might need the Sobolev inequalities). In some sense the ultrafunctions represent an alternative approach to problems which do not have classical solutions in some $\mathcal{C}^{k}(\overline{\Omega }).$
\end{remark}

Since $V_{\mathcal{B}}(\Omega )\subset \left[ L^{2}(\Omega )\right] ^{\ast },$ $V_{\mathcal{B}}(\Omega )$ can be equipped with the following scalar product:
\begin{equation*}
\left( u,v\right) =\int_{\Omega }^{\ast }u(x)\overline{v(x)}\ dx
\end{equation*}
where $\int_{\Omega }^{\ast }$ is the natural extension of the Lebesgue integral considered as a functional. Notice that the Euclidean structure of $V_{\mathcal{B}}(\Omega )$ is the $\Lambda $-limit of the Euclidean structure of every $V_{\lambda }(\Omega )$ given by the usual $L^{2}\left( \Omega \right) $ scalar product.

If $f\in \mathcal{C}(\Omega )$ is a function such that
\begin{equation}
\forall g\in V(\Omega ),\ \int f(x)g(x)\ dx<+\infty ,  \label{rin}
\end{equation}
then it can be identified with an element of $V\left( \Omega \right) ^{\prime }$ and, by Def. \ref{dd}, there is a unique ultrafunction $f_{\Phi }$ such that, $\forall v\in V_{\mathcal{B}}(\Omega ),$
\begin{equation}
\int^{\ast }f_{\Phi }(x)v(x)\ dx=\int^{\ast }f^{\ast }(x)v(x)\ dx.  \label{luce}
\end{equation}
The map
\begin{equation}
\Phi :\mathcal{C}(\Omega )\cap V\left( \Omega \right) ^{\prime }\rightarrow V_{\mathcal{B}}(\Omega )  \label{canone}
\end{equation}
is called the canonical map. Notice that $f_{\Phi }\neq f^{\ast }$ unless $f\in V(\Omega ).$

Now let us define a new notion which helps to understand the structure of ultrafunctions:

\begin{definition}
\label{regular}A hyperfinite basis $\left\{ e_{j}\right\} _{j\leq \beta }$ for $V_{\mathcal{B}}\left( \Omega \right) $ is called a \textbf{regular} basis if

\begin{itemize}
\item it is an orthonormal basis,

\item $\left\{ e_{j}\right\} _{j\in \mathbb{N}}$ is an orthonormal Schauder basis for $L^{2}(\Omega ).$
\end{itemize}
\end{definition}

\bigskip

The following theorem shows that regular bases exist:

\begin{theorem}
\label{uu}Let $\left\{ h_{j}\right\} _{j\in \mathbb{N}}\subset V\left( \Omega \right) $ be an orthonormal Schauder basis for $L^{2}(\Omega )$ and let $W$ be the space generated by \textbf{finite} linear combinations of the elements of $\left\{ h_{j}\right\} _{j\in \mathbb{N}}$ (hence $W$ is a dense subspace of $V\left( \Omega \right) $). Then there exists a regular basis $\left\{ e_{j}\right\} _{j\leq \beta }$ for $V_{\mathcal{B}}\left( \Omega \right) $ such that
\begin{equation*}
e_{j}=h_{j}\ \ \text{for }j\leq \theta
\end{equation*}
where
\begin{equation*}
\theta =\dim ^{\ast }\left( V_{\mathcal{B}}\left( \Omega \right) \cap W^{\ast }\right) .
\end{equation*}
\end{theorem}

\textbf{Proof}.
Let $\left[ \left\{ h_{j}\right\} _{j\in \mathbb{N}}\right] ^{\ast }=\left\{ h_{j}\right\} _{j\in \mathbb{N}^{\ast }}\subset V\left( \Omega \right) ^{\ast }$ be an orthonormal Schauder basis for $L^{2}(\Omega )^{\ast }$ and set
\begin{equation*}
\theta =\max \left\{ k\in \mathbb{N}^{\ast }\ |\ \forall j\leq k,\ h_{j}\in V_{\mathcal{B}}\left( \Omega \right) \right\} .
\end{equation*}
Since $\left\{ h_{j}\right\} _{j\in \mathbb{N}}\subset V\left( \Omega \right) $, $\theta $ is an infinite number in $\mathbb{N}^{\ast }.$ Set $e_{j}=h_{j}$ for $j\leq \theta .$ Now, we can take an orthonormal basis $\left\{ e_{j}\right\} _{j\leq \beta }$ for $V_{\mathcal{B}}$ which contains $\left\{ e_{j}\right\} _{j\leq \theta }.$ $\square $

So every ultrafunction $u\in V_{\mathcal{B}}\left( \Omega \right) $ can be represented as follows:
\begin{equation}
u(x)=\dsum\limits_{j=1}^{\beta }u_{j}e_{j}(x)=\dsum\limits_{j=1}^{\theta }u_{j}h_{j}(x)+\dsum\limits_{j=\theta +1}^{\beta }u_{j}e_{j}(x)  \label{deco}
\end{equation}
with
\begin{equation*}
u_{j}=\int^{\ast }u(x)\overline{e_{j}(x)}\ dx\in \mathbb{C}^{\ast },\ j\leq \beta .
\end{equation*}
In particular, if $f\in L^{2}(\Omega )$ (or, more generally, if $f\in V^{\prime }(\Omega )$), the numbers $f_{j},$ $j\in \mathbb{N},$ are complex numbers. The internal function $f_{\Phi }(x)=\dsum\limits_{j=1}^{\beta }f_{j}e_{j}$ is the orthogonal projection of $f^{\ast }\in L^{2}(\Omega )^{\ast }$ on $V_{\mathcal{B}}\left( \Omega \right) \subset L^{2}(\Omega )^{\ast }.$

\bigskip

\textbf{Example}: Let us see an example; we set

\begin{itemize}
\item $\Omega =\left[ 0,1\right] ;$

\item $V\left( \left[ 0,1\right] \right) =C_{0}^{2}\left( \left[ 0,1\right] \right) ;$

\item $h_{j}(x)=\sqrt{2}\sin \left( j\pi x\right) .$
\end{itemize}

By Th. \ref{uu} there exists a regular basis $\left\{ e_{j}(x)\right\} _{j\leq \beta }$ which contains $\left\{ \sqrt{2}\sin \left( j\pi x\right) \right\} _{j\in \mathbb{N}}$. With these assumptions, every vector $u\in V_{\mathcal{B}}\left( \left[ 0,1\right] \right) $ can be written as follows:
\begin{equation*}
u(x)=\sqrt{2}\dsum\limits_{j=1}^{\theta }u_{j}\sin \left( j\pi x\right) +\dsum\limits_{j=\theta +1}^{\beta }u_{j}e_{j}(x)\ \ \text{with}\ \ u_{j}=\int_{0}^{1}u(x)e_{j}(x)dx.
\end{equation*}

\subsection{Ultrafunctions and distributions\label{UD}}

First, we will give a definition of the Dirac $\delta $-ultrafunction concentrated at $q.$

\begin{theorem}
Given a point $q\in \Omega ,$ there exists a unique function $\delta _{q}$ in $V_{\mathcal{B}}(\Omega )$ such that
\begin{equation}
\forall v\in V_{\mathcal{B}}(\Omega ),\ \int^{\ast }\delta _{q}(x)v(x)\ dx=v(q).  \label{ddel}
\end{equation}
$\delta _{q}$ will be called the Dirac ultrafunction in $V_{\mathcal{B}}(\Omega )$ concentrated at $q.$ Moreover, we set $\delta =\delta _{0}.$
\end{theorem}

\textbf{Proof.} Let $\left\{ e_{j}\right\} _{j\leq \beta }$ be any orthonormal basis for $V_{\mathcal{B}}\left( \Omega \right) $ and set
\begin{equation*}
\delta _{q}(x)=\dsum\limits_{j=1}^{\beta }e_{j}(q)e_{j}(x).
\end{equation*}
It is easy to check that $\delta _{q}(x)$ has the desired property; in fact
\begin{eqnarray*}
\int^{\ast }\delta _{q}(x)v(x)\ dx &=&\int^{\ast }\dsum\limits_{j=1}^{\beta }e_{j}(q)e_{j}(x)v(x)\ dx \\
&=&\dsum\limits_{j=1}^{\beta }\left( \int^{\ast }e_{j}(x)v(x)\ dx\right) e_{j}(q)=v(q).
\end{eqnarray*}
$\square $

Next let us see how to associate an ultrafunction $T_{\Phi }=\Phi T$ to every distribution $T\in \mathcal{D}^{\prime }.$ Let $\left\{ h_{j}\right\} _{j\in \mathbb{N}}\subset \mathcal{D}$ be an orthonormal Schauder basis for $L^{2}(\Omega )$; then, there exists an infinite number $\theta $ such that $\left\{ h_{j}\right\} _{j\leq \theta }$ is a basis for $V_{\mathcal{B}}(\Omega )\cap \mathcal{D}^{\ast };$ then, $T_{\Phi }(x)$ can be defined as follows:
\begin{equation}
T_{\Phi }(x)=\sum_{j=1}^{\theta }\left\langle T^{\ast },h_{j}\right\rangle h_{j}(x).  \label{expa}
\end{equation}
Notice that this definition is independent of the choice of the basis since
\begin{equation}
\int^{\ast }T_{\Phi }(x)\overline{v(x)}\ dx=\left\langle T^{\ast },v\right\rangle \ \ \text{if}\ \ v\in V_{\mathcal{B}}(\Omega )\cap \mathcal{D}^{\ast },  \label{ddd}
\end{equation}
\begin{equation}
\int^{\ast }T_{\Phi }(x)\overline{v(x)}\ dx=0\ \ \text{if}\ \ v\in \left( V_{\mathcal{B}}(\Omega )\cap \mathcal{D}^{\ast }\right) ^{\perp },  \label{dddd}
\end{equation}
where $\left( V_{\mathcal{B}}(\Omega )\cap \mathcal{D}^{\ast }\right) ^{\perp }$ denotes the orthogonal complement of $V_{\mathcal{B}}(\Omega )\cap \mathcal{D}^{\ast }$ in $V_{\mathcal{B}}(\Omega ).$

\begin{remark}
Here the reader must be careful to distinguish the Dirac ultrafunction as defined by (\ref{ddel}) and the ultrafunction related to the distribution $\delta \in \mathcal{D}^{\prime },$ which now we will call $\delta _{\mathcal{D}}.$ In fact, by (\ref{expa}) we have that
\begin{equation*}
\delta _{\mathcal{D}}(x)=\sum_{j=1}^{\theta }h_{j}(0)h_{j}(x)
\end{equation*}
while
\begin{equation*}
\delta (x)=\sum_{j=1}^{\theta }h_{j}(0)h_{j}(x)+\dsum\limits_{j=\theta +1}^{\beta }e_{j}(0)e_{j}(x)
\end{equation*}
where $\left\{ h_{j}\right\} _{j\leq \theta }\cup \left\{ e_{j}\right\} _{\theta +1\leq j\leq \beta }$ is a regular basis for $V_{\mathcal{B}}(\Omega )$ whose tail $\left\{ e_{j}\right\} _{\theta +1\leq j\leq \beta }$ spans $\left( V_{\mathcal{B}}(\Omega )\cap \mathcal{D}^{\ast }\right) ^{\perp }.$ Of course, if $\varphi \in \mathcal{D}$, we have that
\begin{equation*}
\int^{\ast }\delta (x)\varphi (x)\ dx=\int^{\ast }\delta _{\mathcal{D}}(x)\varphi (x)\ dx=\varphi (0);
\end{equation*}
actually the above equality holds for every $\varphi \in V_{\mathcal{B}}(\Omega )\cap \mathcal{D}^{\ast }$.
\end{remark}

The above remark suggests the following definition:

\begin{definition}
\label{dt}An ultrafunction $e_{q}\in V_{\mathcal{B}}(\Omega )$ is called a $\delta $-type ultrafunction if
\begin{equation*}
\forall \varphi \in \mathcal{D},\mathcal{\ }\int^{\ast }e_{q}(x)\varphi (x)\ dx\sim \varphi (q).
\end{equation*}
\end{definition}

\bigskip

Following the classification of ultravectors and (\ref{ddd}), (\ref{dddd}), the ultrafunctions can be classified as follows:

\begin{definition}
An ultrafunction $u\in V_{\mathcal{B}}(\Omega )$ is called

\begin{itemize}
\item \textbf{standard} if $u\in V(\Omega )$ or, to be more precise, if there exists $f\in V(\Omega )$ such that $u=f^{\ast }$;

\item \textbf{ideal} if it is not standard;

\item a \textbf{dual} ultrafunction if $u=\Phi (T)$ for some $T\in V(\Omega )^{\prime };$

\item a \textbf{distributional ultrafunction} if $u=\Phi (T)$ for some $T\in \mathcal{D}^{\prime };$

\item a \textbf{proper} ultrafunction if it is not a distributional ultrafunction.
\end{itemize}
\end{definition}

\section{The Dirichlet problem}

As a first application of ultrafunctions, we will consider the following Dirichlet problem:
\begin{equation}
\left\{
\begin{array}{cc}
u\in \mathcal{C}^{2}(\overline{\Omega }) &  \\
-\Delta u=f(x) & \text{for}\ \ x\in \Omega \\
u(x)=0 & \text{for\ }x\in \partial \Omega
\end{array}
\right.   \label{1}
\end{equation}
Here $\Omega $ is a bounded set in $\mathbb{R}^{N}.$ This problem is relatively simple and it will help to compare the Sobolev space approach with the ultrafunctions approach.

\subsection{Generalized solutions\label{gs}}

It is well known that problem (\ref{1}) has a unique solution provided that $f(x)$ and $\partial \Omega $ are smooth. If they are not smooth, it is necessary to look for generalized solutions. In the Sobolev space approach, we transform problem (\ref{1}) into the following one:
\begin{equation}
\left\{
\begin{array}{c}
u\in H_{0}^{1}(\Omega ) \\
-\Delta u=f(x)
\end{array}
\right.   \label{2}
\end{equation}
It is well known that this problem has a unique solution for any bounded open set $\Omega $ and for a large class of $f,$ namely for every $f\in H^{-1}(\Omega )$. In this approach, the boundary condition is replaced by the fact that $u\in H_{0}^{1}(\Omega ),$ namely by the fact that $u$ is the limit (in $H^{1}(\Omega )$) of a sequence of functions in $\mathcal{C}^{2}(\Omega )$ having compact support in $\Omega $. The equation $-\Delta u=f$ is required to be satisfied in a weak sense:
\begin{equation*}
-\int_{\Omega }u\Delta \varphi \ dx=\int_{\Omega }f\varphi \ dx\ \ \forall \varphi \in \mathcal{D}(\Omega ).
\end{equation*}
$u$ itself is not a function but an equivalence class of functions defined a.e. in $\Omega .$

\bigskip

Now let us see the ultrafunctions approach. In this case we set $V_{\mathcal{B}}^{2,0}(\Omega )=\mathcal{B}\left[ \mathcal{C}_{0}^{2}(\overline{\Omega })\right] $ and problem (\ref{1}) can be written as follows:
\begin{equation}
\left\{
\begin{array}{cc}
u\in V_{\mathcal{B}}^{2,0}(\Omega ) &  \\
-\Delta _{\Phi }u=f(x) & \text{for}\ \ x\in \Omega ^{\ast }
\end{array}
\right.   \label{3}
\end{equation}
where $\Delta _{\Phi }=\Phi \circ \Delta ^{\ast }:V_{\mathcal{B}}^{2,0}(\Omega )\rightarrow V_{\mathcal{B}}^{2,0}(\Omega )$ is given by Def. \ref{CE}. The following result holds:

\begin{theorem}
\label{gugo}For any $f\in V_{\mathcal{B}}^{2,0}(\Omega ),$ problem (\ref{3}) has a unique solution.
\end{theorem}

\textbf{Proof.} By definition, $V_{\mathcal{B}}^{2,0}(\Omega )$ is the $\Lambda $-limit of finite dimensional spaces $V_{\lambda }(\Omega )\subset \mathcal{C}_{0}^{2}(\overline{\Omega })$. For every $u\in \mathcal{C}_{0}^{1}(\overline{\Omega }),$ by the Poincar\'{e} inequality, we have that
\begin{equation*}
\int_{\Omega }\nabla u\cdot \nabla u\ dx\geq k\left\Vert u\right\Vert _{L^{2}(\Omega )}^{2}.
\end{equation*}
In particular, the above inequality holds for any $u\in V_{\lambda }(\Omega )$. Now, let
\begin{equation*}
\Phi _{\lambda }:L^{2}(\Omega )\rightarrow V_{\lambda }(\Omega )
\end{equation*}
be the orthogonal projection. For every $u,v\in V_{\lambda }(\Omega ),$ we have that
\begin{equation*}
\int_{\Omega }\nabla u\cdot \nabla v\ dx=\int_{\Omega }-\Delta u\ v\ dx.
\end{equation*}
Then, by the Poincar\'{e} inequality,
\begin{equation*}
-\Phi _{\lambda }\Delta :V_{\lambda }(\Omega )\rightarrow V_{\lambda }(\Omega )
\end{equation*}
is a positive definite symmetric operator; hence it is invertible.
So we have that, for any $\lambda \in \Lambda ,$ there exists a unique $\bar{u}_{\lambda }\in V_{\lambda }(\Omega )$ such that
\begin{equation}
\forall v\in V_{\lambda }(\Omega ),\ \int_{\Omega }-\Delta \bar{u}_{\lambda }\ v\ dx=\int_{\Omega }f_{\lambda }v\ dx  \label{robina}
\end{equation}
where $f_{\lambda }\in V_{\lambda }(\Omega )$ is such that $f=\ \underset{\lambda \uparrow V}{\lim }\ f_{\lambda }.$ If we take the $\Lambda $-limit in this equality, we get
\begin{equation}
\forall v\in V_{\mathcal{B}}^{2,0}(\Omega ),\ -\int_{\Omega }^{\ast }\Delta ^{\ast }\bar{u}\ v\ dx=\int_{\Omega }^{\ast }fv\ dx  \label{roba}
\end{equation}
where
\begin{equation*}
\bar{u}=\ \underset{\lambda \uparrow V}{\lim }\ \bar{u}_{\lambda }
\end{equation*}
and hence, by (\ref{belloccia}), we get
\begin{equation*}
-\Delta _{\Phi }\bar{u}=f.
\end{equation*}
The uniqueness follows from the uniqueness of $\bar{u}_{\lambda }$. $\square $

\begin{remark}
This example shows quite well the general strategy to solve problems within the framework of ultrafunctions. First you solve a finite dimensional problem and then you take the $\Lambda $-limit. Since the $\Lambda $-limit exists for any sequence of mathematical objects, the solvability of the finite dimensional approximations implies the existence of a generalized solution.
\end{remark}

The solution is a function $\bar{u}:\overline{\Omega }^{\ast }\rightarrow \mathbb{R}^{\ast };$ $\bar{u}$ is defined for every $x\in \overline{\Omega }^{\ast },$ and we have that $\bar{u}(x)=0$ for $x\in \partial \Omega ^{\ast }.$ So the boundary condition can be interpreted "classically", while this is not possible in $H_{0}^{1}(\Omega )$. If problem (\ref{1}) has a solution $U\in \mathcal{C}^{2}(\overline{\Omega }),$ then
\begin{equation*}
\bar{u}=U^{\ast }.
\end{equation*}
If problem (\ref{2}) has a solution $U\in H_{0}^{1}(\Omega ),$ then we have that
\begin{equation*}
\int_{\Omega }U\varphi \ dx\sim \int_{\Omega }^{\ast }\bar{u}\varphi \ dx\ \ \forall \varphi \in \mathcal{C}_{0}^{2}(\overline{\Omega }).
\end{equation*}
Notice that in the above formula the left hand side integral is a Lebesgue integral, while in the right hand side $\int^{\ast }$ is the $\ast $-transform of the Riemann integral; the integral makes sense since $\bar{u},\varphi \in \left[ \mathcal{C}_{0}(\overline{\Omega })\right] ^{\ast }$. In the theory of ultrafunctions, the Lebesgue integral seems not to be necessary.

There are interesting and physically relevant cases in which the generalization of the Dirichlet problem cannot be treated within the Sobolev space $H_{0}^{1}(\Omega ).$ For example, consider the problem
\begin{equation}
\left\{
\begin{array}{cc}
-\Delta u=\delta _{y} & \text{for}\ \ x\in \Omega \\
u(x)=0 & \text{for\ }x\in \partial \Omega
\end{array}
\right.   \label{4}
\end{equation}
where $\delta _{y}$ is the Dirac measure concentrated at $y\in \Omega $. This problem is quite natural in potential theory; in fact $u$ represents the potential generated by a point source (and usually it is called the Green function). However, this problem does not have a solution in $H_{0}^{1}(\Omega )$ since $\delta _{y}\notin H^{-1}(\Omega ).$ Actually, with some work, it is possible to prove that it has a "generalized solution" in $H_{0}^{1}(\Omega )+\mathcal{E}^{\prime }(\Omega ).$ However, in the framework of ultrafunctions, problem (\ref{4}) is nothing else but a particular case of problem (\ref{3}).
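The strategy behind Th. \ref{gugo} is easy to reproduce numerically. The following sketch (which is only our illustration, not part of the theory: the sine basis, the truncation dimension $n$, the point $y$ and all variable names are choices made for this example) solves the finite dimensional problems (\ref{robina}) on $V_{\lambda }=Sp\left( e_{1},...,e_{n}\right) $, with $e_{j}(x)=\sqrt{2}\sin (j\pi x)$, for the datum $f=\delta _{y}$ of problem (\ref{4}) on $\Omega =(0,1)$; instead of taking the $\Lambda $-limit, one simply observes how the finite dimensional solutions behave as $n$ grows.

\begin{verbatim}
import numpy as np

# Finite-dimensional step of the proof above for -u'' = delta_y on (0,1):
# V_lambda = span of the first n modes e_j(x) = sqrt(2) sin(j pi x).
# On this basis the stiffness matrix of -Delta is diagonal: (j pi)^2.
def galerkin_solution(n, y, x):
    j = np.arange(1, n + 1)
    e = lambda t: np.sqrt(2) * np.sin(np.outer(np.atleast_1d(t), j) * np.pi)
    u_j = e(y)[0] / (j * np.pi) ** 2   # coefficients: <delta_y, e_j>/(j pi)^2
    return e(x) @ u_j                  # u_lambda evaluated at the points x

y = 0.3
x = np.linspace(0.0, 1.0, 201)
green = np.where(x <= y, x * (1 - y), y * (1 - x))  # classical Green function
for n in (10, 100, 1000):
    print(n, np.max(np.abs(galerkin_solution(n, y, x) - green)))
\end{verbatim}

As $n$ grows the error decreases: the finite dimensional solutions converge uniformly to the classical Green function, which is the classical counterpart of the generalized solution $\bar{u}$.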
However, if $f\in V_{\mathcal{B}}^{2,0}(\Omega )$ is a proper ultrafunction (namely, $f$ cannot be associated to a distribution via (\ref{ddd}) and (\ref{dddd})), then problem (\ref{3}) has a solution which cannot be interpreted as a distributional solution. For example, you can take $f=\delta (x)^{2}.$ Remember that $\delta (x)^{2},$ in the ultrafunction theory, makes sense by Def. \ref{CE}.

\begin{remark}
If you take $f=\delta ^{2}$ you get a well posed mathematical problem but, most likely, it does not represent any "physically" relevant phenomenon. However, it is possible to choose some proper ultrafunction $f\in V_{\mathcal{B}}^{2,0}(\Omega )$ which models physical phenomena. For example,
\begin{equation*}
f(x)=\sin \left( \alpha \,\mathbf{n}\cdot x\right) ;\ \ \mathbf{n}\in \mathbb{R}^{N},\ \left\vert \mathbf{n}\right\vert =1,\ \alpha \in \mathbb{R}^{\ast }\ \text{infinite,}\ x\in K^{\ast },\ K\subset \subset \Omega
\end{equation*}
might represent an electrostatic problem in a sort of periodic medium such as a crystal. Here $K$ represents the support of the crystal and $f(x)$ represents its charge density; it consists of periodic layers of positive and negative charges at a distance of $\frac{1}{\pi \alpha }.$ From a macroscopic point of view the solution is $0$, but at the microscopic level this is not the case. In fact the solution $u$ of problem (\ref{3}) does not vanish, even if it can be proved that
\begin{equation*}
\forall v\in \mathcal{C}^{2}(\overline{\Omega }),\ \int_{\Omega }^{\ast }\bar{u}\ v\ dx\sim 0.
\end{equation*}
\end{remark}

\subsection{The variational approach}

Looking at problem (\ref{1}) from a variational point of view, the comparison between the Sobolev space approach and the ultrafunctions approach becomes richer.

\bigskip

It is well known that equation (\ref{1}) is the Euler-Lagrange equation of the energy functional
\begin{equation*}
J(u)=\int_{\Omega }\left( \frac{1}{2}\left\vert \nabla u\right\vert ^{2}-fu\right) \ dx.
\end{equation*}
Thus a minimizer of $J(u)$ on $\mathcal{C}_{0}^{2}(\overline{\Omega })$ solves the problem. However, if $f(x)$ and $\partial \Omega $ are not smooth, a minimizing sequence need not converge in $\mathcal{C}_{0}^{2}(\overline{\Omega })$, and even when it converges, this can be proved only by hard estimates.
On the other hand, if you define $H_{0}^{1}(\Omega )$ as the closure of $\mathcal{D}(\Omega )$ with respect to the norm
\begin{equation*}
\left\Vert u\right\Vert _{H_{0}^{1}}=\sqrt{\int_{\Omega }\left\vert \nabla u\right\vert ^{2}dx},
\end{equation*}
the functional $J(u)$ becomes $\frac{1}{2}\left\Vert u\right\Vert _{H_{0}^{1}}^{2}-\int_{\Omega }fu\ dx$ and it is immediate to see that it has a minimizer provided that $f\in H^{-1}(\Omega ).$

If you consider problem (\ref{4}), the trouble with the energy functional is that the energy
\begin{equation*}
J(u)=\int_{\Omega }\frac{1}{2}\left\vert \nabla u\right\vert ^{2}dx-u(y),\ \ u\in \mathcal{C}_{0}^{2}(\overline{\Omega })
\end{equation*}
is not bounded below and $J$ cannot be extended to all of $H_{0}^{1}(\Omega ).$ Instead, if we use the ultrafunctions approach, the energy
\begin{equation*}
J(u)=\int_{\Omega ^{\ast }}^{\ast }\left( \frac{1}{2}\left\vert \nabla u\right\vert ^{2}-\delta _{y}u\right) dx,\ \ u\in V_{\mathcal{B}}^{2,0}(\Omega )
\end{equation*}
is well defined and it makes sense to look for a minimizer in $V_{\mathcal{B}}^{2,0}(\Omega ).$ For every $\lambda \in \Lambda \cap \mathcal{P}\left( \mathcal{C}_{0}^{2}(\overline{\Omega })\right) ,$ $J(u)$ has a minimizer $u_{\lambda }$ in $V_{\lambda }(\Omega )\subset \mathcal{C}_{0}^{2}(\overline{\Omega })$, and hence, if you set
\begin{equation*}
\bar{u}=\ \underset{\lambda \uparrow V}{\lim }\ u_{\lambda },
\end{equation*}
we have that $\bar{u}$ minimizes $J(u)$ in $V_{\mathcal{B}}^{2,0}(\Omega )$ and
\begin{equation*}
J(\bar{u})=\ \underset{\lambda \uparrow V}{\lim }\left[ \int_{\Omega }\frac{1}{2}\left\vert \nabla u_{\lambda }\right\vert ^{2}\ dx-u_{\lambda }(y)\right] .
\end{equation*}
Clearly, for some values of $u$, $J(u)$ may assume infinite values in $\mathbb{R}^{\ast }$, but this is not a problem; actually, in our opinion, this is one of the main reasons legitimating the use of non-Archimedean fields. In fact, in the framework of non-Archimedean mathematics (NAM), it is possible to make models of the physical world in which there are material points with a finite charge. They have an "infinite" energy but, nevertheless, we can make computations and, if necessary, evaluate it. The epistemological (and very interesting) issue relative to the meaning of their "physical existence" should not prevent their use.

\section{The bubbling phenomenon relative to the Sobolev critical exponent}

The bubbling phenomenon relative to the critical Sobolev exponent is the model problem which has inspired this work. In general (at least in the simplest cases), the bubbling phenomenon consists in minimizing sequences whose \textit{mass concentrates at some points}; however, their "limit" does not exist in any Sobolev space, and not even in any distribution space, due to the "strong" non-linearity of the problem. Nevertheless, these problems have been extensively studied and we know a lot of facts relative to the minimizing sequences (or, more generally, to non-converging Palais-Smale sequences) which, up to an equivalence relation, are called \textit{critical points at infinity} (see \cite{bahri}). The literature on this topic is huge (you can find part of it in \cite{Cha}). We refer also to \cite{bahri}, \cite{BN} and \cite{Cha} for an exposition of the utility of knowing the properties of the critical points at infinity. Ultrafunction theory seems to be an appropriate tool to deal with this kind of problems. A concrete instance of the concentration mechanism just described is sketched in the numerical example below.
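The following computation (a numerical illustration of ours, not part of the theory: the dimension $N=3$, the profile $U$ and all names are choices made for this example) shows the typical bubbling mechanism: rescaling a fixed profile keeps both the Dirichlet energy $\int \left\vert \nabla u\right\vert ^{2}$ and the critical mass $\int \left\vert u\right\vert ^{2^{\ast }}$ unchanged, while the rescaled functions concentrate at a point.

\begin{verbatim}
import numpy as np

# Bubbling in R^3 (so 2* = 6): u_eps(x) = eps^(-1/2) U(|x|/eps) with the
# Talenti-type profile U(r) = (1+r^2)^(-1/2) (up to a constant factor).
# Both invariants are computed as radial integrals with weight 4 pi r^2.
def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

r = np.logspace(-6, 4, 200_001)          # radial grid covering all scales
area = 4 * np.pi * r ** 2

for eps in (1.0, 0.1, 0.01):
    u = eps ** (-0.5) / np.sqrt(1 + (r / eps) ** 2)
    du = np.gradient(u, r)               # radial derivative du/dr
    print(eps,
          trapz(du ** 2 * area, r),      # Dirichlet energy: scale invariant
          trapz(u ** 6 * area, r),       # critical mass:    scale invariant
          u[0])                          # sup norm: blows up like eps^(-1/2)
\end{verbatim}

The two integrals stay (numerically) constant while $\sup u_{\varepsilon }\rightarrow \infty $: the mass $\left\vert u_{\varepsilon }\right\vert ^{2^{\ast }}$ concentrates at the origin, which is exactly the behaviour of the minimizing sequences described above.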
\subsection{Description of the problem}

Let us consider the following minimization problem:
\begin{equation*}
\underset{u\in \mathfrak{M}_{p}}{\min }\ J(u)
\end{equation*}
where
\begin{equation*}
J(u)=\int_{\Omega }\left\vert \nabla u\right\vert ^{2}\ dx
\end{equation*}
and
\begin{equation*}
\mathfrak{M}_{p}=\left\{ u\in \mathcal{C}_{0}^{2}(\overline{\Omega }):\ \int_{\Omega }\left\vert u\right\vert ^{p}\ dx=1\right\} .
\end{equation*}
Here $\Omega $ is a bounded set in $\mathbb{R}^{N}$ with smooth boundary, $N\geq 3$ and $p>2$. If $J$ has a minimizer, it is a solution of the following elliptic eigenvalue problem:
\begin{equation}
\left\{
\begin{array}{cc}
u\in \mathcal{C}_{0}^{2}(\overline{\Omega }) &  \\
-\Delta u=\lambda u^{p-1} & \text{for}\ \ x\in \Omega \\
u(x)>0 & \text{for}\ \ x\in \Omega \\
\int_{\Omega }\left\vert u\right\vert ^{p}dx=1 &
\end{array}
\right.   \label{YAM}
\end{equation}
As usual in the literature, we set
\begin{equation*}
2^{\ast }=\frac{2N}{N-2};
\end{equation*}
$2^{\ast }$ is called the critical Sobolev exponent for problem (\ref{YAM}) (notice that this "$\ast $" has nothing to do with the natural extension). Moreover, we set
\begin{equation*}
m_{p}:=\ \underset{u\in \mathfrak{M}_{p}}{\inf }\ J(u).
\end{equation*}
The following facts are well known (see e.g. \cite{Cha} and references):

\begin{itemize}
\item (i) if $2<p<2^{\ast },$ then $m_{p}>0$ and it is achieved; hence problem (\ref{YAM}) has a solution.

\item (ii) if $p=2^{\ast },$ then $m_{2^{\ast }}>0$ and it is achieved only if $\Omega =\mathbb{R}^{N};$ however, there are particular domains $\Omega $ such that (\ref{YAM}) has a solution (which, of course, is not a minimizer of $J,$ but a critical point).

\item (iii) if $p>2^{\ast },$ then $m_{p}=0$ and it is not achieved.
\end{itemize}

Probably, the most interesting case is the second one (the critical exponent case) since it presents many interesting phenomena. If $u_{n}$ is a minimizing sequence, it has a subsequence $u_{n}^{\prime }$ which concentrates at some point $x_{0}\in \overline{\Omega };$ more exactly, $u_{n}^{\prime }\rightharpoonup 0$ weakly in $H_{0}^{1}(\Omega )$ and strongly in $H_{0}^{1}(\Omega \setminus B_{\varepsilon }(x_{0}));$ consequently, $\left\vert u_{n}^{\prime }\right\vert ^{p}\rightharpoonup \delta _{x_{0}}$ weakly in $\mathcal{D}^{\prime }(\Omega ),$ but $\left( \delta _{x_{0}}\right) ^{1/p}$ cannot be interpreted as a generalized solution in the framework of the distribution theory, just because $\left( \delta _{x_{0}}\right) ^{1/p}$ makes no sense. This phenomenon is called "bubbling", and probably problem (\ref{YAM}) with $p=2^{\ast }$ is the simplest problem which presents it. Similar phenomena occur in many other variational problems such as the Yamabe problem, the Kazdan-Warner problem, the study of harmonic maps between manifolds, minimal surface theory, the Yang-Mills equations, etc.

Let us go back to discuss the concentration phenomenon of a minimizing sequence. Not all the points of $\overline{\Omega }$ have the same "dignity" as concentration points. Let us explain what we mean. Let
\begin{equation}
u_{p},\ p\in (2,2^{\ast }),  \label{grulla}
\end{equation}
be a minimizer of $J(u)$ on the set $\mathfrak{M}_{p}$.
If $p\rightarrow 2^{\ast }$ from the left, it is well known that
\begin{equation*}
\underset{p\rightarrow \left( 2^{\ast }\right) ^{-}}{\lim }m_{p}=m_{2^{\ast }}
\end{equation*}
and that
\begin{equation}
v_{p}:=\frac{u_{p}}{\left( \int_{\Omega }\left\vert u_{p}\right\vert ^{2^{\ast }}dx\right) ^{1/2^{\ast }}}  \label{lilla}
\end{equation}
is a minimizing sequence of $J$ on $\mathfrak{M}_{2^{\ast }}.$ If, for every $u\in \mathfrak{M}_{2^{\ast }},$ we set
\begin{equation*}
\mathfrak{B}\left( u\right) =\int_{\Omega }x\left\vert u\right\vert ^{2^{\ast }}dx,
\end{equation*}
then we have that, in the generic case,
\begin{equation*}
\underset{p\rightarrow \left( 2^{\ast }\right) ^{-}}{\lim }\mathfrak{B}\left( v_{p}\right) =\overline{x}
\end{equation*}
where $\overline{x}$ is an interior point of $\Omega $. Thus, in this sense, $\overline{x}$ is a "special" concentration point. If we apply ultrafunction theory, the word "special" will get a new meaning; in fact $\overline{x}$ will be characterized as the point infinitely close to the concentration point of the generalized solution. This issue will be further discussed in the next section.

\bigskip

\subsection{Generalized solutions}

\bigskip

The minimization problem considered in the previous section can be studied in the framework of the ultrafunctions. In this framework the problem takes the following form:
\begin{equation}
\underset{u\in \widetilde{\mathfrak{M}}_{p}}{\min }\ J(u)  \label{*}
\end{equation}
where
\begin{equation*}
J(u)=\int_{\Omega }^{\ast }\left\vert \nabla u\right\vert ^{2}\ dx
\end{equation*}
and
\begin{equation*}
\widetilde{\mathfrak{M}}_{p}=\left\{ u\in V_{\mathcal{B}}^{2,0}(\Omega )\ |\ \int_{\Omega }^{\ast }\left\vert u\right\vert ^{p}\ dx=1\right\}
\end{equation*}
where $V_{\mathcal{B}}^{2,0}(\Omega )=\mathcal{B}\left[ \mathcal{C}_{0}^{2}(\overline{\Omega })\right] .$

\begin{theorem}
For every $p>2,$ problem (\ref{*}) has a solution $\tilde{u}_{p}.$ If we set $\widetilde{m}_{p}=J(\tilde{u}_{p}),$ we have the following:

\begin{itemize}
\item (i) if $2<p<2^{\ast },$ then $\widetilde{m}_{p}=m_{p}\in \mathbb{R}^{+}$ and there is at least one standard minimizer $\tilde{u}_{p}$, namely $\tilde{u}_{p}\in \mathcal{C}_{0}^{2}(\overline{\Omega });$

\item (ii) if $p=2^{\ast }$ (and $\Omega \neq \mathbb{R}^{N}$), then $\widetilde{m}_{2^{\ast }}=m_{2^{\ast }}+\varepsilon $ where $\varepsilon $ is a positive infinitesimal;

\item (iii) if $p>2^{\ast },$ then $\widetilde{m}_{p}=\varepsilon _{p}$ where $\varepsilon _{p}$ is a positive infinitesimal.
\end{itemize}
\end{theorem}

\bigskip

\textbf{Proof.} The proof of this theorem is a simple application of the nonstandard methods. We will describe it in some detail for the reader not acquainted with these methods. We set
\begin{equation*}
\tilde{u}_{p}=\ \underset{\lambda \uparrow \mathcal{C}_{0}^{2}(\overline{\Omega })}{\lim }\ u_{p,\lambda }
\end{equation*}
where $u_{p,\lambda }$ is a minimizer of $J(u)$ on the set $\mathfrak{M}_{p}\cap V_{\lambda }(\overline{\Omega })$; here $V_{\lambda }(\overline{\Omega })=Sp(\lambda )\subset \mathcal{C}_{0}^{2}(\overline{\Omega })$. We recall that $\mathfrak{M}_{p}\cap V_{\lambda }(\overline{\Omega })\neq \varnothing $ for $\lambda $ in a qualified set, and that the minimum exists since $V_{\lambda }(\overline{\Omega })$ is a finite dimensional vector space and hence $\mathfrak{M}_{p}\cap V_{\lambda }(\overline{\Omega })$ is compact.
If we set
\begin{equation*}
m_{p,\lambda }:=\underset{u\in \mathfrak{M}_{p}\cap V_{\lambda }(\overline{\Omega })}{\min }\ J(u),
\end{equation*}
taking the $\Lambda $-limit, we have that
\begin{equation*}
\widetilde{m}_{p}:=\underset{\lambda \uparrow \mathcal{C}_{0}^{2}(\overline{\Omega })}{\lim }\ m_{p,\lambda }=\ \underset{u\in \widetilde{\mathfrak{M}}_{p}}{\min }\ J(u).
\end{equation*}
So the existence result is proved. Now let us prove the second part of the theorem:

(i) If you take $\lambda _{0}=\left\{ u_{p}\right\} ,$ where $u_{p}$ is given by (\ref{grulla}), then for every $\lambda \supseteq \lambda _{0}$ we have that
\begin{equation*}
m_{p,\lambda }:=\underset{u\in \mathfrak{M}_{p}\cap V_{\lambda }(\overline{\Omega })}{\min }J(u)=J(u_{p})=m_{p}
\end{equation*}
and hence, taking the $\Lambda $-limit, we have that $\widetilde{m}_{p}=m_{p}$.

(ii) It is well known that the value $m_{2^{\ast }}$ is not achieved by any function $u\in \mathfrak{M}_{2^{\ast }}\cap V_{\lambda }(\overline{\Omega });$ then $m_{2^{\ast },\lambda }>m_{2^{\ast }},$ and hence, taking the $\Lambda $-limit, we have that $\widetilde{m}_{2^{\ast }}>m_{2^{\ast }}.$ On the other hand, for every $b\in \mathbb{R}^{+}$ there exists $u\in \mathfrak{M}_{2^{\ast }}$ such that $J(u)\leq m_{2^{\ast }}+b,$ and hence
\begin{equation*}
\widetilde{m}_{2^{\ast }}=J(\tilde{u}_{2^{\ast }})\leq J(u)\leq m_{2^{\ast }}+b;
\end{equation*}
so, by the arbitrariness of $b,$ we get that $\widetilde{m}_{2^{\ast }}\sim m_{2^{\ast }}.$

(iii) follows by the same argument used in (ii), replacing $m_{2^{\ast }}$ with $0.$

\bigskip

$\square $

The next theorem shows that, for $p=2^{\ast },$ the solution $\tilde{u}$ concentrates where it is expected to.

\bigskip

\begin{theorem}
Suppose that problem (\ref{*}) (with $p=2^{\ast }$) has a unique minimum $\tilde{u}$ and set
\begin{equation*}
\xi =\mathfrak{B}^{\ast }\left( \tilde{u}\right) :=\int_{\Omega }^{\ast }x\left\vert \tilde{u}\right\vert ^{2^{\ast }}dx\in \Omega ^{\ast }.
\end{equation*}
Then
\begin{equation*}
\xi \sim \underset{p\rightarrow \left( 2^{\ast }\right) ^{-}}{\lim }\mathfrak{B}\left( v_{p}\right)
\end{equation*}
where $v_{p}$ is defined by (\ref{lilla}).
\end{theorem}

\textbf{Proof.} Fix $r\in \mathbb{R}^{+}$. We want to prove that, for $p$ sufficiently close to $2^{\ast },$ we have that
\begin{equation*}
d^{\ast }(\mathfrak{B}\left( v_{p}\right) ,\xi )\leq r
\end{equation*}
where $d^{\ast }$ denotes the distance in $\left( \mathbb{R}^{N}\right) ^{\ast }$. We have that
\begin{equation}
\xi =\ \underset{\lambda \uparrow \mathcal{C}_{0}^{2}(\overline{\Omega })}{\lim }\ x_{\lambda }  \label{lulu}
\end{equation}
where $x_{\lambda }=\mathfrak{B}\left( u_{\lambda }\right) $ and $u_{\lambda }$ is a minimizer of $J$ on the manifold $\mathfrak{M}_{2^{\ast }}\cap V_{\lambda }$. Let $\tilde{u}$ be the minimum of $J$ on $\widetilde{\mathfrak{M}}_{2^{\ast }}$, and apply Th.\ \ref{billo} to the relation $\mathcal{R}$ defined as follows:
\begin{equation*}
u_{\lambda }\,\mathcal{R}\left( \mathfrak{M}_{2^{\ast }}\cap V_{\lambda }(\Omega )\right)
\end{equation*}
if and only if
\begin{equation*}
u_{\lambda }\text{\ is the unique minimum of }J\text{\ on }\mathfrak{M}_{2^{\ast }}\cap V_{\lambda }(\Omega ).
\end{equation*}
Then, by Th.\ \ref{billo}, there exists a qualified set $Q\subset \Lambda (V)$ such that, for every $\lambda \in Q,$ $u_{\lambda }$ is the unique minimum of $J$ on $\mathfrak{M}_{2^{\ast }}\cap V_{\lambda }(\Omega ).$ Thus $\exists b\in \mathbb{R}^{+},$ $\exists \lambda _{0},\ \forall \lambda \supseteq \lambda _{0},\ \lambda \in Q,\ \forall u\in \mathfrak{M}_{2^{\ast }}\cap V_{\lambda },$
\begin{equation*}
J(u)<m_{2^{\ast }}+b\Rightarrow d^{\ast }(\mathfrak{B}\left( u\right) ,x_{\lambda })\leq \frac{r}{2}
\end{equation*}
and hence, possibly taking a larger $\lambda _{0}$ and using (\ref{lulu}), we get
\begin{equation}
J(u)<m_{2^{\ast }}+b\Rightarrow d^{\ast }(\mathfrak{B}\left( u\right) ,\xi )\leq r.  \label{lella2}
\end{equation}
Now, let $v_{p}$ be the function defined by (\ref{lilla}); it is well known that
\begin{equation*}
\underset{p\rightarrow \left( 2^{\ast }\right) ^{-}}{\lim }\ J(v_{p})=m_{2^{\ast }}.
\end{equation*}
Then we can take $p$ so close to $2^{\ast }$ that
\begin{equation*}
J(v_{p})\leq m_{2^{\ast }}+b.
\end{equation*}
Since $v_{p}\in \mathfrak{M}_{2^{\ast }}\cap V_{\lambda }$ for every $\lambda \supseteq \lambda _{0}\cup \left\{ v_{p}\right\} ,$ $\lambda \in Q$, by (\ref{lella2}) we get that
\begin{equation*}
d^{\ast }(\mathfrak{B}\left( v_{p}\right) ,\xi )\leq r.
\end{equation*}
$\square $

\bigskip

\begin{remark}
If $J$ does not have a unique minimum but a set of minimizers, we set
\begin{equation*}
\Gamma =\left\{ \xi \in \Omega ^{\ast }:\ \xi =\mathfrak{B}^{\ast }\left( \tilde{u}\right) \ \text{where }\tilde{u}\ \text{is a minimizer}\right\} .
\end{equation*}
Then, arguing as in the proof of the above theorem, it is easy to get the following result: let $p_{n}\rightarrow (2^{\ast })^{-}$, let $x_{n}=\mathfrak{B}\left( v_{p_{n}}\right) $ and let $x_{n}^{\prime }$ be a converging subsequence of $x_{n}$. Then there exists $\xi \in \Gamma $ such that
\begin{equation*}
\xi \sim \ \underset{n\rightarrow \infty }{\lim }x_{n}^{\prime }.
\end{equation*}
\end{remark}

\section{Ultrafunctions and Quantum Mechanics}

In this section we will describe an application of the previous theory to the formalism of Quantum Mechanics. In the usual formalism, a physical state is described by a unit vector $\psi $ in a Hilbert space $\mathcal{H}$ and an observable by a self-adjoint operator defined on it. In the ultravectors/ultrafunctions formalism, a physical state is described by a unit vector $\psi $ in a hyperfinite space of ultravectors $V_{\mathcal{B}}$ and an observable by a Hermitian operator defined on it. We think that the ultravectors approach presents the following advantages:

\begin{itemize}
\item once you have learned the basic facts of the $\Lambda $-theory, the formalism which you get is easier to handle, since it is based on matrix theory on finite dimensional vector spaces rather than on unbounded self-adjoint operators in Hilbert spaces;

\item this approach is closer to the "infinite" matrix approach of the beginning of QM, before the work of von Neumann, and also closer to the way of thinking of theoretical physicists and chemists;

\item all observables (hyperfinite matrices) have infinitely many eigenvectors, so the continuous spectrum can be considered as a set of eigenvalues infinitely close to each other;

\item the distinction between standard and ideal ultravectors has a physical meaning;

\item the dynamics does not present any difficulty since it is given by the exponential matrix relative to the Hamiltonian matrix.
\end{itemize}

Clearly it is too early to know whether this formalism will lead to some new physically relevant fact; in any case, we think that it is worthwhile to investigate it. In this paper we limit ourselves only to some very general remarks.

\subsection{The axioms of Quantum Mechanics}

We start by giving a list of the main axioms of quantum mechanics as they are usually given in textbooks, and then we will compare it with the alternative formalism based on ultravectors.

\bigskip

{\Large Classical axioms of QM}

\bigskip

\textbf{Axiom C1}. A physical state is described by a unit vector $\psi $ in a Hilbert space $\mathcal{H}$.

\bigskip

\textbf{Axiom C2.} An observable is represented by a self-adjoint operator $A$ on $\mathcal{H}$.

(a) The set of observable outcomes is given by the eigenvalues $\mu _{j}$ of $A$.

(b) After an observation/measurement of an outcome $\mu _{j}$, the system is left in an eigenstate $\psi _{j}$ associated with the detected eigenvalue $\mu _{j}$.

(c) In a measurement, the transition probability $\mathcal{P}$ from a state $\psi $ to an eigenstate $\psi _{j}$ is given by
\begin{equation*}
\mathcal{P}=\left\vert \left( \psi ,\psi _{j}\right) \right\vert ^{2}.
\end{equation*}

\textbf{Axiom C3}. The evolution of a state is given by the Schr\"{o}dinger equation
\begin{equation*}
i\frac{\partial \psi }{\partial t}=H\psi
\end{equation*}
where $H,$ the Hamiltonian operator, is a self-adjoint operator representing the energy of the system.

\bigskip

{\Large Axioms of QM based on ultravectors}

\bigskip

\textbf{Axiom U1}. A physical system is described by a complex ultravector space $V_{\mathcal{B}}=\mathcal{B}\left[ V\right] ;$ a state of this system is described by a unit ultravector $\psi $ in $V_{\mathcal{B}}$.

\bigskip

\textbf{Axiom U2.} An observable is represented by a Hermitian operator $A$ on $V_{\mathcal{B}}$.

(a) The set of observable outcomes is given by $sh\left( \mu _{j}\right) $ where $\mu _{j}$ is an eigenvalue of $A$.

(b) After an observation/measurement of an outcome $sh\left( \mu _{j}\right) $, the system is left in an eigenstate $\psi _{j}$ associated with the detected eigenvalue $\mu _{j}$.

(c) In a measurement, the transition probability $\mathcal{P}$ from a state $\psi $ to an eigenstate $\psi _{j}$ is given by
\begin{equation*}
\mathcal{P}=\left\vert \left( \psi ,\psi _{j}\right) \right\vert ^{2}.
\end{equation*}

\textbf{Axiom U3}. The evolution of the state of a system is given by the Schr\"{o}dinger equation
\begin{equation}
i\frac{\partial \psi }{\partial t}=H\psi   \label{sh}
\end{equation}
where $H,$ the Hamiltonian operator, is a Hermitian operator representing the energy of the system.

\bigskip

\textbf{Axiom U4}. Only the physical states represented by standard vectors (namely vectors in $V$) can be produced in a laboratory.

\subsection{Discussion of the axioms}

AXIOM 1. In the classical formalism, a physical system is not described only by a given Hilbert space, as axiom C1 claims, but by a Hilbert space and the domain of a self-adjoint realization of the Hamiltonian operator. On the contrary, in the ultravectors formalism the physical system is described just by the space $V_{\mathcal{B}}$. Let us see an example:

\textbf{A particle in a box.}
For simplicity, we consider a one-dimensional model and suppose that the box is modelled by the interval $\left[ 0,1\right] .$ Clearly, the Hilbert space $L^{2}\left( 0,1\right) $ is not sufficient to describe the system: it is necessary to give the Hamiltonian
\begin{equation*}
H:H^{2}\left( 0,1\right) \cap H_{0}^{1}\left( 0,1\right) \rightarrow L^{2}\left( 0,1\right)
\end{equation*}
defined by
\begin{equation}
H\psi =-\frac{1}{2m}\Delta \psi   \label{hhh}
\end{equation}
where $\Delta \psi $ must be intended in the sense of distributions (here $m$ denotes the mass of the particle and we have assumed $\hslash =1$).

\textbf{A particle in a ring. }Now suppose that a point-particle is constrained to a ring of length $1$. Also in this case any state can be represented by a vector in the Hilbert space $L^{2}\left( 0,1\right) ,$ but in order to describe the system it is necessary to give a different self-adjoint realization of the Hamiltonian operator, namely an operator having the form (\ref{hhh}), but defined on the domain
\begin{equation*}
H:H_{per}^{2}\left( 0,1\right) \rightarrow L^{2}\left( 0,1\right)
\end{equation*}
where $H_{per}^{2}\left( 0,1\right) $ is the closure in the $H^{2}$ norm of the space
\begin{equation*}
\mathcal{C}_{per}^{2}\left[ 0,1\right] =\left\{ \psi \in \mathcal{C}^{2}\left( \left[ 0,1\right] ,\mathbb{C}\right) \ |\ \psi (0)=\psi (1);\ \psi ^{\prime }(0)=\psi ^{\prime }(1)\right\} .
\end{equation*}

\bigskip

Now let us see how these two cases can be described in the ultrafunctions formalism.

\bigskip

\textbf{A particle in a box. }In this case, the system is described by the space
\begin{equation*}
V_{\mathcal{B}}^{2,0}\left[ 0,1\right] :=\mathcal{B}\left[ \mathcal{C}_{0}^{2}\left[ 0,1\right] \right] .
\end{equation*}
The Hamiltonian operator $H$ is given by the canonical extension of $-\frac{1}{2m}\Delta $ to $\mathcal{B}\left[ \mathcal{C}_{0}^{2}\left[ 0,1\right] \right] $.

\textbf{A particle in a ring. }In this case, the system is described by the space
\begin{equation*}
V_{\mathcal{B}}^{2,per}\left[ 0,1\right] :=\mathcal{B}\left[ \mathcal{C}_{per}^{2}\left[ 0,1\right] \right]
\end{equation*}
and the Hamiltonian operator $H$ is given by the canonical extension of $-\frac{1}{2m}\Delta $ to $\mathcal{B}\left[ \mathcal{C}_{per}^{2}\left[ 0,1\right] \right] $.

\bigskip

Thus, in the ultrafunctions description, different physical systems give different ultrafunction spaces; on the contrary, the Hamiltonian is given by the unique canonical extension of $-\frac{1}{2m}\Delta $ in the respective spaces.

\bigskip

AXIOM 2. In the ultrafunction formalism, the notion of self-adjoint operator is not needed. In fact, observables can be represented by internal Hermitian operators. It follows that any observable has exactly $\beta =\dim ^{\ast }(V_{\mathcal{B}})$ eigenvalues (of course, if you take into account their multiplicity). No essential distinction between eigenvalues and continuous spectrum is required. For example, consider the eigenvalues of the position operator $\hat{q}$ of a free particle. The eigenfunction relative to an eigenvalue $q\in \mathbb{R}$ is an ultrafunction of $\delta $-type concentrated at the point $q$ (see Def. \ref{dt}).

In general, the eigenvalues $\mu $ of an internal Hermitian operator $A$ are hyperreal numbers, and hence, assuming that a measurement gives a real number, we have imposed in Axiom U2 that the outcome of an experiment is $sh(\mu )$.
However, we think that the probability is better described by the hyperreal number $\left\vert \left( \psi ,\psi _{j}\right) \right\vert ^{2}$ than by the real number $sh(\left\vert \left( \psi ,\psi _{j}\right) \right\vert ^{2})$ (see \cite{BHW} for a presentation and discussion of Non-Archimedean Probability). For example, let $\psi \in \mathcal{D}$ be the state of a system; the probability of finding a particle in the position $q$ is given by
\begin{equation*}
\left\vert \int \psi (x)\eta e_{q}(x)dx\right\vert =\eta \left\vert \psi (q)\right\vert
\end{equation*}
where $e_{q}$ is a $\delta $-type function and the normalization factor
\begin{equation*}
\eta =\left\Vert e_{q}\right\Vert _{\left( L^{2}\right) ^{\ast }}^{-1}\sim 0
\end{equation*}
is an infinitesimal number.

\bigskip

AXIOM 3. Since $H$ is an internal operator defined on a hyperfinite vector space, it can be represented by a Hermitian hyperfinite matrix, and hence the evolution operator of (\ref{sh}) is the exponential matrix $e^{-itH}$.

\bigskip

AXIOM 4. In ultrafunction theory, the mathematical distinction between the standard states and the ideal states is intrinsic, and it does not correspond to anything in the usual formalism. The point is to know whether it corresponds to something physically meaningful. Basically, we can say that the standard states can be prepared in a laboratory, while the ideal states represent ``extreme'' situations useful in the foundations of the theory and in thought experiments (Gedankenexperimente). For example, the Dirac $\delta $-measure is not a standard state but an ideal state, and it represents a situation in which the position of a particle is perfectly determined. Clearly this situation cannot be produced in a laboratory, but nevertheless it is useful in our description of the physical world. The standard states are represented by functions in $V$, which is chosen depending on the model of the physical system. The other states (namely, the states in $V_{\mathcal{B}}\backslash V$) are the ideal states. This situation makes explicit something which is already present in the classical approach. For example, in the Schr\"{o}dinger representation of a free particle in $\mathbb{R}^{3}$, consider the state
\begin{equation*}
\psi (x)=\frac{\varphi (x)}{|x|},\ \varphi \in \mathcal{D}(\mathbb{R}^{3}),\ \varphi (0)>0.
\end{equation*}
We have that $\psi (x)\in L^{2}(\mathbb{R}^{3})$, but this state cannot be produced in a laboratory, since the expected value of its energy
\begin{equation*}
\left( H\psi ,\psi \right) =\frac{1}{2m}\int \left\vert \nabla \psi \right\vert ^{2}dx
\end{equation*}
is infinite. In other words, Axiom U4 makes formally precise something which is already present (but hidden) in the classical theory. This point will also be discussed in the next section.

\bigskip

\subsection{The Heisenberg algebra}

\bigskip

In this section we apply ultrafunction theory to the description of a quantum particle via the algebraic approach. For simplicity, here we consider the one-dimensional case. The states of a particle are defined by the observables $q$ and $p$, which represent the position and the momentum respectively. A quantum particle is described by the algebra of observables generated by $p$ and $q$ according to the following commutation rules:
\begin{equation*}
\left[ p,q\right] =i,\ \ \left[ p,p\right] =0,\ \ \left[ q,q\right] =0.
\end{equation*}
The algebra generated by $p$ and $q$ with the above relations is called the Heisenberg algebra and is denoted by $\mathfrak{A}_{H}$.
The Heisenberg algebra does not fit into the general theory of $C^{\ast }$-algebras since neither $p$ nor $q$ is a bounded operator. The usual technical solution to this problem goes via the Weyl operators and the Weyl algebra (for more details and a discussion of this point we refer to \cite{strocchi05}). Let us see an alternative approach via ultrafunction theory. First of all we take a representation of $\mathfrak{A}_{H}$, namely an algebra homomorphism
\begin{equation*}
J:\mathfrak{A}_{H}\rightarrow \mathfrak{L}(V)
\end{equation*}
where $\mathfrak{L}(V)$ is the algebra of linear operators on a complex vector space $V\subset H\in \mathcal{U}$, where $H$ is a Hilbert space and $\mathcal{U}$ is our universe (see Section \ref{OL}). To fix the ideas, we can consider the following ``classical example'':
\begin{equation*}
H=L^{2}(\mathbb{R});\ \ \ V=\mathcal{S};
\end{equation*}
\begin{equation*}
J(p)=-i\partial ;\ \ J(q)=x.
\end{equation*}
The quantum system of a particle will be described by the ultravector space $V_{\mathcal{B}}=\mathcal{B}\left[ V\right] $. The operators $J(p)$ and $J(q)$ can be extended to the space $V_{\mathcal{B}}$ according to Definition (\ref{CE}); these extensions will be called $\hat{p}$ and $\hat{q}$ respectively. $\hat{p}$ and $\hat{q}$ are Hermitian operators, and hence $V_{\mathcal{B}}$ has an orthonormal basis generated by the eigenfunctions of $\hat{p}$ or of $\hat{q}$. Let $\left\{ e_{a}\right\} _{a\in \Sigma }$ be the eigenfunctions of $\hat{q}$ corresponding to the eigenvalues $a\in \Sigma \subset \mathbb{R}^{\ast }$. A very interesting fact is that the eigenfunctions violate the Heisenberg relation $\left[ \hat{p},\hat{q}\right] =i$. To see this we argue indirectly. Assume that the Heisenberg relation holds; then
\begin{equation*}
\left( \left[ \hat{p},\hat{q}\right] e_{a},e_{a}\right) =i\left\Vert e_{a}\right\Vert ^{2}.
\end{equation*}
On the other hand, by a direct computation, using the Hermiticity of $\hat{p}$ and $\hat{q}$ and the fact that the eigenvalue $a$ is (hyper)real, we get
\begin{eqnarray*}
\left( \left[ \hat{p},\hat{q}\right] e_{a},e_{a}\right) &=&\left( \left( \hat{p}\hat{q}-\hat{q}\hat{p}\right) e_{a},e_{a}\right) =\left( \hat{p}\hat{q}e_{a},e_{a}\right) -\left( \hat{q}\hat{p}e_{a},e_{a}\right) \\
&=&\left( \hat{q}e_{a},\hat{p}e_{a}\right) -\left( \hat{p}e_{a},\hat{q}e_{a}\right) =a\left( e_{a},\hat{p}e_{a}\right) -a\left( \hat{p}e_{a},e_{a}\right) =0.
\end{eqnarray*}
This fact is consistent with Axiom \textbf{U4}, which establishes that the ideal states cannot be produced in a laboratory. According to this description of QM, the uncertainty relations hold only because of the limitations of the experimental apparatus. In a laboratory you can prepare a state corresponding to a function $\psi $ in the space $V=\mathcal{S}$, but you cannot prepare a state such as $e_{a}\in V_{\mathcal{B}}\backslash \mathcal{S}$, which corresponds to a particle which is exactly in the position $a$.
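The computation above is, at bottom, a statement about Hermitian matrices that holds in any finite-dimensional inner product space, and it can be checked numerically. The following sketch uses ordinary finite-dimensional matrices as a stand-in for the hyperfinite setting (the matrices $P$ and $Q$ are randomly generated and purely illustrative): it verifies that $\left( \left[ P,Q\right] e_{a},e_{a}\right) $ vanishes for an eigenvector $e_{a}$ of $Q$, so that the relation $\left[ P,Q\right] =i\,\mathrm{Id}$ cannot hold there.

\begin{verbatim}
import numpy as np

# Finite-dimensional check of the computation above: for Hermitian P, Q
# and an eigenvector e_a of Q, the expectation ([P,Q] e_a, e_a) vanishes,
# whereas [P,Q] = i*Id would force it to equal i*||e_a||^2.
rng = np.random.default_rng(0)
N = 50
P = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
P = (P + P.conj().T) / 2              # Hermitian stand-in for p-hat
Q = np.diag(rng.standard_normal(N))   # Hermitian stand-in for q-hat (diagonal)
e = np.zeros(N, dtype=complex)
e[3] = 1.0                            # an eigenvector of Q
comm = P @ Q - Q @ P
print(np.vdot(e, comm @ e))           # ~ 0 up to rounding, not i
\end{verbatim}

In the ultrafunction setting the same cancellation takes place in a hyperfinite-dimensional space, which is exactly why the eigenfunctions $e_{a}$ are ideal rather than standard states.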
\section{Introduction} \label{sec:intro} In 2+1 flavor QCD, there exist two first order phase transition regions in the quark mass parameter space. When all three quarks are massless, the chiral phase transition is first order. Many studies have been made to determine the critical mass where the first order transition at small quark masses changes to a crossover. The critical mass turned out to be close to the physical point, and thus its quantitative determination is phenomenologically important. However, the continuum limit of the critical mass has not been obtained conclusively. The other first order transition region is located around the heavy quark limit. When all the quarks are infinitely heavy, QCD is just the pure gauge SU(3) Yang-Mills theory (quenched QCD), which is known to have a first order deconfinement transition. This first order transition changes to a crossover when the quark mass becomes smaller than a critical value. However, its continuum limit is also not well understood. In this study, we investigate the critical quark mass on lattices with $N_t=6$ and $8$, extending our previous study at $N_t=4$~\cite{Saito1,Saito2}. We first use a histogram method combined with the reweighting method to study the critical point in 2 and 2+1 flavor QCD in Sec.~\ref{sec:histogram}. We also discuss the limitation of the reweighting from quenched QCD due to the overlap problem. To reduce the overlap problem, we perform simulations of an effective action containing a Polyakov loop term in Sec.~\ref{sec:simplt}. We determine the critical point by a finite volume scaling analysis. Section~\ref{sec:summary} is devoted to a summary. \section{Histogram method} \label{sec:histogram} We study the phase structure of QCD around the endpoint of the first order phase transition line by the histogram of the absolute value of the Polyakov loop $|\Omega|$, which we define as \begin{eqnarray} W( |\Omega|; \beta, K) = \int {\cal D} U \ \delta(|\Omega| - |\hat{\Omega}|) \ e^{-S_g}\ \prod_{f=1}^{N_{\rm f}} \det M(K_f) , \ \ \ S_g = 6N_{\rm site} \beta \hat{P} , \label{eq:hist} \end{eqnarray} where $\hat{P}$ is the average plaquette, $\det M$ is the quark determinant, $\beta=6/g^2$ is the gauge coupling, $N_{\rm site}=N_s^3 \times N_t$ is the number of lattice sites, $N_{\rm f}$ is the number of flavors, and $K_f$ is the hopping parameter for the $f^{\rm th}$ flavor. For the quarks, we adopt the standard Wilson quark action. In terms of the histogram, the probability distribution function of $|\Omega|$ is given by ${\cal Z}^{-1}(\beta, K) \, W(|\Omega|;\beta,K)$ with $\cal Z$ the partition function defined by $ {\cal Z}(\beta,K) = \int\! W(|\Omega|;\beta,K) \, d|\Omega| $. We then define our effective potential by $V_{\rm eff} (|\Omega|) = - \ln W (|\Omega|)$. On a first order transition line, $V_{\rm eff}$ is of double-well type. The double well turns into a single well when the quark mass crosses the endpoint of the first order transition line, which we call the critical point. We adopt the hopping parameter expansion to compute the quark determinant in the heavy quark region \cite{Saito1}.
Up to the next-to-leading order contributions, the quark determinant for each flavor is expanded as \begin{eqnarray} \ln \det M(K) &=& 288N_{\mathrm{site}}K^4 \hat{P} + 768N_{\mathrm{site}}K^6 \left( 3 \hat{W}_{\mathrm{rec}}+6 \hat{W}_{\mathrm{chair}}+2 \hat{W}_{\mathrm{crown}} \right) + \cdots \nonumber \\ && \hspace{-15mm} +12\times 2^{N_t}N_s^3 K^{N_t} \mathrm{Re} \hat\Omega + 36\times 2^{N_t}N_s^3 N_t K^{N_t+2} \left( 2 \! \sum_{n=1}^{N_t/2-1} \! \mathrm{Re} \hat\Omega_n + \mathrm{Re} \hat\Omega_{N_t/2} \right) + \cdots \ , \label{eq:hpe_nlo} \end{eqnarray} where $\hat{W}_{\mathrm{rec}}$, $\hat{W}_{\mathrm{chair}}$, $\hat{W}_{\mathrm{crown}}$ are the 6-step Wilson loop operators of rectangle-type, chair-type, and crown-type, respectively, and $\hat{\Omega}_n$ is the $(N_t+2)$-step bent Polyakov loop, which contains two spatial links and $n$ temporal links between these spatial links. The first term with $\hat{P}$ can be absorbed into the plaquette gauge action by a shift $ \beta \rightarrow \beta^* \equiv \beta + 48 \sum_{f=1}^{N_{\rm f}} K_f^4$, where we have summed up the contributions of all flavors. The terms with 6-step Wilson loops can also be absorbed into the improvement terms of an improved gauge action. Because a shift in the improvement parameters only affects the amount of lattice discretization errors within the same universality class, the 6-step Wilson loop terms will not affect characteristic physical properties of the system in the continuum limit, such as the order of the phase transition. Therefore, in this study, we concentrate on the influence of the Polyakov-loop-type terms in~Eq.~(\ref{eq:hpe_nlo}). To study the truncation error of the hopping parameter expansion, we perform both the leading order (LO) calculation keeping only the $O(K^{N_t})$ term, and the next-to-leading order (NLO) calculation in which the $O(K^{N_t+2})$ terms are also taken into account. We investigate how the shape of $V_{\mathrm{eff}}$ changes under variation of the quark mass. We thus need to know $V_{\mathrm{eff}}$ over a wide range of $|\Omega|$. This is not straightforward with a single simulation because the statistical accuracy of $V_{\mathrm{eff}}$ degrades rapidly away from the minimum point, where $|\Omega|$ is most probable. To address this issue, we combine the information from several simulation points by the multi-point reweighting method~\cite{Saito1,Saito2}. To stay close to the first order transition line, we adjust $\beta$ for each $K$ so that the two minimum values of $V_{\mathrm{eff}}$ are as equal as possible. We then measure the difference between the peak height in the middle of $V_{\mathrm{eff}}$ and the minimum value, which is denoted as $\Delta V_{\mathrm{eff}}$. We define the critical point $K_{ct}$ as the $K$ where the difference $\Delta V_{\mathrm{eff}}$ vanishes. We perform simulations of quenched QCD on $24^3 \times 6$, $32^3 \times 6$, $36^3 \times 6$, and $24^3 \times 8$ lattices using the pseudo heat bath algorithm with over-relaxation. We investigate the critical mass mainly on the $N_t =6$ lattices. The lattice spacing $a$ is given by $a=(N_t T_c)^{-1}$ for a simulation at the transition temperature $T_c$. To evaluate the lattice spacing dependence, i.e., the $N_t$ dependence, we use the results for $N_t=4$ obtained in Ref.~\cite{Saito1} and perform an additional simulation on a lattice with $N_t=8$. Simulations at 4--6 $\beta$ points around the transition point are combined by the multi-point histogram method. Details of the simulations are given in~Ref.~\cite{itagaki19}.
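Before turning to the results, the following is a minimal sketch of the basic step of the method: building a reweighted histogram of $|\Omega|$ and the effective potential $V_{\mathrm{eff}}=-\ln W$ from quenched configurations, with the quark determinant included at LO only. The arrays \texttt{absO}, \texttt{ReO}, and \texttt{plaq} stand for hypothetical per-configuration measurements of $|\hat\Omega|$, $\mathrm{Re}\,\hat\Omega$, and $\hat{P}$, and we assume the usual convention in which the Boltzmann weight grows with $\beta \hat{P}$, so that the LO plaquette term simply replaces the simulation coupling $\beta_0$ by $\beta^*$.

\begin{verbatim}
import numpy as np

def veff_LO(absO, ReO, plaq, beta0, beta, K, Nf, Nt, Ns, bins=60):
    """LO-reweighted effective potential V_eff(|Omega|) = -ln W(|Omega|),
    built from quenched configurations generated at coupling beta0."""
    Nsite = Ns**3 * Nt
    beta_star = beta + 48.0 * Nf * K**4        # plaquette term of ln det M
    lam = 12.0 * 2.0**Nt * Nf * K**Nt          # LO Polyakov-loop coupling
    logw = 6.0 * Nsite * (beta_star - beta0) * plaq + lam * Ns**3 * ReO
    w = np.exp(logw - logw.max())              # stabilized reweighting factors
    W, edges = np.histogram(absO, bins=bins, weights=w, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, -np.log(np.maximum(W, 1e-300))  # V_eff up to a constant
\end{verbatim}

The multi-point version combines several $\beta$ ensembles with the standard multi-histogram weights; $\Delta V_{\mathrm{eff}}$ is then read off as the difference between the central peak and the (equalized) minima.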
\subsection{Critical point in two flavor QCD} \subsubsection{Results at $N_t=6$} \begin{figure}[tb] \centering \vspace{-8mm} \includegraphics[width=7cm,clip]{veff_24t6_inc_ber0.pdf} \hspace{1mm} \includegraphics[width=7cm,clip]{dveff_24t6_inc_ber0.pdf} \vspace{-3mm} \caption{$V_{\mathrm{eff}}$ (left) and $\Delta V_{\mathrm{eff}}$ (right) obtained by the hopping parameter expansion up to the next-to-leading order on a $24^3\times6$ lattice. } \label{fig1} \end{figure} In the left panel of Fig.~\ref{fig1}, we show $V_{\mathrm{eff}}$ for two flavor QCD on the $24^3 \times 6$ lattice computed with the NLO $\ln \det M$ given in~Eq.~(\ref{eq:hpe_nlo}). We find two minima for $K=0.0$ and 0.116, while the potential becomes almost flat around the minimum for $K=0.120$. We plot $\Delta V_{\mathrm{eff}}$ as a function of $K$ in the right panel of~Fig.~\ref{fig1}. Fitting the smallest four data points by a linear function (dashed line), we obtain $K_{ct}=0.1202(19)$. Repeating the same calculation with only the LO term, we obtain $K_{ct}=0.1359(30)$ on the $24^3 \times 6$ lattice, and $K_{ct}=0.1286(40)$ on the $32^3 \times 6$ lattice. Though these values of $K_{ct}$ at different spatial volumes are roughly the same within the statistical errors, their central values may suggest that $K_{ct}$ decreases at $N_t=6$ as the spatial volume increases. We have also tried to calculate $K_{ct}$ on a $36^3 \times 6$ lattice. However, unlike the cases of the $24^3 \times 6$ and $32^3 \times 6$ lattices, the overlap problem turned out to be too severe on this lattice to obtain a reliable $V_{\mathrm{eff}}$ up to the critical point~\cite{itagaki19}. We come back to the issue of the overlap problem in Sec.~\ref{sec:simplt}. \paragraph{Effective NLO method:} Before proceeding to other issues, let us discuss a method, introduced in~Ref.~\cite{Saito2}, to effectively incorporate NLO effects into the LO calculation of $V_{\mathrm{eff}}$ and $K_{ct}$. The basic observation of~Ref.~\cite{Saito2} is that the bent Polyakov loops $\hat{\Omega}_n$ have a strong linear correlation with $\hat{\Omega}$ on each configuration and are well approximated by ${\rm Re} \hat\Omega_n \approx c_n \,{\rm Re} \hat\Omega$, where $c_n = \langle {\rm Re} \hat\Omega_n / {\rm Re} \hat\Omega \rangle$. Substituting this into Eq.~(\ref{eq:hpe_nlo}), we find that the NLO effects can be absorbed by a shift of $K$. Denoting the $K_{ct}$ calculated with only the LO term as $K_{ct,\mathrm{LO}}$, the $K_{ct}$ effectively including the NLO terms can be obtained by solving \begin{equation} K_{ct,{\rm eff}}^{N_t} \left( 1+C_\Omega\,N_t K_{ct,{\rm eff}}^{2} \right)= K_{ct,\mathrm{LO}}^{N_t}, \hspace{8mm} C_\Omega \equiv 6\sum_{n=1}^{N_t/2-1} c_n +3c_{N_t/2}. \label{eq:keff} \end{equation} On the $24^3 \times 6$ lattice, we find that $K_{ct,{\rm eff}}=0.1205(23)$, which is consistent with $K_{ct}=0.1202(19)$ computed directly with the NLO contributions. We thus find that the effective NLO method works well. The method is useful in avoiding repeated analyses, e.g., for various numbers of flavors. \paragraph{Truncation error of hopping parameter expansion:} We now compare the LO and NLO results for $K_{ct}$. We find that the NLO $K_{ct}$ is about 10\% smaller than the LO result. This means that the truncation error of the hopping parameter expansion in $K_{ct}$ is larger than the statistical errors for $N_t=6$.
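As a practical aside, Eq.~(\ref{eq:keff}) is a one-dimensional root-finding problem that is easily solved numerically; a minimal sketch (assuming $C_\Omega>0$, so that the root lies between $0$ and $K_{ct,\mathrm{LO}}$):

\begin{verbatim}
from scipy.optimize import brentq

def K_eff(K_LO, C_Omega, Nt):
    """Solve K^Nt (1 + C_Omega*Nt*K^2) = K_LO^Nt for the effective NLO K_ct."""
    f = lambda K: K**Nt * (1.0 + C_Omega * Nt * K**2) - K_LO**Nt
    return brentq(f, 0.0, K_LO)   # sign change: f(0) < 0 < f(K_LO)
\end{verbatim}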
This truncation error is in contrast to the case of $N_t=4$: On the $24^3 \times 4$ lattice, we obtain $K_{ct,{\rm eff}}=0.0640(10)$ with the effective NLO method, to be compared with the LO value $K_{ct,\mathrm{LO}}=0.0658(3)(^{+4}_{-11})$~\cite{Saito1}. We thus find that, for $N_t=4$, the truncation error is about 3\% and small in comparison with the statistical errors~\cite{Saito2}. Careful treatments including higher order terms are required for $N_t \ge 6$. \subsubsection{Towards the continuum limit of the critical point} Our results for $K_{ct}$ on the $24^3 \times 4$ and $24^3 \times 6$ lattices are summarized in Table~\ref{tab1}. The effective NLO method was used for the NLO value for $N_t=4$. We note that $K_{ct}$ for $N_t=6$ is about twice as large as that for $N_t=4$. We have also calculated $V_{\mathrm{eff}}$ on a $24^3 \times 8$ lattice. However, $V_{\mathrm{eff}}$ remains double-well up to quite large $K$, where the hopping parameter expansion is not applicable. Thus, we cannot determine $K_{ct}$ for $N_t=8$ with the present method~\cite{itagaki19}. \begin{table}[tb] \vspace{-8mm} \caption{ The critical point $K_{ct}$ on $24^3 \times 4$ and $24^3 \times 6$ lattices calculated by the LO and NLO hopping parameter expansion of two flavor QCD. Also listed are the values of $m_{\mathrm{PS}}/T_c$ at $K_{ct}$, computed by the reweighting method and by the two flavor full QCD simulation, on a zero-temperature $16^3 \times 32$ lattice.} \begin{center} \begin{tabular}{c|ccc|ccc} \hline & up to LO & reweighting & full QCD & up to NLO & reweighting & full QCD \\ lattice & $K_{ct}$ & $m_{\mathrm{PS}}/T_c$ & $m_{\mathrm{PS}}/T_c$ & $K_{ct}$ & $m_{\mathrm{PS}}/T_c$ & $m_{\mathrm{PS}}/T_c$ \\ \hline $24^3 \times 4$ & 0.0658(10) & 15.47(14) & 15.47(14) & 0.0640(10) & 15.74(14) & 15.73(14) \\ $24^3 \times 6$ & 0.1359(30) & 7.88(69) & 7.43(78) & 0.1202(19) & 11.29(40) & 11.15(42) \\ \hline \end{tabular} \label{tab1} \end{center} \end{table} To make the physical implication of the values of $K_{ct}$ clearer, we calculate the pseudo-scalar meson mass $m_{\mathrm{PS}}$ at $K_{ct}$ by performing additional zero-temperature simulations. In this study, we perform the following two simulations: One is the direct two flavor full QCD simulation adopting the same combination of gauge and quark actions and adjusting the simulation parameters $(\beta, K)$ to the critical point obtained in the finite-temperature study. The other is the quenched QCD simulation combined with the reweighting method, as adopted in the determination of $K_{ct}$ with the LO hopping parameter expansion. As we discussed, the effect of the plaquette term can be absorbed by the shift $\beta \rightarrow \beta^*$. In both the full and quenched simulations, we generate configurations by the hybrid Monte Carlo algorithm on $16^3 \times 32$ lattices. The number of configurations is 52 at each simulation point. Our results for $m_{\mathrm{PS}}/T_c$ at $K_{ct}$ are also listed in Table~\ref{tab1}. The errors for $m_{\mathrm{PS}}/T_c$ include the error propagated from that of $K_{ct}$. We find that the results of the full QCD and reweighting calculations are consistent within the errors. This means that the reweighting method is effective, and that the 6-step Wilson loops disregarded in the reweighting method have no large effect on the pseudo-scalar meson mass either. On the other hand, for $N_t=6$, corresponding to the difference of $K_{ct}$ between the LO and NLO calculations, $m_{\mathrm{PS}}/T_c$ at the NLO $K_{ct}$ is a factor of about $1.4$ smaller than that at the LO $K_{ct}$.
If we estimate the systematic error from the truncation of the hopping parameter expansion by the difference between the LO and NLO calculations, we obtain $m_{\mathrm{PS}}/T_c = 11.15(42)(372)$ for $N_t=6$. Because this is smaller than $m_{\mathrm{PS}}/T_c = 15.73(14)(26)$ for $N_t=4$, our results suggest that the critical quark mass decreases as the lattice spacing decreases. \subsection{Critical line in 2+1 flavor QCD} \label{sec:3flavor} The effective NLO relation~(\ref{eq:keff}) can be easily generalized to any number of flavors. E.g., the critical line in the $(K_{ud},K_s)$ plane of 2+1 flavor QCD is given by \begin{eqnarray} 2 K_{ct,ud}^{N_t} \left( 1+C_\Omega\,N_t K_{ct,ud}^{2} \right) + K_{ct,s}^{N_t} \left( 1+C_\Omega\,N_t K_{ct,s}^{2} \right) =2 K_{ct,\mathrm{LO}}^{N_t} , \label{eq:2+1clNLO} \end{eqnarray} where $K_{ct,\mathrm{LO}}$ is the LO $K_{ct}$ for $N_{\rm f}=2$. By solving this equation using the $K_{ct,\mathrm{LO}}$ obtained on the $24^3 \times 4$ and $24^3 \times 6$ lattices, we obtain the green curves in Fig.~\ref{fig2} for $N_t=4$ (left) and $N_t=6$ (right), respectively. The red curves in Fig.~\ref{fig2} show the LO critical line calculated by Eq.~(\ref{eq:2+1clNLO}) with the terms proportional to $C_\Omega$ removed. The difference between the two curves is an estimate of the truncation error of the hopping parameter expansion. \begin{figure}[tb] \centering \vspace{-8mm} \includegraphics[width=7cm,clip]{Kct_nt4.pdf} \hspace{1mm} \includegraphics[width=7cm,clip]{Kct_nt6.pdf} \vspace{-5mm} \caption{Critical line of 2+1 flavor QCD on the $24^3 \times 4$ (left) and $24^3 \times 6$ (right) lattices. } \label{fig2} \end{figure} \section{Effective heavy quark QCD with Polyakov loop} \label{sec:simplt} \begin{figure}[tb] \centering \includegraphics[width=7cm,clip]{binderfig.pdf} \vspace{-3mm} \caption{$\lambda$ dependence of $B_4$ at the transition point. } \label{fig3} \end{figure} We found that the overlap problem occurs in the determination of $K_{ct}$ on the $36^3 \times 6$ lattice~\cite{itagaki19}. As an attempt to avoid the overlap problem, we perform simulations with an effective action for QCD with heavy quarks, $S_{\rm eff} = -6 N_{\rm site} \beta \hat{P} - N_s^3 \lambda {\rm Re} \hat{\Omega}$. The Polyakov loop term corresponds to the LO hopping parameter expansion of $\ln \det M$ with $\lambda = 12 \times 2^{N_t} N_{\rm f} K^{N_t}$. Because a heat bath algorithm is applicable to this action, the computational cost is much smaller than that of full QCD simulations. We include the NLO contributions by reweighting. Simulations are performed on $N_t=4$ lattices with $N_s=32$, 36, 40, and 48. We generate gauge configurations for several values of $(\beta,\lambda)$, and study the dependence on these parameters by the multi-point reweighting method. The number of configurations is $600,000$ for each $\beta$ and $\lambda$. In this study, we identify the critical point by the Binder cumulant of the Polyakov loop, \begin{eqnarray} B_4=\frac{\langle ( \Omega - \langle \Omega \rangle )^4 \rangle}{\langle ( \Omega - \langle \Omega \rangle )^2 \rangle^2 }. \end{eqnarray} At the critical point $\lambda_{ct}$, $B_4$ is independent of the spatial volume. The value of $B_4$ at $\lambda_{ct}$ depends on the universality class. The results for $B_4$ on the first order transition line are plotted in Fig.~\ref{fig3} as a function of $\lambda$. As shown in this figure, the lines of $B_4$ for different volumes cross at one point.
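A minimal sketch of this analysis step --- estimating $B_4$ from per-configuration Polyakov-loop samples and fitting the crossing with the finite-size-scaling ansatz used below --- is the following (the arrays \texttt{Ns\_arr}, \texttt{lam\_arr}, and \texttt{B4\_arr} are hypothetical measured inputs):

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def binder_B4(O):
    """Binder cumulant B4 of (real) per-configuration Polyakov-loop samples O."""
    d = O - O.mean()
    return np.mean(d**4) / np.mean(d**2)**2

def fss(X, B4ct, A, inv_nu, lam_ct):
    """Finite-size-scaling ansatz B4 = B4ct + A * Ns^(1/nu) * (lam - lam_ct)."""
    Ns, lam = X
    return B4ct + A * Ns**inv_nu * (lam - lam_ct)

# popt, pcov = curve_fit(fss, (Ns_arr, lam_arr), B4_arr,
#                        p0=[1.6, 1.0, 1.0 / 0.63, 0.005])
\end{verbatim}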
We fit the data with $B_4(N_s, \lambda)=B_{4ct}+AN_s^{1/\nu}(\lambda -\lambda_{ct})$, where $B_{4ct}$, $A$, $\nu$, and $\lambda_{ct}$ are the fit parameters. $B_{4ct}$ and $\lambda_{ct}$ correspond to $B_4$ and $\lambda$ at the critical point, respectively. From a fit using the data of all four volumes, we obtain $\lambda_{ct} =0.004754(84)$, $B_{4ct} =1.644(13)$, and $\nu =0.65(8)$. When we fit only the data of the three largest volumes, $N_s =36$, 40, and 48, we get $\lambda_{ct}=0.00468(11)$, $B_{4ct} =1.630(20)$, and $\nu =0.65(11)$. These results are almost consistent with those expected from the universality class of the 3D Ising model: $B_{4ct}^\mathrm{Ising}=1.604$ and $\nu^\mathrm{Ising}=0.63$. The result $\lambda_{ct} =0.004754(84)$ corresponds to $K_{ct}=0.05932$. This is $10\%$ smaller than the result obtained by the histogram method on the $24^3 \times 4$ lattice. To understand this difference, we investigate the volume dependence of the histogram at the transition point. As mentioned in the previous section, $K_{ct}$ decreases as the volume increases. The histograms just above and below $\lambda_{ct}$ are plotted in Fig.~\ref{fig4}. For the case of $\lambda =0.004 < \lambda_{ct}$ (left), the central dent in the histogram gets deeper as the volume increases. On the other hand, for $\lambda =0.005 > \lambda_{ct}$ (right), the central dent becomes shallower. These observations are consistent with the picture that $\lambda_{ct}$ is the boundary dividing the regions with one peak and with two peaks in the infinite-volume limit. \begin{figure}[tb] \centering \vspace{-8mm} \includegraphics[width=65mm,clip]{multi_histogram004.pdf} \hspace{1mm} \includegraphics[width=65mm,clip]{multi_histogram005.pdf} \vspace{-3mm} \caption{Spatial volume dependence of the histograms at $\lambda =0.004$ and $0.005$.} \label{fig4} \end{figure} \section{Summary} \label{sec:summary} We studied the location of the critical point at which the first order phase transition changes to a crossover in the heavy quark region by investigating the histogram of the Polyakov loop and applying a finite-size scaling analysis. We performed simulations of quenched QCD together with the reweighting method. The quark determinant is evaluated by the hopping parameter expansion. The truncation error of the hopping parameter expansion is visible for $N_t=6$; higher order terms are needed for larger $N_t$. The overlap problem arises for large volumes. To reduce the overlap problem, we introduced an external source term of the Polyakov loop in the simulation. The value of $B_4$ at the critical point is almost consistent with that of the 3D Ising model. \paragraph{Acknowledgments:} This work was in part supported by JSPS KAKENHI (Grant Nos.\ JP19H05146, JP19K03819, JP19H05598, JP18K03607, JP17K05442, JP15K05041, JP26400251, and JP26287040), the HPCI System Research project (Project ID: hp170208, hp190028, hp190036), and JHPCN projects (jh190003, jh190063). This research used OCTPUS at Osaka University and ITO at Kyushu University.
\section{Introduction} Our focus is on data classification problems in which only a \textit{binary} representation of the data is available. Such binary representations may arise under a variety of circumstances. In some cases, they may arise naturally due to compressive acquisition. For example, distributed systems may have bandwidth and energy constraints that necessitate extremely coarse quantization of the measurements \cite{fang2014sparse}. A binary data representation can also be particularly appealing in hardware implementations because it is inexpensive to compute and promotes a fast hardware device \cite{JacquLBB_Robust,LaskaWYB_Trust}; such benefits have contributed to the success, for example, of 1-bit Sigma-Delta converters \cite{aziz1996overview,candy1962oversampling}. Alternatively, binary, heavily quantized, or compressed representations may be part of the classification algorithm design in the interest of data compression and speed (see, e.g., \cite{BoufoB_1Bit,hunter2010compressive,gupta2010sample,hahn2014adaptive}). The goal of this paper is to present a framework for performing learning inferences, such as classification, from highly quantized data representations -- we focus on the extreme case of 1-bit (binary) representations. Let us begin with the mathematical formulation of this problem. {\bfseries Problem Formulation.} Let $\{x_i\}_{i=1}^{p}\subset \mathbb{R}^n$ be a point cloud represented via a matrix $$X = [x_1\,\, x_2\,\, \cdots \,\, x_p] \in \mathbb{R}^{n\times p}.$$ Moreover, let $A: \mathbb{R}^n \to \mathbb{R}^m$ be a linear map, and denote by $\sign: \mathbb{R} \to \mathbb{R}$ the sign operator given by \begin{align*} \sign(a) = \begin{cases} 1 & a\geq 0 \\ -1 & a<0. \end{cases} \end{align*} Without risk of confusion, we overload the above notation so the sign operator can apply to matrices (entrywise). In particular, for an $m$ by $p$ matrix $M$, and $(i,j) \in [m]\times [p]$, we define $\sign(M)$ as the $m\times p$ matrix with entries $$(\sign(M))_{i,j} := \sign(M_{i,j}).$$ We consider the setting where a classification algorithm has access to training data of the form $Q=\sign(AX)$, along with a vector of associated labels $b = (b_1, \,\, \cdots \,\, , b_p )\in\{1,\dots,G\}^p$, indicating the membership of each $x_i$ to exactly one of $G$ classes. Here, $A$ is an $m$ by $n$ matrix. The rows of $A$ define \textit{hyperplanes} in $\mathbb{R}^n$ and the binary sign information tells us which side of the hyperplane each data point lies on. Throughout, we will take $A$ to have independent identically distributed standard Gaussian entries. Given $Q$ and $b$, we wish to train an algorithm that can be used to classify new signals, available only in a similar binary form via the matrix $A$, for which the label is unknown. \subsection{Contribution} Our contribution is a \textit{framework} for classifying data into a given number of classes using only a binary representation of the data. This framework serves several purposes: (i) it provides mathematical tools that can be used for classification in applications where data is already captured in a simple binary representation, (ii) it demonstrates that, for general problems, classification can be done effectively using low-dimensional measurements, (iii) it suggests an approach to use these measurements for classification with low computational cost, and (iv) it provides a simple technique for classification that can be mathematically analyzed.
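To make the measurement model concrete, the following is a minimal sketch of how the binary data $Q=\sign(AX)$ is generated (the sizes are hypothetical; $A$ is Gaussian as assumed throughout):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 784, 200, 1000            # ambient dim., hyperplanes, data points
A = rng.standard_normal((m, n))     # rows of A = random hyperplane normals
X = rng.standard_normal((n, p))     # columns of X = data points x_i
Q = np.sign(A @ X)                  # the only data the learner sees
# Q[i, j] in {-1, +1} records the side of hyperplane i on which x_j lies;
# Gaussian measurements are almost surely nonzero, so sign(0) never occurs.
\end{verbatim}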
We believe this framework can be extended and utilized to build novel algorithmic approaches for many types of learning problems. In this work, we present one method for classification using training data, illustrate its promise on synthetic and real data, and provide a theoretical analysis of the proposed approach in the simple setting of two-dimensional signals and two possible classes. Under mild assumptions, we derive an explicit lower bound on the probability that a new data point gets classified correctly. This analysis serves as a foundation for analyzing the method in more complicated settings, and a framework for studying similar types of approaches. \subsection{Organization} We proceed next in Section \ref{sec:prior} with a brief overview of related work. Then, in Section \ref{section::algorithm} we propose a two-stage method for classifying data into a given number of classes using only a binary representation of the data. The first stage of the method performs training on data with known class membership, and the second stage is used for classifying new data points with a priori unknown class membership. Next, in Section \ref{section::experiments} we demonstrate the potential of the proposed approach on both synthetically generated data and real datasets, with application to handwritten digit recognition and facial recognition. Finally, in Section \ref{section::theory} we provide a theoretical analysis of the proposed approach in the simple setting of two-dimensional signals and two classes. We conclude in Section \ref{sec::conclude} with some discussion and future directions. \subsection{Prior Work}\label{sec:prior} There is a large body of work on several areas related to the subject of this paper, ranging from classification to compressed sensing, hashing, quantization, and deep learning. Due to the popularity and impact of each of these research areas, any review of prior work that we provide here must necessarily be non-exhaustive. Thus, in what follows, we briefly discuss related prior work, highlighting connections to our work but also stressing the distinctions. Support vector machines (SVM) (see, e.g., \cite{CristS_Introduction, hearst1998support, andrew2000introduction, joachims1998text, steinwart2008support}) have become popular in machine learning, and are often used for classification. Provided a training set of data points and known labels, the SVM problem is to construct the optimal hyperplane (or hyperplanes) separating the data (if the data is linearly separable) or maximizing the geometric margin between the classes (if the data is not linearly separable). Although related, the approach taken in this paper is fundamentally different from that of SVM. Instead of searching for the \textit{optimal} separating hyperplane, our proposed algorithm uses many randomly selected hyperplanes (via the rows of the matrix $A$), and uses the relationship between these hyperplanes and the training data to construct a classification procedure that operates on the same type of information between those hyperplanes and the data to be classified. The process of transforming high-dimensional data points into low-dimensional spaces has been studied extensively in related contexts.
For example, the pioneering Johnson-Lindenstrauss Lemma states that any set of $p$ points in high dimensional Euclidean space can be (linearly) embedded into $O(\epsilon^{-2} \log(p))$ dimensions, without distorting the distance between any two points by more than a small factor, namely $\epsilon$ \cite{JohnsL_Extensions}. Since the original work of Johnson and Lindenstrauss, much work on Johnson-Lindenstrauss embeddings (often motivated by signal processing and data analysis applications) has focused on randomized embeddings where the matrix associated with the linear embedding is drawn from an appropriate random distribution. Such random embeddings include those based on Gaussian and other subgaussian random variables as well as those that admit fast implementations, usually based on the fast Fourier transform (see, e.g., \cite{ailon2006approximate, achlioptas2003database, dasgupta2003elementary}). Another important line of related work is \textit{compressed sensing}, in which it has been demonstrated that far fewer linear measurements than dictated by traditional Nyquist sampling can be used to represent high-dimensional data \cite{CandeRT_Stable,CandeRT_Robust,Donoh_Compressed}. For a signal $x\in\mathbb{R}^n$, one obtains $m<n$ measurements of the form $y = Ax$ (or noisy measurements $y=Ax+z$ for $z\in\mathbb{R}^m$), where $A\in\mathbb{R}^{m\times n}$, and the goal is to recover the signal $x$. By assuming the signal $x$ is $s$-sparse, meaning that $\| x\|_0 = |\mathrm{supp}(x)| = s \ll n$, the recovery problem becomes well-posed under certain conditions on $A$. Indeed, there is now a vast literature describing recovery results and algorithms when $A$, say, is a random matrix drawn from appropriate distributions (including those where the entries of $A$ are independent Gaussian random variables). The relationship between Johnson-Lindenstrauss embeddings and compressed sensing is deep and bi-directional; matrices that yield Johnson-Lindenstrauss embeddings make excellent compressed sensing matrices \cite{baraniuk2006johnson} and conversely, compressed sensing matrices (with minor modifications) yield Johnson-Lindenstrauss embeddings \cite{krahmer2011new}. To allow processing on digital computers, compressive measurements must often be \textit{quantized}, or mapped to discrete values from some finite set. The extreme quantization setting where only the sign bit is acquired is known as \textit{one-bit compressed sensing} and was introduced in \cite{BoufoB_1Bit}. In this framework, the measurements now take the form $y = \sign(Ax)$, and the objective is still to recover the signal $x$. Several methods have since been developed to recover the signal $x$ (up to normalization) from such simple one-bit measurements (see e.g., \cite{PlanV_One,PlanV_Robust,gopi2013one,JacquLBB_Robust,yan2012robust,jacques2013quantized}). Although the data we consider in this paper takes a similar form, the overall goal is different; rather than signal \textit{reconstruction}, our interest is data \textit{classification}. More recently, there has been growing interest in binary embeddings (embeddings into the binary cube, see e.g., \cite{PlanV_Dimension,yu2014circulant,gong2013iterative,price2015binary,choromanska2016binary, dirksen2016fast}), where it has been observed that using certain linear projections and then applying the sign operator as a nonlinear map largely preserves information about the angular distance between vectors provided one takes sufficiently many measurements. 
Indeed, the measurement operators used for binary embeddings are Johnson-Lindenstrauss embeddings and thus also similar to those used in compressed sensing, so they again range from random Gaussian and subgaussian matrices to those admitting fast linear transformations, such as random circulant matrices (see, e.g., \cite{dirksen2016fast} for an overview). Although we consider a similar binary measurement process, we are not necessarily concerned with geometry preservation in the low-dimensional space, but rather the ability to still perform data classification. Deep Learning is an area of machine learning based on learning data representations using multiple levels of abstraction, or layers. Each of these layers is essentially a function whose parameters are learned, and the full network is thus a composition of such functions. Algorithms for such deep neural networks have recently obtained state of the art results for classification. Their success has been due to the availability of large training data sets coupled with advancements in computing power and the development of new techniques (e.g., \cite{krizhevsky2012imagenet,simonyan2014very,szegedy2015going,russakovsky2015imagenet}). We consider deep learning and neural networks as motivational to our layered algorithm design. However, we are not tuning nor optimizing parameters as is typically done in deep learning, nor do our layers necessarily possess the structure typical in deep learning ``architectures"; this makes our approach potentially simpler and easier to work with. \section{The Proposed Classification Algorithm} \label{section::algorithm} The training phase of our algorithm is detailed in Algorithm \ref{proposed algorithm1}, where we suppose the training data $Q = \sign(AX)$ and associated labels $b$ are available. Indeed, the training algorithm proceeds in $L$ ``layers". In the $\ell$-th layer, $m$ index sets $\GamLi{\ell}{i} \subset [m]$, $|\GamLi{\ell}{i}| = \ell$, $i=1,...,m$, are randomly selected, so that all elements of $\GamLi{\ell}{i}$ are unique, and $\GamLi{\ell}{i} \neq \GamLi{\ell}{j}$ for $i\neq j$. This is achieved by selecting the multi-set of $\Lambda_{\ell,i}$'s uniformly at random from a set of cardinality $\binom{\binom{m}{\ell}}{m}$. During the $i$-th ``iteration" of the $\ell$-th layer, the rows of $Q$ indexed by $\GamLi{\ell}{i}$ are used to form the $\ell \times p$ matrix $Q^{\GamLi{\ell}{i}} \in \{\pm 1\}^{\ell \times p}$, and the unique sign patterns $q \in \{\pm 1\}^\ell$ are extracted from the columns of $Q^{\GamLi{\ell}{i}}$. The number of unique sign patterns (i.e., distinct columns) in $Q^{\GamLi{\ell}{i}}$ is given by $T_{\ell,i}\in \mathbb{N}$. For example, at the first layer the possible unique sign patterns are 1 and -1, describing which side of the selected hyperplane the training data points lie on; at the second layer the possible unique sign patterns are $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$, $\begin{bmatrix} -1 \\ 1 \end{bmatrix}$, $\begin{bmatrix} -1 \\ -1 \end{bmatrix}$, describing which side of the two selected hyperplanes the training data points lie on, and so on for the subsequent layers. For the $t$-th sign pattern and $g$-th class, a \textit{membership index\xspace} parameter $r(\ell,i,t,g)$ that uses knowledge of the number of training points in class $g$ having the $t$-th sign pattern, is calculated for every $\GamLi{\ell}{i}$.
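Continuing the earlier sketch (with $Q$ as generated there and hypothetical labels $b\in\{1,\dots,G\}^p$), the counting step of one training iteration --- extracting the unique column sign patterns of $Q^{\GamLi{\ell}{i}}$ and tallying the per-class counts $P_{g|t}$ that enter the membership index defined in (\ref{RF3}) just below --- can be written compactly:

\begin{verbatim}
import numpy as np

def pattern_counts(Q, b, Lam, G):
    """Unique column sign patterns of the submatrix Q^Lambda, together with
    the counts P[t, g-1] of class-g training points showing pattern t."""
    Q_Lam = Q[np.asarray(Lam), :]                 # the ell x p submatrix
    patterns, t_of_col = np.unique(Q_Lam.T, axis=0, return_inverse=True)
    P = np.zeros((patterns.shape[0], G), dtype=int)
    for t, g in zip(t_of_col, b):
        P[t, g - 1] += 1                          # labels are 1, ..., G
    return patterns, P

def membership_index(P):
    """Membership index r(l,i,t,g) of Eq. (RF3), vectorized over t and g."""
    tot = P.sum(axis=1, keepdims=True)            # sum_j P_{j|t}, always >= 1
    bal = np.abs(P[:, :, None] - P[:, None, :]).sum(axis=2)  # sum_j |P_g-P_j|
    return (P / tot) * (bal / tot)
\end{verbatim}

Calling \texttt{pattern\_counts} once per set selection $\GamLi{\ell}{i}$ and storing the output of \texttt{membership\_index} yields exactly the table of $r$ values that Algorithm \ref{proposed algorithm1} below records during training.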
Larger values of $r(\ell,i,t,g)$ suggest that the $t$-th sign pattern is more heavily dominated by class $g$; thus, if a signal with unknown label corresponds to the $t$-th sign pattern, we will be more likely to classify it into the $g$-th class. In this paper, we use the following choice for the membership index\xspace parameter $r(\ell,i,t,g)$, which we found to work well experimentally. Below, $P_{g|t}$ denotes the number of training points from the $g$-th class with the $t$-th sign pattern at the $i$-th set selection in the $\ell$-th layer (i.e., the $t$-th sign pattern determined from the set selection $\GamLi{\ell}{i}$): \begin{align} \label{RF3} r(\ell,i,t,g) &= \frac{P_{g|t}}{\sum_{j=1}^G P_{j|t}} \frac{\sum_{j=1}^G |P_{g|t} - P_{j|t}|}{\sum_{j=1}^G P_{j|t}}. \end{align} Let us briefly explain the intuition for this formula. The first fraction in \eqref{RF3} indicates the proportion of training points in class $g$ out of all points with sign pattern $t$. The second fraction in \eqref{RF3} is a balancing term that gives more weight to group $g$ when that group is much different in size than the others with the same sign pattern. If $P_{j|t}$ is the same for all classes $j = 1,\dots,G$, then $r(\ell,i,t,g)=0$ for all $g$, and thus no class is given extra weight for the given sign pattern, set selection, and layer. If $P_{g|t}$ is nonzero and $P_{j|t} = 0$ for all other classes, then $r(\ell,i,t,g) = G-1$ and $r(\ell,i,t,j) = 0$ for all $j\neq g$, so that class $g$ receives the largest weight. \begin{algorithm}[ht] \caption{Training } \label{proposed algorithm1} \begin{algorithmic} \STATE \textbf{input:} binary training data $Q$, training labels $b$, number of classes $G$, number of layers $L$ \STATE \FOR{$\ell$ from 1 to $L$, $i$ from 1 to $m$} \STATE \begin{tabular}{ll} \textbf{select:} & Randomly select $\GamLi{\ell}{i} \subset [m]$, $|\GamLi{\ell}{i}| = \ell$ \\ \textbf{determine:} & Determine the $T_{\ell,i}\in\mathbb{N}$ unique column patterns in $Q^{\GamLi{\ell}{i}}$ \end{tabular} \FOR{$t$ from 1 to $T_{\ell,i}$, $g$ from 1 to $G$} \STATE \begin{tabular}{ll} \textbf{compute:} & Compute $r(\ell,i,t,g)$ by (\ref{RF3})\\ \end{tabular} \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} Once the algorithm has been trained, we can use it to classify new signals. Suppose $x\in\mathbb{R}^n$ is a new signal for which the class is unknown, and we have available the quantized measurements $q = \sign(Ax)$. Then Algorithm \ref{proposed algorithm2} is used for the classification of $x$ into one of the $G$ classes. Notice that the number of layers $L$, the learned membership index\xspace values $r(\ell,i,t,g)$, the number of unique sign patterns $T_{\ell,i}$, and the set selections $\GamLi{\ell}{i}$ at each iteration of each layer are all available from Algorithm \ref{proposed algorithm1}. First, the decision vector $\tilde{r}$ is initialized to the zero vector in $\mathbb{R}^G$. Then for each layer $\ell$ and set selection $i$, the sign pattern $q^{\GamLi{\ell}{i}}$ is determined and the index $t^\star\in[T_{\ell,i}]$ is identified corresponding to the sign patterns that were determined during training. For each class $g$, $\tilde{r}(g)$ is updated via $\tilde{r}(g) \leftarrow \tilde{r}(g) + r(\ell,i,t^\star,g)$. If it happens that the sign pattern for $x$ does not match any sign pattern determined during training, no update to $\tilde{r}$ is performed. 
Finally, after scaling $\tilde{r}$ with respect to the number of layers and measurements, the largest entry of $\tilde{r}$ determines the estimated label $\widehat{b}_x$ of $x$. Note that this scaling does not actually affect the outcome of classification; we use it simply to ensure the quantity does not become unbounded for large problem sizes. \begin{algorithm}[ht] \caption{Classification} \label{proposed algorithm2} \begin{algorithmic} \STATE \textbf{input:} binary data $q$, number of classes $G$, number of layers $L$, learned parameters $r(\ell,i,t,g)$, $T_{\ell,i}$, and $\GamLi{\ell}{i}$ from Algorithm \ref{proposed algorithm1} \STATE \STATE \textbf{initialize:} $\tilde{r}(g) = 0$ for $g = 1,\dots,G$. \FOR{$\ell$ from 1 to $L$, $i$ from 1 to $m$} \STATE \begin{tabular}{ll} \textbf{identify:} & Identify the pattern $t^\star\in[T_{\ell,i}]$ to which $q^{\GamLi{\ell}{i}}$ corresponds \\ \end{tabular} \FOR{$g$ from 1 to $G$} \STATE \begin{tabular}{ll} \textbf{update:} & $\tilde{r}(g) = \tilde{r}(g) + r(\ell,i,t^\star,g)$ \\ \end{tabular} \ENDFOR \ENDFOR \STATE \textbf{scale:} Set $\tilde{r}(g) = \frac{\tilde{r}(g)}{Lm}$ for $g=1,\dots,G$ \STATE \textbf{classify:} $\widehat{b}_x = \argmax_{g\in\{1,\dots,G\}}\tilde{r}(g)$ \end{algorithmic} \end{algorithm} \section{Experimental Results}\label{section::experiments} In this section, we provide experimental results of Algorithms \ref{proposed algorithm1} and \ref{proposed algorithm2} for synthetically generated datasets, handwritten digit recognition using the MNIST dataset, and facial recognition using the extended YaleB database. In all of the experiments, the matrix $A$ is taken to have i.i.d. standard Gaussian entries. Also, we assume the data is centered. To ensure this, a pre-processing step on the raw data is performed to account for the fact that the data may not be centered around the origin. That is, given the original training data matrix $X$, we calculate $\mu = \frac{1}{p} \sum_{i=1}^p x_i$. Then for each column $x_i$ of $X$, we set $x_i \leftarrow x_i - \mu$. The testing data is adjusted similarly by $\mu$. Note that this assumption can be overcome in future work by using \textit{dithers}---that is, hyperplane dither values may be learned so that $Q = \sign(AX + \tau)$, where $\tau\in\mathbb{R}^m$---or by allowing for pre-processing of the data. \subsection{Classification of Synthetic Datasets} In our first stylized experiment, we consider three classes of Gaussian clouds in $\mathbb{R}^2$ (i.e., $n=2$); see Figure \ref{syn:gaussian clouds} for an example training and testing data setup. For each choice of $m\in\{5,7,9,11,13,15,17,19\}$ and $p\in\{75,150,225\}$ with equally sized training data sets for each class (that is, each class has either 25, 50, or 75 training points), we execute Algorithms \ref{proposed algorithm1} and \ref{proposed algorithm2} with a single layer and 30 trials of generating $A$. We perform classification of 50 test points per group, and report the average correct classification rate over all trials. The right plot of Figure \ref{syn:gaussian clouds} shows that $m\geq 15$ results in nearly perfect classification.
\begin{figure}[!htbp] \centering \begin{tabular}{cc} \includegraphics[height=2in]{images/Synthetic/gaussian_g3_d2/synthetic_1a-eps-converted-to.pdf} & \includegraphics[height=2in]{images/Synthetic/gaussian_g3_d2/synthetic_1b-eps-converted-to.pdf} \\ \end{tabular} \caption{Synthetic classification experiment with three Gaussian clouds ($G=3$), $L=1$, $n=2$, 50 test points per group, and 30 trials of randomly generating $A$. (Left) Example training and testing data setup. (Right) Average correct classification rate versus $m$ and for the indicated number of training points per class.} \label{syn:gaussian clouds} \end{figure} \begin{figure}[!htbp] \centering \begin{tabular}{cc} \includegraphics[height=2in]{images/Synthetic/6ball_g2_d2/synthetic_2a-eps-converted-to.pdf} & \includegraphics[height=2in]{images/Synthetic/6ball_g2_d2/synthetic_2b-eps-converted-to.pdf} \\ \end{tabular} \caption{Synthetic classification experiment with six Gaussian clouds and two classes ($G=2$), $L=4$, $n=2$, 50 test points per group, and 30 trials of randomly generating $A$. (Left) Example training and testing data setup. (Right) Average correct classification rate versus $m$ and for the indicated number of training points per class.} \label{syn:gaussian2 alternating6} \end{figure} Next, we present a suite of experiments where we again construct the classes as Gaussian clouds in $\mathbb{R}^2$, but utilize a non-uniformly alternating pattern around the origin with respect to the classes. In each case, we set the number of training data points for each class to be 25, 50, and 75. In Figure \ref{syn:gaussian2 alternating6}, we have two classes forming a total of six Gaussian clouds, and execute Algorithms \ref{proposed algorithm1} and \ref{proposed algorithm2} using four layers and $m\in\{10,30,50,70,90,110,130\}$. The classification accuracy increases for larger $m$, with nearly perfect classification for the largest values of $m$ selected. A similar experiment is shown in Figure \ref{syn:gaussian2 alternating8}, where we have two classes forming a total of eight Gaussian clouds, and execute the proposed algorithm using five layers. In the next two experiments, we display the classification results of Algorithms \ref{proposed algorithm1} and \ref{proposed algorithm2} when using $m\in\{10,30,50,70,90\}$ and one through four layers, and see that adding layers can be beneficial for more complicated data geometries. In Figure \ref{syn:gaussian3 alternating8}, we have three classes forming a total of eight Gaussian clouds. We see that from both $L=1$ to $L=2$ and $L=2$ to $L=3$, there are huge gains in classification accuracy. In Figure \ref{syn:gaussian4 alternating8}, we have four classes forming a total of eight Gaussian clouds. Again, from both $L=1$ to $L=2$ and $L=2$ to $L=3$ we see large improvements in classification accuracy, yet still better classification with $L=4$. We note here that in this case it also appears that more training data does not improve the performance (and perhaps even slightly decreases accuracy); this is of course unexpected in practice, but we believe this happens here only because of the construction of the Gaussian clouds -- more training data leads to more outliers in each cloud, making the sets harder to separate. 
\begin{figure}[!htbp] \centering \begin{tabular}{cc} \includegraphics[height=2in]{images/Synthetic/8ball_g2_d2/synthetic_3a-eps-converted-to.pdf} & \includegraphics[height=2in]{images/Synthetic/8ball_g2_d2/synthetic_3b-eps-converted-to.pdf} \\ \end{tabular} \caption{Synthetic classification experiment with eight Gaussian clouds and two classes ($G=2$), $L=5$, $n=2$, 50 test points per group, and 30 trials of randomly generating $A$. (Left) Example training and testing data setup. (Right) Average correct classification rate versus $m$ and for the indicated number of training points per class.} \label{syn:gaussian2 alternating8} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[height=2in]{images/Synthetic/8ball_g3_d2/synthetic_4a-eps-converted-to.pdf} \begin{tabular}{cc} \includegraphics[height=2in]{images/Synthetic/8ball_g3_d2/synthetic_4b-eps-converted-to.pdf} & \includegraphics[height=2in]{images/Synthetic/8ball_g3_d2/synthetic_4c-eps-converted-to.pdf} \\ (a) $L=1$ & (b) $L=2$ \\ \includegraphics[height=2in]{images/Synthetic/8ball_g3_d2/synthetic_4d-eps-converted-to.pdf} & \includegraphics[height=2in]{images/Synthetic/8ball_g3_d2/synthetic_4e-eps-converted-to.pdf} \\ (c) $L=3$ & (d) $L=4$ \\ \end{tabular} \caption{Synthetic classification experiment with eight Gaussian clouds and three classes ($G=3$), $L=1,\dots,4$, $n=2$, 50 test points per group, and 30 trials of randomly generating $A$. (Top) Example training and testing data setup. Average correct classification rate versus $m$ and for the indicated number of training points per class for: (middle left) $L=1$, (middle right) $L=2$, (bottom left) $L=3$, (bottom right) $L=4$.} \label{syn:gaussian3 alternating8} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[height=2in]{images/Synthetic/8ball_g4_d2/synthetic_5a-eps-converted-to.pdf} \begin{tabular}{cc} \includegraphics[height=2in]{images/Synthetic/8ball_g4_d2/synthetic_5b-eps-converted-to.pdf} & \includegraphics[height=2in]{images/Synthetic/8ball_g4_d2/synthetic_5c-eps-converted-to.pdf} \\ (a) $L=1$ & (b) $L=2$ \\ \includegraphics[height=2in]{images/Synthetic/8ball_g4_d2/synthetic_5d-eps-converted-to.pdf} & \includegraphics[height=2in]{images/Synthetic/8ball_g4_d2/synthetic_5e-eps-converted-to.pdf} \\ (c) $L=3$ & (d) $L=4$ \\ \end{tabular} \caption{Synthetic classification experiment with eight Gaussian clouds and four classes ($G=4$), $L=1,\dots,4$, $n=2$, 50 test points per group, and 30 trials of randomly generating $A$. (Top) Example training and testing data setup. Average correct classification rate versus $m$ and for the indicated number of training points per class for: (middle left) $L=1$, (middle right) $L=2$, (bottom left) $L=3$, (bottom right) $L=4$.} \label{syn:gaussian4 alternating8} \end{figure} \clearpage \subsection{Handwritten Digit Classification} In this section, we apply Algorithms \ref{proposed algorithm1} and \ref{proposed algorithm2} to the MNIST \cite{MNIST} dataset, which is a benchmark dataset of images of handwritten digits, each with $28 \times 28$ pixels. In total, the dataset has $60,000$ training examples and $10,000$ testing examples. First, we apply Algorithms \ref{proposed algorithm1} and \ref{proposed algorithm2} when considering only two digit classes. Figure \ref{mnist:01} shows the correct classification rate for the digits ``0" versus ``1". We set $m\in\{10,30,50,70,90,110\}$, $p\in\{50,100,150\}$ with equally sized training data sets for each class, and classify 50 images per digit class. 
Notice that the algorithm performs very well for small $m$ in comparison to $n=28\times 28 =784$, and with only a single layer. Figure \ref{mnist:05} shows the results of a similar setup for the digits ``0" and ``5". In this experiment, we increased to four layers and achieve classification accuracy around $90\%$ at the high end of the $m$ values tested. This indicates that the digits ``0" and ``5" are more likely to be mixed up than ``0" and ``1", which is understandable due to the more similar shapes of the digits ``0" and ``5". Next, we apply Algorithms \ref{proposed algorithm1} and \ref{proposed algorithm2} to the MNIST dataset with all ten digits. We utilize $1,000$, $3,000$, and $5,000$ training points per digit class, and perform classification with $800$ test images per class. The classification results using 18 layers and $m\in\{100, 200, 400, 600, 800\}$ are shown in Figure \ref{mnist:all}, where it can be seen that with $5,000$ training points per class, above 90\% classification accuracy is achieved for $m\geq 200$. We also see that larger training sets result in slightly improved classification. \begin{figure}[!htbp] \centering \begin{tabular}{cc} \raisebox{1.1\height}{\includegraphics[height=0.7in]{images/MNIST/groups_0_1/mnist_1a-eps-converted-to.pdf}} & \includegraphics[height=2in]{images/MNIST/groups_0_1/mnist_1b-eps-converted-to.pdf} \\ \end{tabular} \includegraphics[height=0.7in]{images/MNIST/groups_0_1/mnist_1c-eps-converted-to.pdf} \caption{Classification experiment using the handwritten ``0" and ``1" digit images from the MNIST dataset, $L=1$, $n=28\times 28 =784$, 50 test points per group, and 30 trials of randomly generating $A$. (Top left) Training data images when $p = 50$. (Top right) Average correct classification rate versus $m$ and for the indicated number of training points per class. (Bottom) Testing data images.} \label{mnist:01} \end{figure} \begin{figure}[!htbp] \centering \begin{tabular}{cc} \raisebox{1.1\height}{\includegraphics[height=0.7in]{images/MNIST/groups_0_5/mnist_2a-eps-converted-to.pdf}} & \includegraphics[height=2in]{images/MNIST/groups_0_5/mnist_2b-eps-converted-to.pdf} \end{tabular} \includegraphics[height=0.7in]{images/MNIST/groups_0_5/mnist_2c-eps-converted-to.pdf} \caption{Classification experiment using the handwritten ``0" and ``5" digit images from the MNIST dataset, $L=4$, $n=28\times 28=784$, 50 test points per group, and 30 trials of randomly generating $A$. (Top left) Training data images when $p = 50$. (Top right) Average correct classification rate versus $m$ and for the indicated number of training points per class. (Bottom) Testing data images.} \label{mnist:05} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[height=2in]{images/MNIST/10Digits/18layers_v2/mnist_3-eps-converted-to.pdf} \caption{Correct classification rate versus $m$ when using all ten (0-9) handwritten digits from the MNIST dataset, $L=18$, $n=28\times 28=784$, 1,000, 3,000, and 5,000 training points per group, 800 test points per group (8,000 total), and a single instance of randomly generating $A$.} \label{mnist:all} \end{figure} \clearpage \subsection{Facial Recognition} Our last experiment considers facial recognition using the extended YaleB dataset \cite{CHHH07,CHH07b,CHHZ06,HYHNZ05}. This dataset includes $32 \times 32$ images of 38 individuals with roughly 64 near-frontal images under different illuminations per individual.
We select two individuals from the dataset, and randomly select images with different illuminations to be included in the training and testing sets (note that the same illumination was included for \textit{each} individual in the training and testing data). We execute Algorithms \ref{proposed algorithm1} and \ref{proposed algorithm2} using four layers with $m\in\{10,50,100,150,200,250,300 \}$, $p\in\{20,40,60\}$ with equally sized training data sets for each class, and classify 30 images per class. The results are displayed in Figure \ref{yaleB}. Above $95\%$ correct classification is achieved for $m\geq 150$ for each training set size included. \begin{figure}[!htbp] \centering \begin{tabular}{cc} \raisebox{1.1\height}{\includegraphics[height=0.7in,width=2.5in]{images/YaleB/groups_5_6/yaleb_1a-eps-converted-to.pdf}} & \includegraphics[height=2in]{images/YaleB/groups_5_6/yaleb_1b-eps-converted-to.pdf} \end{tabular} \includegraphics[height=0.7in]{images/YaleB/groups_5_6/yaleb_1c-eps-converted-to.pdf} \caption{Classification experiment using two individuals from the extended YaleB dataset, $L=4$, $n=32\times 32 = 1024$, 30 test points per group, and 30 trials of randomly generating $A$. (Top left) Training data images when $p = 20$. (Top right) Average correct classification rate versus $m$ and for the indicated number of training points per class. (Bottom) Testing data images.} \label{yaleB} \end{figure} \section{Theoretical Analysis for a Simple Case}\label{section::theory} \subsection{Main Results} We now provide a theoretical analysis of Algorithms \ref{proposed algorithm1} and \ref{proposed algorithm2} in which we make a series of simplifying assumptions to make the development more tractable. We focus on the setting where the signals are two-dimensional, belonging to one of two classes, and consider a single layer (i.e., $\ell=1$, $n=2$, and $G=2$). Moreover, we assume the true classes $G_1$ and $G_2$ to be two disjoint \textit{cones} in $\mathbb{R}^2$ and assume that regions of the same angular measure have the same number (or density) of training points. We believe analyzing this setup will provide a foundation for a more generalized analysis in future work. Let $A_1$ denote the angular measure of $G_1$, defined by \[A_1 = \max_{x_1, x_2\in G_1} \angle(x_1,x_2),\] where $\angle(x_1,x_2)$ denotes the angle between the vectors $x_1$ and $x_2$; define $A_2$ similarly for $G_2$. Also, define \[A_{12} = \min_{x_1\in G_1, x_2\in G_2} \angle(x_1,x_2)\] as the angle between classes $G_1$ and $G_2$. Suppose that the test point $x\in G_1$, and that we classify $x$ using $m$ random hyperplanes. For simplicity, we assume that the hyperplanes can intersect the cones, but only intersect \textit{one} cone at a time. This means we are imposing the condition $A_{12} + A_1 + A_2 \leq \pi$. See Figure \ref{2d cones} for a visualization of the setup for the analysis. Notice that $A_1$ is partitioned into two disjoint pieces, $\theta_1$ and $\theta_2$, where $A_1 = \theta_1+\theta_2$. The angles $\theta_1$ and $\theta_2$ are determined by the location of $x$ within $G_1$. \begin{figure}[!htbp] \centering \includegraphics[height=3in]{images/Theory/Slide1.jpg} \caption{Visualization of the analysis setup for two classes of two dimensions. If a hyperplane intersects the $\theta_1$ region of $G_1$, then $x$ is not on the same side of the hyperplane as $G_2$. If a hyperplane intersects the $\theta_2$ region of $G_1$, then $x$ is on the same side of the hyperplane as $G_2$. 
That is, $\theta_1$ and $\theta_2$ are determined by the position of $x$ within $G_1$, and $\theta_1+\theta_2 = A_1$.} \label{2d cones} \end{figure} The membership index parameter (\ref{RF3}) is still used; however, we now have angles instead of numbers of training points. That is, \begin{align} \label{RF3 continuous} r(\ell,i,t,g) &= \frac{A_{g|t}}{\sum_{j=1}^G A_{j|t}} \frac{\sum_{j=1}^G |A_{g|t} - A_{j|t}|}{\sum_{j=1}^G A_{j|t}}, \end{align} where $A_{g|t}$ denotes the angle of class $g$ with the $t$-th sign pattern at the $i$-th set selection in the $\ell$-th layer. Throughout, let $t_i^\star$ denote the sign pattern index of the test point $x$ with the $i$-th hyperplane at the first level (i.e., $\ell=1$). Letting $\widehat{b}_x$ denote the classification label for $x$ after running the proposed algorithm, Theorem \ref{main theorem} describes the probability that $x$ gets classified correctly with $\widehat{b}_x = 1$. Note that for simplicity, in Theorem \ref{main theorem} we assume the classes $G_1$ and $G_2$ are of the same size (i.e., $A_1=A_2$) and the test point $x$ lies in the middle of class $G_1$ (i.e., $\theta_1 = \theta_2$). These assumptions are for convenience and clarity of presentation only (note that \eqref{RF3 bound multinomial complement} is already quite cumbersome), but the proof follows analogously (albeit without easy simplifications) for the general case; accordingly, we leave the computations in Table \ref{table::redness factors for 2d cones} in general form and do not utilize the assumption $\theta_1 = \theta_2$ until the end of the proof. \begin{theorem}\label{main theorem} Let the classes $G_1$ and $G_2$ be two cones in $\mathbb{R}^2$ defined by angular measures $A_1$ and $A_2$, respectively, and suppose regions of the same angular measure have the same density of training points. Suppose $A_1 = A_2$, $\theta_1 = \theta_2$, and $A_{12} + A_1 + A_2 \leq \pi$. Then, the probability that a data point $x\in G_1$ gets classified in class $G_1$ by Algorithms \ref{proposed algorithm1} and \ref{proposed algorithm2} using a single layer and a measurement matrix $A\in\mathbb{R}^{m\times 2}$ with independent standard Gaussian entries is bounded as follows, \begin{align} \mathbb{P}[\widehat{b}_x = 1] &\geq 1 - \hspace{-8mm} \underset{j+k_{1,\theta_1}+k_{1,\theta_2}+k_2 + k = m, \,\, k_{1,\theta_2} \geq 9(j+k_{1,\theta_1})}{\sum_{j=0}^m \sum_{k_{1,\theta_1}=0}^m \sum_{k_{1,\theta_2}=0}^m \sum_{k_2=0}^m \sum_{k=0}^m} \binom{m}{j,k_{1,\theta_1},k_{1,\theta_2},k_2,k} \left(\frac{A_{12}}{\pi}\right)^j \left(\frac{A_1}{2\pi} \right)^{k_{1,\theta_1}+k_{1,\theta_2}} \notag \\ &\quad\quad\times \left(\frac{A_1}{\pi} \right)^{k_2} \left(\frac{\pi-2A_1-A_{12}}{\pi}\right)^k. \label{RF3 bound multinomial complement} \end{align} \end{theorem} Figure \ref{theorem figure} displays the classification probability bound of Theorem \ref{main theorem} compared to the (simulated) true value of $\mathbb{P}[\widehat{b}_x = 1]$. Here, $A_1 = A_2 = 15^\circ$, $\theta_1 = \theta_2 = 7.5^\circ$, and $A_{12}$ and $m$ are varied. Most importantly, notice that in all cases, the classification probability approaches 1 with increasing $m$. Also, the result from Theorem \ref{main theorem} behaves similarly to the simulated true probability, especially as $m$ and $A_{12}$ increase.
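The probability $\mathbb{P}[\widehat{b}_x = 1]$ in this setting is also easy to estimate directly by simulation, which gives an independent check on the bound. The sketch below (hypothetical code, not that used for Figure \ref{theorem figure}) places $G_1 = [0, A_1]$ and $G_2 = [A_1+A_{12},\, A_1+A_{12}+A_2]$ on the half-circle of hyperplane angles, puts the test point $x$ at angle $\theta_2$ inside $G_1$, and draws the $m$ hyperplane angles uniformly from $[0,\pi)$, as induced by a Gaussian $A$:
\begin{verbatim}
import numpy as np

def estimate_p_correct(m, A1, A2, A12, theta2, trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(trials):
        phi = rng.uniform(0.0, np.pi, size=m)   # hyperplane angles
        # angular mass of each class on x's side of each hyperplane
        A1t = np.where(phi < theta2, A1 - phi, np.minimum(phi, A1))
        A2t = np.where((phi < theta2) | (phi >= A1 + A12 + A2), A2,
                       np.where(phi < A1 + A12, 0.0, phi - (A1 + A12)))
        S = A1t + A2t
        r1 = A1t * np.abs(A1t - A2t) / S**2     # membership indices
        r2 = A2t * np.abs(A1t - A2t) / S**2
        wins += (r1.sum() > r2.sum())
    return wins / trials

deg = np.pi / 180
print(estimate_p_correct(m=40, A1=15*deg, A2=15*deg,
                         A12=30*deg, theta2=7.5*deg))
\end{verbatim}
Here \texttt{A1t} and \texttt{A2t} realize the five hyperplane cases analyzed in the proof below, and \texttt{r1}, \texttt{r2} implement (\ref{RF3 continuous}) for $G = 2$.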
The following two corollaries provide asymptotic results for situations where $\mathbb{P}[\widehat{b}_x = 1]$ tends to 1 when $m\rightarrow\infty$. Corollary \ref{Corollary 1} provides this result whenever $A_{12}$ is at least as large as both $A_1$ and $\pi-2A_1-A_{12}$, and Corollary \ref{Corollary 2} provides this result for certain combinations of $A_1$ and $A_{12}$. \begin{cor}\label{Corollary 1} Consider the setup of Theorem \ref{main theorem}. Suppose $A_{12} \geq A_1$ and $A_{12} \geq \pi - 2A_1 - A_{12}$. Then $\mathbb{P}[\widehat{b}_x = 1] \rightarrow 1$ as $m\rightarrow \infty$. \end{cor} \begin{cor}\label{Corollary 2} Consider the setup of Theorem \ref{main theorem}. Suppose $A_1 + A_{12} > 0.58 \pi$ and $A_{12} + \frac{3}{4}A_1 \leq \frac{\pi}{2}$. Then $\mathbb{P}[\widehat{b}_x = 1] \rightarrow 1$ as $m\rightarrow \infty$. \end{cor} \begin{remark} Note that the two conditions in Corollary \ref{Corollary 2} imply that $A_1 \geq 0.32\pi$ and $A_{12} \leq 0.26 \pi$. \end{remark} \begin{figure}[!htbp] \centering \includegraphics[height=3in]{images/Theory/theory_1-eps-converted-to.pdf} \caption{$\mathbb{P}[\widehat{b}_x = 1]$ versus the number of hyperplanes $m$ when $A_{12}$ is varied (see legend), $A_1 = A_2 = 15^\circ$, and $\theta_1 = \theta_2 = 7.5^\circ$. The solid lines indicate the probability (\ref{classification with cutting continuous}) with the multinomial probability given by (\ref{multinomial probability}) and the conditional probability (\ref{conditional probability continuous RF3}) simulated over 1000 trials of the uniform random variables. The dashed lines indicate the result (\ref{RF3 bound multinomial complement}) provided in Theorem \ref{main theorem}.} \label{theorem figure} \end{figure} \subsection{Proof of Main Results} \subsubsection{Proof of Theorem \ref{main theorem}} \begin{proof} Using our setup, we have five possibilities for any given hyperplane: (i) the hyperplane completely separates the two classes, i.e., the cones associated with the two classes fall on either side of the hyperplane, (ii) the hyperplane does not separate the two classes at all, i.e., both cones fall on the same side of the hyperplane, (iii) the hyperplane cuts through $G_2$, (iv) the hyperplane cuts through $G_1$ via $\theta_1$, or (v) the hyperplane cuts through $G_1$ via $\theta_2$. Using this observation, we can now define the event \begin{equation}\label{theevent} E( j,k_{1,\theta_1},k_{1,\theta_2},k_2 ) \end{equation} whereby from among the $m$ total hyperplanes, $j$ hyperplanes separate the cones, $k_{1,\theta_1}$ hyperplanes cut $G_1$ in $\theta_1$, $k_{1,\theta_2}$ hyperplanes cut $G_1$ in $\theta_2$, and $k_2$ hyperplanes cut $G_2$. See Table \ref{table::redness factors for 2d cones} for easy reference of these quantities. Note that we must distinguish between hyperplanes that cut through $\theta_1$ and those that cut through $\theta_2$: $k_{1,\theta_1}$ hyperplanes cut $G_1$ and land within $\theta_1$, so that $x$ is \textit{not} on the same side of the hyperplane as $G_2$, whereas $k_{1,\theta_2}$ hyperplanes cut $G_1$ and land within $\theta_2$, so that $x$ \textit{is} on the same side of the hyperplane as $G_2$. These orientations will affect the computation of the membership index.
Using the above definition of (\ref{theevent}), we apply the law of total probability to get a handle on $\mathbb{P}[\widehat{b}_x = 1]$, the probability that the test point $x$ gets classified correctly, as follows, {\small \begin{align} \mathbb{P}[\widehat{b}_x = 1] &= \mathbb{P}\left[\sum_{i=1}^m r(\ell,i,t_i^\star,1) > \sum_{i=1}^m r(\ell,i,t_i^\star,2)\right] \notag\\ &= \sum_{\substack{j,k_{1,\theta_1}, k_{1,\theta_2},k_2 \\ j+k_{1,\theta_1}+k_{1,\theta_2}+k_2\leq m}} \mathbb{P}\left[\sum_{i=1}^m r(\ell,i,t^\star_i,1) > \sum_{i=1}^m r(\ell,i,t^\star_i,2) \;\middle|\; E( j,k_{1,\theta_1},k_{1,\theta_2},k_2 ) \right]\notag\\ &\qquad\qquad\qquad\qquad\times \mathbb{P}\left[ E( j,k_{1,\theta_1},k_{1,\theta_2},k_2 ) \right]. \label{classification with cutting continuous} \end{align}} The latter probability in (\ref{classification with cutting continuous}) is precisely the probability mass function of a multinomial random variable: \begin{align} &\mathbb{P}\left[E( j,k_{1,\theta_1},k_{1,\theta_2},k_2 )\right] \notag \\ &= \binom{m}{j,k_{1,\theta_1},k_{1,\theta_2},k_2,m-j-k_{1,\theta_1}-k_{1,\theta_2}-k_2} \left(\frac{A_{12}}{\pi}\right)^j \left(\frac{\theta_1}{\pi} \right)^{k_{1,\theta_1}} \left(\frac{\theta_2}{\pi} \right)^{k_{1,\theta_2}} \notag \\ &\quad\times \left(\frac{A_2}{\pi} \right)^{k_2} \left(\frac{\pi-A_1-A_2-A_{12}}{\pi}\right)^{m-j-k_{1,\theta_1}-k_{1,\theta_2}-k_2}, \label{multinomial probability} \end{align} where $\binom{n}{k_1,k_2,\dots,k_m} = \frac{n!}{k_1!k_2!\cdots k_m!}$. To evaluate the conditional probability in (\ref{classification with cutting continuous}), we must determine the value of $r(\ell,i,t_i^\star,g)$, for $g=1,2$, given the hyperplane cutting pattern event. Table \ref{table::redness factors for 2d cones} summarizes the possible cases. In the cases where the hyperplane cuts through either $G_1$ or $G_2$, we model the location of the hyperplane within the class by a uniform random variable. We will use the notation $z\sim U(a,b)$ to indicate that $z$ is a uniform random variable over the interval $[a,b]$, and let $u$, $u'$, $u_h$, $u_h'$ (for an index $h$) denote independent copies of a $U(0,1)$ uniform random variable; therefore $\theta u \sim U(0, \theta)$.
\begin{table}[!htbp] \centering \begin{tabular}{| c | c |c | c |} \hline Hyperplane Case & Number in event \eqref{theevent} & Class $g$ & Value of $r(\ell,i,t_i^\star,g)$ (see \eqref{RF3 continuous})\\ \hline \hline \multirow{2}{*}{(i) separates} & \multirow{2}{*}{$j$} & 1 & $1$\\ & & 2 & $0$\\ \hline \hline \multirow{2}{*}{(ii) does not separate} & \multirow{2}{*}{$m - j - k_2 - k_{1,\theta_1} - k_{1,\theta_2}$} & 1 & $\frac{A_1|A_1-A_2|}{(A_1+A_2)^2}$ \\ && 2 & $\frac{A_2|A_1-A_2|}{(A_1+A_2)^2}$\\ \hline \hline \multirow{2}{*}{(iii) cuts $G_2$} & \multirow{2}{*}{$k_2$} & 1 & $\frac{A_1|A_1-A_2u'|}{(A_1+A_2u')^2}$\\ && 2 & $\frac{A_2 u'|A_1-A_2u'|}{(A_1+A_2u')^2}$\\ \hline \hline \multirow{2}{*}{(iv) cuts $G_1$, $\theta_1$}& \multirow{2}{*}{$k_{1,\theta_1}$} & 1 & $1$\\ && 2 & $0$\\ \hline \hline \multirow{2}{*}{(v) cuts $G_1$, $\theta_2$} & \multirow{2}{*}{$k_{1,\theta_2}$} & 1 & $\frac{(\theta_1+\theta_2u)|\theta_1+\theta_2u-A_2|}{(\theta_1+\theta_2u+A_2)^2}$ \\ && 2 & $\frac{A_2|\theta_1+\theta_2u-A_2|}{(\theta_1+\theta_2u+A_2)^2}$ \\ \hline \end{tabular} \caption{Summary of (\ref{RF3 continuous}) when up to one cone can be cut per hyperplane, where $ u, u'$ are independent $U(0,1)$ random variables.} \label{table::redness factors for 2d cones} \end{table} Using the computations given in Table \ref{table::redness factors for 2d cones} and assuming $j$ hyperplanes separate (i.e., case (i) described above), $k_{1,\theta_1}$ hyperplanes cut $G_1$ in $\theta_1$ (case (iv) above), $k_{1,\theta_2}$ hyperplanes cut $G_1$ in $\theta_2$ (case (v) above), $k_2$ hyperplanes cut $G_2$ (case (iii) above), and $m-j-k_{1,\theta_1}-k_{1,\theta_2}-k_2$ hyperplanes do not separate (case (ii) above), we compute the membership index parameters defined in \eqref{RF3 continuous} as: \begin{align}\label{compRF1} \sum_{i=1}^m r(\ell,i,t_i^\star,1) &= j + (m-j-k_{1,\theta_1}-k_{1,\theta_2}-k_2)\frac{A_1|A_1-A_2|}{(A_1+A_2)^2} + k_{1,\theta_1} \notag \\ &\quad + \sum_{h=1}^{k_{1,\theta_2}}\frac{(\theta_1+\theta_2 u_h)|\theta_1+\theta_2 u_h-A_2|}{(\theta_1+\theta_2 u_h+A_2)^2} + \sum_{h=1}^{k_2}\frac{A_1|A_1-A_2 u_h'|}{(A_1+A_2 u_h')^2} \notag\\ &= j + k_{1,\theta_1} + \sum_{h=1}^{k_{1,\theta_2}}\frac{(\theta_1+\theta_2 u_h)|\theta_1+\theta_2 u_h-A_1|}{(\theta_1+\theta_2 u_h+A_1)^2} + \sum_{h=1}^{k_2}\frac{A_1|A_1-A_1 u_h'|}{(A_1+A_1 u_h')^2} \end{align} and \begin{align}\label{compRF2} \sum_{i=1}^m r(\ell,i,t_i^\star,2) &= (m-j-k_{1,\theta_1}-k_{1,\theta_2}-k_2)\frac{A_2|A_1-A_2|}{(A_1+A_2)^2} \notag \\ &\quad + \sum_{h=1}^{k_{1,\theta_2}}\frac{A_2|\theta_1+\theta_2 u_h-A_2|}{(\theta_1+\theta_2 u_h+A_2)^2} + \sum_{h=1}^{k_2}\frac{A_2 u_h'|A_1-A_2 u_h'|}{(A_1+A_2 u_h')^2}\notag\\ &= \sum_{h=1}^{k_{1,\theta_2}}\frac{A_1|\theta_1+\theta_2 u_h-A_1|}{(\theta_1+\theta_2 u_h+A_1)^2} + \sum_{h=1}^{k_2}\frac{A_1 u_h'|A_1-A_1 u_h'|}{(A_1+A_1 u_h')^2}, \end{align} where in both cases we have simplified using the assumption $A_1 = A_2$. Thus, the conditional probability in (\ref{classification with cutting continuous}) can be expressed as: \begin{align} &\mathbb{P}\left[j + k_{1,\theta_1} + \sum_{h=1}^{k_{1,\theta_2}}\frac{|\theta_1+\theta_2 u_h-A_1|(\theta_1+\theta_2 u_h-A_1)}{(\theta_1+\theta_2 u_h+A_1)^2} + \sum_{h=1}^{k_2}\frac{|A_1-A_1 u_h'|(A_1-A_1 u_h')}{(A_1+A_1 u_h')^2} > 0\right] \label{conditional probability continuous RF3}, \end{align} where it is implied that this probability is conditioned on the hyperplane configuration as in \eqref{classification with cutting continuous}.
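This conditional probability is exactly the quantity that is simulated in Figure \ref{theorem figure}; a sketch of such a simulation (illustrative code with hypothetical argument names) is:
\begin{verbatim}
import numpy as np

def conditional_prob(j, k1t1, k1t2, k2, theta1, theta2, A1,
                     trials=1000, seed=1):
    # Estimate the probability in (conditional probability continuous
    # RF3) by sampling the uniform variables u_h and u_h'.
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        s = theta1 + theta2 * rng.uniform(size=k1t2)
        up = A1 * rng.uniform(size=k2)
        lhs = (j + k1t1
               + np.sum(np.abs(s - A1) * (s - A1) / (s + A1)**2)
               + np.sum(np.abs(A1 - up) * (A1 - up) / (A1 + up)**2))
        hits += (lhs > 0)
    return hits / trials

deg = np.pi / 180
print(conditional_prob(j=1, k1t1=0, k1t2=4, k2=2,
                       theta1=7.5*deg, theta2=7.5*deg, A1=15*deg))
\end{verbatim}
Averaging such estimates against the multinomial weights (\ref{multinomial probability}) reproduces the solid curves in Figure \ref{theorem figure}.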
Once the probability (\ref{conditional probability continuous RF3}) is known, we can calculate the full classification probability (\ref{classification with cutting continuous}). Since, by assumption, $\theta_1 + \theta_2 = A_1$, we have $\theta_1+\theta_2 u-A_1 \leq 0 $ and $A_1 -A_1 u' \geq 0$. Thus, (\ref{conditional probability continuous RF3}) simplifies to \begin{align} &\mathbb{P}\left[j + k_{1,\theta_1} - \sum_{h=1}^{k_{1,\theta_2}}\frac{(\theta_1+\theta_2 u_h-A_1)^2}{(\theta_1+\theta_2 u_h+A_1)^2} + \sum_{h=1}^{k_2}\frac{(A_1-A_1 u_h')^2}{(A_1+A_1 u_h')^2} > 0\right]\notag\\ &= \mathbb{P}\left[j + k_{1,\theta_1} - \sum_{h=1}^{k_{1,\theta_2}}\frac{(\theta_2 u_h)^2}{(\theta_1+\theta_2 u_h+A_1)^2} + \sum_{h=1}^{k_2}\frac{(A_1 u_h')^2}{(A_1+A_1 u_h')^2} > 0\right]. \label{conditional probability continuous RF3 A1=A2} \end{align} Next, using that $\theta_2 u \geq 0$ and $A_1 u' \leq A_1$ for the random variables in the denominators, we can bound (\ref{conditional probability continuous RF3 A1=A2}) from below by \begin{align} (\ref{conditional probability continuous RF3 A1=A2}) \geq \mathbb{P}\left[j + k_{1,\theta_1} - \sum_{h=1}^{k_{1,\theta_2}}\frac{(\theta_2 u_h)^2}{(\theta_1+A_1)^2} + \sum_{h=1}^{k_2}\frac{(A_1 u_h')^2}{(2A_1)^2} > 0\right]. \label{RF3 first bound} \end{align} Letting $\theta>0$ be a parameter to be chosen later (and abusing notation slightly to allow $u$, $u'$ to be new independent uniform random variables), we can rewrite the probability on the right-hand side of (\ref{RF3 first bound}) as \begin{align} P := \mathbb{P}&\left[j + k_{1,\theta_1} - \frac{1}{(\theta_1+A_1)^2}\sum_{h=1}^{k_{1,\theta_2}}(\theta_2u_h)^2 + \frac{1}{(2A_1)^2} \sum_{h=1}^{k_2} (A_1u_h')^2> 0\right] \notag \\ &= \mathbb{P}\left[ \frac{1}{(\theta_1+A_1)^2}\sum_{h=1}^{k_{1,\theta_2}}(\theta_2 u_h)^2 - \frac{1}{(2A_1)^2} \sum_{h=1}^{k_2} (A_1u_h')^2< j + k_{1,\theta_1}\right] \notag \\ &= 1- \mathbb{P}\left[ \frac{1}{(\theta_1+A_1)^2}\sum_{h=1}^{k_{1,\theta_2}}(\theta_2 u_h)^2 - \frac{1}{(2A_1)^2} \sum_{h=1}^{k_2} (A_1 u_h')^2 \geq j + k_{1,\theta_1}\right] \notag \\ &= 1- \mathbb{P}\left[ e^{\theta \left(\frac{1}{(\theta_1+A_1)^2}\sum_{h=1}^{k_{1,\theta_2}}(\theta_2 u_h)^2 - \frac{1}{(2A_1)^2} \sum_{h=1}^{k_2} (A_1 u_h')^2\right)} \geq e^{\theta(j + k_{1,\theta_1})}\right] \notag \\ &\geq 1-e^{-\theta(j + k_{1,\theta_1})} \mathbb{E}\left[ e^{\theta \left(\frac{1}{(\theta_1+A_1)^2}\sum_{h=1}^{k_{1,\theta_2}}(\theta_2 u_h)^2 - \frac{1}{(2A_1)^2} \sum_{h=1}^{k_2} (A_1 u_h')^2\right)} \right] \label{Markov step}\end{align} where the final equality uses the monotonicity of $x \mapsto e^{\theta x}$ for $\theta > 0$, and (\ref{Markov step}) follows from Markov's inequality. Continuing and using the independence of each hyperplane, we now have \begin{align} P &\geq 1-e^{-\theta(j + k_{1,\theta_1})} \mathbb{E}\left[ \prod_{h=1}^{k_{1,\theta_2}} e^{ \frac{\theta}{(\theta_1+A_1)^2}(\theta_2 u_h)^2}\right] \mathbb{E}\left[ \prod_{h=1}^{k_2} e^{- \frac{\theta}{(2A_1)^2} (A_1 u_h')^2} \right] \notag \\ &= 1-e^{-\theta(j + k_{1,\theta_1})}\prod_{h=1}^{k_{1,\theta_2}} \mathbb{E}\left[ e^{ \frac{\theta}{(\theta_1+A_1)^2}(\theta_2 u_h)^2}\right] \prod_{h=1}^{k_2} \mathbb{E}\left[ e^{- \frac{\theta}{(2A_1)^2} (A_1 u_h')^2} \right] \notag \\ &= 1-e^{-\theta(j + k_{1,\theta_1})}\left(\mathbb{E}\left[ e^{ \frac{\theta}{(\theta_1+A_1)^2}(\theta_2 u)^2}\right] \right)^{k_{1,\theta_2}} \left( \mathbb{E}\left[ e^{- \frac{\theta}{(2A_1)^2} (A_1 u')^2} \right]\right)^{k_2}.
\label{last equality} \end{align} Next, one readily computes the following (we include the computation in the appendix, Section \ref{app:erf}, for completeness): \begin{align}\label{helloerf} \mathbb{E}\left[ e^{ \frac{\theta}{(\theta_1+A_1)^2}(\theta_2 u)^2}\right] = \frac{\sqrt{\pi}\, \mbox{erfi}(\frac{\theta_2}{A_1+\theta_1}\sqrt{\theta})}{\frac{2\theta_2}{A_1+\theta_1}\sqrt{\theta}} \end{align} and \begin{align}\label{helloerf2} \mathbb{E}\left[ e^{- \frac{\theta}{(2A_1)^2} (A_1 u')^2} \right] = \frac{\sqrt{\pi}\,\mbox{erf}(\frac{1}{2}\sqrt{\theta})}{\sqrt{\theta}}, \end{align} where $\mbox{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} \,\, dt$ is the error function and $\mbox{erfi}(x) = -i\mbox{erf}(ix)= \frac{2}{\sqrt{\pi}} \int_0^x e^{t^2} \,\, dt$ is the imaginary error function. Therefore, \begin{align} &P \geq 1-e^{-\theta(j + k_{1,\theta_1})}\left(\frac{\sqrt{\pi}\, \mbox{erfi}(\frac{\theta_2}{A_1+\theta_1}\sqrt{\theta})}{\frac{2\theta_2}{A_1+\theta_1}\sqrt{\theta}} \right)^{k_{1,\theta_2}} \left( \frac{\sqrt{\pi}\,\mbox{erf}(\frac{1}{2}\sqrt{\theta})}{\sqrt{\theta}}\right)^{k_2}. \label{RF3 bound erf} \end{align} Now, we note the following trivial upper bounds on the $\mbox{erf}$ and $\mbox{erfi}$ functions. \begin{align} \mbox{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,\,dt \leq \frac{2}{\sqrt{\pi}} \int_0^x 1 \,\,dt = \frac{2x}{\sqrt{\pi}} \\ \mbox{erfi}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{t^2} \,\,dt \leq \frac{2}{\sqrt{\pi}} e^{x^2} \int_0^x 1 \,\, dt = \frac{2x}{\sqrt{\pi}} e^{x^2}. \end{align} These give $\frac{\sqrt{\pi}\,\mbox{erf}(\frac{1}{2}\sqrt{\theta})}{\sqrt{\theta}} \leq 1$ and $\frac{\sqrt{\pi}\, \mbox{erfi}(\frac{\theta_2}{A_1+\theta_1}\sqrt{\theta})}{\frac{2\theta_2}{A_1+\theta_1}\sqrt{\theta}} \leq e^{\theta(\frac{\theta_2}{A_1+\theta_1})^2}$, so applying them to \eqref{RF3 bound erf} gives \begin{align} P &\geq 1- e^{-\theta(j+k_{1,\theta_1})}\, e^{k_{1,\theta_2}\theta(\frac{\theta_2}{A_1+\theta_1})^2}\notag\\ &= 1- e^{\theta(\beta-\gamma)} =: f(\theta) \label{RF3 bound erf trivial} \end{align} where $$\beta = k_{1,\theta_2}\left(\frac{\theta_2}{A_1+\theta_1}\right)^2\quad \text{and} \quad\gamma = j+k_{1,\theta_1}.$$ Recall that we wish to choose $\theta>0$ such that $f(\theta)$ in \eqref{RF3 bound erf trivial} is maximized. If $\beta-\gamma <0$, then taking $\theta\rightarrow\infty$ maximizes $f(\theta)$ as $f(\theta)\rightarrow 1$. If $\beta-\gamma\geq 0$, then $f(\theta) \leq 0$ for every $\theta > 0$, and the bound is vacuous. But, since we are bounding a probability, we always have the trivial lower bound of zero. So, when $\beta-\gamma \geq 0$ we can use the simple bound $P \geq 0$.
Therefore, the probability of interest \eqref{classification with cutting continuous} can be bounded as follows (note the constraints on the summation indices): \begin{align} \mathbb{P}[\widehat{b}_x = 1] &= \sum_{\substack{j,k_{1,\theta_1}, k_{1,\theta_2},k_2 \\ j+k_{1,\theta_1}+k_{1,\theta_2}+k_2\leq m}} \mathbb{P}\left[\sum_{i=1}^m r(\ell,i,t^\star_i,1) > \sum_{i=1}^m r(\ell,i,t^\star_i,2) \;\middle|\; E( j,k_{1,\theta_1},k_{1,\theta_2},k_2 ) \right]\notag\\ &\qquad\qquad\qquad\qquad\times \mathbb{P}\left[ E( j,k_{1,\theta_1},k_{1,\theta_2},k_2 ) \right]\\ &\geq \sum_{\substack{j,k_{1,\theta_1}, k_{1,\theta_2},k_2 \\ j+k_{1,\theta_1}+k_{1,\theta_2}+k_2\leq m, \\ \beta-\gamma<0}} \binom{m}{j,k_{1,\theta_1},k_{1,\theta_2},k_2,m-j-k_{1,\theta_1}-k_{1,\theta_2}-k_2} \left(\frac{A_{12}}{\pi}\right)^j \left(\frac{\theta_1}{\pi} \right)^{k_{1,\theta_1}} \notag\\ &\quad\quad \times \left(\frac{\theta_2}{\pi} \right)^{k_{1,\theta_2}} \left(\frac{A_2}{\pi} \right)^{k_2} \left(\frac{\pi-A_1-A_2-A_{12}}{\pi}\right)^{m-j-k_{1,\theta_1}-k_{1,\theta_2}-k_2}.\label{choose theta bound} \end{align} The condition $\beta-\gamma < 0$ is equivalent to $k_{1,\theta_2}(\frac{\theta_2}{A_1+\theta_1})^2 - ( j+k_{1,\theta_1}) < 0$, that is, $ k_{1,\theta_2}(\frac{\theta_2}{A_1+\theta_1})^2 < j+k_{1,\theta_1}$. Assuming $\theta_1=\theta_2$ simplifies this condition to depend \textit{only} on the hyperplane configuration (and not $A_1$, $\theta_1$, and $\theta_2$) since $\frac{\theta_2}{A_1+\theta_1} = \frac{\theta_2}{3\theta_2} = \frac{1}{3}$. Thus, the condition $\beta-\gamma <0$ reduces to the condition $k_{1,\theta_2} < 9(j+k_{1,\theta_1})$ and (\ref{choose theta bound}) then simplifies to \begin{align} &\phantom{=}\;\; \sum_{\substack{j+k_{1,\theta_1}+k_{1,\theta_2}+k_2\leq m, \\ k_{1,\theta_2} < 9(j+k_{1,\theta_1})}} \binom{m}{j,k_{1,\theta_1},k_{1,\theta_2},k_2,m-j-k_{1,\theta_1}-k_{1,\theta_2}-k_2} \left(\frac{A_{12}}{\pi}\right)^j \left(\frac{\theta_1}{\pi} \right)^{k_{1,\theta_1}+k_{1,\theta_2}} \notag\\ &\quad\quad \times \left(\frac{A_2}{\pi} \right)^{k_2} \left(\frac{\pi-2A_1-A_{12}}{\pi}\right)^{m-j-k_{1,\theta_1}-k_{1,\theta_2}-k_2} \notag\\ &= \sum_{\substack{j+k_{1,\theta_1}+k_{1,\theta_2}+k_2 + k = m, \\ k_{1,\theta_2} < 9(j+k_{1,\theta_1})}} \binom{m}{j,k_{1,\theta_1},k_{1,\theta_2},k_2,k} \left(\frac{A_{12}}{\pi}\right)^j \left(\frac{\theta_1}{\pi} \right)^{k_{1,\theta_1}+k_{1,\theta_2}} \left(\frac{A_2}{\pi} \right)^{k_2} \left(\frac{\pi-2A_1-A_{12}}{\pi}\right)^k \notag\\ &= \sum_{\substack{j+k_{1,\theta_1}+k_{1,\theta_2}+k_2 + k = m, \\ k_{1,\theta_2} < 9(j+k_{1,\theta_1})}}\binom{m}{j,k_{1,\theta_1},k_{1,\theta_2},k_2,k} \left(\frac{A_{12}}{\pi}\right)^j \left(\frac{A_1}{2\pi} \right)^{k_{1,\theta_1}+k_{1,\theta_2}} \left(\frac{A_1}{\pi} \right)^{k_2} \left(\frac{\pi-2A_1-A_{12}}{\pi}\right)^k, \label{RF3 bound multinomial} \end{align} where we have introduced $k$ to denote the number of hyperplanes that neither separate the classes nor cut through either of them, and simplified using the assumptions that $\theta_1 = \frac{A_1}{2}$ and $A_1 = A_2$. Note that if we did not have the condition $k_{1,\theta_2} < 9(j+k_{1,\theta_1})$ in the sum (\ref{RF3 bound multinomial}) (that is, if we summed over all terms), the quantity would sum to 1 (as can be seen from the multinomial theorem). Finally, this means (\ref{RF3 bound multinomial}) is equivalent to (\ref{RF3 bound multinomial complement}), thereby completing the proof.
\end{proof} \subsubsection{Proof of Corollary \ref{Corollary 1}} \begin{proof} We can bound (\ref{RF3 bound multinomial complement}) from below by bounding the excluded terms in the sum (i.e., those that satisfy $k_{1,\theta_2} \geq 9(j+k_{1,\theta_1})$) from above. One approach to this is to count the number of terms satisfying $k_{1,\theta_2} \geq 9(j+k_{1,\theta_1})$ and bound them by their maximum. Using basic combinatorics (see the appendix, Section \ref{app:comb}), one finds that the number of terms satisfying $k_{1,\theta_2} \geq 9(j+k_{1,\theta_1})$ is given by \begin{align} \label{multinomial count} W_1 = \frac{1}{12} \left(\left\lfloor \frac{m}{10} \right\rfloor + 1\right) \left(\left\lfloor \frac{m}{10} \right\rfloor + 2\right) \left(150 \left\lfloor \frac{m}{10} \right\rfloor^2 - 10(4m + 1)\left\lfloor \frac{m}{10} \right\rfloor +3(m^2 + 3m + 2)\right) \sim m^4. \end{align} Then, the quantity (\ref{RF3 bound multinomial complement}) can be bounded below by \begin{align} &1 - W_1 \max\left( \binom{m}{j,k_{1,\theta_1},k_{1,\theta_2},k_2,k} \left(\frac{A_{12}}{\pi}\right)^j \left(\frac{A_1}{2\pi} \right)^{k_{1,\theta_1}+k_{1,\theta_2}} \left(\frac{A_1}{\pi} \right)^{k_2} \left(\frac{\pi-2A_1-A_{12}}{\pi}\right)^k \right) \notag\\ &= 1 - W_1 \max\left( \binom{m}{j,k_{1,\theta_1},k_{1,\theta_2},k_2,k} \left(\frac{1}{2} \right)^{k_{1,\theta_1}+k_{1,\theta_2}} \left(\frac{A_{12}}{\pi}\right)^j \left(\frac{A_1}{\pi} \right)^{k_{1,\theta_1}+k_{1,\theta_2}+k_2} \left(\frac{\pi-2A_1-A_{12}}{\pi}\right)^k \right), \label{RF3 bound multinomial estimate} \end{align} where the maximum is taken over all $j,k_{1,\theta_1},k_{1,\theta_2}, k_2, k = 0,\dots,m$ such that $k_{1,\theta_2} \geq 9(j+k_{1,\theta_1})$. Ignoring the constraint $k_{1,\theta_2} \geq 9(j+k_{1,\theta_1})$, we can upper bound the multinomial coefficient. That is, assuming $\frac{m}{5} \in \mathbb{Z}$ for simplicity and applying Stirling's approximation for the factorial $n! \sim \sqrt{2\pi n} (\frac{n}{e})^n$, we get \begin{align} \binom{m}{j,k_{1,\theta_1},k_{1,\theta_2},k_2,k} &\leq \frac{m!}{[(\frac{m}{5})!]^5} \notag\\ &\sim \frac{\sqrt{2\pi m} (\frac{m}{e})^m}{[\sqrt{2\pi \frac{m}{5}} (\frac{m}{5e})^{m/5}]^5} \notag\\ &= \frac{5^{m + 5/2}}{(2\pi m)^2}. \label{multinomial bound} \end{align} Since we are assuming $A_{12}$ is at least as large as both $A_1$ and $\pi-2A_1-A_{12}$, the strategy is to take $j$ to be as large as possible while satisfying $k_{1,\theta_2} \geq 9j$ and $j + k_{1,\theta_2} = m$. Since $k_{1,\theta_2} \geq 9j$, we have $j + 9j \leq m$, which implies $j \leq \frac{m}{10}$. So, we take $j= \frac{m}{10}$, $k_{1,\theta_2} = \frac{9m}{10}$, and $k_{1,\theta_1} = k_2 = k = 0$.
Then \begin{align} \left(\frac{1}{2} \right)^{k_{1,\theta_1}+k_{1,\theta_2}} \left(\frac{A_{12}}{\pi}\right)^j \left(\frac{A_1}{\pi} \right)^{k_{1,\theta_1}+k_{1,\theta_2}+k_2} \left(\frac{\pi-2A_1-A_{12}}{\pi}\right)^k &\leq \left(\frac{1}{2} \right)^{9m/10} \left(\frac{A_{12}}{\pi} \right)^{m/10} \left(\frac{A_1}{\pi} \right)^{9m/10} \notag\\ &= \left(\frac{1}{2^9} \frac{A_{12}}{\pi} \left(\frac{A_1}{\pi}\right)^9 \right)^{m/10}.\label{rest bound} \end{align} Combining (\ref{RF3 bound multinomial estimate}) with the bounds given in (\ref{multinomial bound}) and (\ref{rest bound}), we have \begin{align} \eqref{RF3 bound multinomial complement} &\geq 1 - W_1 \frac{5^{m + 5/2}}{(2\pi m)^2} \left(\frac{1}{2^9} \frac{A_{12}}{\pi} \left(\frac{A_1}{\pi}\right)^9 \right)^{m/10} \notag\\ &\sim 1- m^4 \frac{5^{m + 5/2}}{(2\pi m)^2} \left(\frac{1}{2^9} \frac{A_{12}}{\pi} \left(\frac{A_1}{\pi}\right)^9 \right)^{m/10} \notag\\ &= 1- m^2 \frac{5^{5/2}}{(2\pi)^2} \left(5^{10} \frac{1}{2^9} \frac{A_{12}}{\pi} \left(\frac{A_1}{\pi}\right)^9 \right)^{m/10}. \end{align} For the above to tend to 1 as $m\rightarrow \infty$, we need $ \frac{5^{10}}{2^9} \frac{A_{12}}{\pi} \left(\frac{A_1}{\pi}\right)^9 < 1$. This is equivalent to $A_{12} \left(\frac{A_1}{2}\right)^9 < \frac{\pi^{10}}{5^{10}}$, that is, $A_{12} \theta_1^9 < \left(\frac{\pi}{5}\right)^{10} = \frac{\pi}{5}\left(\frac{\pi}{5}\right)^{9}$. Note that if $\theta_1 = \frac{\pi}{5}$, then $A_1 = A_2 = 2\theta_1 = \frac{2\pi}{5}$, and $A_{12}$ could then be at most $\frac{\pi}{5}$. This is impossible, however, because we have assumed $A_{12} \geq A_1$. Thus, we must have $\theta_1 < \frac{\pi}{5}$. In fact, $\theta_1 = \frac{\pi}{6}$ is the largest possible, in which case $A_{12} = A_1 = A_2 = \frac{\pi}{3}$. If $ \theta_1 = \frac{\pi}{6}$, then $A_{12} \theta_1^9 < \frac{\pi}{5}\left(\frac{\pi}{5}\right)^{9}$ becomes $A_{12} < \frac{\pi}{5} \left(\frac{6}{5} \right)^9 \approx 3.24$. Therefore, since we are already assuming $A_{12} + 2A_1 \leq \pi$, this is essentially no further restriction on $A_{12}$, and the same would be true for all $\theta_1 \leq \frac{\pi}{6}$. This completes the proof. \end{proof} \subsubsection{Proof of Corollary \ref{Corollary 2}} \begin{proof} Consider (\ref{RF3 bound multinomial complement}) and set $j' = j + k_{1,\theta_1}$ and $r = k_2 + k$. Then (\ref{RF3 bound multinomial complement}) can be rewritten as \begin{align}\label{equivalent probability} 1 - \underset{j'+k_{1,\theta_2}+r = m, \,\, k_{1,\theta_2} \geq 9j'}{\sum_{j'=0}^{m} \sum_{k_{1,\theta_2}=0}^m \sum_{r=0}^{m}} \binom{m}{k_{1,\theta_2},j',r} \left(\frac{A_{12}+\frac{A_1}{2}}{\pi} \right)^{j'} \left(\frac{A_1}{2\pi} \right)^{k_{1,\theta_2}} \left(\frac{\pi-A_1-A_{12}}{\pi} \right)^{r}. \end{align} Note that multinomial coefficients are maximized when the parameters all attain the same value. Thus, the multinomial term above is maximized when $ k_{1,\theta_2}$, $j'$ and $r$ are all as close to one another as possible.
Thus, given the additional constraint that $k_{1,\theta_2} \geq 9j'$, the multinomial term is maximized when $k_{1,\theta_2}=\frac{9m}{19}$, $j' = \frac{m}{19}$, and $r = \frac{9m}{19}$ (with ceilings/floors as necessary if $m$ is not a multiple of 19; see the appendix, Section \ref{app:facts}, for a quick explanation), which means \begin{align} \binom{m}{k_{1,\theta_2},j',r} &\leq \frac{m!}{(\frac{9m}{19})!(\frac{m}{19})!(\frac{9m}{19})!} \label{facts} \\ &\sim \frac{\sqrt{2\pi m}(\frac{m}{e})^m}{2\pi \frac{9m}{19} (\frac{9m}{19e})^{18m/19} \sqrt{2\pi \frac{m}{19}}(\frac{m}{19e})^{m/19} }\label{Stirling step} \\ &= \frac{19\sqrt{19}}{18\pi m} \left( \left(\frac{19}{9}\right)^{18/19}\, 19^{1/19} \right)^m \notag \\ &\approx \frac{19\sqrt{19}}{18\pi m}\, 2.37^m, \label{trinomial bound} \end{align} where (\ref{Stirling step}) follows from Stirling's approximation for the factorial (and we use the notation $\sim$ to denote asymptotic equivalence, i.e., that two quantities have a ratio that tends to 1 as the parameter size grows). Now assume $A_{12} + \frac{3}{4}A_1 \leq \frac{\pi}{2}$, which implies $\pi-A_1-A_{12} \geq A_{12} + \frac{A_1}{2}$. Note also that $\pi-A_1 - A_{12} \geq A_1$ since it is assumed that $\pi-2A_1 - A_{12}\geq 0$. Therefore, we can lower bound (\ref{equivalent probability}) by \begin{align}\label{corollary 2 bound} 1 - W_2\frac{19\sqrt{19}}{18\pi m}\, 2.37^m \left(\frac{\pi-A_1-A_{12}}{\pi} \right)^m, \end{align} where $W_2$ is the number of terms in the summation in (\ref{equivalent probability}), and is given by \begin{align} W_2= \frac{1}{6} \left(\left\lfloor \frac{m}{10} \right\rfloor + 1\right) \left(100 \left\lfloor \frac{m}{10} \right\rfloor^2 + (5 - 30m)\left\lfloor \frac{m}{10} \right\rfloor +3(m^2 + 3m + 2)\right) \sim m^3. \end{align} Thus, (\ref{corollary 2 bound}) goes to 1 as $m\rightarrow \infty$ when $2.37\left( \frac{\pi-A_1-A_{12}}{\pi}\right) <1 $, which holds if $A_1+A_{12} > 0.58\pi$. \end{proof} \section{Discussion and Conclusion}\label{sec::conclude} In this work, we have presented a supervised classification algorithm that operates on binary, or one-bit, data. Along with encouraging numerical experiments, we have also included a theoretical analysis for a simple case. We believe our framework and analysis approach are relevant to analyzing similar, layered-type algorithms. Future directions of this work include the use of dithers for more complicated data geometries, as well as a generalized theory for high dimensional data belonging to many classes and utilizing multiple layers within the algorithm.
\section{Introduction} A basic notion in mesoscopic physics is that the measurement of a voltage at some point in the sample is an invasive act, which may destroy the phase coherence throughout the whole sample. B\"uttiker introduced a simple but realistic model for a voltage probe,\cite{Buettiker1} and used it to investigate the transition from coherent to sequential tunneling through a double-barrier junction, induced by the coupling to a voltage lead of the region between the barriers. The mechanism by which the measurement of a voltage destroys phase coherence is that electrons which enter the voltage lead are reinjected into the system without any phase relationship. B\"uttiker's model has been applied successfully to a variety of physical situations,\cite{Buettiker3,BSDV,KLDV,KSEWI,Hershfield,Mello1,Mello2,BVH} including diffusive transport in a disordered wire, ballistic transport through quantum point contacts, and edge-channel transport in the quantum Hall effect. In order to analyze their experimental data, Marcus et al.\cite{MWHG} proposed to use B\"uttiker's model to describe inelastic processes in ballistic and chaotic cavities (``quantum dots''). Here we present a detailed analysis of the effect of a voltage probe on the entire conductance distribution of such a system. Several recent theoretical papers dealt with the phase-coherent conduction through a ballistic chaotic cavity, either by means of a semiclassical approach,\cite{BarangerJalabertStone} or by means of the supersymmetry method,\cite{PrigodinEfetovIida,Pluhar,Mucciolo} or by random-matrix theory.\cite{BarangerMello,JalabertPichardBeenakker,BrouwerBeenakker} Quantum interference has a striking effect on the conductance $G$ of the quantum dot if it is coupled to source and drain reservoirs by means of two ballistic point contacts with a quantized conductance of $2e^2/h$. Classically, one would expect a conductance distribution $P(G)$ which is peaked at $G = e^2/h$, since half of the electrons injected by the source are transmitted on average to the drain. Instead, $P(G)$ was found to be\cite{BarangerMello,JalabertPichardBeenakker} \begin{equation} P(G) \propto G^{-1 + \beta/2}, \ \ 0 \le G \le 2e^2/h, \label{BallisticDistribution} \end{equation} where $\beta \in \{1,2,4\}$ is the symmetry index of the ensemble of scattering matrices ($\beta = 1$ or $2$ in the absence or presence of a time-reversal-symmetry breaking magnetic field; $\beta = 4$ in zero magnetic field with strong spin-orbit scattering). Depending on $\beta$, the conductance distribution is either uniform, peaked at zero or peaked at $2e^2/h$. As we will show, strong coupling of the quantum dot to a voltage lead causes a crossover from Eq.\ (\ref{BallisticDistribution}) to a Gaussian, peaked at $e^2/h$. A small displacement of the peak of the Gaussian for $\beta=1$, and a $\beta$-dependent width of the peak are the remnants of the weak localization and mesoscopic fluctuation effects which are so pronounced in the case of complete phase coherence.\cite{BarangerMello,JalabertPichardBeenakker} A strong coupling of the voltage probe is achieved by means of a wide ballistic lead with many scattering channels (Sec.\ \ref{sec4}). If the voltage lead contains a single channel, we may reduce the coupling to zero by means of a tunnel barrier in this lead (Sec.\ \ref{sec3}). Together, these two sections cover the full range of coupling strengths. 
In the next section we first formulate the problem in some more detail, and discuss the random-matrix method used to compute the conductance distribution. \section{Formulation of the problem} We consider a ballistic and chaotic cavity (quantum dot) coupled by two leads to source and drain reservoirs at voltages $V_1$ and $V_2$. A current $I = I_1 = -I_2$ is passed from source to drain via leads $1$ and $2$. A third lead is attached to the quantum dot and connected to a third reservoir at voltage $V_3$. This third lead is a voltage probe, which means that $V_3$ is adjusted in such a way, that no current is drawn ($I_3 = 0$). The coupling strength of the voltage probe is determined by the number $N$ of scattering channels (propagating transverse modes at the Fermi-level) in lead $3$ and by the transparency of a tunnel barrier in this lead. We assume that each of the $N$ modes has the same transmission probability $\Gamma$ through the tunnel barrier. We restrict ourselves to the case that the current-carrying leads $1$ and $2$ are ideal (no tunnel barrier) and single-channel (a single propagating transverse mode). This case maximizes the quantum-interference effects on the conductance. We assume that the capacitance of the quantum dot is sufficiently large that we may neglect the Coulomb blockade, and we will regard the electrons to be non-interacting. The scattering-matrix $S$ of the system has dimension $M = N + 2$ and can be written as \begin{equation} S = \left( \begin{array}{ccc} r_{11} & t_{12} & t_{13} \\ t_{21} & r_{22} & t_{23} \\ t_{31} & t_{32} & r_{33} \end{array} \right) \end{equation} in terms of reflection and transmission matrices $r_{ii}$ and $t_{ij}$. The currents and voltages satisfy B\"uttiker's relations\cite{Buettiker2} \begin{equation} {h \over 2 e^2} I_k = \left( N_k - R_{kk} \right) V_k - \sum_{l \neq k} T_{kl} V_l,\ k = 1,2,3, \label{Buetteq} \end{equation} where $R_{kk} = \mbox{tr}\, r_{kk}^{\phantom \dagger} r_{kk}^{\dagger}$, $T_{kl} = \mbox{tr}\, t_{kl}^{\phantom \dagger} t_{kl}^{\dagger}$, and $N_k$ is the number of modes in lead $k$. The two-terminal conductance $G = I/(V_1 - V_2)$ follows from Eq.\ (\ref{Buetteq}) with $I_1 = -I_2 = I$, $I_3 = 0$: \begin{equation} G = {2e^2 \over h} \left( T_{12} + {T_{13} T_{32} \over T_{31} + T_{32}} \right). \label{Conductance} \end{equation} {}From now on, we will measure $G$ in units of $2 e^2/h$. An ensemble of quantum dots is constructed by considering small variations in shape or Fermi energy. To compute the probability distribution $P(G)$ of the conductance in this ensemble we need to know the distribution of the elements of the scattering matrix. Our basic assumption, following Refs.\ \ref{BarangerMello} and \ref{JalabertPichardBeenakker}, is that for ideal leads the scattering matrix is uniformly distributed in the space of unitary $M \times M$ matrices. This is the circular ensemble of random-matrix theory.\cite{Dyson,Mehta} The distribution $P_0(S)$ for the case $\Gamma = 1$ is therefore simply \begin{equation} P_0(S) = {1 \over V}, \label{circ} \end{equation} where $V = \int d\mu$ is the volume of the matrix space with respect to the invariant measure $d\mu$. Both $V$ and $d\mu$ depend on the symmetry index $\beta \in \{1,2,4\}$, which specifies whether $S$ is unitary ($\beta = 2$), unitary symmetric ($\beta = 1$), or unitary self-dual ($\beta = 4$). A characteristic feature of the circular ensemble is that the average $\bar{S}$ of the scattering matrix vanishes. 
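The circular-ensemble assumption makes the model directly amenable to numerical sampling: one draws $S$ uniformly with respect to $d\mu$ and evaluates Eq.\ (\ref{Conductance}). A minimal sketch for ideal leads is given below (illustrative code; the CUE is sampled via QR factorization of a complex Ginibre matrix with the standard phase correction, and the COE via $S = UU^{\rm T}$ with $U$ Haar distributed):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(M):
    # CUE (beta = 2) sample: QR of a complex Ginibre matrix, phases fixed
    Z = (rng.standard_normal((M, M))
         + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

def conductance(S):
    # Eq. (4) in units of 2e^2/h; channels 0,1 = leads 1,2; rest = lead 3
    T12 = abs(S[0, 1])**2
    T13 = np.sum(np.abs(S[0, 2:])**2)
    T31 = np.sum(np.abs(S[2:, 0])**2)
    T32 = np.sum(np.abs(S[2:, 1])**2)
    return T12 + T13 * T32 / (T31 + T32)

N = 1                                  # ideal single-channel voltage lead
U = [haar_unitary(N + 2) for _ in range(10000)]
print(np.mean([conductance(u) for u in U]),        # CUE: -> 1/2
      np.mean([conductance(u @ u.T) for u in U]))  # COE: -> 1/3
\end{verbatim}
The sample means agree with the ensemble averages $\langle G \rangle$ worked out in Sec.\ \ref{sec3}, and the sample mean of $S$ itself is consistent with $\bar{S} = 0$.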
For non-ideal leads this is no longer the case, and Eq.\ (\ref{circ}) therefore has to be modified if $\Gamma \neq 1$. In Ref.\ \ref{BrouwerBeenakker} we showed, for a quantum dot with two non-ideal leads, how the probability distribution $P(S)$ of the scattering matrix can be computed by expressing the elements of the full scattering matrix $S$ (quantum dot plus tunnel barriers) in terms of the scattering matrix $S_0$ of the quantum dot alone (with ideal leads). A more general analysis along these lines\cite{Brouwer} shows that for an arbitrary number of leads the distribution takes the form of a Poisson kernel,\cite{Hua,MelloPereyraSeligman} \begin{mathletters} \label{Poisson} \begin{equation} P(S) = c\, |\det(1 - \bar{S}^{\dagger} S)|^{-\beta M -2 + \beta}, \label{Poisson1} \end{equation} with normalization constant \begin{equation} c = {1 \over V} [\det(1 - \bar{S}^{\dagger} \bar{S})]^{\case{1}{2}\beta M + 1 - \case{1}{2}\beta}. \label{Poisson2} \end{equation} In the present case of two single-channel ideal leads and one non-ideal lead the average $\bar{S} = \int d\mu\, S P(S)$ of the scattering matrix is given by \begin{equation} \bar{S}_{nm} = \left\{ \begin{array}{cl} \sqrt{1 - \Gamma} & \mbox{if $3 \le n = m \le M$,} \\ 0 & \mbox{otherwise.} \end{array} \right. \end{equation} \end{mathletters}% One verifies that for $\Gamma = 1$, $P(S)$ reduces to the distribution (\ref{circ}) of the circular ensemble. Eq.\ (\ref{Poisson}) holds for any $\beta \in \{1,2,4\}$. In what follows, however, we will only consider the cases $\beta = 1,2$ of unitary or unitary symmetric matrices, appropriate for systems without spin-orbit scattering. The case $\beta = 4$ of unitary self-dual matrices is computationally much more involved, and also less relevant from a physical point of view. As indicated by B\"uttiker,\cite{Buettiker1} the cases $N = 1$ and $N > 1$ of a single- and multi-channel voltage lead are essentially different. Current conservation (i.e. unitarity of $S$) poses two restrictions on $T_{31}$ and $T_{32}$: (i) $T_{31} \le 1$, $T_{32} \le 1$; and (ii) $T_{31} + T_{32} \le N$. The second restriction is effective for $N = 1$ only. So for $N=1$, current conservation imposes a restriction on the coupling strength of the voltage lead to the quantum dot which is not present for $N > 1$. We treat the cases $N=1$ and $N>1$ separately, in Secs.\ \ref{sec3} and \ref{sec4}. For $N=1$ we treat the case of arbitrary $\Gamma$, but for $N > 1$ we restrict ourselves for simplicity to $\Gamma = 1$. \section{Single-channel voltage lead} \label{sec3} In the case $N=1$, Eq.\ (\ref{Poisson}) reduces to \begin{equation} P(S) = {1 \over V} \Gamma^{\beta + 1} \left(1 + (1-\Gamma)|S_{33}|^2 -2(1-\Gamma)^{1/2}\, \mbox{Re}\, S_{33} \right)^{-\beta - 1}. \label{PoissonKernel2} \end{equation} In order to calculate $P(G)$, we need to know the invariant measure $d\mu$ in terms of a parameterization of $S$ which contains the transmission coefficients explicitly. The matrix elements of $S$, in the case $N=1$, are related to $R_{kk}$ and $T_{kl}$ by $S_{kk} = \sqrt{R_{kk}} e^{i \phi_{kk}}$, $S_{kl} = \sqrt{T_{kl}} e^{i \phi_{kl}}$, where $\phi_{kl}$ are real phase shifts. When time-reversal symmetry is broken ($\beta = 2$), we choose $R_{11}$, $R_{22}$, $T_{12}$, $T_{21}$, $\phi_{13}$, $\phi_{23}$, $\phi_{33}$, $\phi_{32}$, and $\phi_{31}$ as independent variables, and the other variables then follow from unitarity of $S$. 
In the presence of time-reversal symmetry ($\beta = 1$), the symmetry $S_{kl} = S_{lk}$ reduces the set of independent variables to $R_{11}$, $R_{22}$, $T_{12}$, $\phi_{13}$, $\phi_{23}$, and $\phi_{33}$. We compute the invariant measure $d\mu$ in the same way as in Ref.\ \ref{BarangerMello}. Denoting the independent variables in the parameterization of $S$ by $x_i$, we consider the change $dS$ in $S$ associated with an infinitesimal change $dx_i$ in the independent variables. The invariant arclength $\mbox{tr}\, dS^{\dagger} dS$ defines the metric tensor $g_{ij}$ according to \begin{equation} \mbox{tr}\, dS^{\dagger} dS = \sum_{i,j} g_{ij} dx_i dx_j. \end{equation} The determinant $\det g$ then yields the invariant measure \begin{equation} d\mu = |\det g|^{1/2} \prod_{i} dx_i. \end{equation} The result turns out to be independent of the phases $\phi_{kl}$ and to have the same form for $\beta = 1$ and $2$, \begin{mathletters} \label{invmeas} \begin{equation} d\mu = (\beta J)^{-1/2} \Theta(J) \prod_i dx_i. \end{equation} The quantity $J$ is defined by \begin{eqnarray} J = \left\{ \begin{array}{l} \displaystyle 0 \ \mbox{if $R_{11} + T_{12} > 1$ or $R_{22} + T_{21} > 1$}, \\ 4 R_{22} T_{12} T_{13} T_{23} - (R_{22} T_{12} + T_{13} T_{23} - R_{11} T_{21})^2 \ \mbox{otherwise}, \end{array} \right. \end{eqnarray} \end{mathletters}% and $\Theta(J) = 1$ if $J > 0$ and $\Theta(J) = 0$ if $J \le 0$. The independent variables $x_i$ are different, however, for $\beta = 1$ and $\beta = 2$ --- as indicated above. We have calculated the probability distribution of the conductance from Eqs.\ (\ref{Conductance}), (\ref{PoissonKernel2}), and (\ref{invmeas}). The results are shown in Fig.\ \ref{fig1}, for several values of $\Gamma$. For $\Gamma = 0$ (uncoupled voltage lead), $P(G)$ is given by\cite{BarangerMello,JalabertPichardBeenakker} \begin{equation} P(G) = \left\{ \begin{array}{ll} \case{1}{2} G^{-1/2} & \mbox{if $\beta$ = 1}, \\ 1 & \mbox{if $\beta$ = 2}. \end{array} \right. \end{equation} For $\Gamma=1$ (maximally coupled single-channel voltage lead), we find \begin{equation} P(G) = \left\{ \begin{array}{lcl} \displaystyle 2 - 2G && \displaystyle \mbox{if $\beta = 1$}, \\ \displaystyle \case{4}{3} \left[2 G- 2G^2 - (3 G^2 - 2 G^3)\ln G\ - (1 - 3 G^2 + 2 G^3) \ln(1 - G) \right] && \displaystyle \mbox{if $\beta = 2$}. \end{array} \right. \end{equation} The average $\langle G \rangle$ and variance $\mbox{var}\, G$ of the conductance can be calculated in closed form for all $\Gamma$. We find that $\langle G \rangle$ is independent of $\Gamma$, \begin{equation} \langle G \rangle = \left\{ \begin{array}{lcl} \displaystyle \case{1}{3} && \displaystyle \mbox{if $\beta = 1$}, \\ \displaystyle \case{1}{2} && \displaystyle \mbox{if $\beta = 2$}. \end{array} \right. \end{equation} The variance does depend on $\Gamma$, \begin{equation} \mbox{var}\, G = \left\{ \begin{array}{lcl} \displaystyle \case{1}{45} \left(1 - \Gamma \right)^{-2} \left({4 - 11 \Gamma + 7 \Gamma^2 - 3 \Gamma^2 \ln \Gamma}\right) && \displaystyle \mbox{if $\beta = 1$}, \\ \displaystyle \case{1}{36} {\left( 1 - \Gamma \right) }^{-3} {\left({3 - 11\,\Gamma + 17\,{\Gamma^2} - 9\,{\Gamma^3} + 4\,{\Gamma^3}\,\ln \Gamma} \right)} && \displaystyle \mbox{if $\beta = 2$}. \end{array} \right. \end{equation} The breaking of phase coherence caused by a single-channel voltage lead is not strong enough to have any effect on the average conductance, which for $\beta = 1$ remains below the classical value of $\case{1}{2}$. 
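The $\Gamma\rightarrow 0$ and $\Gamma\rightarrow 1$ limits of these variance expressions, which underlie the reduction factors quoted below, are easily checked symbolically; a small sketch (assuming the sympy library is available):
\begin{verbatim}
import sympy as sp

g = sp.symbols('Gamma', positive=True)
var1 = (4 - 11*g + 7*g**2 - 3*g**2*sp.log(g)) / (45*(1 - g)**2)
var2 = (3 - 11*g + 17*g**2 - 9*g**3 + 4*g**3*sp.log(g)) / (36*(1 - g)**3)
print(sp.limit(var1, g, 0), sp.limit(var1, g, 1))  # 4/45 -> 1/18
print(sp.limit(var2, g, 0), sp.limit(var2, g, 1))  # 1/12 -> 5/108
\end{verbatim}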
The variance of the conductance is reduced somewhat when $\Gamma$ is increased from $0$ to $1$, but remains finite. (For $\beta = 1$ the reduction is with a factor $\case{5}{8}$, for $\beta = 2$ with a factor $\case{5}{9}$.) We will see in the next section, that the complete suppression of quantum interference effects requires a voltage lead with $N \gg 1$. Then $\langle G \rangle \rightarrow \case{1}{2}$ and $\mbox{var}\, G \rightarrow 0$. \section{Multi-channel voltage lead} \label{sec4} Now we turn to the case of a multi-channel ideal voltage lead ($N > 1$, $\Gamma = 1$). Current conservation yields: \begin{equation} \begin{array}{lclcl} T_{13} &=& 1 - R_{11} - T_{12} &=& 1 - |S_{11}|^2 - |S_{12}|^2, \\ T_{31} &=& 1 - R_{11} - T_{21} &=& 1 - |S_{11}|^2 - |S_{21}|^2, \\ T_{32} &=& 1 - R_{22} - T_{12} &=& 1 - |S_{12}|^2 - |S_{22}|^2. \end{array} \end{equation} To determine $P(G)$ it is thus sufficient to know the distribution $\tilde{P}(S_{11},S_{12},S_{21},S_{22})$ of the matrix elements $S_{kl}$ with $k,l \le 2$. This marginal probability distribution has been calculated by Mello and coworkers\cite{PereyraMello} for arbitrary dimension $M \ge 4$ of $S$. As in Sec.\ \ref{sec3} we parameterize $S_{kl} = \sqrt{T_{kl}} e^{i \phi_{kl}}$ if $k \neq l$ and $S_{kk} = \sqrt{R_{kk}} e^{i \phi_{kk}}$ ($k, l \le 2$). We abbreviate $\prod_i dy_i \equiv dR_{11} dR_{22} dT_{12} dT_{22} \prod_{k,l=1}^{2} d\phi_{kl}$. For the cases $\beta = 1,2$ one then has\cite{PereyraMello} \begin{mathletters} \label{multidensity} \begin{equation} d \tilde{P} = \left\{ \begin{array}{lcl} \displaystyle c_1 \delta(T_{12} - T_{21}) \delta(\phi_{12} - \phi_{21}) F^{(M - 5)/2} \Theta(F) \prod_i dy_i && \mbox{if $\beta = 1$}, \label{FiniteDistribution1} \\ \displaystyle c_2 F^{M-4} \Theta(F) \prod_i dy_i && \mbox{if $\beta = 2$}, \label{FiniteDistribution2} \end{array} \right. \end{equation} where $F$ is defined by \begin{eqnarray} F &=& \left\{ \begin{array}{l} 0 \ \mbox{if $R_{11} + T_{12} > 1$ or $R_{22} + T_{21} > 1$,} \\ (1-R_{11})(1-R_{22}) + (1-T_{12})(1-T_{21}) - 1 \\ \hspace{1cm} -\, 2 (R_{11} R_{22} T_{12} T_{21})^{1/2} \cos(\phi_{11} + \phi_{22} - \phi_{12} - \phi_{21}) \ \mbox{otherwise}. \end{array} \right. \end{eqnarray} \end{mathletters}% The coefficients $c_1$ and $c_2$ are normalization constants. Calculation of the probability distribution of the conductance is now a matter of quadrature. Results are shown in Fig.\ \ref{fig2}, for $N$ up to $10$. As $N$ increases, $P(G)$ becomes more and more sharply peaked around $G = \case{1}{2}$. In the limit $N \rightarrow \infty$, $P(G)$ approaches a Gaussian, with mean and variance given by \begin{eqnarray} \langle G \rangle &=& \left\{ \begin{array}{lcl} \displaystyle \case{1}{2} - \case{1}{2} N^{-1} + {\cal O}(N^{-2}) && \mbox{if $\beta = 1$}, \\ \displaystyle \lefteqn{\case{1}{2}} \hphantom{\case{3}{4} N^{-2} + {\cal O}(N^{-3})} && \mbox{if $\beta = 2$}, \end{array} \right. \\ \mbox{var}\, G &=& \left\{ \begin{array}{lcl} \lefteqn{\case{3}{4} N^{-2} + {\cal O}(N^{-3})} \hphantom{\displaystyle \case{1}{2} - \case{1}{2} N^{-1} + {\cal O}(N^{-2})} && \mbox{if $\beta = 1$}, \\ \displaystyle \case{1}{4} N^{-2} + {\cal O}(N^{-3}) && \mbox{if $\beta = 2$}. \end{array} \right. \end{eqnarray} The variance of $G$ is reduced by a factor $3$ when time-reversal symmetry is broken in the limit $N \rightarrow \infty$. The offset of $\langle G \rangle$ from $\case{1}{2}$ when $\beta = 1$ is a remnant of the weak localization effect. 
\section{Conclusion} We have calculated the entire probability distribution of the conductance of a quantum dot in the presence of a voltage probe, for single-channel point contacts to source and drain, in the presence and absence of time-reversal symmetry (no spin-orbit scattering). The average conductance is not changed if a single-channel voltage lead containing a tunnel barrier is attached, but the shape of the distribution changes considerably. A strikingly simple result is obtained for a single-channel ballistic voltage lead in zero magnetic field ($N=1$, $\Gamma=1$, $\beta=1$), when $P(G) = 2 - 2G$, to be compared with $P(G) = \case{1}{2} G^{-1/2}$ without the voltage probe.\cite{BarangerMello,JalabertPichardBeenakker} (In both cases $G \in [0,1]$ is measured in units of $2e^2/h$.) When the number $N$ of channels in the voltage lead is increased, the probability distribution becomes sharply peaked around $G = \case{1}{2}$. Both the width of the peak and the deviation of its center from $\case{1}{2}$ scale as $1/N$ for $N \gg 1$. The width is reduced by a factor $\sqrt{3}$ upon breaking the time-reversal symmetry. The loss of phase coherence induced by a voltage probe can be investigated experimentally by fabricating a cavity with three leads attached to it. Furthermore, as emphasized by Marcus et al.,\cite{MWHG} the inelastic scattering which occurs at finite temperatures in a quantum dot might well be modeled effectively by an imaginary voltage lead. This research was supported by the ``Ne\-der\-land\-se or\-ga\-ni\-sa\-tie voor We\-ten\-schap\-pe\-lijk On\-der\-zoek'' (NWO) and by the ``Stich\-ting voor Fun\-da\-men\-teel On\-der\-zoek der Ma\-te\-rie'' (FOM).
\section{Introduction} It is now well established that a bar rotating in a halo loses angular momentum through dynamical friction. This topic has received a lot of attention recently for two important reasons: (1) it offers a constraint on the density of the DM halo (Debattista \& Sellwood 1998, 2000), and (2) it may flatten the density cusp (Weinberg \& Katz 2002). Both these claims have been challenged. Realistic bars in cuspy halos produce a mild density decrease at most (Holley-Bockelmann {\it et al.}\ 2003) or even a slight increase (Sellwood 2003), but we leave this issue aside here and concentrate instead on the density constraint. Holley-Bockelmann \& Weinberg (2005) announce a preliminary report of simulations with weak friction in halos having uniform density cores, but we focus here on the older counter-example claimed by Valenzuela \& Klypin (2003; hereafter VK03) of a bar that experiences little friction in a cusped dense halo. \begin{figure}[t] \centerline{\psfig{figure=VKcomp.ps,width=.8\hsize,clip=}} \caption{The time evolution of the bar pattern speed in a number of resimulations of model A$_1$ of VK03. The evolution reported by VK03 is reproduced as the dot-dashed line; all other lines are from simulations from the same initial particle load, but run with our code using many different sets of numerical parameters.} \label{VKcomp} \end{figure} VK03 kindly made available the initial positions and velocities of all the particles of their model A$_1$, in which the bar did not slow for 2-3 Gyrs after it had formed and settled. We have used our code (Sellwood 2003) to rerun this simulation many times, and the pattern speed evolution in many of these runs is shown in Figure~\ref{VKcomp}. It is striking that in most cases, the bar slowed earlier than VK03 found, but in one anomalous case, the bar stayed fast for about 10 Gyr! The anomalous result is not a consequence of some inadequate numerical parameter, since many of the other cases are from models with parameters that bracket those of the anomalous case -- {\it i.e.}\ longer and shorter time steps, coarser and finer grids, {\it etc.} Note that apart from the crucial delay in the onset of friction in the case by VK03 and the one anomalous case we find, the evolution is generally very similar. In particular, whenever the bar slows, it slows at a similar rate. The following sections account for the discrepancies between the results shown in Fig.~\ref{VKcomp}. \section{Frictional Torque} In a classic paper, Tremaine \& Weinberg (1984) laid out the mathematical apparatus for friction in a spherical system. Following the precepts of Lynden-Bell \& Kalnajs (1972), they derived a formula for the torque experienced by a rotating perturbation potential, $\Phi_p$. They work in action-angle variables (see Binney \& Tremaine 1987, \S3.5). In a spherical potential, there are two non-zero actions: the total angular momentum per unit mass $L \equiv J_\phi$ and the radial action $J_r$, each associated with two separate frequencies, $\Omega$ and $\kappa$, which are generalizations to orbits of arbitrary eccentricity of the usual frequencies of Lindblad epicycles familiar from disk dynamics. 
In the limit that a constant-amplitude perturbation rotates steadily at $\Omega_p$, they showed that the net LBK torque is \begin{equation} \tau_{\rm LBK} \propto \sum_{m,k,n} \left(m{\partial f \over \partial L} + k {\partial f \over \partial J_r}\right)|\Phi_{mnk}|^2 \delta(n \Omega + k \kappa - m \Omega_p), \end{equation} where $f$ is the usual distribution function and $\Phi_{mnk}$ is a Fourier coefficient of the perturbing potential. The Dirac delta function implies that the net torque is the sum of the separate contributions from resonances, where $n \Omega + k \kappa = m \Omega_p$. Because the bar pattern speed decreases, as a result of the frictional torque, this expression needs to be generalized to a time-dependent forcing (see Weinberg 2004), but the revised expression for the torque still contains the same derivatives of the distribution function. Lynden-Bell (1979) offered a clear insight into how an orbit is affected when close to a resonance. The unperturbed orbit, which is a rosette in an inertial reference frame, closes in any frame that rotates at the rate \begin{equation} \Omega^\prime = \Omega + k \kappa / m, \end{equation} for any pair $k,\,m$. [See {\it e.g.}\ Kalnajs (1977) for illustrations of several of the most important shapes.] When the pattern speed of the bar is close to $\Omega^\prime$ for some pair $k,\,m$, the orbit can be regarded as a closed figure that precesses at the slow rate \begin{equation} \Omega_s \equiv (\Omega^\prime - \Omega_p) \ll \Omega_p. \end{equation} Under these circumstances, the ``fast action'' is adiabatically invariant, while the ``slow action'' can suffer a large change. Things are particularly straightforward at corotation, where the fast action is the radial action, while the slow action that can suffer a large change is simply the angular momentum. \begin{figure}[t] \centerline{\psfig{figure=Omegap.ps,width=.5\hsize,clip=} \hfil \psfig{figure=Ls200_CR.ps,width=.5\hsize,clip=}} \caption{Left: The time evolution of the bar pattern speed in the restricted simulation discussed in \S3. This simulation employs 10M particles. Right: The mean density of particles as a function of $L_{\rm res}$ at $t=200$ in the same simulation.} \label{restrict} \end{figure} \section{Restricted Simulations} Fully self-consistent simulations are complicated by evolution of the total potential, changes to the bar mass profile, {\it etc.}~It is therefore easier first to try to understand ``restricted'' simulations in which a rigid bar rotates in a halo of non-interacting test particles (Lin \& Tremaine 1983; Sellwood 2004). The particles move in a rigid halo potential that is perturbed by that of the rotating bar, and the bar is accelerated in response to the vector-sum of the non-axisymmetric forces felt by the particles. Figure~\ref{restrict} shows an example of the pattern speed evolution of a homogeneous ellipsoid with principal axes $1:0.5:0.1$ in a Hernquist (1990) halo. The bar has a mass of 1\% of the halo mass $M_h$, a semi-major axis equal to the halo break radius $r_h$, and initially rotates so that it just fills its corotation circle. We use units such that $G=M_h=r_h=1$. At intervals during the simulation, we compute $\Omega^\prime = \Omega + k \kappa / m$ for every particle, and calculate $F$, the average density of particles as a function of $\Omega^\prime$. It is somewhat easier to understand a plot of $F$ as a function of the angular momentum, $L_{\rm res}$, of a circular orbit that has the given $\Omega^\prime(k,m)$.
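For circular orbits the relation between $\Omega^\prime$ and $L_{\rm res}$ is elementary, and locating a resonance reduces to a one-dimensional root find. A minimal sketch in the units above ($G=M_h=r_h=1$; illustrative code, not the analysis pipeline itself, which must also evaluate $\Omega$ and $\kappa$ for eccentric orbits):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Circular-orbit frequencies in a Hernquist halo:
# Omega^2 = 1/(r (1+r)^2), kappa^2 = r d(Omega^2)/dr + 4 Omega^2
def Omega(r):
    return np.sqrt(1.0 / (r * (1.0 + r)**2))

def kappa(r):
    dOm2 = -(1.0 + 3.0*r) / (r**2 * (1.0 + r)**3)
    return np.sqrt(r * dOm2 + 4.0 * Omega(r)**2)

def L_res(Omega_p, k, m):
    # circular orbit with Omega + (k/m) kappa = Omega_p; L = r^2 Omega
    r = brentq(lambda s: Omega(s) + (k/m)*kappa(s) - Omega_p, 1e-6, 1e3)
    return r**2 * Omega(r)

print(L_res(0.2, 0, 2))    # corotation (m = 2, k = 0)
print(L_res(0.2, 1, 2))    # outer Lindblad resonance (m = 2, k = 1)
\end{verbatim}
The corotation value computed this way is the abscissa against which $F$ is plotted in the right-hand panel of Fig.~\ref{restrict}.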
The right-hand panel of Figure~\ref{restrict} shows the form of $F$ near to corotation ($m = 2, \; k = 0$) at $t=200$, which is typical. Near to the resonance (marked by the vertical line), particles cross corotation in both directions on horse-shoe orbits (Binney \& Tremaine 1987, \S7.5). The generally negative slope of $F$ implies an excess of lower $L$ particles that gain angular momentum and move out across the resonance, and this imbalance is responsible for friction. If the pattern speed were to stay constant, the imbalance would tend to flatten the slope of $F$, and the distribution of particles about the resonance would approach kinetic equilibrium in which there would be more nearly equal numbers of particles crossing in both directions. But as $\Omega_p$ declines, the resonance keeps moving to larger $L_{\rm res}$, and equilibrium is never established; instead, the density of particles about the dominant resonance(s) responsible for friction takes on the characteristic humped form shown in Fig.~\ref{restrict}. Friction arises principally at corotation over most of the evolution. The outer Lindblad resonance contributes in the early stages, but dominates only if the bar is unreasonably fast. The inner Lindblad resonance becomes important only when the bar is already quite slow; {\it e.g.}\ it is responsible for the more rapid braking around $t=1000$ in Fig.~\ref{restrict}. \begin{figure}[t] \centerline{\psfig{figure=drive.ps,width=.8\hsize,clip=}} \caption{The solid curve shows evolution of the bar pattern speed in a restricted experiment in which the bar was reaccelerated (by external interference) between times 180 and 200, but was otherwise allowed to evolve freely. (This experiment used 1M particles.) The dotted curve shows what happens without interference.} \label{drive} \end{figure} \section{Anomalous Situation} Now suppose that $\Omega_p$ rises for some reason, after having declined for some time, as illustrated in Figure~\ref{drive}. The shoulder in $F$ created by the previous friction survives, but the resonance at the now higher $\Omega_p$ lies on the other side of the shoulder, as shown in Figure~\ref{Ls_forced}. Thus the local gradient in $F$ has changed sign, leading to an adverse gradient for friction, and the torque from the dominant resonance disappears. Under these circumstances, a balance between gainers and losers is soon established, and the bar can rotate in a dense halo with little friction, which we describe as a ``metastable state''. In fact, $\Omega_p$ declines slowly because of weak friction at other resonances, and normal friction resumes when the slope of $F$ at the main resonance changes, as shown in the last frame of Fig.~\ref{Ls_forced}. \section{Self-consistent Simulations} If we now re-examine Fig.~\ref{VKcomp}, we see that the period of weak friction is preceded by a small rise in the bar pattern speed in both the simulation of VK03 (dot-dashed line) and in the anomalous case we found. It is likely therefore that friction stopped for a while in both cases because the local density gradient across the principal resonance became flat, as just described. Analysis of our simulation that displayed this behavior suggests that $\Omega_p$ rose because of an interaction between the bar and a spiral in the disk, which caused the bar to gain angular momentum. Such an event is rare; spirals generally remove angular momentum from the bar at most relative phases. 
It is possible that VK03 were unlucky to have such an event in their case, but they report similar behavior in their model B, making a chance event unlikely. \begin{figure}[t] \centerline{\psfig{figure=Ls_forced.ps,width=\hsize,clip=}} \caption{The mean density of particles as a function of $L_{\rm res}$ at several different times in the simulation shown in Fig.~\ref{drive}.} \label{Ls_forced} \end{figure} One significant difference between our code and that used by VK03 (Kravtsov, Klypin \& Khokhlov 1997) is that their resolution is adaptive, which causes gravity to strengthen at short range when the grid is refined. The increase in the local density as the bar amplitude rises causes the code to refine the grid, strengthening gravity and thereby causing the bar to contract slightly and to spin up. We have found that a reduction of softening length in our code at this epoch also leads to a metastable state. It is likely, therefore, that their anomalous result is an artifact of their adaptive code. \section{The Metastable State is Fragile} Whatever the origin of the bar speed-up in simulations, it remains possible that the metastable state could occur in real galaxies. If friction in a dense halo can be avoided for this reason, then the observed pattern speeds will provide no constraint on the halo density. However, further experiments in which we perturbed our model in the metastable state very slightly revealed that the state is highly fragile. For example, a satellite of merely 1\% of the galaxy mass flying by at 30\,kpc is sufficient to jolt the system out of the metastable state. We therefore conclude that anomalously weak friction is unlikely to persist for long in nature. \section{Conclusions} Tremaine \& Weinberg (1984) showed that angular momentum is transferred from a rotating bar to the halo through resonant interactions. We find that friction is dominated by a single resonance at most times, and that corotation is most important for a bar with a realistic pattern speed -- {\it i.e.}\ when the bar extends almost to corotation. Friction arises because the phase space density is a decreasing function of angular momentum in normal circumstances, causing an excess of particles that gain angular momentum over those that lose it. While this process would tend to flatten the density gradient if the pattern speed remained steady, the decreasing angular speed of the bar prevents this steady state from being reached. Instead we find that the density of particles in phase space develops a shoulder, with the resonance holding station on the high-angular-momentum side of the shoulder as the feature moves to larger $L_{\rm res}$. However, if the bar is spun up slightly for some reason after a period of normal friction, the rise in the pattern speed may move the resonance to the other side of the pre-constructed shoulder. The change in the local gradient of particle density at the dominant resonance causes friction to become very weak for a while, allowing the bar to rotate almost steadily. Mild friction persists because of contributions from other, sub-dominant resonances, and normal friction resumes once the pattern speed has declined sufficiently for the gradient at the main resonance to become favorable for friction once more. A state in which strong friction is suspended for this reason is ``meta\-stable'', both because it relies on a local minimum in the phase space density, and because the state is fragile. A very mild jolt to the system is sufficient to cause normal friction to resume.
The absence of friction in the simulation A$_1$ reported by Valenzuela \& Klypin (2003) is probably an artifact of their code. Their adaptive grid causes gravity to strengthen as the bar density builds up, making the pattern speed of the bar rise for a purely numerical reason. Thus their claimed counter-example to the argument of Debattista \& Sellwood (1998, 2000) is a numerical artifact of their method. {\it Pace} Holley-Bockelmann \& Weinberg (2005), our constraint on halo density still stands: A {\it strong\/} bar in a {\it dense\/} halo will quickly become unacceptably slow through dynamical friction. \begin{acknowledgments} We thank Anatoly Klypin for providing the initial positions and velocities of the particles in his model, and for many discussions. This work was supported by NASA (NAG 5-10110) and the NSF (AST-0098282). \end{acknowledgments} \begin{chapthebibliography}{1} \def{\it Ap.\ J.}{{\it Ap.\ J.}} \def{\it Ap.\ J. Lett.}{{\it Ap.\ J. Lett.}} \def{\it Ap.\ J. Suppl.}{{\it Ap.\ J. Suppl.}} \def{\it MNRAS}{{\it MNRAS}} \bibitem{BT} Binney, J. \& Tremaine, S. 1987, {\it Galactic Dynamics\/} (Princeton: Princeton University Press) \bibitem{DS98} Debattista, V. P. \& Sellwood, J. A. 1998, {\it Ap.\ J. Lett.}, {\bf 493}, L5 \bibitem{DS00} Debattista, V. P. \& Sellwood, J. A. 2000, {\it Ap.\ J.}, {\bf 543}, 704 \bibitem{H90} Hernquist, L. 1990, {\it Ap.\ J.}, {\bf 356}, 359 \bibitem{HBWK} Holley-Bockelmann, K., Weinberg, M. D. \& Katz, N. 2003, astro-ph/0306374 \bibitem{HBW} Holley-Bockelmann, K. \& Weinberg, M. D. 2005, DDA abstract 36.0512 \bibitem{K77} Kalnajs, A. J. 1977, {\it Ap.\ J.}, {\bf 212}, 637 \bibitem{KKK97} Kravtsov, A. V., Klypin, A. \& Khokhlov, A. M. 1997, {\it Ap.\ J. Suppl.}, {\bf 111}, 73 \bibitem{LT83} Lin, D. N. C. \& Tremaine, S. 1983, {\it Ap.\ J.}, {\bf 264}, 364 \bibitem{LB79} Lynden-Bell, D. 1979, {\it MNRAS}, {\bf 187}, 101 \bibitem{LBK} Lynden-Bell, D. \& Kalnajs, A. J. 1972, {\it MNRAS}, {\bf 157}, 1 \bibitem{S03} Sellwood, J. A. 2003, {\it Ap.\ J.}, {\bf 587}, 638 \bibitem{S04} Sellwood, J. A. 2004, astro-ph/0407533 \bibitem{TW84} Tremaine, S. \& Weinberg, M. D. 1984, {\it MNRAS}, {\bf 209}, 729 \bibitem{VK03} Valenzuela, O. \& Klypin, A. 2003, {\it MNRAS}, {\bf 345}, 406 \bibitem{W04} Weinberg, M. D. 2004, astro-ph/0404169 \bibitem{WK03} Weinberg, M. D. \& Katz, N. 2002, {\it Ap.\ J.}, {\bf 580}, 627 \end{chapthebibliography} \end{document}
\section{Introduction} Future large-scale structure surveys will be able to measure with percent precision the parameters governing the evolution of matter perturbations. While we have the tools to investigate the standard model, the next challenge is to be able to compare those data with cosmologies that go beyond General Relativity, in order to test whether a fluid component like Dark Energy or a Modified Gravity scenario can better fit the data. On the theoretical side, many Modified Gravity models are still allowed by type Ia supernova (SNIa) and Cosmic Microwave Background (CMB) data \cite{planck_collaboration_planck_2016}; structure formation can help us to distinguish between them and the standard scenario, thanks to their signatures on the matter power spectrum, in the linear and mildly non-linear regimes (for some examples of forecasts, see \cite{amendola_cosmology_2013, casas_fitting_2015, bielefeld_cosmological_2014}). The evolution of matter perturbations can be fully described by two generic functions of time and space \cite{kunz_phenomenological_2012, amendola_observables_2013}, which can be measured via Galaxy Clustering and Weak Lensing surveys. In this work we want to forecast how well we can measure those functions in different redshift bins. While any two independent functions of the gravitational potentials would do, we follow the notation of \cite{planck_collaboration_planck_2016} and consider $\mu$ and $\eta$: the first modifies the Poisson equation for $\Psi$, while the second is equal to the ratio of the gravitational potentials (and is therefore also a direct observable \cite{amendola_observables_2013}). We will consider forecasts for the planned surveys Euclid, SKA1 and SKA2 and a subset of DESI, DESI-ELG, using as priors the constraints from recent {\it Planck}\ data (see also \cite{Alonso2016, hojjati_cosmological_2012, baker_observational_2015, bull_extending_2015, Gleyzes2016} for previous works that address forecasts in Modified Gravity). In section \ref{sec:Parameterizing-Modified-Gravity} we define $\mu$ and $\eta$ and parameterize them in three different ways. First, in a general manner, we let these functions vary freely in different redshift bins. Complementarily, we also consider two specific parameterizations of the time evolution proposed in \cite{planck_collaboration_planck_2016}. Here, we also specify the fiducial values of our cosmology for each of the parameterizations considered. Section \ref{sec:The-non-linear-power} discusses our treatment of the linear and mildly non-linear regimes. Linear spectra are obtained from a modified Boltzmann code \cite{hojjati_testing_2011}; for the mildly non-linear regime (up to $k \sim 0.5$ h/Mpc) we compare two methods for emulating the non-linear power spectrum: the commonly used Halofit \cite{smith_stable_2003, takahashi_revising_2012}, and a semi-analytic prescription to model the screening mechanisms present in Modified Gravity models \cite{hu_parameterized_2007}. In section \ref{sec:Fisher-Matrix-method} we explain the method used to produce the Fisher forecasts both for Weak Lensing and Galaxy Clustering. We explain how we compute and add the CMB {\it Planck}\ priors to our Fisher matrices. Section \ref{sec:Results:-Redshift-Binned} discusses the results obtained for the redshift-binned parameterization both for Galaxy Clustering and for Weak Lensing in the linear and non-linear cases. We describe our method to decorrelate the errors in section \ref{sub:Decorrelation-of-covariance}.
The results for the other two time parameterizations are instead discussed in sections \ref{sub:MG-DE} and \ref{sub:MG-TR}, both for Weak Lensing and Galaxy Clustering in the linear and mildly non-linear regimes. To test the effect of our non-linear prescription, we show in section \ref{sub:Testing-the-effect-of-Zhao} the impact of different choices of the non-linear prescription parameters on the cosmological parameter estimation. \section{\label{sec:Parameterizing-Modified-Gravity}Parameterizing Modified Gravity} In linear perturbation theory, scalar, vector and tensor perturbations do not mix, which allows us to consider only the scalar perturbations in this paper. We work in the conformal Newtonian gauge, with the line element given by \begin{equation} ds^{2}=-(1+2\Psi)dt^{2}+a^{2}(1-2\Phi)dx^{2}\,\,\,. \end{equation} Here $\Phi$ and $\Psi$ are two functions of time and scale that coincide with the gauge-invariant Bardeen potentials in the Newtonian gauge. In theories with extra degrees of freedom (Dark Energy, DE) or modifications of General Relativity (MG) the normal linear perturbation equations are no longer valid, so that for a given matter source the values of $\Phi$ and $\Psi$ will differ from their usual values. We can parameterize this change generally with the help of two new functions that encode the modifications. Many different choices are possible and have been adopted in the literature, see e.g.\ \cite{planck_collaboration_planck_2016} for a limited overview. In this paper we introduce the two functions through a gravitational slip (leading to $\Phi\neq\Psi$ also at linear order and for pure cold dark matter) and as a modification of the Poisson equation for $\Psi$, \begin{eqnarray} -k^{2}\Psi(a,k) & \equiv & 4\pi Ga^{2}\mu(a,k)\rho(a)\delta(a,k)\,\,\,;\label{eq: mu_def}\\ \eta(a,k) & \equiv & \Phi(a,k)/\Psi(a,k)\,\,\,.\label{eq: eta_def} \end{eqnarray} These expressions define $\mu$ and $\eta$. Here $\rho(a)$ is the average dark matter density and $\delta(a,k)$ the comoving matter density contrast -- we will neglect relativistic particles and radiation as we are only interested in modeling the perturbation behaviour at late times. In that situation, $\eta$, which is effectively an observable \cite{amendola_observables_2013}, is closely related to modifications of GR \cite{saltas_anisotropic_2014,sawicki_non-standard_2016}, while $\mu$ encodes for example deviations in gravitational clustering, especially in redshift-space distortions as non-relativistic particles are accelerated by the gradient of $\Psi$. When considering Weak Lensing observations then it is also natural to parameterize deviations in the lensing or Weyl potential $\Phi+\Psi$, since it is this combination that affects null-geodesics (relativistic particles). To this end we introduce a function $\Sigma(a,k)$ so that \begin{equation} -k^{2}(\Phi(a,k)+\Psi(a,k))\equiv8\pi Ga^{2}\Sigma(a,k)\rho(a)\delta(a,k)\,\,\,.\label{eq:Sigma-def} \end{equation} Since metric perturbations are fully specified by two functions of time and scale, $\Sigma$ is not independent from $\mu$ and $\eta$, and can be obtained from the latter as follows: \begin{equation} \Sigma(a,k)=(\mu(a,k)/2)(1+\eta(a,k))\,\,\,.\label{eq:SigmaofMuEta} \end{equation} Throughout this work, we will denote the standard Lambda-Cold-Dark-Matter ($\Lambda$CDM) model, defined through the Einstein-Hilbert action with a cosmological constant, simply as GR. For this case we have that $\mu=\eta=\Sigma=1$.
All other cases in which these functions are not unity will be labeled as Modified Gravity (MG) models. Using effective quantities like $\mu$ and $\eta$ has the advantage that they are able to model {\em any} deviations of the perturbation behaviour from $\Lambda$CDM expectations, they are relatively close to observations, and they can also be related to other commonly used parameterizations \cite{pogosian_how_2010}. On the other hand, they are not easy to map to an action (as opposed to approaches like effective field theories that are based on an explicit action) and in addition they contain so much freedom that we normally restrict their parameterization to a subset of possible functions. This has however the disadvantage of losing generality and making our constraints on $\mu$ and $\eta$ parameterization-dependent. In this paper, we prefer to complement specific choices of parameterizations adopted in the literature (we will use the choice made in \cite{planck_collaboration_planck_2016}) with a more general approach: we will bin the functions $\mu(a)$ and $\eta(a)$ in redshift bins with index $i$ and we will treat each $\mu_{i}$ and $\eta_{i}$ as independent parameters in our forecast; we will then apply a variation of Principal Component Analysis (PCA), called Zero-phase Component Analysis (ZCA). This approach has been taken previously in the literature by \cite{hojjati_cosmological_2012}, where they bin $\mu$ and $\eta$ in several redshift and $k$-scale bins together with a binning of $w(z)$ and cross-correlate large scale structure observations with CMB temperature and E-mode polarization data together with Integrated Sachs-Wolfe (ISW) observations to forecast the sensitivity of future surveys to modifications in $\mu$ and $\eta$. In the present work, we will neglect a possible $k$-dependence, focus on Galaxy Clustering (GC) and Weak Lensing (WL) surveys, and show that there are important differences between the linear and non-linear cases; including the non-linear regime generally reduces correlations among the cosmological parameters. In the remainder of this section we will introduce the parameterizations that we will use. \subsection{Parameterizing gravitational potentials in discrete redshift bins \label{sub:param-z-bins-th}} As a first approach we neglect scale dependence and bin the time evolution of the functions $\mu$ and $\eta$ without specifying any parameterized evolution. To this purpose we divide the redshift range $0\leq z\leq3$ into 6 redshift bins and we consider the values $\mu(z_{i})$ and $\eta(z_{i})$ at the right-hand edge $z_{i}$ of each bin as free parameters, with $z_{i}$ spanning the values $\{0.5,1.0,1.5,2.0,2.5,3.0\}$. The first bin is assumed to have a constant value, coinciding with the one at $z_1=0.5$, i.e. $\mu(z<0.5)=\mu(z_{1})$ and $\eta(z<0.5)=\eta(z_{1})$. The $\mu(z)$ function (and analogously $\eta(z)$) is then reconstructed as \begin{equation}\label{eq:MGbin-mu-parametrization} \mu(z)=\mu(z_{1})+\sum_{i=1}^{N-1}{\frac{\mu(z_{i+1})-\mu(z_{i})}{2}\left[1+\tanh{\left(s\frac{z-z_{i+1}}{z_{i+1}-z_{i}}\right)}\right]}, \end{equation} where $s=10$ is a smoothing parameter and $N$ is the number of binned values. We assume that both $\mu$ and $\eta$ reach the GR limit at high redshifts: to realize this, the last $\mu(z_{6})$ and $\eta(z_{6})$ values assume the standard $\Lambda\mathrm{CDM}$ value $\mu=\eta=1$ and both functions are kept constant at higher redshifts $z>3$.
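As a concrete illustration, Eq.\ (\ref{eq:MGbin-mu-parametrization}) is straightforward to implement numerically. The following Python sketch is shown for illustration only (in our pipeline the same reconstruction is implemented inside the modified Boltzmann code described below); it reconstructs $\mu(z)$ from the binned amplitudes, here using the fiducial values of the redshift-binned column of Table \ref{tab:fiducial-MG-AllCases} with the GR limit $\mu(z_{6})=1$.
\begin{verbatim}
import numpy as np

def mu_binned(z, mu_vals, z_edges, s=10.0):
    """Tanh-smoothed reconstruction of mu(z) from binned amplitudes.

    z_edges : bin-edge redshifts z_i = 0.5, 1.0, ..., 3.0;
    mu_vals : amplitudes mu(z_i), with the last one fixed to the GR value 1.
    """
    z = np.asarray(z, dtype=float)
    mu = np.full_like(z, mu_vals[0])   # constant mu(z_1) below the first edge
    for i in range(len(z_edges) - 1):
        step = 0.5 * (mu_vals[i + 1] - mu_vals[i])
        width = z_edges[i + 1] - z_edges[i]
        mu += step * (1.0 + np.tanh(s * (z - z_edges[i + 1]) / width))
    return mu

z_edges = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
mu_fid = [1.108, 1.027, 0.973, 0.952, 0.962, 1.0]  # fiducial amplitudes, GR at z_6
print(mu_binned(np.linspace(0.0, 3.5, 8), mu_fid, z_edges))
\end{verbatim}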
Similarly, the derivatives of these functions are obtained by computing \begin{equation} \mu'({\bar{z}_{i}})=\frac{\mu(z_{i+1})-\mu(z_{i})}{z_{i+1}-z_{i}}, \end{equation} with $\bar{z}_{i}=(z_{i+1}+z_{i})/2$, using the same $\tanh(x)$ smoothing function: \begin{equation}\label{eq:MGbin-muderiv-parametrization} \frac{d\mu(z)}{dz}=\mu'(\bar{z}_{1})+\sum_{j=1}^{N-2}{\frac{\mu'(\bar{z}_{j+1})-\mu'(\bar{z}_{j})}{2}\left[1+\tanh{\left(s\frac{z-\bar{z}_{j+1}}{\bar{z}_{j+1}-\bar{z}_{j}}\right)}\right]}\,\,\,. \end{equation} In particular we assume $\mu'=\eta'=0$ for $z<0.5$ and for $z>3$. We set the first five amplitudes $\mu_{i}$ and $\eta_{i}$ as free parameters; thus the set we consider is $\theta=\{\Omega_{m},\Omega_{b},h,\ln10^{10} A_{s},n_{s},\{\mu_{i}\},\{\eta_{i}\}\}$, with $i$ an index going from 1 to 5. We take as fiducial cosmology the values shown in Tab.\ \ref{tab:fiducial-MG-AllCases}, columns 5 and 6. We only modify the evolution of perturbations and assume that the background expansion is well described by the standard $\Lambda$CDM expansion law for a flat universe with given values of $\Omega_{m}$, $\Omega_{b}$ and $h$. \subsection{Parameterizing gravitational potentials with simple smooth functions of the scale factor \label{sub:param-smooth-funct}} As an alternative approach, we assume simple, specific time parameterizations for the $\mu$ and $\eta$ MG functions, adopting the ones used in the {\it Planck}\ analysis \cite{planck_collaboration_planck_2016}. We neglect here as well any scale dependence: \begin{itemize} \item a parameterization in which the time evolution is related to the dark energy density fraction, to which we refer as `late-time' parameterization: \begin{eqnarray} \mu(a,k)\equiv1+E_{{\rm 11}}\Omega_{{\rm DE}}(a)\,\,\,,\label{eq:DE-mu-parametrization}\\ \eta(a,k)\equiv1+E_{{\rm 22}}\Omega_{{\rm DE}}(a)\,\,\,;\label{eq:DE-eta-parametrization} \end{eqnarray} \item a parameterization in which the time evolution is the simplest first-order Taylor expansion of a general function of the scale factor $a$ (and closely resembles the $w_{0}-w_{a}$ parameterization for the equation of state of DE), referred to as `early-time' parameterization, because it allows departures from GR also at high redshifts\footnote{Notice that our early-time parametrization is called `time-related' in \cite{planck_collaboration_planck_2016}.}: \begin{eqnarray} \mu(a,k)\equiv1+E_{{\rm 11}}+E_{{\rm 12}}(1-a)\,\,\,,\label{eq:TR-mu-parametrization}\\ \eta(a,k)\equiv1+E_{{\rm 21}}+E_{{\rm 22}}(1-a)\,\,\,.\label{eq:TR-eta-parametrization} \end{eqnarray} \end{itemize} The late-time parameterization is forced to behave as GR ($\mu=\eta=1$) at high redshift, when $\Omega_{{\rm DE}}(a)$ becomes negligible; the early-time one allows more freedom, as the amplitude of the deviations from GR does not necessarily reduce to zero at high redshifts. Both parameterizations have been used in \cite{planck_collaboration_planck_2016}. In \cite{bull_extending_2015, Gleyzes2016, Alonso2016} the authors used a similar time parameterization in which the Modified Gravity parameters depend on the time evolution of the dark energy fraction. In \cite{bull_extending_2015} an extra parameter accounts for a scale-dependent $\mu$: their treatment keeps $\eta$ (called $\gamma$ in their paper) fixed and equal to 1; it uses linear power spectra up to $k_{\mathrm{max}}(z)$ with $k_{\rm max}(z=0)=0.14$/Mpc $\approx$ 0.2 h/Mpc.
In \cite{Gleyzes2016} the authors also use a combination of Galaxy Clustering, Weak Lensing and ISW cross-correlation to constrain Modified Gravity in the Effective Field Theory formalism \cite{Gubitosi2013}. In \cite{Bellini2016} and \cite{Alonso2016} a similar parameterization was used to constrain the Horndeski functions \cite{Bellini2014} with present data and with future forecasts, respectively, in the linear regime. For the late-time parameterization, the set of free parameters we consider is: $\theta=\{\Omega_{m},\Omega_{b},h,\ln10^{10}A_{s},n_{s},E_{11},E_{22}\}$, where $E_{11}$ and $E_{22}$ determine the amplitude of the variation with respect to $\Lambda\mathrm{CDM}$. As fiducial cosmology we use the values shown in Table \ref{tab:fiducial-MG-AllCases}, columns 1 and 2, i.e. the marginalized parameter values obtained fitting these models with recent {\it Planck}\ data; notice that these results differ slightly from the {\it Planck}\ analysis in \cite{planck_collaboration_planck_2016} for the same parameterization, because we do not consider here the effect of massive neutrinos. For the early-time parameterization we have $E_{11}$ and $E_{21}$, which determine the amplitude of the deviation from GR at the present time ($a=1$), and two additional parameters ($E_{12},\,E_{22}$), which determine the time dependence of the $\mu(a)$ and $\eta(a)$ functions at earlier times. The fiducial values for this model, obtained from the {\it Planck}+BSH best fit, are given in columns 3 and 4 of Table \ref{tab:fiducial-MG-AllCases}. \begin{table}[htbp] \centering{} \begin{tabular}{|cc|cc|cc|} \hline \multicolumn{2}{|c|}{\textbf{Late time}} & \multicolumn{2}{c|}{\textbf{Early time}} & \multicolumn{2}{c|}{\Tstrut \textbf{Redshift Binned}}\tabularnewline \multicolumn{1}{|c}{Parameter } & \multicolumn{1}{c|}{Fiducial } & \multicolumn{1}{c}{Parameter } & \multicolumn{1}{c|}{Fiducial } & \multicolumn{1}{c}{Parameter } & \multicolumn{1}{c|}{Fiducial }\tabularnewline \hline $\Omega_{c}$ & \multicolumn{1}{c|}{$0.254$ } & $\Omega_{c}$ & \multicolumn{1}{c|}{$0.256$} & $\Omega_{c}$ & $0.254$ \tabularnewline $\Omega_{b}$ & \multicolumn{1}{c|}{$0.048$ } & $\Omega_{b}$ & \multicolumn{1}{c|}{$0.048$} & $\Omega_{b}$ & $0.048$ \tabularnewline $n_{s}$ & \multicolumn{1}{c|}{$0.969$ } & $n_{s}$ & \multicolumn{1}{c|}{$0.969$} & $n_{s}$ & $0.969$ \tabularnewline $\ln10^{10}A_{s}$ & \multicolumn{1}{c|}{$3.063$ } & $\ln10^{10}A_{s}$ & \multicolumn{1}{c|}{$3.091$} & $\ln10^{10}A_{s}$ & $3.057$ \tabularnewline $h$ & \multicolumn{1}{c|}{$0.682$ } & $h$ & \multicolumn{1}{c|}{$0.682$} & $h$ & $0.682$ \tabularnewline $E_{11}$ & \multicolumn{1}{c|}{$0.100$ } & $E_{11}$ & \multicolumn{1}{c|}{$-0.098$} & $\mu_{1}$ & $1.108$\tabularnewline $E_{22}$ & \multicolumn{1}{c|}{$0.829$ } & $E_{12}$ & \multicolumn{1}{c|}{$0.096$} & $\mu_{2}$ & $1.027$\tabularnewline \cline{1-2} \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & $E_{21}$ & \multicolumn{1}{c|}{$0.940$} & $\mu_{3}$ & $0.973$\tabularnewline \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & $E_{22}$ & \multicolumn{1}{c|}{$-0.894$} & $\mu_{4}$ & $0.952$\tabularnewline \cline{3-4} \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & $\mu_{5}$ & $0.962$\tabularnewline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & $\eta_{1}$ & $1.135$\tabularnewline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & $\eta_{2}$ & $1.160$\tabularnewline \multicolumn{1}{c}{} & \multicolumn{1}{c}{}
& \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & $\eta_{3}$ & $1.219$\tabularnewline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & $\eta_{4}$ & $1.226$\tabularnewline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & $\eta_{5}$ & $1.164$\tabularnewline \cline{5-6} \end{tabular}\protect\caption{\label{tab:fiducial-MG-AllCases} Fiducial values for the Modified Gravity parameterizations and the redshift-binned model of $\mu$ and $\eta$ used in this work. The DE-related (late-time) parameterization contains two extra parameters $E_{11}$ and $E_{22}$ with respect to GR; the early-time parameterization depends on four extra parameters $E_{11},\,E_{12},\,E_{21}$ and $E_{22}$ with respect to GR; the redshift-binned model contains 10 extra parameters, corresponding to the amplitudes $\mu_{i}$ and $\eta_{i}$ in five redshift bins. For simplicity we will sometimes use the notation $\ell \mathcal{A}_s \equiv \ln(10^{10} A_{s})$. The fiducial values are obtained performing a Monte Carlo analysis of {\it Planck}+BAO+SNe+H$_{0}$ (BSH) data \cite{planck_collaboration_planck_2016}.} \label{tab:DEfid} \end{table} \begin{figure}[htbp] \centering{}\begin{center} \includegraphics[width=0.45\linewidth]{figures/fiducials/muFiducialsPlot} \includegraphics[width=0.45\linewidth]{figures/fiducials/etaFiducialsPlot} \end{center} \caption{\label{fig:fidplot} The Modified Gravity functions $\mu$ and $\eta$ as a function of redshift $z$ for each of the models considered in this work, evaluated at the fiducial values specified in Table \ref{tab:fiducial-MG-AllCases}. In light long-dashed blue lines, the `redshift-binned model' (Eqns.\ \ref{eq:MGbin-mu-parametrization}-\ref{eq:MGbin-muderiv-parametrization}). In short-dashed red lines the late-time parameterization (Eqns.\ \ref{eq:DE-mu-parametrization}-\ref{eq:DE-eta-parametrization}) and in green solid lines the early-time parameterization (Eqns.\ \ref{eq:TR-mu-parametrization}-\ref{eq:TR-eta-parametrization}). Finally the medium-dashed orange line represents the standard $\Lambda\mathrm{CDM}$ model (GR) for reference. } \end{figure} \section{\label{sec:The-non-linear-power}The power spectrum in Modified Gravity} \subsection{The linear power spectrum} In this work we will use linear power spectra calculated with MGCAMB \cite{zhao_searching_2009,hojjati_testing_2011}, a modified version of the Boltzmann code CAMB \cite{lewis_efficient_2000}. We do so because MGCAMB offers the possibility of directly inputting any parameterization of $\mu$ and $\eta$ without requiring further assumptions: MGCAMB uses our Eqns.\ (\ref{eq: mu_def}) and (\ref{eq: eta_def}) in the Einstein-Boltzmann system of equations, providing the modified evolution of matter perturbations corresponding to our choice of the gravitational potential functions. Non-relativistic particles like cold dark matter are accelerated by the gradient of $\Psi$, so that redshift-space distortions in particular are sensitive to the modification given by $\mu(a,k)$. For relativistic particles like photons and neutrinos, on the other hand, the combination $\Phi+\Psi$ (and therefore $\Sigma$) enters the equations of motion. The impact on the matter power spectrum is more complicated, as the dark matter density contrast is linked via the relativistic Poisson equation to $\Phi$. In addition, an early-time modification of $\Phi$ and $\Psi$ can also affect the baryon distribution through their coupling to radiation during that period.
As already mentioned above, we will not consider the $k$-dependence of $\mu$ and $\eta$ in this work and our modifications with respect to standard GR will be only functions of the scale factor $a$. \subsection{Non-linear power spectra} As the Universe evolves, matter density fluctuations ($\delta_{m}$) on small scales ($k > 0.1$ h/Mpc) become larger than unity and shell-crossing will eventually occur. The usual continuity and Euler equations, together with the Poisson equation, become singular \cite{bernardeau_large-scale_2001,bernardeau_evolution_2013}, therefore making a computation of the matter power spectrum in the highly non-linear regime practically impossible under the standard perturbation theory approach. However, at intermediate scales of around $k\approx0.1$--$0.2$ h/Mpc, it is possible to calculate semi-analytically the effects of non-linearities on the oscillation patterns of the baryon acoustic oscillations (BAO). There are many approaches in this direction, from renormalized perturbation theories \cite{crocce_renormalized_2006,blas_time-sliced_2016,taruya_closure_2008} to time-flow equations \cite{pietroni_flowing_2008,anselmi_nonlinear_2012}, effective or coarse-grained theories \cite{carrasco_effective_2012,baumann_cosmological_2012,pietroni_coarse-grained_2011,manzotti_coarse_2014} and many others. This direction of research is important, since from the amplitudes and widths of the first few BAO peaks one can extract more information from the data and break degeneracies among parameters, especially in Modified Gravity. Computing the non-linear power spectrum is still an open problem even in standard GR, and more so when the Poisson equations are modified, as is the case in Modified Gravity theories. A solution to this problem is to calculate the evolution of matter perturbations in an N-body simulation \cite{springel_cosmological_2005,fosalba_mice_2013,takahashi_revising_2012,lawrence_coyote_2010,heitmann_coyote_2014}; however, this procedure is time-consuming and computationally expensive. Because of these issues, several previous analyses have been done with a conservative removal of the information at small scales (see for example the {\it Planck}\ Dark Energy paper \cite{planck_collaboration_planck_2016}, several CFHTLenS analyses \cite{heymans_cfhtlens_2013,kitching_3d_2014} or the previous PCA analysis by \cite{hojjati_cosmological_2012}). However, future surveys will probe an extended range of scales; removing non-linear scales from the analysis would therefore strongly reduce the constraining power of these surveys. Moreover, at small scales we also expect to find means of discriminating between different Modified Gravity models, such as the onset of the screening mechanisms needed to recover GR at small scales, where experiments strongly constrain deviations from it. For these reasons it is crucial to find methods which will allow us to investigate, at least approximately, the non-linear power spectrum. Attempts to model the non-linear power spectrum semi-analytically in Modified Gravity have been made for $f(R)$ theories in \cite{zhao_modeling_2014,taruya_regularized_2014}, for coupled dark energy in \cite{casas_fitting_2015,saracco_non-linear_2010,vollmer_efficient_2014} and for growing neutrino models in \cite{brouzakis_nonlinear_2011}.
Typically they rely on non-linear expansions of the perturbations using resummation techniques based on \cite{pietroni_flowing_2008,taruya_closure_2008} or on fitting formulae based on N-body simulations \cite{casas_fitting_2015,takahashi_revising_2012,bird_massive_2011}. A similar analysis is not available for the model-independent approach considered in this paper. In order to give at least a qualitative estimate of the importance of non-linearities for constraining these Modified Gravity models, we will adopt in the rest of the paper a method which interpolates between the standard approach to non-linear scales in GR and the same approach applied to MG theories. \subsubsection{Halofit} We describe here the effect of applying the standard approach to non-linearities to MG theories. This is done using the revised Halofit \cite{takahashi_revising_2012}, based on \cite{smith_stable_2003}, which is a fitting function, with tens of numerical parameters, that reproduces the output of a certain set of $\Lambda\mathrm{CDM}$ N-body simulations in a specific range of parameter space as a function of the linear power spectrum. This fitting function is accurate to better than 10\% for wavenumbers $k\lesssim1\,$h/Mpc and redshifts $0\leq z\leq10$ (see \cite{takahashi_revising_2012} for more details). This fitting function can be used within Boltzmann codes to estimate the non-linear contribution which corrects the linear power spectrum as a function of scale and time. We will use the Halofit fitting function as a way of approximating the non-linear power spectrum in our models even though it is really only valid for $\Lambda\mathrm{CDM}$. In Fig.\ \ref{fig:lin-non-pk-mg}, the left panel shows a comparison between the linear and non-linear power spectra calculated by MGCAMB in two different models, our fiducial late-time model (as from Table \ref{tab:fiducial-MG-AllCases}) and GR, both sharing the same $\Lambda\mathrm{CDM}$ parameters. At small length scales (large $k$), the non-linear deviation is clearly visible at scales $k\gtrsim0.3$ h/Mpc and both MG and GR seem to overlap due to the logarithmic scale used. In the right panel, we can see the ratio between MG and GR for both linear and non-linear power spectra, using the same five $\Lambda\mathrm{CDM}$ parameters $\{\Omega_{m},\Omega_{b},h,\,A_{s},n_{s}\}$. We can see clearly that MG in the non-linear regime, using the standard Halofit, shows a distinctive feature at scales $0.2\lesssim k\lesssim2$ h/Mpc. This feature, however, does not come from higher-order perturbations induced by the modified Poisson equations (\ref{eq: mu_def}, \ref{eq:Sigma-def}), because Halofit, as explained above, is calibrated with simulations within the $\Lambda\mathrm{CDM}$ model and does not contain any information from Modified Gravity. The feature seen here is caused by the different growth rate of perturbations in Modified Gravity, which then yields a different evolution of non-linear structures. \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.45\textwidth]{figures/power-spectra/pk-GRvsMGDE2nonuhs_Lin-vs-Nonlin-zind_1} \includegraphics[width=0.45\textwidth]{figures/power-spectra/pk-Ratios-GRvsMGDE2nonuhs_Lin-vs-Nonlin-zind_1} \par\end{centering} \protect\caption{\label{fig:lin-non-pk-mg} \textbf{Left: }matter power spectra computed with MGCAMB (linear) and MGCAMB+Halofit (non-linear), illustrating the impact of non-linearities at different scales.
As an illustrative example, MG in this plot corresponds to the fiducial model in the late-time parameterization defined in Eq.\ (\ref{eq:DE-mu-parametrization}). All curves are computed at $z=0$. The green solid line is the GR fiducial in the non-linear case; the blue long-dashed line is also GR but in the linear case. The short-dashed red line is the MG fiducial in the non-linear case and the medium-dashed brown line the MG fiducial in the linear case. \textbf{Right: }in order to have a closer look at small scales, we plot here the ratio of the MG power spectrum to the GR power spectrum for the linear (blue solid) and non-linear (red short-dashed) cases separately. The blue solid line, compared to the horizontal grey dashed line, shows the effect of Modified Gravity when only linear spectra are taken into account. The red dashed line, which represents the non-linear case, clearly shows a bump in the ratio to GR that peaks around $k\approx1.0$ h/Mpc, meaning that the MG power spectrum differs by at most 4\% from the non-linear power spectrum in GR. We will see later that we are able to discriminate between these two models using future surveys, especially when non-linear scales ($k \gtrsim 0.1$ h/Mpc) are included. } \end{figure} \subsubsection{Prescription for mildly non-linear scales including screening} \label{sub:Prescription-HS} As discussed above, modifications to the $\Phi$ and $\Psi$ potentials make the use of Halofit to compute the evolution at non-linear scales unreliable. In order to take into account the non-linear contribution to the power spectrum in Modified Gravity, we investigate here a different method, which starts from the consideration that whenever we modify $\mu$ and $\eta$ with respect to GR, we modify the strength of gravitational attraction in a way universal to all species: this means that, similarly to the case of scalar-tensor or $f(R)$ theories, we need to assume the existence of a non-perturbative screening mechanism, acting at small scales, that guarantees agreement with solar system experiments. In other words, it is reasonable to think that the non-linear power spectrum will have to match GR at sufficiently small scales, while at large scales it is modified. Of course, without having a specific model in mind, it remains arbitrary how the interpolation between the small-scale regime and the large-scale regime is done. In this paper, we adopt the Hu \& Sawicki (HS) Parametrized Post-Friedmann prescription proposed in \cite{hu_parameterized_2007}, which was previously used for $f(R)$ theories by \cite{zhao_modeling_2014}. Given a MG model, this prescription interpolates between the non-linear power spectrum in Modified Gravity (which is in our case just the linear MG power spectrum corrected with standard Halofit, $P_{\mathrm{HMG}}$) and the non-linear power spectrum in GR calculated with Halofit ($P_{\mathrm{HGR}}$). The resulting power spectrum will be denoted as $P_{\mathrm{nlHS}}$: \begin{equation} P_{\mathrm{nlHS}}(k,z)=\frac{P_{\mathrm{HMG}}(k,z)+c_{\mathrm{nl}} S_{\mathrm{L}}^{2}(k,z)P_{\mathrm{HGR}}(k,z)}{1+c_{\mathrm{nl}} S_{\mathrm{L}}^{2}(k,z)} \, ,\label{eq:PHSDefinition} \end{equation} with \begin{equation} S_{\mathrm{L}}^{2}(k,z)=\left[\frac{k^{3}}{2\pi^{2}}P_{\mathrm{LMG}}(k,z)\right]^{s} \, .
\label{eq:prescription_sigma_def} \end{equation} The weighting function $S_{\mathrm{L}}$ used in the interpolation quantifies the onset of non-linear clustering and is constructed using the linear power spectrum in Modified Gravity ($P_{\mathrm{LMG}}$). The constant $c_{\mathrm{nl}}$ and the constant exponent $s$ are free parameters. In Figure \ref{fig:lin-nonlin-Zhao-MG} we show the ratio $P_{\mathrm{nlHS}}/P_{\mathrm{HGR}}$, which illustrates the relative difference between the non-linear HS prescription in MG and the Halofit non-linear power spectrum in GR, for different values of $c_{\mathrm{nl}}$ (left panel) and different values of $s$ (right panel). The parameter $c_{\mathrm{nl}}$ controls at which scale there is a transition into a non-linear regime in which standard GR is valid (this can be the case when a screening mechanism is activated); $s$ controls the smoothness of the transition and is in principle a model- and redshift-dependent quantity. When $c_{\mathrm{nl}}=0$ we recover the Modified Gravity power spectrum with Halofit, $P_{\mathrm{HMG}}$; when $c_{\mathrm{nl}}\rightarrow\infty$ we recover the non-linear power spectrum in GR calculated with Halofit, $P_{\mathrm{HGR}}$. In \cite{zhao_modeling_2014,zhao_n-body_2011,koyama_non-linear_2009}, the $c_{\mathrm{nl}}$ and $s$ constants were obtained by fitting expression (\ref{eq:PHSDefinition}) to N-body simulations or to a semi-analytic perturbative approach. In the case of $f(R)$, $s=1/3$ seems to match very well the result from simulations up to a scale of $k=0.5\,$h/Mpc \cite{koyama_non-linear_2009}. A relatively good agreement up to such small scales is enough for our purposes. In the absence of N-body simulations or semi-analytic methods for the models investigated in this work, we will assume unity for both parameters, which is a natural choice, and we will test in Section \ref{sub:Testing-the-effect-of-Zhao} how our results vary for different values of these parameters, namely $c_{\mathrm{nl}}=\{0.1,0.5,\,1,3\}$ and $s=\{0,\,1/3,\,2/3,\,1\}$. This will give a qualitative estimate of the impact of non-linearities on the determination of cosmological parameters. \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.45\textwidth]{figures/power-spectra/pk-Ratios-ZhaovsGRNL-MGDE2nonuhs-z_0p-zind_1} \includegraphics[width=0.45\textwidth]{figures/power-spectra/pk-Ratios-Zhao_SigmaExp-vsGRNL-MGDE2nonuhs-z_0p-zind_1} \par\end{centering} \protect\caption{\label{fig:lin-nonlin-Zhao-MG} The ratio of the Modified Gravity non-linear power spectrum using the HS prescription by \cite{hu_parameterized_2007} ($P_{\mathrm{nlHS}}$) with respect to the GR+Halofit fiducial non-linear power spectrum $P_{\mathrm{HGR}}$, for different values of $c_{\mathrm{nl}}$ (left panel) and $s$ (right panel), illustrated in Eqns.\ (\ref{eq:PHSDefinition}) and (\ref{eq:prescription_sigma_def}). The value $c_{\mathrm{nl}}=0$ (green solid line) corresponds to MG+Halofit, $P_{\mathrm{HMG}}$. All curves are calculated at $z=0$. \textbf{Left: } We show the ratio for $c_{\mathrm{nl}}=\{0.5,1.0,10,10^8\}$, plotted as short-dashed red, medium-dashed blue, short-dashed brown and medium-dashed purple lines respectively. When $c_{\mathrm{nl}}\rightarrow\infty$, Eqn.\ (\ref{eq:PHSDefinition}) reduces to $P_{\mathrm{HGR}}$ and therefore the ratio is just 1.
The effect of the HS prescription is to capture some of the features of the non-linear power spectrum at mildly non-linear scales induced by Modified Gravity, taking into account that at very small scales a screening mechanism might again yield a purely GR non-linear power spectrum. The parameter $c_{\mathrm{nl}}$ interpolates between these two cases. \textbf{Right: }in this panel we show the effect of the parameter $s$, for $s=\{0,0.33,0.66,1\}$ (short-dashed red, medium-dashed blue, short-dashed brown and long-dashed purple, respectively). Both parameters need to be fitted with simulations in order to yield a reliable match with the shape of the non-linear power spectrum in Modified Gravity, as was done in \cite{zhao_n-body_2011} and references therein. The grey dashed line marks the constant value of 1.} \end{figure} \section{\label{sec:Fisher-Matrix-method}Fisher Matrix forecasts} The Fisher matrix formalism \cite{tegmark_measuring_1998,seo_improved_2007,seo_baryonic_2005} is one of the most popular tools to forecast the outcome of an experiment, because of its speed and its versatility when the likelihood is approximately Gaussian. Here we apply the Fisher matrix formalism to two different probes, Galaxy Clustering (GC) and Weak Lensing (WL), which are the main cosmological probes for the future Euclid satellite \cite{mukherjee_planck_2008}. The background and perturbation quantities we use in the following equations are computed with a version of \texttt{MGCAMB} \cite{zhao_searching_2009,hojjati_testing_2011} modified in order to account for the binning and the parameterizations described in Section \ref{sec:Parameterizing-Modified-Gravity}. \subsection{Future large-scale galaxy redshift surveys \label{sub:FutureSurveys}} In this work we present results for some of the future galaxy redshift surveys that are planned to start and be analyzed within the next decade. Our baseline survey will be the Euclid satellite \cite{amendola_cosmology_2013, laureijs_euclid_2011}. Euclid\footnote{http://www.euclid-ec.org/} is a European Space Agency medium-class mission scheduled for launch in 2020. Its main goal is to explore the expansion history of the Universe and the evolution of large-scale cosmic structures by measuring shapes and redshifts of galaxies, covering 15000$\text{deg}^2$ of the sky, up to redshifts of about $z\sim2$. It will be able to measure up to 100 million spectroscopic redshifts, which can be used for Galaxy Clustering measurements, and 2 billion photometric galaxy images, which can be used for Weak Lensing observations (for more details, see \cite{amendola_cosmology_2013, laureijs_euclid_2011}). We will use in this work the Euclid Redbook specifications for Galaxy Clustering and Weak Lensing forecasts \cite{laureijs_euclid_2011}, some of which are listed in Tables \ref{tab:GC-specifications} and \ref{tab:WL-specifications}; the rest can be found in the above-cited references. Another important future survey will be the Square Kilometer Array (SKA)\footnote{https://www.skatelescope.org/}, which is planned to become the world's largest radio telescope. It will be built in two phases: phase 1, split into SKA1-SUR in Australia and SKA1-MID in South Africa, and phase 2 (SKA2), which will be at least 10 times as sensitive. The first stage is due to finish observations around the year 2023 and the second phase is scheduled for 2030 (for more details, see \cite{yahya_cosmological_2015,santos_hi_2015,raccanelli_measuring_2015,bull_measuring_2015}).
The first phase, SKA1, will be able to measure an estimated number of about $5\times10^6$ galaxies in an area of 5000$\text{deg}^2$ of the sky up to a redshift of $z\sim0.8$; SKA2 is expected to cover a much larger fraction of the sky ($\sim$30000$\text{deg}^2$), to reach much deeper redshifts (up to $z\sim2.5$) and to detect about $10^9$ galaxies with spectroscopic redshifts \cite{santos_hi_2015}. SKA1 and SKA2 will also be capable of performing radio Weak Lensing experiments, which are very promising, since they are expected to be less sensitive to systematic effects in the instruments related to residual point spread function (PSF) anisotropies \cite{harrison_ska_2016}. In this work we will use for our forecasts of SKA1 and SKA2 the specifications computed by \cite{santos_hi_2015} for GC and by \cite{harrison_ska_2016} for WL. The numerical survey parameters are listed in Tables \ref{tab:GC-specifications} and \ref{tab:WL-specifications}, while the galaxy bias $b(z)$ and the number density of galaxies $n(z)$ can be found in the references mentioned above. We will also forecast the results from DESI\footnote{http://desi.lbl.gov/}, a stage-IV ground-based dark energy experiment that will study large-scale structure formation in the Universe through baryon acoustic oscillations (BAO) and redshift-space distortions (RSD), using redshifts and positions from galaxies and quasars \cite{desi_collaboration_desi_2016-1,desi_collaboration_desi_2016,levi_desi_2013}. It is scheduled to start in 2018 and will cover an estimated area in the sky of about 14000$\text{deg}^2$. It will measure spectroscopic redshifts for four different classes of objects: luminous red galaxies (LRGs) up to a redshift of $z=1.0$, bright [O II] emission-line galaxies (ELGs) up to $z=1.7$, quasars (QSOs) up to $z\sim3.5$ and, at low redshifts ($z\sim0.2$), magnitude-limited bright galaxies (BLGs). In total, DESI will be able to measure more than 30 million spectroscopic redshifts. In this paper we will use for our forecasts only the specifications for the ELGs, as found in \cite{desi_collaboration_desi_2016-1}, since this sample provides the largest number density of galaxies in the redshift range of interest. We list the geometry and redshift-binning specifications in Table \ref{tab:GC-specifications}, while the galaxy number density and bias can be found in \cite{desi_collaboration_desi_2016-1}. \subsection{Galaxy Clustering\label{sub:Fisher-Galaxy-Clustering}} The distribution of galaxies in space is not perfectly uniform. Instead it follows, up to a bias, the underlying matter distribution, so that the observed power spectrum $P_{\mathrm{obs}}$ (the Fourier transform of the real-space two-point correlation function of the galaxy number counts) is closely linked to the dark matter power spectrum $P(k)$. The observed power spectrum, however, also contains additional effects such as redshift-space distortions due to peculiar velocities and a suppression of power due to redshift uncertainties.
Here we follow \cite{seo_improved_2007}, neglecting further relativistic and observational effects, and write the observed power spectrum as \begin{equation} P_{\mathrm{obs}} (k,\mu,z)=\frac{D_{A,f}^{2}(z)H(z)}{D_{A}^{2}(z)H_{f}(z)}b^{2}(z)(1+\beta_{d}(z)\mu^{2})^{2}e^{-k^{2}\mu^{2}(\sigma_{r}^{2}+\sigma_{v}^{2})}P(k,z)\,\,\,.\label{eq:observed-Pk} \end{equation} $P_{\mathrm{obs}} (k,\mu,z)$ is the observed power spectrum as a function of the redshift $z$, the wavenumber $k$ and of $\mu\equiv\cos\alpha$ (not to be confused with the Modified Gravity function $\mu$), where $\alpha$ is the angle between the line of sight and the 3D-wavevector $\vec{k}$. This observed power spectrum contains all the cosmological information about the background and the matter perturbations as well as corrections due to redshift-space distortions, geometry and observational uncertainties. In the formula, the subscript $f$ denotes the fiducial value of each quantity, $P(k,z)$ is the matter power spectrum, $D_{A}(z)$ is the angular diameter distance, $H(z)$ the Hubble function and $\beta_{d}(z)$ is the redshift space distortion factor, which in linear theory is given by $\beta_{d}(z)=f(z)/b(z)$, with $f(z)\equiv d\ln G/d\ln a$ representing the linear growth rate of matter perturbations and $b(z)$ the galaxy bias as a function of redshift, which we assume to be local and scale-independent. The exponential factor represents the damping of the observed power spectrum, due to two different effects: $\sigma_{z}$, the spectroscopic redshift measurement error, which translates into an uncertainty in the position of galaxies at a scale $\sigma_{r}=\sigma_{z}/H(z)$; and $\sigma_{v}$, the dispersion of pairwise peculiar velocities present at non-linear scales, which also introduces a damping scale in the mapping between real and redshift space. We marginalize over this parameter, similarly to what \cite{bull_extending_2015} and others have done, and we take as fiducial value $\sigma_{v} = 300\,$km/s, compatible with the estimates by \cite{de_la_torre_modelling_2012}. We also include the Alcock-Paczynski effect \cite{alcock_evolution_1979}, by which the $k$ modes and the angle cosine $\mu$ perpendicular and parallel to the line-of-sight get distorted by geometrical factors related to the Hubble function and the angular diameter distance \cite{ballinger_measuring_1996, feldman_power_1994}. We can then write the Fisher matrix for the galaxy power spectrum in the following form \citep{seo_improved_2007, amendola_testing_2012}: \begin{equation} F_{ij}=\frac{V_{\rm survey}}{8\pi^{2}}\int_{-1}^{+1}\mbox{d}\mu\int_{k_{\rm min}}^{k_{\rm max}}\mbox{d}k\,k^{2}\frac{\partial\ln P_{\mathrm{obs}} (k,\mu,z)}{\partial\theta_{i}}\frac{\partial\ln P_{\mathrm{obs}} (k,\mu,z)}{\partial\theta_{j}}\left[\frac{n(z)P_{\mathrm{obs}} (k,\mu,z)}{n(z)P_{\mathrm{obs}} (k,\mu,z)+1}\right]^{2}\,\,.\label{eq:fisher-matrix-gc} \end{equation} Here $V_{\rm survey}$ is the volume covered by the survey and contained in a redshift slice $\Delta z$ and $n(z)$ is the galaxy number density as a function of redshift. We take the smallest wavenumber to be $k_{\rm min}=0.0079\,$h/Mpc, while the maximum wavenumber will be $k_{\rm max}=0.15\,$h/Mpc for the linear forecasts and $k_{\rm max}=0.5\,$h/Mpc for the non-linear forecasts.
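To make the procedure explicit, the following schematic Python sketch evaluates Eqn.\ (\ref{eq:fisher-matrix-gc}) for a single redshift slice by finite differencing. It is an illustration only, not the code used for this paper's forecasts, and \texttt{lnP\_obs}, \texttt{V\_survey} and \texttt{n\_z} are placeholders for the survey-specific ingredients described above.
\begin{verbatim}
import numpy as np

def gc_fisher(theta_fid, lnP_obs, V_survey, n_z, k_grid, step=0.01):
    """Galaxy Clustering Fisher matrix for one redshift slice.

    lnP_obs(theta, k, mu) returns ln P_obs on arrays of k and mu
    (mu here is the angle cosine, not the MG function).
    """
    mu_grid = np.linspace(-1.0, 1.0, 101)
    K, MU = np.meshgrid(k_grid, mu_grid, indexing='ij')

    def dlnP(i):                          # central finite difference
        dt = step * max(abs(theta_fid[i]), 1e-3)
        tp = np.array(theta_fid, dtype=float); tm = tp.copy()
        tp[i] += dt; tm[i] -= dt
        return (lnP_obs(tp, K, MU) - lnP_obs(tm, K, MU)) / (2.0 * dt)

    nP = n_z * np.exp(lnP_obs(np.array(theta_fid, dtype=float), K, MU))
    weight = (nP / (nP + 1.0))**2         # effective-volume factor
    derivs = [dlnP(i) for i in range(len(theta_fid))]

    npar = len(theta_fid)
    F = np.empty((npar, npar))
    for i in range(npar):
        for j in range(i + 1):
            integrand = derivs[i] * derivs[j] * weight * K**2
            val = np.trapz(np.trapz(integrand, mu_grid, axis=1), k_grid)
            F[i, j] = F[j, i] = V_survey / (8.0 * np.pi**2) * val
    return F
\end{verbatim}
The full Fisher matrix is then the sum of such matrices over the redshift slices specified in Table \ref{tab:GC-specifications}.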
In the above formulation of the Galaxy Clustering Fisher matrix we neglect the correlation among different redshift bins and possible redshift-bin uncertainties, as was explored recently in \cite{bailoni_improving_2016}; we will use for our forecasts the more standard recipe specified in the Euclid Redbook \cite{laureijs_euclid_2011}. \begin{table}[h] \centering{} \begin{tabular}{|c|cccc|c|} \hline \Tstrut \textbf{Parameter} & \textbf{Euclid} & \textbf{DESI-ELG} & \textbf{SKA1-SUR} & \textbf{SKA2} & \textbf{Description}\tabularnewline \hline \Tstrut $A_{\rm survey}$ & 15000 $\mbox{deg}^{2}$ & 14000 $\mbox{deg}^{2}$ & 5000 $\mbox{deg}^{2}$ & 30000 $\mbox{deg}^{2}$ & Survey area in the sky\tabularnewline $\sigma_{z}$ & 0.001 & 0.001 & 0.0001 & 0.0001 & Spectroscopic redshift error\tabularnewline $\{z_{\rm min},\ z_{\rm max}\}$ & \{0.65, 2.05\} & \{0.65, 1.65\} & \{0.05, 0.85\} & \{0.15, 2.05\} & Min. and max. limits for redshift bins \tabularnewline $\Delta z$ & 0.1 & 0.1 & 0.1 & 0.1 & Redshift bin width\tabularnewline \hline \end{tabular}\protect\caption{\label{tab:GC-specifications} Specifications for the spectroscopic galaxy redshift surveys used in this work. The number density of tracers $n(z)$ and the galaxy bias $b(z)$ can be found for SKA in \cite{santos_hi_2015} and for DESI in reference \cite{desi_collaboration_desi_2016-1}.} \end{table} \subsection{Weak Lensing \label{sub:Fisher-Weak-Lensing}} Light propagating through the universe is deflected by variations in the Weyl potential $\Phi+\Psi$, leading to distortions in the images of galaxies. In the regime of small deflections (Weak Lensing) we can write the power spectrum of the shear field as \begin{equation} \label{def_shear} C_{ij}(\ell)=\frac{9}{4}\int_{0}^{\infty}\mbox{d}z\:\frac{W_{i}(z)W_{j}(z)H^{3}(z)\Omega_{m}^{2}(z)}{(1+z)^{4}}\left[\Sigma(\ell/r(z),z)\right]^{2}P_{m}(\ell/r(z),z) \, . \end{equation} In this expression we use Eqn.\ (\ref{eq:Sigma-def}) to relate the Weyl potential to $\Sigma$ and to the matter power spectrum $P_m$, and we use the Limber approximation to write the conversion $k=\ell/r(z)$, where $r(z)$ is the comoving distance given by \begin{equation} r(z) = c\int_0^z \frac{d\tilde{z}}{H(\tilde{z})} \, . \end{equation} The indices $i,\:j$ stand for each of the $\mathcal{N}_{bin}$ redshift bins, such that $C_{ij}$ is a matrix of dimensions $\mathcal{N}_{bin}\times\mathcal{N}_{bin}$. The window functions $W_{i}$ are given by \begin{equation} W_{i}(z)=\int_{z}^{\infty}\mbox{d}\tilde{z}\left(1-\frac{r(z)}{r(\tilde{z})}\right)n_{i}(\tilde{z}) \end{equation} where $n_{i}$ is the normalized galaxy distribution in the $i$-th bin, obtained from the overall distribution \begin{equation} n(z)\propto z^{2}\exp\left(-(z/z_{0})^{3/2}\right) \, . \label{eq:ngal dist} \end{equation} Here the median redshift $z_{\rm med}$ and $z_{0}$ are related by $z_{\rm med}=\sqrt{2}z_{0}$. The Weak Lensing Fisher matrix is then given by a sum over all possible correlations between different redshift bins \citep{tegmark_measuring_1998}, \begin{equation} F_{\alpha\beta}=f_{\rm sky}\sum_{\ell}^{\ell_{\mathrm{max}}}\sum_{i,j,k,l}\frac{(2\ell+1)\Delta\ell}{2}\frac{\partial C_{ij}(\ell)}{\partial\theta_{\alpha}}\textrm{Cov}_{jk}^{-1}\frac{\partial C_{kl}(\ell)}{\partial\theta_{\beta}}\textrm{Cov}_{li}^{-1} \, . \label{eq:FisherSum-WL} \end{equation} The prefactor $f_{\rm sky}$ is the fraction of the sky covered by the survey.
The upper limit of the sum, $\ell_{\mathrm{max}}$, is a high-multipole cutoff due to our ignorance of clustering and baryon physics on small scales, similar to the role of $k_{\rm max}$ in Galaxy Clustering. In this work we choose $\ell_{\mathrm{max}} = 1000$ for the linear forecasts and $\ell_{\mathrm{max}} = 5000$ for the non-linear forecasts (this cutoff is not necessarily reached at every redshift, as what matters is the smallest scale probed, set by either $\ell_{\rm max}$ or $k_{\rm max}$, as we discuss below; see also \cite{casas_fitting_2015}). In Eqn.\ (\ref{eq:FisherSum-WL}), $\textrm{Cov}_{ij}$ is the corresponding covariance matrix of the shear power spectrum and it has the following form:
\begin{equation} \textrm{Cov}_{ij}(\ell)=C_{ij}(\ell)+\delta_{ij}\gamma_{\rm int}^{2}n_{i}^{-1}+K_{ij}(\ell) \end{equation}
where $\gamma_{\rm int}$ is the intrinsic galaxy ellipticity, whose value is given in Table \ref{tab:WL-specifications} for each survey. The shot noise term $n_{i}^{-1}$ is computed from
\begin{equation} n_{i}=3600\left(\frac{180}{\pi}\right)^{2}n_{\theta}/\mathcal{N}_{bin} \end{equation}
with $n_{\theta}$ the total number of galaxies per $\text{arcmin}^2$ and the index $i$ standing for each redshift bin. Since the redshift bins have been chosen such that each of them contains the same number of galaxies (equi-populated redshift bins), the shot noise term is equal for each bin. The matrix $K_{ij}(\ell)$ is a diagonal ``cutoff'' matrix, discussed for the first time in \cite{casas_fitting_2015}, whose entries increase to very high values at the scale where the power spectrum $P(k)$ has to be cut to avoid the inclusion of uncertain or unresolved non-linear scales. We choose to add this matrix to have further control over the inclusion of non-linearities. Without this matrix, due to the redshift-dependent relation between $k$ and $\ell$, a very high $\ell_{\mathrm{max}}$ would correspond, at low redshifts, to a very high $k_{\rm max}$ at which we no longer trust the accuracy of the non-linear power spectrum. Therefore, the sum in Eqn.\ (\ref{eq:FisherSum-WL}) is limited by the minimum scale imposed either by $\ell_{\mathrm{max}}$ or by $k_{\rm max}$, which is the maximum wavenumber considered in the matter power spectrum $P(k,z)$. As we did for Galaxy Clustering, we use for linear forecasts $k_{\rm max}=0.15$ h/Mpc and for non-linear forecasts $k_{\rm max}=0.5$ h/Mpc.
\begin{table}[h] \centering{} \begin{tabular}{|c|ccc|c|} \hline \Tstrut \textbf{Parameter} & \textbf{Euclid} & \textbf{SKA1} & \textbf{SKA2} & \textbf{Description}\tabularnewline \hline \Tstrut $f_{\rm sky}$ & 0.364 & 0.121 & 0.75 & Fraction of the sky covered\tabularnewline $\sigma_{z}$ & 0.05 & 0.05 & 0.05 & Photometric redshift error\tabularnewline $n_{\theta}$ & 30 & 10 & 2.7 & Number of galaxies per $\text{arcmin}^{2}$\tabularnewline $\gamma_{\rm int}$ & 0.22 & 0.3 & 0.3 & Intrinsic galaxy ellipticity \tabularnewline $z_{0}$ & 0.9 & 1.0 & 1.6 & Median redshift over $\sqrt{2}$ \tabularnewline $\mathcal{N}_{bin}$ & 12 & 12 & 12 & Total number of tomographic redshift bins \tabularnewline \hline \end{tabular}\protect\caption{\label{tab:WL-specifications} Specifications for the Weak Lensing surveys Euclid, SKA1 and SKA2 used in this work. Other needed quantities can be found in the references cited in section \ref{sub:FutureSurveys}.
For all WL surveys we use a redshift range between $z=0.5$ and $z=3.0$, divided into $\mathcal{N}_{bin}$ equi-populated redshift bins.} \end{table}
\subsection{Covariance and correlation matrix and the Figure of Merit\label{sec:covcorr}} The covariance matrix is defined for a \emph{d}-dimensional vector $p$ of random variables as
\begin{equation} \mathbf{C}=\langle \Delta p \Delta p^{T} \rangle\label{eq:covariance_def} \end{equation}
with $\Delta p = p - \langle p \rangle$ and the angular brackets $\langle \, \rangle$ representing an expectation value. The matrix $\mathbf{C}$, with all its off-diagonal elements set to zero, is called the variance matrix $\mathbf{V}$ and contains the square of the errors $\sigma_{i}$ for each parameter $p_{i}$,
\begin{equation} \mathbf{V} \equiv \mathrm{diag}(\sigma_{1}^{2},...,\sigma_{d}^{2})\,\,\,. \end{equation}
The Fisher matrix $\mathbf{F}$ is the inverse of the covariance matrix,
\begin{equation} \mathbf F= \mathbf C^{-1}\,\,\,. \end{equation}
The correlation matrix $\mathbf P$ is obtained from the covariance matrix $\mathbf C$ in the following way:
\begin{equation} P_{ij}=\frac{C_{ij}}{\sqrt{C_{ii}C_{jj}}} \, . \label{eq:correlation_def} \end{equation}
If the covariance matrix is non-diagonal, then there are correlations among some elements of $p$. We can also observe this by plotting the marginalized error ellipsoidal contours. The orientation of the ellipses tells us whether two variables $p_{i}$ and $p_{j}$ are correlated ($P_{ij}>0$), corresponding to ellipses whose major axis is tilted to the right of the vertical, or anti-correlated ($P_{ij}<0$), corresponding to ellipses whose major axis is tilted to the left of the vertical. To summarize the information contained in the Fisher/covariance matrices we can define a Figure of Merit (FoM). Here we choose the logarithm of the determinant, while another possibility would be the Kullback-Leibler divergence, which is a measure of the information entropy gain, see Appendix \ref{sec:KL}. The square root $\sqrt{\det(\mathbf{C})}$ of the determinant of the covariance matrix is proportional to the volume of the error ellipsoid. We can see this if we rotate our coordinate system so that the covariance matrix is diagonal, $\mathbf C = {\rm diag}(\sigma_1^2, \sigma_2^2, \ldots, \sigma_{d}^{2})$: then $\det(\mathbf C) = \prod_i \sigma_i^2$ and $(1/2)\ln(\det(\mathbf C)) = \ln \prod_i\sigma_i$ indeed represents the logarithm of an error volume. Thus, the smaller the determinant (and therefore also $\ln(\det(\mathbf{C}))$), the smaller the ellipsoid and the stronger the constraints on the parameters. We define
\begin{equation}\label{eq:FoM} \mathrm{FoM} = -\frac{1}{2} \ln(\det(\mathbf{C})) \, , \end{equation}
with a negative sign in front such that stronger constraints lead to a higher Figure of Merit. In the following, the value of the FoM reported in all tables will be obtained including only the dark energy parameters (i.e.\ the $(\mu_i,\eta_i)$ sub-block for the binned case and the $(\mu,\eta)$ sub-block in the smooth functions case), after marginalizing over all other parameters. The FoM allows us to compare not only the constraining power of different probes but also that of different experiments.
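As a self-contained toy illustration of these definitions (and of the Figure of Correlation introduced below), the following Python snippet builds the covariance, the correlation matrix and the FoM from an invented $3\times3$ Fisher matrix; the numbers carry no physical meaning.
\begin{verbatim}
import numpy as np

# Invented 3x3 Fisher matrix, used only to illustrate the definitions.
F = np.array([[ 40.0, -12.0,   3.0],
              [-12.0,  25.0,  -6.0],
              [  3.0,  -6.0,  10.0]])

C = np.linalg.inv(F)                    # covariance = inverse Fisher matrix
sigma = np.sqrt(np.diag(C))             # fully marginalized 1-sigma errors
P = C / np.outer(sigma, sigma)          # correlation matrix P_ij

FoM = -0.5 * np.log(np.linalg.det(C))   # Figure of Merit, in nits
FoC = -0.5 * np.log(np.linalg.det(P))   # Figure of Correlation (defined below)

print("errors:", sigma)
print("FoM = %.2f nits, FoC = %.2f nits" % (FoM, FoC))
\end{verbatim}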
As the absolute value depends on the details of the setup, we define the relative figure of merit between probe $a$ and probe $b$: $\mathrm{FoM}_{a,b} = -1/2 \ln(\det(\mathbf{C_a})/\det(\mathbf{C_b})) = \mathrm{FoM}_{a}-\mathrm{FoM}_{b}$, and we fix our reference case (probe $b$), for each parametrization, to the Galaxy Clustering observation using linear power spectra with the Euclid survey (labeled as `Euclid Redbook GC-lin' in all figures and tables). The FoM has units of `nits', since we are using the natural logarithm. These are similar to `bits', but `nits' are counted in base $e$ instead of base 2. An analogous construction allows us to study quantitatively the strength of the correlations encoded by the correlation matrix $\mathbf P$. We define the `Figure of Correlation' (FoC) as:
\begin{equation}\label{eq:FoC} \mathrm{FoC} = -\frac{1}{2} \ln(\det(\mathbf{P})) \, . \end{equation}
If the parameters are independent, i.e.\ fully decorrelated, then $\mathbf{P}$ is just the unit matrix and $\ln(\det(\mathbf{P}))=0$. Off-diagonal correlations will decrease the logarithm of the determinant, therefore making the FoC larger. From a geometrical point of view, the determinant expresses a volume spanned by the vector of (normalized) variables. If these variables are independent, the volume is maximal and equal to one, while if they are strongly linearly dependent, the volume is squeezed; in the limit where all variables are effectively the same, the volume is reduced to zero. Hence, a more positive FoC indicates a stronger correlation of the parameters.
\subsection{CMB {\it Planck}\ priors} \label{sub:Fisher-Planck} Alongside the information brought by LSS probes, we also include CMB priors on the parameterizations considered. In order to obtain these, we analyze the binned and parameterized approaches described in Section \ref{sec:Parameterizing-Modified-Gravity} with the {\it Planck}+BSH combination of CMB and background (BAO+SN-Ia+$H_{0}$) datasets discussed in the {\it Planck}\ Dark Energy and Modified Gravity paper \cite{planck_collaboration_planck_2016}. We use a Markov Chain Monte Carlo (MCMC) approach, using the publicly available code \texttt{COSMOMC} \cite{lewis_cosmological_2002,lewis_efficient_2013}, interfaced with our modified version of \texttt{MGCAMB}. The MCMC chains sample the parameter vector $\Theta$ which contains the standard cosmological parameters $\{\omega_{b}\equiv\Omega_{b}h^{2},\,\omega_{c}\equiv\Omega_{c}h^{2},\,\theta_{\rm MC},\,\tau,n_{s},\ln{10^{10}A_{s}}\}$, to which we add the $E_{ij}$ parameters when we parameterize the time evolution of $\mu$ and $\eta$ with continuous functions of the scale factor, and the $\mu_{i},\ \eta_{i}$ parameters in the binned case. On top of these, we also include the 17 nuisance parameters of the {\it Planck}\ analysis. From the MCMC analysis of the {\it Planck}\ likelihood we obtain a covariance matrix in terms of the parameters $\Theta$. We marginalize over the nuisance parameters and over the optical depth $\tau$, since the latter does not enter the physics of large scale structure formation. The parameter $\theta$ is usually the ratio of the sound horizon to the angular diameter distance at the time of decoupling.
Since calculating the decoupling redshift $z_{\rm CMB}$ is relatively time consuming, as it involves the minimization of the optical transfer function, \texttt{COSMOMC} uses instead an approximate quantity $\theta_{\rm MC}$ based on the following fitting formula from \cite{hu_small_1996}:
\begin{align} z_{\rm CMB} & =1048\left(1+0.00124\,\omega_{b}^{-0.738}\right)\left(1+g_{1}\,(\omega_{d}+\omega_{b})^{g_{2}}\right)\,,\nonumber \\ g_{1} & =\frac{0.0783\,\omega_{b}^{-0.238}}{1+39.5\,\omega_{b}^{0.763}}\,,\qquad g_{2}=\frac{0.560}{1+21.1\,\omega_{b}^{1.81}}\,, \end{align}
where $\omega_{d}\equiv(\Omega_{c}+\Omega_{\nu})h^{2}$. The sound horizon is defined as
\begin{equation} r_{s}(z_{\rm CMB})=cH_{0}^{-1}\int_{z_{\rm CMB}}^{\infty}\mbox{d}z\frac{c_{s}}{E(z)} \end{equation}
where the sound speed is $c_{s}=1/\sqrt{3(1+\overline{R}_{b}a)}$, with the baryon-to-photon ratio $\overline{R}_{b}a=3\rho_{b}/4\rho_{\gamma}$ and $\overline{R}_{b}=31500\Omega_{b}h^{2}(T_{\rm CMB}/2.7\,\mbox{K})^{-4}$; \texttt{CAMB}, however, approximates it as $\overline{R}_{b}a=30000a\Omega_{b}h^{2}$. Therefore we first marginalize the covariance matrix over the nuisance parameters and the parameter $\tau$, which cannot be constrained by LSS observations. Then we invert the resulting matrix to obtain a {\it Planck}\ prior Fisher matrix, and use a Jacobian to convert between the MCMC parameter basis $\Theta_{i}$ and the GC-WL parameter basis $\theta_{i}$. We use the formulas above for the sound horizon $r_{s}$ and the angular diameter distance $d_{A}$ to calculate the derivatives of $\theta_{\rm MC}$ with respect to the parameters of interest. Our Jacobian is then simply (see Appendix \ref{sec:appjac} for details)
\begin{equation} J_{ij}=\frac{\partial\Theta_{i}}{\partial\theta_{j}} \, . \end{equation}
\section{\label{sec:Results:-Redshift-Binned}Results: Euclid forecasts for redshift binned parameters} In this section we analyze the Modified Gravity functions $\mu(a)$ and $\eta(a)$, described in Section \ref{sec:Parameterizing-Modified-Gravity}, when they are allowed to vary freely in five redshift bins. For this purpose, we calculate a Fisher matrix of fifteen parameters: five for the standard $\Lambda\mathrm{CDM}$ parameters $\{\Omega_{m},\Omega_{b},h,\ln10^{10}A_{s},n_{s}\}$, five for $\mu$ (one for each bin amplitude $\mu_{i}$) and five for $\eta$ (one for each bin amplitude $\eta_{i}$), corresponding to the 5 redshift bins z=\{0-0.5, 0.5-1.0, 1.0-1.5, 1.5-2.0, 2.0-2.5\}. The fiducial values for all fifteen parameters were calculated by running a Markov Chain Monte Carlo analysis with {\it Planck}\ likelihood data and can be found in Table \ref{tab:fiducial-MG-AllCases}. We first show the constraints on our 15 parameters for Galaxy Clustering (GC) forecasts in subsection \ref{sub:GC-Correlations}, while in subsection \ref{sub:Weak-Lensing} we report results for Weak Lensing (WL). In subsection \ref{sub:Combined-GC-WL-Planck-Binned}, we comment on the combination of forecasts for GC+WL together with {\it Planck}\ data. All forecasts are performed using Euclid Redbook specifications. Other surveys will be considered for the other two time parameterizations in sections \ref{subsub: other-surveys-late-time} and \ref{subsub: other-surveys-early-time}.
For each case, we show the correlation matrix obtained from the covariance matrix and argue that the redshift-binned parameters show a strong correlation; we therefore illustrate the decorrelation procedure for the covariance matrix in subsection \ref{sub:Decorrelation-of-covariance}, where we also include the combined GC+WL and GC+WL+{\it Planck} cases.
\subsection{\label{sub:GC-Correlations}Euclid Galaxy Clustering Survey} For the Galaxy Clustering survey, we give results for two cases: one using only linear power spectra up to a maximum wavenumber of $k_{\rm max}=0.15$ h/Mpc, and another using non-linear power spectra up to $k_{\rm max}=0.5$ h/Mpc, obtained using the HS parameterization of Eqn.\ (\ref{eq:PHSDefinition}). For the redshift-binned case, we will report forecasts only for a Euclid survey, using Euclid Redbook specifications which are detailed in section \ref{sub:Fisher-Galaxy-Clustering}.
\begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{figures/Decorrelations-GC/correlation-full-fiducialMGBin3-Euclid-GC-linearPK-} \includegraphics[width=0.4\textwidth]{figures/Decorrelations-GC/correlation-full-fiducialMGBin3-Euclid-GC-nonlinearPk__Zhao-} \caption{\label{fig:GCcorr} Correlation matrix $\mathbf P$ defined in (\ref{eq:correlation_def}) obtained from the covariance matrix in the MG-binning case, for a Galaxy Clustering Fisher forecast using Euclid Redbook specifications. \textbf{Left panel:} Linear forecasts. Here there are strong positive correlations among the $\mu_i$ and $\eta_i$ parameters and anti-correlations between $\ln10^{10}A_{s}$ and the $\mu_i$ parameters, as well as between $\mu_i$ and $\eta_i$. The FoC in this case is $\approx 65$ (see Eqn.\ (\ref{eq:FoC}) for its definition). \textbf{Right panel:} Non-linear forecasts using the HS prescription. Interestingly, the anti-correlations between $\ln10^{10}A_{s}$ and $\mu_i$ have disappeared, as well as the correlations among the $\mu_i$ parameters. The FoC is in this case $\approx 32$, meaning that the variables are much less correlated than in the linear case. This is due to the fact that taking into account non-linear structure formation breaks degeneracies between the primordial amplitude parameter and the modifications to the Poisson equation.} \end{figure}
We calculate the Fisher matrix for the 15 parameters $\theta=\{\Omega_{m},\Omega_{b},h,\ln10^{10}A_{s},n_{s},\mu_{i},\eta_{i}\}$, where $\eta_{i}$ and $\mu_{i}$ represent ten independent parameters, one for each function at each of the 5 redshift bins corresponding to the redshifts z=\{0-0.5, 0.5-1.0, 1.0-1.5, 1.5-2.0, 2.0-2.5\}. As a standard procedure, we marginalize over the unknown bias parameters. From the covariance matrix, defined previously in Eqn.\ (\ref{eq:covariance_def}), we obtain the correlation matrix $P_{ij}$ defined in Eqn.\ (\ref{eq:correlation_def}) for the set of parameters $\theta_{i}$. In Figure \ref{fig:GCcorr} we show the matrix $P_{ij}$ in the linear (left panel) and non-linear-HS (right panel) cases. Redder (bluer) colors signal stronger correlations (anti-correlations). A covariance matrix that contains strong correlations between parameters A and B means that the experimental or observational setting has difficulty distinguishing between A and B for the assumed theoretical model, i.e.\ it represents a parameter degeneracy. Therefore, if parameter A is poorly constrained, parameter B will be poorly constrained as well.
The appearance of correlations among parameters is linked to the non-diagonal elements of the covariance matrix. Consequently, the fully marginalized errors on a single parameter will be larger if there are strong correlations, and will be smaller (closer to the value of the fully maximized errors) if the correlations are negligible. In the linear case, the $\mu_{i}$ and $\eta_{i}$ parameters show correlations among each other, while the primordial amplitude parameter $\ln10^{10}A_{s}$ exhibits a strong anti-correlation with all the $\mu_{i}$. This can be explained considering that a larger growth of structures in linear theory can also be mimicked with a larger initial amplitude of density fluctuations. Interestingly, including non-linear scales in the analysis (right panel of Fig.\ \ref{fig:GCcorr}) leads to a strong suppression of the correlations among the $\mu_i$. Also the correlation between these and $\ln10^{10}A_{s}$ is suppressed, as a change in the initial amplitude of the power spectrum is not able to compensate for a modified Poisson equation when non-linear evolution is considered. As discussed in Section \ref{sec:covcorr}, we can also express the difference between the correlation matrix of the linear forecast and the non-linear forecast in a more quantitative way, by computing the determinant of the correlation matrix, or equivalently the FoC (\ref{eq:FoC}). If the correlations were negligible, this determinant would be equal to one (and therefore its FoC would be 0), while if the correlations were strong, the determinant would be closer to zero, with a corresponding large positive value of the FoC. For the linear forecast, the FoC is about $62$, while for the non-linear forecast, it is much smaller at approximately $35$. In Table \ref{tab:errors-all-MGBin3} we show the 1$\sigma$ constraints obtained on $\ln{(10^{10}A_s)}$ and on the $\mu_i$ and $\eta_i$ parameters, both in the linear and non-linear cases for a Euclid Redbook GC survey (top rows). While linear GC alone ($k_{\rm max}=0.15$ h/Mpc) is not very constraining in any bin, the inclusion of non-linear scales ($k_{\rm max}=0.5$ h/Mpc) drastically reduces errors on the $\mu_{i}$ parameters: the first three bins in $\mu_i$ ($0 < z < 1.5$) are the best constrained, to less than $10\%$, with the corresponding $\eta_i$ constrained at 20$\%$ by non-linear GC alone. This is also visible in the FoM, which increases by 19 nits (`natural units', similar to bits but using base $e$ instead of base 2), nearly 4 nits per redshift bin on average, when including the non-linear scales. The fact that the error on $\ln10^{10}A_{s}$ improves from 90\% to 0.68\% shows that the decorrelation induced by the non-linearities breaks the degeneracy with the amplitude and therefore improves considerably the determination of cosmological parameters. This shows that it is important to include non-linear scales in GC surveys (and not only in Weak Lensing ones, where this is usually more expected, as will be shown in the next subsection).
\begin{table}[htbp] \centering{}%
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c||c|} \hline \Tstrut \textbf{Euclid} (Redbook) & $\ell \mathcal{A}_{s}$ & $\mu_{1}$ & $\mu_{2}$ & $\mu_{3}$ & $\mu_{4}$ & $\mu_{5}$ & $\eta_{1}$ & $\eta_{2}$ & $\eta_{3}$ & $\eta_{4}$ & $\eta_{5}$ & MG FoM \tabularnewline \hline \Tstrut Fiducial & 3.057 & 1.108 & 1.027 & 0.973 & 0.952 & 0.962 & 1.135 & 1.160 & 1.219 & 1.226 & 1.164 & relative\tabularnewline \hline \Tstrut \textbf{GC (lin)} \Tstrut \input{latex-tables/errorsSigma-percentage-fiducialMGBin3-Euclid-GC-linearPK-v13.tex} & 0 \tabularnewline \Tstrut \textbf{GC (nl-HS)} \Tstrut \input{latex-tables/errorsSigma-percentage-fiducialMGBin3-Euclid-GC-nonlinearPk__Zhao-v13.tex} & 19 \tabularnewline \hline \hline \Tstrut \textbf{WL (lin)} \input{latex-tables/errorsSigma-percentage-fiducialMGBin3-Euclid-WL-linearPK-v13.tex} & -27\tabularnewline \Tstrut \textbf{WL (nl-HS)} \input{latex-tables/errorsSigma-percentage-fiducialMGBin3-Euclid-WL-nonlinearPk__Zhao-v13.tex} & -10 \tabularnewline \hline \hline \Tstrut \textbf{GC+WL (lin)} \input{latex-tables/errorsOneSigma-Percentage-GC+WL--linearPk-.tex} & 12 \tabularnewline \Tstrut \textbf{GC+WL+{\it Planck} (lin)} \input{latex-tables/errorsOneSigma-Percentage-GC+WL+Planck--linearPk-.tex} & 27 \tabularnewline \hline \hline \Tstrut \textbf{GC+WL (nl-HS)} \input{latex-tables/errorsSigma-percentage-fiducialMGBin3-Euclid-GC+WL-nonlinearPk__Zhao-v13.tex} & 24 \tabularnewline \Tstrut \textbf{GC+WL+{\it Planck}} & & & & & & & & & & & & \tabularnewline \textbf{(nl-HS)} \input{latex-tables/errorsSigma-percentage-fiducialMGBin3-Euclid-GC+WL+Planck-nonlinearPk__Zhao-v13.tex} & 33 \tabularnewline \Tstrut \textbf{GC+WL+{\it Planck}} & & & & & & & & & & & & \tabularnewline \textbf{(nl-Halofit)} \input{latex-tables/errorsSigma-percentage-fiducialMGBin3-Euclid-GC+WL+Planck-nonlinearPk--v13.tex} & 33 \tabularnewline \hline \end{tabular}\protect\caption{\label{tab:errors-all-MGBin3} 1$\sigma$ fully marginalized errors (as a percentage of the corresponding fiducial) on cosmological parameters for Euclid (Redbook) Galaxy Clustering and Weak Lensing surveys, alone and combining the two probes. We compare forecasts using linear spectra (lin) and forecasts using the non-linear HS prescription (nl-HS). In Galaxy Clustering, the cutoff is set to $k_{\rm max}=0.15$ h/Mpc in the linear case and $k_{\rm max}=0.5$ h/Mpc in the non-linear case. For WL, the maximum cutoff in the linear case is at $\ell_{\rm max} = 1000$, while in the nl-HS case it is $\ell_{\rm max} = 5000$. In the bottom rows, we additionally include a {\it Planck}\ prior (see section \ref{sub:Fisher-Planck}). For comparison, we also show in the last row the combined GC+WL+{\it Planck}, using just Halofit power spectra. The last column indicates the relative Figure of Merit ($\text{FoM}_{a,b}$) of the MG parameters in nits (`natural units', i.e.\ using the natural logarithm), with respect to our reference GC linear case, see (\ref{eq:FoM}) and surrounding text. A larger FoM indicates a more constraining probe. We notice a considerable improvement, in both GC and WL, when non-linearities are included. The combination GC+WL in the linear case constrains the MG parameters in the first two bins ($z < 1.0$) to less than 10$\%$, and including {\it Planck}\ priors allows us to access higher redshifts with the same accuracy.
A significant improvement in the constraints is obtained when adding the non-linear regime, in agreement with the observed reduction in correlation seen in Figs.\ \ref{fig:GCcorr} and \ref{fig:WLcorr}. This is especially well exemplified by the error on $\ell \mathcal{A}_s \equiv \ln(10^{10} A_{s})$, which reduces from 160\% to 0.82\% from the linear to the non-linear forecast in the GC case, and from 640\% to 7.3\% in the WL case. Finally, we note that since we are showing errors on $\mu$ and $\eta$, WL seems to be unfairly poor at constraining parameters. However, when converting these errors into errors on $\Sigma$, which is directly measured by WL, the constraints on $\Sigma_{1,2,3}$ are slightly better, of the order of 40\% for WL(nl-HS), as can be guessed from the degeneracy directions shown in Fig.\ \ref{fig:DE+Planck-ellipses-mu-sig-eta}. The FoM itself is nearly unaffected by the choice of $\{\mu,\eta\}$ vs $\{\mu,\Sigma\}$ as it is rotationally invariant. } \end{table}
\subsection{\label{sub:Weak-Lensing}Euclid Weak Lensing Survey} In the case of Weak Lensing, the linear forecast is performed with linear power spectra up to a maximum multipole $\ell_{\rm max}=1000$, while the non-linear forecast is performed with non-linear spectra up to a maximum multipole of $\ell_{\rm max}=5000$, as explained in section \ref{sec:Fisher-Matrix-method}. Since we limit our power spectrum to a maximum in $k$-space, as explained in section \ref{sub:Fisher-Weak-Lensing}, these multipole values are not reached at every redshift. As for GC, it is very important for WL to include information from the non-linear power spectrum, since most of the constraining power of next-generation surveys like Euclid lies in that range. In Figure \ref{fig:WLcorr} we show the correlation matrices for the linear (left panel) and non-linear (right panel) Fisher Matrix forecasts. In this case, as opposed to the GC correlation matrices, it is not visually clear which case is more correlated than the other. On closer inspection, in the linear case we can observe strong anti-correlations between the $\mu_{j}$ and $\eta_{j}$ parameters for $j=\{2,3,4,5\}$ and an anti-correlation between $\eta_{1}$ and the primordial amplitude $\ln10^{10}A_{s}$. In the non-linear case, the primordial amplitude parameter is effectively decorrelated from the Modified Gravity parameters, and the anti-correlation between $\mu_{j}$ and the $\eta_{j}$ affects the first three bins (effectively increasing degeneracies in the first bin). The anti-correlation between these two sets of parameters is expected, since WL is sensitive to $\Sigma$, which is a product of $\eta$ and $\mu$, cf.\ Eqn.\ (\ref{eq:SigmaofMuEta}). The decrease of correlation when going from the linear to the non-linear case is confirmed, also quantitatively, by the FoC: in the linear case it is $\approx 69$, larger than the $\approx 32$ of the non-linear case. Once again, the inclusion of non-linear scales breaks degeneracies, especially between the linear amplitude of the power spectrum and the MG parameters.
\begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{figures/Decorrelations-WL/correlation-full-fiducialMGBin3-Euclid-WL-linearPK-} \includegraphics[width=0.4\textwidth]{figures/Decorrelations-WL/correlation-full-fiducialMGBin3-Euclid-WL-nonlinearPk__Zhao-} \caption{\label{fig:WLcorr} Correlation matrix obtained from the covariance matrix in the MG-binning case, for a Weak Lensing Fisher Matrix forecast using Euclid Redbook specifications.
\textbf{Left panel:} linear forecasts. Strong anti-correlations are present between the $\mu_i$ and the $\eta_i$ parameters for the same value of $i$, with $i\in\{2,3,4,5\}$. The amplitude parameter $\ln(10^{10}A_{s})$ is mostly uncorrelated, except with $\eta_1$. The FoC (\ref{eq:FoC}) in this case is approximately $69$. \textbf{Right panel:} non-linear forecasts using the HS prescription. Here the same trend as in the linear case is present, with just subtle changes. The FoC in this case is about $32$, meaning that the variables are indeed less correlated than in the linear case. The parameter $\ln(10^{10}A_{s})$ is effectively not correlated with other parameters, and the anti-correlation of $\mu_{i}$ and $\eta_{i}$ for the same value of the index $i$ is present in the first three bins. The anti-correlation between these two sets of parameters is expected, since WL is sensitive to the Weyl potential $\Sigma$, which is a product of $\mu$ and $\eta$.} \end{figure}
Table \ref{tab:errors-all-MGBin3} shows the corresponding 1$\sigma$ marginalized errors on $\ln{(10^{10}A_s)}$ and on the $\mu_i$ and $\eta_i$, both in the linear and non-linear cases for a Euclid Redbook WL survey. As in the case of GC, linear WL alone cannot constrain any of the amplitudes of the Modified Gravity parameters $\mu_i$ and $\eta_i$ in any redshift bin. Including non-linear scales improves constraints on the amplitude $\ln(10^{10}A_{s})$ by a factor of 100. The 1$\sigma$ errors on $\mu_{i}$ and $\eta_{i}$ improve by up to one order of magnitude, with the FoM increasing by 17 nits, although WL alone remains rather unconstraining for Modified Gravity parameters in all redshift bins. Notice, however, that the 1$\sigma$ error on $\mu_1$ from WL in the linear case is slightly smaller than in the non-linear case. This can be attributed to the fact that in the linear case $\mu_1$ is uncorrelated with any other parameter, as shown in Figure \ref{fig:WLcorr}, and in that specific bin ($0 < z < 0.5$) non-linearities do not seem to improve the constraints on this parameter.
\subsection{Combining Euclid Galaxy Clustering and Weak Lensing, with {\it Planck}\ data \label{sub:Combined-GC-WL-Planck-Binned}} The combination of Galaxy Clustering and Weak Lensing is expected to be very powerful for Modified Gravity parameters, as the two probes measure two different combinations of $\mu$ and $\eta$, thus breaking their degeneracy, as illustrated in Fig.\ \ref{fig:DE+Planck-ellipses-mu-sig-eta}. This is shown in Table \ref{tab:errors-all-MGBin3}, where the sensitivity drastically increases with respect to the two separate probes, especially in the low redshift bins ($0<z<1.5$), where the lensing signal is dominant. Adding non-linearities further doubles the FoM. The {\it Planck}\ data constrains mostly the standard $\Lambda\mathrm{CDM}$ parameters and has only a limited ability to constrain the MG sector. However, the additional information breaks parameter degeneracies and in this way significantly decreases the uncertainties on all parameters, so that the linear GC+WL+{\it Planck} is comparable to the non-linear GC+WL. Also, quantitatively, the correlation among parameters is reduced by combining GC+WL with {\it Planck}\ data: the FoC in this case is $\approx 22$. We also see in Table \ref{tab:errors-all-MGBin3} that the differences between the non-linear prescription adopted here (nl-HS) and a straightforward application of Halofit to the MG case (nl-Halofit) are not very large.
We will investigate further the impact of the parameters used in the non-linear prescription in section \ref{sub:Testing-the-effect-of-Zhao}.
\subsection{\label{sub:Decorrelation-of-covariance}Decorrelation of covariance matrices and the Zero-phase Component Analysis} In the previous subsections we highlighted how the MG parameters and the amplitude of the primordial power spectrum exhibit significant correlations, and showed how including non-linearities helps to decorrelate them. Even without including non-linearities, however, it is interesting to investigate how we can completely decorrelate the parameters, identifying in this way those parameter combinations which are best constrained by the data. Given again a \emph{d}-dimensional vector $p$ of random variables (our originally correlated parameters), we can calculate its covariance matrix $\mathbf C$ defined in Eqn.\ (\ref{eq:covariance_def}). Decorrelation is the process of making the matrix $\mathbf C$ diagonal. Let us first define some important identities. The covariance matrix can be decomposed into its eigenvalues (the elements of a diagonal matrix $\Lambda$) and eigenvectors (the columns of an orthogonal matrix $U$),
\begin{equation} \mathbf C=U\Lambda U^{T} \quad \Leftrightarrow \quad \mathbf F=U\Lambda^{-1}U^{T} \,\,\, , \label{eq:eigensystemofC} \end{equation}
where $\mathbf F$ is the Fisher Matrix. It is possible to show that, for suitable choices of a transformation matrix $W$ applied to the parameter vector $p$, yielding a new vector of variables $q=Wp$, the covariance matrix of the transformed vector $q$ is whitened, i.e.\ it becomes the identity matrix:
\begin{align} \mathbf {\tilde{C}} & =W\langle\Delta p\Delta p^{T}\rangle W^{T} =\langle\Delta q\Delta q^{T}\rangle \label{eq:Cwhitened}\\ & = \mathbb{1} \,\,\, . \nonumber \end{align}
This means that the transformed $q$ parameters are decorrelated, since their correlation matrix is diagonal. The choice of $W$ is not unique, as several possibilities exist; in the rest of the paper we focus on a particular choice, referred to as Zero-phase Component Analysis (ZCA, first introduced by \cite{Bell19973327} in the context of image processing), but we show two other possible choices and their effect on the analysis in Appendix \ref{sec:appdec}. Zero-phase Component Analysis (sometimes also called Mahalanobis transformation \cite{kessy_optimal_2015}) is a specific choice of decorrelation method that minimizes the squared norm of the difference between the vectors $\vec q$ and $\vec p$, $\|\vec{p}-\vec{q}\|^{2}$, under the constraint that the components of $\vec{q}$ be uncorrelated \cite{kessy_optimal_2015}. In this way the uncorrelated variables $q$ will be as close as possible to the original variables $p$ in a least-squares sense. This is achieved by using the $W$ matrix
\begin{equation} W \equiv F^{1/2} \,\,\, . \end{equation}
With this choice the covariance matrix is whitened, following Eqn.\ (\ref{eq:Cwhitened}):
\begin{equation} \tilde{C} =F^{1/2}F^{-1}F^{1/2}=\mathbb{1} \,\,\, .
\end{equation}
In our case, since we do not want to whiten, but just decorrelate the covariance matrix, we renormalize the $W$ matrix by multiplying it with a diagonal matrix $N_{ii}\equiv\big(\sum_{j}W_{ij}^{2}\big)^{-1/2}$, such that the sum of the squares of the elements on each row of the new weighting matrix $\tilde{W} \equiv N W$ is equal to unity; therefore the final transformed covariance is still diagonal, but is not the identity matrix:
\begin{equation} \tilde{C}=\tilde{W}C\tilde{W}^{T}=N^{2} \label{eq:Ctilde-decor} \end{equation}
and at the same time we ensure that each new variable $q_i$ is a unit-norm linear combination of the original variables $p_i$.
\subsubsection{ZCA for Galaxy Clustering \label{subsub:ZCA-GC}} From Fig.\ \ref{fig:GCcorr} we can see that the correlations are present in sub-blocks, one for the standard $\Lambda\mathrm{CDM}$ parameters and another one for the Modified Gravity parameters. The exception lies in the linear case, where $\ell \mathcal{A}_{s} \equiv \ln{(10^{10}A_s)}$ is strongly anti-correlated with all the $\mu_i$ and positively correlated with the $\eta_i$. To use a more objective criterion, we choose the $10\times 10$ block of MG parameters $\mu_i$ and $\eta_i$ with parameter indices 6 to 15, and only add to this block a $\Lambda\mathrm{CDM}$ parameter with index $a$ if the following condition is satisfied: \[ \sum_{i=6}^{15}(P_{ai})^2 \geq 1 \] where the index $a$ corresponds to one of the first five standard parameters. For Galaxy Clustering, the only index satisfying this condition is $a=4$ in the linear case, corresponding to $\ell \mathcal{A}_{s} \equiv \ln{(10^{10}A_s)}$: i.e.\ the standard parameter corresponding to the amplitude is, as said, degenerate with the Modified Gravity parameters $\mu_i$ and $\eta_i$. In the non-linear case no parameter satisfies this condition (because, as we have seen, non-linearities are able to eliminate the correlation with the amplitude), but for consistency we will use the same vector of 11 parameters $p_i = \left\{ \ell \mathcal{A}_{s}, \mu_i, \eta_i \right\}$ for our decorrelation procedure. Therefore we will also have 11 transformed uncorrelated $q_i$ parameters, functions of the original $p_i$ parameters, in all the cases presented below. Figure \ref{fig:Wmat-ZCA-GC} shows the coefficients that relate the $q_i$ parameters to the original $p_i$ ones, in the linear (left panel) and the non-linear (right panel) cases, also shown explicitly in Tables \ref{tab:Wcoeff-lin-GC} and \ref{tab:Wcoeff-nlHS-GC} of Appendix \ref{sec:Wmatrices}. We plot in Figure \ref{fig:GCbinerrs} a comparison between the 1$\sigma$ errors on the primary parameters $p_i$ (represented by circles connected with dark green dashed lines) and the decorrelated parameters $q_i$ (represented by squares connected with orange solid lines). In the linear case (left panel), we can see that the errors on the $q_i$ parameters are 2 orders of magnitude better than the errors on the $p_i$ parameters. In the non-linear case (right panel) the improvement is of at most 1 order of magnitude, and for a completely decorrelated parameter like $\ell \mathcal{A}_{s}$ the error on its corresponding $q_i$ is exactly the same. This shows that the decorrelation procedure is still worthwhile even when including the non-linear regime, where the degeneracy with the amplitude is already completely broken thanks to the non-linear prescription.
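A minimal numerical sketch of this ZCA construction, reusing the invented $3\times3$ Fisher matrix of the toy example in Section \ref{sec:covcorr}, could read as follows (only the definitions above are assumed):
\begin{verbatim}
import numpy as np

# ZCA decorrelation sketch: W = F^{1/2} from the eigendecomposition of F,
# then row-renormalized so that each q_i = sum_j Wtilde_ij p_j is a
# unit-norm combination of the original parameters.
F = np.array([[ 40.0, -12.0,   3.0],
              [-12.0,  25.0,  -6.0],
              [  3.0,  -6.0,  10.0]])    # invented, illustrative only
C = np.linalg.inv(F)

lam, U = np.linalg.eigh(F)               # F = U diag(lam) U^T
W = U @ np.diag(np.sqrt(lam)) @ U.T      # symmetric square root W = F^{1/2}
assert np.allclose(W @ C @ W.T, np.eye(3))      # whitened covariance

N = np.diag(1.0 / np.sqrt((W**2).sum(axis=1)))  # row renormalization
W_tilde = N @ W                          # rows of W_tilde have unit norm

C_tilde = W_tilde @ C @ W_tilde.T        # diagonal, but not the identity
print("decorrelated 1-sigma errors:", np.sqrt(np.diag(C_tilde)))
off = C_tilde - np.diag(np.diag(C_tilde))
print("largest off-diagonal element:", np.abs(off).max())
\end{verbatim}
The rows of $\tilde W$ are precisely the coefficients displayed in the weight matrices below, and the diagonal of $\tilde C$ gives the errors on the decorrelated $q_i$.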
The fact that the curve of 1$\sigma$ errors for the $q_i$ follows the same pattern as the curve for the $p_i$ errors is due to the fact that we have used a ZCA decomposition (see Section \ref{sub:Decorrelation-of-covariance}), and therefore the $q_i$ are as similar as possible to the $p_i$.
\begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{figures/Decorrelations-GC/Weight_Matrix_ZCA_SquareNorm--_fiducialMGBin3_Euclid_GC_linearPK_} \includegraphics[width=0.4\textwidth]{figures/Decorrelations-GC/Weight_Matrix_ZCA_SquareNorm--_fiducialMGBin3_Euclid_GC_nonlinearPk__Zhao_} \caption{\label{fig:Wmat-ZCA-GC} Entries of the matrix $W$ that relates the $q_i$ parameters to the original $p_i$ ones, after applying the ZCA decorrelation of the covariance matrix in the linear and non-linear GC cases. This matrix shows, for each new variable $q_i$ on the vertical axis, the coefficients of the linear combination of parameters $\mu_i$, $\eta_i$ and $A_s$ that give rise to that variable $q_i$. The red (blue) colors indicate a large (small) contribution of the respective variable on the horizontal axis. \textbf{Left panel:} linear forecast for Euclid Redbook specifications. \textbf{Right panel:} non-linear forecast for Euclid Redbook specifications, using the HS prescription. In both cases one can observe that most $q_i$ parameters have only small or negligible contributions from $\mu_5$ and $\eta_5$, which are found to be the least constrained bins. } \end{figure}
\begin{figure}[htbp] \centering{}\includegraphics[width=0.4\linewidth]{figures/Decorrelations-GC/Errors_at_par_index_i--_ZCA_SquareNorm--fiducialMGBin3_Euclid_GC_linearPK_} \includegraphics[width=0.4\linewidth]{figures/Decorrelations-GC/Errors_at_par_index_i--_ZCA_SquareNorm--fiducialMGBin3_Euclid_GC_nonlinearPk__Zhao_} \caption{\label{fig:GCbinerrs} Results for a Euclid Redbook GC survey, with redshift-binned parameters, before and after applying the ZCA decorrelation. Each panel shows the 1$\sigma$ fully marginalized errors on the primary parameters $p_i$ (green dashed lines), and the 1$\sigma$ errors on the decorrelated parameters $q_i$ (orange solid lines). \textbf{Left: } linear forecasts, performed using linear power spectra up to a maximum wavenumber $k_{\rm max}=0.15$ h/Mpc. \textbf{Right: } non-linear forecasts using non-linear spectra with the HS prescription up to a maximum wavenumber $k_{\rm max}=0.5$ h/Mpc. In the linear case, the errors on the decorrelated $q_i$ parameters are about 2 orders of magnitude smaller than for the primary parameters, while in the non-linear HS case the improvement in the errors is of one order of magnitude. This means that applying a decorrelation procedure is worthwhile even when non-linearities are considered. In both cases for GC, the least constrained parameters are $\mu_5$ and $\eta_5$, corresponding to $2.0 < z < 2.5$.} \end{figure}
\begin{figure}[htbp] \centering{}\includegraphics[width=0.4\linewidth]{figures/Decorrelations-GC/linear_GC--_best_constrained_modes-Errors_on_q_ZCA_SquareNorm--_fiducialMGBin3_Euclid_GC_linearPK_} \includegraphics[width=0.4\linewidth]{figures/Decorrelations-GC/non-linear_GC--_best_constrained_modes-Errors_on_q_ZCA_SquareNorm--_fiducialMGBin3_Euclid_GC_nonlinearPk__Zhao_} \caption{\label{fig:GCbestconst} Best constrained modes for a Euclid Redbook GC survey, with $\mu$ and $\eta$ binned in redshift, after transforming into uncorrelated $q$ parameters via ZCA.
Each of the four best constrained parameters $q_i$, shown in the panels, is a linear combination of the primary parameters $p_i$. The $q_i$ in the legends are ordered from left to right, from the best constrained to the least constrained.} \end{figure}
We are interested in finding the combinations of primary parameters $p_i$ giving rise to the best constrained uncorrelated parameters $q_i$. In order to find the errors on the parameters $q_i$, we need to look at the diagonal of the decorrelated covariance matrix $\mathbf{\tilde{C}}$ expressed in Eqn.\ (\ref{eq:Ctilde-decor}) and identify the $q_i$ parameters with the smallest relative errors ($\sigma_{q_i}/q_i$): we find that in the linear GC case, the best constrained combinations of primary parameters (ordered from most to least constrained) are given approximately by:
\begin{equation} \label{eq:bestCombined-GClin} \begin{aligned} q_1 &= +0.9 \ell \mathcal{A}_{s} + 0.32 \mu_4 \\ q_3 &= +0.75\mu_2 - 0.29\eta_1 + 0.50 \eta_2\\ q_4 &= -0.25\mu_2 + 0.74\mu_3 - 0.32 \eta_2 + 0.49 \eta_3 \\ q_2 &= +0.70\mu_1 - 0.30\mu_2 + 0.52 \eta_1 - 0.36 \eta_2 \,\,\, . \end{aligned} \end{equation}
In contrast, for the non-linear GC case, the parameter $\ell \mathcal{A}_{s} \equiv \ln{(10^{10}A_s)}$ is not correlated with any other, and therefore it is well constrained on its own. The four best constrained parameters (ordered from most to least constrained) in the non-linear case are:
\begin{equation} \label{eq:bestCombined-GCnonlin} \begin{aligned} q_1 &= +0.99\ell \mathcal{A}_{s} \\ q_4 &= -0.28\mu_2 + 0.76\mu_3 -0.33 \eta_2 + 0.47 \eta_3\\ q_3 &= +0.73\mu_2 -0.32 \eta_1 + 0.49 \eta_2\\ q_2 &= +0.68\mu_1 -0.35\mu_2 + 0.52 \eta_1 -0.37 \eta_2 \,\,\, . \end{aligned} \end{equation}
The best constrained decorrelated parameters $q_i$ for a Euclid GC survey, expressed in the set of Equations (\ref{eq:bestCombined-GClin}) (linear) and (\ref{eq:bestCombined-GCnonlin}) (non-linear HS), can be seen graphically in Fig.\ \ref{fig:GCbestconst} for the linear (left panel) and non-linear HS (right panel) cases, respectively. From these combinations we see that a survey like Euclid, using GC only, will be sensitive to the Modified Gravity parameters $\mu$ and $\eta$ mainly in the first three redshift bins, corresponding to the range $0 < z < 1.5$. The complete matrix $W$ of coefficients relating the $q_i$ to the $p_i$ parameters can be found in Tables \ref{tab:Wcoeff-lin-GC} and \ref{tab:Wcoeff-nlHS-GC} of Appendix \ref{sec:Wmatrices}.
\subsubsection{ZCA for Weak Lensing} We apply the same decorrelation procedure to the WL case, obtaining the $q$ vectors shown in the weight matrix of Figure \ref{fig:Wmat-ZCA-WL}, again reported explicitly in Tables \ref{tab:Wcoeff-lin-WL} and \ref{tab:Wcoeff-nlHS-WL} in Appendix \ref{sec:Wmatrices}.
\begin{figure}[htbp] \centering{}\includegraphics[width=0.4\linewidth]{figures/Decorrelations-WL/Weight_Matrix_ZCA_SquareNorm--_fiducialMGBin3_Euclid_WL_linearPK_}\includegraphics[width=0.4\linewidth]{figures/Decorrelations-WL/Weight_Matrix_ZCA_SquareNorm--_fiducialMGBin3_Euclid_WL_nonlinearPk__Zhao_} \caption{ Entries of the matrix $W$ that relates the $q_i$ parameters to the original $p_i$ ones, after applying the ZCA decorrelation of the covariance matrix in the linear and non-linear WL cases. This matrix shows, for each new variable $q_i$ on the vertical axis, the coefficients of the linear combination of parameters $\mu_i$, $\eta_i$ and $A_s$ that give rise to that variable $q_i$.
The red (blue) colors indicate a large (small) contribution of the respective variable on the horizontal axis. \textbf{Left panel:} linear forecast for Weak Lensing Euclid Redbook specifications. \textbf{Right panel:} non-linear forecast for Weak Lensing Euclid Redbook specifications, using the HS prescription. As for GC, most $q_i$ parameters have only small or negligible contributions from $\mu_5$ and $\eta_5$, which are found to be the least constrained bins. \label{fig:Wmat-ZCA-WL}} \end{figure}
In Figure \ref{fig:WLbinerrs} we show the comparison between the errors on the primary parameters $p_i$ and the decorrelated ones $q_i$. As in the GC case, the errors in the linear case improve by 2 orders of magnitude after applying the decorrelation procedure (left panel). In the non-linear case (right panel) the improvement is smaller, but still worthwhile, especially for constraining $q_2$, $q_3$, $q_7$ and $q_8$.
\begin{figure}[htbp] \centering{}\begin{center} \includegraphics[width=0.4\linewidth]{figures/Decorrelations-WL/Errors_at_par_index_i--_ZCA_SquareNorm--fiducialMGBin3_Euclid_WL_linearPK_} \includegraphics[width=0.4\linewidth]{figures/Decorrelations-WL/Errors_at_par_index_i--_ZCA_SquareNorm--fiducialMGBin3_Euclid_WL_nonlinearPk__Zhao_} \end{center} \caption{\label{fig:WLbinerrs} Results for a Euclid Redbook WL survey, with redshift-binned parameters, before and after applying the ZCA decorrelation. Each panel shows the 1$\sigma$ fully marginalized errors on the primary parameters $p_i$ (green dashed lines), and the 1$\sigma$ errors on the decorrelated parameters $q_i$ (orange solid lines). \textbf{Left: } linear forecasts, performed with $\ell_{\rm max}=1000$ and linear matter power spectra. \textbf{Right: } non-linear forecasts using the non-linear spectra with the HS prescription, up to $\ell_{\rm max}=5000$. The errors in the non-linear HS case are about 1 order of magnitude smaller than in the linear case. For the best constrained $q_i$ parameters, the decorrelated errors are up to 2 orders of magnitude smaller than the corresponding fully marginalized errors on the parameters $p_i$.} \end{figure}
More generally, as we did for the GC case in the previous section, we look for the $q_i$ parameters with the smallest relative errors ($\sigma_{q_i}/q_i$) and find, in the linear WL case, that the best constrained combinations (ordered from most to least constrained) of primary parameters are given approximately by:
\begin{equation} \label{eq:bestCombined-WLlin} \begin{aligned} q_1 &= +0.76\ell \mathcal{A}_{s} + 0.48 \mu_2 + 0.33\eta_2 \\ q_3 &= -0.59\mu_1 + 0.67 \mu_2 - 0.30\eta_1 + 0.32 \eta_2\\ q_7 &= +0.65\mu_1 - 0.60 \mu_2 + 0.36\eta_1 - 0.28 \eta_2\\ q_2 &= +0.67\mu_1 - 0.59 \mu_2 + 0.33\eta_1 - 0.29 \eta_2 \,\,\, . \end{aligned} \end{equation}
This means that WL in the linear case will only be able to constrain combinations of the first two redshift bins in $\mu$ and $\eta$ (corresponding to $0 < z < 1.0$). This can also be observed graphically in the left panel of Figure \ref{fig:WLbestconst}. For the non-linear WL case, the combinations remain practically the same, except for $q_1$, which will depend much more strongly on the parameter $\ell \mathcal{A}_{s}$.
The four best constrained parameters in this case are (ordered from most to least constrained):
\begin{equation} \label{eq:bestCombined-WLnonlin} \begin{aligned} q_3 &= -0.55\mu_1 + 0.71 \mu_2 - 0.27\eta_1 + 0.34\eta_2\\ q_1 &= +0.93\ell \mathcal{A}_{s} - 0.32\mu_2 \\ q_2 &= +0.67\mu_1 - 0.60 \mu_2 + 0.33\eta_1 - 0.29 \eta_2\\ q_4 &= -0.46\mu_1 + 0.29 \mu_2 + 0.73\mu_3 + 0.31 \eta_3 \,\,\, . \end{aligned} \end{equation}
These combinations can also be visualized in the right panel of Figure \ref{fig:WLbestconst}. The complete matrix $W$ of coefficients relating the $q_i$ to the $p_i$ parameters can be found in Tables \ref{tab:Wcoeff-lin-WL} and \ref{tab:Wcoeff-nlHS-WL} of Appendix \ref{sec:Wmatrices}.
\begin{figure}[htbp] \centering{}\includegraphics[width=0.4\linewidth]{figures/Decorrelations-WL/linear_WL--_best_constrained_modes-Errors_on_q_ZCA_SquareNorm--_fiducialMGBin3_Euclid_WL_linearPK_} \includegraphics[width=0.4\linewidth]{figures/Decorrelations-WL/non-linear_WL--_best_constrained_modes-Errors_on_q_ZCA_SquareNorm--_fiducialMGBin3_Euclid_WL_nonlinearPk__Zhao_} \caption{\label{fig:WLbestconst} Best constrained modes for a Euclid Redbook WL survey, with $\mu$ and $\eta$ binned in redshift, after transforming into uncorrelated $q$ parameters via ZCA. Each of the four best constrained parameters $q_i$, shown in the panels, is a linear combination of the primary parameters $p_i$. The $q_i$ in the legends are ordered from the best constrained to the least constrained.} \end{figure}
\subsubsection{ZCA for Weak Lensing + Galaxy Clustering + CMB {\it Planck}\ priors} As mentioned earlier, Galaxy Clustering and Weak Lensing are, when combined, particularly powerful in constraining Modified Gravity parameters, as they probe two independent combinations of the gravitational potentials. We now show results for their combination, using for both the non-linear HS prescription, together with a {\it Planck}\ prior (which was obtained by performing an MCMC analysis on {\it Planck}+BSH background data, as specified in Section \ref{sub:Fisher-Planck}). Notice that we neglect here any information coming from the cross correlation of the two probes; we therefore assume that these two observables are independent of each other and we simply add the GC and WL Fisher matrices to obtain our combined results; this appears to be a conservative (pessimistic) choice \cite{Lacasa2016}. In Table \ref{tab:errors-all-MGBin3}, we can see that the inclusion of the {\it Planck}\ prior improves considerably the constraints on certain parameters, especially those least constrained by GC+WL, namely $\mu_{4,5}$ and $\eta_{4,5}$. In terms of correlations, we can observe in the left panel of Fig.\ \ref{fig:GC+WL+Planck-corr-Wmat} that the structure of the correlation matrix resembles the one of the linear WL case (Fig.\ \ref{fig:WLcorr}), except that the block of standard $\Lambda\mathrm{CDM}$ parameters is much less correlated and that the anti-correlation among $\mu_i$ and $\eta_i$ is now much stronger. On the other hand, after applying the decorrelation procedure (Section \ref{sub:Decorrelation-of-covariance}), the weight matrix $W$ (right panel of Fig.\ \ref{fig:GC+WL+Planck-corr-Wmat}) resembles the $W$ matrix observed in the non-linear GC case, illustrated in Figure \ref{fig:Wmat-ZCA-GC}. Notice that now the variables $q_i$ for $i=\{1,2,3,4\}$ depend quite strongly on only one of the $\mu_j$, while the $q_i$ for $i=\{6,7,8,9,10\}$ depend on a balanced sum of the $\mu_j$ and $\eta_j$.
\begin{figure}[htbp] \centering \includegraphics[width=0.4\linewidth]{figures/Decorrelations-GC+WL+Planck/correlation-full-fiducialMGBin3-Euclid-GC+WL+Planck-nonlinearPk__Zhao-} \includegraphics[width=0.4\linewidth]{figures/Decorrelations-GC+WL+Planck/Weight_Matrix_ZCA_SquareNorm--_fiducialMGBin3_Euclid_GC+WL+Planck_nonlinearPk__Zhao_} \caption{\label{fig:GC+WL+Planck-corr-Wmat} Results for the combined forecasts of Euclid Redbook GC+WL using the non-linear HS prescription together with the addition of {\it Planck}\ CMB priors. \textbf{Left panel:} correlation matrix obtained from the covariance matrix in the MG-binning case. Red (purple-blue) colors represent strong positive (negative) correlations. The structure of this matrix is largely diagonal, except for the strong anti-correlations of the pair $(\mu_i,\eta_i)$ for $i=\{1,2,3,4,5\}$, which resembles the correlations found for the WL case alone (see Figure \ref{fig:WLcorr}). However, the sub-block of standard cosmological parameters is now much more diagonal and shows fewer correlations than in the GC (Fig.\ \ref{fig:GCcorr}) or WL cases. The FoC (defined in Eqn.\ (\ref{eq:FoC})) in this case is $\approx 22$, showing that the variables are much less correlated than in the two previous cases. \textbf{Right panel:} entries of the matrix $W$ for the ZCA decorrelation of the covariance matrix. This matrix shows, for each new variable $q_i$ on the vertical axis, the coefficients of the linear combination of parameters $\mu_i$ and $\eta_i$ that give rise to that variable $q_i$. The red (blue) colors indicate a large (small) contribution of the respective variable on the horizontal axis.} \end{figure}
Finally, in this combined case the best constrained $q_i$ variables are $q_1$, $q_2$, $q_3$ and $q_4$, approximately given by:
\begin{equation} \label{eq:bestCombined-GCWLnonlinPlanck} \begin{aligned} q_1 &= +0.93\ell \mathcal{A}_{s}\\ q_2 &= +0.84\mu_1 + 0.48\eta_1 \\ q_3 &= +0.80\mu_2 - 0.26\eta_1 + 0.45 \eta_2\\ q_4 &= +0.28\ell \mathcal{A}_{s} + 0.79\mu_3 - 0.29 \eta_2 + 0.39 \eta_3 \,\,\, . \end{aligned} \end{equation}
These combinations of primary parameters are illustrated in the right panel of Figure \ref{fig:GC+WL+Planck-bestconst-errspq}. The combination $q_2$ is similar to the combination $2\mu+\eta$ that was also identified in \cite{planck_collaboration_planck_2016} as being well constrained. The best constrained modes $q_2$, $q_3$ and $q_4$ all contain terms of the form $a\mu_i + b\eta_i$ for $i=\{1,2,3\}$, with positive coefficients $a$ and $b$, where $a \approx 2b$. All errors are shown in the left panel of Figure \ref{fig:GC+WL+Planck-bestconst-errspq}. Notice how in this case the improvement on the errors of the $q_i$ variables is less than an order of magnitude, thus smaller than what was found for GC and WL separately; this is due to the combination of GC and WL which, together with the inclusion of the CMB prior, leads to smaller correlations among the parameters. When combining GC+WL in the non-linear HS case, the FoC (defined in Eqn.\ (\ref{eq:FoC})) is $\approx 31$, showing that there is not much gain in decorrelation compared to GC or WL alone, where this quantity was approximately $32$. However, combining GC+WL (non-linear HS) with {\it Planck}\ priors yields $\textrm{FoC} \approx 22$, showing that correlations among parameters are drastically reduced.
The fact that the curve of 1$\sigma$ errors for the $q_i$ (orange line, marked with circles) follows the same pattern as the curve for the $p_i$ errors (green dashed line, marked with circles) is due to the fact that we have used a ZCA decomposition, and therefore the $q_i$ are as close as possible to the $p_i$. The complete matrix of coefficients $W$ can be found in Table \ref{tab:Wcoeff-nlHS-WL+GC+Planck} in appendix \ref{sec:Wmatrices}.
\begin{figure}[htbp] \centering \includegraphics[width=0.4\linewidth]{figures/Decorrelations-GC+WL+Planck/Errors_at_par_index_i--_ZCA_SquareNorm--fiducialMGBin3_Euclid_GC+WL+Planck_nonlinearPk__Zhao_} \includegraphics[width=0.4\linewidth]{figures/Decorrelations-GC+WL+Planck/non-linear_GC+WL+Planck--_best_constrained_modes-Errors_on_q_ZCA_SquareNorm--_fiducialMGBin3_Euclid_GC+WL+Planck_nonlinearPk__Zhao_} \caption{\label{fig:GC+WL+Planck-bestconst-errspq} \textbf{Left:} the 1$\sigma$ fully marginalized errors on the primary parameters $p_i$ (green dashed lines), and the 1$\sigma$ errors on the decorrelated derived parameters $q_i$ (yellow solid lines). As opposed to the GC or WL cases (Figs.\ \ref{fig:GCbinerrs}, \ref{fig:WLbinerrs}), here the decorrelated errors are much more similar to the standard errors. This is due to the fact that in the GC+WL+{\it Planck} combination, the cosmological parameters are not so strongly correlated. \textbf{Right:} best constrained modes for a Euclid Redbook GC+WL case using the non-linear HS prescription and adding a CMB {\it Planck}\ prior. Each panel shows the four best constrained parameters $q_i$. Each of them is a linear combination of the primary parameters $p_i$. The best constrained modes are sums $a\mu_i + b\eta_i$ for $i=\{1,2,3\}$, with positive coefficients $a$ and $b$.} \end{figure}
\section{Modified gravity with simple smooth functions of the scale factor} As discussed in Sec.\ \ref{sec:Parameterizing-Modified-Gravity}, $\mu$ and $\eta$ (or an equivalent pair of functions of the gravitational potentials) depend in general on time and space. We will now investigate the time dependence further, starting from the two parameterizations proposed in \cite{planck_collaboration_planck_2016} and recalled in Eqns.\ (\ref{eq:DE-mu-parametrization}-\ref{eq:TR-eta-parametrization}) in this paper. We extend the analysis of the {\it Planck}\ paper by testing different prescriptions for the non-linear regime in Modified Gravity (as illustrated in Section \ref{sec:The-non-linear-power}) and further investigate forecasts for future experiments like Euclid, SKA and DESI. In the following subsections we first give results for the late-time parameterization of Eqns.\ (\ref{eq:DE-mu-parametrization},\ref{eq:DE-eta-parametrization}) and then for the early-time parameterization of Eqns.\ (\ref{eq:TR-mu-parametrization},\ref{eq:TR-eta-parametrization}). In both cases we consider Galaxy Clustering and Weak Lensing, neglecting, as in Section \ref{sec:Results:-Redshift-Binned}, any information coming from the cross correlation of the two probes.
\subsection{\label{sub:MG-DE}Modified Gravity in the late-time parameterization} The late-time parameterization is defined in Eqns.\ (\ref{eq:DE-mu-parametrization},\ref{eq:DE-eta-parametrization}). We now calculate forecasts for Galaxy Clustering and Weak Lensing, with future surveys, in the linear and mildly non-linear regimes.
We also include prior information obtained from the analysis of the {\it Planck}+BSH datasets (where we recall that BSH stands for BAO + SNe + $H_0$ prior), as discussed in Section \ref{sub:Fisher-Planck}.

\subsubsection{Galaxy Clustering in the linear and mildly non-linear regime}

In Table \ref{tab:errors-Euclid-GC-WL-late_time} we show forecasts for the Euclid survey \cite{laureijs_euclid_2011} for Galaxy Clustering (top part of the table) in three different cases: using only linear scales, with a cutoff at $k_{\rm max}=0.15$ h/Mpc, labeled GC(lin); extending the forecasts into the mildly non-linear regime using the prescription described in Sec.\ \ref{sub:Prescription-HS}, with a cutoff at $k_{\rm max}=0.5$ h/Mpc, labeled GC(nl-HS); and combining the mildly non-linear case with {\it Planck}\ priors, as described in Sec.\ \ref{sub:Fisher-Planck}. We take into account the BAO features, redshift space distortions and the full shape of the power spectrum, as well as the survey specifications of the Euclid Redbook, recalled in Section \ref{sub:Fisher-Galaxy-Clustering} for convenience. The columns correspond to the marginalized errors on five standard cosmological parameters $\{\Omega_c, \Omega_b, n_s, \ln (10^{10} A_s), h\}$ and three combinations of the gravitational potentials $\{\mu, \eta, \Sigma\}$: the latter are reconstructed in time according to the late-time parameterization, as defined in Eqns.\ (\ref{eq:DE-mu-parametrization},\ref{eq:DE-eta-parametrization}). We recall that only two of these three functions are independent and fully determine cosmological linear perturbations. In the late-time scenario, for a Galaxy Clustering survey, neither $\eta$ nor $\Sigma$ is actually constrained by a linear forecast, while $\mu$ is mildly constrained. Adding the non-linear regime improves constraints on $\mu$, while the other parameters remain unconstrained, unless we also include {\it Planck}\ priors, which yields an improvement in the FoM of 6.3 nits. In general, the observable power spectrum may depend both on $\mu$, which appears explicitly in the last term of the equation for the density perturbation (cf.\ Eqn.\ (\ref{eq-delta})), and on $\eta$, which is implicitly contained in the derivatives of the gravitational potential in the same equation. The contribution of the derivatives of the potentials is, by construction, larger in the early-time parameterization than in the late-time one. This is due to the fact that in the late-time case deviations from GR go to zero at large redshifts. In this sense, in the specific case of the late-time parameterization, the observed power spectrum depends mainly on $\mu$, which explains why this is the only quantity (mildly) constrained by GC alone. In the early-time parameterization, though, modifications can appear also at earlier times, so that both $\eta$ and $\mu$ effectively affect the power spectrum, which explains why they can both be constrained with a smaller uncertainty, as we will discuss in Section \ref{sub:MG-TR}. In Appendix \ref{sec:appder} we review the equation governing the evolution of cold dark matter density fluctuations as a function of the Modified Gravity functions $\mu(a)$ and $\eta(a)$, as they are implemented in the code MGCAMB \cite{hojjati_testing_2011}. The inability of GC to constrain $\eta$ in this parametrization is also visible in Fig.\ \ref{fig:DE+Planck-ellipses-mu-sig-eta}, which shows that the GC constraints are degenerate along the $\eta$ or $\Sigma$ directions.
Therefore, with this parameterization choice, Euclid GC will be extremely sensitive to modifications of the Poisson equation for $\Psi$, while it would require additional information to constrain departures from the standard Weyl potential.

\begin{table}[htbp] \centering{}%
\begin{tabular}{|l|c|c|c|c|c||c|c|c|c|}
\hline
\textbf{Euclid} (Redbook) & $\Omega_{c}$ & $\Omega_{b}$ & $n_{s}$ & $\ell\mathcal{A}_{s}$ & $h$ & $\mu$ & $\eta$ & $\Sigma$ & MG FoM \Tstrut\tabularnewline
\hline
\multicolumn{1}{|l|}{Fiducial} & {0.254} & {0.048} & {0.969} & {3.060} & {0.682} & {1.042} & {1.719} & {1.416} & relative \Tstrut\tabularnewline
\hline \hline
\Tstrut \textbf{GC(lin) } \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC-lin-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 0\tabularnewline
\Tstrut\textbf{GC(nl-HS) } \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC-nlHS-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 2.9 \tabularnewline
\Tstrut\textbf{GC(nl-HS)+{\it Planck} } \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC-nlHS+Planck-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 6.3 \tabularnewline
\hline \hline
\Tstrut \textbf{WL(lin) } \input{latex-tables/MGDE-tables/muEtaSigmaPars_WL-lin-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 3.2\tabularnewline
\Tstrut \textbf{WL(nl-HS) } \input{latex-tables/MGDE-tables/muEtaSigmaPars_WL-nlHS-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 4.5 \tabularnewline
\Tstrut \textbf{WL(nl-HS)+{\it Planck} } \input{latex-tables/MGDE-tables/muEtaSigmaPars_WL-nlHS+Planck-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 5.7 \tabularnewline
\hline \hline
\Tstrut \textbf{GC+WL(lin)} \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-lin-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 6.6 \tabularnewline
\Tstrut \textbf{GC+WL(lin)+{\it Planck}} $\;$ \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-lin+Planck-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 7.0 \tabularnewline
\hline \hline
\Tstrut \textbf{GC+WL(nl-HS)} \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-nlHS-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 8.8 \tabularnewline
\Tstrut \textbf{GC+WL(nl-HS)+{\it Planck}} $\;$ \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-nlHS+Planck-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 8.9 \tabularnewline
\Tstrut \textbf{GC+WL(nl-Halofit)+{\it Planck}} $\;$ \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-nlHalofit+Planck-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 9.6 \tabularnewline
\hline
\end{tabular}\protect \caption{\label{tab:errors-Euclid-GC-WL-late_time} 1$\sigma$ fully marginalized errors on the cosmological parameters in the late-time parameterization of Modified Gravity for a Euclid Galaxy Clustering forecast (top), a Weak Lensing forecast (middle) and the combination of both probes (bottom): Modified Gravity is encoded in two of the three functions $\mu$, $\eta$, $\Sigma$, which are reconstructed in the late-time parameterization defined in Eqns.\ (\ref{eq:DE-mu-parametrization},\ref{eq:DE-eta-parametrization}). For each case, we also list the forecasted errors using a {\it Planck}+BSH prior.
Linear forecasts are labeled by ``lin'' and correspond to a cutoff $k_{\rm max}=0.15$ h/Mpc for GC and $\ell_{\rm max}=1000$ for WL; non-linear forecasts use the prescription described in Sec.\ \ref{sub:Prescription-HS}, are labeled by ``nl-HS'', and correspond to a cutoff of $k_{\rm max}=0.5$ h/Mpc for GC and $\ell_{\rm max}=5000$ for WL. In both cases, power spectra have been computed using the MGCAMB Boltzmann code. For completeness, we also show the GC+WL+{\it Planck} case using the non-linear power spectra computed with Halofit only (nl-Halofit). In the last column, we show for each observation the Figure of Merit (FoM) relative to our base observable in this parametrization, which is GC(linear); GC(linear) has an absolute FoM of $-0.94$ in `nits'. We can see that in both GC and WL there is a considerable gain when including non-linear scales and {\it Planck}\ priors. The difference in the FoM between the non-linear HS prescription and the standard Halofit approach is quite small. } \end{table}

\begin{table}[htbp] \centering{}%
\begin{tabular}{|l|c|c|c|c|c||c|c|c|c|}
\hline
\Tstrut \textbf{Euclid} (Redbook) & $\Omega_{c}$ & $\Omega_{b}$ & $n_{s}$ & $\ell\mathcal{A}_{s}$ & $h$ & $\mu$ & $\eta$ & $\Sigma$ & MG FoM \tabularnewline
\hline
\Tstrut Fiducial & {0.256} & {0.048} & {0.969} & {3.091} & {0.682} & {0.902} & {1.939} & {1.326} & relative\tabularnewline
\hline \hline
\Tstrut\textbf{GC(lin)} \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC-lin-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 0 \tabularnewline
\Tstrut \textbf{GC(nl-HS)} \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC-nlHS-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 6.6 \tabularnewline
\Tstrut \textbf{GC(nl-HS)+{\it Planck}} \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC-nlHS+Planck-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 6.7 \tabularnewline
\hline \hline
\Tstrut \textbf{WL(lin)} \input{latex-tables/MGTR-tables/muEtaSigmaPars_WL-lin-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 4.9 \tabularnewline
\Tstrut \textbf{WL(nl-HS)} \input{latex-tables/MGTR-tables/muEtaSigmaPars_WL-nlHS-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 6.6 \tabularnewline
\Tstrut \textbf{WL(nl-HS)+{\it Planck}} \input{latex-tables/MGTR-tables/muEtaSigmaPars_WL-nlHS+Planck-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 7.2 \tabularnewline
\hline \hline
\Tstrut \textbf{GC+WL(lin)} \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-lin-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 6.4 \tabularnewline
\Tstrut \textbf{GC+WL(lin)+{\it Planck}} \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-lin+Planck-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 6.9 \tabularnewline
\hline \hline
\Tstrut \textbf{GC+WL(nl-HS)} \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-nlHS-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 8.0 \tabularnewline
\Tstrut \textbf{GC+WL(nl-HS)+{\it Planck}} \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-nlHS+Planck-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 8.2 \tabularnewline
\Tstrut \textbf{GC+WL(nl-Halofit)+{\it Planck} $\;$} \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-nlHalofit+Planck-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 8.9 \tabularnewline
\hline
\end{tabular}\protect\caption{\label{tab:errors-Euclid-GC-WL-early_time} Same as Table \ref{tab:errors-Euclid-GC-WL-late_time} but for the early-time parameterization.
Note that the last column (MG FoM) cannot be compared to the one for a different parameterization (Table \ref{tab:errors-Euclid-GC-WL-late_time}), since the reference case (GC(lin)) is different and the two parameterizations have a different number of primary parameters. In this case the absolute MG FoM of GC(lin) is $\approx -0.47$ nits. } \end{table}

\begin{table}[htbp] \centering{}%
\begin{tabular}{|l|c|c|c|c|c||c|c|c|c|}
\hline
 & $\Omega_{c}$ & $\Omega_{b}$ & $n_{s}$ & $\ell\mathcal{A}_{s}$ & $h$ & $\mu$ & $\eta$ & $\Sigma$ & MG FoM \TBstrut\tabularnewline
\hline
Fiducial & {0.254} & {0.048} & {0.969} & {3.060} & {0.682} & {1.042} & {1.719} & {1.416} & relative\TBstrut\tabularnewline
\hline \hline
\textbf{GC(nl-HS)} & & & & & & & & & \TBstrut\tabularnewline
Euclid \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC-nlHS-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 2.9 \tabularnewline
SKA1-SUR \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC-nlHS-SKA1-SUR-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 1.7 \tabularnewline
SKA2 \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC-nlHS-SKA2-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 5.5 \tabularnewline
DESI-ELG \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC-nlHS-DESI-ELG-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 1.8 \tabularnewline
\hline \hline
\textbf{WL(nl-HS)} & & & & & & & & & \TBstrut\tabularnewline
Euclid \input{latex-tables/MGDE-tables/muEtaSigmaPars_WL-nlHS-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 4.5 \tabularnewline
SKA1 \input{latex-tables/MGDE-tables/muEtaSigmaPars_WL-nlHS-SKA1-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 0.5 \tabularnewline
SKA2 \input{latex-tables/MGDE-tables/muEtaSigmaPars_WL-nlHS-SKA2-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 4.9 \tabularnewline
\hline \hline
\textbf{GC+WL(lin)} & & & & & & & & & \Tstrut\tabularnewline
Euclid \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-lin-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 6.6 \tabularnewline
SKA1 \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-lin-SKA1-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 3.7 \tabularnewline
SKA2 \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-lin-SKA2-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 7.5 \tabularnewline
\hline \hline
\textbf{GC+WL(lin)+{\it Planck}} & & & & & & & & & \Tstrut\tabularnewline
Euclid \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-lin+Planck-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 6.9 \tabularnewline
SKA1 \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-lin+Planck-SKA1-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 5.3 \tabularnewline
SKA2 \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-lin+Planck-SKA2-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 7.8 \tabularnewline
\hline \hline
\textbf{GC+WL(nl-HS)} & & & & & & & & & \Tstrut\tabularnewline
Euclid \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-nlHS-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 8.7 \tabularnewline
SKA1 \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-nlHS-SKA1-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 5.5 \tabularnewline
SKA2 \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-nlHS-SKA2-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 10.3 \tabularnewline
\hline \hline
\textbf{GC+WL(nl-HS)+{\it Planck}} & & & & & & & & & \Tstrut\tabularnewline
Euclid \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-nlHS+Planck-Euclid-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 8.9 \tabularnewline
SKA1 \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-nlHS+Planck-SKA1-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 6.9 \tabularnewline
SKA2 \input{latex-tables/MGDE-tables/muEtaSigmaPars_GC+WL-nlHS+Planck-SKA2-fiducialMGDE2nonuhs--oneSigmaPercentageErrs-.tex} & 10.3 \tabularnewline
\hline
\end{tabular}\protect\caption{\label{tab:errors-GC-SKAcompare-MG-DE-mu-eta-sigma} 1$\sigma$ fully marginalized errors on the cosmological parameters $\{\Omega_{c},\Omega_{b},h,\ell \mathcal{A}_{s},n_{s},\mu,\eta, \Sigma\}$ in the late-time parameterization, comparing different surveys in the linear and non-linear case. In the last column, we show for each observation the Modified Gravity Figure of Merit (MG FoM) relative to our base observable, which is the Euclid Redbook GC(linear), see Table \ref{tab:errors-Euclid-GC-WL-late_time}. We can see that, in general terms, SKA2 is the most powerful survey, followed by Euclid and SKA1. In the case of GC alone, DESI-ELG is more constraining than SKA1-SUR. Notice that in this parameterization, a GC survey would only constrain $\mu$ with high accuracy, while a WL survey would constrain $\Sigma$ with very good accuracy. The combination of both is much more powerful than the single probes. Adding {\it Planck}\ priors (last row) improves considerably the constraints on the $\Lambda\mathrm{CDM}$ parameters but has an almost negligible effect on the MG parameters (the MG FoM remains almost constant when adding {\it Planck}\ priors to the GC+WL (non-linear HS) case). The marginalized contours for the $\mu$-$\eta$ plane, comparing these surveys, can be seen in the left panel of Fig.\ \ref{fig:combined_surveys}.
} \end{table} \begin{table}[htbp] \centering{}% \small \begin{tabular}{|l|c|c|c|c|c||c|c|c|c|} \hline & $\Omega_{c}$ & $\Omega_{b}$ & $n_{s}$ & $\ell\mathcal{A}_{s}$ & $h$ & $\mu$ & $\eta$ & $\Sigma$ & MG FoM \TBstrut\tabularnewline \hline Fiducial & {0.256} & {0.0485} & {0.969} & {3.091} & {0.682} & {0.902} & {1.939} & {1.326} & relative \TBstrut\tabularnewline \hline \hline \textbf{GC(nl-HS)} & & & & & & & & & \TBstrut\tabularnewline Euclid \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC-nlHS-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 6.6 \tabularnewline SKA1-SUR \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC-nlHS-SKA1-SUR-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 2.2 \tabularnewline SKA2 \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC-nlHS-SKA2-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 8.3 \tabularnewline DESI-ELG \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC-nlHS-DESI-ELG-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 4.3 \tabularnewline \hline \hline \textbf{WL(nl-HS)} & & & & & & & & & \TBstrut\tabularnewline Euclid \input{latex-tables/MGTR-tables/muEtaSigmaPars_WL-nlHS-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 6.6 \tabularnewline SKA1 \input{latex-tables/MGTR-tables/muEtaSigmaPars_WL-nlHS-SKA1-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 3.4 \tabularnewline SKA2 \input{latex-tables/MGTR-tables/muEtaSigmaPars_WL-nlHS-SKA2-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 6.9 \tabularnewline \hline \hline \textbf{GC+WL(lin)} & & & & & & & & & \Tstrut\tabularnewline Euclid \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-lin-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 6.4 \tabularnewline SKA1 \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-lin-SKA1-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 3.3 \tabularnewline SKA2 \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-lin-SKA2-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 6.8 \tabularnewline \hline \hline \textbf{GC+WL(lin)+{\it Planck}} & & & & & & & & & \Tstrut\tabularnewline Euclid \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-lin+Planck-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 6.8 \tabularnewline SKA1 \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-lin+Planck-SKA1-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 4.5 \tabularnewline SKA2 \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-lin+Planck-SKA2-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 7.2 \tabularnewline \hline \hline \textbf{GC+WL(nl-HS)} & & & & & & & & & \Tstrut\tabularnewline Euclid \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-nlHS-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 8.1 \tabularnewline SKA1 \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-nlHS-SKA1-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 4.4 \tabularnewline SKA2 \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-nlHS-SKA2-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 8.8 \tabularnewline \hline \hline \textbf{GC+WL(nl-HS)+{\it Planck}} & & & & & & & & & \Tstrut\tabularnewline Euclid \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-nlHS+Planck-Euclid-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 8.1 \tabularnewline SKA1 \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-nlHS+Planck-SKA1-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 4.9 \tabularnewline SKA2 \input{latex-tables/MGTR-tables/muEtaSigmaPars_GC+WL-nlHS+Planck-SKA2-fiducialMGTR2nonuhs--oneSigmaPercentageErrs-.tex} & 8.8 
\tabularnewline \hline \end{tabular} \caption{\label{tab:errors-GC-SKAcompare-MG-TR-mu-eta-sigma-Zhao-1} Same as Table \ref{tab:errors-GC-SKAcompare-MG-DE-mu-eta-sigma} but for the early-time parameterization. The columns $\mu$, $\eta$ and $\Sigma$ correspond to the projection of the errors on $E_{11}$ and $E_{22}$ onto $\mu$, $\eta$ and $\Sigma$, respectively. We have marginalized over $E_{12}$ and $E_{21}$, since at $z=0$ they do not contribute to the Modified Gravity parameters. Notice that in this parameterization, a GC survey alone is able to constrain both $\mu$ and $\Sigma$ to a good level for all surveys, better than with the late-time parameterization, which is more often used in the literature. The combination of GC+WL is, however, less constraining in the early-time parameterization than in the late-time one. The reference case for the MG FoM is the Euclid (Redbook) GC linear forecast (Table \ref{tab:errors-Euclid-GC-WL-early_time}). The non-linear forecast for GC+WL+{\it Planck} would yield, for Euclid and SKA2, constraints at the 1-2\% accuracy level on $\mu$ and $\Sigma$, while for SKA1 the constraints would be at the 8\% level. The marginalized contours for the $\mu$-$\eta$ plane, comparing these surveys, can be seen in the right panel of Fig.\ \ref{fig:combined_surveys}. } \end{table}

\subsubsection{Weak Lensing in the linear and mildly non-linear regime}

Using the Euclid Weak Lensing specifications described in Section \ref{sub:Fisher-Weak-Lensing}, we obtain the results displayed in the middle panel of Table \ref{tab:errors-Euclid-GC-WL-late_time}. Also in this case we use the late-time parameterization, in three different settings: the first uses only linear quantities, with a maximum multipole of $\ell_{\rm max}=1000$; the second uses the non-linear HS prescription of Section \ref{sub:Prescription-HS} up to a maximum multipole of $\ell_{\rm max}=5000$; the third combines Weak Lensing with {\it Planck}\ priors, as described in Section \ref{sub:Fisher-Planck}. In the linear case, the WL forecast yields constraints on the standard $\Lambda\mathrm{CDM}$ parameters at around 10\% accuracy, with the exception of $\Omega_{b}$, which is poorly constrained at around 26\%, and the expansion rate $h$ (19\%). This is likely due to the fact that WL is only directly sensitive to the total matter distribution in the Universe and cannot differentiate baryons from dark matter. All constraints improve when adding non-linear information, with $\Omega_b$ and $h$ still constrained only at the level of about $20\%$. When the {\it Planck}\ priors are included, though, constraints shrink down to about 1$\%$ for all cosmological parameters. The Modified Gravity parameters show the expected trend: in the linear case, only $\Sigma$ is constrained, at 11$\%$, as this parameter is directly defined in terms of the lensing potential $\Phi+\Psi$; a Weak Lensing probe is, however, not directly sensitive to $\mu$ and $\eta$ separately, as can be seen in Fig.\ \ref{fig:DE+Planck-ellipses-mu-sig-eta}. The linear FoM is only slightly weaker than the one from GC for this parameterization, probably because both probes have an effectively unconstrained degeneracy direction. When adding non-linear information, errors on $\mu$ and $\eta$ improve, though still remaining in a poorly constrained interval (25$\%$-44$\%$). Already on its own, however, Weak Lensing could rule out many models of Modified Gravity that change $\Sigma$ by more than $5\%$ (or even $2.9\%$, if we include {\it Planck}\ priors).
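The errors quoted here and in the tables follow from standard Fisher-matrix algebra. As a reminder, a minimal sketch (not our full pipeline; the function names are illustrative) of how the fully marginalized errors and the percentage errors quoted in the tables are obtained is:
\begin{verbatim}
import numpy as np

def marginalized_errors(F):
    # Fully marginalized 1-sigma errors: square roots of the
    # diagonal of the inverse Fisher matrix (Cramer-Rao bound).
    return np.sqrt(np.diag(np.linalg.inv(F)))

def percentage_errors(F, fiducial):
    # Relative 1-sigma errors in percent, as quoted in the tables.
    return 100.0 * marginalized_errors(F) / np.abs(np.asarray(fiducial))
\end{verbatim}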
The combinations GC+{\it Planck} and WL+{\it Planck} have comparable overall constraining power, with GC+{\it Planck} being about 1 nit stronger and providing smaller errors on $\mu$.

\subsubsection{Combining Weak Lensing and Galaxy Clustering}\label{subsub:late-time-comb-GC+WL}

\begin{figure}[htbp] \begin{centering}
\includegraphics[width=0.3\textwidth]{figures/ellipses/DE-related/ellipsesPlot-withLegendManual-MuEtaFisher-Marged-fiducialMGDE2nonuhs-GC_WL_GC+WL+Planck--nlHS-pars-6-7_-.pdf}
\includegraphics[width=0.3\textwidth]{figures/ellipses/DE-related/ellipsesPlot-noLegendManual-MuSigmaFisher-Marged-fiducialMGDE2nonuhs-GC_WL_GC+WL+Planck--nlHS-pars-6-7_-.pdf}
\end{centering} \begin{centering}
\includegraphics[width=0.3\textwidth]{figures/ellipses/DE-related/ellipsesPlot-noLegendManual-MuSigmaFisher-Marged-fiducialMGDE2nonuhs-GC_WL_GC+WL+Planck--nlHS-pars-2-6_-.pdf}
\includegraphics[width=0.3\textwidth]{figures/ellipses/DE-related/ellipsesPlot-noLegendManual-MuSigmaFisher-Marged-fiducialMGDE2nonuhs-GC_WL_GC+WL+Planck--nlHS-pars-4-7_-.pdf}
\end{centering}
\caption{\label{fig:DE+Planck-ellipses-mu-sig-eta} Fisher Matrix marginalized contours (1 and 2 $\sigma$) for the Euclid space mission in the late-time parameterization using mildly non-linear scales and the HS prescription. Green lines represent constraints from a Galaxy Clustering survey, pink lines stand for the Weak Lensing observables, and orange lines represent the GC+WL+{\it Planck} combined confidence regions. \textbf{Upper Left: }contours for the fully marginalized errors on $\eta$ and $\mu$. \textbf{Upper Right: }contours for the fully marginalized errors on $\Sigma$ and $\mu$.\textbf{ Lower Left: }contours for the fully marginalized errors on $\mu$ and $\Omega_{b}$. \textbf{Lower Right: }contours for the fully marginalized errors on $\Sigma$ and $\ln(10^{10} A_{s})$. The fact that the combination of GC, WL and {\it Planck}\ breaks many degeneracies in the 7-dimensional parameter space explains why the combined contours (orange) have a much smaller area. Notice that in this parametrization, GC measures mostly $\mu$ and WL constrains mostly just $\Sigma$.} \end{figure}

After using the two primary probes from Euclid separately, we discuss here the constraints obtained combining Weak Lensing and Galaxy Clustering. The combination of GC and WL can be seen in the bottom panel of Table \ref{tab:errors-Euclid-GC-WL-late_time}. In the late-time parameterization, in the linear case, Weak Lensing combined with Galaxy Clustering for a Euclid survey (Redbook specifications) constrains the standard $\Lambda$CDM parameters in the range $2 \% - 6 \%$, and below 1$\%$ when {\it Planck}\ priors are included. The Modified Gravity parameters $\mu$ and $\eta$ are now also constrained below 10$\%$, reaching $1\%$ when adding non-linear scales. The remarkable improvement can be attributed to the fact that the combination of the GC and WL Fisher matrices breaks many degeneracies in the parameter space. This is shown in Figure \ref{fig:DE+Planck-ellipses-mu-sig-eta}, where it is possible to notice how the two probes are almost orthogonal in both the $\mu$-$\eta$ and $\mu$-$\Sigma$ planes. Weak Lensing measures the changes in the Weyl potential, parametrized by $\Sigma$, while $\mu$ is related to the Poisson equation, and therefore to the potential $\Psi$, which is probed by peculiar velocities and hence by Galaxy Clustering; $\eta$ can also be written as a combination of $\mu$ and $\Sigma$ (see Eqn.\ \ref{eq:SigmaofMuEta}).
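Schematically, since we neglect the GC-WL cross-correlation, the combined constraints follow from simply adding the Fisher matrices of the independent probes, with the {\it Planck}\ prior projected onto the same parameter basis. A minimal sketch of this combination, together with one natural (illustrative) log-determinant form for a relative MG FoM in nits, built from the determinant of the marginalized $\mu$-$\eta$ block (the precise normalization follows the definition used in the text), is:
\begin{verbatim}
import numpy as np

def combine_fishers(fishers):
    # Independent probes (no GC-WL cross-correlation): Fisher
    # matrices on the same parameter basis simply add.
    return np.sum(fishers, axis=0)

def relative_mg_fom(F, F_ref, mg_idx=(5, 6)):
    # Illustrative relative MG FoM in nits, from the determinant of
    # the marginalized mu-eta Fisher block; mg_idx marks the
    # (assumed) positions of mu and eta in the parameter vector.
    def mg_block(Fm):
        C = np.linalg.inv(Fm)  # marginalize over all other parameters
        return np.linalg.inv(C[np.ix_(mg_idx, mg_idx)])
    return 0.5 * np.log(np.linalg.det(mg_block(F))
                        / np.linalg.det(mg_block(F_ref)))
\end{verbatim}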
Further improvement is brought by the sensitivity of Galaxy Clustering to standard $\Lambda$CDM parameters; even though the GC constraints on $\Sigma$ and $\eta$ are not as good as the ones from Weak Lensing, the better measurement of standard parameters provided by Galaxy Clustering breaks degeneracies in the Modified Gravity sector of the parameter space, leading to narrower bounds for $\eta$ and $\Sigma$ with respect to both probes taken separately. WL is instead not sensitive to modifications of the Poisson equation for matter, and this explains why constraints on $\mu$ are not improved by the combination of the two probes, but are rather dominated by GC.

The correlation among parameters can also help us explain the observed results. The Figure of Correlation, defined in Eq.\ (\ref{eq:FoC}), for GC (non-linear HS) alone is 4.9, while for WL (non-linear HS) the correlation is higher, with $\textrm{FoC}=16.9$. When combining both probes (GC+WL (non-linear HS)), the FoC goes to an intermediate value of 7.6.

Given the constraining power of the GC+WL combination on MG functions, adding the {\it Planck}\ priors does not lead to significant improvements on the dark energy related parameters. On the other hand, standard parameters significantly benefit from the inclusion of CMB and background priors, and we can expect this to be a relevant factor for MG models with degeneracies with $\Lambda$CDM parameters, e.g.\ models that also affect the expansion history of the Universe. An overview of the constraints on Modified Gravity described in this section is shown in Fig.\ \ref{fig:DE+Planck-ellipses-mu-sig-eta}, with Euclid GC, Euclid WL and Euclid GC+WL combined with {\it Planck}\ priors.

\subsubsection{Forecasts in Modified Gravity for SKA1, SKA2 and DESI \label{subsub: other-surveys-late-time}}

\begin{figure}[htbp] \begin{centering}
\includegraphics[width=0.45\textwidth]{figures/BarPlots/4surveys-GCnlHS-MGDE-latetime}\hspace{-0.5pt}
\includegraphics[width=0.45\textwidth]{figures/BarPlots/4surveys-WLnlHS-MGDE-latetime.pdf}
\includegraphics[width=0.45\textwidth]{figures/BarPlots/4surveys-GC+WLlin-MGDE-latetime}\hspace{-0.5pt}
\includegraphics[width=0.45\textwidth]{figures/BarPlots/4surveys-GC+WLlin+Planck-MGDE-latetime}
\includegraphics[width=0.45\textwidth]{figures/BarPlots/4surveys-GC+WLnlHS-MGDE-latetime}\hspace{-0.5pt}
\includegraphics[width=0.45\textwidth]{figures/BarPlots/4surveys-GC+WLnlHS+Planck-MGDE-latetime}
\par\end{centering}
\caption{\label{fig:BarPlot-DE-GC-1}1$\sigma$ fully marginalized errors on the parameters $\{\Omega_{m},\Omega_{b},h,\ell\mathcal{A}_{s},n_{s},\mu,\eta,\Sigma\}$ for the late-time parameterization of MG obtained by forecasts on Galaxy Clustering (non-linear HS) (top left panel), Weak Lensing (non-linear HS) (top right panel), the combinations GC+WL (linear) (middle left) and GC+WL+{\it Planck} (linear) (middle right), and the combinations GC+WL (non-linear HS) (bottom left) and GC+WL+{\it Planck} (non-linear) (bottom right). In the GC case, the surveys considered are SKA2 (blue), SKA1-SUR (green), Euclid Redbook (purple) and DESI-ELG (orange). For forecasts including WL, only Euclid, SKA1 and SKA2 are included. Although the 1$\sigma$ constraints on the standard parameters are overall weaker for WL than for GC, Weak Lensing surveys perform better on Modified Gravity parameters. Comparing the different surveys, Euclid and SKA2 perform similarly well for the WL observable alone, if non-linearities are included.
Notice that SKA1-SUR performs better than Euclid on the $\eta$ and $\Sigma$ parameters, because it probes lower redshifts better. Including the {\it Planck}\ prior, the GC+WL combination for Euclid and SKA2 constrains all parameters to much better than percent accuracy. Detailed specifications of the different surveys are explained in the text. } \end{figure}

For the SKA1 and SKA2 surveys (whose specifications are explained in detail in Section \ref{sub:FutureSurveys}), previous work on forecasting cosmological parameters has been done, among others, by \cite{baker_observational_2015} and \cite{bull_extending_2015}. In the latter work, the author parameterizes the evolution of $\mu(a)$ using the late-time parameterization, but also adds an extra parameter allowing for a scale dependence in $\mu(a)$, and includes a {\it Planck}\ prior. For a fixed scale, the 1$\sigma$ errors on the amplitude of $\mu$ lie between 0.045 and 0.095, depending on the details of the SKA1 specifications, while for SKA2 this error is of about 0.017. This setting would correspond to our GC+WL(linear)+{\it Planck}\ case (see Table \ref{tab:errors-GC-SKAcompare-MG-DE-mu-eta-sigma}), where we find for SKA1 a 1$\sigma$ error on $\mu$ of 0.12, while for SKA2 the forecasted error is 0.036. Our errors are somewhat larger, but we have also extended our analysis to let the gravitational slip $\eta$ differ from 1 at present time; moreover, our departure from $\mu=1$ at present time is larger by a factor of 4, and our linear forecast is conservative in the sense that it includes fewer wavenumbers $k$ at higher redshifts than theirs.

In Figure \ref{fig:BarPlot-DE-GC-1} we show the 1$\sigma$ fully marginalized forecasted errors on the parameters $\{\Omega_{m},\Omega_{b},h,\ell \mathcal{A}_{s},n_{s},\mu,\eta,\Sigma\}$ for different Galaxy Clustering and Weak Lensing surveys in the late-time parameterization. In the GC case, the surveys considered are DESI-ELG (yellow), SKA2 (green), SKA1-SUR (orange) and Euclid (blue). For the WL forecast, we considered Euclid (blue), SKA1 (orange) and SKA2 (green). These constraints correspond to the ones listed in Table \ref{tab:errors-GC-SKAcompare-MG-DE-mu-eta-sigma}. The marginalized confidence contours for the $\mu$-$\eta$ plane, comparing all these surveys, can be seen in the left panel of Fig.\ \ref{fig:combined_surveys}. The 1$\sigma$ fully marginalized constraints on the parameters are weaker for WL than for GC, which may be a consequence of the higher correlation among variables for the Weak Lensing observable, described in Section \ref{subsub:late-time-comb-GC+WL}. Comparing the different surveys, the general trend is that Euclid and SKA2 perform at a similar level for WL at both the linear and the non-linear level; for GC, and when combining both probes, SKA2 gives the strongest constraints, followed by Euclid, SKA1 and DESI (for GC). Notice that in this parameterization, a SKA1-SUR GC survey constrains the $\Sigma$ parameter alone better than a Euclid Galaxy Clustering survey (although Euclid is overall much stronger, as can be seen with the FoM). This is due to the fact that SKA1-SUR probes much lower redshifts (from $z=0.05$ to $0.85$) than Euclid and is therefore better suited to constrain those parameterizations in which the effect of the Modified Gravity parameters is stronger at lower redshifts; this is the case for the late-time parameterization, which is proportional to the dark energy density, dominant at low redshifts only.
This result is reversed in the early-time parameterization, in which Modified Gravity can play a role also at earlier redshifts.

\begin{figure}[htbp] \centering
\includegraphics[width=0.35\linewidth]{figures/ellipses/DE-related/ellipsesPlot-withLegend-Ska2-SKA1-Euclid-DESI-MuEtaFisher-Marged-fiducialMGDE2nonuhs-GC+WL--nlHS-pars-6-7_-FixedRange.pdf}
\includegraphics[width=0.35\linewidth]{figures/ellipses/T-related/ellipsesPlot-withLegend-MuEtaFisher-Marged-AllSurveys-SKA1-SKA2-Euclid-DESI-fiducialMGTR2nonuhs-GC_GC+WL--nlHS-pars-6-7_-FixedRange.pdf}
\caption{\label{fig:combined_surveys} 1$\sigma$ and 2$\sigma$ fully marginalized confidence contours on the parameters $\mu$ and $\eta$, for three different surveys combining Galaxy Clustering (GC) and Weak Lensing (WL), namely Euclid, SKA1-SUR and SKA2, and for GC only (DESI-ELG), all in the late-time (left panel) and early-time (right panel) parameterizations of Sections \ref{sub:MG-DE} and \ref{sub:MG-TR}, respectively. As explained in the main text, the constraints are parameterization-dependent, especially on $\eta$: in the late-time scenario GC alone is not able to constrain it, while in the early-time scenario GC can constrain both $\mu$ and $\eta$. } \end{figure}

\subsection{\label{sub:MG-TR} Modified Gravity in the early-time parameterization}

\subsubsection{Galaxy Clustering, Weak Lensing and their combination}

We now extend our analysis to an alternative choice, the early-time parameterization specified in Eqns.\ (\ref{eq:TR-mu-parametrization}) and (\ref{eq:TR-eta-parametrization}). As before, we use Euclid Redbook specifications for WL and GC and the cuts in scales discussed previously for the two observables, i.e.\ a maximum wavenumber cutoff at $k_{\rm max}=0.15$ h/Mpc for GC and a maximum multipole of $\ell_{\rm max}=1000$ for WL in the linear case, and a cutoff $k_{\rm max}=0.5$ h/Mpc and a maximum multipole of $\ell_{\rm max}=5000$ for WL in the non-linear regime, which is analyzed using the prescription described in Sec.\ \ref{sub:Prescription-HS}. We use the two observables both separately and in combination, without accounting for the cross-correlation of the two (as discussed in Section \ref{sec:Results:-Redshift-Binned}, this seems to correspond to a conservative choice), with and without {\it Planck}\ priors. Results are shown in Table \ref{tab:errors-Euclid-GC-WL-early_time} and Figure \ref{fig:T-related-ellipses-mu-omegac}. The general behaviour of the constraints is similar to the one in the late-time parameterization, with the combination of GC and WL able to break the degeneracies with the standard cosmological parameters, leading to a significant improvement of the constraints on the MG parameters, constraining $\mu$ and $\Sigma$ at the 1-2\% level.

There are some other interesting differences with respect to the late-time scenario. First, the addition of {\it Planck}\ priors does not substantially improve the constraints obtained by GC or WL alone, which was not the case in the late-time parametrization. This is related to the fact that in the early-time parameterization, GC and WL (non-linear) on their own are already good at constraining both $\mu$ (at 2-3\%) and $\eta$ (at around 8\%), with consequently small errors on $\Sigma$ ($\approx 3$\%). In Appendix \ref{sec:appder} we show the derivatives of the matter power spectrum with respect to the MG parameters $\mu$ and $\eta$ in both parameterizations.
We can observe that in the early-time scenario, the derivative $dP(k)/d\eta$ is larger than in the late-time parameterization, therefore leading to better constraints. Another difference lies in the correlation among parameters, which for WL and GC+WL is considerably smaller than in the late-time scenario. The Figure of Correlation (defined in Eqn.\ \ref{eq:FoC}) for GC (non-linear HS) alone is 4.7, while for WL (non-linear HS) the correlation is somewhat higher, with $\textrm{FoC}=7.3$. When combining both probes (GC+WL (non-linear HS)), the FoC goes to an intermediate value of 5.2.

\begin{figure}[htbp]
\includegraphics[width=0.3\textwidth]{figures/ellipses/T-related/ellipsesPlot-withLegend-Manual-MuEtaFisher-Marged-fiducialMGTR2nonuhs-GC_WL_GC+WL+Planck--nlHS-pars-6-7_-.pdf}
\includegraphics[width=0.3\textwidth]{figures/ellipses/T-related/ellipsesPlot-noLegend-Manual-MuSigmaFisher-Marged-fiducialMGTR2nonuhs-GC_WL_GC+WL+Planck--nlHS-pars-6-7_-.pdf} \\
\includegraphics[width=0.3\textwidth]{figures/ellipses/T-related/ellipsesPlot-noLegend-Manual-MuSigmaFisher-Marged-fiducialMGTR2nonuhs-GC_WL_GC+WL+Planck--nlHS-pars-2-6_-.pdf}
\includegraphics[width=0.3\textwidth]{figures/ellipses/T-related/ellipsesPlot-noLegend-Manual-MuSigmaFisher-Marged-fiducialMGTR2nonuhs-GC_WL_GC+WL+Planck--nlHS-pars-4-7_-.pdf}
\caption{\label{fig:T-related-ellipses-mu-omegac} Fisher Matrix marginalized forecasted contours (1$\sigma$, 2$\sigma$) for the Euclid Redbook satellite in the early-time parameterization using mildly non-linear scales and the HS prescription. Green lines represent constraints from the Galaxy Clustering survey, pink lines stand for the Weak Lensing observables, and orange lines represent the GC+WL+{\it Planck} combined confidence regions. \textbf{Upper left: }contours for the fully marginalized errors on $\eta$ and $\mu$. \textbf{Upper right: }contours for the fully marginalized errors on $\Sigma$ and $\mu$.\textbf{ Lower left: }contours for the fully marginalized errors on $\mu$ and $\Omega_{b}$. \textbf{Lower right: }contours for the fully marginalized errors on $\Sigma$ and $\ln 10^{10} A_s$. Notice that in this parameterization, GC and WL are able to constrain both $\mu$ and $\eta$ or $\Sigma$ on their own.} \end{figure}

\subsubsection{Other Surveys: DESI-ELG, SKA1 and SKA2 \label{subsub: other-surveys-early-time}}

Also in the early-time parameterization we obtain the 1$\sigma$ fully marginalized errors for Galaxy Clustering (top left panel of Figure \ref{fig:BarPlot-MGTR-Surveys}), considering DESI-ELG (yellow), SKA2 (green), SKA1-SUR (orange) and Euclid (blue), and for Weak Lensing (top right panel of Figure \ref{fig:BarPlot-MGTR-Surveys}), using Euclid (blue), SKA1 (orange) and SKA2 (green). We also report the results in Table \ref{tab:errors-GC-SKAcompare-MG-TR-mu-eta-sigma-Zhao-1}, where one can see that the hierarchy of the considered experiments does not change with respect to the late-time parameterization. The main difference is that in this case the full SKA1-SUR GC survey does not constrain the $\Sigma$ parameter better than the Euclid survey; this is due to the fact that in the early-time parametrization, deviations from $\Lambda$CDM are present also at high redshift, and therefore we expect the information present at low redshift to be as relevant as the one coming from higher $z$, where Euclid performs significantly better than SKA1-SUR.
The marginalized contours for the $\mu$-$\eta$ plane, comparing all these surveys, can be seen in the right panel of Fig.\ \ref{fig:combined_surveys}.

\begin{figure}[htbp] \begin{centering}
\includegraphics[width=0.45\textwidth]{figures/BarPlots/4surveys-GCnlHS-MGTR-earlytime}\hspace{-0.5pt}
\includegraphics[width=0.45\textwidth]{figures/BarPlots/4surveys-WLnlHS-MGTR-earlytime.pdf}
\includegraphics[width=0.45\textwidth]{figures/BarPlots/4surveys-GC+WLlin-MGTR-earlytime}\hspace{-0.5pt}
\includegraphics[width=0.45\textwidth]{figures/BarPlots/4surveys-GC+WLlin+Planck-MGTR-earlytime}
\includegraphics[width=0.45\textwidth]{figures/BarPlots/4surveys-GC+WLnlHS-MGTR-earlytime}\hspace{-0.5pt}
\includegraphics[width=0.45\textwidth]{figures/BarPlots/4surveys-GC+WLnlHS+Planck-MGTR-earlytime}
\par\end{centering}
\caption{\label{fig:BarPlot-MGTR-Surveys} Same as Fig.\ \ref{fig:BarPlot-DE-GC-1} but for the early-time parameterization (Eqns.\ \ref{eq:TR-mu-parametrization},\ref{eq:TR-eta-parametrization}). The 1$\sigma$ fully marginalized constraints on the parameters are weaker for WL than for GC, which is a consequence of the higher correlation among variables for the Weak Lensing observable. } \end{figure}

\subsection{Testing the effect of the Hu-Sawicki non-linear prescription on parameter estimation \label{sub:Testing-the-effect-of-Zhao}}

In this section we show the effect of changing the parameters $c_{\mathrm{nl}}$ and $s$ used in the HS prescription (specified in Eq.\ \ref{eq:PHSDefinition}) for the mildly non-linear matter power spectrum. As mentioned already in Section \ref{sub:Prescription-HS}, previous works (see \cite{zhao_modeling_2014,zhao_n-body_2011,koyama_non-linear_2009}) have fitted the values of $c_{\mathrm{nl}}$ and $s$ to match N-body simulations in specific Modified Gravity models. In all these cases the HS parameters $c_{\mathrm{nl}}$ and $s$ have been found to be of order unity, with $c_{\mathrm{nl}}$ usually ranging from $0.1$ to $3$ and $s$ from about $1/3$ to around $2$. In the absence of N-body simulations for our models, we selected our benchmark HS parameters to be $c_{\mathrm{nl}}=1$ and $s=1$, as discussed in Section \ref{sub:Prescription-HS}, and used them for all the analysis presented above. However, in order to test the effect of a change in the non-linear parameters $c_{\mathrm{nl}}$ and $s$ on our estimation of the errors on the cosmological parameters, we repeat our GC and WL forecasts for the MG late-time model (Section \ref{sub:MG-DE}), changing the values of both HS parameters one at a time. We use the following values for our test: $c_{\mathrm{nl}}=\{0.1,0.5,\,1,\,3\}$ and $s=\{0,\,1/3,\,2/3,\,1\}$.

\begin{figure}[htbp] \centering{}
\includegraphics[width=0.35\textwidth]{figures/DensityPlots/GC-Zhao-effect-mu.pdf}
\includegraphics[width=0.35\textwidth]{figures/DensityPlots/GC-Zhao-effect-eta.pdf}
\caption{\label{fig:Density-GC-HSpars}Effect of the $c_{\mathrm{nl}}$ and $s$ parameters on the 1$\sigma$ marginalized error on the $\mu$ (left panel) and $\eta$ (right panel) parameters in the MG late-time parametrization for a Euclid Galaxy Clustering forecast with Redbook specifications. The colored contours show the percentage discrepancy when departing from the benchmark case $c_{\mathrm{nl}}=1$ and $s=1$ (marked with a black arrow) to all other points in the $c_{\mathrm{nl}}$-$s$ space. The red (blue) contours signal the regions of maximum positive (negative) discrepancy.
For example, in the left panel, choosing $c_{\mathrm{nl}}$ and $s$ in the red region will yield a 1$\sigma$ marginalized error on $\mu$ which is 90\% larger than in the benchmark case (see Table \ref{tab:errors-Euclid-GC-WL-late_time} for the benchmark forecast). For the standard $\Lambda\mathrm{CDM}$ cosmological parameters (not shown here) the discrepancy is smaller than 4\% for all choices of $c_{\mathrm{nl}}$ and $s$. } \end{figure}

\begin{figure}[htbp] \centering{}
\includegraphics[width=0.35\textwidth]{figures/DensityPlots/WL-Zhao-effect-mu.pdf}
\includegraphics[width=0.35\textwidth]{figures/DensityPlots/WL-Zhao-effect-Sigma.pdf}
\caption{\label{fig:Density-WL-HSpars} Same as Fig.\ \ref{fig:Density-GC-HSpars} but for a Weak Lensing Euclid forecast using Redbook specifications. In the left panel, choosing $c_{\mathrm{nl}}$ and $s$ in the red region will yield a 1$\sigma$ marginalized error on $\mu$ which is 60\% larger than in the benchmark case (see Table \ref{tab:errors-Euclid-GC-WL-late_time} for the benchmark forecast). In the right panel we see that for the MG parameter $\Sigma$ the maximum positive and negative discrepancies are only of about 6\%. The maximum and minimum discrepancies for the MG parameter $\eta$ are of 50\% and $-15$\%. This means that the parameter $\Sigma$ (defined in terms of the lensing (Weyl) potential, and therefore directly constrained by Weak Lensing) is much less sensitive to changes in the non-linear prescription parameters.} \end{figure}

In Figure \ref{fig:Density-GC-HSpars} we show the percentage discrepancy between the 1$\sigma$ marginalized error obtained on the parameters in the GC non-linear HS benchmark case (see Table \ref{tab:errors-Euclid-GC-WL-late_time} for the exact values) and the corresponding error obtained by performing the same forecast with a different value of the $c_{\mathrm{nl}}$-$s$ parameters. In general terms, we see that for the MG parameter $\mu$ (left panel of Figure \ref{fig:Density-GC-HSpars}) the relative difference in the estimation of the 1$\sigma$ forecasted errors can lie between 90\% (at $c_{\mathrm{nl}}=3$, $s=0.33$) and $-30$\% (at $c_{\mathrm{nl}}=0.1$, $s=1.0$). The behavior of the contour lines shows that, for a fixed value of $c_{\mathrm{nl}}$, the forecasted error on the parameter $\mu$ remains essentially unaffected. For $\eta$ (right panel) the relative discrepancy lies between 40\% and $-2$\%. Here, to get the same 1$\sigma$ errors on $\eta$, one would have to vary both $c_{\mathrm{nl}}$ and $s$. We also tested the effect on the standard $\Lambda\mathrm{CDM}$ parameters and found it to be smaller than 4\% for all choices of $c_{\mathrm{nl}}$ and $s$.

In the case of WL forecasts, we perform the same tests, which are shown in Figure \ref{fig:Density-WL-HSpars}. We can observe that the relative discrepancies in the case of the $\mu$ parameter lie between $\sim$60\% and $\sim -15$\%, while for $\Sigma$ the discrepancy is considerably smaller: the 1$\sigma$ error on $\Sigma$ varies only within $\pm 6$\%. This is however a particular effect of choosing $\Sigma$, which is the true WL observable. If we perform this test on $\eta$ using WL, we find a stronger discrepancy, which lies between $\sim$50\% and $\sim -15$\%, similar to the one found for $\mu$.
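The discrepancy maps of Figures \ref{fig:Density-GC-HSpars} and \ref{fig:Density-WL-HSpars} can be produced with a loop of the following form (a minimal sketch: \texttt{forecast\_error} is a hypothetical placeholder standing in for the full Fisher pipeline, returning the forecasted 1$\sigma$ error on a chosen parameter for given HS parameters):
\begin{verbatim}
import numpy as np

def discrepancy_map(forecast_error, cnl_grid, s_grid,
                    benchmark=(1.0, 1.0)):
    # Percentage discrepancy of the forecasted 1-sigma error
    # with respect to the benchmark case c_nl = 1, s = 1.
    ref = forecast_error(*benchmark)
    return np.array([[100.0 * (forecast_error(cnl, s) / ref - 1.0)
                      for s in s_grid] for cnl in cnl_grid])

# Grids used in the text:
cnl_grid = [0.1, 0.5, 1.0, 3.0]
s_grid = [0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0]
\end{verbatim}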
We can also test the effect of adding these two HS parameters as extra nuisance parameters to our model of the non-linear power spectrum: by taking derivatives of the observed power spectrum with respect to these parameters, we can forecast the estimated errors on $c_{\mathrm{nl}}$ and $s$. Marginalizing over $c_{\mathrm{nl}}$ and $s$ then yields more realistic constraints on our cosmological parameters, taking into account our ignorance of the correct parameters of the HS prescription. In Table \ref{tab:errors-GC+WL-Marginalize-HS-Zhao-Euclid-DEpar-muetasigma} we list the 1$\sigma$ marginalized constraints on $c_{\mathrm{nl}}$ and $s$ for our benchmark fiducial ($c_{\mathrm{nl}}=1$, $s=1$), using the standard fiducial for the cosmological parameters employed previously for the MG late-time parametrization. Taking these HS parameters into account in our Fisher forecast will automatically change the constraints on the other parameters. For our method to be consistent, we would like this effect to remain as small as possible. In the same table, we also list the constraints reported in Table \ref{tab:errors-Euclid-GC-WL-late_time}, but obtained marginalizing over $c_{\mathrm{nl}}$ and $s$ at the benchmark fiducial $c_{\mathrm{nl}}=1$, $s=1$. Comparing the two tables, we can see that for GC the errors on the cosmological parameters remain quite stable, except for $\eta$ and $\Sigma$; this is understandable, since those parameters are not well constrained by GC alone. For WL, we see a difference of 4 to 7 percentage points in the errors on $h$, $n_{s}$ and $\Omega_{b}$, while all other errors remain stable, with $\Sigma$ varying by less than 2 percentage points. Remarkably, the combined constraints from GC+WL are even less affected by the two nuisance parameters. Comparing the Modified Gravity FoM between the two tables shows the expected behavior: the MG FoM is reduced when adding two extra parameters, but the change is very small, of just 0.3-0.5 nits.

\begin{table}[htbp] \centering{}%
\begin{tabular}{|c|c|c|c|c|c||c|c|c||c|c|c|}
\hline
\textbf{Euclid} (Redbook) & $\Omega_{c}$ & $\Omega_{b}$ & $n_{s}$ & $\ell\mathcal{A}_{s}$ & $h$ & $\mu$ & $\eta$ & $\Sigma$ \Tstrut & $c_{\mathrm{nl}}$ & $s$ & MG FoM \tabularnewline
\hline
\Tstrut Fiducial & {0.254} & {0.048} & {0.969} & {3.060} & {0.682} & {1.042} & {1.719} & {1.416} & {1} & {1} & relative \tabularnewline
\hline \hline
\Tstrut \textbf{GC(nl-HS)} \input{latex-tables/MGDEzh-tables/muEtaSigmaPars_GC-nlHS-Euclid-fiducialMGDE2nonuhszh--oneSigmaPercentageErrs-.tex} & 2.4 \tabularnewline
\Tstrut \textbf{WL(nl-HS)} \input{latex-tables/MGDEzh-tables/muEtaSigmaPars_WL-nlHS-Euclid-fiducialMGDE2nonuhszh--oneSigmaPercentageErrs-.tex} & 4.2 \tabularnewline
\Tstrut \textbf{GC+WL(nl-HS)} \input{latex-tables/MGDEzh-tables/muEtaSigmaPars_GC+WL-nlHS-Euclid-fiducialMGDE2nonuhszh--oneSigmaPercentageErrs-.tex} & 8.5 \tabularnewline
\hline
\end{tabular}\protect\caption{\label{tab:errors-GC+WL-Marginalize-HS-Zhao-Euclid-DEpar-muetasigma} 1$\sigma$ fully marginalized errors on the cosmological parameters and the two HS parameters $c_{\mathrm{nl}}$ and $s$ for a Euclid Galaxy Clustering forecast, a Weak Lensing forecast and the combination of both, in the late-time parameterization of Modified Gravity using non-linear scales and the HS prescription.
In contrast to Table \ref{tab:errors-Euclid-GC-WL-late_time} (where $c_{\mathrm{nl}}$ and $s$ had been fixed to the benchmark values), here we include $c_{\mathrm{nl}}$ and $s$ as free parameters and marginalize over them. The MG FoM is computed relative to the same Euclid Redbook GC linear case used previously. The errors and the MG FoM show the expected behavior of adding two nuisance parameters, and remain quite stable. All other naming conventions are the same as for Table \ref{tab:errors-Euclid-GC-WL-late_time}. Remarkably, the combination of GC and WL is still able to constrain all Modified Gravity parameters at the level of 1-2$\%$ after marginalizing over the non-linear parameters.} \end{table}

\section{Conclusions}

In this paper we study the constraining power of upcoming large scale surveys on Modified Gravity theories, choosing a phenomenological approach that does not require specifying any particular model. To this purpose we consider the two functions $\mu$ and $\eta$ that encode general modifications to the Poisson equation and the anisotropic stress. We study three different approaches to MG: redshift binning, where we discretize the functions $\mu(z)$ and $\eta(z)$ in 5 redshift bins and let the values of $\mu$ and $\eta$ in each of the bins vary independently of the others; an early-time parameterization, where $\mu$ and $\eta$ are allowed to vary at early times and their amplitude can be different from unity today; and a late-time parameterization, where $\mu$ and $\eta$ are linked to the energy density of dark energy and are therefore very close to unity in the past, but can vary considerably at small redshifts. For convenience, we summarize all results in Table \ref{table_results}, with a direct link to the section and tables related to each scenario.

\begin{table}
\begin{tabular}{|l|l|l|}
\hline
\Tstrut \textbf{Model} & \textbf{Description} & \textbf{Results} \\
\hline
\Tstrut Redshift Binned $\mu$ and $\eta$ & Section \ref{sub:param-z-bins-th}, Eqns.\ (\ref{eq:MGbin-mu-parametrization})-(\ref{eq:MGbin-muderiv-parametrization}) & Table \ref{tab:errors-all-MGBin3}; \,\,\,\,\,\,\,Figs.\ \ref{fig:GCcorr}-\ref{fig:GC+WL+Planck-bestconst-errspq} \\
\hline
\Tstrut Late-time parameterization & Section \ref{sub:param-smooth-funct}, Eqns.\ (\ref{eq:DE-mu-parametrization})-(\ref{eq:DE-eta-parametrization}) & Tables \ref{tab:errors-Euclid-GC-WL-late_time},\ref{tab:errors-GC-SKAcompare-MG-DE-mu-eta-sigma}; Figs.\ \ref{fig:DE+Planck-ellipses-mu-sig-eta}, \ref{fig:BarPlot-DE-GC-1}, \ref{fig:combined_surveys} \\
\hline
\Tstrut Early-time parameterization & Section \ref{sub:param-smooth-funct}, Eqns.\ (\ref{eq:TR-mu-parametrization})-(\ref{eq:TR-eta-parametrization}) & Tables \ref{tab:errors-Euclid-GC-WL-early_time},\ref{tab:errors-GC-SKAcompare-MG-TR-mu-eta-sigma-Zhao-1}; Figs.\ \ref{fig:T-related-ellipses-mu-omegac}, \ref{fig:BarPlot-MGTR-Surveys}, \ref{fig:combined_surveys} \\
\hline
\Tstrut Effect of non-linear prescription & Section \ref{sub:Prescription-HS}, Eqns.\ (\ref{eq:PHSDefinition}),(\ref{eq:prescription_sigma_def}) & Table \ref{tab:errors-GC+WL-Marginalize-HS-Zhao-Euclid-DEpar-muetasigma}; \,\,\,\,\, Figs.\ \ref{fig:Density-GC-HSpars}, \ref{fig:Density-WL-HSpars} \\
\hline
\end{tabular}
\caption{Summary of results for this work.
For each model studied, we indicate where to find the description and the main Fisher forecast results.} \label{table_results} \end{table}

We use the predictions of linear perturbation theory to compute the linear power spectrum in Modified Gravity and then use a prescription to add the non-linearities, interpolating between the Halofit non-linear corrections computed for the linear power spectrum of the MG model and of the corresponding GR model ($\eta=\mu=1$). We find that the non-linear power spectrum is sensitive to changes in $\mu$ and $\eta$; limiting the analysis to linear scales significantly reduces the constraining power on the anisotropic stress. Using this prescription, we perform Fisher forecasts for Galaxy Clustering and Weak Lensing, taking into account linear and non-linear scales. We use the specifications for Euclid (Redbook), SKA1 \& SKA2 and DESI (ELG only). In addition to these surveys we also include {\it Planck}\ priors, obtained by performing an MCMC analysis with {\it Planck}\ data for the MG parametrizations considered here.

In the redshift-binned case, we find that in the linear case the $\mu_i$ and $\eta_i$ parameters are strongly correlated, while including the information coming from non-linear scales reduces this correlation. We compute a figure of merit (FoM), given by the determinant of the $\mu$-$\eta$ part of the Fisher matrix, for the cases examined, finding that the combination of Galaxy Clustering and Weak Lensing is able to break the degeneracies among Modified Gravity parameters; as an example, the error obtained with the non-linear prescription on $\mu$ ($\eta$) in the first redshift bin changes from $7\%$ ($20\%$) for Galaxy Clustering to $2.2\%$ ($3.6\%$) when this is combined with Weak Lensing, even if Weak Lensing alone is not very constraining for the same parameters. Overall, constraints are stronger at low redshifts, with the first two bins ($0 < z < 1$) being constrained at better than $5\%$ for both $\mu$ and $\eta$ if non-linearities are included (while the constraints are half as good, $<10\%$, if we only consider linear scales).

Given the significant correlation between the $\mu_i$ and $\eta_i$ parameters, we apply the ZCA decorrelation method in order to find a set of uncorrelated variables, which gives us information on which redshift dependence of $\mu$ and $\eta$ will be best constrained by future surveys. If one combines GC+WL (Euclid Redbook)+{\it Planck}, the best constrained combinations of parameters (effectively $2\mu+\eta$ in the lowest redshift bin) will be measured with a precision of better than 1\%. In the linear case, the errors on the decorrelated $q_i$ parameters are about 2 orders of magnitude smaller than for the primary parameters, while in the non-linear HS case, the improvement in the errors is of one order of magnitude. This also shows that applying a decorrelation procedure is worthwhile even when non-linearities are considered.

In addition to binning the Modified Gravity functions in redshift, we also forecast the constraining power of the same probes in the case where we assume a specific time evolution for the $\mu$ and $\eta$ functions. We choose two different and complementary time evolutions, used in \cite{planck_collaboration_planck_2016}, to which we refer as late-time and early-time evolution. Also in this case we investigate the impact of the non-linear prescription interpolating between Halofit and the MG power spectrum.
For these parameterizations we extract constraints on the present reconstructed values of $\mu$, $\eta$ and $\Sigma$, where the latter is the parameter actually measured directly by Weak Lensing. In the late-time parameterization, in the linear case, $\mu$ is mainly constrained by GC (although poorly, at the level of $17\%$), while WL directly constrains $\Sigma$, the modification of the lensing potential, at the level of $9\%$. Adding non-linear scales allows us to improve the constraints significantly, down to less than $2\%$ for GC (on $\mu$) and to less than $5\%$ on $\Sigma$, for these two probes. Combining probes allows us to reach $1$-$2\%$ on the values of all the Modified Gravity functions $\mu$, $\eta$ and $\Sigma$ at $z = 0$. In the early-time parameterization, we find that including non-linearities also allows us to constrain the $\eta$ and $\Sigma$ functions with GC alone, at the level of 8\% and 5\%, respectively. This is related to the early-time deviations from GR allowed by this parameterization, which are not present in the late-time case: a variation in $\eta$ can yield a variation of the amplitude of the power spectrum, which can then be measured in the mildly non-linear regime. Overall, also in this case the combination of Weak Lensing and Galaxy Clustering leads to errors of the order of $1\%$ on the present values of these functions. Finally, we test the impact on the forecasts of the uncertainties in the non-linear HS prescription, related in particular to the parameters $c_{\mathrm{nl}}$ and $s$. We find that the errors on the parameters $\mu$ and $\eta$ can vary by up to $90\%$ for the Galaxy Clustering case and up to $65\%$ for Weak Lensing when we change the fiducial values of the HS parameters in a region between 0 and 3 for $c_{\mathrm{nl}}$ and between 0 and 1 for $s$. The effect on $\Sigma$ is quite small, with a discrepancy of $\pm 6 \%$ compared to the benchmark case. Interestingly, when we include these two parameters as extra nuisance parameters in our forecast formalism and marginalize over them, the effect is very small and the errors found previously remain stable for GC, WL and their combination. It is clear that limiting the analysis to linear scales discards important information encoded in structure formation. On the other hand, a realistic analysis of non-linear scales would have to include several further effects (baryonic effects, higher order RSDs, damping of BAO peaks, corrections to peculiar velocity perturbations, higher order perturbation theory in Modified Gravity, to name just a few), which make our non-linear case an optimistic limit. Therefore, the true quantitative constraints given by a survey like Euclid will probably lie in between these two limiting cases. \section*{Acknowledgments} MM, SC and VP thank the COST Action (CANTATA/CA15117), supported by COST (European Cooperation in Science and Technology). SC and VP acknowledge support from the Heidelberg Graduate School for Fundamental Physics (HGSFP) and from the SFB-Transregio TR33 ``The Dark Universe''. MM is supported by the Foundation for Fundamental Research on Matter (FOM) and the Netherlands Organization for Scientific Research / Ministry of Science and Education (NWO/OCW). MK acknowledges financial support by the Swiss National Science Foundation. MM also thanks Alessandra Silvestri for useful discussions. \bibliographystyle{unsrtnat}
\section{Introduction}\label{section:introduction} The calculation of the spectral properties of an exciton trapped in a semiconductor nano-structure is a long-standing problem. Even for the simplest model used to study this system, {\em i.e.} the one-band effective mass approximation (EMA), there are a number of subtle details that must be taken into account. In particular, when the nano-structure consists of a heterogeneous quantum dot, the mismatch between the different material parameters imposes boundary conditions on the particle wave functions, and the electrostatic potential between the electron and hole that form the exciton cannot be taken as the simple Coulomb potential \cite{Delerue2004,Ferreyra(1998)}. The presence of excitons can be easily spotted in the absorption spectrum of different semiconductors. The energy associated with each absorption peak is smaller than the energy difference between the electronic energy levels that lie in the semiconductor valence band and those that lie in the semiconductor conduction band. If the electrons in the semiconductor were independent of one another and no many-body effects were present, the absorption energy would be equal to the energy difference between two electronic levels, one located in the conduction band and the other located in the valence band. But when an electron is promoted from the lower band to the upper one, by radiation absorption for instance, the ``hole'' produced interacts with the electron, and the binding of the pair gives rise to the exciton. The energy difference observed in the absorption spectrum, with respect to the values that correspond to independent electrons, is precisely given by the binding energy of the electron-hole pair \cite{Cardona(2005),Bastard(1988)}. An exciton trapped in a quantum dot or in a given nano-structure has very different physical properties compared to a bulk one. Roughly speaking, the radius associated with the exciton is of the same order of magnitude as the characteristic length of many nano-structures. This can be exploited to tailor the physical trait of interest through the material parameters of the semiconductors employed in the nano-structure, its geometry and its size. For instance, the mean lifetime of the exciton can be adjusted depending on the application in mind. Recently, some Quantum Information Processing (QIP) proposals use an exciton trapped in a quantum dot as a qubit, in which case the basis states are given by the presence ($\left| 1\right\rangle$), or absence ($\left| 0\right\rangle$), of the exciton \cite{Troiani(2000),Biolatti(2000)}. Obviously, for QIP applications the lifetime of the exciton should be longer than the time needed to operate and control the qubit. Modeling an exciton in a particular physical situation can be a tricky business, since a plethora of effective Hamiltonians is available to choose from. Starting from the multi-band many-electron Hamiltonian, there are a number of ways to derive effective Hamiltonians, which usually decompose the one-electron wave function into an ``envelope part'' and a ``Bloch part''. The Bloch part accounts for the underlying periodic lattice structure of the semiconductor and the envelope part for the specific nano-structure under consideration. If the electron is not confined, {\em i.e.} belongs to the bulk of the material, the envelope part reduces to the usual complex exponential present in the wave functions allowed by the Bloch theorem.
Otherwise, when a nano-structure is present, the effective Hamiltonians for the envelope part resemble multicomponent Schr\"odinger-like equations. Probably the best known procedure to derive effective Hamiltonians is the {\bf k.p} method \cite{Voon(2009)}. The number of components of the Hamiltonian derived using the {\bf k.p} method depends on how much information about the semiconductor band structure is incorporated. For instance, the well-known eight-band model is derived using the fact that the valence band levels have orbital angular momentum quantum number $L=1$, the conduction band levels $L=0$, and the electronic spin is $s=1/2$ \cite{Haug(2004)}. In this work we consider the simplest two-band effective mass approximation (EMA) model for an exciton trapped in a spherical Type-I device with a core, well and barrier structure. The core and the barrier are made of exactly the same semiconductor compound, while the well semiconductor is characterized by a conduction band whose bottom energy is lower than the corresponding energy of the material that forms the core and the external barrier. The gap between the conduction band and the valence band is narrower for the well semiconductor than for the core one. Consequently, the confinement potentials for the electron and the hole are given by the profiles of the conduction and valence bands of the semiconductors that form the device. This model has been studied extensively because, despite its apparent simplicity, it allows one to obtain the spectrum of different QDs accurately in a wide variety of situations. Nevertheless, some of its features call for careful treatment. The first feature that must be handled with care is the model binding potential: since this is assumed to be given by the profile of the semiconductor bands, a step-like potential results for both the electron and the hole. Moreover, since the effective masses of both the electron and the hole are assumed to be discontinuous position-dependent functions, the Hermitian character of the kinetic energy is ensured only by appropriate matching conditions for the wave functions at the interfaces between different materials. Finally, the electrostatic potential between the electron and hole cannot be taken as the simple Coulomb potential: it has been shown that polarization terms should be included because, again, of the presence of interfaces between materials with different dielectric constants \cite{Ferreyra(1998)}. It is worth mentioning that the study of the properties of excitons confined in quantum dots \cite{Woggon(1995)} has been tackled using a number of methods, such as perturbation theory \cite{Chang(1998),Schooss(1994)} and the {\bf k.p} method \cite{Pokatilov(2001),Efros(1998)}, and with different types of confinement potential, such as the infinite potential well \cite{Ferreyra(1998)} or the parabolic one \cite{Garm(1996)}. The selection of potential and method is often dictated by the application in mind, which could range from excitonic lasers \cite{Bimberg(1997),Bimberg(2005)}, through quantum information processing \cite{Kamada(2004),Chen(2001),Biolatti(2002)}, up to one-electron transistors \cite{Ishibashi(2003)} in microelectronic devices. Among the physical phenomena that have received the most attention one can mention the binding energy \cite{Lelong(1996)}, decoherence \cite{Calarco(2003),Bonadeo(1998)A,Bonadeo(1998)B} and the Stark effect \cite{Billaud(2009)}, among many others.
To study the two-band EMA model described above we calculate its spectrum and eigenstates using a variational approach. The approach allows us to take into account the matching conditions at the interfaces of the device and the (quite) complicated interaction potential between the two parts of the exciton, which includes the effects of polarization. As we shall see, our approach yields highly accurate approximate results for the spectrum, together with approximate wave functions that provide information about the entanglement content of the two-particle quantum state. This, in turn, indicates the parameter range in which the two-particle wave function can be more or less accurately taken as a product of one-particle wave functions. The manuscript is organized as follows. In Section~\ref{section:one-body-model} the variational method used to obtain the spectrum of one- and two-particle Hamiltonians is presented in some detail. In Section~\ref{section:two-particle} the electron-hole pair EMA Hamiltonian is analyzed, paying attention to the electrostatic problem originating from the geometry and composition of the quantum dot, and the binding energy of the electron-hole pair is calculated. In Section~\ref{section:separability} the separability problem of the exciton wave function is presented and analyzed; in particular, it is shown that a characterization through a measure of separability gives a better understanding of the exciton spectral properties. The time evolution of the exciton wave function when an external driving field is applied to the QD is studied in Section~\ref{section:control}. The study is intended to look for regimes where the external driving allows switching between only two pre-selected states of the exciton, with a negligible or controllable leakage of probability to other exciton states. Finally, our results are discussed and summarized in Section~\ref{section:conclusions}. \section{One-body models and methods}\label{section:one-body-model} The EMA, when applied to the description of an independent electron (e) or hole (h), is characterized by a number of parameters, such as the effective mass of the particle or the energy band gap of the materials from which the quantum dot is made. Apart from these parameters, the associated Hamiltonian looks like an ordinary Schr\"odinger-like Hamiltonian, \begin{equation}\label{eq:one-body-ham} \mathcal{H}_{e(h)}=-\frac{\hbar^2}{2}\nabla_{e(h)}\left(\frac{1}{m_{e(h)}^*(r_{e(h)})}\nabla_{e(h)}\right)+ V_{e(h)}(r_{e(h)})+V_{s}(r_{e(h)}) \end{equation} where $m_{e(h)}^*(r_{e(h)})$ is the electron (hole) effective mass. For a one-band model, $m_{e}^*$ corresponds to the effective mass of the electron in the conduction band, while $m_{h}^*$ is taken as the {\em light} effective mass of the hole in the valence band. $V_{e(h)}(r_{e(h)})$ is the binding potential for the electron (hole). Figure \ref{fig:tipoQD}a) shows a cartoon of the spherical self-assembled quantum dot under consideration. The inner core, of radius $a$, and the outer shell (also called the clad) are made of the same compound, which we call two (2), while the middle layer, of outer radius $b$, is made of a different compound, which we call one (1).
These kinds of heterostructures are termed Type I or Type II according to the relative positions of the valence band edges: denoting by $E^1_{top}$ the top energy of the valence band of the middle layer material and by $E^2_{top}$ that of material two, the heterostructure is of Type I if $E^1_{top} > E^2_{top}$, and of Type II otherwise, as can be seen clearly in Figure \ref{fig:tipoQD}b). The cartoon in Figure~\ref{fig:tipoQD}b) shows the conduction and valence band profiles as a function of the radial distance from the center of the hetero-structure. \begin{figure}[floatfix] \begin{center} \includegraphics[scale=0.3]{fig1.eps} \includegraphics[scale=0.9]{fig2.eps} \end{center} \caption{\label{fig:tipoQD} a) The cartoon shows the structure of the spherical self-assembled quantum dot. The different layers correspond to, from the center and going in the radial direction, the core, whose radius $a$ and dielectric constant are shown, the potential well, with inner radius $a$ and exterior radius $b$, and the clad. b) The profiles of the conduction band, $E_c(r)$, and of the valence band, $E_v(r)$, for both a Type-I device (top) and a Type-II device (bottom). $E_{g1}$ and $E_{g2}$ are the gap energies of the potential well material and the core/clad material, respectively.} \end{figure} Since the potential in Equation \ref{eq:one-body-ham}, $V_{e(h)}(r_{e(h)})$, is taken as exactly the conduction band profile (valence band profile) for the electron (hole), it is clear that a Type-I device corresponds to the case where the middle layer acts as a potential well for both particles, so \begin{equation} \label{eq:potencial} V_{e(h)}(r)=\left\{\begin{array}{lcl} V_0^{e(h)}>0& &0<r<a\\ 0& &a<r<b\\ V_0^{e(h)}>0& &b<r \end{array}\right., \end{equation} with $V_0^{e(h)}=\mathrm{const}$. The potential $V_s$ is due to the polarization charges produced at the core/middle layer and middle layer/clad interfaces because of the mismatch between the dielectric constants. It can be shown that the auto-polarization potential is given by \begin{equation} \label{eq:polarizacion} V_s(r)=\frac{q^2}{2\, \varepsilon_1}\sum_{l}\frac{1}{(1-pq)}\left(qr^{2l}+\frac{p}{r^{2l+2}}+\frac{pq} {r} \right) \end{equation} where \begin{equation} p=(\varepsilon_1-\varepsilon_2)la^{(2l+1)}/[\varepsilon_2 l+\varepsilon_1 (l+1)], \end{equation} and \begin{equation} q=(\varepsilon_1-\varepsilon_2)(l+1)b^{-(2l+1)}/[\varepsilon_1 l+\varepsilon_2 (l+1)] . \end{equation} The kinetic energy term in Equation \ref{eq:one-body-ham} is written in a symmetric fashion in order to guarantee the Hermitian character of the operator, provided it is considered together with the following matching conditions: \begin{eqnarray} \label{eq:matching} \psi(r=a^-)&=&\psi(r=a^+) \\ \nonumber \psi(r=b^-)&=&\psi(r=b^+) \\ \nonumber \left( \frac{1}{m^*} \frac{d\psi}{dr}\right) |_{r=a^-} &= &\left( \frac{1}{m^*} \frac{d\psi}{dr}\right) |_{r=a^+} \\ \nonumber \left( \frac{1}{m^*} \frac{d\psi}{dr}\right) |_{r=b^-} &= &\left( \frac{1}{m^*} \frac{d\psi}{dr}\right) |_{r=b^+} . \end{eqnarray} The eigenvalue problem \begin{equation} \label{eq:eigenvalue-problem} \mathcal{H} \psi = E \psi , \end{equation} with $\mathcal{H}$ given by Equation~\ref{eq:one-body-ham}, can be studied using different (and numerous) methods that result in an approximate spectrum and, in some cases, approximate eigenfunctions. However, depending on the method used to tackle the problem, the matching conditions cannot always be implemented in a direct way.
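To make the last two points concrete, the sketch below (Python; a plain finite-difference illustration with made-up grid, radii and masses, not the B-spline scheme adopted in this work) shows how discretizing the symmetrized kinetic term of Equation~\ref{eq:one-body-ham} in flux form keeps the Hamiltonian matrix Hermitian even though $m^*(r)$ jumps at $r=a$ and $r=b$.
\begin{verbatim}
import numpy as np

R, n = 50.0, 2000               # cutoff radius and grid size (illustrative)
r = np.linspace(0.0, R, n); h = r[1] - r[0]
a, b = 10.0, 31.71              # core and well radii (illustrative)
m = np.where((r > a) & (r < b), 0.036, 0.2)  # well / core-clad masses

# inverse mass at the midpoints r_{i+1/2}; units with hbar^2 = 1
minv = 2.0 / (m[:-1] + m[1:])
T = np.zeros((n, n))            # boundary rows left empty (Dirichlet)
for i in range(1, n - 1):
    T[i, i - 1] = -0.5 * minv[i - 1] / h**2
    T[i, i + 1] = -0.5 * minv[i] / h**2
    T[i, i] = 0.5 * (minv[i - 1] + minv[i]) / h**2

assert np.allclose(T, T.T)      # symmetric despite the mass jumps
\end{verbatim}
Evaluating $1/m^*$ at the midpoints is what enforces, at the discrete level, the continuity of $\frac{1}{m^*}\frac{d\psi}{dr}$ required by Equation~\ref{eq:matching}.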
A method that weights the matching conditions adequately is one that easily incorporates the step-like nature of the binding potential, Equation~\ref{eq:potencial}, and of the effective mass. The approximate eigenfunctions and eigenvalues analyzed in this work were obtained using B-spline basis sets, which have been used to obtain high-accuracy results for atomic, molecular and quantum dot systems. The method has been well described elsewhere, so the only details we discuss here are those relevant to understanding some of the results presented later on. To use the B-spline basis, the normalized one-electron orbitals are given by \begin{equation}\label{phi-bs} \phi_{n}({r}) = C_n \, \frac{B^{(k)}_{n+1}(r)}{r} \,;\;\;n=1,\ldots \end{equation} \noindent where $B^{(k)}_{n+1}(r)$ is a B-spline polynomial of order $k$. The numerical results are obtained by defining a cutoff radius $R$, after which the interval $[0,R]$ is divided into $I$ equal subintervals. B-spline polynomials \cite{deboor} (for a review of applications of B-spline polynomials in atomic and molecular physics, see Reference \cite{bachau01}; for their application to QD problems see Reference \cite{Ferron2013}) are piecewise polynomials defined by a sequence of knots $t_1=0\leq t_2\leq\cdots \leq t_{2 k+I-1}=R$ and the recurrence relations \begin{equation}\label{bs1} B_{i}^{(1)}(r)\,=\,\left\{ \begin{array}{ll} 1 & \mbox{if}\,t_i\leq r < t_{i+1} \\ 0 &\mbox{otherwise,} \end{array} \right. \,. \end{equation} \begin{equation}\label{bsrr} B_{i}^{(k)}(r)\,=\,\frac{r-t_i}{t_{i+k-1}-t_i}\,B_{i}^{(k-1)}(r)\,+\, \frac{t_{i+k}-r}{t_{i+k}-t_{i+1}}\,B_{i+1}^{(k-1)}(r)\; (k>1)\,. \end{equation} The standard choice for the knots in atomic physics \cite{bachau01} is $t_1=\cdots=t_k=0$ and $t_{k+I}=\cdots=t_{2k+I-1}=R$. Because of the matching conditions at the interfaces between the core and the potential well and between the potential well and the clad, it is more appropriate to choose the knots as follows: at the extremes of the interval $\left[0,R\right]$ where the wave function is calculated there are $k$ repeated knots, at the interfaces $r=a$ and $r=b$ there are $k-3$ repeated knots, and in the open intervals $(0,a)$, $(a,b)$ and $(b,R)$ the knots are distributed uniformly \cite{HaoXue(2002)}. The constant $C_n$ in Equation~\ref{phi-bs} is a normalization constant obtained from the condition $\left\langle \phi_n |\phi_n \right\rangle =1 $, \begin{equation}\label{nor-c} C_n = \frac{1}{\left[ \int_0^{R} \, \left(B^{(k)}_{n+1}(r) \right)^2 \,dr \right]^{1/2}} \,. \end{equation} Because $B_1(0)\ne0$ and $B_{I+3k-9}(R)\ne0$, we have $N=I+3k-11$ orbitals corresponding to $B_2,\ldots,B_{I+3k-10}$. In all the calculations we used the value $k=5$ and, accordingly, we do not write the index $k$ in the eigenvalues and coefficients. \begin{figure}[floatfix] \begin{center} \includegraphics[scale=0.6]{fig3.eps} \end{center} \caption{\label{fig:fin-vs-inf} The first one-electron eigenvalues for an infinite potential well (dashed lines) and for a finite potential well (solid lines) as functions of the dimensionless ratio $a/b$. In both cases $b=31.71$ nm, which is the Bohr radius for an exciton immersed in a HgS matrix. The eigenvalues correspond to zero angular momentum ($\ell=0$) states.} \end{figure} To gain some insight into the performance of the B-spline method we first studied the eigenvalue problem of one electron confined in a multi-layered quantum dot such as the one depicted in Figure~\ref{fig:tipoQD}a).
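As an aside, the recursion in Equations~\ref{bs1} and \ref{bsrr} translates directly into code. A minimal Python sketch (0-based indexing; terms whose denominators vanish because of repeated knots are dropped, following the usual $0/0=0$ convention) reads:
\begin{verbatim}
def bspline(i, k, r, t):
    # B-spline B_i^(k)(r) on the knot sequence t (Cox-de Boor recursion)
    if k == 1:
        return 1.0 if t[i] <= r < t[i + 1] else 0.0
    val = 0.0
    if t[i + k - 1] > t[i]:      # skip 0/0 terms at repeated knots
        val += (r - t[i]) / (t[i + k - 1] - t[i]) * bspline(i, k - 1, r, t)
    if t[i + k] > t[i + 1]:
        val += (t[i + k] - r) / (t[i + k] - t[i + 1]) * bspline(i + 1, k - 1, r, t)
    return val

# e.g. bspline(0, 3, 0.5, [0, 0, 0, 1, 2, 3]) -> 0.25
\end{verbatim}
Repeating a knot lowers the smoothness of the basis at that point; this is precisely what allows the expansion to accommodate the derivative jumps imposed by Equation~\ref{eq:matching} at $r=a$ and $r=b$.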
Quantum dots whose core, well and clad are made of CdS/HgS/CdS have been extensively studied \cite{Schooss(1994)}, so they are a good starting point for our study. For these materials, the effective masses for the electron in the respective conduction bands are $m_{e,CdS}^*=0.2$, $m_{e,HgS}^*=0.036$, while for the hole in the valence bands the effective masses are given by $m_{h,CdS}^*=0.7$, $m_{h,HgS}^*=0.040$. The dielectric constants are $\varepsilon_{CdS}=\varepsilon_2=5.5$, $\varepsilon_{HgS}=\varepsilon_1=11.36$. Besides, \begin{eqnarray*} \label{eq:band-energies} E^{CdS}_{bottom} - E^{HgS}_{bottom} & = & 1.35\mbox{eV}, \\ E^{HgS}_{top} - E^{CdS}_{top} & = & 0.9\mbox{eV} . \end{eqnarray*} The other parameters that define the device are the radii $a$ and $b$. To augment the effect of the confinement due to the binding potential we choose $b$ equal to the Bohr radius of a bulk exciton in HgS \cite{Chang(1998)}; then $a$ can take any value between zero and $b$. This model was studied in \cite{Ferreyra(1998)} in the limit of ``strong confinement'', {\em i.e.} with the electron bound in an infinite potential well. We want to remark that the electron-hole pair behaves very much like a hydrogen-like atom when it is immersed in a semiconductor bulk; besides, in many cases, the hole effective mass is much larger than the electron one. So, it makes sense to consider the Bohr radius as one of the length scales that characterize the problem. Figure~\ref{fig:fin-vs-inf} shows the behavior of the lowest lying approximate one-electron eigenvalues obtained using the B-spline method for a quantum dot with the material parameters listed above, and for one electron bound in an infinite potential well, as functions of the ratio between both radii, $a/b$. The eigenvalues correspond to eigenfunctions with orbital angular momentum quantum number $\ell=0$. As can be observed from the Figure, the infinite potential well eigenvalues, which can be obtained exactly, are a fairly good approximation to the quantum dot eigenvalues for small values of $a/b$, but for larger values of $a/b$, or for the excited states, the relative error grows considerably. Since in many works in the literature the binding energy of the exciton is obtained using the exact solutions of the infinite potential well, the results shown in Figure~\ref{fig:fin-vs-inf} indicate that great caution is necessary if this approximation is to be used. The eigenvalues obtained using the B-spline method have a relative error of less than $10^{-3}$. \begin{figure}[floatfix] \begin{center} \includegraphics[scale=0.6]{fig4.eps} \end{center} \caption{\label{fig:auto-polarization} The lowest lying one-electron eigenvalues for different radial ($n_r$) and angular momentum ($\ell$) quantum numbers. The eigenvalues were calculated for a finite potential well and a Hamiltonian that includes the auto-polarization term of Equation~\ref{eq:polarizacion} (solid lines). The dashed lines are the corresponding eigenvalues obtained by removing the auto-polarization term (or, equivalently, choosing $\varepsilon_1=\varepsilon_2$). The labels of each curve are the quantum numbers that identify the corresponding eigenstate. } \end{figure} There has been some discussion about the need to include, or not, the self-polarization term, Equation~\ref{eq:polarizacion}, in the one-electron Hamiltonian, Equation~\ref{eq:one-body-ham} \cite{Delerue2004}.
Figure~\ref{fig:auto-polarization} shows the behavior of the lowest lying energy eigenvalues, for several orbital angular momentum quantum numbers, with and without the auto-polarization term. It is clear that, for $a/b<0.4$ and for all the angular momentum quantum numbers shown, the auto-polarization term changes the eigenvalues by approximately 10$\%$, showing that for large quantum dots the auto-polarization contribution cannot be ignored. As $a$ grows towards $b$ the effect becomes less and less important. Another way to characterize the phenomenology is the following: if the eigenvalues become independent of $\ell$ for a given radial quantum number $n$, {\em i.e.} the energy eigenvalues depend almost only on $n$, then the potential well has become ``small enough'' and the auto-polarization term becomes negligible. In this limit, the polarization term and the angular momentum part of the kinetic energy can be treated as perturbations. It is interesting to point out that, although the hole Hamiltonian is determined by different parameters than the electron one, the behavior of its spectrum is very similar; for that reason we do not include a detailed analysis of it. On the other hand, once the electron and hole approximate eigenfunctions are obtained, a plot of them reveals that both particles are well localized inside the potential well. Besides, the repetition of knots at the interfaces enables the approximate solutions to meet the matching conditions, Equation~\ref{eq:matching}, to a high degree of accuracy. \section{Two-particle model}\label{section:two-particle} The Hamiltonian for an exciton formed by an electron and a hole can be written as \begin{equation} \label{eq:hamiltoniano-exciton} \mathcal{H}_{ex}=\mathcal{H}_{e}+\mathcal{H}_{h}+V_c({\mathbf r}_e,{\mathbf r}_h) , \end{equation} where $\mathcal{H}_{e}$ and $\mathcal{H}_{h}$ are the one-body Hamiltonians of Equation~\ref{eq:one-body-ham}, and $V_c({\mathbf r}_e,{\mathbf r}_h)$ is the electrostatic potential between the electron and hole. Usually one is tempted to take $V_c$ as the usual Coulomb potential between a positive charge and a negative one but, if the exciton is confined to a heterostructure made of different materials, this approach oversimplifies the situation. With the above considerations in mind, we consider a better approximation to the actual electrostatic potential, suggested by Ferreyra and Proetto \cite{Ferreyra(1998)}. Since the hole is usually ``heavier'' than the electron, and since the scenario of most interest occurs when both particles are well localized, it is simpler to obtain $V_c$ as the solution to the Poisson equation with the hole and electron coordinates restricted to the potential well, {\em i.e.} \begin{equation} V_c({\mathbf r}_e,{\mathbf r}_h) = q_e q_h \, G({\mathbf r}_e,{\mathbf r}_h) , \end{equation} where \begin{equation}\label{eq:green} \nabla_r^2 G({\mathbf r},{\mathbf r}') = \left\{ \begin{array}{lcl} -\frac{4\pi}{\varepsilon_1} \delta({\mathbf r}-{\mathbf r}') & \quad & \mbox{if}\; a<r<b \\ 0 & & \mbox{otherwise} \end{array} \right. , \end{equation} \begin{figure}[floatfix] \begin{center} \includegraphics*[scale=0.6]{fig6.eps} \end{center} \caption{\label{fig:binding-energy} The binding energy expectation value as a function of the ratio $a/b$.
The curves are grouped in two bundles: the upper one was calculated including all the polarization effects originating from the dielectric constant mismatch, $\varepsilon_1\neq\varepsilon_2$, while the lower one ignores the mismatch and takes $\varepsilon_1 = \varepsilon_2$. In each bundle, the curve at the top (solid line) corresponds to the exciton ground state. As the binding energy is a decreasing function of the exciton eigen-energy, in each bundle the successively lower curves correspond to the first excited state, the second one, and so on. The sudden drop of the curves observed for large enough values of the ratio $a/b$ shows where each level reaches the ``ionization threshold''.} \end{figure} Solving Equation~\ref{eq:green}, it can be shown that the electrostatic potential that gives the interaction between the electron and the hole can be written as \begin{eqnarray} \label{eq:electrostatic-potential} V_c(\mathbf{r}_e,\mathbf{r}_h)&=&q_eq_h\sum_{l,m}Y_{lm}^*(\theta_h,\varphi_h)Y_{lm}(\theta_e,\varphi_e) \\ \nonumber & &\quad \times \frac{4\pi}{\varepsilon_1(2l+1)(1-pq)} \\ &&\quad \times [r_<^l+p\,r_<^{-(l+1)}][r_>^{-(l+1)}+qr_>^l] , \end{eqnarray} where \begin{equation}\label{eq:p-term} p=(\varepsilon_1-\varepsilon_2)la^{(2l+1)}/[\varepsilon_2 l+\varepsilon_1 (l+1)] , \end{equation} \begin{equation}\label{eq:q-term} q=(\varepsilon_1-\varepsilon_2)(l+1)b^{-(2l+1)}/[\varepsilon_1 l+\varepsilon_2 (l+1)] , \end{equation} and \begin{equation} r_{>(<)}=\max(\min) \{r_e, r_h\}. \end{equation} The exciton binding energy can be obtained as the expectation value of the electrostatic potential, \begin{equation} \label{eq:binding-energy} E_{binding}^{\alpha}= -\langle \psi_{\alpha}|V_c(\mathbf{r}_e,\mathbf{r}_h)|\psi_{\alpha}\rangle, \end{equation} where $|\psi_{\alpha}\rangle$ is an eigenstate of the exciton Hamiltonian, Equation~\ref{eq:hamiltoniano-exciton}. Figure~\ref{fig:binding-energy} shows the behavior of the binding energy as a function of the radii ratio $a/b$. For clarity, we restrict the curves shown to data obtained with eigenfunctions with orbital angular momentum quantum numbers $l_e=l_h=m_e=m_h=0$. The Figure shows two well separated sets of curves. The lower set corresponds to the binding energy calculated without polarization terms (or, equivalently, setting $\varepsilon_1=\varepsilon_2$ in Equations~\ref{eq:electrostatic-potential}, \ref{eq:p-term} and \ref{eq:q-term}). The upper set corresponds to the binding energy calculated considering all the polarization effects ($\varepsilon_1\neq \varepsilon_2$). The polarization terms, due to the polarization charges at the interfaces between the different materials, do not change the qualitative behavior of the binding energy but, at least for the parameters of Figure~\ref{fig:binding-energy}, not including them leads to an underestimation of the binding energy by approximately 100\%. The inclusion of the polarization terms in the electrostatic potential increases the binding energy since, for $\varepsilon_1>\varepsilon_2$, the correction terms in Equation~\ref{eq:electrostatic-potential}, with respect to the Coulomb potential, all have the same sign.
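For reference, once the sum over $m$ is carried out with the spherical-harmonic addition theorem, $\sum_m Y^*_{lm}(\theta_h,\varphi_h)Y_{lm}(\theta_e,\varphi_e)=\frac{2l+1}{4\pi}P_l(\cos\gamma)$, the multipole series in Equation~\ref{eq:electrostatic-potential} is cheap to evaluate numerically. The Python sketch below is only a direct transcription of the formula (charges set so that $q_e q_h=-1$, radii assumed to lie in the well $a<r<b$, and an illustrative truncation order $l_{max}$), not our production code.
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

def V_c(re, rh, cosg, a, b, eps1, eps2, lmax=60):
    # cosg is the cosine of the angle between r_e and r_h
    rlo, rhi = min(re, rh), max(re, rh)
    V = 0.0
    for l in range(lmax + 1):
        p = (eps1 - eps2) * l * a**(2*l + 1) / (eps2*l + eps1*(l + 1))
        q = (eps1 - eps2) * (l + 1) * b**(-(2*l + 1)) / (eps1*l + eps2*(l + 1))
        radial = (rlo**l + p * rlo**(-(l + 1))) * (rhi**(-(l + 1)) + q * rhi**l)
        V -= eval_legendre(l, cosg) * radial / (eps1 * (1.0 - p * q))
    return V
\end{verbatim}
The binding energy of Equation~\ref{eq:binding-energy} then follows by integrating this kernel against $|\psi_{\alpha}|^2$.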
\section{Separability of the exciton eigenfunctions}\label{section:separability} The availability of accurate numerical approximations to the actual exciton eigenfunctions makes it possible to analyze their separability. In particular, in this Section we analyze the behavior of the von Neumann entropy associated with the excitonic quantum state. We will show that some features of the binding energy already described in the preceding Section are best understood by studying both quantities simultaneously. In particular, the analysis of the separability of the quantum states can shed some light on when the interaction between the exciton components can be treated using perturbation theory. Besides, there are quantities that determine the strength of the interaction of the exciton with external fields, such as the dipole moment, or expectation values, that cannot be accurately obtained if the correlation between the electron and hole is not taken into account. A well-known separability measure of the quantum state is the von Neumann entropy, $S$, which can be calculated as \begin{equation}\label{eq:von-Neumann-def} S=-\sum_k \lambda_k \; \ln(\lambda_k) , \end{equation} where $\lambda_k$ are the eigenvalues of the electron reduced density matrix $\rho^{red}(\mathbf{r}_e,\mathbf{r}^{\prime}_e)$ which, for an exciton in a state described by the wave function $\psi(\mathbf{r}_e,\mathbf{r}_h)$, is given by \begin{equation}\label{eq:reduced-density} \rho^{red}(\mathbf{r}_e,\mathbf{r}_e^{\prime} ) = \int \psi^{\star}(\mathbf{r}_e,\mathbf{r}_h) \psi(\mathbf{r}_e',\mathbf{r}_h) \; d\mathbf{r}_h . \end{equation} \begin{figure}[floatfix] \begin{center} \includegraphics[scale=0.5]{fig7.eps} \includegraphics[scale=0.5]{fig8.eps} \end{center} \caption{\label{fig:von-Neumann-entropy} a) The von Neumann entropy for the first few eigenstates; the lowest curve corresponds to the ground state (label 0), the next one to the second excited eigenstate (label 2), and so on. The inset shows a detailed view of the ground state von Neumann entropy. b) The ground state binding energy calculated following different methods. From top to bottom, the curves were obtained with the full Hamiltonian and the B-spline method (solid line), without the auto-polarization terms ({\em i.e.} neglecting $V_s(r)$) and the B-spline method (dot-dashed line), again with the full Hamiltonian but using perturbation theory (dashed line), and finally without the auto-polarization terms and using perturbation theory (double-dot dashed line), respectively. For details see the text. } \end{figure} Figure~\ref{fig:von-Neumann-entropy}a shows the behavior of the von Neumann entropy as a function of the ratio $a/b$ for the approximate eigenfunctions of the first few low lying exciton eigenvalues calculated using the B-spline method. As can be appreciated from the figure, the von Neumann entropy is a monotonically decreasing function of the ratio $a/b$, {\em i.e.} the electron and hole become more and more independent (their wave function more separable) as the radius of the core is increased. This is expected but, even for the ground state, the effect of the non-separability of the excitonic wave function has a rather large influence on the binding energy, as can be appreciated in panel b). The von Neumann entropy, on the other hand, is larger for the excited exciton states so, at least in principle, any effect related to the non-separability of the exciton wave function should be stronger for the excited states.
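In practice, the variational eigenstates come out expanded in products of one-particle orbitals, $\psi=\sum_{ij}c_{ij}\,\phi^e_i\,\phi^h_j$. For orthonormal orbitals, the eigenvalues $\lambda_k$ entering Equation~\ref{eq:von-Neumann-def} are the squared singular values of the coefficient matrix $(c_{ij})$ (its Schmidt decomposition), so $S$ can be obtained without ever assembling $\rho^{red}$ on a grid. A minimal Python sketch of this step is:
\begin{verbatim}
import numpy as np

def von_neumann_entropy(C):
    # C[i, j]: coefficients of psi in an orthonormal product basis
    s = np.linalg.svd(C, compute_uv=False)
    lam = s**2
    lam = lam[lam > 1e-14]       # drop numerical zeros
    lam = lam / lam.sum()        # enforce normalization of the state
    return -float(np.sum(lam * np.log(lam)))

C = np.zeros((4, 4)); C[0, 0] = 1.0   # a separable (product) state
print(von_neumann_entropy(C))         # essentially zero: no entanglement
\end{verbatim}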
Figure~\ref{fig:von-Neumann-entropy}b shows the behavior of the ground state binding energy, again as a function of the ratio $a/b$, obtained using different methods. The top curve corresponds to the binding energy calculated with the B-spline approximation, while, in decreasing order, the figure also shows the curves corresponding to the binding energy calculated with the B-spline method without the auto-polarization terms, and to the binding energy calculated using perturbation theory with and without the auto-polarization terms. The binding energy curve obtained using perturbation theory shows the first-order approximation to the energy, calculated using the finite potential well electron and hole eigenfunctions as the unperturbed levels. At least for this set of parameters and materials, taking into account the auto-polarization terms has less influence than using a method (the B-splines) that takes into account the non-separability of the two-particle wave function. For the ground state, the worst case scenario, perturbation theory without auto-polarization, differs from the best one, B-splines plus auto-polarization terms, by about five percent. Of course, since self-assembled quantum dots offer a huge number of different combinations of materials, geometries and sizes, the quantitative results may change more or less broadly. Figure~\ref{fig:otro-dot} shows the binding energy obtained with the same methods used to obtain the data in Figure~\ref{fig:von-Neumann-entropy}b), as described above, but for an exciton in a different quantum dot. In this case we considered a quantum dot formed by a core/well/barrier structure made of ZnS/CdSe/ZnS. All the parameters needed to determine the Hamiltonian (effective masses, dielectric constants, etc.) can be found in Reference~\cite{Schooss(1994)}. \begin{figure} \begin{center} \includegraphics[scale=0.6]{fig9.eps} \end{center} \caption{\label{fig:otro-dot} The ground state exciton binding energy {\em vs} $a/b$ for a device with core, well and clad made of ZnS/CdSe/ZnS. The curves are obtained following the same prescription as those shown in Figure~\ref{fig:von-Neumann-entropy}. All the material parameters can be found in Reference~\cite{Chang(1998)}. It is easy to appreciate that for this device the influence of the auto-polarization terms is smaller than in the first device analyzed, but the difference between the values obtained using perturbation theory and the B-spline method is larger.} \end{figure} For the case shown in Figure~\ref{fig:otro-dot}, the influence of the auto-polarization terms is even smaller than in the first case analyzed, Figure~\ref{fig:von-Neumann-entropy}, and the difference between the best and worst scenarios defined above is around seven to eight percent. \section{Control}\label{section:control} There are two important applications of excitons in which the speed at which one can go from having to not having an exciton is crucial: controlled and rapid photon production, and switching between the basis states of the qubit logically associated with the exciton. The physical process is exactly the same, but the motivation and requirements depend on which application is being considered. In this section we are interested in estimating how hard we can drive an excitonic qubit so that it oscillates periodically between its basis states with the highest possible frequency and a small probability loss to other exciton eigenstates.
The need to estimate how fast the qubit can be switched between its basis states comes from the limits imposed by decoherence sources, which are unavoidable and seriously restrict the total operation time during which the qubit keeps its quantum coherence. Moreover, we intend to achieve the switching between the basis states with the simplest control pulse, that is, a sinusoidal one. It is often considered reasonable to ask that, if $T_S$ is the switching time and $\tau$ the time scale associated with the decoherence processes present in the physical system, then $T_S\sim 10^{-4} \tau$. The main source of decoherence in charge qubits is the interaction between the charge carrier, the electron(s) trapped in the quantum dot, and the thermal phonon bath present in the semiconductor matrix. This is the reason why, in many cases, spin qubits are preferred, even though their control is more complicated, since they allow for longer operation times. Since in the exciton case the strength of the coupling with the phonon bath depends on the difference between the electron and hole wave functions, it is to be expected that the decoherence rate for qubits based on one exciton will be smaller than for a charge qubit based on one (or more) electrons trapped in a quantum dot. Ideally, when the potential wells for the electron and the hole have exactly the same depth, for equal effective masses and in the limit of zero interaction, the coupling with the phonon bath almost disappears. In this sense, the separability of the electron-hole wave function provides a good measure for selecting the parameter region where the coupling of the exciton with the phonon bath is smaller. Consequently, since even at very low temperatures the decoherence produced by the phonon bath imposes a total operation time on the order of a few tens of nanoseconds for qubits based on multi-layered self-assembled quantum dots, to be considered a putatively useful qubit the switching time $T_S$ should be on the order of picoseconds or below. The leakage of probability to other exciton eigenstates when an external driving is applied can be analyzed using the following unitary one-exciton Hamiltonian, which describes the interaction of the electron-hole dipole moment, $\vec{d}$, with an external periodic field $\vec{E}(t)$ applied to the quantum dot \cite{Haug(2004)}, \begin{equation} \mathcal{H}_{int}(t)=-\vec{d}\cdot\vec{E}(t) , \end{equation} which in the second quantization formalism can be written as \cite{Haug(2004),Biolatti(2002)} \begin{equation} \label{interaccion} \mathcal{H}_{int}(t)=-E(t)\sum_{nm} \left[ \mu_{nm}^* a_{n}^{\dagger} b_{m}^{\dagger} + h.c\right] , \end{equation} where $a^{\dagger}_n$ is the creation operator of an electron in the conduction band, $b^{\dagger}_m$ is the creation operator of a hole in the valence band, $n$ and $m$ stand for the corresponding one-particle levels, and $\mu^{\star}_{nm}$ is the matrix element of the dipole moment operator, given by \begin{equation} \mu_{nm}=\mu_{bulk}\int \phi_{n}^e(\vec{r})\phi_{m}^h(\vec{r})d^3r, \end{equation} where $\mu_{bulk}$ is the dipole moment corresponding to an electronic transition from the valence band to the conduction band \cite{Haug(2004),Biolatti(2002)}. All the one- and two-particle quantities needed to determine the parameters in Equation~\ref{interaccion} and the time evolution of the exciton state can be obtained using the B-spline method described in the preceding Sections.
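A toy version of the resulting equations of motion already displays the mechanism studied below. The Python sketch that follows (units with $\hbar=1$; three levels with made-up energies and dipole matrix elements, not our B-spline values) integrates the coupled amplitude equations for a sinusoidal driving and monitors the population that escapes the qubit subspace.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

E = np.array([0.0, 1.0, 1.15])        # toy exciton eigenenergies
d = np.array([[0.0, 1.0, 0.2],        # toy dipole matrix elements
              [1.0, 0.0, 0.3],
              [0.2, 0.3, 0.0]])
E0, w = 0.02, 1.0                     # driving strength and frequency

def rhs(t, U):                        # i dU/dt = H(t) U
    H = np.diag(E) - E0 * np.sin(w * t) * d
    return -1j * (H @ U)

sol = solve_ivp(rhs, (0.0, 400.0), np.array([1, 0, 0], complex),
                rtol=1e-9, atol=1e-9)
pop_out = 1.0 - np.abs(sol.y[0])**2 - np.abs(sol.y[1])**2
print(pop_out.max())                  # small for weak, resonant driving
\end{verbatim}
Driving on resonance ($w$ equal to the $0\to1$ level spacing) produces the full Rabi oscillation between the two qubit states, while the off-resonant coupling to the third level only generates a small, bounded population loss.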
In the actual calculation, since the B-spline method provides a very good approximation to all the exciton bound states, the time evolution of the approximate exciton quantum state can be written as a sum over bound states, \begin{equation} \Psi(t) = \sum_i U_i(t) \psi_i , \end{equation} where the sum runs over the approximate bound states provided by the B-spline method, $\psi_i$, and the time-dependent coefficients $U_i(t)$ can be calculated by integrating a set of coupled complex ordinary differential equations. The number of ordinary differential equations is determined by how many bound states the B-spline method is able to find. In the cases analyzed from now on we considered up to thirty bound states. The numerical integration of the ordinary differential equations was performed using standard Runge-Kutta algorithms. Before analyzing the behavior of the exciton time evolution it is worth remarking that the model allows only one exciton, but the electron-hole pair can occupy many different exciton eigenstates and not only those associated with the qubit. Also, the model allows neither the ``ionization'' of the quantum dot nor the spontaneous recombination of the electron-hole pair on the time scale associated with the periodic external driving. As $|U_0|^2$ and $|U_1|^2$ are the probabilities that the exciton is in state $|0\rangle$ or in state $|1\rangle$, respectively, we use the {\em leakage}, $L$, to characterize the probability loss experienced by the qubit when the driving $E(t) = E_0 \sin(\omega t)$ is applied. The leakage is defined as \begin{equation} L = \lim_{n \rightarrow \infty} \frac{1}{nT} \int_{t^{\prime}}^{t^{\prime}+nT} \; \left( 1 - |U_0|^2 - |U_1|^2\right) \, dt \; , \end{equation} where $T = 2\pi/\omega$. \begin{figure} \begin{center} \includegraphics[scale=0.3]{fig10.eps} \end{center} \caption{\label{fig:time-evolution}(color on-line) The time evolution of the occupation probabilities $|U_0|^2$ (black solid line) and $|U_1|^2$ (red solid line). The exciton is initialized, in all cases, in the ground state, so $|U_0(t=0)|^2 = 1$ and $|U_i(t=0)|^2=0$, $\forall i \neq 0$. From top to bottom, panel a) shows the time evolution for $E_0 = 1\times 10^{-3}$ eV/nm, panel b) for $E_0 = 1\times 10^{-2}$ eV/nm and panel c) for $E_0 = 5\times 10^{-2}$ eV/nm, respectively. } \end{figure} From now on we consider a CdS/HgS/CdS structure with $a=b/2$. Figure~\ref{fig:time-evolution} shows the behavior of the occupation probabilities $|U_0|^2$ and $|U_1|^2$ as functions of time, for different external field strengths $E_0$. In the three cases shown, the frequency of the external driving is set equal to the resonance frequency of the exciton ground state, $\omega_{res} = (E_0 + E_{g1})/\hbar$, where here $E_0$ denotes the lowest eigenvalue calculated from the exciton Hamiltonian and $E_{g1}$ is the energy gap of material one, {\em i.e.} $E_{g1} = E^{gap}_{HgS}$. From the different panels in Figure~\ref{fig:time-evolution} it is possible to appreciate that switching times on the order of picoseconds or less are achievable for small driving strengths with no noticeable leakage. This scenario is further supported by the data shown in Figure~\ref{fig:leakage-vs-w}. \begin{figure} \begin{center} \includegraphics[scale=0.5]{fig11.eps} \includegraphics[scale=0.5]{fig12.eps} \end{center} \caption{\label{fig:leakage-vs-w}(color on-line) a) The {\em leakage} of probability {\em vs} the strength of the driving field.
The linear behavior of $L$ as a function of $E_0$ on the log-log scale can be clearly appreciated. From top to bottom, the Figure shows curves obtained for $\omega=0.9 \omega_{res}$ (green line and symbols), $\omega= \omega_{res}$ (black line and symbols) and $\omega=1.1 \omega_{res}$. The leakage for $\omega=0.9 \omega_{res}$ is smaller than for $\omega = \omega_{res}$ because the qubit does not leave the ground state. b) The leakage as a function of the external driving frequency for three different values of $E_0$. Here we use the same external driving strengths as in Figure~\ref{fig:time-evolution}. From bottom to top, $E_0 = 1\times 10^{-3}$ eV/nm (black line), $E_0 = 1\times 10^{-2}$ eV/nm (red line) and $E_0 = 5\times 10^{-2}$ eV/nm (green line).} \end{figure} Figure~\ref{fig:leakage-vs-w}a) shows the {\em leakage} as a function of the strength of the external driving $E_0$ for several driving frequencies $\omega$. The data are shown on a log-log scale, on which the leakage exhibits a linear behavior over a span of a few orders of magnitude. The different curves correspond to different values of the driving frequency; to analyze the dependence of the leakage on the driving frequency we choose to plot it at fixed values of the driving strength. Figure~\ref{fig:leakage-vs-w}b) shows the behavior of the leakage as a function of the external driving frequency for the three external driving strengths used in Figure~\ref{fig:time-evolution}. The different curves show a rich structure, with several spikes that are present in all the curves at the same frequencies. These spikes are due to the presence of many one-exciton levels that are very close to the ground state energy. It is clear that to obtain low levels of leakage, besides an excellent tuning of the driving frequency, it is necessary to have well resolved exciton energy levels or, in other words, the nano-structure should be designed in such a way that the exciton is formed in the non-perturbative regime, {\em i.e.} when the binding energy is as large as possible. This seems to advise the use of large potential wells, but there is a trade-off to consider, since the separation between the one-particle levels diminishes when the characteristic sizes of the potential well are increased. A simple way to enhance the electron-hole interaction, at least in principle, is to choose materials whose combination provides a deep potential well and a low potential-well dielectric constant. \section{Conclusions and Discussion} \label{section:conclusions} One of the advantages of using the B-spline method is its adaptability to problems with step-like parameters and matching conditions at the interfaces between different spatial regions. Similarly, the method allows one to tackle complicated one- or two-particle potentials with a limited number of adaptations. The interaction potential between the electron and the hole, Equation~\ref{eq:electrostatic-potential}, can be treated by modifying the method usually employed with the Coulomb potential \cite{deboor}, since both can be written as expansions in spherical harmonics. As the analysis of the von Neumann entropy shows, for large QDs (or small cores in our case) the perturbation theory calculations give a rather poor approximation to the binding energy.
Since our results are variational and predict a smaller ground state energy for the exciton Hamiltonian, Equation~\ref{eq:hamiltoniano-exciton}, than other methods, we can be fairly sure that they are more accurate than previous results. This implies that our results predict larger values for the exciton binding energy. To avoid an excessive leakage of probability, it is mandatory to design a quantum dot such that the two lowest states of the exciton are well separated from the other one-exciton states. Choosing materials that allow for a stronger interaction between the electron and the hole appears to be a natural way to achieve this. \acknowledgments We would like to acknowledge SECYT-UNC and CONICET for partial financial support of this project. We acknowledge fruitful discussions with Dr. C\'esar Proetto in the early stages of this work.
\section{Introduction} Consider a smooth dynamical system $(M,f)$, where $M$ is a compact smooth manifold and $f$ is a $C^2$ diffeomorphism of $M$. Among all $f$-invariant Borel probabilities, we are interested in finding measures that reflect the chaotic properties of $f$ from the viewpoint of entropies and Lyapunov exponents. In the 1970s, Sinai, Ruelle and Bowen \cite{Bow75, BoR75, Rue76, Sin72} managed to construct this kind of measure for hyperbolic systems. Generally, for an invariant measure $\mu$ of $f$, if $(f,\mu)$ has a positive Lyapunov exponent and the conditional measures of $\mu$ along (Pesin) unstable manifolds of $\mu$ are absolutely continuous with respect to the Lebesgue measures on these manifolds, then one says that $\mu$ is an \emph{SRB measure} (see for instance \cite{y02} for this definition). Ledrappier-Young \cite{ly} proved that this is equivalent to saying that $h_\mu(f)$ equals the integral of the sum of the positive Lyapunov exponents, i.e., $(f,\mu)$ satisfies the Pesin entropy formula. One can ask the following question (in the spirit of the philosophy of Palis \cite{pa}): how abundant are SRB measures for diffeomorphisms? In this paper, we will show the existence of SRB measures for systems with H\"older continuous invariant splittings and some weak hyperbolic properties. Let $K\subset M$ be an attractor, i.e., $K$ is a compact invariant set and $K=\bigcap_{n\ge 0} f^n(U)$ for some open neighborhood $U$ of $K$ such that $\overline{f(U)}\subset U$. Assume that $T_{\overline{U}}M=E\oplus F$ is a H\"{o}lder continuous $Df$-invariant splitting. The simplest case is when $U = M$. We say that a Borel set has \emph{total probability} if it has measure one for every $f$-invariant probability measure. \begin{theoremalph}\label{Theo-attractor} Under the above setting, if we have $$ {\rm Leb}\left(\left\{x\in U: \limsup_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^{n}\log\|Df^{-1}/F(f^{i}(x))\|<0\right\}\right)>0, $$ and there exists a subset $\Gamma\subset U$ with total probability such that for every point $x\in \Gamma$ one has $$ \liminf_{n\rightarrow \infty}\frac{1}{n}\log\|Df^n/E(x)\| \le 0, $$ then there is an SRB measure supported on $K$. \end{theoremalph} \begin{remark} We only need the bundle $F$ to be \emph{H\"{o}lder} continuous in the proof. \end{remark} By strengthening the condition on the $E$-direction, we have the next Corollary: \begin{corollary}\label{corollary-attractor} Under the assumptions of Theorem~\ref{Theo-attractor}, if we have $$ \liminf_{n\rightarrow \infty}\frac{1}{n}\log\|Df^n/E(x)\|< 0 $$ on a set of total probability, then the SRB measure $\mu$ we obtain is \emph{physical}, in the following sense: $${\rm Leb}\left(\left\{x:\frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^i(x)}\xrightarrow{weak \ast}\mu\right\}\right)>0.$$ \end{corollary} There are several previous related results; to the best of our knowledge, a partial list is the following. \begin{itemize} \item Alves, Bonatti and Viana in \cite{ABV00} proved the existence of SRB (\emph{physical}) measures for ``mostly expanding'' systems. Notice that all the splittings in \cite{ABV00} are \emph{dominated}. In contrast to \cite{ABV00}, the splitting in Theorem~\ref{Theo-attractor} is only H\"{o}lder continuous, a property that is implied by domination (a proof can be found in \cite[Theorem 3.7]{AP10}). \item In \cite{lep04}, together with \cite{AL13}, the authors considered systems where the uniform hyperbolicity decreases to zero as one approaches some invariant critical set.
They assume there exists a subset $\Lambda$ of points that exhibit non-zero Lyapunov exponents (which ensures that orbits do not stay too long in any fixed neighborhood of the critical set) and have local stable/unstable manifolds of uniform size. These facts imply the existence of a (countable) \emph{Markov partition} over $\Lambda$. Using the Markov partition, they proved that if there is an unstable manifold of some point in $\Lambda$ that intersects $\Lambda$ in a set of positive Lebesgue measure, then there exists some SRB measure. \item Climenhaga, Dolgopyat and Pesin in~\cite{cdp13} considered systems with a measurable splitting and measurable invariant cone fields. They proved that such a system has SRB measures if it satisfies a property called \emph{effective hyperbolicity} with respect to the measurable invariant cone fields. Unlike~\cite{cdp13}, the systems in this paper do not have invariant cone fields. \item One part of Theorem 1.2 of Liu and Lu in \cite{LiL15} proved the existence of SRB measures for attractors with a continuous invariant splitting into two bundles, where one bundle is uniformly expanding and the other has no positive Lyapunov exponents anywhere. \end{itemize} In this work we need to deal with splittings which are not dominated. The case of dominated splittings was studied in depth in \cite{ABV00}. Without domination we generally do not have invariant cones and, in contrast to \cite{ABV00}, we lose the estimates on the H\"older curvature of sub-manifolds and the distortion bounds. However, the non-uniform expansion along $F$ and the non-expansion along $E$ allow us to obtain some non-uniform domination on a set of positive Lebesgue measure. Then, by focusing on some special sets, we can recover the invariance of cones and the distortion bounds. By a more careful calculation, we can estimate the H\"older curvature at hyperbolic times on these special sets. We also have a version for sub-manifolds tangent to the $F$-bundle. Given a sub-manifold $D$, denote by ${\rm Leb}_D$ the induced normalized Lebesgue measure on $D$. \begin{theoremalph}\label{Thm;submanifold} Let $T_{\overline{U}}M=E\oplus F$ be a H\"{o}lder continuous $Df$-invariant splitting. If there is a $C^2$ local sub-manifold $D\subset U$, whose dimension is $\dim F$, such that $$ {\rm Leb}_D\left(\left\{x\in D: ~T_x D=F(x), ~\limsup_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^{n}\log\|Df^{-1}/F(f^{i}(x))\|<0\right\}\right)>0, $$ and there exists a subset $\Gamma\subset U$ with total probability such that for every point $x\in \Gamma$ one has $$ \liminf_{n\rightarrow \infty}\frac{1}{n}\log\|Df^n/E(x)\| \le 0, $$ then there is an SRB measure supported on $K$. \end{theoremalph} Recall that R. Leplaideur \cite{lep04} considered certain topologically hyperbolic diffeomorphisms. More precisely, \cite{lep04} discussed an open set $U$ containing a compact invariant set $\Omega$ with a H\"older continuous invariant splitting $E^{cs}\oplus E^{cu}$, together with continuous non-negative functions $k^s$ and $k^u$, such that \begin{itemize} \item $\|Df/E^{cs}(x)v\|\le {\rm e}^{-k^s(x)}\|v\|$, $\|Df/E^{cu}(x)v\|\ge {\rm e}^{k^u(x)}\|v\|$ for any $x\in U$ and any non-zero vector $v\in T_x M$ in the respective subspace; \item $k^s(x)=0$ if and only if $k^u(x)=0$; moreover, the set of all points with the above property is invariant. \end{itemize} Leplaideur proved that if some point has a large unstable manifold satisfying good estimates, then $f$ admits a finite or $\sigma$-finite SRB measure.
In particular, \cite{lep04} reduced the initial problem to \cite[Lemma 3.8]{lep04}, which asserts that there is an unstable manifold $D$ such that $$ {\rm Leb}_D\left(\left\{x\in D: ~\limsup_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^{n}\log\|Df^{-1}/F(f^{i}(x))\|<0\right\}\right)>0. $$ Our Theorem~\ref{Thm;submanifold} can be applied to the main theorem of \cite{lep04} to show that a finite SRB measure really exists. Notice that this was already obtained in a recent paper of Alves-Leplaideur \cite{AL13}. However, the method here is different from that of \cite{AL13}: we do not need to estimate the unstable manifold in advance and we do not construct Markov partitions. This paper is organized as follows. In Section 2 we study the dynamics arising from continuous invariant splittings, mainly some geometric properties of the iterated disks around the special points forming the set denoted by $\Lambda_{\lambda,1}$, including the angles between these disks and the $F$-bundle, the backward contraction property at hyperbolic times, and the bounded distortion property; one should notice that this is the only place where the \emph{H\"{o}lder} assumption on the $F$-bundle is used. Section 3 is dedicated to the proof of Theorem \ref{Theo-attractor}, in which we select a disk tangent to the $F$-direction cone field so that the properties obtained in Section 2 can be applied. We then consider the iterates of the Lebesgue measure of the disk under the dynamics $f$ and find, among the accumulation points of these measures, some ergodic measure satisfying Theorem \ref{Theo-attractor}. After that we give the proof of Corollary \ref{corollary-attractor} as a simple application. Finally, a short proof of Theorem \ref{Thm;submanifold} is presented, using the main approach built in the previous sections with some modifications. \textbf{Acknowledgement} We would like to thank Prof. J. Xia for useful discussions and suggestions. Z. Mi would like to thank Northwestern University, where this work was partially done, for its hospitality and the excellent research atmosphere. Z. Mi would also like to thank the China Scholarship Council (CSC) for financial support. \section{The dynamics from continuous invariant splittings} Assume that $f$ is a $C^2$ diffeomorphism on a compact Riemannian manifold $M$, and $K$ is an attractor as introduced in Section 1. Throughout this section, let $T_{\overline{U}}M=E\oplus F$ be a $Df$-invariant continuous splitting unless otherwise noted. \subsection{Pliss Lemma and its applications} The next classical Pliss lemma is very useful for obtaining hyperbolic times. A proof can be found, for instance, in \cite[Lemma 3.1]{ABV00}. \begin{lemma}\label{Lem:numberPliss}(\textit{Pliss lemma}) For numbers $C_0\geq C_1>C_2\geq0$, there is $\theta=\theta(C_0,C_1,C_2)>0$ such that for any integer $N$ and any numbers $b_1, b_2, \cdots ,b_N\in{\mathbb R}$ with $$ \sum_{j=1}^N b_j\geq C_1N, ~~~b_j\leq C_0,~~~\forall~ 1\le j\le N, $$ there are an integer $\ell>\theta N$ and a subsequence $1\le n_1<\cdots <n_{\ell} \leq N$ such that $$ \sum_{j=n+1}^{n_i} b_j\geq C_2(n_i-n) ~~~\text{for every} ~~0 \leq n<n_i ~~\text{and} ~~i=1, \cdots ,\ell. $$ \end{lemma} We will use Lemma~\ref{Lem:numberPliss} to derive some results for diffeomorphisms. \begin{definition}(\textit{Hyperbolic time}) Given $\sigma <1$ and $x\in \overline{U} $, if $$ \prod_{j=n-k+1}^n \|Df^{-1}/F(f^j (x))\| \leq \sigma^k,~~~\text{ for all}~~1 \leq k \leq n, $$ then we say that $n$ is a $\sigma$-hyperbolic time for $x$.
We will use Lemma~\ref{Lem:numberPliss} to obtain hyperbolic times for diffeomorphisms.
\begin{definition}(\textit{Hyperbolic time}) Given $\sigma <1$ and $x\in \overline{U}$, if
$$
\prod_{j=n-k+1}^n \|Df^{-1}/F(f^j (x))\| \leq \sigma^k,~~~\text{ for all}~~1 \leq k \leq n,
$$
then we say that $n$ is a \emph{$\sigma$-hyperbolic time} for $x$.
\end{definition}
\begin{lemma}\label{Lem:hyperbolitimefordiff} Given $0<\sigma_1<\sigma_2<1$, there is $\theta=\theta(\sigma_1,\sigma_2,f)>0$ such that for any $x$ and any $N\in{\mathbb N}$, if
$$\prod_{j=1}^N\|Df^{-1}/F(f^j(x))\|\le \sigma_1^N,$$
then there are $1\le n_1<n_2<\cdots<n_\ell\le N$, with $\ell>\theta N$, such that $n_i$ is a $\sigma_2$-hyperbolic time for $x$, $1\le i\le \ell$.
\end{lemma}
\begin{proof}
This is an application of the Pliss lemma (Lemma~\ref{Lem:numberPliss}), taking
$$b_j=-\log\|Df^{-1}/F(f^j(x))\|.$$
More precisely, by assumption we have
$$
\sum_{j=1}^N \log \|Df^{-1}/F(f^j(x))\|\le N\log \sigma_1,
$$
so
$$
\sum_{j=1}^N b_j\geq(-\log \sigma_1)N.
$$
Now take $C_1=-\log \sigma_1$, $C_2=-\log \sigma_2$ and $C_0=\sup \big|\log \|Df^{-1}/F\|\big|$. Then $b_j\le C_0$ for every $j$, and since $\max_j b_j\ge C_1$ we have $C_0\geq C_1>C_2\geq0$. Thus Lemma~\ref{Lem:numberPliss} implies that there are $1\le n_1<n_2<\cdots<n_\ell\le N$ with $\ell>\theta N$ such that for every $n_i$ we have
$$
\sum_{j=n+1}^{n_i} b_j\geq(-\log \sigma_2)(n_i-n) ~~~\text{for every}~~ 0\le n < n_i;
$$
in other words,
$$
\prod_{j=n_i-k+1}^{n_i} \|Df^{-1}/F(f^j (x))\| \leq \sigma_2^k ~~~\text{for every}~~1 \leq k \leq n_i,
$$
that is, $n_i$ is a $\sigma_2$-hyperbolic time for $x$. This completes the proof.
\end{proof}
We also need the following lemma of Pliss type, which concerns all sufficiently large times.
\begin{lemma}\label{Lem:infinitePliss} Let $a_1, a_2, \cdots$ be a sequence of real numbers and let $N\in \mathbb{N}$. If
$$
\sum_{i=1}^{n}a_i \geq 0~~~ \text{for every }~n\ge N,
$$
then there exists $1\leq k \le N$ such that
$$
\sum_{i=k}^n a_i \ge 0~~~\text{for every }~n\ge k.
$$
\end{lemma}
\begin{proof}
Denote $S(n)=\sum_{i=1}^n a_i$ for every $n\geq 1$, with the convention $S(0)=0$. By hypothesis, $S(n)\geq 0$ for every $n\ge N$. Choose $0 \le \ell \leq N$ such that $S(\ell)$ is the smallest among the numbers $S(0),\dots,S(N)$, that is,
$$
S(\ell)=\min\{S(n): 0 \le n \le N\}.
$$
We may moreover take $0\le \ell \le N-1$: if the minimum were attained only at $N$, then $S(N)<S(0)=0$, contradicting $S(N)\ge 0$. Then $S(\ell) \le S(n)$ for every $\ell < n \le N$, and $S(\ell) \leq 0$, which implies $S(n)\ge 0\ge S(\ell)$ for every $n > N$. Together, we obtain $S(n) \ge S(\ell)$ for all $n > \ell$. Now take $k=\ell +1$; then $S(n) \ge S(k-1)$ for every $n\geq k$, and hence
$$
\sum_{i=k}^n a_i =S(n)-S(k-1)\geq 0 ~~\text{for every}~ n \ge k.
$$
\end{proof}
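Again for illustration only: with $N=2$ and $(a_1,a_2,a_3,\dots)=(-1,1,0,0,\dots)$ one has $S(1)=-1$ and $S(n)=0$ for every $n\ge 2$, so the hypothesis of Lemma~\ref{Lem:infinitePliss} holds. The minimum of $S(0),S(1),S(2)$ is attained at $\ell=1$, and indeed $k=\ell+1=2$ works: $\sum_{i=2}^n a_i=S(n)-S(1)=1\ge 0$ for every $n\ge 2$.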
Given $\lambda\in(0,1)$ and $N\in \mathbb{N}$, define
$$
\Lambda_\lambda=\left\{x\in U:~\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\log\|Df^{-1}/F(f^i(x))\|\leq 2\log\lambda\right\};
$$
$$
\Lambda_{\lambda, N }=\left\{x\in \Lambda_\lambda: \frac{1}{n}\sum_{i=1}^{n}\log\|Df^{-1}/F(f^{i}(x))\|\leq\log \lambda<0,~~~ \forall n\geq N \right\}.
$$
As an application of Lemma \ref{Lem:infinitePliss}, we have the next proposition, which asserts that the positive-volume set in the assumption of Theorem \ref{Theo-attractor} can be reduced to a (non-invariant) set $\Lambda_{\lambda,1}$, also of positive volume; the set $\Lambda_{\lambda,1}$ is easier to manipulate.
\begin{proposition}\label{Pro:measureLebesgue} Let
$$
\Lambda=\left\{x\in U: \limsup_{n\rightarrow \infty}\frac{1}{n}\sum_{i=1}^{n}\log\|Df^{-1}/F(f^{i}(x))\|<0\right\}.
$$
If ${\rm Leb}(\Lambda)>0$, then there exists a constant $\lambda\in(0,1)$ such that ${\rm Leb}(\Lambda_{\lambda, 1})>0$.
\end{proposition}
\begin{proof}
Let
$$\Lambda(k)=\left\{x\in U:~\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\log\|Df^{-1}/F(f^i(x))\|\leq-\frac{1}{k}\right\};$$
then $\Lambda=\bigcup_{k=1}^{\infty}\Lambda(k)$ by the definition of $\Lambda$. Together with ${\rm Leb}(\Lambda)>0$, this gives some $k$ with ${\rm Leb}(\Lambda(k))>0$. Taking $\lambda={\rm e}^{-1/(2k)}\in(0,1)$, so that $2\log\lambda=-1/k$, we get $\Lambda(k)\subset\Lambda_\lambda$ and hence ${\rm Leb} (\Lambda_{\lambda})>0$. Notice that $\Lambda_{\lambda} =\bigcup_{N=1}^{\infty}\Lambda_{\lambda, N }$ and $\Lambda_{\lambda, N }\subset \Lambda_{\lambda, N+1 }$ for all $N \in \mathbb{N}$, so there exists an $N\in\mathbb{N}$ such that ${\rm Leb}(\Lambda_{\lambda, N })>0$. For any $x \in \Lambda_{\lambda, N}$,
$$
\sum_{i=1}^n\left(-\log \|Df^{-1}/F(f^{i}(x))\|-\log \lambda^{-1}\right)\ge 0,~~~\forall n\ge N.
$$
Set $a_i=-\log \|Df^{-1}/F(f^{i}(x))\|-\log \lambda^{-1}$; then
$$
\sum_{i=1}^n a_i \ge 0, ~~~\forall n\ge N.
$$
By applying Lemma \ref{Lem:infinitePliss}, there exists some $1 \le k \le N$ such that
$$
\sum_{i=k}^n a_i \ge 0, ~~~\forall n\ge k.
$$
Thus,
$$
\sum_{i=k}^n\left(-\log \|Df^{-1}/F(f^{i}(x))\|-\log \lambda^{-1}\right)\ge 0,~~~\forall n\ge k,
$$
and so, by the definition of $\Lambda_{\lambda,1}$, we have $f^{k-1}(x)\in \Lambda_{\lambda,1}$. Now we partition $\Lambda_{\lambda,N}$: let
$$\Lambda_{\lambda,N,j}=\left\{x\in \Lambda_{\lambda,N}:\;f^j(x)\in\Lambda_{\lambda,1}\right\};$$
then $\Lambda_{\lambda,N} = \bigcup_{j=0}^{N-1} \Lambda_{\lambda,N,j}$. Since ${\rm Leb }(\Lambda_{\lambda,N})>0$, we have ${\rm Leb}(\Lambda_{\lambda, N,j})>0$ for some $0 \leq j\leq N-1$. The fact that $f^j(\Lambda_{\lambda,N,j})\subset \Lambda_{\lambda,1}$ and that $f^j$ is a diffeomorphism implies ${\rm Leb}(\Lambda_{\lambda,1})>0$.
\end{proof}
\begin{proposition}\label{pro:hyperbolictimeforgood} Given $0<\sigma_1<\sigma_2<1$, there is $\theta=\theta(\sigma_1,\sigma_2,f)\in(0,1)$ such that for every $x\in\Lambda_{\sigma_1,1}$ and any $N\in \mathbb N$, there are $1\le n_1<n_2<\cdots<n_\ell\le N$, with $\ell>\theta N$, such that $n_i$ is a $\sigma_2$-hyperbolic time for $x$, $i=1,\cdots ,\ell$.
\end{proposition}
\begin{proof}
For every $x\in \Lambda_{\sigma_1,1}$, by definition we have
$$
\frac{1}{n}\sum_{i=1}^{n}\log\|Df^{-1}/F(f^{i}(x))\|\leq\log \sigma_1<0,~~~ \forall n\geq 1;
$$
equivalently,
$$
\prod_{i=1}^N\|Df^{-1}/F(f^i(x))\|\le \sigma_1^N,~~~\text{for any }~~N\in{\mathbb N}.
$$
Now Lemma~\ref{Lem:hyperbolitimefordiff} ends the proof.
\end{proof}
Propositions~\ref{Pro:measureLebesgue} and~\ref{pro:hyperbolictimeforgood} tell us that, under the setting of Theorem~\ref{Theo-attractor}, there exists a set of positive Lebesgue measure all of whose points have infinitely many hyperbolic times, and these hyperbolic times have uniformly positive density.

\subsection{Adjusting constants}
We have the following theorem, which asserts that if some iterate of a diffeomorphism $f$ has an SRB measure, then so does $f$ itself. The proof is standard, hence omitted.
\begin{theorem}\label{thm:renormalized} For a given $N\in \mathbb{N}$, if $\mu$ is an SRB measure for $f^N$, then there exists an SRB measure $\hat{\mu}$ for $f$; more precisely, one can take
$$\hat{\mu}=\frac{1}{N}\sum_{i=0}^{N-1}f_\ast^i\mu.$$
\end{theorem}
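Although we omit the proof, the $f$-invariance of $\hat{\mu}$ is worth recording, being a one-line telescoping computation:
$$
f_\ast\hat{\mu}=\frac{1}{N}\sum_{i=1}^{N}f_\ast^i\mu=\hat{\mu}+\frac{1}{N}\left(f_\ast^N\mu-\mu\right)=\hat{\mu},
$$
since $\mu$ is $f^N$-invariant. The SRB property of $\hat\mu$ then follows because each $f_\ast^i\mu$ inherits absolutely continuous conditional measures along unstable manifolds, $f$ being a diffeomorphism.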
By Theorem~\ref{thm:renormalized}, for the existence of SRB measures it suffices to consider $f^N$ for some integer $N$. We also need the following proposition, whose proof is similar to~\cite{cao03} and is therefore omitted.
\begin{proposition}\label{weak contracting} Let $\Lambda$ be a compact positively invariant set and $E\subset T_\Lambda M$ a continuous $Df$-invariant bundle. If there is a set $\Lambda'\subset \Lambda$ with total probability such that for any $x\in\Lambda'$ one has
$$
\liminf_{n\rightarrow \infty}\frac{1}{n}\log\|Df^n/E(x)\| \le 0,
$$
then for every $\varepsilon>0$ there exists $N:=N(\varepsilon)\in \mathbb{N}$ such that
$$
\|Df^n/E(x)\| < {\rm e}^{n\varepsilon}
$$
for any $n \ge N$ and $x \in \Lambda$.
\end{proposition}
Thus, under the assumptions of the main theorems, by passing to a large iterate of $f$ we may impose the following standing assumptions on $f$:
\begin{enumerate}
\item[H:] there are constants $\varepsilon_0>0$, $\xi, \lambda_1, \lambda_2, \lambda_3 \in (0,1)$ such that
\begin{itemize}
\item $\|Df/E(x)\|<{\rm e}^{{\varepsilon}_0}$ for every $x\in \overline{U}$;
\item $0<\lambda_1<\lambda_1{\rm e}^{\varepsilon_0}<\lambda_2<\lambda_3={\lambda_2{\rm e}^{\varepsilon_0}}/{b^{\xi}}<1$, where\footnote{If $V_1,V_2$ are two $d$-dimensional linear spaces and $A:V_1\rightarrow V_2$ is a linear map, we define the \emph{mininorm}
$$
m(A)=\inf_{v\neq 0}\frac{\|Av\|}{\|v\|}.
$$
If the linear map $A$ is invertible, then $m(A)=\|A^{-1}\|^{-1}$.} $b=\inf_{x\in \overline{U}} m(Df/F(x)) >0$;
\item ${\rm Leb}(\Lambda_{\lambda_1,1})>0$.
\end{itemize}
\end{enumerate}
For every $x\in \Lambda_{\lambda_1,1}$, Proposition~\ref{pro:hyperbolictimeforgood} gives infinitely many $\lambda_2$-hyperbolic times for $x$. Let $n$ be a $\lambda_2$-hyperbolic time for $x$. By the definition of hyperbolic time and the standing assumption, we have
$$
\prod_{j=n-k}^{n-1}\frac{\|Df/E(f^j(x))\|}{m(Df/F(f^j(x)))} \le ({\rm e}^{\varepsilon_0}\lambda_2)^k, ~~\text{for every}~~ 1 \le k \le n,
$$
and furthermore, since $m(Df/F)\ge b$,
$$
\prod_{j=n-k}^{n-1}\frac{\|Df/E(f^j(x))\|}{m\left(Df/F(f^j(x))\right)^{1+\xi}} \le \left (\frac{{\rm e}^{\varepsilon_0}\lambda_2}{b^{\xi}}\right)^k={\lambda_3}^k,~~\text{for every}~~ 1 \le k \le n.
$$

\subsection{Sub-manifolds tangent to cone field and their iterations}
Denote by $B_r(x)=\{y\in M: d(x,y)\leq r\}$ the closed ball of radius $r$ around $x$. By the Whitney embedding theorem we may assume that $M$ is an embedded manifold in $\mathbb{R}^N$ for $N$ large enough. For a subspace $A\subset \mathbb{R}^N$ and a vector $v\in \mathbb{R}^N$, write $dist(v,A)=\min_{w\in A}\|v-w\|$, the distance from $v$ to its orthogonal projection onto $A$.
If $A,B$ are two subspaces of $\mathbb{R}^N$, define the distance between them (see \cite[Chapter 2.3]{bp02} and \cite{byu87}) by
$$
dist(A,B)=\max\left\{\max_{u\in A,\|u\|=1}dist(u,B),\max_{v\in B,\|v\|=1}dist(v,A)\right\};
$$
in particular, if the subspaces $A$ and $B$ have the same dimension, then
$$
\max_{u\in A,\|u\|=1}dist(u,B)=\max_{v\in B,\|v\|=1}dist(v,A).
$$
\begin{definition}(\textit{Cone field}) Let $0<a<1$. Define the $F$-direction cone field $\mathcal{C}_a^F=\left(\mathcal{C}_a^F(x)\right)_{x\in U}$ of width $a$ by
$$
\mathcal{C}_a^F(x)=\Big\{v=v_E+v_F\in E(x)\oplus F(x):\; \|v_E\|\leq a\|v_F\|\Big\}.
$$
One defines the $E$-direction cone field $\mathcal{C}_a^E$ of width $a$ in a similar way.
\end{definition}
For an embedded sub-manifold $D$, we say that it is \emph{tangent to ${\cal C}_a^F$} if $T_x D\subset {\cal C}_a^F(x)$ for any $x\in D$. If the splitting is \emph{dominated}, as in \cite{ABV00}, then the $F$-direction cone field is invariant under $Df$; here, however, the splitting is only \emph{continuous}. In~\cite{ABV00}, all systems are assumed to have a dominated splitting, so the invariance of the cones yields several nice geometric properties of the iterates of an embedded sub-manifold tangent to the $F$-direction cone field of small fixed width: the images of such sub-manifolds remain tangent to the $F$-direction cone field of the same width, and the angles between the bundle $F$ and the tangent spaces of the iterated sub-manifolds decrease as the number of iterations grows. In our setting, due to the lack of domination, the cone fields are not invariant; consequently, one cannot iterate an arbitrary sub-manifold tangent to the $F$-direction cone field and retain the nice geometric properties of the dominated case. However, it is enough for us to iterate sub-manifolds in neighborhoods of the particular points of $\Lambda_{\lambda_1,1}$. For this reason, we study domination in a local sense. More precisely, we consider average dominated orbit segments (Definition \ref{Def;average dominated} below) and establish the invariance of the cones in a weak sense: for any disk through the starting point of an average dominated orbit segment which is tangent to the $F$-direction cone field, the iterates of the disk remain tangent to the cone field as long as they stay within some uniformly small radius of the average dominated orbit. In fact, analogously to the dominated case, the angles between the $F$-bundle and the iterated disks decrease exponentially. Furthermore, with the help of hyperbolic times we show that the iterated disks are backward contracted at an exponential rate at hyperbolic times.
\begin{definition}\label{Def;average dominated} An orbit segment $\left(x,f^{n}(x)\right)$ is called \emph{$\gamma$-average dominated} if for any $1\le i\le n$ we have
$$\prod_{j=0}^{i-1}\frac{\|Df/E(f^{j}(x))\|}{m\left(Df/F(f^{j}(x))\right)}\le \gamma^{i}.$$
\end{definition}
By the standing assumption, we have
\begin{lemma}\label{fromweakcontract} For any point $x\in \Lambda_{\lambda_1,1}$, the orbit segment $\left(x, f^{n}(x)\right)$ is $\lambda_1{\rm e}^{\varepsilon_0}$-average dominated for any $n\in \mathbb{N}$.
\end{lemma}
The proof is a direct application of the definitions, as the short computation below shows.
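Indeed (a one-line sketch, using only the standing assumption $({\rm H})$ and the definition of $\Lambda_{\lambda_1,1}$): since $m(Df/F(f^{j}(x)))^{-1}=\|Df^{-1}/F(f^{j+1}(x))\|$, for every $1\le i\le n$ we have
$$
\prod_{j=0}^{i-1}\frac{\|Df/E(f^{j}(x))\|}{m\left(Df/F(f^{j}(x))\right)}
\le {\rm e}^{i\varepsilon_0}\prod_{j=1}^{i}\|Df^{-1}/F(f^{j}(x))\|
\le \left(\lambda_1{\rm e}^{\varepsilon_0}\right)^{i}.
$$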
Similar to the case of dominated splittings, we have the following two lemmas for average dominated orbit segments.
\begin{lemma}\label{Lem:bowen-ball-dominated} For any $0<\gamma_1<\gamma_2<1$, there is $r=r(\gamma_1,\gamma_2)>0$ such that for any $x$, if $\left(x,f^{n}(x)\right)$ is $\gamma_1$-average dominated, then for any $y\in U$ satisfying $d\left(f^{j}(x),f^{j}(y)\right)\le r$ for all $0\le j\le n-1$, the orbit segment $\left(y,f^{n}(y)\right)$ is $\gamma_2$-average dominated.
\end{lemma}
\begin{proof}
For any constants $0<\gamma_1<\gamma_2<1$, by the uniform continuity of $Df$ and of the bundles, there exists $r=r(\gamma_1,\gamma_2)>0$ such that
$$
\sqrt{\gamma_1/\gamma_2}\leq \frac{\|Df/E(x)\|}{\|Df/E(y)\|}\le \sqrt{\gamma_2/\gamma_1}
$$
and
$$
\sqrt{\gamma_1/\gamma_2}\leq \frac{m(Df/F(x))}{m(Df/F(y))}\le \sqrt{\gamma_2/\gamma_1},
$$
whenever $d(x,y)\le r$. Then, by hypothesis, we obtain
\begin{eqnarray*}
\prod_{j=0}^{i-1}\frac{\|Df/E(f^{j}(y))\|}{m(Df/F(f^{j}(y)))}
&\le& \prod_{j=0}^{i-1}\frac{\sqrt{\gamma_2/\gamma_1}\|Df/E(f^{j}(x))\|}{\sqrt{\gamma_1/\gamma_2}m(Df/F(f^{j}(x)))}\\
&\le& \gamma_2^{i}
\end{eqnarray*}
for any $1\le i \le n$. That is to say, $\left(y,f^{n}(y)\right)$ is $\gamma_2$-average dominated.
\end{proof}
\begin{lemma}\label{Lem:cone-inside} For any $\lambda\in(0,1)$ and $a\in(0,1)$, if $\left(x,f^{n}(x)\right)$ is $\lambda$-average dominated, then $Df^{i}(x){\cal C}_a^F(x)\subset {\cal C}_{\lambda^{i} a}^F\left(f^{i}(x)\right)$ for every $1\le i \le n$.
\end{lemma}
\begin{proof}
Write $v_0=v_E+v_F\in {\cal C}_a^F(x)$, where $v_E\in E(x)$, $v_F\in F(x)$ and $\|v_E\|/\|v_F\|\le a$. Since the orbit segment $\left(x,f^{n}(x)\right)$ is $\lambda$-average dominated,
\begin{eqnarray*}
\frac{\|Df^{i}(x)(v_E)\|}{\|Df^{i}(x)(v_F)\|}
&\leq& \prod_{j=0}^{i-1}\frac{\|Df/E(f^{j}(x))\|}{m(Df/F(f^{j}(x)))}\cdot\frac{\|v_E\|}{\|v_F\|}\\
&\le & \lambda^i\frac{\|v_E\|}{\|v_F\|}\\
&\le & \lambda^i a,
\end{eqnarray*}
for any $1\le i\le n$. By invariance, $Df^{i}(x)(v_E)\in E(f^{i}(x))$ and $Df^{i}(x)(v_F)\in F(f^{i}(x))$, so the above inequality means that $Df^{i}(x){\cal C}_a^F(x)\subset {\cal C}_{\lambda^{i} a}^F\left(f^{i}(x)\right)$ for any $1\le i \le n$.
\end{proof}
As a consequence of Lemma~\ref{Lem:bowen-ball-dominated} and Lemma~\ref{Lem:cone-inside}, for sub-manifolds tangent to the $F$-direction cone field we have the following fact:
\begin{lemma}\label{Lem:submanifold-cone} Given $\lambda\in(0,1)$, there exists $r>0$ such that for any $a\in (0,1)$, if $\left(x,f^{n}(x)\right)$ is $\lambda$-average dominated, then for any sub-manifold $D\ni x$ tangent to ${\cal C}_a^F$ with $d_{f^{i}D}\left(f^{i}(x),\partial (f^{i}(D))\right)\le r$ for all $0\le i\le n-1$:
\begin{itemize}
\item $f^{j}(D)$ is tangent to ${\mathcal C}_{\lambda^{j/2} a}^F$ for any $1\le j\le n$;
\item $dist\left(F(f^{j}(y)),T_{f^{j}(y)}f^{j}(D)\right)\le \lambda^{j/2} a$ for every $y\in D$ and $1\le j\le n$.
\end{itemize}
\end{lemma}
\begin{proof}
We take $r=r(\lambda,\lambda^{1/2})$ as in Lemma~\ref{Lem:bowen-ball-dominated}. Let $y\in D$ and $v_0\in T_y D$, and write $v_0=v_E+v_F$, where $v_E\in E(y)$ and $v_F\in F(y)$ satisfy $\|v_E\|/\|v_F\|\le a$. Since the orbit segment $\left(y,f^{n}(y)\right)$ is $\lambda^{1/2}$-average dominated by Lemma~\ref{Lem:bowen-ball-dominated}, the first statement follows by applying Lemma~\ref{Lem:cone-inside} directly. Now we prove the second statement.
Since $v_j=Df^{j}(y)(v_0)=Df^{j}(y)(v_E)+Df^{j}(y)(v_F)$ for every $1\le j\le n$, by the definition of $dist$ above we have
\begin{eqnarray*}
dist\left(\frac{Df^{j}(y)(v_F)}{\|Df^{j}(y)(v_F)\|},T_{f^j(y)}f^{j}(D)\right)
&\leq & \left\|\frac{Df^{j}(y)(v_F)}{\|Df^{j}(y)(v_F)\|}-\frac{v_j}{\|Df^{j}(y)(v_F)\|}\right\| \\
& = &\frac{\|Df^{j}(y)(v_F)-v_j\|}{\|Df^{j}(y)(v_F)\|}\\
& = &\frac{\|Df^{j}(y)(v_E)\|}{\|Df^{j}(y)(v_F)\|}\\
& \leq &\lambda^{j/2} \cdot a.
\end{eqnarray*}
Since $v_0$ is arbitrary, we conclude
$$
dist \left(F(f^{j}(y)),T_{f^{j}(y)}f^{j}(D) \right)=\max_{w\in F(f^{j}(y)), \|w\|=1}dist\left(w, T_{f^{j}(y)}f^{j}(D)\right)\leq \lambda^{j/2} \cdot a.
$$
\end{proof}
\begin{lemma}\label{Lemma:hyperbolictangentcone} Given $0 <\gamma_1<\gamma_2<1$, there are $r_0>0$ and $a_0>0$ such that for any $r\in(0,r_0]$ and $a\in (0,a_0]$, if $\left(x,f^{n}(x)\right)$ is $\gamma_1$-average dominated and $n$ is a $\gamma_2$-hyperbolic time for $x$, then for any embedded sub-manifold $\widetilde{D}$ containing $x$ with radius larger than $r$ around $x$, there is a simply connected sub-manifold $D \subset \widetilde{D}$ containing $x$ in its interior such that
\begin{itemize}
\item $f^{k}(D)\subset B_r\left(f^{k}(x)\right)$ for every $0 \le k \le n$;
\item $f^{n}(D)$ is a disk of radius $r$ around $f^{n}(x)$;
\item $d_{f^{n-k}D}\left(f^{n-k}(x), f^{n-k}(y)\right)\le (\gamma_2)^{k/2}d_{f^{n}D}\left(f^{n}(x),f^{n}(y)\right)$ for every $0\le k\le n$ and any point $y \in D$.
\end{itemize}
\end{lemma}
\begin{proof}
By the uniform continuity of $Df$ and of the bundles, there exist constants $r_0>0$ and $a_0>0$ such that
$$
\frac{m(Df/{\widetilde F}(y))}{m(Df/F(x))} \geq \sqrt{\gamma_2} \quad \eqno(1)
$$
whenever $d(x,y)\leq r_0$ and $dist(\widetilde F(y),F(y))\le a_0$, and moreover, by Lemma~\ref{Lem:bowen-ball-dominated} (shrinking $r_0$ if necessary), for every $n$ the orbit segment $\left(y,f^n(y)\right)$ is $\gamma_2$-average dominated whenever $d(f^i(x),f^i(y))\le r_0$ for all $0\le i \le n-1$. Fix $r\in (0,r_0]$ and $a\in (0,a_0]$, and let $\widetilde{D}$ be an embedded sub-manifold satisfying $d_{\widetilde{D}}(x,\partial\widetilde{D})>r$. Let $D_0$ be the connected component of $\widetilde{D}\cap B_r(x)$ containing $x$, and define $D_i$ inductively as the connected component of $f(D_{i-1})\cap B_r\left(f^i(x)\right)$ containing $f^i(x)$, for $1\le i\le n$; by construction, $d_{D_0}(x,\partial D_0)\ge r$.

We first show that the sub-manifold $D_n$ contains a disk of radius $r$. By the construction, $D_i\subset B_r(f^i(x))$ for $0\le i\le n$, and $f^{-k}(D_n) \subset D_{n-k}$ for every $0 \le k \le n$. Then Lemma~\ref{Lem:submanifold-cone} implies that all the pre-images $\{f^{-k}(D_n)\}_{0<k\le n}$ are tangent to the cone field of width $a$. We argue by contradiction: assume that $D_n$ has radius less than $r$; then there exists some point $y_n\in \partial(D_n)$ such that $d_{D_n}(f^n(x),y_n)< r$. Define $y_{n-k}=f^{-k}(y_n)$ for every $0\le k \le n$, so that $y_i\in D_i\subset B_r(f^i(x))$ for every $0\le i \le n-1$. Thus, we can choose points $z_k\in f^{-(k+1)}(D_n)$ (given by the mean value theorem along the curves realizing the distances) and apply inequality $(1)$ to get the estimate
\begin{eqnarray*}
d_{f^{-k}D_n}\left(f^{n-k}(x),y_{n-k}\right)
&\ge & m\left(Df/T_{z_k}(f^{-k-1}D_{n})\right)d_{f^{-k-1}D_n}(f^{n-k-1}(x),y_{n-k-1})\\
& \geq &\sqrt{\gamma_2}\,m(Df/F(f^{n-k-1}(x)))\,d_{f^{-k-1}D_n}(f^{n-k-1}(x),y_{n-k-1}),
\end{eqnarray*}
for every $0\le k \le n-1$.
Consequently,
$$
d_{D_n}(f^n(x),y_n)\ge (\sqrt{\gamma_2})^k \prod_{j=n-k}^{n-1} m(Df/F(f^j(x)))\,d_{f^{-k}(D_n)}\left(f^{n-k}(x),y_{n-k}\right),
$$
for every $1\le k \le n$. As $n$ is a $\gamma_2$-hyperbolic time for $x$, we know
$$
\prod_{j=n-k}^{n-1}m(Df/F(f^j(x)))\ge \gamma_2^{-k}.
$$
So
$$
d_{D_n}\left(f^n(x),y_n\right) \ge (\gamma_2^{-1/2})^kd_{f^{-k}D_n}\left(f^{n-k}(x),y_{n-k}\right).\quad \eqno(2)
$$
By the assumption $d_{D_n}(f^n(x),y_n)<r$, all the points $y_i$ are contained in the interior of $B_r(f^i(x))$, and $d_{D_i}(f^i(x),y_i)\le {\gamma_2}^{(n-i)/2}\, r<r$ for $0\le i \le n-1$. Hence $y_0 \in \partial(D_0)$, so that $d_{D_0}(x,y_0)\ge r$, a contradiction. Therefore, the radius of $D_n$ is at least $r$, and we can take a disk $\widetilde{D_n}\subset D_n$ of radius $r$ around $f^n(x)$. Let $D=f^{-n}(\widetilde{D_n})$; then $D$ satisfies the first two properties immediately by our construction. The last inequality, the backward contraction, is deduced exactly as inequality $(2)$: it is a consequence of the assumption that $n$ is a hyperbolic time for $x$ and of the fact that each sub-manifold $f^i(D)$, $0\le i\le n$, is tangent to the cone field (of width at most $a$) and lies within distance $r$ of $f^i(x)$, so that estimate $(1)$ applies inductively.
\end{proof}
\subsection{Distortion bounds and H\"older curvature at hyperbolic times}
\begin{proposition}\label{diskproperty} There exist constants $a>0$ and $r>0$ such that if $x\in \Lambda_{\lambda_1,1}$ and $n$ is a $\lambda_2$-hyperbolic time for $x$, then for any sub-manifold $D$ tangent to $\mathcal{C}_a^F$ with radius larger than $r$ around $x$ we have
\begin{itemize}
\item $d_{f^{n-k}(D)}\left(f^{n-k}(x),f^{n-k}(y)\right)\le {\lambda_2}^{k/2}d_{f^n(D)}\left(f^n(x),f^n(y)\right)$ for any $0\le k \le n$;
\item $dist\left(T_{f^j(y)}f^j(D),F(f^j(y))\right)\le {\lambda_2}^j \cdot a$ for every $0\le j \le n$,
\end{itemize}
whenever $y\in D$ satisfies $d_{f^n(D)}\left(f^n(x),f^n(y)\right)\le r$.
\end{proposition}
\begin{proof}
This follows from Lemma \ref{fromweakcontract} and Lemma \ref{Lemma:hyperbolictangentcone}.
\end{proof}
We now discuss the \emph{bounded distortion} property, which plays a crucial role in the proof of the existence of SRB measures. From this point on we use the assumption that $F$ is H\"{o}lder continuous.
\begin{proposition}\label{pro:bounded distortion} There exist $a>0$, $r>0$ and $\mathcal{K}>0$ such that for any $C^1$ sub-manifold $D$ tangent to $\mathcal{C}_a^F$ with radius larger than $r$ around $x\in \Lambda_{\lambda_1,1}$, if $n\geq 1$ is a $\lambda_2$-hyperbolic time for $x$, then
$$
\frac{1}{\mathcal{K}}\leq\frac{|\det Df^n/T_yD|}{|\det Df^n/T_xD|}\leq \mathcal{K}
$$
for every $y\in D$ such that $d_{f^nD}(f^n(x),f^n(y))\leq r$.
\end{proposition}
\begin{proof}
Choose $a,r$ satisfying Proposition \ref{diskproperty}; without loss of generality, suppose $r<1$. Then one obtains
\begin{eqnarray*}
\left|\log \frac{|\det {Df}^n/T_yD|}{|\det{Df}^n/F(y)|}\right|& \leq & \sum_{i=0}^{n-1}\left|\log|\det Df/T_{f^iy}f^iD|-\log|\det Df/F(f^i(y))|\right|\\
& \leq & \sum_{i=0}^{n-1}R_1\, dist(T_{f^iy}f^i D, F(f^i(y)))\\
& \leq & \sum_{i=0}^{n-1}R_1 {\lambda_2}^i a\\
& \leq & R_1 \cdot \frac{a}{1- \lambda_2},
\end{eqnarray*}
where $R_1$ is a universal constant depending only on $f$. In particular, taking $y=x$, we have
$$
\left|\log \frac{|\det{Df}^n/F(x)|}{|\det {Df}^n/T_xD|}\right| \leq R_1\cdot \frac{a}{1- \lambda_2}.
$$
Since the bundle $F$ is H\"{o}lder by assumption, we may suppose that $x\mapsto F(x)$ is $\beta$-H\"{o}lder continuous for some $0<\beta\leq 1$. Therefore, we have the estimate
\begin{eqnarray*}
\left|\log \frac{|\det {Df}^n/F(y)|}{|\det {Df}^n/F(x)|}\right|& \leq &\sum_{i=0}^{n-1}\left|\log|\det Df/F(f^i(x))|-\log|\det Df/F(f^i(y))|\right|\\
& \leq & \sum_{i=0}^{n-1}R_2d(f^i(x),f^i(y))^{\beta}\\
& \leq & \sum_{i=0}^{n-1}R_2d_{f^iD}(f^i(x), f^i(y))^{\beta},
\end{eqnarray*}
where $R_2$ is the H\"older constant of $\log|\det Df/F|$. By Proposition~\ref{diskproperty}, we obtain
\begin{eqnarray*}
\left|\log \frac{|\det Df^n/F(y)|}{|\det Df^n/F(x)|}\right|& \leq &\sum_{i=0}^{n-1}R_2d_{f^iD}(f^i(x),f^i(y))^{\beta}\\
& \leq & R_2 \sum_{i=0}^{n-1}\left[(\lambda_2)^{\frac{n-i}{2}}d_{f^nD}(f^n(x), f^n(y))\right]^{\beta}\\
& \leq & R_2 \sum_{i=0}^{n-1}({\lambda_2}^{\frac{\beta}{2}})^{n-i}r^{\beta}\\
& \leq & R_2 \cdot \frac{{\lambda_2}^{\frac{\beta}{2}}r^\beta}{1-{\lambda_2}^{\frac{\beta}{2}}}.
\end{eqnarray*}
Combining all the inequalities above, it follows that
\begin{eqnarray*}
\left|\log\frac{|\det {Df}^n/T_yD|}{|\det {Df}^n/T_xD|}\right|&\leq& \left|\log \frac{|\det {Df}^n/T_yD|}{|\det {Df}^n/F(y)|}\right|+\left|\log \frac{|\det {Df}^n/F(y)|}{|\det {Df}^n/F(x)|}\right|\\
&+& \left|\log \frac{|\det {Df}^n/F(x)|}{|\det {Df}^n/T_xD|}\right|\\
& \leq & 2 R_1 \cdot \frac{a}{1- \lambda_2}+ R_2 \cdot \frac{{\lambda_2}^{\frac{\beta}{2}}r^\beta}{1-{\lambda_2}^{\frac{\beta}{2}}}.
\end{eqnarray*}
Since $r<1$, it suffices to take
$$\mathcal{K}=\exp\left(2R_1\frac{a}{1- \lambda_2}+ R_2 \cdot \frac{{\lambda_2}^{\frac{\beta}{2}}}{1-{\lambda_2}^{\frac{\beta}{2}}}\right).
$$
\end{proof}
For an embedded $C^1$ sub-manifold $D$, we say that $D$ is $C^{1+\xi}$, or that the tangent bundle $TD$ is $\xi$-H\"older continuous, if $x\mapsto T_x D$ defines a H\"older continuous section (with H\"older exponent $\xi$) from $D$ to the Grassmannian bundle over $D$. We will work in local coordinates. By the compactness of $M$ we can fix $\delta_0 >0$ small in advance such that for any $x\in M$ the inverse of the exponential map, $\exp_x^{-1}$, is well defined on the $\delta_0$-neighborhood of $x$. Denote by $V_x$ the corresponding neighborhood of the origin of $T_x M$, and identify these two neighborhoods. For every $a>0$, up to shrinking $\delta_0$, whenever $D$ is tangent to the cone field $\mathcal{C}_{a}^F$, for any $y \in D \cap V_x$ the tangent space $T_y D$ is parallel to the graph of a unique linear map $L_x(y)$ from $T_x D$ to $E(x)$. Now we can describe the H\"older property of the tangent bundle in local coordinates.
\begin{definition} Let $\xi \in (0,1]$ be the constant fixed in standing assumption $({\rm H})$ and let $C>0$. If $D$ is tangent to the cone field $\mathcal{C}_{a}^F$, we say that the tangent bundle $TD$ is $(C,\xi)$-H\"older continuous if
$$
\|L_x(y)\| \le Cd_D(x,y)^{\xi} ~~~\text{for every}~~y\in D\cap V_x.
$$
\end{definition}
For a given $C^{1+\xi}$ sub-manifold $D$ tangent to the $F$-direction cone field, we then define its \emph{H\"older curvature} by
$$
\mathcal{H}_c(D)=\inf \bigg\{C>0: TD~\text{is} ~(C,\xi)\text{-H\"older continuous} \bigg\}.
$$
In the next section, we will iterate $C^2$ disks tangent to the $F$-direction cone field and then pass to limits of the iterated disks.
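As a sanity check (this is the case we will actually use): any $C^2$ disk $D$ tangent to the cone field has finite H\"older curvature. Indeed, $y\mapsto T_yD$ is then Lipschitz, so there is a constant $L'>0$ with $\|L_x(y)\|\le L'\, d_D(x,y)$ for $y\in D\cap V_x$, and hence, since $\xi\le 1$,
$$
\|L_x(y)\|\le L'\, d_D(x,y)\le L'\, d_D(x,y)^{\xi}\qquad\text{whenever}~d_D(x,y)\le 1,
$$
so that $\mathcal{H}_c(D)\le L'$ once $\delta_0$ is small enough that $d_D(x,y)\le 1$ on $D\cap V_x$.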
The next proposition allows one to apply the Ascoli--Arzela theorem to the disks accumulated along hyperbolic times, which we will prove are actually unstable disks.
\begin{proposition}\label{Holder curvature} There exist constants $0<\lambda_4<1$, $\mathcal{L}>0$, $a>0$ and $r>0$ such that for any given $C^{1+\xi}$ sub-manifold $\widetilde{D}$ tangent to $\mathcal{C}_a^F$ with radius larger than $r$ around $x\in \Lambda_{\lambda_1,1}$, if $n$ is a $\lambda_2$-hyperbolic time for $x$, then there is a sub-manifold $D\subset \widetilde{D}$ containing $x$ such that $f^n(D)$ is contained in $B_r(f^n(x))$ and the H\"older curvature of $f^n(D)$ satisfies
$$
\mathcal{H}_c(f^n(D))\le \lambda_4^n \mathcal{H}_c(\widetilde{D})+ \frac{\mathcal{L}}{1-{\lambda}_4}.
$$
As a consequence, $\mathcal{H}_c(f^n(D))< 2\mathcal{L}/ (1-{\lambda}_4)$ when the $\lambda_2$-hyperbolic time $n$ is large enough.
\end{proposition}
\begin{proof}
By Proposition~\ref{diskproperty}, we can choose $a>0$ and $r>0$ such that there exists a sub-manifold $D\subset \widetilde{D}$ containing $x$ with the following properties:
\begin{itemize}
\item $f^i({D})$ is contained in the corresponding ball of radius $r$, for any $0\le i \le n$;
\item $f^n({D})$ is a disk of radius $r$ centered at $f^n(x)$.
\end{itemize}
Without loss of generality we assume $r \le \delta_0$. Given $y\in D$, in the neighborhoods $V_y$ and $V_{f(y)}$ we can express $f$ in local coordinates from $T_yD\oplus E(y)$ to $T_{f(y)}f(D)\oplus E(f(y))$ as $f(u,v)=(u_1(u,v),v_1(u,v))$, so that $Df(u,v)$ is given by the matrix
$$
Df(u,v)=
\left(
\begin{array}{cc}
\partial_u u_1 & \partial_v u_1 \\
\partial_u v_1 & \partial_v v_1 \\
\end{array}
\right);$$
since $Df(E(y))=E(f(y))$ and $Df(T_yD)=T_{f(y)}f(D)$, we have
$$\partial_u u_1(0,0)=Df/T_y D,~~ \partial_v u_1(0,0) =0,~~ \partial_u v_1(0,0)=0,~~ \partial_v v_1(0,0)=Df/E(y). $$
We fix the following constants:
\begin{itemize}
\item $L\ge 1$ such that for any disk $D$ tangent to the cone field associated to $F$ and any $y\in D$, we have $\|L_y(z)\|\le L$ for any $z\in D$;
\item $L_1>0$ such that $Df$ is $(L_1,\xi)$-H\"older.
\end{itemize}
Notice that these constants do not depend on $y$.
\bigskip
For every $0<\alpha< b/4$, we can adjust $r$ and $a$ such that
\begin{eqnarray*}
&~&m(\partial_u u_1(z)) \ge m(Df/F(x))-\alpha/L,~\|\partial_v u_1(z)\| \le \alpha/L,\\
&~&\|\partial_u v_1(z)\|\le\alpha/L,~\|\partial_v v_1(z)-Df/E(x)\|\le \alpha/L,
\end{eqnarray*}
for any $z\in D$.
\begin{Claim}
The H\"older curvature ${\cal H}_c(f(D))$ of $f(D)$ satisfies
$$
{\cal H}_c(f(D))\le \frac{\|Df/E(x)\|+2\alpha}{(m(Df/F(x))-2\alpha)^{1+\xi}}\mathcal{H}_c(D)+ \frac{L_1}{(m(Df/F(x))-2\alpha)^{1+\xi}}.
$$
\end{Claim}
\begin{proof}[Proof of the Claim]
To estimate the H\"older curvature of $f(D)$, it suffices to bound
$$\sup_{z_1\in f(D)}\frac{\|L_{f(y)}(z_1)\|}{d_{f(D)}(f(y),z_1)^{\xi}},$$
since $y\in D$ can be chosen arbitrarily. For every $z_1\in f(D)$, by the preceding discussion there exists a unique linear map $L_{f(y)}(z_1)$ whose graph is parallel to the tangent space $T_{z_1}f(D)$; likewise, for the pre-image $z$ of $z_1$ there exists a unique linear map $L_y(z)$ parallel to $T_zD$. By the mean value theorem, there exists some point $w \in D$ such that
$$
d_{f(D)}(f(y),z_1) \ge m(Df/T_w D)d_{D}(y,z)\ge (m(Df/F(x))-\alpha/L) d_{D}(y,z).
$$
By the construction, $L_{f(y)}(z_1)$ has the following expression:
$$
L_{f(y)}(z_1)=\left(\partial_u v_1(z)+ \partial_v v_1(z)L_y(z)\right)\left(\partial_u u_1(z)+ \partial_v u_1(z) L_y(z)\right)^{-1}.
$$
We have $\|\partial_v u_1(z) L_y(z)\|\le (\alpha/L) \|L_y(z)\|\le \alpha < m(\partial_u u_1(z))$, and furthermore,
\begin{eqnarray*}
\|(\partial_u u_1(z)+ \partial_v u_1(z) L_y(z))^{-1}\| & \le & \frac{1}{m(\partial_u u_1(z))-\|\partial_v u_1(z)\|L}\\
& \le & \frac{1}{m(Df/F(x))-\alpha/L -\alpha}\\
& \le & \frac{1}{m(Df/F(x))-2\alpha},
\end{eqnarray*}
$$
\|\partial_u v_1(z)+ \partial_v v_1(z)L_y(z)\| \le L_1 d_D(y,z)^\xi+ (\|Df/E(x)\|+\alpha/L)\|L_y(z)\|.
$$
Combining these estimates with the fact that $\|L_y(z)\|{d_D(y,z)}^{-\xi} \le \mathcal{H}_c(D)$, we get
\begin{eqnarray*}
\frac{\|L_{f(y)}(z_1)\|}{d_{f(D)}(f(y),z_1)^\xi}&\le&\frac{\|L_{f(y)}(z_1)\|}{(m(Df/F(x))-2\alpha)^{\xi}d_D(y,z)^\xi}\\
&\le&\frac{\|\partial_u v_1(z)+\partial_v v_1(z)L_y(z)\|}{ (m(Df/F(x))-2\alpha)^{1+\xi}d_D(y,z)^\xi}\\
&\le&\frac{L_1 d_D(y,z)^\xi}{(m(Df/F(x))-2\alpha)^{1+\xi}d_D(y,z)^\xi}+\frac{(\|Df/E(x)\|+2\alpha)\|L_y(z)\|}{(m(Df/F(x))-2\alpha)^{1+\xi}d_D(y,z)^\xi}\\
&\le&\frac{\|Df/E(x)\|+2\alpha}{(m(Df/F(x))-2\alpha)^{1+\xi}}\mathcal{H}_c(D)+ \frac{L_1}{(m(Df/F(x))-2\alpha)^{1+\xi}}.
\end{eqnarray*}
\end{proof}
Recall that $b=\inf_{x\in \overline{U}} m(Df/F(x))$ and $\alpha< b/4$, so that
$$
\frac{L_1}{(m(Df/F(y))-2\alpha)^{1+\xi}}\le \frac{L_1}{{(b/2)}^{1+\xi}}.
$$
Define
$$
\mathcal{L}=\frac{2^{1+\xi}L_1}{b^{1+\xi}}
$$
and
$$
c_j=\frac{\|Df/E(f^j(x))\|+2\alpha}{(m(Df/F(f^j(x)))-2\alpha)^{1+\xi}} ~~\text{for every}~~0 \le j \le n-1.
$$
Using the claim inductively, we obtain
$$
\mathcal{H}_c(f^n(D)) \le c_0\cdots c_{n-1} \mathcal{H}_c(D)+ {\mathcal{L}}(1+ c_{n-1}+c_{n-1}c_{n-2}+ \cdots + c_{n-1}\cdots c_1).
$$
Recalling the comments after standing assumption $({\rm H})$, for some $\lambda_4 \in (\lambda_3,1)$ fixed in advance, choosing $\alpha$ sufficiently small (by reducing $r$ and $a$) we have
$$
\prod_{j=n-k}^{n-1} c_j \le \lambda_4^k ~~\text{for every}~~ 1 \le k \le n,
$$
and therefore
$$
\mathcal{H}_c({f^n(D)}) \le \lambda_4^n\mathcal{H}_c(D)+ \frac{\mathcal{L}}{1-\lambda_4}.
$$
\end{proof}
\section{The iteration of Lebesgue measure}
The main aim of this section is to prove Theorem~\ref{Theo-attractor}. By standing assumption $({\rm H})$ we know that ${\rm Leb}(\Lambda_{\lambda_1, 1})>0$. We now fix $a$ and $r$ as in Proposition~\ref{diskproperty} and Proposition~\ref{Holder curvature}. Then, restricting to a small neighborhood of a Lebesgue density point, one can construct a smooth foliation all of whose leaves are smooth (hence $C^2$) and tangent to the cone field ${\cal C}_a^F$ everywhere. By Fubini's theorem, there exists at least one leaf $D$ of this foliation such that $D$ intersects $\Lambda_{\lambda_1, 1}$ in a set of positive Lebesgue measure. We consider the sequence of averages of forward iterates of the Lebesgue measure restricted to the disk $D$ above, that is,
$$
\mu_n=\frac{1}{n}\sum_{i=0}^{n-1}f_{\ast}^i \rm Leb_D~.
$$
In this section we will prove that some ergodic component of any limit measure of $(\mu_n)$ is the SRB measure of Theorem \ref{Theo-attractor} and the \emph{physical} measure of Corollary \ref{corollary-attractor}. Our main ideas in this section come from \cite{ABV00}.
\subsection{Constructing the absolutely continuous (non-invariant) part of the limit measures}
For any disk $D$ containing $x$, denote by $B_D(x,\delta)$ the ball of radius $\delta$ around $x$ in $D$.
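We record a standard observation, used implicitly below: any weak-$*$ accumulation point $\mu$ of $(\mu_n)$ is $f$-invariant, since
$$
f_\ast\mu_n-\mu_n=\frac{1}{n}\left(f_\ast^{n}{\rm Leb}_D-{\rm Leb}_D\right)\longrightarrow 0
$$
in the weak-$*$ topology as $n\to\infty$.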
\begin{proposition}\label{Pro:acpart-iterate} There are $\eta>0$ and $0<r_1<r$ such that for each $n$, there are points $x_{n,1},\cdots,x_{n,k(n)}\in f^n(D)$ such that
\begin{itemize}
\item $f^{-n}(x_{n,j})\in \Lambda_{\lambda_1,1}$ and $n$ is a $\lambda_2$-hyperbolic time for $f^{-n}(x_{n,j})$, for $1\le j\le k(n)$;
\item the balls $B_{f^n(D)}(x_{n,j},r_1/4)$, $1\le j\le k(n)$, are pairwise disjoint;
\item there is $\widetilde{\varepsilon}_0>0$ such that for any $\varepsilon\in[0,\widetilde{\varepsilon}_0)$, we have
$$\mu_{n,ac,\varepsilon}\Big(\bigcup_{0\le i\le n-1}K_{i,\varepsilon}\Big)\ge \eta,$$
where
$$K_{n,\varepsilon}=\bigcup_{1\le i\le k(n)}B_{f^n(D)}(x_{n,i},\frac{r_1}{4}-\varepsilon);$$
$$\mu_{n,ac,\varepsilon}=\frac{1}{n}\sum_{i=0}^{n-1}\sum_{j=1}^{k(i)}f^i_*{\rm Leb}_D|B_{f^i(D)}(x_{i,j},\frac{r_1}{4}-\varepsilon).$$
We denote ${\mu}_{n,ac}=\mu_{n,ac,0}$ and $K_n=K_{n,0}$.
\end{itemize}
\end{proposition}
\begin{proof}
Take $r_1\in (0,r)$ such that, letting $D_0$ be the sub-disk of $D$ obtained by removing the $r_1/2$-neighborhood of the boundary, we have ${\rm Leb}_D(\Lambda_{\lambda_1,1} \cap D_0)>0$. Define
$$
S_n=\bigg\{x \in \Lambda_{\lambda_1,1} \cap D_0: n~ \text{is a}~ \lambda_2 \text{-hyperbolic time for}~ x\bigg\}.
$$
\paragraph{Step 1:} We first show that there exists a constant $\tau >0$ such that for each $n$ there are balls $B_{f^n(D)}(x_{n,j},r_1/4)$, with $x_{n,j}\in f^n(D)$ and $1 \le j \le k(n)$, having the following properties:
\begin{itemize}
\item $f^{-n}(x_{n,j})\in \Lambda_{\lambda_1,1}$ and $n$ is a $\lambda_2$-hyperbolic time for $f^{-n}(x_{n,j})$, for $1\le j\le k(n)$;
\item $B_{f^n(D)}(x_{n,j},r_1/4)$, $1\le j\le k(n)$, are pairwise disjoint;
\item we have the estimate
$$
f^n_\ast {\rm Leb}_D\left(\cup_{j=1}^{k(n)}B_{f^n(D)}(x_{n,j},r_1/4)\right) \ge \tau{\rm Leb}_D(S_n).\quad \eqno(3)
$$
\end{itemize}
Recall the Besicovitch covering lemma; see \cite[2.8.9--2.8.14]{geometric}.
\begin{Lemma}\label{lem:Besicovitch}(\textit{Besicovitch covering lemma}) For $k\in \mathbb{N}$, there exists a constant $p=p(k)\in \mathbb{N}$ such that for any $k$-dimensional compact $C^2$ Riemannian manifold $N$, any set $A\subset N$, and any family $\mathcal{B}$ of balls such that every $x\in A$ is the center of some ball in $\mathcal{B}$, there exist sub-families $\mathcal{B}_1,\cdots, \mathcal{B}_p$ of $\mathcal{B}$ with the following properties:
\begin{itemize}
\item $A \subset \bigcup_{i=1}^{p}\bigcup_{B\in \mathcal{B}_i}B$;
\item either $B \cap B'=\emptyset$ or $B=B'$, for any $B, B'\in \mathcal{B}_i$ and $1\le i \le p$.
\end{itemize}
\end{Lemma}
Now we apply the Besicovitch covering lemma. For each fixed $n$, put $N=f^n(D)$ and $A=f^n(S_n)$; $N$ is a $C^2$ sub-manifold since $f$ and $D$ are $C^2$. Let $\mathcal{B}=\{B_{f^n(D)}(x,r_1/4): x\in A \}$ be the family of balls. As a consequence of the Besicovitch covering lemma, we can choose sub-families $\mathcal{B}_1,\cdots, \mathcal{B}_p$ of $\mathcal{B}$ such that $f^n(S_n) \subset \bigcup_{i=1}^{p} \bigcup_{B\in \mathcal{B}_i}B$ and every $\mathcal{B}_i$ consists of disjoint balls of fixed radius $r_1/4$, so
$$
f^n_\ast {\rm Leb}_D\left(f^n(S_n)\right)\le f^n_\ast {\rm Leb}_D\left(\bigcup_{i=1}^{p} \bigcup_{B\in \mathcal{B}_i}B\right).
$$
We choose some $1\le i \le p$ such that
$$
f^n_\ast {\rm Leb}_D\left(\bigcup_{B\in \mathcal{B}_i}B\right) \ge \frac{1}{p} f^n_\ast {\rm Leb}_D\left(f^n(S_n)\right)=\frac{1}{p}{\rm Leb}_D(S_n).
$$
Let $B_{f^n(D)}(x_{n,j},r_1/4)$, $1\le j \le k(n)$, be the disjoint balls of $\mathcal{B}_i$. By our construction, $f^{-n}(x_{n,j})\in \Lambda_{\lambda_1,1}$ and $n$ is a $\lambda_2$-hyperbolic time for $f^{-n}(x_{n,j})$, $1\le j \le k(n)$, and the above estimate becomes
$$
f^n_\ast {\rm Leb}_D\left(\cup_{j=1}^{k(n)}B_{f^n(D)}(x_{n,j},r_1/4)\right) \ge \frac{1}{p}{\rm Leb}_D(S_n).
$$
It suffices to take $\tau=1/p$ to end this step.
\paragraph{Step 2:} Define
$$\mu_{n,ac}=\frac{1}{n}\sum_{i=0}^{n-1}\sum_{j=1}^{k(i)}f^i_*{\rm Leb}_D|B_{f^i(D)}(x_{i,j},r_1/4).$$
We consider the space $\{0,1,\cdots,n-1\}\times D$ with the product measure $\xi_n\times {\rm Leb}_D$, where $\xi_n$ is the uniform distribution on $\{0,1,\cdots,n-1\}$. Define the indicator function
$$
\chi(x,i)=\left\{
\begin{array}{cr}
1 & ~~~\text{if}~x\in S_i~;\\
0 & ~~~\text{otherwise}~.
\end{array}\right.
$$
Then, by Fubini's theorem,
\begin{eqnarray*}
\frac{1}{n}\sum_{i=0}^{n-1}{\rm Leb}_D(S_i)&=&\int\left(\int\chi(x,i)d{\rm Leb}_D(x)\right)d\xi_n(i)\\
&=& \int\left(\int\chi(x,i)d\xi_n(i)\right)d{\rm Leb}_D(x).
\end{eqnarray*}
Now ${\rm Leb}_D( \Lambda_{\lambda_1,1} \cap D_0)> 0$, and the density of $\lambda_2$-hyperbolic times of every point of $\Lambda_{\lambda_1,1}$ is bounded from below by $\theta=\theta(\lambda_1,\lambda_2,f)>0$; in other words, $\int\chi(x,i)d\xi_n(i)\ge\theta$ for every $x\in \Lambda_{\lambda_1,1}\cap D_0$. Thus
$$\frac{1}{n}\sum_{i=0}^{n-1}{\rm Leb}_D(S_i)\geq \theta {\rm Leb}_D( \Lambda_{\lambda_1,1} \cap D_0),~~~\text{for every }~n\in \mathbb{N}. \quad \eqno(4)
$$
By the definition of $\mu_{n,ac}$ and the estimates $(3)$ and $(4)$, it follows that
\begin{eqnarray*}
\mu_{n,ac}\left(\bigcup_{0\le i\le n-1}\bigcup_{1\le j\le k(i)}B_{f^i(D)}(x_{i,j},r_1/4)\right)& \ge & \frac{1}{n}\sum_{i=0}^{n-1}\sum_{j=1}^{k(i)}(f_*^i {\rm Leb}_D)\left(B_{f^i(D)}(x_{i,j},r_1/4)\right)\\
&=&\frac{1}{n}\sum_{i=0}^{n-1}(f_*^i {\rm Leb}_D)(K_i)\\
&\geq & \frac{1}{n}\sum_{i=0}^{n-1}\tau {\rm Leb}_D(S_i)\\
&\geq & \tau \theta {\rm Leb}_D(\Lambda_{\lambda_1,1} \cap D_0);
\end{eqnarray*}
hence, by definition,
$$\mu_{n,ac}\left(\bigcup_{0\le i\le n-1}K_i\right)=\mu_{n,ac}\left(\bigcup_{0\le i\le n-1}\bigcup_{1\le j\le k(i)}B_{f^i(D)}(x_{i,j},r_1/4)\right) \geq \eta_0,
$$
where $\eta_0=\tau \theta {\rm Leb}_D(\Lambda_{\lambda_1,1} \cap D_0)$. Given $i\ge 0$, for any measurable sets $A,B\subset f^i(D)$ with ${\rm Leb}_{f^i(D)}(B)>0$, by the mean value theorem for integrals we have
$$\frac{{\rm Leb}_{f^i(D)}(A)}{{\rm Leb}_{f^i(D)}(B)}=\frac{\int_{f^{-i}(A)}|{\rm det}( Df^i/TD)|\,d{\rm Leb}_D } {\int_{f^{-i}(B)}|{\rm det}(Df^i/TD)|\,d{\rm Leb}_D}= \frac{|{\rm det}(Df^i/T_{\xi_A}D)|\,{\rm Leb}_D(f^{-i}(A))}{|{\rm det}(Df^i/T_{\xi_B}D)|\,{\rm Leb}_D(f^{-i}(B))}$$
for some $\xi_A\in f^{-i}(A)$ and $\xi_B\in f^{-i}(B)$.
By Proposition~\ref{pro:bounded distortion}, if we take $A=B_{f^i(D)}(x,r_1/4)\setminus B_{f^i(D)}(x,\frac{r_1}{4}-\varepsilon)$ and $B=B_{f^i(D)}(x,r_1/4)$, with $x=x_{i,j}$, we obtain
$$
\frac{f_*^i{\rm Leb}_D\left(B_{f^i(D)}(x,r_1/4)\setminus B_{f^i(D)}(x,\frac{r_1}{4}-\varepsilon)\right)} {f_*^i{\rm Leb}_D(B_{f^i(D)}(x,r_1/4))}=\frac{{\rm Leb}_D(f^{-i}(A))}{{\rm Leb}_D(f^{-i}(B))}\le \mathcal{K}\frac{{\rm Leb}_{f^i(D)}(A)}{{\rm Leb}_{f^i(D)}(B)}.$$
Since
$$\frac{{\rm Leb}_{f^i(D)}\left(B_{f^i(D)}(x,r_1/4)\setminus B_{f^i(D)}(x,\frac{r_1}{4}-\varepsilon)\right)}{{\rm Leb}_{f^i(D)}(B_{f^i(D)}(x,r_1/4))}
$$
can be made arbitrarily small by reducing $\varepsilon$, for $\eta=\eta_0/2$ there is $\widetilde{\varepsilon}_0 >0$ small enough such that for any $\varepsilon\in [0,\widetilde{\varepsilon}_0)$ one obtains $\mu_{n,ac,\varepsilon}(\bigcup_{0\le i\le n-1}K_{i,\varepsilon})\ge \eta$. The proof is complete.
\end{proof}
Now let $K_{\infty}=\bigcap_{n=1}^{\infty}\overline{\bigcup_{j\ge n}K_j}$ be the set of accumulation points of $\{K_j\}_{j\ge 1}$, and let $x_{\infty}$ be an accumulation point of $\{x_{n,j(n)}\}$ for some choice of $j(n)$; up to passing to subsequences, we may suppose $x_{n,j(n)}\rightarrow x_{\infty}$. As we have shown, the disks $\{B_{f^n(D)}(x_{n,j(n)},r_1/4): n\ge 1\}$ are all tangent to the $F$-direction cone field of fixed width $a$, have uniform size, and, by Proposition \ref{Holder curvature}, have uniformly bounded H\"{o}lder curvature for $n$ large enough (recall that $n$ is a hyperbolic time). Therefore, the Ascoli--Arzela theorem ensures that there exists a disk $B(x_{\infty})$ of radius $r_1/4$ around $x_{\infty}$ such that $B_{f^n(D)}(x_{n,j(n)},r_1/4)$ converges to $B(x_{\infty})$ in the $C^1$ topology; then $B(x_{\infty})\subset K_{\infty}$. We will now prove certain properties of the accumulation points and the corresponding disks.
\begin{lemma}\label{lem:unstable} Let $x_{\infty}$ be an accumulation point of $\{x_{n,j(n)}\}$ for some $j(n)$, and let $B(x_{\infty})$ be the accumulation disk. Then:
\begin{enumerate}
\item $K_{\infty}\subset K$; in particular, $x_{\infty}\in B(x_{\infty})\subset K$;
\item the subspace $F(x_\infty)$ is uniformly expanding in the following sense:
$$
\|{Df}^{-k}/F(x_\infty)\|\leq \lambda_2^{k/2} ~~~\text{for every}~~ k \geq 1;
$$
\item $B(x_\infty)$ is contained in the corresponding strong unstable manifold $\mathcal{W}^u_{loc}(x_\infty)$:
$$d(f^{-k}(x_\infty),f^{-k}(y))\leq \lambda_2^{k/2} d(x_\infty,y), ~~~ \forall y\in B(x_\infty);$$
\item $B(x_\infty)$ is tangent to $F(y)$ at every point $y\in B(x_\infty)$.
\end{enumerate}
\end{lemma}
\begin{proof}
By the construction, one observes that $K_j\subset f^{\ell}(U)$ for any $j\ge {\ell}$. Then $\bigcup_{j\ge {\ell}}K_j\subset f^{\ell}(U)$, which implies
$$\overline{\bigcup_{j\ge {\ell}}K_j}\subset \overline{f^{\ell}(U)} \subset f^{{\ell}-1}(U).$$
Therefore,
$$K_{\infty}=\bigcap_{{\ell}\in \mathbb{N}}\overline{\bigcup_{j\ge {\ell}}K_j}\subset \bigcap_{{\ell}\in \mathbb{N}}f^{{\ell}-1}(U)=K.$$
Then $x_{\infty}\in B(x_{\infty})\subset K_{\infty}\subset K$, which is conclusion $(1)$. Next we check the remaining three conclusions.
By construction and Proposition \ref{diskproperty}, we have the following:
\begin{itemize}
\item $\prod_{l=0}^{k-1}\|Df^{-1}/F(f^{-l}(x_{n,j(n)}))\|\le \lambda_2^k$ for every $1\leq k \leq n$ and every $n$;
\item for every $0 \leq k\leq n$, $f^{-k}$ is a $\lambda_2^{k/2}$-contraction on $B_{f^n(D)}(x_{n,j(n)},r_1/4)$, i.e., $d(f^{-k}(x_{n,j(n)}),f^{-k}(y))\leq \lambda_2^{k/2}d(x_{n,j(n)},y)$ whenever $y\in B_{f^n(D)}(x_{n,j(n)},r_1/4)$;
\item the disks $\{B_{f^n(D)}(x_{n,j(n)},r_1/4)\}$ are tangent to the $F$-direction cone field, and the angles between $F$ and the tangent spaces of these disks are exponentially contracted as $n$ increases.
\end{itemize}
Passing to the limit, we obtain $(2)$, $(3)$ and $(4)$.
\end{proof}
\begin{definition}
A \emph{fake $F$-cylinder} at a point $y$ is a set $\exp_y(\varphi(X\times D_0))$, where $X\subset {\mathbb R}^{\dim E}$ is a compact set, $D_0\subset{\mathbb R}^{\dim F}$ is the unit ball, and for each $x\in X$, $\varphi_x:~D_0\to E$ is a $C^{1+\xi}$ map such that
\begin{itemize}
\item $\exp_y(\varphi_x(D_0))$ is tangent to the cone field ${\cal C}_a^F$.
\end{itemize}
If in addition we have that
\begin{itemize}
\item $\exp_y(\varphi_x(D_0))$ is a local unstable manifold,
\item the intersection of $\exp_y(\varphi_x(D_0))$ and $\exp_y(\varphi_z(D_0))$ is relatively open in each of them, for any $x,z\in X$,
\end{itemize}
then we say that $\exp_y(\varphi(X\times D_0))$ is an \emph{$F$-cylinder}. The family $\{\exp_y(\varphi_x(D_0))\}_{x\in X}$ is called the \emph{canonical partition} of the $F$-cylinder.
\end{definition}
\begin{definition}
For two finite Borel measures $\nu_1$ and $\nu_2$, we write $\nu_1\prec\nu_2$ if $\nu_1(A)\le \nu_2(A)$ for every measurable set $A$.
\end{definition}
\begin{proposition}\label{partabsolutelycontinuous}
There are a measure $\mu_{ac}\prec\mu$ and an $F$-cylinder $L_\infty$ such that $\mu_{ac}(L_\infty)> 0$ and the conditional measure of $\mu_{ac}$ associated to the canonical partition ${\cal L}_\infty$ is absolutely continuous with respect to the Lebesgue measure for almost every $\gamma\in{\cal L}_\infty$.
\end{proposition}
\begin{proof}
Let $\{n_k\}$ be a subsequence such that $\{\mu_{n_k}\}$ converges, with limit $\mu$. By taking a further subsequence if necessary, one can assume that $\{\mu_{n_k,ac}\}$ converges as well; set $\mu_{ac}=\lim_{k\to\infty}\mu_{n_k,ac}$. We have $\mu_{ac}(\overline{U})\geq \limsup_{k \rightarrow \infty}\mu_{n_k,ac}(\overline{U})\geq\eta>0$, and then $\mu_{ac}(K_{\infty})\ge \eta$, since ${\rm supp}(\mu_{ac})\subset K_{\infty}$. For $\varepsilon>0$ small, take
$$K_{\infty,\varepsilon}=\bigcap_{n\in\mathbb N}{\overline {\bigcup_{j\ge n}K_{j,\varepsilon}}},$$
and set $\mu_{ac,\varepsilon}=\lim_{k\to\infty}\mu_{n_k,ac,\varepsilon}$ (passing to a further subsequence if necessary); we have ${\rm supp}(\mu_{ac,\varepsilon})\subset K_{\infty,\varepsilon}$. Take a point $y\in {\rm supp}(\mu_{ac,\varepsilon})$. Then for $\delta>0$ we have $\mu_{ac}(K_{\infty}\cap B(y,\delta))\ge \mu_{ac,\varepsilon}(K_{\infty,\varepsilon}\cap B(y,\delta))>0$. By construction, $K_{\infty,0}\cap B(y,\delta)$ is an $F$-cylinder if we take $\delta\ll \varepsilon$, where $B(y,\delta)$ is a small open neighborhood of $y$ of radius $\delta$.
Set
$$L_\infty=K_{\infty,0}\cap B(y,\delta)=\bigcup_{x\in X_\infty}\exp_y(\varphi_x(D_0)),$$
where $X_\infty=\{x\in E(y):x\in \exp_y^{-1}(\gamma),~\gamma~\textrm{is an unstable leaf in~}K_{\infty,0}\}$. Define $X_n=\{x\in E(y):x\in \exp_y^{-1}(B_{f^n(D)}(x_{n,j(n)},r_1/4))~\textrm{for some~}x_{n,j(n)}\in f^n(D),~\textrm{where~}n~\textrm{is a~}\lambda_2\textrm{-hyperbolic time for~}f^{-n}(x_{n,j(n)})\}$. Notice that $X_n$ may have non-empty intersection with $X_\infty$ or with $X_m$ for $m\neq n$. By the construction, we have $\mu_{ac}\prec\mu$. It remains to show that the conditional measure of $\mu_{ac}$ associated to the canonical partition ${\cal L}_\infty$ is absolutely continuous with respect to the Lebesgue measure for almost every $\gamma\in{\cal L}_\infty$. Define
$$L_n=\left(\bigcup B_{f^n(D)}(x_{n,j(n)},r_1/4)\right)\cap B(y,\delta).$$
Notice that $L_n$ can be identified with a fake $F$-cylinder ${\exp}_y\varphi(X_n\times D_0)$. Let $\widehat{L}=\bigcup_{0\leq i \le \infty}L_i\times \{i\}$, and let $\widehat{\mu}_{n,ac}$ be defined by
$$
\widehat{\mu}_{n,ac}\Big(\bigcup_{i=0}^{n-1}B_{i}\times \{i\}\Big)=\frac{1}{n}\sum_{i=0}^{n-1}f_{\ast}^{i} {\rm Leb}_D(B_i),
$$
where $B_i\subset L_i$ is a measurable set. We can define a limit in $\widehat{L}$ in the following way: $\lim_{n\to\infty}(x_n,m(n))=(x_0,n_0)$ if and only if $\lim_{n\to\infty}x_n=x_0$ in the Riemannian metric of the manifold $M$, and one of the following cases occurs:
\begin{itemize}
\item $n_0=\infty$, $m(n)=\infty$ and $x_0,x_n\in L_\infty$ for $n$ large enough;
\item $n_0=\infty$, $\lim_{n\to\infty}m(n)=\infty$ and $x_n\in L_{m(n)}$;
\item $n_0$ is finite and, for $n$ large enough, $m(n)=n_0$ and $x_0,x_n\in L_{n_0}$.
\end{itemize}
This limit gives a topology on $\widehat L$, and under this topology $\widehat L$ is a compact space.
\bigskip
The fake $F$-cylinder
$$\exp_{y}\left(\left(\bigcup_{1\le n\le\infty}X_n\right)\times D_0\right)$$
gives a measurable partition of $\widehat L$.
By Proposition~\ref{pro:bounded distortion}, there is a constant $\mathcal{C}>0$ such that for each measurable set $B\subset D_0$ and each $n\in{\mathbb N}$ we have
$$\frac{1}{\mathcal{C}}\frac{{\rm Leb}(B)}{{\rm Leb}(D_0)}\le \frac{{\widehat \mu}_{n,ac}(\cup_{i=0}^{n-1}X_i\times B)}{{\widehat \mu}_{n,ac}(\cup_{i=0}^{n-1}X_i\times D_0)}\le {\mathcal{C}}\frac{{\rm Leb}(B)}{{\rm Leb}(D_0)}.$$
By the dominated convergence theorem, for almost every disk in the $F$-cylinder $L_\infty$ the conditional measure of $\mu_{ac}$ is absolutely continuous with respect to the Lebesgue measure.
\end{proof}
\subsection{Existence of SRB measures and physical measures}
For each $x\in M$, consider the measures
$$\mu_{x,n}=\frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^i(x)}.$$
The set $\Sigma$ is defined as follows: $x\in\Sigma$ if and only if $\lim_{n\to\infty}\mu_{x,n}$ exists and is ergodic. From the Ergodic Decomposition theorem \cite[Chapter II.6]{Man87}, one knows that $\Sigma$ has total probability, and if one denotes $\mu_{x}=\lim_{n\to\infty}\mu_{x,n}$, then for any bounded measurable function $\psi$ and any invariant measure $\nu$, the map $x\mapsto \int \psi\, d\mu_x$ is measurable and
$$\int \psi\, d\nu=\int\!\!\int \psi\, d\mu_x\,d\nu(x).$$
\begin{lemma}\label{Lem:mu-disintegration}
There is a set $Z_\infty\subset L_\infty\cap\Sigma$ such that $\mu(Z_\infty)>0$ and the conditional measures of $(\mu|Z_\infty)$ on the unstable manifolds are absolutely continuous with respect to the Lebesgue measures on these manifolds.
\end{lemma}
\begin{proof}
Consider the family $\cal A$ of measurable sets $A$ such that $A\subset L_\infty\cap \Sigma$ and ${\rm Leb}_\gamma(\gamma\cap A)=0$ for each leaf $\gamma\in{\cal L}_\infty$. We can find a measurable set $A_\infty\in{\cal A}$ such that
$$\mu(A_\infty)=\max_{A\in{\cal A}}\mu(A).$$
Such a maximum exists because, given a sequence of measurable sets $\{A_n\}\subset{\cal A}$ with $\lim_{n\to\infty}\mu(A_n)=\sup_{A\in{\cal A}}\mu(A)$, we can take $A_{\infty}=\cup_{n=1}^{\infty}A_n$, which belongs to $\cal A$ by the definition of $\cal A$. Moreover $\mu_{ac}(A_{\infty})=0$, since the conditional measures of $\mu_{ac}$ along the leaves of $\mathcal{L}_{\infty}$ are absolutely continuous with respect to Lebesgue, as we proved in Proposition \ref{partabsolutelycontinuous}. Set $Z_\infty=(L_\infty\cap \Sigma)\setminus A_\infty$. Since $\mu_{ac}(Z_\infty)=\mu_{ac}(L_{\infty})>0$, we have $\mu(Z_\infty)>0$. For any measurable set $A\subset Z_\infty$ satisfying ${\rm Leb}_{\gamma}(A\cap \gamma)=0$ for almost every $\gamma\in{\cal L}_\infty$, the definitions of $A_\infty$ and $Z_\infty$ give $(\mu|Z_\infty)(A)=0$. This implies that $(\mu|Z_\infty)$ has absolutely continuous conditional measures on the unstable manifolds.
\end{proof}
\begin{lemma}\label{Lem:localconstant}
After reducing $\Sigma$ if necessary, for any two points $x,y\in\Sigma\cap \gamma$ lying on the same unstable manifold $\gamma$, we have $\mu_x=\mu_y$.
\end{lemma}
\begin{proof}
According to the Birkhoff Ergodic theorem, reducing $\Sigma$ if necessary, one can assume that for any $x\in\Sigma$ the backward average $\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^{-i}(x)}$ exists and equals $\mu_x$. For any $x,y\in\Sigma\cap \gamma$, since $d(f^{-i}(x),f^{-i}(y))\to 0$ as $i\to\infty$, one has $\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^{-i}(x)}=\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^{-i}(y)}$. This implies $\mu_x=\mu_y$.
\end{proof}
Denote by $\mathcal{P}=\{\gamma \cap Z_{\infty}:\gamma \in \mathcal{L}_{\infty}\}$ and ${\cal Q}=\{ Q\subset Z_{\infty}:~x,y\in Q~\textrm{if~and~only~if~}\mu_x=\mu_y\}$ the two measurable partitions of $Z_{\infty}$; by Lemma \ref{Lem:localconstant} we have $\mathcal{Q}\prec\mathcal{P}$, which means that $\mathcal{P}$ is finer than $\mathcal{Q}$. Let $\pi_{\cal{P}}:Z_{\infty}\rightarrow \cal{P}$ and $\pi_{\cal{Q}}:Z_{\infty}\rightarrow \cal{Q}$ be the projections. For every measurable subset $A\subset Z_\infty$, applying the Ergodic Decomposition theorem mentioned above with $\psi=\chi_A$, we obtain
$$
\mu(A)=\int_{\Sigma} \mu_x(A)\,d\mu(x)
$$
and
$$
\mu_x(A)=\int \chi_{A}\,d\mu_x=\lim_{n\rightarrow +\infty}\frac{1}{n}\sum_{i=0}^{n-1}\chi_{A}(f^i(x))
$$
$\mu$-almost everywhere, where $\chi_{A}$ denotes the characteristic function of the measurable set $A$. By Poincar\'{e}'s recurrence theorem, $\mu$-almost every point $z\in Z_\infty$ has infinitely many return times. Let $k(z)$ be the smallest positive integer such that $f^{-k(z)}(z)\in Z_\infty$. One gets
$$
\mu(A)=\int_{Z_\infty}k(z)\mu_z(A)\,d\mu(z)
$$
for any measurable subset $A$ of $Z_\infty$, where one uses the fact that $\mu_z=\mu_{f^i z}$ for every $i\in \mathbb{Z}$.

Recall the definition and basic properties of conditional expectation. Given two $\sigma$-algebras $\mathcal{B}_1$, $\mathcal{B}_2$ with $\mathcal{B}_2\subset \mathcal{B}_1$, that is, $\mathcal{B}_2$ a sub-$\sigma$-algebra of $\mathcal{B}_1$, and a measure space $(X,\mathcal{B}_1,\mu)$, one can define a conditional expectation operator $E(\cdot/\mathcal{B}_2):L^1(X,\mathcal{B}_1,\mu)\rightarrow L^1(X,\mathcal{B}_2,\mu)$ such that for every function $\phi\in L^1(X,\mathcal{B}_1,\mu)$, $E(\phi/\mathcal{B}_2)$ is the $\mu$-a.e.\ unique $\mathcal{B}_2$-measurable function with
$$
\int_A \phi\, d\mu=\int_A E(\phi/\mathcal{B}_2)\,d\mu,~~~\text{for every}~~A\in \mathcal{B}_2.
$$
For every $\phi\in L^1(X,\mathcal{B}_1,\mu)$ and every bounded $\mathcal{B}_2$-measurable function $\psi$, we have $E(\phi \psi/\mathcal{B}_2)=\psi E(\phi /\mathcal{B}_2)$; that is,
$$
\int_A \phi \psi\, d\mu= \int_A \psi E(\phi/\mathcal{B}_2)\,d\mu,~~~\text{for every}~~A\in \mathcal{B}_2.
$$
Consider the sub-$\sigma$-algebra $\cal{B}(\cal{Q})$ of the original $\cal{B}$ generated by the measurable partition $\cal{Q}$. Then there exists a ($\mu$-a.e.\ unique) conditional expectation $\ell=E(k/\mathcal{B}(\mathcal{Q}))$ of the return-time function $k$, which is $\mathcal{B}(\mathcal{Q})$-measurable and constant on each element of $\cal{Q}$. Moreover, as $z\mapsto \mu_z(A)$ is a bounded $\cal{B}(\cal{Q})$-measurable function for every measurable set $A$, by the property of conditional expectation above we have
$$
\int_E k(z)\mu_z(A)\,d\mu=\int_E \ell(z)\mu_z(A)\,d\mu,~~~\text{for every}~~\mathcal{B}(\mathcal{Q})\text{-measurable set}~E.
$$
We can define $\mu_{Q}=\mu_z$ and $\ell(Q)=\ell(z)$ for some (any) $z\in Q$, for every $Q\in \cal{Q}$; these are well defined since $\mu_z$ and $\ell(z)$ are constant on each element of $\cal{Q}$. Thus,
$$
\int \ell(z)\mu_z(A)\,d\mu=\int \ell(Q)\mu_Q(A)\,d\widehat\mu_{\cal{Q}},
$$
where $\widehat\mu_{\cal{Q}}$ is the quotient measure defined by $\widehat\mu_{\cal{Q}}(B)=\mu\big(\pi_{\cal{Q}}^{-1}(B)\big)$ for any measurable $B\subset \cal{Q}$. So,
$$
\mu(A)=\int \ell(Q)\mu_Q(A)\,d\widehat\mu_{\cal{Q}}, ~~\text{for every measurable subset}~A\subset Z_{\infty}.
$$
We have the following claim: \begin{Claim}\label{lemma;conditional measures} $\{\ell(Q)\mu_Q\}_{Q\in \cal{Q}}$ is the family of conditional measures of $\mu$ with respect to the measurable partition $\cal{Q}$. \end{Claim} \begin{proof} Suppose, by contradiction, that the claim fails; without loss of generality we may assume that there is a subset $B\subset \mathcal{Q}$ with positive $\widehat\mu_{\cal{Q}}$-measure such that $$ \ell(Q)\mu_Q(Q)>1 ~~\text{for any}~~Q\in B $$ (the case $\ell(Q)\mu_Q(Q)<1$ is analogous). By the definition of $\widehat\mu_{\cal{Q}}$, we have \begin{eqnarray*} \widehat\mu_{\cal{Q}}(B)=\mu (\pi_{\mathcal{Q}}^{-1}(B))& = &\int\ell(Q)\mu_Q(\pi_{\mathcal{Q}}^{-1}(B))d\widehat\mu_{\cal{Q}} \\ & = & {\int}_B \ell(Q)\mu_Q(\pi_{\mathcal{Q}}^{-1}(B))d\widehat\mu_{\cal{Q}} \\ & + & {\int}_{\mathcal{Q}\setminus B} \ell(Q)\mu_Q(\pi_{\mathcal{Q}}^{-1}(B))d\widehat\mu_{\cal{Q}}. \end{eqnarray*} Observe that $\mu_Q(\pi_{\mathcal{Q}}^{-1}(B))=0$ for any $Q\in \mathcal{Q}\setminus B$, because $\pi_{\mathcal{Q}}^{-1}(B)\subset Z_{\infty}\setminus Q$ for every $Q\in \mathcal{Q}\setminus B$ and $\mu_Q(Z_{\infty}\setminus Q)=0$ for every $Q\in \mathcal{Q}$. On the other hand, $Q\in B$ implies $\mu_Q(\pi_{\mathcal{Q}}^{-1}(B))=\mu_Q(Q)$. Putting all this together, we obtain \begin{eqnarray*} \widehat\mu_{\cal{Q}}(B) &=& {\int}_B \ell(Q)\mu_Q(\pi_{\mathcal{Q}}^{-1}(B))d\widehat\mu_{\cal{Q}} \\ &=& {\int}_B \ell(Q)\mu_Q(Q)d\widehat\mu_{\cal{Q}} \\ &>& \widehat\mu_{\cal{Q}}(B). \end{eqnarray*} This gives a contradiction, which completes the proof of the claim. \end{proof} \begin{lemma}\label{lem:ergodicmeasure} There exists some point $z\in Z_{\infty}$ such that $\mu_z(Z_{\infty})>0$ and $\mu_z$ has absolutely continuous conditional measures along the leaves of $\mathcal{L}_{\infty}$. \end{lemma} \begin{proof} For every $Q\in \cal{Q}$, let $\{\mu_{Q,P}: P\in \mathcal{P},P\subset Q \}$ be the family of conditional measures of $\mu_{Q}$ with respect to the finer partition $\cal{P}$ restricted to $Q$. Denote by $\widehat{\mu}_{Q,\mathcal{P}}$ the quotient measure of $\mu_{Q}$ with respect to the partition $\mathcal{P}$ restricted to $Q$; then by definition, for every measurable set $A$ we have $$ \mu_{Q}(A)=\int\mu_{Q,P}(A)d\widehat{\mu}_{Q,\mathcal{P}}, $$ which implies $$ \ell(Q)\mu_{Q}(A)=\int\mu_{Q,P}(A)d\ell(Q)\widehat{\mu}_{Q,\mathcal{P}}. $$ So $\{\mu_{Q,P}\}$ are conditional measures of $\ell(Q)\mu_{Q}$ with respect to the partition $\cal{P}$ restricted to $Q$. Denote by $\{\mu_P\}_{P\in \cal{P}}$ the conditional measures of $\mu$ with respect to the finer partition $\cal{P}$; as we have shown in the claim above, $\{\ell(Q)\mu_Q\}_{Q\in \cal{Q}}$ are the conditional measures of $\mu$ with respect to the measurable partition $\cal{Q}$. Therefore, by the essential uniqueness of the Rokhlin decomposition, we have $\mu_P=\mu_{Q,P}$ for $\widehat\mu_{\cal{Q}}$-almost every $Q\in \cal{Q}$ and $\widehat\mu_{Q,\cal{P}}$-almost every $P\in \cal{P}$ with $P\subset Q$. By the definition of $\mu_Q$, this is equivalent to saying that $\mu_P=\mu_{z,P}$ for $\mu$-almost every $z\in Z_{\infty}$ and $\widehat{\mu}_z$-almost every $P$, where $\widehat{\mu}_z$ denotes the quotient measure of $\mu_z$ with respect to the partition $\cal{P}$. Since we have $$ \int_{Z_{\infty}} k(z)\mu_z(Z_{\infty})d\mu= \mu(Z_{\infty})>0, $$ there exists a subset $Z_1 \subset Z_{\infty}$ with $\mu(Z_1)>0$ such that $\mu_z(Z_{\infty})>0$ for every $z\in Z_1$.
Furthermore, as we have shown in Lemma \ref{Lem:mu-disintegration}, $\mu_P$ is absolutely continuous with respect to Lebesgue measure for almost every $P$. Here one should notice that for every $P\in \cal{P}$ we have $P=\gamma \cap Z_{\infty}$ for some $\gamma \in \cal{L}_{\infty}$, by the construction of $\cal{P}$. Then by the argument above we obtain a set $Z_2$ with full $(\mu|Z_{\infty})$-measure such that for every $z\in Z_2$, the conditional measure $\mu_{z,P}$ is absolutely continuous with respect to Lebesgue measure for $\widehat{\mu}_z$-almost every $P\in \cal{P}$. So any point $z\in Z_1\cap Z_2$ satisfies the requirements of this lemma. \end{proof} \begin{proof}[Proof of Theorem A] Take $z\in Z_{\infty}$ as in Lemma \ref{lem:ergodicmeasure}; then $\mu_z(L_{\infty})>0$. For every regular point $y$ we have $\lim_{n\rightarrow \infty}\frac{1}{n}\log \|Df^{-n}/F(y)\|\leq \frac{1}{2}\log\lambda_2$, which follows from Lemma~\ref{lem:unstable}. That is to say, there exists a set with positive $\mu_z$-measure on which every point has $\dim F$ Lyapunov exponents larger than $-\frac{1}{2}\log \lambda_2$, so by ergodicity $\mu_z$ has $\dim F$ positive Lyapunov exponents. By assumption, all the Lyapunov exponents along the $E$-direction are nonpositive for $\mu_z$-almost every point. So, by Pesin theory (see for instance \cite{bp02}) we obtain that $\mu_z$-almost every point $x$ has a local unstable manifold. Furthermore, the disks $\gamma \in \mathcal{L}_{\infty}$ are contained in the local unstable manifolds. Using the ergodicity and the absolute continuity property proved in Lemma \ref{lem:ergodicmeasure}, we conclude that the ergodic measure $\mu_z$ has absolutely continuous conditional measures on unstable manifolds. This ends the proof of Theorem~\ref{Theo-attractor}. \end{proof} \begin{proof}[Proof of Corollary 1] The condition $$ \liminf_{n\rightarrow \infty}\frac{1}{n}\log\|Df^n/E(x)\|<0 $$ on a total probability set implies that $E$ is uniformly contracted, by the work of Cao \cite{cao03}. Since we have found an ergodic SRB measure $\mu$, it follows that $\mu$ is a physical measure, using the absolute continuity of the stable foliation. One can see \cite{y02} for more details. \end{proof}
\section{\uppercase{Introduction}} \vspace{-0.3cm} Low-Power Wide-Area Networks (LP-WANs) are gaining increasing attention in both industry and academia because of their long-range coverage, low energy consumption, and low deployment cost. Among LP-WAN technologies, LoRa is one of the leading emergent technologies in the unlicensed sub-GHz bands. It covers an area of several square kilometers around the base station and supports millions of end devices in the field \cite{ref7}. However, with the wide application of different wireless technologies in daily life and industry \cite{ref7,ref8,ref9,ref10,ref11,ref12}, multiple wireless protocols might be densely deployed in the same area, such as LoRa, Sigfox \cite{ref28}, and 802.11ah \cite{ref8}. As a result, those wireless networks inevitably overlap, leading to both homogeneous and heterogeneous interference. Most conventional approaches mitigate LoRa interference by re-designing the physical and MAC layers \cite{ref3,ref4,ref5,ref6}. Transparent solutions are proposed in \cite{ref5,ref6} to re-design and synchronize the LoRa sender, while \cite{ref1,ref6} make efforts to avoid corruptions at the base station side. Those efforts introduce extra hardware cost or deployment complexity, or are not compatible with deployed LoRa devices. Some recent efforts make use of cloud resources to mitigate LoRa interference without any extra hardware. OPR \cite{ref2} restores corrupted packets by transmitting those packets and RSSI samples to the cloud, and cycles through alternative fragments matched against the error-detection fields. However, such cloud-based methods incur excessive RSSI transmission and computation overhead, which greatly limits their feasibility in practice. Inspired by the cloud-based methods, we ask a natural question: can we further reduce the transmission and computation overhead? In this paper, we propose a novel method for LoRa interference mitigation in an edge-cloud collaborative manner, named Edge-Cloud Collaborative Recovery (ECCR). Instead of directly transmitting the RSSI samples, we identify corruptions by adding error checking codes at the base station side. Besides, we find that the bit errors of the packets received by different base stations are disjoint from one another. In a nutshell, corruptions are detected by the error checking code before decoding, and the packets from multiple receivers are then utilized to restore the packet at the cloud side. Since the errors in packets are located at the base station, there is no need to transfer RSSI samples to the cloud any more. Benefiting from this, ECCR greatly reduces both the transmission and computation overhead of the conventional cloud-based approach. To support edge-cloud collaboration, the challenge for ECCR at a LoRa base station is to detect packet corruptions quickly enough to ensure packet recovery in real-time communication. We design an error checking code applied after the encoding of the LoRa physical payload, through which the base station detects corruptions before decoding the packet. With such an error checking code, ECCR successfully detects and reports corruptions, using cloud resources to restore packets. This paper presents the first edge-cloud collaborative design for LoRa interference mitigation. The features we provide and the challenges we address in this emulation-based design are generic and applicable to a whole set of future interference-mitigation designs.
Specifically, the major contributions of ECCR are as follows: \vspace{-0.3cm} \begin{itemize} \item[$\bullet$] We propose ECCR, a novel interference mitigation approach for LoRa. To the best of our knowledge, it is the first edge-cloud collaborative method for interference mitigation. It takes advantage of both the cloud's global management ability and the edge's signal-perception capability. Without modifying any hardware, ECCR is a transparent design and hence is easily deployed in existing LoRa infrastructure. \item[$\bullet$] To mitigate interference in real time, we address a few challenges, including (i) detecting corruption before decoding the packets, and (ii) collaborating multiple base stations for packet recovery. These techniques provide guidance for the range extension of edge-cloud collaborative interference mitigation. \item[$\bullet$] We conduct extensive experiments on commercial off-the-shelf devices (the sx1280 LoRa chip) and the USRP-N210 platform as the base station to evaluate the correctness and performance of ECCR. Experimental results show that ECCR is able to accurately decode packets even when the corruption rate reaches $51.76\%$. \end{itemize} \vspace{-0.6cm} \section{\uppercase{Background and motivation}} \vspace{-0.3cm} To explain ECCR, it is necessary to first introduce how LoRa and LoRa's Wide-Area Networking architecture (LoRaWAN) work. In this section, we concisely introduce the architectures of LoRa and LoRaWAN, then explain the principles of ECCR, and finally conduct experiments to motivate our work. \vspace{-0.5cm} \subsection{How LoRa Works} \vspace{-0.3cm} LoRa is a spread spectrum communication protocol released by Semtech in August 2013. It works in the unlicensed sub-GHz frequencies. As shown in Fig. 1, LoRa employs the technology of Chirp Spread Spectrum. Owing to its long range and high robustness, this technology has been utilized for decades in military communications. Recently, LoRa has become the de-facto mainstream technology for the Internet of Things (IoT) in both industrial and civilian networks worldwide. \vspace{-0.8cm} \begin{figure}[H] \centering \includegraphics[width=4.5cm]{Fig01.pdf} \caption{\textbf{Spread Spectrum of LoRa \cite{ref29}}} \label{Fig01} \vspace{-0.3cm} \end{figure} \vspace{-1.0cm} \subsection{How LoRaWAN Works} \vspace{-0.2cm} The architecture of LoRaWAN is shown in Fig. 2. It contains several functional modules: receiver, gateway, network service, and application. Generally, a LoRa end node transmits data to base stations over the sub-GHz band; after receiving the mixed signals, the gateway demodulates them into packets. These packets are finally transmitted to the cloud for application usage.
\vspace{-0.8cm} \begin{figure}[H] \centering \includegraphics[width=9cm]{Fig02.pdf} \caption{\textbf{LoRaWAN Architecture}} \label{Fig02} \end{figure} \vspace{-0.8cm} \vspace{-0.9cm} \begin{table}[H] \centering \begin{tabular}{| >{\centering} p{95 pt} | >{\centering} p{90 pt} | >{\centering} p{80 pt} | >{\centering} p{70 pt} |} \hline & \textbf{Data Transmission amount} & \textbf{Error correction capability} & \textbf{Computational complexity}\cr \hline \textbf{Standard} \cite{ref16} & Low & Low & Low \cr \hline \textbf{Cloud-based} \cite{ref2} & High & High & High \cr \hline \textbf{Edge-Cloud: ECCR} & \textbf{Low} & \textbf{High} & \textbf{Low} \cr \hline \end{tabular} \caption{\textbf{Motivation of the ECCR}} \label{Table01} \end{table} \vspace{-0.9cm} \vspace{-1.0cm} \subsection{Motivations} \vspace{-0.3cm} Prior works have shown massive improvements in performance with modified hardware that coherently combines signals at the physical layer. However, the extra hardware cost limits the feasibility in real systems. Balanuta et al. propose a cloud-based approach in \cite{ref2} that leverages the most likely corrupt bits across a set of packets that suffered failed CRCs at multiple LoRa LP-WAN base stations. After offloading the corrupted packets and RSSI samples, the failed packets can be recovered with a probability of 72\%. However, the offloading phase incurs excessive communication overhead and limits large-scale application. We summarize the major performance characteristics of standard LoRa \cite{ref16}, the cloud-based approach \cite{ref2}, and ECCR in Table 1. In this paper, we ask a natural question: can we design an approach that achieves all the ideal properties at the same time, i.e., low data transmission amount, low computational complexity, high error correction capability, and no extra hardware demand? To achieve this, we design an interference mitigation approach in an edge-cloud collaborative manner. Corruption is detected at the base station side while the packets are restored at the cloud side. Such a design greatly reduces the data transmission amount. Besides, ECCR utilizes packets from multiple base stations to achieve high error correction capability. With a carefully designed packet recovery algorithm, ECCR is able to restore packets with latency approaching that of standard LoRa. \vspace{-0.9cm} \begin{figure}[H] \centering \includegraphics[width=10cm]{Fig03.pdf} \caption{\textbf{In-Phase/Quadrature (I/Q) samples of a LoRa packet with collision, captured with a software-defined radio. (From top to bottom: Lab, Hallway, Library)}} \label{Fig03} \end{figure} \vspace{-0.8cm} \vspace{-0.8cm} \begin{table}[H] \centering \begin{tabular}{|p{50 pt}|p{250pt}|} \hline & \textbf{\centerline{Payload Received}} \cr \hline \textbf{Lab} & \textbf{\color{red}{74 86 111}} … 108 111 32 87\quad… 114 108 100 33 … \cr \hline \textbf{Hallway} & 72 101 108 … 108 111 32 87\quad… \textbf{\color{red}{98 108 117 49}} … \cr \hline \textbf{Library} & 72 101 108 … \textbf{\color{red}{105 119 32 78}}… 114 108 100 33… \cr \hline \end{tabular} \caption{\textbf{Payload corruption at different receivers. (Bold parts represent corruptions)}.} \label{Table02} \end{table} \vspace{-0.8cm} \vspace{-0.8cm} \subsubsection{Disjoint Interference.} Our design is based on an interesting finding. Different wireless devices differ considerably in coverage owing to differences in their protocols. We find that the interference is disjoint among different LoRa base stations.
To support this finding, we conduct a real-world experiment. Our first micro-benchmark shows the difference of interference at different receivers. Fig. 3 shows the corruptions at different receivers, which are disjoint. Table 2 also shows that the received payloads of LoRa are corrupted at different locations when facing interference. We utilized a real-world, 10 sq. km test-bed at Southeast University to collect LoRa packets with interference. We examined the interference of LoRa at different sites: (i) a laboratory room, (ii) a hallway, and (iii) a library. We set the transmission power to 10 dBm and put the sender in an outdoor environment. \vspace{-0.7cm} \subsubsection{Benefit of Low Data Transmission Requirement.} Received Signal Strength Indication (RSSI) is an indication of the strength of the received signal, and its measurement is carried out after the baseband receiving filter of the reverse channel. Traditionally, RSSI is utilized to determine the link quality and whether to increase the signal sending power. OPR \cite{ref2} proposes a cloud-based error detecting method. It requires the base station to send RSSI samples as an index for detecting interference. Since LoRa is able to work below the noise floor, burst interference raises the RSSI level, and corruptions can then be identified at the cloud side. However, in LoRa protocols, the RSSI samples are eight times the length of the payload (e.g., 200 bytes for a 25-byte payload packet \cite{ref2}). Compared to the payload, the RSSI samples are still very long, even after compression. Clearly, transmitting the RSSI samples to the cloud substantially inflates the network traffic of base stations. \vspace{-0.6cm} \section{\uppercase{Main design}} \vspace{-0.3cm} ECCR takes advantage of both the global management ability of the cloud and the signal awareness of each base station. In this section, we first describe the overview of ECCR, and then step into its key components, i.e., error detection and error recovery, respectively. \vspace{-0.8cm} \begin{figure}[H] \centering \includegraphics[width=7cm]{Fig04.pdf} \caption{\textbf{The Architecture of the ECCR}} \label{Fig04} \end{figure} \vspace{-1.2cm} \subsection{Overview of the ECCR} \vspace{-0.3cm} Fig. 4 shows the architecture of ECCR. The LoRa senders send packets to the base stations in the field. ECCR adds an error checking code after the encoding of the LoRa physical payload, so that each base station detects corruption of the received packets before decoding and reports it to the cloud. Since the corrupted parts of those packets are disjoint, when multiple packets from different base stations are available, the cloud restores the corrupted packets with a weight voting algorithm. Generally, the error correction capability of ECCR grows with the number of base stations, because an increasing number of useful packets are collected. \vspace{-0.4cm} \subsection{Error Detection} \vspace{-0.3cm} Error detection is the most important part of the ECCR design. ECCR adds error checking codes to the LoRa physical payloads so that the base station can identify whether a received packet is corrupted by interference before decoding. \vspace{-0.8cm} \begin{figure}[H] \centering \includegraphics[width=9cm]{Fig05.pdf} \caption{\textbf{How ECCR Works in LoRa Senders}} \label{Fig05} \end{figure} \vspace{-0.8cm} Fig. 5 shows the ECCR checking codes added after encoding a LoRa physical payload.
We take the idea of Hamming codes for error checking, and add checking bits into the payload. The number of checking bits in a packet is determined by the inequality $2^r \geq m + r + 1$, where $r$ is the number of checking bits and $m$ is the number of data bits, as shown in Fig. 5. For example, when there are 7 data bits, $r$ equals 4, since $2^4 \geq 7 + 4 + 1$. The checking code thus only adds 4 more bits to a 7-bit payload. Specifically, the checking bits are located at the $2^{k-1}$-th bits of the new payload, where $k=1,2,\ldots,r$; in the above example, they are bits 1, 2, 4, and 8, respectively. Each checking bit corresponds to an interleaving group (e.g., $G_1 - G_4$). Group $G_k$ indexes the bits whose locations have a 1 in the $k$-th bit of their binary representation (e.g., $G_1$ indexes bits 11, 9, 7, 5, 3, 1). Although the packet length slightly increases (4 bits for a 7-bit payload), our approach avoids the transmission of RSSI samples (which add 200 bytes to a 25-byte payload packet \cite{ref2}). Generally, ECCR reduces the overhead of error detection. \vspace{-1.0cm} \begin{figure}[H] \centering \includegraphics[width=12cm]{Fig06.pdf} \caption{\textbf{How ECCR Carries on Error Detection in LoRa Receivers}} \label{Fig06} \end{figure} \vspace{-0.8cm} Fig. 6 shows how error detection works in the base stations. The ECCR checking codes are used for detecting corruptions before decoding packets. After demodulating the signal, the base station utilizes ECCR checking to detect corruption before decoding. If the bit at location $k$ is wrong, the interleaving groups that index it will fail the error checking. In our design, ``1'' is used to represent error and ``0'' correct, and the sequence of pass/fail results of the interleaving groups equals the error location when read as a binary number and converted to decimal. ECCR takes advantage of this to detect error locations. The four interleaving groups are computed as in equation (1): \vspace{-0.3cm} \begin{equation} \left\{ \begin{array}{l} Gb_1 = D_1 \oplus D_3 \oplus D_5 \oplus D_7 \oplus D_9 \oplus D_{11} \\ Gb_2 = D_2 \oplus D_3 \oplus D_6 \oplus D_7 \oplus D_{10} \oplus D_{11} \\ Gb_3 = D_4 \oplus D_5 \oplus D_6 \oplus D_7 \\ Gb_4 = D_8 \oplus D_9 \oplus D_{10} \oplus D_{11} \end{array} \right. \label{eq02} \end{equation} \vspace{-0.3cm} Here $D_k$ represents the correct/wrong condition of the $k$-th bit, and $Gb_1$--$Gb_4$ represent the Boolean values of the interleaving groups. For example, if the $5$-th bit is corrupted, groups $Gb_1$ and $Gb_3$ fail, and the syndrome $(Gb_4 Gb_3 Gb_2 Gb_1)_2 = 0101_2 = 5$ locates the corrupted bit. Once a corrupted packet is detected, the base station reports it to the cloud. Then the cloud is able to collaborate packets from multiple base stations and restore packets through voting. Although the ECCR checking code is able to correct some error bits, it has restrictions when corruptions increase (e.g., when bits 7 and 11 are both corrupted, the interleaving groups can no longer locate the errors, and the corruption cannot be recovered by the ECCR checking code alone), so ECCR further utilizes the cloud to recover packets.
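To make the detection procedure concrete, the following minimal sketch (Python) reproduces the Hamming-style scheme described above: 7 data bits are placed around checking bits at positions 1, 2, 4, and 8, and the pass/fail pattern of the four interleaving groups, read as a binary number, yields the location of a single corrupted bit. This is an illustrative sketch only, not the gateway implementation; all names are ours.
\begin{verbatim}
N_BITS = 11                  # 7 data bits + 4 checking bits (2^4 >= 7+4+1)

def encode(data_bits):
    """Place 7 data bits into an 11-bit codeword with even-parity
    checking bits at positions 1, 2, 4, 8 (1-indexed)."""
    code = [0] * (N_BITS + 1)                   # code[0] is unused
    data_pos = [p for p in range(1, N_BITS + 1) if p & (p - 1)]
    for p, b in zip(data_pos, data_bits):       # data at 3,5,6,7,9,10,11
        code[p] = b
    for k in range(4):                          # checking bit at 2^k covers
        pk = 1 << k                             # positions with that digit set
        for p in range(1, N_BITS + 1):
            if p & pk and p != pk:
                code[pk] ^= code[p]
    return code[1:]

def syndrome(word):
    """0 for a clean word; otherwise the 1-based location of a single
    corrupted bit (failed groups read off as a binary number)."""
    s = 0
    for k in range(4):
        g = 0
        for p in range(1, N_BITS + 1):
            if p & (1 << k):
                g ^= word[p - 1]
        if g:                                   # "1" represents error
            s += 1 << k
    return s

word = encode([1, 0, 1, 1, 0, 0, 1])
word[4] ^= 1                                    # corrupt the 5th bit
print(syndrome(word))                           # -> 5
\end{verbatim}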
\vspace{-0.3cm} \vspace{-0.3cm} \subsection{Error Recovery} \vspace{-0.3cm} We have demonstrated in Section 2.3 that the received payloads of LoRa are corrupted at different locations when facing disjoint interference at multiple base stations. ECCR further utilizes the error checking code to detect corruptions in packets; the errors are then reported to the cloud through reliable Ethernet connections, during which LoRaWAN utilizes 128-bit AES for integrity protection and data encryption. The cloud collaborates packets from multiple base stations, assigns weights to them according to the proportion of corruption, and utilizes a weight voting algorithm to restore the correct packet. Specifically, the weight of each symbol is assigned according to equation (2): \vspace{-0.3cm} \begin{equation} \left\{ \begin{array}{l} W_k = \dfrac{\sum_{i=1}^{L_k}k(2)[i] - \sum_{i=1}^{L_k}\left(k(2)[i] = 1 \land G_i\right)}{\sum_{i=1}^{L_k}k(2)[i]} \\ L_k = p~~\text{such that}~~\exists\, k(2)[p] \land \nexists\, k(2)[p+1] \\ G_i = \bigoplus_{D_n \in G_i} D_n \end{array} \right. \label{eq03} \end{equation} \vspace{-0.3cm} where $W_k$ denotes the weight of the $k$-th symbol, $k(2)$ the binary representation of $k$, $L_k$ the length of $k(2)$, $G_i$ the ECCR checking result for the $i$-th interleaving group, and $D_n$ the correct/wrong condition of the $n$-th bit; the membership $D_n \in G_i$ is as given in equation (1). \vspace{-0.8cm} \begin{table}[H] \scriptsize \centering \begin{tabular}{| p{35 pt} | p{170pt} | p{130pt}| } \hline & \textbf{\centerline{Payload Received}} & \textbf{\centerline{Symbol Weights}} \cr \hline \textbf{Lab} & \textbf{74 86 111} … 108 111 32 87\quad… 114 108 100 33& \textbf{\color{red}{0 0 0}} … 100 0 50 33 … 100 50 50 33 \cr \hline \textbf{Hallway} & 72 101 108 … 108 111 32 87\quad… \textbf{98 108 117 49}& 0 0 0 … 100 0 50 33 … \textbf{\color{red}{ 0 0 0 0 }} \cr \hline \textbf{Library} & 72 101 108 … \textbf{105 119 32 78} … 114 108 100 33& 0 0 0 … \textbf{\color{red}{0 0 0 0}} … 100 50 50 33\cr \hline & \textbf{\centerline{Voting Result}}& \cr \hline \textbf{Voting} & 72 101 108 … 108 111 32 87\quad… 114 108 100 33& \cr \hline \end{tabular} \caption{\textbf{Voting Result (Bold parts represent corruptions).}} \label{Table03} \end{table} \vspace{-1.2cm} ECCR utilizes the weight equation (2) to assign weights to each symbol. The weights are also utilized to measure the reliability of packets during the weight voting process: a higher weight means the packet is closer to the correct one. In that way, the correct information from multiple packets is combined to restore the true packet (an example of the weight voting process is shown in Table 3). Note that if all the weights of a symbol in those packets are 0, in other words, if the interleaving groups that index the symbol location all report errors, ECCR treats every packet equally at that symbol location (e.g., the 1st, 2nd, and 3rd symbols in Table 3).
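As an illustration of the recovery step, the sketch below (Python; a simplified stand-in for the cloud-side algorithm, with hypothetical input values modeled on Table 3) performs the symbol-wise weighted vote, falling back to an equal-weight vote at symbol positions where every station reports weight 0.
\begin{verbatim}
from collections import defaultdict

def weighted_vote(copies, weights):
    """Symbol-wise weighted vote: copies[i][t] is the t-th symbol of the
    copy received at station i, weights[i][t] the weight from Eq. (2)."""
    recovered = []
    for t in range(len(copies[0])):
        tally = defaultdict(float)
        all_flagged = all(w[t] == 0 for w in weights)
        for copy, w in zip(copies, weights):
            # if every station flags this symbol, vote with equal weight
            tally[copy[t]] += 1.0 if all_flagged else w[t]
        recovered.append(max(tally, key=tally.get))
    return recovered

# hypothetical 4-symbol copies from three stations (cf. Table 3)
copies  = [[74, 111, 32, 87], [72, 111, 32, 87], [72, 105, 32, 78]]
weights = [[ 0, 100, 50, 33], [ 0, 100, 50, 33], [ 0,   0,  0,  0]]
print(weighted_vote(copies, weights))           # -> [72, 111, 32, 87]
\end{verbatim}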
\vspace{-1.0cm} \section{\uppercase{Performance evaluation}} \vspace{-0.8cm} To evaluate the correctness and performance of the proposed ECCR approach, we conduct extensive experiments with emulation. Our evaluation covers different situations of Wi-Fi interference. To stress-test the performance of ECCR, we emulate the In-Phase/Quadrature (I/Q) samples of LoRa packets using LoRaMatlab \cite{ref13} and utilize the WLAN Waveform Generator \cite{ref14} to generate Wi-Fi packets as interference. We control the degree of interference by extending the duration of the Wi-Fi packets. Our test covers multiple scenarios: (i) standard LoRa, (ii) LoRa with the ECCR checking code, and (iii) ECCR. \vspace{-0.8cm} \begin{figure}[H] \begin{minipage}[t]{0.2\linewidth} \centering \includegraphics[width=2.3cm]{Fig07.pdf} \caption{\textbf{Lab}} \label{fig:side:a} \end{minipage}% \begin{minipage}[t]{0.2\linewidth} \centering \includegraphics[width=2.3cm]{Fig08.pdf} \caption{\textbf{Hallway}} \label{fig:side:b} \end{minipage}% \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=3.5cm]{Fig09.pdf} \caption{\textbf{Library}} \label{fig:side:c} \end{minipage}% \begin{minipage}[t]{0.4\linewidth} \centering \includegraphics[width=3.7cm]{Fig10.pdf} \caption{\textbf{Outdoor Site}} \label{fig:side:d} \end{minipage} \end{figure} \vspace{-0.8cm} \vspace{-0.6cm} \begin{figure}[H] \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=6.1cm]{Fig11.pdf} \caption{\textbf{Packets Decoding Rate}} \label{fig:side:a} \end{minipage}% \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=6.1cm]{Fig12.pdf} \caption{\textbf{Symbol Error Rate (SER)}} \label{fig:side:b} \end{minipage} \end{figure} \vspace{-0.4cm} \vspace{-1.2cm} \subsubsection{Performance of Packet Correct-Decoding Rate.} Fig. 11 shows the packet correct-decoding rates of the three scenarios as the interference duration increases. The correct-decoding rates of (i) and (ii) both decline as the interference extends. ECCR maintains a $100\%$ correct-decoding rate until there are more than 30 error bits in a packet. Notice that because both (i) and (ii) rely on forward error correction alone for interference mitigation, their decoding rates decline steadily with increasing interference duration. ECCR takes advantage of the correct information from multiple receivers, so its correct-decoding rate drops only when the duration of interference exceeds a boundary (30 error bits in our experiments). \vspace{-0.6cm} \subsubsection{Performance of Symbol Error Rate.} Fig. 12 compares the Symbol Error Rate (SER) in the three scenarios. Notice that adding the ECCR checking codes reduces SER, and ECCR achieves the lowest SER. Specifically, the error correction capability of LoRa is increased by adding the ECCR checking codes (see Section 3.2), and the weight voting algorithm further reduces SER. Figs. 11 and 12 show that ECCR still decodes packets accurately when $51.76\%$ of a packet is corrupted (under original LoRa), and it also reduces SER by $51.76\%$. \vspace{-0.8cm} \begin{figure}[H] \centering \includegraphics[width=11cm]{Fig13.pdf} \caption{\textbf{Average Computation Time}} \label{Fig13} \end{figure} \vspace{-0.6cm} \vspace{-0.8cm} \subsubsection{Average Computation Time.} Fig. 13 shows the computation time of the three scenarios: adding the ECCR checking code contributes $17\%$ of the computation time, and the weight voting algorithm contributes $10\%$. ECCR, whose overhead is $27\%$ of the average computation time ($27$ milliseconds in the strongest interference situation), still accurately decodes packets with nearly $51.76\%$ corruption. \vspace{-0.4cm} \section{\uppercase{Related works}} \vspace{-0.4cm} This section summarizes the works most related to this paper. Most efforts on LP-WAN interference mitigation fall into the following two categories. The first category is protocol-based, mitigating interference by re-designing LoRa base stations and end devices \cite{ref3,ref4,ref5,ref6}.
The second category is cloud-based: recently, using cloud computing resources to recover corrupted packets has emerged as a mechanism of information collaboration for interference mitigation, and these methods show great compatibility with deployed LP-WAN systems. \vspace{-0.3cm} \subsubsection{Protocol-based Approaches.} \vspace{-0.3cm} Early efforts on interference mitigation in LP-WANs have focused on solutions at the physical and MAC layers, including SCLoRa \cite{ref17}, Choir \cite{ref23}, FTrack \cite{ref24}, mLoRa \cite{ref25}, etc. at the physical layer, and S-MAC \cite{ref26}, LMAC \cite{ref27}, etc. at the MAC layer. These protocol-based solutions require re-designing LP-WAN senders and/or base stations. The requirement of dedicated devices greatly limits the large-scale application of those approaches. \vspace{-0.3cm} \subsubsection{Cloud-based Approaches.} \vspace{-0.3cm} Benefiting from the architecture of LP-WAN systems, it is feasible to utilize cloud resources for interference mitigation. For example, OPR \cite{ref2} offloads the corrupted packets together with their RSSI samples to the cloud, and utilizes the cloud to compute and restore packets. Taking advantage of the ample computational resources and the global management ability of the cloud, cloud-based approaches have achieved great progress in recent research. Besides, those approaches show great compatibility with deployed LP-WAN systems, since they do not require any hardware modification. However, offloading all the RSSI samples to the cloud incurs excessive communication overhead on the uplink bandwidth. The ECCR approach proposed in this paper is the first work to realize error detection at the base station side without transmitting RSSI samples to the cloud, which greatly reduces the data transmission amount. Also, ECCR utilizes a weight voting algorithm to combine correct information from multiple base stations, so that it has the capability to recover packets with low computational complexity. \section{\uppercase{conclusion}} \vspace{-0.4cm} This work presents ECCR, an edge-cloud collaborative recovery design for interference mitigation. Taking advantage of both the global management ability of the cloud and the signal perception of each base station, ECCR is, to the best of our knowledge, the first work to implement interference mitigation based on edge-cloud collaborative recovery; our experiments show that it accurately decodes packets when packets have nearly 50\% corruption and reduces SER by 50\%. In the future, we will further focus on the case of insufficient receivers (e.g., two base stations). ECCR explores a new methodology to achieve interference mitigation with edge-cloud collaboration, and achieves a nice balance among reliability, flexibility, deployability, and complexity. \section{\uppercase{acknowledgement}} \vspace{-0.4cm} This work was supported in part by National Natural Science Foundation of China under Grant No. 61902066, Natural Science Foundation of Jiangsu Province under Grant No. BK20190336, China National Key R\&D Program 2018YFB2100302, and Fundamental Research Funds for the Central Universities under Grant No. 2242021R41068.
\section{Introduction}\label{sec:int} The expansion of the solar corona into interplanetary space was predicted in 1958 by Parker's classic model \citep{parker1958apj}. Soon after, in-situ spacecraft measurements \citep{neugebauer1966jgr} confirmed that the interplanetary region is pervaded by solar plasma flowing at supersonic speed.\footnote{For a recent historical review of the discovery of the solar wind, see \cite{obridko2017SoSyR}.} Research efforts in the following decades have established that the solar wind is a complex and dynamic system that enters centrally into much of space research and is of relevance to studies of solar, geophysical, and astronomical phenomena. The \textit{Parker Solar Probe} (\textit{PSP}) mission \citep{fox2016SSR} was launched on August 12, 2018, with the goal of exploring, for the first time, regions of solar wind that are of crucial importance in establishing the heliosphere. While approaching the Sun closer than any prior spacecraft, \textit{PSP} will provide unprecedented high-resolution measurements of the solar corona and the young solar wind, with the main objectives being discovery of the structure and dynamics of the coronal magnetic field, and the processes that heat and accelerate the wind and accelerate and transport energetic particles. As \textit{PSP} makes its high-resolution in-situ measurements, a knowledge of the large-scale environment within which these observations exist is of vital importance. This global context may be provided by remote sensing \citep{bird1990POIH,vourlidas2016ssr} and global simulation. The present work is the first of a series of papers focused on contextual predictions for \textit{PSP} using global simulations of the solar wind. The transition of the solar corona into the solar wind is accomplished by several dynamical changes in the nature of the flow, regionally organized by magnetic topology and associated factors such as open vs.\ closed connectivity and composition. Regions of fast wind, slow wind, and mixed wind apparently trace to different magnetic connectivities and different altitudes \citep[e.g.,][]{mccomas2003grl,cranmer2007ApJS}. In the simplest picture the inner-coronal plasma is magnetically structured, subsonic, and sub-Alfv\'enic, but as it flows out from the corona into the young solar wind it evolves into a supersonic and super-Alfv\'enic flow that is dominated by hydrodynamics. Recent work indicates that this transition may coincide with the onset of large-scale turbulence \citep{deforest2016ApJ828,chhiber2018apjl} and mark the outer boundary of a zone of preferential ion heating \citep{kasper2017apj}. Useful markers that characterize this transition are the sonic critical surface, the Alfv\'en critical surface, and the first $\beta = 1$ surface (the plasma-$\beta$ is the ratio of gas to magnetic pressure).
In particular, when the flow speed $u$ exceeds the Alfv\'en speed $V_A$, the magnetic field rigidity can no longer enforce plasma co-rotation \citep{weber1967ApJ148}, or overcome the differential ram-pressure due to shearing interactions between neighbouring wind streams. And when the plasma-$\beta$ increases above unity, gradients in the plasma (thermal) pressure may displace the magnetic field and more isotropic motions are possible \citep{chhiber2018apjl}. The broad region in which these two crucial conditions ($u > V_A$ and $\beta \sim 1$) are attained becomes, in effect, the region where the corona gradually gives up control of the solar plasma, and the kinetic-energy dominated solar wind emerges as an independent entity. Beyond these regions the solar wind no longer communicates through magnetohydrodynamic (MHD) interactions with the magnetically dominated regions of its origin. In this work we employ well-tested global MHD simulations of the solar wind \citep{usmanov2014three,usmanov2018}, that are self-consistently coupled with a turbulence transport model, to study and characterize this region of transitions and to make contextual predictions for the \textit{PSP} mission.\footnote{Our use of ``transition'' here should not be confused with the well-known transition region that lies just above the chromosphere \citep[e.g.,][]{cranmer2007ApJS}.} We incorporate the effects of long-term solar variability \citep[e.g.,][]{owens2013lrsp} by varying magnetic source dipole tilts and employing magnetogram-based boundary conditions. The simulation results are compared with a variety of remote sensing observations, demonstrating how the two approaches may be combined to gain insights regarding large-scale heliospheric conditions in this region. Global simulation and remote sensing thus generate mutual support, and in turn, provide valuable context for the finer details that emerge from in-situ measurements. Subsequent papers in this series on contextual predictions for \textit{PSP} will focus on turbulence properties along the spacecraft's trajectory, on modifications of Taylor's hypothesis for \textit{PSP} \citep{matthaeus1997AIPCtrajectory,klein2015ApJtaylor}, and on solar wind azimuthal flow. The paper is organized as follows -- in Section \ref{sec:back} we provide background on critical surfaces and physically distinct regions of the inner wind, discussing recent work that motivates the present study. An overview of the \textit{PSP} trajectory is provided in Section \ref{sec:sampl}, and our solar wind model is briefly described in Section \ref{sec:model}. Results are presented in Section \ref{sec:results}, including comparisons of model output with remote sensing observations and contextual predictions along the \textit{PSP} trajectory. We conclude with discussion in Section \ref{sec:disc}.
\section{Theoretical and Observational Background}\label{sec:back} Two critical points\footnote{A mathematical discussion of a critical (or equilibrium) point of a system of ordinary differential equations may be found in standard texts \citep[e.g.,][]{boyce1969elementary}.} are frequently discussed within the context of the solar wind -- the sonic and the Alfv\'enic critical points, where the flow speed equals the sound speed and the Alfv\'en speed, respectively. One encounters the notion in even the simplest, spherically symmetric, stationary and isothermal model of the solar wind \citep[e.g.,][]{hundhausen1972coronal}. We briefly review the standard presentation below. The relevant equations may be derived by assuming an equal number density $n$ of protons and electrons, and an equation of state $P = 2nkT$, where $T=\frac{1}{2}(T_e+T_p)$ is the average of electron and proton temperatures. Mass conservation (\(4 \pi n u r^2 = \text{constant}\)), combined with the inviscid momentum conservation equation in a gravitational potential \begin{equation} nmu \frac{du}{dr} = - 2kT \frac{dn}{dr} - nm \frac{GM_\odot}{r^2}, \label{eq:momcon} \end{equation} yields \begin{equation} \frac{1}{u} \frac{du}{dr} \left( u^2 - \frac{2kT}{m} \right) = \frac{4kT}{mr} - \frac{GM_\odot}{r^2}. \label{eq:momcon_crit} \end{equation} Here $u$ is the speed of radial expansion, $m$ is the sum of proton and electron masses, $k$ is the Boltzmann constant, $G$ is the gravitational constant, and $M_\odot$ is the solar mass. The right-hand side of Equation \eqref{eq:momcon_crit} vanishes at the \textit{critical radius} $r_c = GM_\odot m/4kT$. The left-hand side must also vanish here, for which we must have either a vanishing velocity derivative, or $u^2(r_c) \equiv u^2_c = 2kT/m$. The solutions of Equation \eqref{eq:momcon_crit} have the well-known `X', or \textit{saddle} type topology \citep[see e.g.,][]{hundhausen1972coronal}; the solution of physical interest is transonic, with a monotonically increasing velocity which is equal to the sound speed at the critical radius, i.e., at the sonic point. As additional physical effects are added to a solar wind model, the mathematical structure of the equations changes, and with it the nature of the critical point \citep[e.g.,][]{lamers1999book}. For instance, including electrons in a two-fluid model would introduce two sound speeds and two possible critical points. As we will see in Section \ref{sec:results}, inclusion of the electron pressure in a two-fluid model shifts the location of the sonic point to a slightly greater heliocentric distance. Therefore, the ``singular'' aspect of a critical point is of limited physical relevance and it is questionable whether spacecraft data may be used to localize a definite critical point. Observational and instrumental issues aside, the sharp transitions between regions of interest that emerge in the simplest models will almost certainly become more gradual transitions, or even ``fuzzy'' or erratic transitions, in the real solar wind that is influenced by three-dimensional (3D) effects, multifluid plasma physics, turbulence, etc. In the following we will refer to these transitions as ``surfaces'' when it causes no confusion, but we remind the reader that in general we intend nonsingular and more gradual transitions \citep[see also][]{deforest2018ApJ}.
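For concreteness, the critical-point formulas above are easily evaluated. The brief script below (Python) computes $u_c$ and $r_c$ for an assumed coronal temperature of $1.5\times 10^6$~K; the temperature is a nominal illustrative value, not a parameter of the simulations described later.
\begin{verbatim}
# Parker critical point for an isothermal corona (SI units).
G, M_sun = 6.674e-11, 1.989e30          # gravitational constant, solar mass
k_B      = 1.381e-23                    # Boltzmann constant
m_p, m_e = 1.673e-27, 9.109e-31         # proton and electron masses
R_sun    = 6.96e8                       # solar radius [m]

T = 1.5e6                               # assumed coronal temperature [K]
m = m_p + m_e                           # sum of proton and electron masses
u_c = (2.0 * k_B * T / m) ** 0.5        # critical speed, sqrt(2kT/m)
r_c = G * M_sun * m / (4.0 * k_B * T)   # critical radius, G M m / 4kT

print(u_c / 1e3, "km/s")                # ~157 km/s
print(r_c / R_sun, "R_sun")             # ~3.9 R_sun
\end{verbatim}
With these values one finds $u_c \approx 157$ km~s$^{-1}$ and $r_c \approx 3.9~R_\odot$, consistent with the coronal-model estimates of $2\text{ -- }5~R_\odot$ quoted in Section \ref{sec:sampl}.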
From a physical perspective, these critical points become critical surfaces in a 3D context and denote transitions between separate regions in the solar wind that are dominated by different physical effects. For instance, counterpropagating Alfv\'enic fluctuations may effectively generate turbulence in the inner corona \citep{matthaeus1999ApJL523}, but above the Alfv\'en critical surface the population of inward propagating modes is diminished \citep{bruno2013LRSP}, and Alfv\'en wave collisions are no longer an efficient mode of turbulence production \citep{verdini2007apj}. The Alfv\'en surface also effects a separation of coronal regions having different angular flow properties; in the simplest picture, below this surface the torque produced by the magnetic field is sufficiently strong to transfer angular momentum and produce a corotation of the coronal wind with the sun, while above the critical surface the azimuthal velocity of the solar wind drops rapidly with distance \citep{weber1967ApJ148}. In addition to the demarcation of different regions by critical surfaces, the general vicinity of the surfaces may be a site of interesting physics, such as enhancement in turbulent fluctuations \citep{lotova1985AA150}. These surfaces also signify the point beyond which MHD wave modes are unable to communicate upstream, because above the sonic (Alfv\'enic) critical surface the speed of propagation of information by sonic (Alfv\'en) modes is smaller than the speed of their advection downstream by the wind. Further, signatures of different coronal and solar phenomena may be evident in the location and morphology of critical surfaces, and may manifest in their temporal and spatial variability \citep{gral1996nature,lotova1997SoPh172}. Recent observations by \cite{deforest2016ApJ828} and subsequent numerical investigations by \cite{chhiber2018apjl} provide additional motivation for the present study. Making use of highly processed \textit{STEREO} images from December 2008, \cite{deforest2016ApJ828} found a textural shift in the solar wind flow between heliocentric distances of 20 -- $80~R_\odot$. The images revealed that radially aligned, ``striated'' patterns gave way to more isotropic structures that DeForest et al. termed ``\textit{flocculae}'', at distances of a few tens of solar radii. \cite{chhiber2018apjl} performed global solar wind MHD simulations, representing nominal large-scale solar wind conditions during December 2008, and superposed plasma-$\beta$ unity surfaces computed from these simulations on the \textit{STEREO} images. They found that the observed textural shift occurred near the first plasma-$\beta=$~1 surface. The emerging interpretation states that as the solar wind passes into the region where $\beta \equiv 8\pi P/B^2 \geq 1$, mechanical pressure may overcome the organizing influence of the magnetic field $B$, thus enabling the observed isotropic motions, which may be triggered by hydrodynamic shearing between wind streams \citep[e.g.,][]{roberts1992jgr}.
A further point of interpretation, consistent with the one above, is that the \textit{flocculae} may be a manifestation of solar wind fluctuations interacting at the largest scales that are causally related through turbulence in the expanding solar wind \citep{chhiber2018apjl}. The existence of such a maximum length scale of interaction is clear based on the finite amount of available propagation time, combined with the assumption that the relevant correlations must be produced by signals propagating at magnetohydrodynamic speeds. The Alfv\'en and $\beta=1$ surfaces may also be of significance to the phenomenon of preferential ion heating in the solar wind \citep[e.g.,][]{marsch2006kinetic}. Recently, \cite{kasper2017apj} found evidence for a zone, extending from just above the transition region ($\sim 0.3~R_\odot$) to a distance of tens of solar radii, where $\alpha$-particles are heated preferentially over protons. The outer boundary of this zone is likely associated with the Alfv\'en and $\beta=1$ surfaces. This point will be discussed further in Section \ref{sec:results}. \section{Sampling of the three dimensional heliosphere by Parker Solar Probe}\label{sec:sampl} The preceding section serves to emphasize the importance and relevance of critical surfaces. Yet, spacecraft missions hitherto have not been able to sample these in-situ (prior to \textit{PSP}, the closest heliocentric distance of approach was that of \textit{Helios} at 0.29 au ($\sim62~R_\odot$)). \textit{PSP} is set to change this by spending ``a total of 937 hours inside $20~R_\odot$, 440 hours inside $15~R_\odot$, and 14 hours inside $10~R_\odot$'' over its 7-year nominal mission \citep{fox2016SSR}. The spacecraft will most likely spend a very substantial amount of time under the first $\beta=1$ surface, which is inferred to lie between 20 and $60~R_\odot$ \citep{deforest2016ApJ828,chhiber2018apjl}.\footnote{The location of the Alfv\'en and first unit beta surfaces may dip below $10~R_\odot$ at the heliospheric current sheet (HCS). It must be noted that global models are likely to overestimate the spatial extent of the HCS due to their coarse resolution. This issue is discussed further in Section \ref{sec:results}.} According to observations and models \citep[e.g.,][]{mullan1990aa,lotova1997SoPh172,suzuki2005ApJ,cranmer2007ApJS, verdini2010ApJ,pinto2011ApJ,oran2013ApJ,deforest2014ApJ787, pinto2017ApJ,chhiber2018apjl,perri2018JPP}, the Alfv\'en surface lies between \(\sim 2\text{ -- } 30~R_\odot\) and \textit{PSP} could spend a substantial time under this surface as well. The sonic surface may lie below \textit{PSP}'s lowest perihelion at $9.86~R_\odot$, since coronal models often predict a location of $2\text{ -- }5~R_\odot$, although these predictions are applicable mainly to coronal hole regions \citep{kopp1976SoPh,mckenzie1995AA,habbal1995grl, giordano2000apj,cranmer2007ApJS,verdini2010ApJ}. At low latitudes the sonic point may lie as far as $20~R_\odot$ \citep{lotova1997SoPh172}. Since the periods in which the spacecraft will probe the regions within these surfaces will be of special significance to the success of the \textit{PSP} mission, it becomes a matter of some importance to estimate when these periods might occur. Figure \ref{fig:overview} shows a 3D perspective of the \textit{PSP} trajectory.
The spacecraft ephemeris was extracted from a \href{https://naif.jpl.nasa.gov/naif/index.html}{NASA SPICE kernel}, and the trajectory is presented here in the Heliocentric Inertial (HCI) coordinate system \citep[e.g.,][]{franz2002pss}. Here the $XY$-plane is defined by the Sun's equator of epoch J2000; the $+Z$-axis is parallel to the Sun's rotation axis of epoch J2000, pointing toward the Sun's north pole; the $+X$-axis is the ascending node of the Solar equatorial plane on the ecliptic plane of J2000; and the origin of the coordinate system is the Sun's center of mass. The \textit{PSP} trajectory in 3D space is shown in red, while the blue curves represent projections of the 3D trajectory onto the $XY, XZ$, and $YZ$ planes. The Earth (at time of launch) and the Sun are represented by the blue dot and the `*', respectively (not to scale). The trajectory shown includes all orbits in the 7-year nominal mission duration. \begin{figure} \gridline{\fig{orbit3}{.5\textwidth}{}} \caption{\textit{PSP} trajectory in HCI coordinates (see text for details). The origin is the Solar center-of-mass and the $XY$-plane is the Solar equatorial plane. The red curves show the trajectory in 3D space and the blue curves are its projections onto the $XY, XZ,$ and $YZ$ planes. The `*' symbol and blue dot represent the positions of the Sun and Earth, respectively.} \label{fig:overview} \end{figure} As \textit{PSP} makes its high-resolution in-situ measurements, a knowledge of the large-scale environment within which these observations exist is of vital importance. In the next section we describe the solar wind model we have used to study the critical surfaces/regions and to make context predictions for the \textit{PSP} trajectory. \section{Solar Wind Model}\label{sec:model} The large-scale features of the solar wind flow are widely regarded as well-represented in a fluid (MHD) description \citep{tu1995SSRv,goldstein1995araa,bruno2013LRSP, matthaeus2015ptrs,makwana2015pop,parashar2015apj}.\footnote{One objection might be that magnetosonic modes may be heavily damped in kinetic theory \citep{barnes1979inbook}; an effect absent in MHD. However, compressive modes may represent a small fraction of the energy in the weakly compressive interplanetary medium, and in any case the dissipation rate due to linear damping may be small compared to the cascade rate that leads to turbulent dissipation \citep{matthaeus2014apj}.} The MHD description is particularly indispensable for global simulation of the solar wind \citep[e.g.,][]{gombosi2018LRSP}, where the largest length scales in the system span at least a few solar radii ($1~R_\odot = 6.9 \times 10^5$~km).
Kinetic effects come into play at the ion-inertial scale, which is roughly 90 km at 1 au \citep[e.g.,][]{schekochihin2009ApJS182} and becomes smaller closer to the sun. Current and foreseeable computational resources do not permit the resolution of this wide range of scales \citep[e.g.,][]{schmidt2015LRCA,miesch2015SSR194}. This makes MHD simulation our tool of choice for the current study that focuses on the global context of \textit{PSP} observations. However, special provisions need to be made to preserve essential physical information contained in the smaller-scale \textit{fluctuations}, which are necessarily unresolved, even if the macroscopic features are well represented. The large scales traversed by \textit{PSP} orbits are illustrated strikingly in Figure \ref{fig:overview}, which serves to reinforce the appropriateness of this approach. Fluid models of the solar wind have adopted various approaches to the problem of incorporating a source of heating and acceleration, including parametric heat deposition \citep[e.g.,][]{habbal1995grl,mckenzie1995AA,riley2015ApJ}, a polytropic equation of state \citep[e.g.,][]{lee2009SoPh,gressl2014SoPh}, WKB waves in a weakly inhomogeneous background \citep[e.g.,][]{jacques1978ApJ,usmanov2000global}, and MHD turbulence driven by Alfv\'en waves interacting with large-scale gradients \citep[e.g.,][]{matthaeus1999ApJL523,dmitruk2002ApJ575,suzuki2005ApJ, verdini2010ApJ,vanderholst2014ApJ,yang2016SoPh}. We use an approach with a fully self-consistent and dynamical coupling of bulk solar wind flow with small-scale MHD turbulence -- bulk flow influences the turbulence, and in turn, turbulence dynamically feeds back into the bulk wind flow. In addition to turbulent heating and acceleration, the model incorporates two-fluid energy equations, heat conduction due to electrons, and proton-electron Coulomb collisions. We briefly describe the model below, and refer the reader to \cite{usmanov2018} for details, including those of the turbulence model and closure approximations. Formally, the model is based on a Reynolds decomposition \citep[e.g.,][]{Monin1971book} applied to MHD. All physical fields, e.g., $\tilde{\mathbf{a}}$, are separated into a mean and a fluctuating component: \(\tilde{\mathbf{a}} = \mathbf{a}+\mathbf{a'}\), making use of an averaging operation where \(\mathbf{a} = \langle \tilde{\mathbf{a}} \rangle\). This ensemble average is associated with the large scales of motion, assumed to be deterministic. The quantity $\mathbf{a'}$ is a fluctuating component, here assumed to be of arbitrary amplitude and random in nature. By construction $\langle \mathbf{a'} \rangle = 0$. The model assumes that the solar wind is a fully ionized proton-electron plasma. The two species are described as fluids with separate energy equations and it is assumed that the bulk velocity is the same for the two species \citep{hartle1968ApJ151,hundhausen1972coronal,isenberg1986JGR, marsch2006kinetic}. The velocity and magnetic fields are Reynolds-decomposed into mean and fluctuating components: $\tilde{\mathbf{v}} = \mathbf{v}+\mathbf{v'}$ and $\tilde{\mathbf{B}} = \mathbf{B}+\mathbf{B'}$, and the decomposed fields are substituted into the momentum and induction equations in the frame of reference corotating with the Sun.
The ensemble averaging operator $\langle \cdot \rangle$ is applied, yielding large-scale, mean flow equations: a continuity equation, a momentum equation, an induction equation, and two pressure equations. The dependent variables are the mean velocity in the corotating frame ${\bf v}$, the mean magnetic field ${\bf B}$, the number density $N_S$ and pressure $P_S$ of solar wind (thermal) protons, and the pressure of electrons $P_E$. Pressures are assumed to be isotropic and we neglect density and pressure fluctuations \citep{usmanov2014three,usmanov2018}. The mass density $\rho = m_p N_S$ is defined in terms of the proton mass \(m_p\). We use the classical Spitzer formula \citep{spitzer1965,hartle1968ApJ151} for the proton-electron Coulomb collision time scale, and the electron heat flux below $5\text{ -- }10~R_\odot$ is approximated by the classical collision dominated model of \cite{spitzer1953PhRv} \citep[see also][]{chhiber2016solar}, while above $5 \text{ -- } 10~R_\odot$ we adopt Hollweg's ``collisionless'' model \citep{hollweg1974JGR79,hollweg1976JGR}. Four turbulence quantities arise in the mean-flow equations: a source term $Q_\text{T}$ of energy deposition/extraction due to turbulent dissipation, the Reynolds stress $\mbox{\boldmath$\cal R$} = \langle \rho \mathbf{v}'\mathbf{v}' - \mathbf{B}'\mathbf{B}'/4\pi \rangle$, the magnetic pressure of the fluctuations \(\langle B'^2\rangle/8\pi\), and the mean turbulent electric field $\mbox{\boldmath$\varepsilon$}_m = \langle \mathbf{v}' \times \mathbf{B}' \rangle (4 \pi \rho)^{-1/2}$. These represent the coupling of the bulk flow to the small-scale fluctuations. Transport equations for the fluctuations are obtained by subtracting the mean-field equations from the full MHD equations. This yields a set of equations that describe the transport of three statistical descriptors for solar wind MHD fluctuations -- the turbulence energy, the correlation length of turbulent fluctuations, and the cross helicity -- which are coupled to the mean-field equations through terms involving $Q_\text{T}, \mbox{\boldmath$\cal R$},$ and $\mbox{\boldmath$\varepsilon$}_m$. To close the full set of equations, we employ an MHD analog of the familiar von K\'arm\'an--Howarth decay law \citep{karman1938prsl,wan2012JFM697,bandyopadhyay2018prx} for $Q_\text{T}$. Further details on the model, including those on numerical implementation, may be found in \cite{usmanov2014three} and \cite{usmanov2018}. The simulations have been found to give reasonable agreement with many spacecraft observations of large-scale solar wind fields, turbulence parameters (energy, cross helicity, and correlation scale), as well as the temperature, for varying heliocentric distance, and where feasible, varying helio-latitude \citep{breech2008turbulence,usmanov2011solar,usmanov2012three, usmanov2014three,usmanov2016four,chhiber2018apjl,usmanov2018}. The model has been used to compute diffusion coefficients for energetic particles, again finding good agreement with spacecraft observations \citep{chhiber2017ApJS230}. Recent work (reviewed below) has combined our model's output with \textit{STEREO} images to enable a localization of the first $\beta=1$ surface \citep{chhiber2018apjl}. The next section describes various runs of the simulation model performed for this work, and presents results relating to critical surfaces in the solar wind along with predictions along \textit{PSP} orbits.
\section{Results}\label{sec:results} The present work is based on analysis of two classes of simulation runs: (I) In the first case we employ a dipole magnetic field at the inner boundary, with the dipole tilted at angles of 0\degree, 5\degree, 10\degree, and 30\degree~(Runs I-A, I-B, I-C, and I-D, respectively) relative to the solar rotation axis. A 60\degree~run was also analyzed, but the results were found to be similar to the 30\degree~simulation. This simple configuration has both open (near the pole of the dipole) and closed (near its equator) magnetic field geometry, and allows for simulation of both coronal-hole-like and streamer-like flows. This gives us a representation of the ambient, large-scale bimodal solar wind flow during periods of low-to-medium solar activity \citep{mccomas2003grl,usmanov2003tilted,owens2013lrsp}. (II) In the second case the MHD code is driven by a magnetic field at the base obtained from July 1989, July 1994, and December 2008 magnetogram data (Runs II-A, II-B, and II-C, respectively) published by the Wilcox Solar Observatory. Note that the magnetogram runs use a slightly older numerical model with a simpler WKB-wave based treatment of the coronal region \citep[\(1 \text{ -- } 45~R_\odot\); see][]{usmanov2000global,usmanov2003tilted,usmanov2014three}, since the new coronal model \citep{usmanov2018} requires further testing with boundary conditions based on solar-maximum magnetograms. The simulation domain extends from the coronal base at $1~R_\odot$ to 3 au. The following input parameters are specified at the coronal base: the driving amplitude of Alfv\'en waves ($\sim 30$ km~s$^{-1}$), the density ($\sim 1 \times 10^8$ particles cm$^{-3}$), and the temperature ($\sim 1.8 \times 10^6$~K). The magnetic field magnitude is assigned either using a source magnetic dipole (with a strength of 12~G at the poles, to match values observed by \textit{Ulysses}) or from solar magnetograms. Runs I-A to I-D use an adiabatic index \(\gamma = 1.67\) throughout the simulation domain, while Runs II-A to II-C use \(\gamma=1.02\) in the WKB-based coronal region and \(\gamma=1.67\) above \(45~R_\odot\). For further numerical details see \cite{usmanov2014three} and \cite{usmanov2018}. \subsection{Surfaces in the Meridional Plane} The significance of the sonic and Alfv\'en critical surfaces, as well as the first $\beta=1$ surface, was discussed in Section \ref{sec:back}. Operationally, the Alfv\'en critical surface is defined by the set of points, scanning outward, at which the solar wind speed first exceeds the Alfv\'en speed \(V_A = B/\sqrt{4 \pi \rho}\). Similarly, the sonic surface is defined by the set of points, scanning outward from the Sun, at which the total solar wind speed becomes larger than the sound speed $c_s = \sqrt{\gamma P_p/\rho}$. Here $\gamma$ is the polytropic index and $P_p$ is the proton pressure. Another definition of the sound speed is $c'_s = \sqrt{\gamma P/\rho}$, where $P = P_p + P_e$ includes the electron pressure $P_e$. We show the sonic surfaces computed using both these definitions to stress that the inclusion of various physical effects may change the location of the surface, and it is perhaps more appropriate to envision a transonic \textit{region} \citep{lotova1997SoPh172} rather than a highly localized surface. Nevertheless, at the fluid level of description $P$ may be considered the more appropriate measure of pressure.
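These outward-scan definitions translate directly into a simple search along each radial ray of the solution. The following minimal sketch (our illustration, with hypothetical array names and toy radial profiles standing in for the MHD output) locates the first Alfv\'en and sonic crossings on a radial grid:
\begin{verbatim}
import numpy as np

def first_crossing_radius(r, wind_speed, threshold_speed):
    """Smallest radius at which wind_speed first exceeds threshold_speed,
    scanning outward along one radial ray (arrays ordered by increasing r)."""
    above = wind_speed > threshold_speed
    return r[np.argmax(above)] if above.any() else np.nan

# Hypothetical profiles along one ray (r in solar radii; cgs units).
# In practice these would be sampled from the simulation.
gamma, m_p = 1.67, 1.6726e-24
r = np.linspace(1.0, 30.0, 300)
v = 4e7 * (1.0 - np.exp(-(r - 1.0) / 5.0))   # wind speed [cm/s], toy profile
B = 12.0 / r**2                              # dipole-like field [G]
N = 1e8 / r**2                               # proton number density [cm^-3]
P_p = N * 1.38e-16 * 1.8e6 / r               # toy proton pressure [dyn/cm^2]

rho = m_p * N
V_A = B / np.sqrt(4.0 * np.pi * rho)         # Alfven speed
c_s = np.sqrt(gamma * P_p / rho)             # sound speed, proton pressure only

print("sonic surface at  r =", first_crossing_radius(r, v, c_s), "R_sun")
print("Alfven surface at r =", first_crossing_radius(r, v, V_A), "R_sun")
\end{verbatim}
The same scan applied with $c'_s$, or with $\beta=1$ as the threshold condition, yields the other surfaces discussed below.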
The plasma beta is also defined in two ways: in terms of the proton beta, \(\beta_p = 8\pi P_p/B^2\), and in terms of the total electron plus proton beta, \(\beta_{p+e} = 8\pi (P_p + P_e)/B^2\). The first $\beta=1$ surface is identified as the set of points, scanning outward, at which $\beta=1$ is first encountered. This is done in the analysis separately for the proton beta and for the total beta. Figure \ref{fig:merid} depicts the projection of these surfaces onto an arbitrarily selected meridional plane at 37\degree~heliolongitude for Runs I-A and I-D. Unless specified otherwise, simulation data are plotted in the Heliographic Coordinate system \citep[HGC,][]{franz2002pss}. Heliographic latitude is measured from the solar equator, positive towards the north; heliographic longitude is defined in the direction of planetary motion, with the $XY$-plane defined by the solar equator. \begin{figure} \gridline{\fig{0tilt_merid1}{0.25\textwidth}{(a)} \fig{0tilt_merid2}{0.25\textwidth}{(b)} \fig{30tilt_merid1}{0.25\textwidth}{(c)} \fig{30tilt_merid2}{0.25\textwidth}{(d)} } \caption{Meridional planes from untilted dipole Run I-A ((a), (b)) and 30\degree~tilted dipole Run I-D ((c), (d)). Panels (a) and (c) show heliocentric distances from \(1 \text{ -- } 30~R_\odot\) while panels (b) and (d) show \(1 \text{ -- } 150~R_\odot\). The black curves show the sonic surface (solid line using $c_s$ with just proton pressure and dashed line using $c'_s$ which includes proton and electron pressures; see text), the white curve shows the Alfv\'en surface, and the green curves show the first unity $\beta$ surface (solid line shows $\beta_p=1$ and dashed line shows $\beta_{p+e}=1$).} \label{fig:merid} \end{figure} The surfaces show a laminar appearance and display a very organized ordering. The two configurations depicted are very similar, with no asymmetry in the zero-tilt case, and only minor asymmetries seen in the north-south direction. For all latitudes well-separated from the heliospheric current sheet (HCS), the $\beta=1$ surface is the most distant, with the Alfv\'en surface contained well within it, and the sonic surface(s) lower still, in the range \(3 \text{ -- } 5~R_\odot\). The most dramatic feature is the rearrangement of the surfaces near the heliospheric current sheet region \citep[consistent with previous work that examines the properties of these surfaces, e.g.,][]{pneuman1971SoPh,keppens2000ApJ,usmanov2000global, pinto2011ApJ,oran2013ApJ}, an effect that can completely reverse the surfaces to an opposite ordering. In fact, one can find a substantial region in which the $\beta=1$ surface lies at lower altitudes than the Alfv\'en surface. There are also regions, much smaller in these particular cases, in which the sonic surface is found at altitudes above the Alfv\'en surface. In those small regions, the solar wind would have the somewhat anomalous character of being super-Alfv\'enic but subsonic. Alfv\'en wave pressure in such regions may be able to increase the mass flux of the resulting wind at higher radial distances \citep[see][]{leer1982ssr}. Before proceeding with further analysis, we want to emphasize that there are unavoidable limitations in using these simulations. One obvious comment is that our MHD solutions are based on simplified data that do not represent the actual boundary conditions corresponding to the solar wind during the \textit{PSP} passage. More specifically, we note that the discrete spatial resolution of the MHD model limits the thinning of the HCS.
Therefore both the HCS and the much wider plasma sheet surrounding it are expected to be broader in the simulation than in the actual solar wind \citep{winterhalter1994JGR}. A rough estimation based on published data suggests that the real HCS may be a factor of \(\sim 5\) thinner than what we are able to resolve here. Nevertheless, within the resolution parameters of the code, the physics of the simulation is deemed to be accurate, so that, for example, the inversion of critical surfaces is expected to occur, albeit over a thinner region, in the solar minimum conditions seen in some \textit{PSP} orbits.\footnote{It would be of interest to compare the present MHD-based results with analyses based on flux-tube solar wind models in which the HCS remains thin \citep[e.g.,][]{pinto2017ApJ}. Such a comparison is outside the scope of the present paper.} \subsection{Remote Sensing Context} We recall briefly the novel use of \textit{STEREO} Heliospheric Imaging (HI) data by \citet{deforest2016ApJ828}, who examined a series of images of the inner solar wind and argued, on physical grounds, that the observed striation-flocculation transition occurred in the neighborhood of the first plasma-\(\beta =1\) surface. \cite{chhiber2018apjl} employed MHD simulations, similar to those analyzed here, to provide confirming evidence for this interpretation. Figure \ref{fig:stereo} revisits this analysis, showing that the region in which the \textit{striae} give way to \textit{flocculae} coincides with the region in the simulation in which the first $\beta=1$ surface is encountered, as the wind transitions from magnetic control to hydrodynamic control. \begin{figure} \centering \includegraphics[scale=.2]{image_with_spacecraft_crop} \caption{Green curves show the first unity beta surfaces (solid line for $\beta_p=1$; dashed line for $\beta_{p+e}=1$) computed from the model (Run II-C) superimposed on a \textit{STEREO} image from \cite{deforest2016ApJ828}. White `+' shows location of enhanced turbulence inferred by \citet{lotova1985AA150} (see Figure \ref{fig:scintillation}); \textit{Helios} perihelion is shown as `{\color{blue}$\oplus$}'; the first three perihelia of the \textit{PSP} are shown as `$\otimes$'.} \label{fig:stereo} \end{figure} Recently, \cite{kasper2017apj} found evidence for a zone, extending from just above the transition region ($\sim 0.3~R_\odot$) to a distance of tens of solar radii, where $\alpha$-particles are heated preferentially over protons. The lower boundary of this zone would likely be at the chromospheric transition region, where the plasma collisionality changes from high to weak, thus permitting nonthermal physics to produce observed temperature anisotropies \citep[e.g.,][]{marsch2006kinetic}. It is conceivable that this zone of preferential heating ends at the first beta unity surface, since kinetic temperature anisotropies are generally associated with \(\beta \lesssim 1\) \citep[e.g.,][]{matteini2012SSR172}. This zone should be detected by the \textit{PSP} when it reaches below the first beta unity surface. The location of the sonic critical surface as a function of latitude was estimated from scintillation data by \citet{lotova1997SoPh172}. Figure~\ref{fig:lotova1997} shows the Lotova et al. results and compares them with sonic critical surfaces obtained from two MHD simulations -- one driven by a solar minimum magnetogram and one by a solar maximum magnetogram.
We note a reasonable qualitative similarity, especially regarding the oblateness at the poles during solar minimum and the spherical but jagged shape during solar maximum. During solar minimum, there exists a clear demarcation between slow wind streams at equatorial latitudes and fast wind in polar regions. As a result, the wind becomes supersonic at larger distances from the Sun at low latitudes, while the sonic surface at the poles lies at lower heights. These results support the idea that variations in the morphology of the critical surfaces can be used to infer the state of solar activity \citep[e.g.,][]{keppens2000ApJ,pinto2011ApJ,pinto2017ApJ}. \begin{figure} \gridline{\fig{lotova1997transonic}{0.5\textwidth}{(a)} } \gridline{\fig{sonic_cr2123}{0.4\textwidth}{(b)} \fig{sonic_cr2078}{0.4\textwidth}{(c)} } \caption{(a) Transonic regions from \cite{lotova1997SoPh172}, showing the transition from spherically symmetric but jagged morphology at solar maximum (1989), to oblateness at the poles during solar minimum (1994). (b), (c) Sonic surfaces (solid line using $c_s$ with just proton pressure and dashed line using $c'_s$ which includes proton and electron pressures; see text) from Runs II-A and II-B, using solar maximum (July 1989) and solar minimum (July 1994) magnetograms, respectively. Contours of proton density are shown in the background. The transition from solar maximum (b) to solar minimum (c) is qualitatively consistent with the one seen in panel (a).} \label{fig:lotova1997} \end{figure} Another look at the properties of the solar wind in the critical region is provided by the scintillation intensity data of \cite{lotova1985AA150}, reproduced in Figure \ref{fig:scintillation}. For comparison we show the radial profiles of two parameters obtained from an (axisymmetric) simulation with an untilted dipole (Run I-A), in the ecliptic (Figure \ref{fig:scintillation}(a)) and polar (Figure \ref{fig:scintillation}(b)) regions. The parameters shown are the radial solar wind speed \(V_r\) and the turbulence energy density (per unit mass) \(Z^2\), at 6.75\degree~heliolatitude (representative of the ecliptic region) and at 82\degree~heliolatitude (representative of the polar region). The scintillation profile (measured through $m\nu$, where \(m\) is a scintillation index and \(\nu\) is the frequency of observation; see \cite{lotova1985AA150}) shows a feature in the range of \(15\text{ -- } 30~R_\odot\) that is interpreted as a region of enhanced turbulence, giving rise to enhanced radio scattering from density irregularities. Shaded regions in Figure \ref{fig:scintillation}(a) indicate the range of radii at which the Alfv\'en and sonic surfaces are found in the ecliptic region in the simulation (between heliolatitudes 6.75\degree~and \(- 6.75\degree\)), while the vertical lines in Figure \ref{fig:scintillation}(b) represent the locations of these surfaces at 82\degree~heliolatitude. The Figure also shows \textit{PSP} perihelia for several orbits. We note that the scintillation feature lies very close to the position of the maximum turbulence energy per unit mass $Z^2$ from the simulation, and is also close to the locations of the sonic and Alfv\'enic critical surfaces in the simulation. This enhancement in turbulence may be caused by the interactions of counter-propagating Alfv\'en waves \citep{matthaeus1999ApJL523}.
The acceleration of the wind also begins in this region, with larger speeds and turbulence energies seen at polar latitudes. \begin{figure} \gridline{\fig{lotova_6deg}{0.5\textwidth}{(a)} \fig{lotova_82deg}{0.5\textwidth}{(b)} } \caption{Enhanced scintillation ($m\nu$) region from the observations of \cite{lotova1985AA150}, seen as a bump at \(\sim 20~R_\odot\) in the dashed red curve. Radial solar wind speed \(V_r\) (dash-dotted blue curve) and turbulence energy density (per unit mass) \(Z^2\) (solid black curve) are shown at (a) an ecliptic heliolatitude of 6.75\degree~and (b) a polar heliolatitude of 82\degree. Panel (a) shows shaded bands representing the locations of the Alfv\'en (pale blue band) and sonic (grey band with dashed outline) surfaces in the ecliptic region of the simulation (between heliolatitudes 6.75\degree~and \(- 6.75\degree\)). Panel (b) shows vertical lines representing locations of the Alfv\'en surface (pale blue solid) and the sonic surface (grey dashed) at 82\degree~heliolatitude. All simulation results shown here are from Run I-A. The first, third, and final perihelia of the \textit{PSP} are represented as $\oplus$ symbols, {at heliocentric distances of 35.66, 20.35, and 9.86\(~R_\odot\), respectively \citep{fox2016SSR}.}} \label{fig:scintillation} \end{figure} \subsection{What PSP will see: Dipole-based Simulations} Using the \textit{PSP} trajectory and a coordinate transformation to link it to the global MHD solution, one may graphically illustrate the relationship between the \textit{PSP} orbit and the simulated heliospheric structure. Superposing the orbits on the simulation results should not be construed as a prediction, since the boundary data, even if compatible with projected future conditions, are necessarily imprecise. However, this exercise does present a possible context for the \textit{PSP} mission. Here we evaluate the MHD solution along the \textit{PSP} trajectory, taking solar rotation into account. To produce an illustrative comparison of the orbits and critical surfaces, we may choose to look at a sequence of (non-inertial) meridional planes that always contain the \textit{PSP} position. In this frame the orientation of the solar dipole field rotates at a non-constant angular frequency. Figure \ref{fig:pspmerid1} depicts such a sequence of meridional planes. The MHD simulation used for this illustration employed a 10\degree~tilted dipole boundary condition (Run I-C), representing solar-minimum conditions likely to be sampled by the \textit{PSP} in its early orbits. The position of \textit{PSP} in each frame (during the 8th orbit; see Figure \ref{fig:barplot2}) is at the center of the yellow `+' symbol. The times are chosen to correspond to \textit{PSP} passing over a critical surface. The plots are labeled by time measured in days from launch. For these conditions, probably not unusual for early \textit{PSP} orbits that occur during solar minimum, the spacecraft is often found skimming the edges of the \(\beta=1\) surface near the HCS. This may provide opportunities for \textit{PSP} to study \(\beta\sim 1\) plasma for extended periods. A video animation of these figures is available as Supplementary Material.
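The bookkeeping behind such comparisons is simple in outline; the following minimal sketch (our illustration with hypothetical inputs and toy profiles, not the production pipeline) maps a trajectory into the frame corotating with the Sun by unwinding solar rotation, and tallies the hours spent below a critical surface sampled at the spacecraft's angular position:
\begin{verbatim}
import numpy as np

OMEGA_SUN = 2.0 * np.pi / (25.38 * 86400.0)   # sidereal rotation rate [rad/s]

def to_corotating_longitude(t, lon_inertial_deg, lon0_deg=0.0):
    """Unwind solar rotation: heliolongitude in the corotating frame."""
    return (lon_inertial_deg - np.degrees(OMEGA_SUN * t) + lon0_deg) % 360.0

def hours_below_surface(t, r_sc, lat, lon_corot, surface_radius):
    """Accumulate time [hr] spent below a critical surface.
    surface_radius(lat, lon) -> radius [R_sun] sampled from the MHD
    solution (hypothetical interface)."""
    dt = np.gradient(t)                       # local time step [s]
    below = r_sc < surface_radius(lat, lon_corot)
    return dt[below].sum() / 3600.0

# Toy example: an Alfven radius that dips near a flat "HCS" at lat = 0,
# and a fabricated two-week ephemeris (time [s], radius [R_sun], degrees).
toy_alfven = lambda lat, lon: 15.0 - 8.0 * np.exp(-(lat / 5.0) ** 2)
t = np.linspace(0.0, 14 * 86400.0, 5000)
r_sc = 5.0 + 35.0 * np.abs(np.sin(np.pi * t / t[-1]))
lat = 3.0 * np.sin(2 * np.pi * t / t[-1])
lon = np.degrees(2 * np.pi * t / (10 * 86400.0))

lon_c = to_corotating_longitude(t, lon)
print(hours_below_surface(t, r_sc, lat, lon_c, toy_alfven),
      "hours below Alfven surface")
\end{verbatim}
Repeating this tally for many initial launch longitudes and averaging gives the mean residence times reported below.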
A further animation, illustrating \textit{PSP} crossings of the critical surfaces in the final orbit, during solar-maximum conditions (Run II-A), is also available. \begin{figure} \gridline{\fig{55movie}{0.45\textwidth}{(a)} \fig{85movie}{0.45\textwidth}{(b)} } \gridline{\fig{115movie}{0.45\textwidth}{(c)} \fig{145movie}{0.45\textwidth}{(d)} } \caption{\textit{PSP} crossings of the critical surfaces are illustrated by a sequence of meridional planes that contain the spacecraft trajectory. The 8th orbit is depicted in a 10\degree~dipole simulation (Run I-C; see Figure \ref{fig:barplot2}(a)), representing solar-minimum conditions. The sonic, Alfv\'en, and first (proton+electron) beta unity surfaces are depicted as solid pink, solid blue, and dashed green curves, which are superposed on contours of proton density. The \textit{PSP} position is at the center of the yellow `+' symbol. A video animation is available as Supplementary Material.} \label{fig:pspmerid1} \end{figure} Another interesting way to visualize the relationship between the \textit{PSP} orbit and the critical surfaces is to tally the time spent in each orbit within the \(\beta=1\) surface (henceforth \(\beta\) refers to the ``two-fluid'' plasma beta \(\beta_{p+e}\)), the Alfv\'en surface, and the sonic surface. For the purposes of the present study, the initial (``launch'') heliolongitude of the \textit{PSP} is arbitrarily placed within the simulation. Rather than focus on a particular (arbitrary) trajectory, we consider \(\sim 100\) values of the initial longitude \(\phi_{\textit{PSP} ,0}\), ranging from 0\degree~to 359\degree, and perform an average over them. That is, for a given simulation run (representing a particular type of solar conditions), we first compute the time spent within the critical surfaces during an orbit, for \textit{each} \textit{PSP} trajectory defined by a value of \(\phi_{\textit{PSP} ,0}\). We then average these times over the different \(\phi_{\textit{PSP} ,0}\) to obtain a \textit{mean} number of hours within the surfaces, for each orbit. These results are presented in the figures discussed below. As a first example of this compilation, Figure \ref{fig:barplot1}(a) shows the residence time within each of these regions, using the planned \textit{PSP} orbits, for the case of a solar wind with untilted dipole boundary conditions. The upper section of the plot shows, as functions of time, the variation of orbital radial distances, as well as the radial positions of the critical surfaces at the angular position (heliolatitude and heliolongitude) of the \textit{PSP}, for an arbitrary \(\phi_{\textit{PSP} ,0}\). This directly illustrates \textit{PSP}'s penetration of the critical surfaces at various times. Referring to the lower section of Figure \ref{fig:barplot1}(a), which shows the accumulated time (averaged over \(\phi_{\textit{PSP} ,0}\)) within critical surfaces for each orbit, we see that, beginning with orbit 8, this virtual \textit{PSP} mission penetrates the Alfv\'en surface for 18 hours or more in every subsequent orbit through orbit 25. Beginning with orbit 10, \textit{PSP} spends between 15 and 40 hours of each plotted orbit below the predicted sonic surface.
Due to the lack of dipole tilt, an anomalous amount of time is spent in high-beta plasma, and the residence time below the \(\beta=1\) surface is suppressed compared to subsequent cases. Recall also that the HCS in the simulation is artificially wide; therefore the times spent within the \(\beta=1\) surface are likely to be underestimated, particularly for simulations with low dipole tilts. \begin{figure} \gridline{\fig{orbit0tilt}{0.65\textwidth}{(a)} } \gridline{\fig{orbit5tilt}{0.65\textwidth}{(b)} } \caption{\textit{PSP} surface crossings from simulations with (a) 0\degree~and (b) 5\degree~dipole tilt. In each plot, the top section shows the radial and latitudinal position of the \textit{PSP} for each orbit, and the radial position of the critical surfaces at the angular position of the \textit{PSP}. The bottom section shows the time spent by the \textit{PSP} under each surface, per orbit. The striped green, lavender, and narrow red bars represent the \(\beta=1\), Alfv\'en, and sonic surfaces, respectively.} \label{fig:barplot1} \end{figure} Figure \ref{fig:barplot1}(b) shows a similar compilation done for a 5\degree~dipole-tilt run. We can see now, as would be expected, that the encounters with critical surfaces have a strong dependence on the dipole tilt angle, which translates into the degree of latitudinal excursion of the HCS. In fact, for this case the critical surfaces are frequently seen at larger heliocentric distances, with significant consequences for the sub-critical-surface residence times. The \(\beta=1\) surface is crossed relatively early, and from orbit 4 onwards \textit{PSP} spends nearly 50 hours or more within it. Furthermore, for all orbits after 7, the \textit{PSP} spends at least 20 hours within at least one of the critical surfaces. These 20--40 hour periods will represent opportunities for crucial observations. For instance, below the Alfv\'en surface the \textit{PSP} might detect a large population of inward-propagating Alfv\'en modes, and the enhanced turbulence seen in Figure \ref{fig:scintillation} could be sampled in the trans-Alfv\'enic region. Two more cases with dipole boundary conditions are shown in Figure \ref{fig:barplot2}, with tilt angles of 10\degree~and 30\degree. The results for a 60\degree~dipole run (not shown) are very similar to the 30\degree~case. It is apparent that the \(\beta=1\) surface is found at considerably larger radial distances as the tilt angle is increased. During solar maximum, the \textit{PSP} is therefore likely to spend more than a hundred hours under the first beta unity surface per orbit. Furthermore, Figure \ref{fig:barplot2}(b) indicates that no time is spent within the sonic surface during any of the orbits in the 30\degree~dipole case.
The reason for this can be understood from the discussion of Figure \ref{fig:lotova1997}: since the \textit{PSP} trajectory stays within low heliolatitudes, it may be able to sample the extended portion of the sonic surface during solar minimum; however, during solar maximum the height of this surface is generally too low to be crossed at the latitudes sampled by the spacecraft (see also Figure \ref{fig:merid}(c)). \begin{figure} \gridline{\fig{orbit10tilt}{0.65\textwidth}{(a)} } \gridline{\fig{orbit30tilt}{0.65\textwidth}{(b)} } \caption{\textit{PSP} surface crossings from a simulation with (a) a 10\degree~and (b) a 30\degree~dipole tilt. Further description follows Figure \ref{fig:barplot1}.} \label{fig:barplot2} \end{figure} \subsection{What PSP will see: Magnetogram-based simulations} Here we briefly show results for two cases in which the MHD simulation is driven by magnetograms: one from solar minimum conditions (Carrington Rotation 1885, July 1994; Run II-B; Figure \ref{fig:barplot3}(a)) and another from solar maximum conditions (Carrington Rotation 1818, July 1989; Run II-A; Figure \ref{fig:barplot3}(b)). Examining the solar minimum case, one sees that the residence times within the Alfv\'en and sonic surfaces rarely, if ever, exceed twenty hours in a single orbit. Figure \ref{fig:barplot3}(b) shows the solar maximum case employing a July 1989 magnetogram. The residence times under the \(\beta=1\) surface remain below 100 hours in every orbit. There are only a few orbits in which the Alfv\'en surface is encountered, and then for no more than about 10 hours in a single orbit. As indicated by Figure \ref{fig:barplot3}(b) (and Figure \ref{fig:barplot2}(b)), \textit{PSP} crossings of the sonic surface are unlikely to occur during solar maximum. A video animation of simulated \textit{PSP} ``surface crossings'' in the solar maximum case is available as Supplementary Material. \begin{figure} \gridline{\fig{orbit_cr1885}{0.65\textwidth}{(a)} } \gridline{\fig{orbit_cr1818}{0.65\textwidth}{(b)} } \caption{\textit{PSP} surface crossings for (a) a July 1994 (solar minimum) magnetogram run and (b) a July 1989 (solar maximum) magnetogram run. Further description follows Figure \ref{fig:barplot1}. A video animation of simulated \textit{PSP} ``surface crossings'' in the solar maximum case is available as Supplementary Material.} \label{fig:barplot3} \end{figure} Compared with the dipole-based results (Figures \ref{fig:barplot1} and \ref{fig:barplot2}), the reduced time spent under the surfaces in Figure \ref{fig:barplot3} appears to be due to the rapid radial decay of the higher-order multipole magnetic fields that are implied by a complex magnetogram boundary condition \citep{reville2015ApJ798}. It is also apparent that the \textit{PSP} spends significantly fewer hours within the Alfv\'en surface in the solar maximum case (Figure \ref{fig:barplot3}(b)), compared to solar minimum (Figure \ref{fig:barplot3}(a)). The implied lowering of the Alfv\'en radius during solar maximum has been noted in other recent work as well \citep[e.g.,][]{pinto2011ApJ,pinto2017ApJ,perri2018JPP}. While the decay of higher-order multipoles is a well-understood effect leading to radial reduction in fine-scale angular structure, this is somewhat offset by dynamical production of fine-scale structure in the corona and beyond.
This effect is captured to a certain degree by existing models such as the present one, but is also clearly limited by the ability to include fine-scale dynamics, that is, limited by the spatial resolution of the numerics \citep{schmidt2015LRCA,miesch2015SSR194} as well as by the resolution of the boundary conditions (magnetogram resolution). Accordingly, more realistic global models of the solar atmosphere, like the real Sun, will include more fine-scale structure at larger distances, and therefore the possibility of a larger number of brief passages through the critical regions that we discuss here. In such cases interesting modifications might be expected to the depiction in Figure \ref{fig:barplot3} and to its comparison with Figure \ref{fig:barplot2}. The ``woodgrain'' structure obtained using high-resolution coronagraph imaging, as discussed by \cite{deforest2018ApJ}, hints at the appearance of such fine-resolution structuring in the real solar wind. \section{Conclusions and Discussion}\label{sec:disc} We have shown here some detailed illustrative exercises in the use of a global heliospheric MHD code with turbulence modeling to simulate the context that could be observed by the upcoming \textit{Parker Solar Probe} mission. We emphasize again that these results cannot be construed as predictions, since the boundary data employed are not only imprecise, but also are not appropriate to the conditions at the time when \textit{PSP} will fly, except perhaps in a qualitative sense. Nevertheless, it is interesting and even useful to explore the kind of conditions that \textit{PSP} might experience, an approach that we call \textit{context prediction}. In this paper we have focused on ambient steady-state conditions in the solar wind, driven by boundary conditions that are simple untilted or tilted dipoles, or otherwise magnetograms from previous solar minimum or solar maximum epochs. We note that a sensitive parameter is the total solar dipole strength; we have used values commonly adopted in other work, which lead to agreement with near-Earth observations \citep{usmanov2014three,chhiber2018apjl,usmanov2018}, with the understanding that this value is actually not well constrained \citep{riley2014SoPh,usmanov2018}. To summarize, the present results are of two major types: First, we find broad agreement in our study with the interpretation of existing remote sensing results, both from heliospheric imaging and from radio scintillation studies. Our results confirm the likely association of the region near the first outgoing $\beta=1$ surfaces with morphological changes in the solar wind as observed in \textit{STEREO} imaging \citep{deforest2016ApJ828}. Our global simulations also support the idea that a region near the critical Alfv\'en surfaces may be characterized by a local enhancement of turbulence levels, a feature that may have implications for additional heating and acceleration of the solar wind. Second, the trajectory analyses show that the period of time that \textit{PSP} is likely to spend inside the $\beta=1$, sonic, and Alfv\'en surfaces depends sensitively on the degree of solar activity, the tilt of the solar dipole, and the location of the heliospheric current sheet. Here we have provided a first set of such context predictions, emphasizing the possible range of positions of the sonic and Alfv\'enic critical surfaces, and the first plasma beta unity surface.
The importance of these surfaces \citep[e.g.,][]{lotova1985AA150,deforest2016ApJ828,chhiber2018apjl} lies in the fact that the physical character and conditions of the interplanetary medium are likely to be different on either side of these boundaries, which may in reality be very complex regions, or at least corrugated surfaces. \textit{Parker Solar Probe} seeks to address questions such as the physical mechanisms that heat the corona and accelerate the wind, and to reveal the structure of the electromagnetic fields, plasma, and energetic particles in these very regions of the corona and wind. Therefore, a baseline understanding of the range of distances at which these regions might be encountered and crossed becomes quite important for anticipating what the mission is likely to measure, for how long, and on which orbits. In a forthcoming paper we will continue these investigations, describing in some detail the turbulence properties that are expected in the regions above and below the critical surfaces and along the \textit{PSP} trajectory \citep[see also][]{cranmer2018RNAAS}, together with an evaluation of the validity of the Taylor hypothesis for \textit{PSP} observations. \acknowledgments We thank J. Kasper for useful discussions and the APL \textit{PSP} project office for providing the NASA SPICE kernel containing the \textit{PSP} ephemeris. This research is supported in part by the NASA \textit{Parker Solar Probe} mission through the IS\(\hbox{$\odot$}\)IS project and subcontract SUB0000165 from Princeton University to University of Delaware, by the NASA HGC program grant NNX14AI63G, by the NASA LWS program under grant NNX15AB88G, and by NASA HSR grants 80NSSC18K1210 and 80NSSC18K1648. The preparation of this article made use of the \href{http://adsabs.harvard.edu/}{SAO/NASA Astrophysics Data System (ADS)}.
\section{Introduction} Determination of a particle's trajectory in a turbulent flow field requires an equation that satisfies the Navier--Stokes equation and accounts for all relevant forces. The first attempt was made by Stokes for a sphere moving slowly with a uniform velocity in a viscous fluid of unlimited extent that is stationary far from the particle \citep{stokes1850effect}. Boussinesq and Basset later considered the linear inertia of the flow surrounding the sphere and developed an equation for the unsteady motion of a spherical particle accelerating from rest and moving with a time-varying velocity $v_p(t)$, adding an unsteady drag force or ``history term'' to the equation of motion that accounts for prior particle interactions with the surrounding flow \citep{boussinesq1885applications,boussinesq1885resistance,basset1888treatise}. In the interests of mathematical simplicity, the derivation by \cite{boussinesq1885applications,boussinesq1885resistance} and \cite{basset1888treatise} omitted non-linear inertia terms proportional to the squares and products of velocities of the surrounding flow relative to a moving sphere. Such an assumption can be valid in the Stokes flow regime because the particle motion can be considered to be ``slow''. Fluid viscous forces dominate inertia and the Reynolds number is ``small'', i.e., $\Rey ={v_p d_p}/{\nu} < 1$, where $d_p$ is the sphere diameter, $\nu = \mu/\rho_f$ is the kinematic viscosity of the fluid, $\mu$ the dynamic viscosity of the fluid, and $\rho_f$ the fluid density. The next significant advance was introduced by \cite{Tchen1947mean}, who generalized the equation of motion for the unsteady motion of a spherical particle in a fluid at rest. He proposed an equation for the motion of a slow spherical particle in a fluid that has a velocity $v_f(t)$ independent of the sphere. To reduce the problem to that of a particle moving in a fluid at rest, Tchen assumed the particle moves with a velocity $v_p(t)-v_f(t)$. In addition, he allowed for the entire system, including both the fluid and the particle, to experience a pressure gradient force due to a changing rectilinear velocity of the fluid $v_f(t)$. \cite{corrsin1956equation} later showed that if the fluid is turbulent, and the sphere is smaller than the shortest wavelength characterizing the turbulent flow, spatial and temporal inhomogeneities in the fluid also add a torque due to spatial velocity gradients, and a force due to a static pressure gradient. Further adaptations and extensions of the equation of motion account for the drag force due to the forced velocity curvature around the sphere, or the Faxén correction, and viscous shear stress, leading to the widely used Maxey--Riley equation \citep{faxen1922widerstand,buevich1966motion,Riley1971,soo1975equation,gitterman1980memory,maxey1983equation}. For a particle that is at rest in a stationary fluid until the instant $t=0$, and is sufficiently small to have a negligible effect on fluid motions far from the particle, the Maxey--Riley equation accounts for the trajectory, dispersion, and settling velocity of the particle.
The force balance includes the buoyancy force, the stress gradient of the fluid flow in the absence of a particle, the force due to the virtual mass, steady Stokes drag and unsteady Basset drag \begin{align} m_p\frac{d\textit{\textbf{v}}_p}{dt} =& (m_p-\rho_f V_p)\textit{\textbf{g}} + \rho_f V_p\frac{D\textit{\textbf{v}}_f}{Dt} - k\rho_f V_p\frac{d}{dt}\Big(\textit{\textbf{v}}_p-\textit{\textbf{v}}_f -\frac{1}{10} {a}^2 \nabla^2\textit{\textbf{v}}_f \Big) \nonumber \\ & -6 \pi \mu a \Big( \textit{\textbf{v}}_p-\textit{\textbf{v}}_f - \frac{1}{6} {a}^2 \nabla^2\textit{\textbf{v}}_f \Big) -6 \pi \mu a^2 \int_{0}^{t}\frac{\frac{d}{d\tau}(\textit{\textbf{v}}_p(\tau)-\textit{\textbf{v}}_f(\tau)-\frac{1}{6} {a}^2 \nabla^2\textit{\textbf{v}}_f)}{\sqrt{\pi \nu(t-\tau)}} d\tau \label{eq:Maxey--Riley} \end{align} where the index \textit{p} denotes the particle and \textit{f} the fluid. $\frac{d}{dt}=\pd{}{t}+{v_p}_j \pd{}{x_j}$ is the total time derivative along the particle trajectory, and $\frac{D}{Dt}=\pd{}{t}+{v_f}_j \pd{}{x_j}$ the time derivative following the fluid along its own trajectory, $m_p$ the particle mass, $v_p$ the Lagrangian velocity of the particle, and $v_f$ the Eulerian fluid velocity at the particle location. $\rho_{f}V_p$ is the mass $m_f$ of fluid displaced by the particle of volume $V_p$ and radius $a$. $k=(m'_f/m_f)$ is an added mass coefficient, and $m'_f$ the virtual mass of the fluid, assumed to undergo the same acceleration as the particle. The coefficient $k$ is a function of the flow regime and geometric properties of the particle. For irrotational flow around a sphere $k$ is $0.5$. No analytical solution exists for the full expression of the Maxey--Riley equation of motion. Numerically, however, the equation provides a useful guide for exploring interactions between particles and a moving fluid flow. Its application extends to fields as wide ranging as sediment transport and waste management, combustion, particle transport and deposition, particle clustering, atmospheric precipitation, aquatic organism behaviors, and underwater robotics \citep{chao1963turbulent,soo1975equation,murray1970settling,reeks1977dispersion,nir1979effect,kubie1980settling,maxey1987gravitational,maxey1990advection,mei1990particle,mei1991particle,falkovich2002acceleration,peng2009transport,daitche2013advection,beron2019building}. An important point is that \eqref{eq:Maxey--Riley} assumes that the Reynolds number of the particle relative to the surrounding fluid flow satisfies $\Rey = |\textit{\textbf{v}}_p-\textit{\textbf{v}}_f| d_p/\nu < 1$, that is, the Stokes flow regime. For larger values of $\Rey$, semi-empirical adjustments are sometimes made \citep{ho1964fall,hwang1985fall,tunstall1968retardation,field1968effects,murray1970settling,maxey1990advection,wang1993settling,nielsen1993turbulence,stout1995effect,good2014settling}. The steady drag force $F_d = 6\pi\mu a(v_p - v_f)$ shifts from scaling linearly with the relative velocity to scaling with the square of the relative velocity through an empirically derived steady drag coefficient $C_D$: $F_d = \frac{1}{2} \rho_f A_p C_D(\Rey) (v_p - v_f)^2$, where $A_p$ is the particle's cross-sectional area. The steady drag at high Reynolds numbers then becomes sufficiently large that the history term in \eqref{eq:Maxey--Riley} is assumed negligible and omitted from the equation of motion \citep{wang1993settling,stout1995effect,good2014settling}. However, it remains that the mathematical form of the history term developed by Boussinesq-Basset applies only when the Reynolds number is small $(\Rey < 1)$.
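To make the quadratic-drag regime concrete, the following minimal sketch (ours, with assumed values for a millimetre-sized water drop falling in air, and the \cite{whiteviscous} sphere correlation adopted later in \eqref{eq:drag_coefficient}) solves the steady force balance $(m_p - m_f)g = \frac{1}{2}\rho_f A_p C_D(\Rey) V^2$ for the terminal velocity by fixed-point iteration, since $C_D$ itself depends on $\Rey = V d_p/\nu$:
\begin{verbatim}
import numpy as np

# Assumed values: a 1 mm water drop falling in still air (SI units).
rho_p, rho_f = 1000.0, 1.2       # particle and fluid densities [kg/m^3]
nu, g, d_p = 1.5e-5, 9.81, 1e-3  # air viscosity [m^2/s], gravity, diameter [m]
a = d_p / 2.0
V_p = (4.0 / 3.0) * np.pi * a**3 # particle volume
A_p = np.pi * a**2               # cross-sectional area

def c_d(re):
    # White's sphere correlation (quoted later in the numerical analysis)
    return 0.25 + 24.0 / re + 6.0 / (1.0 + np.sqrt(re))

V = 1e-3                         # initial guess [m/s]
for _ in range(100):             # fixed-point iteration on the force balance
    re = max(V * d_p / nu, 1e-12)
    V = np.sqrt(2.0 * (rho_p - rho_f) * V_p * g / (rho_f * A_p * c_d(re)))

print(f"terminal velocity ~ {V:.2f} m/s at Re ~ {V * d_p / nu:.0f}")
\end{verbatim}
For these values the iteration converges to roughly $4$ m s$^{-1}$ at $\Rey$ of a few hundred, well outside the Stokes regime in which the history term of \eqref{eq:Maxey--Riley} was derived.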
A priori, there is no mathematical justification for arguing that unsteady drag is negligible compared to steady drag when the Reynolds number is arbitrarily high. While a few theoretical studies have considered unsteady drag on a sphere at a finite but small Reynolds number in the range $\mathrm{Re} < 100$ \citep{oseen1913ueber,proudman1957expansions,sano1981unsteady,mei1991unsteady,mei1992flow,mei1994flow,lovalenti1993force,michaelides1997transient}, as yet no general formulation has been presented for the unsteady drag on solid bodies moving within a viscous liquid when $\mathrm{Re} \gg 1$. This article attempts to fill this gap by first revisiting the classical derivation of Basset's solution, and then by using a similar approach to obtain a formulation for the unsteady drag term suitable for application to higher Reynolds numbers. \section{Overview of the Stokes solution} \label{sec:Stokes-solution} \cite{stokes1850effect} considered a sphere of radius $a$ falling at constant velocity $V_0$ under gravity along a straight axis $z$, taking the center of the sphere as the origin so that the motion of the fluid is symmetrical with respect to the axis of fall. Relative to the center of the sphere, in a spherical coordinate system $(r,\theta,\phi)$, where $r$ is the radius, $\theta$ is the zenith angle, and $\phi$ is the azimuthal angle (Fig. \ref{fig:stream_function_Stokes}), the $v_r$ and $v_{\theta}$ components of velocity along and perpendicular to the direction of $r$ are \begin{align} v_r(t,r,\theta) &= \frac{1}{r^2 \mathrm{sin}\theta} \pd{\psi(t,r,\theta)}{\theta} \label{eq:stream-function1} \\ v_{\theta}(t,r,\theta) &= -\frac{1}{r \mathrm{sin}\theta} \pd{\psi(t,r,\theta)}{r} \label{eq:stream-function2} \end{align} \begin{figure} \centering \includegraphics[width=6cm, height=7cm]{stream_function_Stokes.pdf} \caption{The laminar stream function around a sphere falling with constant velocity $V_0$ at an earlier (dashed, top) and a later time $t$ (solid, bottom).
The stream function changes in spatial coordinates $r$ and $\theta$ with respect to a stationary observer so that it is a function of both the sphere velocity and time.} \label{fig:stream_function_Stokes} \end{figure} For an incompressible fluid in an azimuthally symmetric 2D spherical coordinate system, the Navier--Stokes equations that determine the surrounding fluid flow around the moving sphere are \begin{align} \pd{v_r}{t} + v_r \pd{v_r}{r} + \frac{v_{\theta}}{r} \pd{v_r}{\theta} -\frac{v_{\theta}^2}{r} &= -\frac{1}{\rho_f} \pd{p}{r} +\nu \Big(\nabla^2 v_r -\frac{2 v_r}{r^2} -\frac{2}{r^2} \pd{v_{\theta}}{\theta} -\frac{2 v_{\theta} \mathrm{cot}\theta}{r^2} \Big) \label{eq:Navier--Stokes1} \\ \pd{v_{\theta}}{t} + v_r \pd{v_{\theta}}{r} + \frac{v_{\theta}}{r} \pd{v_{\theta}}{\theta} +\frac{v_r v_{\theta}}{r} &= -\frac{1}{r \rho_f} \pd{p}{\theta} +\nu \Big(\nabla^2 v_{\theta} +\frac{2}{r^2} \pd{v_r}{\theta} -\frac{ v_{\theta}}{r^2 \mathrm{sin}^2\theta} \Big) \label{eq:Navier--Stokes2} \end{align} Assuming the no-slip condition at the surface of the sphere, at a constant fall velocity $V_0$ the boundary conditions are \begin{equation} v_r \bigg |_{r=a} = V_0 \ \mathrm{cos}\theta, \quad v_\theta \bigg |_{r=a} = -V_0 \ \mathrm{sin}\theta \end{equation} Defining for brevity an operator \begin{equation}\label{eq:operator} \mathrm{D} = \pd[2]{}{r^2} + \frac{1}{r^2} \pd[2]{}{\theta^2} - \frac{\mathrm{cos}\theta}{r^2 \mathrm{sin}\theta} \pd{}{\theta} \end{equation} then using \eqref{eq:stream-function1}-\eqref{eq:stream-function2}, the Navier--Stokes equations \eqref{eq:Navier--Stokes1}-\eqref{eq:Navier--Stokes2} can be rewritten in terms of the stream function as follows \begin{align} -\frac{1}{\rho_f} \pd{p}{r} &= \frac{1}{r^2 \mathrm{sin}\theta} \pd{}{\theta} \Big(\pd{\psi}{t} -\nu \mathrm{D} \psi \Big) \label{eq:Navier--Stokes-stream-function1} \\ \frac{1}{\rho_f} \pd{p}{\theta} &= \frac{1}{\mathrm{sin}\theta} \pd{}{r} \Big(\pd{\psi}{t} -\nu \mathrm{D} \psi \Big) \label{eq:Navier--Stokes-stream-function2} \end{align} Taking the derivative of \eqref{eq:Navier--Stokes-stream-function1} with respect to $\theta$ and the derivative of \eqref{eq:Navier--Stokes-stream-function2} with respect to $r$ and eliminating the pressure term, the equation for $\psi(t,r,\theta)$ becomes \begin{equation} \label{eq:stream-function-equation} \underbrace{ \mathrm{D} \Big(\nu \mathrm{D}- \pd{}{t} \Big) \psi}_{linear} + \ \mathrm{sin}\theta \underbrace{ \bigg( \pd{\psi}{r} \pd{}{\theta} - \pd{\psi}{\theta} \pd{}{r} \bigg) \frac{\mathrm{D \psi} }{r^2 \mathrm{sin}^2\theta} }_{non-linear} = 0 \end{equation} The solution to \eqref{eq:stream-function-equation} is the stream function for a viscous and incompressible fluid surrounding a moving sphere. Note the distinction between the linear term that produces a laminar flow around a slow-moving sphere and the non-linear term that arises from retaining the velocity products and squares in \eqref{eq:Navier--Stokes1}-\eqref{eq:Navier--Stokes2}. In 1850, Stokes solved the linear term at steady-state, namely $\mathrm{D} \big(\mathrm{D} \psi(r,\theta) \big) = 0$, by switching reference frames and treating the fluid as moving with velocity $V_0$ relative to a stationary sphere.
Therefore, by placing the origin at the center of the quiescent sphere, and supposing a solution of the form $\psi(r,\theta) = \mathrm{sin}^2(\theta) f(r)$, \cite{stokes1850effect} determined the motion of the fluid surrounding a sphere that moves slowly at a constant velocity through a fluid at rest \begin{equation} \label{eq:Basset-infinity} \psi(r,\theta) = \frac{1}{4} V_0 a^2 \mathrm{sin}^2\theta \Big(\frac{3r}{a} - \frac{a}{r} \Big) \end{equation} Stokes obtained the familiar expression $F_D = 6 \pi \mu a V_0$ for the drag force of the fluid on the sphere assuming the no-slip condition at the sphere's surface. The terminal velocity of a falling sphere is then obtained from balance with the gravitational force (Appendix \ref{appA}). \section{Overview of Basset's solution} \label{sec:Basset-solution} Basset argued that Stokes' formula for the terminal velocity yields values larger than those obtained by experiment. Based on his prior theoretical studies \citep{basset1888treatise}, \cite{basset1910descent} attributed the discrepancy to the neglect of the $\pd{\psi}{t}$ term in \eqref{eq:stream-function-equation} for steady motion, suggesting that it should be replaced by $V_0 \pd{\psi}{z}$, again maintaining the origin at the center of the moving sphere (Fig. \ref{fig:stream_function_Stokes}). Stokes' assumption that the sphere starts the motion with a constant velocity $V_0$ also implies a discontinuity at the sphere surface. Suppose that a sphere is set in motion with a constant velocity $V_0$. The no-slip condition requires that the fluid velocity instantly change from $\frac{1}{2} V_0 \ \mathrm{sin}\theta$ to $-V_0 \ \mathrm{sin}\theta$ (Appendix \ref{appB}). This discontinuity is unphysical. If instead the sphere is moving with a variable velocity $V(t)$ starting from rest, then the revised linear equation to be solved is \begin{equation} \label{eq:Eq.I} {\mathrm{D} \Big(\nu \mathrm{D} - \pd{}{t} \Big) \psi(t,r,\theta)}=0 \end{equation} The solution was found first by \cite{boussinesq1885resistance,boussinesq1885applications} and apparently independently three years later by \cite{basset1888treatise}. A more general analytical solution to \eqref{eq:stream-function-equation} has yet to be determined. Much has been written about the Basset drag force in the literature but less about how it was originally derived. Here, we revisit Basset's solution for two reasons. First, his work on the problem of variable slow motion of a sphere in a viscous fluid was last published in 1888, and the innovative analytical methods he used to solve partial differential equations are not well known. Second, we extend his mathematical approach to present a revised form of the Maxey--Riley equation suitable for application to a wider range of Reynolds numbers than the Stokes regime. The solution to \eqref{eq:Eq.I} is outlined in more detail in Appendix \ref{appA}. Briefly, Basset's approach to solving \eqref{eq:Eq.I} for $\psi(t,r,\theta)$ was motivated by the absence of an analytical solution to the linear form of the Navier--Stokes equation for an accelerating particle. He began by first assuming that the sphere moves with constant velocity $V_0$.
In this case, the particular solution for the stream function around a sphere with a moving origin \eqref{eq:Eq.I} is \begin{align} \label{eq:Basset-solution0} \psi(t,r,\theta) =& \frac{1}{2} V_0 a^2 \ \mathrm{sin}^2\theta \Big\{\frac{3 \nu t}{r a} + \frac{6 \sqrt{\nu t/\pi}}{r} + \frac{a}{r} \Big\} \\ &-\frac{3}{\sqrt{\pi}} V_0 a^2 \ \mathrm{sin}^2\theta \int_{\frac{r-a}{2 \sqrt{\nu t}}}^{\infty} \Big\{\frac{2 \xi^2 \nu t}{ra} + \frac{2 \xi \sqrt{\nu t}}{r} + \frac{1}{2}(\frac{a}{r} - \frac{r}{a})\Big\} e^{-\xi^2} d\xi \nonumber \end{align} The stream function around the sphere obtained by Basset \eqref{eq:Basset-solution0} is laminar and its form is identical to that obtained by Stokes \eqref{eq:Basset-infinity}, as shown in Fig. \ref{fig:stream_function_Stokes}. The difference is that the stream function is non-steady due to acceleration of the fluid around the sphere. Basset's unsteady stream function reduces to the Stokes steady stream function at the particle surface $r=a$, and in the limit $t\rightarrow\infty$, where the lower limit $({r-a})/({2 \sqrt{\nu t}})$ of the integral term approaches zero. At a distance radially far from the particle surface, or for shorter times where the fluid has not yet reached a steady motion, the value of the stream function calculated from Basset's solution is greater than that found by Stokes. By substituting \eqref{eq:Basset-solution0} into \eqref{eq:Navier--Stokes-stream-function1}, the solution for the fluid pressure field is \begin{equation} \label{eq:Basset-pressure} p(t,r,\theta) = \frac{3 V_0 a \mu \ \mathrm{cos}\theta}{2 r^2} \big(1 + \frac{a}{\sqrt{\pi \nu t}} \big) \end{equation} and the fluid velocities $v_r$ and $v_{\theta}$ are obtained by substituting \eqref{eq:Basset-solution0} into \eqref{eq:stream-function1} and \eqref{eq:stream-function2}, respectively. The drag force arises from the upstream pressure gradient across a falling particle and the shear stress in the particle boundary layer. At the sphere surface where $r=a$, the drag force is \begin{align} \label{eq:Basset-drag} \nonumber F_D = 2 \pi a^2 \int_{0}^{\pi} \bigg\{ \Big(p - 2 \mu \pd{v_r}{r} \Big) \mathrm{cos}\theta + \mu \Big(\pd{v_\theta}{r} + \frac{1}{r} \pd{v_r}{\theta} -\frac{v_\theta}{r} \Big) \mathrm{sin}\theta \bigg\} \mathrm{sin}\theta \ d\theta \\ = 6 \pi \mu a V_0 \big(1+ \frac{a}{\sqrt{\pi \nu t}} \big) \end{align} With velocity-squared terms neglected, there is thus a correction term to the Stokes drag. For physical insight, suppose that there is a relaxation time to the terminal velocity $\tau_p= V_0/g$ that in the Stokes flow regime is equal to $\tau_p = {m_p}/{6 \pi \mu a}$, simplifying to $\tau_p = \frac{ \rho_p d^2_p}{18 \mu}$. The fractional addition to the Stokes drag in \eqref{eq:Basset-drag} varies temporally as ${a}/{\sqrt{\pi \nu t}}$, which is proportional to $\sqrt{{\tau_p}/{t}}$. Then, $\tau_p$ is the time $t_{max}$ at which the unsteady drag is a maximum. Fluid accelerations around the particle surface exert a force on the particle that is proportional to the particle cross-section. The perturbation diffuses away from the particle as $1/\sqrt{t}$. For the case of turbulent flows, it has been suggested that the appropriate timescale to which the particle relaxation time could be compared is the Kolmogorov timescale $\tau_\eta$, where $\eta$ is the Kolmogorov length scale, in which case the fractional enhancement of unsteady drag to Stokes drag is $\sim a/\eta$ \citep{daitche2015role}.
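For completeness, this proportionality follows in one line from the definition of $\tau_p$ (a small check we add here; only the density ratio enters the prefactor):
\begin{equation*}
\frac{a}{\sqrt{\pi \nu t}} = \sqrt{\frac{a^2}{\pi \nu t}} = \sqrt{\frac{9 \rho_f}{2 \pi \rho_p}\,\frac{\tau_p}{t}}, \qquad \text{since} \quad \tau_p = \frac{\rho_p d^2_p}{18 \mu} = \frac{2 \rho_p a^2}{9 \rho_f \nu}
\end{equation*}
so that at $t = \tau_p$ the fractional enhancement over the Stokes drag is $\sqrt{9 \rho_f/(2 \pi \rho_p)}$. For a density ratio $s = \rho_p/\rho_f = 15$ this is $\approx 0.3$, broadly consistent with the $\sim 25\%$ maximum contribution of the unsteady drag to the total force found in the numerical results below (Fig. \ref{fig:norm_drag_force}).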
Effectively then, there is an extra drag force at constant $V_0$ that prolongs the time it takes the particle to approach its terminal velocity. For the more physical case that $V$ is not a constant, Basset's approach was to substitute in \eqref{eq:Basset-drag} the time variable $t$ with a historical time $\tau$, and $V_0$ with a time-varying velocity of the form $\frac{dV(t-\tau)}{dt} d\tau$, integrating the result from $0$ to $t$. To see the justification for this substitution, consider that the transformation $\zeta = t-\tau$ leads to \begin{align} \label{eq:velocity_variation} \int_{0}^{t} \frac{dV(t-\tau)}{dt} d\tau &= - \int_{t}^{0} \frac{dV(\zeta)}{d\zeta} d\zeta \\ \nonumber &= V(t) - V(0) \end{align} If the sphere starts from rest, then $V(0) = 0$ and, since $V(\zeta)$ is finite between its limits, the integration of the time-varying velocity in \eqref{eq:velocity_variation} yields the current sphere velocity at time $t$. \cite{basset1888treatise} proposed that if $V(t)$ is a solution to a partial differential equation, then the integral of $\frac{d V(t-\tau)}{dt} d\tau$ must also be a solution. The total drag force in \eqref{eq:Basset-drag} then becomes \begin{equation} \label{eq:Basset-drag-timechange} F_D = 6 \pi \mu a \Big( V(t) + a \int_{0}^{t} \frac{1}{\sqrt{\pi \nu \tau}} \frac{dV(t-\tau)}{dt} \ d\tau \Big) \end{equation} Drag is not only a function of the current velocity but also of the particle acceleration due to prior interactions between the particle and the fluid. \cite{basset1910ondescent} later adopted a method developed by \cite{picciati1907sul} that simplifies the procedure of first finding a solution for constant velocity and then for changing velocity. Picciati's method reduces the problem to the determination of a function that satisfies Fourier's heat equation, and yields a solution equivalent to \eqref{eq:Basset-drag-timechange}. The equation of motion for a sphere of mass $m_p$ moving slowly with a time-varying velocity becomes \begin{align} \label{eq:Basset-motion-equation} m_p \frac{d \textit{\textbf{V}}(t)}{dt} = (m_p - m_f) \textit{\textbf{g}} - 6 \pi \mu a \Big( \textit{\textbf{V}}(t) + a \int_{0}^{t} \frac{1}{\sqrt{\pi \nu (t-\tau)}} \frac{d\textit{\textbf{V}}(\tau)}{d\tau} \ d\tau \Big) - \frac{1}{2} m_f \frac{d\textit{\textbf{V}}(t)}{dt} \end{align} Equation \eqref{eq:Basset-motion-equation} does not consider the squares and products of flow velocities in the Navier--Stokes equations \eqref{eq:Navier--Stokes1}-\eqref{eq:Navier--Stokes2} and so it remains valid only for Stokes flow. It is this equation of motion that \cite{Tchen1947mean} employed to account for the effects of temporal variability in the fluid flow and that, with subsequent revisions, led to the Maxey--Riley equation of motion \eqref{eq:Maxey--Riley}. \section{Unsteady drag at high Reynolds numbers} To determine the hydrodynamic fluid forces at higher Reynolds numbers, what is required is a particular solution to the full Navier--Stokes equations \eqref{eq:Navier--Stokes1}-\eqref{eq:Navier--Stokes2}. This is not yet possible due to the mathematical difficulties introduced when higher-order velocity terms are retained. Consequently, these terms have traditionally been either ignored or parameterized based on empirical studies.
In the latter case, the steady drag on a sphere falling with velocity $V_0$ in a stationary, incompressible viscous fluid is expressed using Rayleigh's formula $F_d = \frac{1}{2} \rho_f A_p C_D(\Rey) V^2_0$, so that the total drag force becomes \begin{equation} \label{eq:Basset-drag0} F_D = \frac{1}{2} \rho_f A_p C_D(\Rey){V_0}^2 \Big(1 + \frac{a}{\sqrt{\pi \nu t}} \Big) \end{equation} For example, in the Stokes flow regime, the drag coefficient $C_D(\Rey) = {24}/{\Rey}$ recovers from the Rayleigh formula the familiar expression $F_d = 6 \pi \mu a V_0$ appearing in \eqref{eq:Basset-drag}. For higher Reynolds numbers, empirical estimates of the drag coefficient can be used. But if a more generalized drag force is to be implemented within the context of an equation such as the Maxey--Riley equation, appropriate adjustments must be made to the equation itself. We now proceed to derive an expression for the unsteady drag at high Reynolds numbers in a manner analogous to that described in Section \ref{sec:Basset-solution} for low Reynolds numbers. Following Basset's approach leading to \eqref{eq:Basset-drag-timechange} by way of \eqref{eq:velocity_variation}, a more general equation of motion is then \begin{multline} \label{eq:new-motion-equation} m_p \frac{d V(t)}{dt} = (m_p - m_f) g -\frac{1}{2} \rho_f A_p \bigg( C_D(\Rey) V^2(t) + a \int_{0}^{t} \frac{C_D(\tau) |V(\tau)|}{\sqrt{\pi \nu(t-\tau)}} \frac{dV(\tau)}{d\tau} d\tau \bigg) \\ -\frac{1}{2} m_f \frac{dV(t)}{dt} \end{multline} where the integral term expresses a more generalized unsteady drag. A possible limitation of this expression is that $C_D$ is derived empirically for a particle moving at constant velocity, and does not account for any time variation in the drag due to acceleration. Experimental studies suggest that drag coefficients under such conditions can be significantly higher \citep{hughes1952er,selberg1968drag,igra1993shock}. What is important to note, however, is that within the integrand in \eqref{eq:new-motion-equation}, the particle acceleration is multiplied by the magnitude of the particle velocity, whereas in the Basset equation \eqref{eq:Basset-motion-equation} it is multiplied by a constant. Therefore, for a particle falling at high velocity with a large Reynolds number, unsteady drag is not necessarily negligible as has sometimes been assumed. Ignoring any alterations to the drag force due to forced velocity curvature around the sphere (the Faxén correction $\frac{1}{6} a^2 \nabla^2 v_f$) and viscous shear stress, we propose a more general version of the Maxey--Riley equation of particle motion \begin{align} m_p\frac{d\textit{\textbf{v}}_p}{dt} =& (m_p-\rho_f V_p)\textit{\textbf{g}} + \rho_f V_p\frac{D\textit{\textbf{v}}_f}{Dt} - \frac{1}{2} \rho_f A_p \Big\{C_D(\Rey) |\textit{\textbf{v}}_p-\textit{\textbf{v}}_f| (\textit{\textbf{v}}_p-\textit{\textbf{v}}_f) \nonumber \\ & + a \int_{0}^{t} \frac{C_D(\tau) |\textit{\textbf{v}}_p(\tau)-\textit{\textbf{v}}_f(\tau)|}{\sqrt{\pi \nu (t-\tau)}} \frac{d\big(\textit{\textbf{v}}_p(\tau)-\textit{\textbf{v}}_f(\tau)\big)}{d\tau} \ d\tau \Big\} - k\rho_f V_p\frac{d}{dt} (\textit{\textbf{v}}_p-\textit{\textbf{v}}_f) \label{eq:modified-Maxey--Riley} \end{align} where $\Rey = |v_p(t) - v_f(t)| d_p/\nu$ is the particle's relative Reynolds number. \section{Numerical analysis} The equation of motion \eqref{eq:new-motion-equation} is now solved numerically.
The particle velocity is initialized at some value close to zero, the particle's Reynolds number $\Rey={|V(t)|d_p}/{\nu}$ is specified, and the drag coefficient is calculated from an empirically derived relationship between the drag coefficient and the Reynolds number of a rigid sphere \citep{whiteviscous}
\begin{equation}\label{eq:drag_coefficient} C_D(\Rey)=\begin{cases} \frac{1}{4} + \frac{24}{\Rey} +\frac{6}{1+\sqrt{\Rey}} & \text{if $\Rey \leq 3000$,}\\ 0.3659 & \text{otherwise.} \end{cases} \end{equation}
The history term in \eqref{eq:new-motion-equation} can be estimated except where $\tau$ approaches $t$, at which point the integrand becomes infinite and must be treated separately. Approximating the integral by a Riemann sum over previous time steps, the history term evolves from the previous time step $(t-\Delta{t})$ through
\begin{align} \label{eq:Basset_integral} \int_{0}^{t} \frac{V(\tau)}{\sqrt{t-\tau}} \frac{dV(\tau)}{d\tau} \ d\tau &= \lim_{n\rightarrow\infty}\sum_{k=1}^{n-1} \bigg(\frac{V(t-k\Delta{t})}{\sqrt{k}} \ \frac{dV(t-k\Delta{t})}{dt} \bigg) \cdot \Delta{t}^{1/2} \end{align}
where $t$ is the time of motion and $\Delta{t}$ is the time interval employed in the simulation. The right-hand side of \eqref{eq:Basset_integral} is amenable to standard numerical techniques.
\begin{figure} \centering \includegraphics[width=14cm, height=8cm]{norm_drag_force.pdf} \caption{Normalized steady and unsteady drag forces acting on a particle falling into stationary air over a normalized log-time of motion, according to \eqref{eq:new-motion-equation}. The forces are normalized by gravity and the time of motion by the particle relaxation time in Stokes flow $\tau_p$. The normalized total force is also shown. Left: particle with a density ratio of $s=15$ and a low Reynolds number of $\Rey = 0.2$. Right: particle with a density ratio of $s=830$ and a Reynolds number of $\Rey = 1100$. The dashed purple line shows for comparison the unsteady Basset drag calculated using the Maxey--Riley equation of motion \eqref{eq:Maxey--Riley}.} \label{fig:norm_drag_force} \end{figure}
Equation \eqref{eq:new-motion-equation} was solved numerically for the approach of a particle initially at rest to its terminal velocity, considering both a low and a high Reynolds number. Fig. \ref{fig:norm_drag_force} shows a comparison of steady and unsteady drag forces normalized by the gravity force as a function of time normalized by the particle Stokes time $\tau_p$. For a particle with a Reynolds number of $\Rey = 0.2$, the generalized equation for unsteady drag, the integral term in \eqref{eq:new-motion-equation}, is equivalent to the Basset history term and reaches a maximum of $25\%$ of the total force when $t/\tau_p \simeq 1$. Its contribution to the particle acceleration is negligible as the drag turns steady and the particle approaches its terminal velocity. For a higher Reynolds number of $\Rey = 1100$, the unsteady drag accounts for a maximum of $\sim15\%$ of the total force at a time much shorter than the particle relaxation time $\tau_p$, while the Basset history drag plays a negligible role. Fig. \ref{fig:drag_different_Re} shows that the time at which the unsteady drag reaches its maximum value decreases exponentially with the cube root of the particle Reynolds number, i.e., $\ln(t_{max}/\tau_p) \propto -\Rey^{1/3}$. 
Also shown is the ratio of the Basset drag to the generalized unsteady drag, which also decreases with $\Rey^{1/3}$, indicating a diminished relative importance of the Basset drag at higher Reynolds numbers. So while the revised unsteady drag dominates the Basset history drag, thereby increasing the total drag and reducing the particle terminal velocity, the period over which the unsteady drag affects the particle motion is correspondingly short relative to the Stokes time.
\begin{figure} \centering \includegraphics[width=9cm, height=8cm]{drag_different_Re.pdf} \caption{Time relative to the Stokes time $\tau_p$ at which the unsteady drag is a maximum, and the corresponding ratio of the Basset drag to the unsteady drag, as a function of the cube root of the Reynolds number.} \label{fig:drag_different_Re} \end{figure}
\section{Discussion} There remain some important limitations to \eqref{eq:new-motion-equation}. First, there is an implicit assumption that the particle starts from rest. While nonetheless assuming Stokes flow, \cite{basset1888treatise} developed a rather more complicated equation of motion for a particle initially projected vertically with velocity $V_i$ (for derivation, see Appendix \ref{appB})
\begin{align} \label{eq:Basset-motion-equation-inivelocity} \frac{d \textit{\textbf{V}}(t)}{dt} = - \lambda \textit{\textbf{V}}_i e^{-\lambda t} + f \textit{\textbf{g}} \ e^{-\lambda t} - \lambda a \ \frac{d}{dt} \int_{0}^{t} \int_{0}^{\upsilon} \frac{e^{-\lambda (t-\upsilon)}}{\sqrt{\pi \nu (\upsilon-\tau)}} \frac{d\textit{\textbf{V}}(\tau)}{d\tau} \ d\tau \ d\upsilon \end{align}
The coefficient $f = \frac{m_p - m_f}{m_p + \frac{1}{2} m_f}$ simplifies to $f = \frac{\rho_p - \rho_f}{\rho_p + \frac{1}{2} \rho_f}$, and $\lambda = \frac{6\pi \mu a}{m_p + \frac{1}{2} m_f}$ to $\lambda = \frac{9 \rho_f \nu}{2 a^2 (\rho_p + \frac{1}{2}\rho_f)}$. Basset was unable to integrate this complicated integro-differential equation, but for the limited case of $\lambda \ll 1$, as applies to a sphere moving in a fluid whose kinematic viscosity is small, he used a method of successive approximation to obtain the acceleration and velocity to the third power in $\lambda$. Later, \cite{boggio1907integrazione} successfully reduced the complexity of the problem to a solvable second-order differential equation (see Appendix \ref{appC}). The solution employs error functions of the form $\mathrm{erf}(\sqrt{\alpha t})$ and $\mathrm{erf}(\sqrt{\beta t})$, where $\alpha,\beta = \frac{\lambda}{2}\{(q-2) \pm \sqrt{q(q-4)}\}$ and $q={\lambda a^2}/{\nu}$. Substituting the above expression for $\lambda$ yields $q=\frac{9 \rho_f}{2\rho_p + \rho_f}$. For a particle denser than the fluid, $q<4$, and $\alpha$ and $\beta$ are complex numbers. 
For this case,
\begin{align} \label{eq:Basset-analytical-solution-inivelocity} V(t) = \frac{f g}{\lambda} &+ \big(V_i - \frac{f g}{\lambda} \big) e^{\gamma t} \big\{\cos(\delta t) - \frac{\gamma + \lambda}{\delta} \ \sin(\delta t)\big\} \\ &-\frac{h \ e^{\gamma t}}{\delta} \bigg\{\cos(\delta t) \int_{0}^{t} \frac{e^{-\gamma t} \sin(\delta t)}{\sqrt{t}} dt- \sin(\delta t) \int_{0}^{t} \frac{e^{-\gamma t} \cos(\delta t)}{\sqrt{t}} dt \bigg\} \nonumber \end{align}
where $\gamma = \frac{\lambda}{2}(q-2) = - \frac{\lambda}{2} \big(\frac{4\rho_p-7\rho_f}{2\rho_p+\rho_f}\big)$, $\delta = \frac{\lambda}{2} \sqrt{q(4-q)} = \frac{\lambda}{2} q^{\frac{1}{2}} \sqrt{\frac{8\rho_p-5\rho_f}{2\rho_p+\rho_f}}$, and $h = \frac{\lambda a}{\sqrt{\pi \nu}} (f g - \lambda V_i)$. This equation is not widely known, but it significantly reduces the computational expense of finding a solution for $V(t)$ by eliminating the requirement of tracking the history of the particle's motion. A second, more troubling limitation of \eqref{eq:Basset-motion-equation-inivelocity}-\eqref{eq:Basset-analytical-solution-inivelocity}, and hence also of \eqref{eq:Basset-motion-equation} and \eqref{eq:new-motion-equation}, is that for a particle starting at $t = 0$ with a finite vertical velocity, the effect of the initial velocity (or any disturbance to the flow field surrounding the sphere) on the eventual particle displacement does not decay to zero at infinite time. The end result is that the terminal velocity differs from that expected from the Stokes solution. While the effect is small, it nonetheless implies the unphysical property of infinite memory in a dissipative viscous fluid \citep{reeks1984dispersive}. To resolve this issue, \cite{sano1981unsteady} applied a matching procedure initially developed by \cite{bentwich1978unsteady} to unsteady low Reynolds number flow past a sphere and found that the drag decays faster than $t^{-1/2}$ when $t \gg \tau_p$. Thus, the temporal dependence of the Basset drag is only appropriate at times less than $\tau_p$, when inertial forces are low compared to viscous forces. A similar conclusion was reached by \cite{mei1991unsteady}. \cite{mei1992flow} applied a method of successive approximations to solve the Navier--Stokes equation to $\mathrm{O}(\Rey)$ for the case of oscillating flow over a sphere, by considering small fluctuations in velocity when the Reynolds number is not negligibly small. \cite{mei1992flow} then proposed a modified expression for the unsteady drag that includes an integration kernel that decays as $t^{-2}$ for $t\gg \tau_p$, limited to finite Reynolds numbers $(\Rey \leq 100)$ and small-amplitude fluctuations in the velocity of the free stream. \cite{mei1994flow} later investigated the applicability of the kernel for other types of unsteady flows. \cite{mainardi1997fractional} went further, interpreting the Basset force in terms of a fractional derivative of any order $\ell$ in the interval $0 < \ell < 1$ as
\begin{equation} \label{eq:Basset-drag-fractionalDeriv} F_D = \frac{9}{2}m_f \Big( \frac{1}{\tau_0} + \frac{1}{\tau^{1-\ell}_0} \frac{d^{\ell}}{{dt}^{\ell}} \ \Big) V(t) \end{equation}
where $\tau_0= a^2/\nu$ represents the characteristic time to reach steady state in a viscous fluid. Setting $\ell = 1/2$ recovers the total Basset drag in \eqref{eq:Basset-drag-timechange}. 
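For readers wishing to experiment with \eqref{eq:Basset-drag-fractionalDeriv}, a minimal sketch (in Python) of its evaluation using the Gr\"unwald--Letnikov approximation of the fractional derivative follows; the sampled velocity signal and the parameter values in the demonstration are illustrative assumptions.
\begin{verbatim}
import numpy as np

def gl_weights(ell, n):
    """Grunwald-Letnikov weights w_j = (-1)^j * binom(ell, j), by recursion."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (ell + 1.0) / j)
    return w

def basset_force_fractional(V, dt, ell, m_f, tau0):
    """Fractional-derivative drag force, evaluated at the last sample of V."""
    n = len(V) - 1
    dV_frac = dt**(-ell) * np.dot(gl_weights(ell, n), V[::-1])
    return 4.5 * m_f * (V[-1] / tau0 + dV_frac / tau0**(1.0 - ell))

# Sanity check of the operator: for ell = 1/2 and V(t) = sqrt(t), the
# fractional derivative is the constant sqrt(pi)/2 ~ 0.8862.
dt = 1.0e-4
t = np.arange(0.0, 1.0 + dt, dt)
print(dt**-0.5 * np.dot(gl_weights(0.5, len(t) - 1), np.sqrt(t)[::-1]))

# Demonstration with assumed values of the displaced mass and tau0.
print(basset_force_fractional(np.sqrt(t), dt, 0.5, m_f=1.0e-6, tau0=0.01))
\end{verbatim}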
This generalization, suggested by mathematical speculation, modifies the behaviour of the solution, changing its decay from $t^{-1/2}$ to $t^{-\ell}$ for $t\gg\tau_p$. \cite{mainardi1997fractional} considered three cases of $\ell = 1/4$, $1/2$, and $3/4$ and compared the particle's approach to terminal velocity with the desired temporal adjustment behavior $e^{-t/\tau_p}$ expected from Stokes drag $(\ell = 0)$. The results yielded improved agreement with the Stokes solution, but the topic is still considered unsolved, as ideally it requires a full solution to the Navier--Stokes equations, including the non-linear inertia terms involving the products of velocities. \section{Conclusions} The Maxey--Riley equation was originally developed for the study of small, slow-moving spheres but is widely used for higher Reynolds numbers under the assumption that the unsteady Basset drag is insignificant relative to the steady drag. Here we have presented a historical review of the derivation of the equation of motion that leads to the Maxey--Riley equation and have argued that the Basset drag can be suitably applied only when Reynolds numbers are small. Following Basset's original approach, but considering drag proportional to the square of the particle relative velocity, a revised analytical equation is developed for extension to higher Reynolds numbers. Simulations based on this equation show that the unsteady drag force contributes substantially to the total drag at timescales less than the Stokes time, even for high values of the Reynolds number. \section*{Acknowledgments} This work is supported by the U.S. Department of Energy (DOE) Atmospheric System Research program award number DE-SC0016282 and the National Science Foundation (NSF) Physical and Dynamic Meteorology program award number 1841870.
\section{Introduction} \label{sec:Introduction} 3D Dirac semimetals are 3D analogs of graphene \cite{Geim}. Their conduction and valence bands touch only at discrete (Dirac) points in the Brillouin zone with the electron states described by the 3D massless Dirac equation. Each Dirac point in momentum space is composed of two superimposed Weyl nodes of opposite chirality. Such points are usually obtained by fine tuning of certain physical parameters (e.g., the spin-orbit coupling strength or chemical composition) and are difficult to control. Additionally, they are often unstable with respect to the mixing of the Weyl nodes and the opening of a gap. An important idea was proposed in Refs.~\cite{Manes:2011jk,Mele}, where it was shown that an appropriate crystal symmetry can protect and stabilize the gapless 3D Dirac points. Indeed, if a pair of crossing bands belong to different irreducible representations of the discrete (rotational) crystal symmetry and if this symmetry is not broken dynamically, then the mass term for the corresponding Dirac fermions will be prohibited. The {\it ab initio} calculations in Ref.~\cite{Mele} showed that $\beta$-cristobalite $\mathrm{BiO_2}$ exhibits three Dirac points at the Fermi level. Unfortunately, this material is metastable. By using first-principles calculations and an effective model analysis, the compounds $\mathrm{A_3Bi}$ (A=Na, K, Rb) and $\mathrm{Cd_3As_2}$ were identified in Refs.~\cite{Fang,WangWeng} as possible 3D Dirac semimetals protected by crystal symmetry. Giant diamagnetism, linear quantum magnetoresistance, and the quantum spin Hall effect are expected in these materials. Furthermore, various topologically distinct phases can be realized in these compounds by breaking the time-reversal and inversion symmetries. By using angle-resolved photoemission spectroscopy, the Dirac semimetal band structure was indeed observed \cite{Borisenko,Neupane,Liu} in $\mathrm{Cd_3As_2}$ and $\mathrm{Na_3Bi}$, opening the path toward experimental investigations of the properties of 3D Dirac semimetals. Weyl semimetals are another group of materials that are closely related to 3D Dirac semimetals and have already attracted a lot of theoretical interest (for reviews, see Refs.~\cite{Hook,Turner,Vafek}). They are characterized by topologically non-trivial Weyl nodes in reciprocal space. Weyl nodes are monopoles of the Berry flux and, therefore, can appear or annihilate only in pairs. Weyl semimetals were proposed to be realized in pyrochlore iridates \cite{Savrasov}, topological heterostructures \cite{Balents}, magnetically doped topological insulators \cite{Cho}, and nonmagnetic materials such as $\mathrm{TaAs}$ \cite{1501.00060,1501.00755}. Recently, the first experimental studies of the Weyl semimetal candidate $\mathrm{TaAs}$ were reported in Refs.~\cite{1502.00251,1502.03807,1502.04684,1503.01304}. The authors observed unusual transport properties and surface states that are characteristic of the Weyl semimetal phase. Another interesting realization of the Weyl points in the context of photonic crystals has been recently reported in Ref.~\cite{Weyl-photonic}. Since a magnetic field breaks the time-reversal symmetry, a Dirac (semi-)metal in a magnetic field may transform into a Weyl one with Weyl nodes separated in momentum space by a nonzero chiral shift \cite{Gorbar:2013qsa}. Experimentally, the transition from a Dirac metal to a Weyl one in a magnetic field might have been observed in $\mathrm{Bi_{1-x}Sb_x}$ for $x \approx 0.03$ \cite{Kim:2013dia}. 
In moderately strong magnetic fields, a negative magnetoresistivity is observed and interpreted as a fingerprint \cite{Nielsen:1983rb,Son:2012bg,Gorbar:2013dha} of a Weyl/Dirac metal phase. The surface Fermi arcs \cite{Savrasov,Haldane,Aji,Okugawa:2014ina}, which connect Weyl nodes of opposite chirality, are related to the non-trivial topology of Weyl semimetals. In equilibrium, the presence of such surface states ensures that the chemical potentials at different Weyl points are identical \cite{Haldane}. Although Fermi arcs always connect Weyl nodes of opposite chirality, their shapes depend on the boundary conditions and, as shown in Ref.~\cite{Hosur}, Fermi arcs of an arbitrary form can be engineered. The Fermi arcs on the opposite surfaces of a semimetal sample together with the Fermi surfaces of bulk states form a closed Fermi surface. In an external magnetic field, the nontrivial structure of the corresponding Fermi surface gives rise to closed magnetic orbits involving the surface Fermi arcs \cite{Vishwanath}. These orbits produce periodic quantum oscillations of the density of states in a magnetic field, leading to an unconventional Fermiology of surface states. It was argued in Ref.~\cite{Gorbar:2014qta} that interaction effects can change the separation between the Weyl nodes and the length of the Fermi arcs in reciprocal space and, thus, affect these magnetic orbits. As a result, we found that the period of oscillations of the density of states related to closed magnetic orbits involving Fermi arcs has a non-trivial dependence on the orientation of the magnetic field projection in the plane of the semimetal surface \cite{Gorbar:2014qta}. If experimentally observed, such a dependence would provide an important clue to the effects of interactions in Weyl semimetals. Normally, one would not expect any surface Fermi arcs in 3D Dirac semimetals because the Dirac point has no topological charge and the associated Berry flux vanishes. In Refs.~\cite{Fang,WangWeng}, however, it was shown that the 3D Dirac semimetals $\mathrm{A_3Bi}$ (A=Na, K, Rb) and $\mathrm{Cd_3As_2}$ possess non-trivial surface Fermi arcs. This finding suggests a topologically nontrivial nature of the corresponding Dirac materials. Recently, we showed \cite{Gorbar:2014sja} that this is indeed the case for Dirac semimetals $\mathrm{A_3Bi}$ ($\mathrm{A}=\mathrm{Na}, \mathrm{K}, \mathrm{Rb}$). The physical reason for their nontrivial topological properties is connected with a discrete symmetry of the low-energy effective Hamiltonian. The symmetry classification allows one to split all electron states into two separate sectors, each describing a Weyl semimetal with a pair of Weyl nodes and broken time-reversal symmetry. The time-reversal symmetry is preserved in the complete theory because its transformation interchanges states from the two different sectors. The nontrivial topological structure of each sector was supported by explicit calculations of the Berry curvature, which revealed a pair of monopoles of the Berry flux at the positions of the Weyl nodes in each of the two sectors of these semimetals \cite{Gorbar:2014sja}. In essence, these results demonstrated that the Dirac semimetals $\mathrm{A_3Bi}$ ($\mathrm{A}=\mathrm{Na}, \mathrm{K}, \mathrm{Rb}$) are, in fact, $\mathbb{Z}_2$ Weyl semimetals. 
In Refs.~\cite{Fang,WangWeng}, the surface Fermi arcs in 3D Dirac semimetals were obtained in a tight-binding model by using an iterative method that produces the surface Green's function of the semi-infinite system \cite{Yu}. The imaginary part of the surface Green's function makes it possible to determine the local density of states at the surface. While such a technique is very powerful, it is essentially a ``black box''. In contrast, in the present paper, we study analytically the surface Fermi arc states by employing the continuum low-energy effective model with appropriate boundary conditions at the surface. We hope that such a consideration will provide a deeper understanding of the physical properties and characteristics of the surface Fermi arcs, as well as shed more light on the nontrivial topological properties of the $\mathrm{A_3Bi}$ compounds. The paper is organized as follows. In Sec.~\ref{sec:model}, we introduce the low-energy effective model and discuss its symmetries. The recently revealed $\mathbb{Z}_2$ Weyl semimetal structure of $\mathrm{A_3Bi}$ ($\mathrm{A}=\mathrm{Na}$, $\mathrm{K}$, $\mathrm{Rb}$) is emphasized. In order to clarify the origin and the structure of the surface Fermi arcs, we study in Sec.~\ref{sec:2x2Model} the corresponding states in a simplified model that contains a single Weyl semimetal sector. In Sec.~\ref{sec:Realistic4x4Model}, we present the rigorous analysis of the surface Fermi arc states in a realistic low-energy model of semimetals $\mathrm{A_3Bi}$ ($\mathrm{A}=\mathrm{Na}$, $\mathrm{K}$, $\mathrm{Rb}$). The effects of several possible symmetry breaking terms on the structure of the surface Fermi arc states are investigated in Sec.~\ref{sec:DynamicalParametersModel}. The discussion and the summary of the main results are given in Sec.~\ref{Conclusion}. Technical details regarding the symmetry properties and classification of the Fermi arc states are presented in Appendices~\ref{AppExtra} and \ref{AppA}. For convenience, throughout the paper, we set $\hbar=1$ and $c=1$. \section{Model} \label{sec:model} \subsection{Low-energy effective Hamiltonian} \label{effective-Hamiltonian} The low-energy Hamiltonian derived in Ref.~\cite{Fang} for $\mathrm{A_3Bi}$ ($\mathrm{A}=\mathrm{Na}, \mathrm{K}, \mathrm{Rb}$) has the form
\begin{equation} H(\mathbf{k}) = \epsilon_0(\mathbf{k}) + H_{4\times 4}, \label{low-energy-Hamiltonian} \end{equation}
where $\epsilon_0(\mathbf{k}) = C_0 + C_1k_z^2+C_2(k_x^2+k_y^2)$ and
\begin{equation} H_{4\times 4} = \left( \begin{array}{cccc} M(\mathbf{k}) & Ak_+ & 0 & B^{*}(\mathbf{k}) \\ Ak_- & -M(\mathbf{k}) & B^{*}(\mathbf{k}) & 0 \\ 0 & B(\mathbf{k}) & M(\mathbf{k}) & -Ak_- \\ B(\mathbf{k}) & 0 & -Ak_+ & -M(\mathbf{k}) \\ \end{array} \right). \label{low-energy-Hamiltonian4x4} \end{equation}
While the diagonal elements of $H_{4\times 4}$ are given in terms of a single function, $M(\mathbf{k}) = M_0 - M_1 k_z^2-M_2(k_x^2+k_y^2)$, the off-diagonal elements are determined by functions $Ak_{\pm}$ and $B(\mathbf{k}) = \alpha k_zk_{+}^2$, where $k_{\pm} = k_x\pm ik_y$. By fitting the energy spectrum of the effective Hamiltonian with the {\em ab initio} calculations, the numerical values of parameters in the effective model were determined in Ref.~\cite{Fang}. 
They are
\begin{equation} \begin{array}{lll} C_0 = -0.06382~\mbox{eV},\qquad & C_1 = 8.7536~\mbox{eV\,\AA}^2,\qquad & C_2 = -8.4008~\mbox{eV\,\AA}^2,\\ M_0=-0.08686~\mbox{eV},\quad & M_1=-10.6424~\mbox{eV\,\AA}^2,\qquad & M_2=-10.3610~\mbox{eV\,\AA}^2,\\ A=2.4598~\mbox{eV\,\AA},\qquad & a=5.448~\mbox{\AA},\qquad & c=9.655~\mbox{\AA}, \end{array} \label{model-parameters} \end{equation}
where we also included the lattice constants $a$ and $c$. Since no specific value for $\alpha$ was quoted in Ref.~\cite{Fang}, we will treat it as a free parameter below. The energy eigenvalues of the low-energy Hamiltonian (\ref{low-energy-Hamiltonian}) are given by the following explicit expression:
\begin{equation} E(\mathbf{k})=\epsilon_0(\mathbf{k}) \pm \sqrt{M^2(\mathbf{k})+A^2k_{+}k_{-}+|B(\mathbf{k})|^2}. \label{energy-dispersion} \end{equation}
It is easy to check that the term with the square root vanishes at the two Dirac points, $\mathbf{k}^{\pm}_0=\left(0, 0, \pm \sqrt{m}\right)$, where $\sqrt{m}\equiv \sqrt{M_0/M_1}$. With the choice of the low-energy parameters in Eq.~(\ref{model-parameters}), we find that $\sqrt{m}\approx 0.09034~\mbox{\AA}^{-1}$. The function $B(\mathbf{k})$ plays the role of a momentum dependent mass (gap) function that vanishes at the Dirac points. It is instructive to show that, upon linearizing $M(\mathbf{k})$ in the vicinity of the Dirac points $\mathbf{k}^{\pm}_0$, Hamiltonian (\ref{low-energy-Hamiltonian4x4}) takes the form of a 3D massive Dirac Hamiltonian. In the vicinity of $\mathbf{k}^{-}_0$, expanding $M(\mathbf{k})$ to the linear order in $\mathbf{\delta{k}}=\mathbf{k}-\mathbf{k}^{-}_0$, we obtain
\begin{equation} H^{\rm lin}_{4\times 4}=\left( \begin{array}{cc} A(\tilde{k}_x\sigma_x-\tilde{k}_y\sigma_y-\tilde{k}_z\sigma_z) & B^{*}(\mathbf{k}) \sigma_x \\ B(\mathbf{k}) \sigma_x & -A\,\mathbf{\tilde{k}}\cdot\bm{\sigma} \\ \end{array} \right), \label{model-Hamiltonian-1} \end{equation}
where $\bm{\sigma}$ are Pauli matrices and $\mathbf{\tilde{k}}=(k_x,k_y,2\delta k_z\sqrt{M_0M_1}/A)$. Furthermore, by performing the unitary transformation, $\tilde{H}^{\rm lin}_{4\times 4}\equiv U_x^{+}H^{\rm lin}_{4\times 4}U_x$, where $U_x=\mbox{diag}( \sigma_x , I_2)$ and $I_2$ is the $2\times 2$ unit matrix, we find that the Hamiltonian takes the standard form of the Dirac Hamiltonian in the chiral representation,
\begin{equation} \tilde{H}^{\rm lin}_{4\times 4}=\left( \begin{array}{cc} A\,\mathbf{\tilde{k}}\cdot\bm{\sigma} & B^{*}(\mathbf{k}) \\ B(\mathbf{k}) & -A\,\mathbf{\tilde{k}}\cdot\bm{\sigma} \\ \end{array} \right). \label{model-Hamiltonian-canonical} \end{equation}
Taking into account that the mass term $B(\mathbf{k})$ vanishes at the Dirac point, we conclude that the upper and lower $2 \times 2$ blocks describe quasiparticle states of opposite chiralities. Also, since the leading order nonzero corrections to the mass function are quadratic in momentum, the chirality remains a good quantum number in a sufficiently small vicinity of the Dirac point. Hamiltonian (\ref{model-Hamiltonian-canonical}), describing two subsets of the opposite chirality states near a single Dirac point, does not appear to have any interesting topological properties. Also, by itself, it is unlikely to give rise to any Fermi arc states. It is easy to check, however, that Hamiltonian (\ref{low-energy-Hamiltonian4x4}) linearized near $\mathbf{k}^{+}_0$ has a similar structure and describes two additional subsets of the opposite chirality states. 
As we argue below, the superposition of the two sectors of the theory is nontrivial and gives rise to an interesting topological structure \cite{Gorbar:2014sja}. \subsection{Symmetries} \label{sec:symmetries} Let us briefly review the symmetry properties of the low-energy Hamiltonian following Ref.~\cite{Gorbar:2014sja}. We start by pointing out that, as expected, the Hamiltonian (\ref{low-energy-Hamiltonian}) is invariant under the time-reversal and inversion symmetries, i.e.,
\begin{eqnarray} \Theta H_{-\mathbf{k}} \Theta^{-1} &=& H_{\mathbf{k}}, \qquad \mbox{(time-reversal symmetry)} \label{T-symmetry} \\ PH_{\mathbf{-k}}P^{-1} &=& H_{\mathbf{k}},\qquad \mbox{(inversion symmetry)} \label{P-symmetry} \end{eqnarray}
where $\Theta=TK$ ($K$ is complex conjugation) and
\begin{equation} T = \left( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{array} \right), \qquad P=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{array} \right). \label{P-matrix} \end{equation}
Having both the time-reversal and inversion symmetries suggests that the corresponding compounds are not Weyl semimetals. This is not the whole story, however. As shown in Ref.~\cite{Gorbar:2014sja}, the low-energy Hamiltonian in Eq.~(\ref{low-energy-Hamiltonian}) possesses a new discrete symmetry, the so-called up-down parity (ud-parity), that protects its topological nature. In order to understand the corresponding symmetry, it is instructive to start from the approximate Hamiltonian without the mass function $B(\mathbf{k})$ (or, equivalently, $\alpha =0$). In this case, the $4\times 4$ Hamiltonian takes a block diagonal form: $H_{4\times 4}(\alpha=0) \equiv H^{+}_{2\times 2}\oplus H^{-}_{2\times 2}$. The explicit form of the upper block is given by
\begin{eqnarray} \label{Hamiltonians_+_New} &&H^{+}_{2\times 2}= \left( \begin{array}{cc} M_0-M_1k^2_z-M_2(k^2_x+k^2_y) & A(k_x+ik_y) \\ A(k_x-ik_y) & -\left[M_0-M_1k^2_z-M_2(k^2_x+k^2_y)\right] \\ \end{array} \right). \end{eqnarray}
This block Hamiltonian defines a Weyl semimetal with two Weyl nodes located at $\mathbf{k}^{\pm}_0$. (The lower block $H^{-}_{2\times 2}$ has a similar form, except that $k_x$ is replaced by $-k_x$.) It is well known \cite{Okugawa:2014ina,Vishwanath} that such a Weyl semimetal has a surface Fermi arc connecting the Weyl nodes of opposite chirality at $\mathbf{k}^{+}_0$ and $\mathbf{k}^{-}_0$. Because of the sign difference, $k_x \to -k_x$, the chiralities of the states near the Weyl nodes at $\mathbf{k}^{\pm}_0$ are opposite for the upper and lower block Hamiltonians. Thus, the complete $4\times 4$ block diagonal Hamiltonian $H_{4\times 4}(\alpha=0)$ describes two superimposed copies of a Weyl semimetal with two pairs of overlapping nodes. Since the opposite chirality Weyl nodes coincide exactly in momentum space, they effectively give rise to a pair of Dirac points at $\mathbf{k}^{\pm}_0$. At the same time, because the opposite chirality nodes come from two different Weyl copies, they cannot annihilate and cannot form topologically trivial Dirac points. In fact, the corresponding approximate model describes a $\mathbb{Z}_2$ Weyl semimetal \cite{Gorbar:2014sja}. The nontrivial topological properties, associated with the underlying $\mathbb{Z}_2$ Weyl semimetal structure, ensure that the resulting Dirac semimetal possesses surface Fermi arcs. 
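These statements are easy to verify numerically. The following minimal sketch (in Python) builds the Hamiltonian of Eqs.~(\ref{low-energy-Hamiltonian}) and (\ref{low-energy-Hamiltonian4x4}) with the parameter values of Eq.~(\ref{model-parameters}), checks the symmetry relations (\ref{T-symmetry}) and (\ref{P-symmetry}), confirms that the spectrum is degenerate at the Dirac points, and confirms the block-diagonal structure at $\alpha=0$. The test momentum and the value of $\alpha$ are arbitrary assumptions made purely for the check.
\begin{verbatim}
import numpy as np

C0, C1, C2 = -0.06382, 8.7536, -8.4008
M0, M1, M2 = -0.08686, -10.6424, -10.3610
A, alpha = 2.4598, 1.0            # alpha is a free parameter (assumed here)

def H(kx, ky, kz, al=alpha):
    """Low-energy 4x4 Hamiltonian defined in the text (k in 1/Angstrom)."""
    eps0 = C0 + C1 * kz**2 + C2 * (kx**2 + ky**2)
    M = M0 - M1 * kz**2 - M2 * (kx**2 + ky**2)
    kp, km = kx + 1j * ky, kx - 1j * ky
    B = al * kz * kp**2
    return eps0 * np.eye(4) + np.array(
        [[M,      A * kp,  0,          np.conj(B)],
         [A * km, -M,      np.conj(B), 0         ],
         [0,      B,       M,         -A * km    ],
         [B,      0,      -A * kp,    -M         ]])

T = np.array([[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]])
P = np.diag([1.0, -1.0, 1.0, -1.0])

k = np.array([0.03, -0.07, 0.05])                 # arbitrary test momentum
Hk, Hmk = H(*k), H(*(-k))
print(np.allclose(T @ Hmk.conj() @ np.linalg.inv(T), Hk))  # time reversal
print(np.allclose(P @ Hmk @ P, Hk))                        # inversion

kz0 = np.sqrt(M0 / M1)            # Dirac points at (0, 0, +/- kz0)
print(np.linalg.eigvalsh(H(0.0, 0.0, kz0)))   # four coinciding eigenvalues
print(np.allclose(H(*k, al=0.0)[:2, 2:], 0.0))  # block diagonal at alpha = 0
\end{verbatim}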
It is easy to show that the existence of the $\mathbb{Z}_2$ Weyl semimetal structure in the absence of $B(\mathbf{k})$ is connected with the continuous symmetry $\mathrm{U}_{+}(1)\times \mathrm{U}_{-}(1)$ of the approximate Hamiltonian $H_{4\times 4}(\alpha=0)$. This symmetry describes independent phase transformations of the spinors that correspond to the up- and down-block Hamiltonians, $H^{+}_{2\times 2}$ and $H^{-}_{2\times 2}$, respectively. For $B(\mathbf{k})\neq 0$, the continuous symmetry $\mathrm{U}_{+}(1) \times \mathrm{U}_{-}(1)$ is broken down to its diagonal subgroup $\mathrm{U}_{\rm em}(1)$ that describes the usual charge conservation. However, the low-energy Hamiltonian (\ref{low-energy-Hamiltonian}) with the momentum dependent mass function $B(\mathbf{k}) =\alpha k_zk^2_{+}$ possesses a ud-parity, defined by the following transformation \cite{Gorbar:2014sja}:
\begin{equation} U H_{-k_z} U^{-1}= H_{k_z} ,\quad \mbox{(ud-parity)}, \label{flavor-symmetry} \end{equation}
where the matrix $U$ has the following block diagonal form: $U\equiv \mbox{diag}(I_2,-I_2)$ and $I_2$ is the $2\times 2$ unit matrix. For the Hamiltonian to be symmetric under the ud-parity, it is crucial that the mass function $B(\mathbf{k})$ changes its sign when $k_z\to -k_z$ [while the functions $\epsilon_0(\mathbf{k})$ and $M(\mathbf{k})$ in the diagonal elements do not change their signs]. In the special case of a momentum independent mass function, such a discrete symmetry does not exist. As was argued in Ref.~\cite{Gorbar:2014sja}, the existence of the noncommuting time-reversal and ud-parity symmetries implies that the $\mathrm{A_3 Bi}$ semimetal is, in fact, a $\mathbb{Z}_2$ Weyl semimetal. In such a semimetal, all quasiparticle states can be split into two separate groups, labeled by the eigenvalues $\chi=\pm 1$ of $U_{\chi} =U\Pi_{k_z}$, where $\Pi_{k_z}$ is the operator that changes the sign of the $z$ component of momentum, $k_z \to -k_z$. Effectively, each group of states defines a Weyl semimetal with a broken time-reversal symmetry. The corresponding symmetry is preserved in the complete theory, in which the two copies of Weyl semimetals are superimposed. The $\mathbb{Z}_2$ Weyl semimetal structure of $\mathrm{A_3 Bi}$ ($\mathrm{A}=\mathrm{Na}, \mathrm{K}, \mathrm{Rb}$) is also supported by the explicit calculation of the Berry connection and the Berry curvature in each Weyl sector, as described in Ref.~\cite{Gorbar:2014sja}. In particular, the corresponding results for the curvature in momentum space reveal a clear dipole structure. It is natural that each Weyl sector, described by quasiparticle states with a fixed eigenvalue of $U_\chi$, should give rise to Fermi arcs connecting the pairs of Weyl nodes at $\mathbf{k}^{\pm}_0$. Moreover, such arcs should be topologically protected and cannot be removed by small perturbations of model parameters. In our discussion of Fermi arcs below, it will also be useful to take into account that there exists yet another discrete symmetry defined by the following transformation:
\begin{eqnarray} \tilde{U} H_{-k_x} \tilde{U}^{-1} &=& H_{k_x}, \label{second-discrete-symmetry} \end{eqnarray}
where
\begin{equation} \tilde{U} = \left( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array} \right). 
\label{U-prime-matrix} \end{equation}
It is interesting to note that the product of the $U_{\chi}$ and $\tilde{U}\Pi_{k_x}$ transformations $U_{\chi}\tilde{U}\Pi_{k_x}=T\Pi_{k_x}\Pi_{k_z}$ is also a symmetry of the low-energy Hamiltonian (\ref{low-energy-Hamiltonian}). The symmetry $T\Pi_{k_x}\Pi_{k_z}$ is related to the time-reversal symmetry. This follows from the fact that $K\Pi_{k_y}$ is also a symmetry of the low-energy Hamiltonian (\ref{low-energy-Hamiltonian}). Together the operators $U_{\chi}$, $\tilde{U}\Pi_{k_x}$, and $T\Pi_{k_x}\Pi_{k_z}$ form a non-commutative discrete group. Hamiltonian (\ref{low-energy-Hamiltonian}) is rather complicated and, therefore, the corresponding analytic calculations of its surface Fermi arc states are quite involved and not very revealing. Our general strategy in analyzing these states will thus be to start from a simplified model and then build up to the realistic model by adding the missing pieces step by step. \section{Surface Fermi arcs in simplified $2\times 2$ model} \label{sec:2x2Model} In order to gain insight into the structure of the surface Fermi arcs in the low-energy model described by Hamiltonian (\ref{low-energy-Hamiltonian}), it is instructive to first study the surface Fermi arcs in a simplified $2\times 2$ model, given by one of the diagonal blocks, e.g., $H^{+}_{2\times 2}$ in Eq.~(\ref{Hamiltonians_+_New}). (The solutions for the other block Hamiltonian, $H^{-}_{2\times 2}$, can be obtained simply by changing $k_x \to -k_x$.) For completeness, we will also include the term $\epsilon_0(\mathbf{k})$ proportional to the unit matrix, which is present in the low-energy Hamiltonian. Thus, our model $2\times 2$ Hamiltonian reads
\begin{equation} H_{2\times 2} = \epsilon_0(\mathbf{k})+H^{+}_{2\times 2}=\epsilon_0(\mathbf{k}) +\left( \begin{array}{cc} M_0-M_1k^2_z-M_2(k^2_x+k^2_y) & A(k_x+ik_y) \\ A(k_x-ik_y) & -\left[M_0-M_1k^2_z-M_2(k^2_x+k^2_y)\right] \\ \end{array} \right). \label{block-model} \end{equation}
Before proceeding to the analysis, it is convenient to perform a unitary transformation, $\tilde{H}_{2\times 2} \equiv U_{y}^{-1}H_{2\times 2}U_{y}$, where $U_{y} = \frac{1}{\sqrt{2}} \left(I_2+i\sigma_y\right)$. The transformed Hamiltonian has the following explicit form:
\begin{equation} \tilde{H}_{2\times 2} = \epsilon_0(\mathbf{k})+ \left[\gamma\left(k_z^2 - m\right)-M_2(k^2_x+k_y^2)\right]\sigma_x -vk_x\sigma_z-vk_y\sigma_y, \label{model-xxx} \end{equation}
where we introduced notations similar to those in Ref.~\cite{Okugawa:2014ina}: $v=A$ and $\gamma=-M_1$. To study the surface Fermi arcs, we will assume that the surface of the semimetal is at $y=0$. The semimetal itself is in the upper $y>0$ (lower $y<0$) half-space when we describe the surface arc states on the bottom (top) surface. (Of course, in the absence of any effects that break the inversion symmetry $k_y\to -k_y$ explicitly, the two cases will be related by a simple symmetry transformation.) Without loss of generality, we will concentrate primarily on the bottom surface states. The boundary condition on the semimetal surface will be imposed by replacing the parameter $m$ with $-\tilde{m}$ on the vacuum side of the boundary and taking the limit $\tilde{m}\to \infty$ \cite{Okugawa:2014ina}. From a physics viewpoint, such a replacement is the simplest way to prevent quasiparticles from escaping into the vacuum. 
Taking into account that the Fermi arc states should be localized at the $y=0$ boundary, let us rewrite Hamiltonian (\ref{model-xxx}) in the following form:
\begin{equation} \tilde{H}_{2\times 2} = \left( \begin{array}{cc} C_0+C_1k^2_z+C_2(k^2_x-\partial_y^2)-vk_x & \gamma\left(k_z^2 - m\right)-M_2(k^2_x-\partial_y^2)+v\partial_y \\ \gamma\left(k_z^2 - m\right)-M_2(k^2_x-\partial_y^2)-v\partial_y & C_0+C_1k^2_z+C_2(k^2_x-\partial_y^2)+vk_x \\ \end{array} \right), \label{block-model-U} \end{equation}
where, for the convenience of further derivations, we replaced $k_y\equiv -i\partial_y$. \subsection{Simplified model with $C_2= M_2=0$} \label{sec:2x2ModelC2=0M2=0} We will see in what follows that the presence of the terms with the second derivative with respect to $y$ in Hamiltonian (\ref{block-model-U}) leads to many technical complications and makes the analysis rather involved. Therefore, to set up the stage, in this subsection we start our analysis in an even more simplified model, described by Hamiltonian (\ref{block-model-U}) with $C_2$ and $M_2$ set to zero. Then, by introducing the two-component spinor $\Psi = \left(\psi_1 , \psi_2 \right)^{T}$, we see that the eigenvalue problem $(\tilde{H}_{2\times 2} -E)\Psi = 0$ is equivalent to the following system of equations:
\begin{eqnarray} \label{psi-equation-1-00} \left(-v k_x+C_1k_z^2+C_0\right)\psi_1 + \left[v \partial_y +\gamma k_z^2-\gamma m(y)\right]\psi_2 &=& E \psi_1 , \\ \label{psi-equation-2-00} \left(v k_x+C_1k_z^2+C_0\right)\psi_2 + \left[-v \partial_y +\gamma k_z^2-\gamma m(y)\right]\psi_1 &=& E \psi_2 . \end{eqnarray}
Here $m(y)=m\theta(y)-\tilde{m}\theta(-y)$, where $\theta(y)$ is the step function. Recall that, by assumption, the boundary condition at $y=0$ is enforced by taking the limit $\tilde{m}\to \infty$ on the vacuum side ($y<0$). Formally, Eqs.~(\ref{psi-equation-1-00}) and (\ref{psi-equation-2-00}) have the following surface state solutions:
\begin{equation} \Psi_{1}(y)=\left( \begin{array}{c} N_{1} e^{\frac{\gamma}{v} \int^y dy^{\prime} \left[k_z^2- m(y^{\prime})\right]} \\ 0 \end{array} \right), \qquad \Psi_{2}(y)=\left( \begin{array}{c} 0 \\ N_{2} e^{-\frac{\gamma}{v} \int^y dy^{\prime} \left[k_z^2- m(y^{\prime})\right]} \\ \end{array} \right). \label{psi_1-00} \end{equation}
In the region occupied by the semimetal ($y>0$), the solution $\Psi_1(y)$ is normalizable only for $k_z^2- m<0$, while the solution $\Psi_2(y)$ is normalizable only for $k_z^2 - m>0$. However, on the vacuum side ($y<0$), only $\Psi_1(y)$ is normalizable. The dispersion relation for this normalizable surface state solution follows from Eq.~(\ref{psi-equation-1-00}). It is given by
\begin{equation} E = - v k_x+C_1 k_z^2 + C_0. \label{E-00} \end{equation}
By making use of this relation, we derive the equation for the bottom surface Fermi arc in the transverse $k_xk_z$ plane,
\begin{equation} k_x=-\frac{E-C_1 k_z^2 - C_0}{v}. \label{kx-equation-1} \end{equation}
It is instructive to compare this surface Fermi arc with that in the model of Ref.~\cite{Okugawa:2014ina}, where $C_1=0$. While the surface Fermi arcs run between $k_z=-\sqrt{m}$ and $k_z=\sqrt{m}$ in both models, the arcs in the model of Ref.~\cite{Okugawa:2014ina} do not depend on the momentum $k_z$. This is in contrast to the surface Fermi arc in Eq.~(\ref{kx-equation-1}), for which $k_x$ is a quadratic function of $k_z$. 
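The shape of these arcs is easy to visualize. A minimal sketch (in Python) that traces Eq.~(\ref{kx-equation-1}) between the Weyl nodes at $k_z=\pm\sqrt{m}$, using the parameter values of Eq.~(\ref{model-parameters}) with $v=A$, is given below; the chosen Fermi energies are illustrative assumptions.
\begin{verbatim}
import numpy as np

# v = A and m = M0/M1, with parameter values from the fits quoted above
C0, C1, v = -0.06382, 8.7536, 2.4598
m = (-0.08686) / (-10.6424)

kz = np.linspace(-np.sqrt(m), np.sqrt(m), 201)   # arcs terminate at the nodes
for E in (-0.10, -0.05, 0.0, 0.05, 0.10):        # illustrative energies [eV]
    kx = -(E - C1 * kz**2 - C0) / v              # bottom-surface arc
    print("E = %+.2f eV: kx range [%.4f, %.4f]" % (E, kx.min(), kx.max()))
\end{verbatim}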
Thus, we see that the presence of the quadratic in $k_z$ term in the diagonal component of Hamiltonian (\ref{block-model-U}) produces a nonzero {\it curvature} of the surface Fermi arcs in momentum space. For illustration, several surface Fermi arcs for different values of the Fermi energy are shown in Fig.~\ref{fig:Fermi-arc-00}. The arcs have parabolic shapes. The corresponding arcs in the model of Ref.~\cite{Okugawa:2014ina} would be given by straight lines.
\begin{figure}[!ht] \begin{center} \includegraphics[width=0.48\textwidth]{Fermi_arcs_5energies_no_C2M2.eps} \caption{(Color online) The bottom surface Fermi arcs for several different values of the Fermi energy in a simplified two-component model, described by Hamiltonian (\ref{block-model-U}) with $C_2= M_2=0$. The analytical form of the arcs is given in Eq.~(\ref{kx-equation-1}).} \label{fig:Fermi-arc-00} \end{center} \end{figure}
Before concluding this subsection, let us note that the solution $\Psi_2(y)$ in Eq.~(\ref{psi_1-00}) describes Fermi arcs on the top surface. We find from Eq.~(\ref{psi-equation-2-00}) that the corresponding dispersion relation is given by $E = v k_x+C_1 k_z^2 + C_0$. Let us also note in passing that there exists another set of (top and bottom) Fermi arcs for the lower block Hamiltonian, $H^{-}_{2\times 2}$. The corresponding arcs are obtained from the solutions for the upper block Hamiltonian, $H^{+}_{2\times 2}$, by making the replacement $k_x\rightarrow-k_x$. \subsection{The case with $C_2\neq 0$ and $M_2 \neq 0$} \label{sec:2x2ModelC2M2} Let us now consider the general case with $C_2\neq 0$ and $M_2 \neq 0$. Since the Hamiltonian in Eq.~(\ref{block-model-U}) contains second derivatives with respect to $y$, the eigenvalue problem $(\tilde{H}_{2\times 2} -E)\Psi = 0$ becomes more complicated. In the semimetal ($y>0$), it is equivalent to the following system of coupled equations:
\begin{eqnarray} \label{psi-equation-1-01} \left[ C_2(k_x^2 - \partial_y^2)-v k_x+C_1k_z^2+C_0\right]\psi_1 + \left[-M_2(k_x^2 - \partial_y^2)+v \partial_y +\gamma k_z^2 -\gamma m\right]\psi_2 &=& E \psi_1 , \\ \label{psi-equation-2-01} \left[ C_2(k_x^2 - \partial_y^2)+v k_x+C_1k_z^2+C_0\right]\psi_2 + \left[-M_2(k_x^2 - \partial_y^2)-v \partial_y +\gamma k_z^2 -\gamma m\right]\psi_1 &=& E \psi_2 . \end{eqnarray}
On the vacuum side ($y<0$), the corresponding set of equations has the same form, but with $m$ replaced by $-\tilde{m}$. At the vacuum-semimetal interface ($y=0$), the wave functions and their derivatives should satisfy the conditions of continuity, see Eqs.~(\ref{cross-liking0}) through (\ref{cross-liking3}) in Appendix~\ref{AppExtra1}. The key details of the derivation of the surface Fermi arc solutions are presented in Appendix~\ref{AppExtra1}. On the semimetal side, the spinor structure of the solution takes the following form:
\begin{equation} \Psi_{y>0}(y) =\sum_{i=1}^{2}\left(\begin{array}{r} a_i \\ b_i \end{array}\right)e^{- p_i y} , \end{equation}
where the explicit expressions for the exponents are given in Eq.~(\ref{p1p2-two-component}). Note that the exponents take real values in the case of surface Fermi arc states. The condition for the existence of nontrivial surface Fermi arc solutions is given by
\begin{equation} \frac{-C_2(p_1^2-k_x^2)+C_1k_z^2+C_0-E-v k_x}{-M_2(p_1^2-k_x^2)-\gamma (k_z^2-m)+vp_1} =\frac{-C_2(p_2^2-k_x^2)+C_1k_z^2+C_0-E-v k_x}{-M_2(p_2^2-k_x^2)-\gamma (k_z^2-m)+vp_2}. 
\label{Q1=Q2_alpha=0} \end{equation}
This equation defines the functional dependence $k_z(k_x)$ for the possible surface Fermi arc states. A numerical study shows that nontrivial solutions exist only in a finite range of energies, i.e., $-0.168~\mbox{eV} \lesssim E \lesssim 0.373~\mbox{eV}$. Several solutions for different values of the energy are shown in Fig.~\ref{fig:Fermi-Arc-5Es}. The results of the numerical analysis show that the relation $b_1/a_1= b_2/a_2 = 0.5115$ holds for all solutions. It is worth noting that the $E=0$ surface Fermi arc in Fig.~\ref{fig:Fermi-Arc-5Es} appears to be almost identical to the corresponding arc, obtained by a very different method in Ref.~\cite{Fang}, see Fig.~3c in that paper.
\begin{figure*} \begin{center} \includegraphics[width=0.48\textwidth]{Fermi_arcs_5energies_Down.eps} \caption{(Color online) The bottom surface Fermi arcs (\ref{Q1=Q2_alpha=0}) for several values of the Fermi energy in a two-component model, described by Hamiltonian (\ref{block-model-U}) with $C_2\neq 0$ and $M_2\neq 0$.} \label{fig:Fermi-Arc-5Es} \end{center} \end{figure*}
So far, we have considered the arc states only for one of the two-component block Hamiltonians, defined in Eq.~(\ref{block-model}). Similar solutions also exist for the lower two-component block Hamiltonian, $\epsilon_0(\mathbf{k})+H^{-}_{2\times 2}$. It is straightforward to show that the solutions to the eigenvalue problem for the lower block are the same as for the upper one, after one makes the replacement $k_x\rightarrow-k_x$. Graphically, these solutions are mirror images of the arcs in Fig.~\ref{fig:Fermi-Arc-5Es}. Before concluding this subsection, let us also note that the description of the Fermi arc states on the top surface is similar to that of the bottom ones. By assuming that the Weyl semimetal is at $y<0$ and the vacuum is at $y>0$, the appropriate boundary conditions are implemented by using the $y$-dependent parameter $m(y)=m\theta(-y)-\tilde{m}\theta(y)$ and taking the limit $\tilde{m} \to \infty$ at the end. Up to a reflection $k_x\to -k_x$, the corresponding final results for the Fermi arcs on the top surface look similar to those on the bottom surface, shown in Fig.~\ref{fig:Fermi-Arc-5Es}. \subsection{Effective Hamiltonian for surface Fermi arc states} \label{surface-Hamiltonian} Following the usual approach in studies of topological insulators \cite{Shen}, it is natural to derive an effective Hamiltonian for the surface Fermi arc states. The block Hamiltonians in the simplified model at hand can be naturally separated into two parts, i.e., $\tilde{H}^{\pm}_{2\times 2} = H_0+H^{\pm}_1$, where the zeroth order part $H_0$ corresponds to the original Hamiltonian at $k_x=k_z=0$, i.e.,
\begin{equation} H_0=\left( \begin{array}{cc} C_0-C_2\partial_y^2 & -\gamma m+M_2\partial_y^2+v\partial_y \\ -\gamma m+M_2\partial_y^2-v\partial_y & C_0-C_2\partial_y^2 \\ \end{array} \right), \end{equation}
while $H_1$ contains all the terms with nontrivial dependence on $k_x$ and $k_z$, i.e.,
\begin{equation} H^{\pm}_1=\left( \begin{array}{cc} C_1k^2_z+C_2k^2_x \mp v k_x & \gamma k^2_z -M_2k^2_x \\ \gamma k^2_z -M_2k^2_x & C_1k^2_z +C_2k^2_x \pm v k_x \\ \end{array} \right). \label{block-model-M2} \end{equation}
As in the previous analysis, we used $k_y\equiv -i\partial_y$. To start with, we have to solve the eigenvalue problem with the zeroth order Hamiltonian, $H_0\Psi_0=\lambda\Psi_0$. 
By following the same approach as in Appendix~\ref{AppExtra1}, but with $k_x=k_z=0$, we straightforwardly find the explicit surface-state solutions $\Psi_0$. The corresponding energy parameter is found to be $\lambda = -0.13425~\mbox{eV}$. Then, the effective Hamiltonian for the surface states is obtained by integrating over the perpendicular direction $y$, i.e.,
\begin{eqnarray} H^{\pm}_{\rm surf} &=& \lambda+\int_0^{\infty} dy \Psi^{\dag}_0H_1\Psi_0 =\lambda+ C_1k^2_z+C_2k_x^2 \mp v k_x\frac{1-Q^2}{1+Q^2} +2(\gamma k^2_z -M_2k_x^2)\frac{Q}{1+Q^2} \nonumber\\ &\approx & \lambda \mp v_{\rm surf} k_x +\gamma_{\rm surf} k^2_z, \label{Hamiltonian-effective-1-M2} \end{eqnarray}
where $Q\approx 0.5115$, $v_{\rm surf}\approx 1.440~\mbox{eV\,\AA}$, and $\gamma_{\rm surf} \approx 17.38~\mbox{eV\,\AA}^2$. Note that the quadratic term in $k_x$ vanishes once the numerical values of the model parameters are substituted. As is easy to check, the effective Hamiltonian in Eq.~(\ref{Hamiltonian-effective-1-M2}) reproduces almost perfectly the shape of the Fermi arcs in the $k_xk_z$ plane. However, it does not contain the information about the finite length of the arcs. This can be explained, in part, by pointing out that the corresponding information is encoded in the terms quadratic in momenta $k_x$ and $k_z$. When such terms are omitted from the zeroth order Hamiltonian $H_0$, the existence of the surface states formally appears to be unconstrained. Therefore, the effective Hamiltonian in Eq.~(\ref{Hamiltonian-effective-1-M2}) will be truly useful only when supplemented by its range of validity in the $k_xk_z$ plane. This, however, seems to diminish its practical value because the corresponding range depends on the energy. \section{Fermi arcs in realistic model} \label{sec:Realistic4x4Model} In this section, we consider the complete low-energy theory described by Hamiltonian (\ref{low-energy-Hamiltonian}) with $\alpha\neq0$. 
By performing a unitary transformation in Eq.~(\ref{low-energy-Hamiltonian}), defined by $U_{y} = \frac{1}{\sqrt{2}}I_2 \otimes\left(I_2+i\sigma_y\right)$, we arrive at the following equivalent form of the Hamiltonian:
\begin{eqnarray} \tilde{H} &=& \left[C_2(k_x^2 - \partial_y^2)+C_1k_z^2+C_0\right] I_2\otimes I_2 -M_2(k_x^2 - \partial_y^2) I_2\otimes \sigma_x \nonumber \\ &&+\left( \begin{array}{cccc} -v k_x & v \partial_y +\gamma (k_z^2- m) & -\alpha k_z(k_x-\partial_y)^2 & 0 \\ -v \partial_y +\gamma (k_z^2- m) &v k_x & 0 & \alpha k_z(k_x-\partial_y)^2 \\ -\alpha k_z(k_x+\partial_y)^2 & 0 & v k_x & v \partial_y +\gamma (k_z^2- m) \\ 0 & \alpha k_z(k_x+\partial_y)^2 & -v \partial_y +\gamma (k_z^2- m) & -v k_x \\ \end{array} \right). \label{Wang-Hamiltonian-2} \end{eqnarray}
By introducing the spinor wave function $\Psi = \left(\psi_1 , \psi_2 , \psi_3 , \psi_4 \right)^{T}$, we reduce the eigenvalue problem $(\tilde{H} -E)\Psi = 0$ in the semimetal ($y>0$) to the following system of equations:
\begin{eqnarray} \label{psi-equation-1-four} \left[ C_2(k_x^2 - \partial_y^2)-v k_x+C_1k_z^2+C_0-E\right]\psi_1 +\left[-M_2(k_x^2 - \partial_y^2)+v \partial_y +\gamma k_z^2 -\gamma m\right]\psi_2 - \alpha k_z (k_x-\partial_y)^2\psi_3 =0, && \\ \label{psi-equation-2-four} \left[-M_2(k_x^2 - \partial_y^2)-v \partial_y +\gamma k_z^2-\gamma m\right]\psi_1 + \left[ C_2(k_x^2 - \partial_y^2)+v k_x+C_1k_z^2+C_0 -E\right]\psi_2 + \alpha k_z (k_x-\partial_y)^2\psi_4 =0, && \\ \label{psi-equation-3-four} -\alpha k_z (k_x+\partial_y)^2\psi_1 +\left[ C_2(k_x^2 - \partial_y^2)+v k_x+C_1k_z^2+C_0-E\right]\psi_3 +\left[-M_2(k_x^2- \partial_y^2)+v \partial_y +\gamma k_z^2-\gamma m\right]\psi_4 =0, && \\ \label{psi-equation-4-four} \alpha k_z (k_x+\partial_y)^2\psi_2 +\left[-M_2(k_x^2 - \partial_y^2)-v \partial_y +\gamma k_z^2-\gamma m\right]\psi_3 +\left[ C_2(k_x^2 - \partial_y^2)-v k_x+C_1k_z^2+C_0-E\right]\psi_4 =0 . && \end{eqnarray}
On the vacuum side ($y<0$), the corresponding set of equations has the same form, but with $m$ replaced by $-\tilde{m}$. The corresponding full set of equations should also be supplemented by the conditions of continuity of the wave functions and their derivatives across the vacuum-semimetal interface at $y=0$, see Eqs.~(\ref{cross-liking-four1}) through (\ref{cross-liking-four5}) in Appendix~\ref{AppExtra2}. As shown in Appendix~\ref{AppExtra2}, the spinor structure of the solution on the semimetal side takes the form:
\begin{equation} \Psi_{y>0}(y) =\sum_{i=1}^{2}\left(\begin{array}{r} a_i \\ b_i \\ c_i \\ d_i \end{array}\right)e^{- p_i y} , \end{equation}
where the explicit expressions for the exponents are given in Eq.~(\ref{p12-four-comp}). In the case of surface Fermi arc solutions, the exponents take real values. A nontrivial solution exists when the following condition is satisfied:
\begin{equation} \left(Q_1^{+} -Q_2^{+}\right)\left(Q_1^{-} -Q_2^{-}\right) -\left(T_1^{+} -T_2^{+}\right)\left(T_1^{-} -T_2^{-}\right)=0, \label{key-equation} \end{equation}
where, by definition, $Q_{i}^{\pm} \equiv Q(p_i,\pm k_x)$ and $T_{i}^{\pm} \equiv T(p_i,\pm k_x)$, and the functions $Q(p,k_x)$ and $T(p,k_x)$ are defined in Eqs.~(\ref{def-Qpkx}) and (\ref{def-Tpkx}), respectively. By taking into account that $T(p,k_x)$ vanishes at $\alpha = 0$, one finds that the above condition reduces to its analog in Eq.~(\ref{Q1=Q2_alpha=0}) in the two-component model. 
Indeed, a nontrivial solution exists in the model with the two-component upper (lower) block Hamiltonian when $Q_1^{+} =Q_2^{+}$ ($Q_1^{-} =Q_2^{-}$) is satisfied. We would like to emphasize that the classification of the arc states remains essentially the same in the general case with $\alpha \neq 0$. However, because of the mixing between the upper and lower block Hamiltonians, the arcs are labeled by the eigenvalues of the $U_{\chi}$ operator, see Appendix~\ref{AppA}. The eigenstates with $\chi=+1$ ($\chi=-1$) are the generalizations of the arcs from the upper (lower) block Hamiltonian. The numerical results for the surface Fermi arc states are shown in Fig.~\ref{fig:Fermi-Arc-5Es_alpha} for $\alpha=1~\mbox{eV\,\AA}^3$ (left panel) and $\alpha=50~\mbox{eV\,\AA}^3$ (right panel). At fixed energy, there are two surface Fermi arcs related to two different sectors of the $\mathrm{A_3Bi}$ (A=Na, K, Rb) compounds with a definite eigenvalue of $U_{\chi}$. One can check that the wave functions that describe these surface Fermi arcs are related to each other by means of the $\tilde{U}\Pi_{k_x}$ transformation, see Appendix~\ref{AppA}. By comparing these results with those in the two-component model, see Fig.~\ref{fig:Fermi-Arc-5Es}, we find that the quantitative effect of a nonzero $\alpha$ on the Fermi arcs is small even when $\alpha$ is moderately large. The only qualitative effect due to $\alpha$ is a reconnection of the pair of arcs (from predominantly up and predominantly down sectors) at negative values of the Fermi energy. The underlying physics of such an effect is likely to be connected with the loss of chirality as a good quantum number for quasiparticles away from the Dirac/Weyl nodes. Because of the discrete ud-parity, which is preserved even at large values of $\alpha$, there are still two sectors of the theory, and small nontrivial arcs remain, as seen in the right panel of Fig.~\ref{fig:Fermi-Arc-5Es_alpha}. It would be interesting to explore whether the reconnection of the pairs of arcs also appears in the microscopic theory; it may well be an artifact of the low-energy theory used here.
\begin{figure*}[!ht] \begin{center} \includegraphics[width=0.47\textwidth]{Fermi_arcs_5energies_alpha1.eps} \includegraphics[width=0.47\textwidth]{Fermi_arcs_5energies_alpha50.eps} \caption{(Color online) The Fermi arc solutions in the plane of transverse momenta for $\alpha=1~\mbox{eV\,\AA}^3$ (left panel) and $\alpha=50~\mbox{eV\,\AA}^3$ (right panel).} \label{fig:Fermi-Arc-5Es_alpha} \end{center} \end{figure*}
\section{Fermi arcs and weak breaking of time-reversal symmetry} \label{sec:DynamicalParametersModel} As we discussed in detail in Sec.~\ref{sec:symmetries}, the low-energy effective Hamiltonian (\ref{low-energy-Hamiltonian}) is invariant under the time-reversal and inversion symmetries. Moreover, these symmetries play an important role in defining the physical properties of $\mathrm{A_3Bi}$ semimetals. Thus, it is natural to ask about possible effects on the structure (and perhaps even the existence) of surface Fermi arcs due to breaking of these symmetries. From the physics viewpoint, for example, the corresponding discrete symmetries could be broken explicitly by magnetic doping or an external magnetic field. 
In order to study the symmetry breaking effects, we will add to the low-energy Hamiltonian (\ref{low-energy-Hamiltonian}) two additional terms controlled by parameters $m_1$ and $\tilde{\mu}_1$:
\begin{eqnarray} H_{\rm sb} &=& H(\mathbf{k}) - \left( \begin{array}{cc} \tilde{\mu}_1I_{2}+\sigma_z \gamma m_1 & 0 \\ 0 & -\tilde{\mu}_1I_{2}-\sigma_z \gamma m_1 \\ \end{array} \right). \label{Wang-Hamiltonian-0-3} \end{eqnarray}
By analyzing the Schwinger--Dyson equation for the quasiparticle propagator in $\mathrm{A_3Bi}$ semimetals in a magnetic field, we found that these terms are indeed perturbatively generated. Alternatively, these terms can be induced by magnetic doping. The value of $\tilde{\mu}_1$ could be interpreted as a mismatch between the chemical potentials of quasiparticle states in the Weyl sectors of the theory. The value of $m_1$ is a mismatch of the parameter $m$ that determines the chiral shift in the two sectors. This means that whenever these symmetry breaking parameters appear, the $\mathbb{Z}_2$ Weyl semimetal is automatically transformed into a true Weyl semimetal with four non-degenerate Weyl nodes. By performing a unitary transformation in Eq.~(\ref{Wang-Hamiltonian-0-3}), defined by the matrix $U_{y} = \frac{1}{\sqrt{2}}I_{2}\otimes\left(I_{2}+i\sigma_y\right)$, we arrive at the following equivalent Hamiltonian:
\begin{eqnarray} &&\tilde{H}_{\rm sb} =\left[C_2(k_x^2 - \partial_y^2)+C_1k_z^2+C_0\right] I_2\otimes I_2 -M_2(k_x^2 - \partial_y^2) I_2\otimes \sigma_x \nonumber \\ &&+\left( \begin{array}{cccc} -v k_x -\tilde{\mu}_1 & v \partial_y +\gamma \left(k_z^2- m-m_1\right) & -\alpha k_z(k_x-\partial_y)^2 & 0 \\ -v \partial_y +\gamma\left( k_z^2- m-m_1\right) & v k_x -\tilde{\mu}_1 & 0 & \alpha k_z(k_x-\partial_y)^2 \\ -\alpha k_z(k_x+\partial_y)^2 & 0 & v k_x+\tilde{\mu}_1 & v \partial_y +\gamma\left( k_z^2- m+m_1\right) \\ 0 & \alpha k_z(k_x+\partial_y)^2 & -v \partial_y +\gamma\left( k_z^2- m+m_1\right) & -v k_x+\tilde{\mu}_1 \\ \end{array} \right). \nonumber \\ \label{Wang-Hamiltonian-3} \end{eqnarray}
It is straightforward, although tedious, to repeat the same analysis as in Sec.~\ref{sec:Realistic4x4Model}. The general surface state solution is of the same type, i.e., $\Psi_{y>0}(y) = \Psi_0 e^{- p y}$, where $\Psi_0\equiv (a,b,c,d)^T$ is a constant spinor. However, the characteristic equation is considerably more complicated,
\begin{eqnarray} &&\left\{\left[ -C_2(p^2-k_x^2)+C_1k_z^2+C_0-\tilde{\mu}_1-E\right]^2 -\left[M_2(p^2-k_x^2) +\gamma (k_z^2-m-m_1)\right]^2+v^2 (p^2-k_x^2)- \alpha^2 k_z^2(p^2-k_x^2)^2\right\} \nonumber\\ &&\times\left\{\left[ -C_2(p^2-k_x^2)+C_1k_z^2+C_0+\tilde{\mu}_1-E\right]^2 -\left[M_2(p^2-k_x^2) +\gamma (k_z^2-m+m_1)\right]^2+v^2 (p^2-k_x^2)- \alpha^2 k_z^2(p^2-k_x^2)^2\right\}\nonumber\\ &&+4 \alpha^2 k_z^2(p^2-k_x^2)^2\left(\tilde{\mu}_1^2- \gamma^2 m_1^2\right)=0 . \label{char-eq-2} \end{eqnarray}
The important effect of the symmetry breaking terms with nonzero $m_1$ and $\tilde{\mu}_1$ is that the new characteristic equation has {\em four} (instead of two degenerate) pairs of distinct solutions: $p=\pm p_i$, with $i=1,2,3,4$. The general spinor solution in the semimetal takes the following form:
\begin{equation} \Psi_{y>0}(y) =\sum_{i=1}^{4}\left(\begin{array}{r} a_i \\ b_i \\ c_i \\ d_i \end{array}\right)e^{- p_i y} . 
\end{equation} By making use of the equation of motion, the components $b_i$ and $d_i$ can be expressed in terms of $a_i$ and $c_i$, \begin{eqnarray} b_i &=& \frac{ -C_2(p_i^2-k_x^2)+C_1k_z^2+C_0-\tilde{\mu}_1-E-v k_x} {-M_2(p_i^2-k_x^2)-\gamma\left( k_z^2- m-m_1\right) +v p_i } a_i - \frac{ \alpha k_z (p_i+k_x)^2} {-M_2(p_i^2-k_x^2)-\gamma\left( k_z^2- m-m_1\right) +v p_i } c_i ,\\ d_i &=& - \frac{ \alpha k_z (p_i-k_x)^2} {-M_2(p_i^2-k_x^2)-\gamma\left( k_z^2- m+m_1\right) +v p_i } a_i +\frac{ -C_2(p_i^2-k_x^2)+C_1k_z^2+C_0+\tilde{\mu}_1-E-v k_x} {-M_2(p_i^2-k_x^2)-\gamma\left( k_z^2- m+m_1\right) +v p_i } c_i . \end{eqnarray} In order to avoid possible confusion, let us emphasize that the remaining two components $a_i$ and $c_i$ are not independent, but are fixed unambiguously for each $p_i$. The final solutions for the Fermi arcs are determined after all four independent parameters (e.g., $a_i$ with $i=1,2,3,4$) are fixed by satisfying the continuity conditions for the wave function at the surface of the semimetal. The corresponding solutions can be obtained by numerical methods. To slightly simplify the analysis, let us consider the special case of vanishing $\alpha$ in more detail. In this case, the states from the two-component upper and lower block Hamiltonians decouple. Also, the characteristic equation factorizes, effectively giving two separate equations, i.e., \begin{eqnarray} \left[ -C_2(p^2-k_x^2)+C_1k_z^2+C_0-\tilde{\mu}_1-E\right]^2 -\left[M_2(p^2-k_x^2) +\gamma (k_z^2-m-m_1)\right]^2+v^2 (p^2-k_x^2) &=& 0 \quad \mbox{(up)}, \label{char-eq-up}\\ \left[ -C_2(p^2-k_x^2)+C_1k_z^2+C_0+\tilde{\mu}_1-E\right]^2 -\left[M_2(p^2-k_x^2) +\gamma (k_z^2-m+m_1)\right]^2+v^2 (p^2-k_x^2)&=& 0 \quad \mbox{(down)}, \label{char-eq-down} \end{eqnarray} cf. Eq.~(\ref{char-eq-no-alpha}). Then, the analysis of the surface Fermi arcs follows very closely the analysis in Sec.~\ref{sec:2x2ModelC2M2}. \begin{figure*}[ht!]
\begin{minipage}[ht]{0.245\linewidth} \center{\includegraphics[width=1.0\linewidth]{FAs_dyn_m1_0.0001_mu1_0.05_EF_0.eps} \\ {\small (a) $m_1=10^{-4}$, $\tilde{\mu}_1=0.05$}} \end{minipage} \begin{minipage}[ht]{0.245\linewidth} \center{\includegraphics[width=1.0\linewidth]{FAs_dyn_m1_0.0001_mu1_-0.05_EF_0.eps} \\ {\small (b) $m_1=10^{-4}$, $\tilde{\mu}_1=-0.05$}} \end{minipage} \begin{minipage}[ht]{0.245\linewidth} \center{\includegraphics[width=1.0\linewidth]{FAs_dyn_m1_0.005_mu1_0.0001_EF_0.eps} \\ {\small (c) $m_1=0.005$, $\tilde{\mu}_1=10^{-4}$}} \end{minipage} \begin{minipage}[ht]{0.245\linewidth} \center{\includegraphics[width=1.0\linewidth]{FAs_dyn_m1_-0.005_mu1_0.0001_EF_0.eps} \\ {\small (d) $m_1=-0.005$, $\tilde{\mu}_1=10^{-4}$}} \end{minipage}\\[5mm] \begin{minipage}[ht]{0.245\linewidth} \center{\includegraphics[width=1.0\linewidth]{FAs_dyn_m1_0.005_mu1_0.05_EF_0.eps} \\ {\small (e) $m_1=0.005$, $\tilde{\mu}_1=0.05$}} \end{minipage} \begin{minipage}[ht]{0.245\linewidth} \center{\includegraphics[width=1.0\linewidth]{FAs_dyn_m1_-0.005_mu1_-0.05_EF_0.eps} \\ {\small (f) $m_1=-0.005$, $\tilde{\mu}_1=-0.05$}} \end{minipage} \begin{minipage}[ht]{0.245\linewidth} \center{\includegraphics[width=1.0\linewidth]{FAs_dyn_m1_0.005_mu1_-0.05_EF_0.eps} \\ {\small (g) $m_1=0.005$, $\tilde{\mu}_1=-0.05$}} \end{minipage} \begin{minipage}[ht]{0.245\linewidth} \center{\includegraphics[width=1.0\linewidth]{FAs_dyn_m1_-0.005_mu1_0.05_EF_0.eps} \\ {\small (h) $m_1=-0.005$, $\tilde{\mu}_1=0.05$}} \end{minipage} \caption{The Fermi arc solutions (thick black lines) in the model with the symmetry breaking parameters $m_1$ and $\tilde{\mu}_1$ at $E=0$. The shaded regions represent the projections of the bulk Fermi surfaces onto the $k_xk_z$ plane. The values of $m_1$ and $\tilde{\mu}_1$ are given in units of $\mbox{\AA}^{-2}$ and $\mbox{eV}$, respectively.} \label{fig:arcs_dynamical} \end{figure*} A number of representative numerical solutions for the Fermi surface arcs in the model with the symmetry breaking parameters $m_1$ and $\tilde{\mu}_1$ are shown in Fig.~\ref{fig:arcs_dynamical}. The results are obtained for the Fermi energy $E=0$. In order to shed light on the origin of the individual arcs, in the same figure we also show the projections (shaded regions) of the bulk Fermi surfaces onto the $k_xk_z$ plane. Such a representation reveals that some of the Fermi arcs link {\it disconnected} sheets of the bulk Fermi surface \cite{Haldane}, while others link different points of the {\it same} bulk Fermi surface sheet. As suggested by the physical meaning of the symmetry breaking parameters $m_1$ and $\tilde{\mu}_1$, the Fermi surface arcs for the up and down Weyl sectors of the theory are not transformed into each other by a mirror symmetry. In addition to the expected effects of (i) changing the length of the arcs (primarily due to nonzero $m_1$) and (ii) shifting the arcs' position in the $k_x$ direction (primarily due to nonzero $\tilde{\mu}_1$), we also see some qualitative changes in the shape and branching of the arcs. By comparing Eqs.~(\ref{char-eq-up}) and (\ref{char-eq-down}) for the two sectors of the theory, we find that the whole asymmetric set of Fermi arcs turns into its mirror reflection when both parameters $m_1$ and $\tilde{\mu}_1$ change their signs. Examples of two pairs of such mirror configurations are shown in panels (e)--(f) and (g)--(h) in Fig.~\ref{fig:arcs_dynamical}; a minimal numerical sketch of the corresponding root-finding step is given below.
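To make the numerical procedure concrete, the following minimal Python sketch finds the decaying-mode momenta $p$ from the decoupled ``up'' equation (\ref{char-eq-up}). It is an illustration only: the function name and the parameter values below are hypothetical placeholders, not the band-structure fit parameters used for the actual figures. Writing $s=p^2-k_x^2$, Eq.~(\ref{char-eq-up}) becomes a quadratic in $s$, which is solved with {\tt numpy}; the roots with $\mathrm{Re}\,p>0$ are kept.
\begin{verbatim}
import numpy as np

# Illustrative parameter values only (eV / Angstrom based units); the
# actual values are fixed by the band-structure fits quoted in the text.
C0, C1, C2 = -0.06, 0.09, -0.03
M2, v, gamma, m = -0.08, 0.24, 0.05, 0.01

def decaying_roots_up(kx, kz, E, m1=0.0, mu1=0.0):
    # With s = p**2 - kx**2, Eq. (char-eq-up) is quadratic in s:
    #   (C2**2 - M2**2)*s**2 + (v**2 - 2*C2*A - 2*M2*B)*s + (A**2 - B**2) = 0,
    # where A = C1*kz**2 + C0 - mu1 - E and B = gamma*(kz**2 - m - m1).
    A = C1 * kz**2 + C0 - mu1 - E
    B = gamma * (kz**2 - m - m1)
    coeffs = [C2**2 - M2**2, v**2 - 2.0 * C2 * A - 2.0 * M2 * B, A**2 - B**2]
    roots = []
    for s in np.roots(coeffs):
        p = np.sqrt(s + kx**2 + 0j)   # complex sqrt; keep the Re p > 0 branch
        roots.append(p if p.real > 0 else -p)
    return roots

print(decaying_roots_up(kx=0.01, kz=0.05, E=0.0, m1=0.005, mu1=1e-4))
\end{verbatim}
A surface-state candidate is then built from the decaying roots and matched to the boundary conditions at $y=0$, exactly as described above; scanning $(k_x,k_z)$ at fixed $E$ traces out the arcs.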
[Strictly speaking, the other two pairs of mirror-like configurations, see (a)--(b) and (c)--(d), are not exact mirror reflections of each other because one of the symmetry breaking parameters does not change sign. Because that parameter is small, however, the configurations are approximately mirror symmetric.] It is interesting to point out that different topologies of the global (bulk-plus-arcs) Fermi hypersurfaces are possible. For example, for a range of symmetry breaking parameters, represented by panels (c), (d), (e) and (f) in Fig.~\ref{fig:arcs_dynamical}, we find that the global Fermi hypersurfaces consist of pairs of clearly disconnected parts. This is in contrast to the configurations in panels (a) and (b), where different parts touch at four points, and in contrast to the configurations in panels (g) and (h), where all parts of the global Fermi hypersurfaces are linked by the Fermi arcs. If samples with completely disconnected parts of the global Fermi hypersurfaces are indeed possible, they will be very interesting to study in experiments. As we see from panels (g) and (h) in Fig.~\ref{fig:arcs_dynamical}, qualitatively new types of Fermi arcs are also possible for a range of symmetry breaking parameters. In particular, we find a pair of ``short'' branches of the Fermi arcs that split off from the usual ``long'' arcs. To the best of our knowledge, such short arcs have not been predicted before. So far, we have not been able to establish a general criterion for the existence of the short arcs. In the configurations in panels (g) and (h), they play a profound role by linking two disconnected sheets of the bulk Fermi surface. \section{Conclusion} \label{Conclusion} In this paper, we studied the surface Fermi arc states by employing a continuum low-energy effective model. The use of analytical methods and a realistic low-energy model provides deeper insight into the physical properties and characteristics of the surface Fermi arcs. In particular, we were able to classify the Fermi arcs with respect to the ud-parity and reconfirm the $\mathbb{Z}_2$ Weyl structure of $\mathrm{A_3Bi}$ semimetals \cite{Gorbar:2014sja}. In this context, it should be noted that the experimental observation of the corresponding Fermi arc states has recently been reported for $\mathrm{Na_3Bi}$ \cite{science.1256742}. While in agreement with the claimed topological semimetal structure, such an observation does not confirm it unambiguously. That is because Fermi arc states are also possible in Dirac materials where the $\mathbb{Z}_2$ Weyl structure is absent \cite{WangWeng,Vishwanath}. The unambiguous confirmation of the $\mathbb{Z}_2$ Weyl structure could, however, be established via quantum oscillations, whose period should depend on the thickness of the semimetal in the same way as in true Weyl semimetals \cite{Vishwanath,Gorbar:2014qta}. By introducing the effects of several possible symmetry breaking terms, we showed that the $\mathbb{Z}_2$ Weyl structure of $\mathrm{A_3Bi}$ is destroyed in a very special way: the compounds become true Weyl semimetals. We suggest that this finding can be tested in experiment. For example, since the mirror-symmetric pairs of surface Fermi arcs in clean $\mathrm{A_3Bi}$ get distorted upon the introduction of explicit symmetry breaking (e.g., by magnetic doping), a number of specific features (size, shape, and number of branches) should be observable in the surface Fermi arcs.
The corresponding properties could be studied, for example, by analyzing the quantum oscillations sensitive to the surface states of this type \cite{Vishwanath}. In the absence of symmetry breaking, there will be a unique period of oscillations that depends in a specific way on the thickness of the semimetal slab \cite{Gorbar:2014qta}. On the other hand, the breaking of the symmetry will produce pairs of inequivalent arcs of different lengths, so the observation of two incommensurate periods of oscillations is expected. In principle, by making use of the analytical results in this study, the details of the oscillations could be used to estimate the magnitude of the symmetry breaking terms. \acknowledgments The work of E.V.G. was supported partially by the Ukrainian State Foundation for Fundamental Research. The work of V.A.M. was supported by the Natural Sciences and Engineering Research Council of Canada. The work of I.A.S. was supported by the U.S. National Science Foundation under Grant No.~PHY-1404232.
\section{Background} \label{sec:back} In this section we describe our data model and privacy conditions. We also review the fundamentals of the matrix mechanism, including error measurement and the problem of strategy selection. Throughout the paper, we use the notation of linear algebra and employ standard techniques of matrix analysis. For a matrix $\vect{A}$, $\vect{A}^T$ is its transpose and $\mbox{trace}(\vect{A})$ is the sum of the values on its main diagonal. If $\vect{A}$ is a square matrix with full rank, $\vect{A}^{-1}$ denotes its inverse. We use $diag(c_1, \dots c_n)$ to indicate an $n \times n$ diagonal matrix with scalars $c_i$ on the diagonal. \subsection{\hspace*{-9pt}Data Model, Linear Queries, and Workloads} The workloads considered in this paper consist of counting queries over a single relation. Let the database $I$ be an instance of a single-relation schema $R(\mathbb{A})$, with attributes $\mathbb{A}=\{A_1, A_2, \ldots, A_k\}$. The cross-product of the attribute domains, written $dom(\mathbb{A})$, is the set of all possible tuples that may occur in $I$. In order to express our queries, we first transform the instance $I$ into a {\em data vector} $\x$ of cell counts. We may choose to fully represent instance $I$ by defining the vector $\x$ with one cell for every element of $dom(\mathbb{A})$. Then $\x$ is a vector of size $|dom(\mathbb{A})|$ with nonzero counts for each tuple present in $I$. This is often inefficient (the size of the $\x$ vector is the product of the attribute domain sizes) and ineffective (the base counts are typically too small to be estimated accurately under the privacy condition). A common way to form a vector of base counts over larger cells is to partition each $dom(A_i)$ into $d_i$ regions, which could correspond to ranges over an ordered domain or individual elements (or sets of elements) in a categorical domain. Then the individual cells are defined by taking the cross-product of the regions in each attribute. The choice of cells in the data vector is ultimately determined by the workload queries that need to be expressed. To formally define the data vector we associate, with each element $x_i$ of $\x$, a Boolean {\em cell condition} $\phi_i$, which evaluates to True or False for any tuple in $dom(\mathbb{A})$. We always require that the cell conditions be pairwise unsatisfiable: any tuple in $dom(\mathbb{A})$ will satisfy exactly one cell condition. Then $x_i$ is defined to be the count of the tuples from $I$ which satisfy $\phi_i$. \begin{definition}[Data vector] Given an ordered list of cell conditions $\phi_1, \phi_2 \dots \phi_n$, the data vector $\x$ is a length-$n$ column vector defined by $n$ nonnegative integral counts $x_i = |\{t \in I \:|\: \phi_i(t) \mbox{ is True}\}|$. \end{definition} In the sequel, the length of $\x$ is a key parameter, always denoted by $n$. \begin{example} Consider the relational schema $R(name, gradyear, gender, gpa)$ describing students. If we wish to form queries only over $gender$ (Male or Female), and $gpa$ ranges $[1.0,2.0)$, $[2.0,3.0)$, $[3.0,3.5)$, $[3.5,4.0)$, then we can define the 8 cell conditions enumerated in Fig.~\ref{tbl:one}(a). \end{example} A linear query computes a specified linear combination of the elements of the data vector $\x$. \begin{definition}[Linear query] A {\em linear query} is a length-$n$ row vector $\vect{q}=[q_1 \dots q_n]$ with each $q_i \in \mathbb{R}$.
The answer to a linear query $\vect{q}$ on $\x$ is the vector product $\vect{q}\x = q_1x_1 + \dots + q_nx_n$. \end{definition} In addition to basic predicate counting queries, other aggregates like sum and average, as well as group-by queries, can be expressed as linear counting queries. A {\em workload} is a set of linear queries. A workload is represented as a matrix, where each row is a single linear counting query. \begin{definition}[Query matrix] A {\em query matrix} is a collection of $m$ linear queries, arranged by rows to form an $m \times n$ matrix. \end{definition} If $\vect{W}$ is an $m \times n$ query matrix, the query answer for $\vect{W}$ is a length-$m$ column vector of query results, which can be computed as the matrix product $\vect{W} \x$. Note that cell condition $\phi_i$ defines the meaning of the $i^{th}$ position of $\x$, and accordingly, it determines the meaning of the $i^{th}$ column of a query matrix. \begin{example} Fig.~\ref{tbl:one}(b) shows a query matrix representing a workload of 8 linear queries. Fig.~\ref{tbl:one}(c) describes the meaning of the queries w.r.t. the cell conditions in Fig.~\ref{tbl:one}(a). \end{example} Note that the data analyst should include in the workload {\em all} queries of interest, even if some queries could be computed from others in the workload. In the absence of noise introduced by the privacy mechanism, it might be reasonable for the analyst to request answers to a small set of counting queries, from which other queries of interest could be computed. (E.g., it would be sufficient to recover $\x$ itself by choosing the workload defined by the identity matrix.) But because the analyst will receive private, noisy estimates of the workload queries, the error of queries computed from their combination is often increased. Our adaptive mechanism is designed to optimize error across the entire set of desired queries, so all queries should be included. As a concrete example, in Fig.~\ref{tbl:one}(b), $\vect{q}_3$ can be computed as $(\vect{q}_1 - \vect{q}_2)$ but is nevertheless included in the workload. We introduce terminology for a few common workloads used throughout the paper. The relevant properties of workloads are reflected by their matrix representation, so we often drop explicit mention of the schema and attributes involved and focus simply on the number of distinct attributes and the number of disjoint buckets for each attribute, assuming that cells are formed uniformly in the manner described above. We consider {\em predicate queries}, {\em range queries} and {\em $k$-way marginal queries}. In addition, since each $k$-way marginal query covers a single value on the margin, one may need to sum the answers to multiple marginal queries in order to answer a range query on the margin. When the answers to the marginal queries are noisy, such summing accumulates error. Therefore, in this paper, we also consider {\em $k$-way range marginal queries}, each of which aggregates multiple $k$-way marginal queries so as to cover a range on the margin. \begin{example} Of the queries in Fig.~\ref{tbl:one}, the first seven are range queries (and therefore predicate queries as well).
$\vect{q}_1 \dots \vect{q}_5$ are one-way range marginal queries, in which $\vect{q}_1$, $\vect{q}_2$, $\vect{q}_3$ are one-way range marginal queries over gender and $\vect{q}_1, \vect{q}_4, \vect{q}_5$ are one-way range marginal queries over gpa; $\vect{q}_2,\vect{q}_3$ are also one-way marginal queries. \end{example} We often consider large workloads consisting of {\em all} queries of a given type, such as ``all predicate'', ``all range'', ``all $k$-way range marginal'', ``all $k$-way marginal'' and ``all marginal'' (the union of all $k$-way marginals for $0\leq k\leq m$). Notice that there is no workload of ``all range marginal'' queries since it is equivalent to ``all range''. Later we will also consider {\em ad hoc} workloads consisting of arbitrary subsets of each of these types of queries and their combinations. In practice such workloads are important because they may arise from combining queries of interest to multiple users, or from specializing a general workload to a more specific task, to improve error. \begin{figure*} \vspace{1ex} \centering \subfigure[Cell conditions $\Phi$]{ \small \begin{tabular}{l} $\phi_1: gpa\in[1.0,2.0) \wedge gender=M$\\ $\phi_2: gpa\in[2.0,3.0) \wedge gender=M$ \\ $\phi_3: gpa\in[3.0,3.5) \wedge gender=M$ \\ $\phi_4: gpa\in[3.5,4.0) \wedge gender=M$ \\ $\phi_5: gpa\in[1.0,2.0) \wedge gender=F$ \\ $\phi_6: gpa\in[2.0,3.0) \wedge gender=F$ \\ $\phi_7: gpa\in[3.0,3.5) \wedge gender=F$ \\ $\phi_8: gpa\in[3.5,4.0) \wedge gender=F$ \\ \end{tabular} } \quad \subfigure[A query workload $\vect{W}$]{ \small $\begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & \mbox{-}1 & \mbox{-}1 & \mbox{-}1 & \mbox{-}1 \\ \end{bmatrix}$ } \quad \subfigure[Counting queries defined by rows of $\vect{W}$]{ \small \begin{tabular}{l} $\vect{q}_1$: all students; \\ $\vect{q}_2$: female students;\\ $\vect{q}_3$: male students;\\ $\vect{q}_4$: students with $gpa < 3.0$;\\ $\vect{q}_5$: students with $gpa \ge 3.0$;\\ $\vect{q}_6$: female students with $gpa \ge 3.0$;\\ $\vect{q}_7$: male students with $gpa < 3.0$;\\ $\vect{q}_8$: difference between male and female students.\\ \end{tabular} } \vspace*{3pt} \caption{\label{tbl:one} For schema $R=(name, gradyear, gender, gpa)$, (a) shows 8 cell conditions on attributes $gender$ and $gpa$. The database vector $\x$ (not shown) will accordingly consist of 8 counts; (b) shows a sample workload matrix $\vect{W}$ consisting of 8 queries, each described in (c).} \vspace*{-12pt} \end{figure*} \subsection{Differential Privacy and Gaussian Noise} Standard $\epsilon$-differential privacy \cite{Dwork:2006Calibrating-Noise} places a bound (controlled by $\epsilon$) on the difference in the probability of query answers for any two {\em neighboring} databases. For database instance $I$, we denote by $nbrs(I)$ the set of databases differing from $I$ in at most one record. Approximate differential privacy~\cite{Dwork:2006Our-Data-Ourselves:,McSherry:2009fk} is a modest relaxation in which the $\epsilon$ bound on query answer probabilities may be violated with small probability (controlled by $\delta$).
\begin{definition}\hspace*{-2pt}{\sc(Approximate Differential Privacy)} A randomized algorithm $\alg$ is $(\epsilon,\delta)$-differentially private if for any instance $I$, any $I' \in nbrs(I)$, and any subset of outputs $S \subseteq Range(\alg)$, the following holds: \[ Pr[ \alg(I) \in S] \leq \exp(\epsilon) \times Pr[ \alg(I') \in S] +\delta \] \end{definition} When $\delta=0$, the condition is standard $\epsilon$-differential privacy. Both definitions can be satisfied by adding random noise to query answers. The magnitude of the required noise is determined by the {\em sensitivity} of a set of queries: the maximum change in a vector of query answers over any two neighboring databases. But the two privacy definitions differ in the measurement of sensitivity and in their noise distributions. Standard differential privacy can be achieved by adding Laplace noise calibrated to the $L_1$ sensitivity of the queries \cite{Dwork:2006Calibrating-Noise}. Approximate differential privacy can be achieved by adding Gaussian noise calibrated to the $L_2$ sensitivity of the queries \cite{Dwork:2006Our-Data-Ourselves:,McSherry:2009fk}. Our main results focus on approximate differential privacy, but we discuss extensions to standard differential privacy in Sec.~\ref{sec:sub:l1}. Our query workloads are represented as matrices, so we express the sensitivity of a workload as a matrix norm. Because neighboring databases $I$ and $I'$ differ in exactly one tuple, and because cell conditions are disjoint, it follows that the corresponding vectors $\x$ and $\x'$ differ in exactly one component, by exactly one, in which case we write $\x' \in nbrs(\x)$. The $L_2$ sensitivity of $\vect{W}$ is equal to the maximum $L_2$ norm of the columns of $\vect{W}$. Below, $\mbox{cols}(\vect{W})$ is the set of column vectors $W_i$ of $\vect{W}$. \begin{proposition}[$L_2$ Query matrix sensitivity] The $L_2$ sensitivity of a query matrix $\vect{W}$ is denoted $\Ltwo{\vect{W}}$, defined as follows: \begin{eqnarray*} \Ltwo{\vect{W}} & \stackrel{\mathrm{def}}{=} & \max_{\x' \in nbrs(\x)} \Ltwo{\vect{W}\x - \vect{W}\x'} = \max_{W_i \in \mbox{cols}(\vect{W})} \Ltwo{W_i} \end{eqnarray*} \end{proposition} For example, for $\vect{W}$ in Fig.~\ref{tbl:one}(b), we have $\Ltwo{\vect{W}}=\sqrt{5}$. The classic differentially private mechanism adds independent noise calibrated to the sensitivity of a query workload. We use $\mbox{Normal}(\sigma)^m$ to denote a column vector consisting of $m$ independent samples drawn from a Gaussian distribution with mean $0$ and scale $\sigma$. \begin{proposition}{\sc (Gaussian mechanism \cite{Dwork:2006Our-Data-Ourselves:, McSherry:2009fk})}\label{thm:l2diffpriv} Given an $m \times n$ query matrix $\vect{W}$, the randomized algorithm $\GM$ that outputs the following vector is $(\epsilon,\delta)$-differentially private: $$\GM(\vect{W},\x) = \vect{W}\x + \mbox{Normal}(\sigma)^m$$ where $\sigma=\Ltwo{\vect{W}}\sqrt{2\ln(2/\delta)}/\epsilon$. \end{proposition} Recall that $\vect{W}\x$ is a vector of the true answers to each query in $\vect{W}$. The algorithm above adds independent Gaussian noise (scaled by $\epsilon$, $\delta$, and the sensitivity of $\vect{W}$) to each query answer. Thus $\GM(\vect{W},\x)$ is a length-$m$ column vector containing a noisy answer for each linear query in $\vect{W}$. \subsection{The Matrix Mechanism} The matrix mechanism has a form similar to the Gaussian mechanism but adds a more complex noise vector. It uses a strategy matrix, $\vect{A}$, to construct this vector.
\begin{proposition}{\sc ($(\epsilon,\delta)$-Matrix Mechanism \cite{Li:2010Optimizing-Linear})} \label{def:m-mech} Given an $m \times n$ query matrix $\vect{W}$, and assuming $\vect{A}$ is a full rank $p \times n$ strategy matrix, the randomized algorithm $\MM_\vect{A}$ that outputs the following vector is $(\epsilon,\delta)$-differentially private: \begin{eqnarray*} \MM_\vect{A}(\vect{W},\x) &=& \vect{W}\x + \vect{W} \vect{A}^{\!+} \mbox{Normal}(\sigma)^p \end{eqnarray*} where $\sigma=\Ltwo{\vect{A}}\sqrt{2\ln(2/\delta)}/\epsilon$. \end{proposition} Here $\vect{A}^{\!+}$ is the pseudo-inverse of $\vect{A}$: $\vect{A}^{\!+} = \inv{(\vect{A}^T\vect{A})}\vect{A}^T$; if $\vect{A}$ is a square matrix, then $\vect{A}^{\!+}$ is just the inverse of $\vect{A}$. The intuitive justification for this mechanism is that it is equivalent to the following three-step process: (1) the queries in the strategy are submitted to the Gaussian mechanism; (2) an estimate $\vect{\hat x}$ for $\x$ is derived by computing the $\vect{\hat x}$ that minimizes the squared sum of errors (this step consists of standard linear regression and requires that $\vect{A}$ be full rank to ensure a unique solution); (3) noisy answers to the workload queries are then computed as $\vect{W}\vect{\hat x}$. The answers to $\vect{W}$ derived in step (3) are always consistent because they are computed from a single noisy version of the cell counts, $\vect{\hat x}$. Like the Gaussian mechanism, the matrix mechanism computes the true answer vector $\vect{W}\x$ and adds noise to each component. But a key difference is that the scale of the Gaussian noise is {\em calibrated to the sensitivity of the strategy matrix $\vect{A}$, not that of the workload}. In addition, the noise added to query answers is no longer independent, because the vector of independent Gaussian samples is transformed by the matrix $\vect{W}\vect{A}^{\!+}$. \begin{figure}[t] \centering \begin{tabular}{cc} Identity & Wavelet \vspace{2ex} \\ \small $\left[\begin{smallmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{smallmatrix}\right]$ \hfill & $\left[\begin{smallmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & \mbox{-}1 & \mbox{-}1 & \mbox{-}1 & \mbox{-}1 \\ 1 & 1 & \mbox{-}1 & \mbox{-}1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & \mbox{-}1 & \mbox{-}1 \\ 1 & \mbox{-}1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & \mbox{-}1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & \mbox{-}1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & \mbox{-}1 \\ \end{smallmatrix}\right]$ \vspace{2ex} \\ \multicolumn{2}{c}{Adaptive Algorithm Output} \vspace{2ex} \\ \multicolumn{2}{c}{ $\left[\begin{smallmatrix} 0 & 0 & \mbox{-}.1 & .1 & .19 & \mbox{-}.19 & 0 & 0 \\ 0 & 0 & .19 & \mbox{-}.19 & .1 & \mbox{-}.1 & 0 & 0 \\ \mbox{-}.3 & \mbox{-}.3 & .33 & .33 & .33 & .33 & \mbox{-}.3 & \mbox{-}.3 \\ .47 & .47 & \mbox{-}.5 & \mbox{-}.5 & .5 & .5 & \mbox{-}.47 & \mbox{-}.47 \\ .56 & .56 & .53 & .53 & \mbox{-}.53 & \mbox{-}.53 & \mbox{-}.56 & \mbox{-}.56 \\ .62 & .62 & .57 & .57 & .57 & .57 & .62 & .62 \\ \end{smallmatrix}\right]$ } \\ \end{tabular} \vspace*{-.5ex} \caption{\label{fig:strat} Alternative strategy matrices that can be used to answer workload $\vect{W}$ from Fig.~\ref{tbl:one}(b).
The root mean square error of answering $\vect{W}$ when using the identity, wavelet, and adapted strategies is $45.36$, $34.62$, and $29.79$, respectively. } \vspace*{-10pt} \end{figure} \begin{example} The three strategy matrices in Fig.~\ref{fig:strat} can be used by the matrix mechanism to answer the workload $\vect{W}$ in Fig.~\ref{tbl:one}(b), with differing results. The first strategy is the identity matrix, the second is the Haar wavelet strategy, and the third is the output of the algorithm proposed in Sec.~\ref{sec:alg}. With $\epsilon=0.5$ and $\delta=0.0001$, if the workload itself is used as the strategy, the root mean square error of answering $\vect{W}$ is $47.78$. The root mean square error using the identity, wavelet, and adaptive strategies is $45.36$, $34.62$, and $29.79$, respectively. It is possible to prove that no strategy can answer $\vect{W}$ with error less than $29.18$, so the algorithm is finding a nearly optimal strategy for this workload. Intuitively, by using the identity strategy, we get noisy estimates of each cell count using the Gaussian mechanism, and then use those estimates to compute the workload queries. This strategy performs poorly for workload queries that sum many base counts because the variance of the independent noise increases additively. The wavelet addresses this limitation by allowing large range queries to be estimated by combining the answers to just a few of the wavelet strategy queries. It offers a dramatic improvement over the identity strategy for workloads consisting of all range queries. However, the wavelet is not necessarily appropriate for every workload. Our algorithm produces a strategy customized to $\vect{W}$, allowing for reduced error. \end{example} \subsection{Optimal Error for the Matrix Mechanism} \label{sec:sub:error} We measure the accuracy of a noisy query answer using root mean square error. For a workload of queries, the error is defined as the root mean square error of the vector of answers, which we refer to simply as {\em workload error} in the remainder of the paper. \begin{definition}[Query and Workload Error]\label{def:errors} Let $\vect{\hat w}$ be the estimate for query $\w$ under the matrix mechanism using query strategy $\vect{A}$. That is, $\vect{\hat w}=\MM_\vect{A}(\w,\x)$.
The query error of the estimate for $\w$ using strategy $\vect{A}$ is: $$\error{\vect{A}}{\w} \stackrel{\mathrm{def}}{=} \sqrt{\mathbb{E}[ ( \w\x - \vect{\hat w})^2 ]}.$$ Given a workload $\vect{W}$ consisting of $m$ queries, the workload error of answering $\vect{W}$ using strategy $\vect{A}$ is: \[\error{\vect{A}}{\vect{W}}\stackrel{\mathrm{def}}{=}\sqrt{\frac{1}{m}\sum_{\w_i \in \vect{W}} \error{\vect{A}}{\w_i}^2}.\] \vspace*{-2ex} \end{definition} The query answers returned by the matrix mechanism are linear combinations of noisy strategy query answers to which independent Gaussian noise has been added. Thus, as the following proposition shows, we can directly compute the error for any linear query $\w$ or workload $\vect{W}$ as a function of $\epsilon$, $\delta$, and $\vect{A}$: \begin{proposition}{\sc (Workload Error)} \label{prop:totalerror} Given a workload $\vect{W}$, the error of answering $\vect{W}$ using the $(\epsilon,\delta)$ matrix mechanism with query strategy $\vect{A}$ is: \begin{equation}\label{eqn:totalerror} \error\vect{A}{\vect{W}} = ||\vect{A}||_2\sqrt{P(\epsilon, \delta)\;\mbox{trace} (\vect{W}^T\vect{W}(\vect{A}^T\vect{A})^{-1})} \end{equation} where $P(\epsilon, \delta)=\frac{2\log(2/\delta)}{\epsilon^2}$. \end{proposition} To build a mechanism that adapts to a given workload $\vect{W}$, our goal is to select a strategy $\vect{A}$ that minimizes the above formula. The optimal strategy for a workload $\vect{W}$ is defined to be one that minimizes the workload error: \begin{problem}{\sc (Optimal Strategy Selection)} \label{prob:mintotal} Given a workload $\vect{W}$, find a query strategy $\vect{A}_0$ such that: \begin{equation}\label{eqn:mintotalerror} \error{\vect{A}_0}\vect{W} = {\min}_\vect{A}\error\vect{A}\vect{W}. \end{equation} \end{problem} We denote the problem of computing an optimal strategy matrix as $\optstrategy{\vect{W}}$ and the workload error under this strategy as $\opterror{\vect{W}}$. It is possible to compute an exact solution to $\optstrategy{\vect{W}}$ by representing it as a convex optimization problem~\cite{Li:2010Optimizing-Linear}. However, encoding the necessary constraints results in a problem with a large number of variables, and optimization takes $O(n^8)$ time with standard solvers, making it infeasible for practical applications. One of our main goals is to efficiently find approximately optimal strategy matrices, for any provided workload.
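For readers who want to experiment with Eq.~(\ref{eqn:totalerror}), the following minimal Python sketch evaluates it directly for a given workload and full-rank strategy. It is an illustration only: the function name is ours, the code follows the displayed formula literally, and the exact values it produces for the example workload depend on the normalization convention (cf. Def.~\ref{def:errors}).
\begin{verbatim}
import numpy as np

def workload_error(W, A, eps=0.5, delta=1e-4):
    # Workload error of answering W with strategy A, per Eq. (totalerror):
    # ||A||_2 is the L2 sensitivity of A (its largest column norm) and
    # P(eps, delta) = 2 log(2/delta) / eps^2.
    P = 2.0 * np.log(2.0 / delta) / eps**2
    sensitivity = np.max(np.linalg.norm(A, axis=0))
    M = W.T @ W @ np.linalg.inv(A.T @ A)   # requires A to have full rank
    return sensitivity * np.sqrt(P * np.trace(M))

# The workload of Fig. 1(b) and the Haar wavelet strategy of Fig. 2.
W = np.array([[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1, 1, 1], [1, 1, 0, 0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 0, 1, 1], [0, 0, 0, 0, 0, 0, 1, 1],
              [1, 1, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, -1, -1, -1, -1]], float)
H = np.array([[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, -1, -1, -1, -1],
              [1, 1, -1, -1, 0, 0, 0, 0], [0, 0, 0, 0, 1, 1, -1, -1],
              [1, -1, 0, 0, 0, 0, 0, 0], [0, 0, 1, -1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, -1, 0, 0], [0, 0, 0, 0, 0, 0, 1, -1]], float)
print(workload_error(W, np.eye(8)))   # identity strategy
print(workload_error(W, H))           # wavelet strategy
\end{verbatim}
Any candidate strategy can be compared this way; since the formula does not involve $\x$, no data is needed.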
We emphasize that the algorithms in this paper optimize the workload error, an {\em absolute} measure of error. The solution to this optimization problem depends on the workload alone, not on the input database. (This is evident from the fact that $\x$, the vector of database counts, does not appear in Eq.~(\ref{eqn:totalerror}) above.) We also consider {\em relative} error in the experimental evaluation, which inherently depends on the input database. We show that low relative error for a workload $\vect{W}$ can be achieved by optimizing the (absolute) error of a workload whose rows have been scaled in a straightforward way. \section{Conclusions and Future Work}\label{sec:conclusion} We have described an adaptive mechanism for answering complex workloads of counting queries under differential privacy. The mechanism can be seen to automatically select, for a given workload, a noise distribution composed of linear combinations of independent Gaussian noise. With no reduction in privacy, the mechanism can significantly reduce error over competing techniques and is close to optimal with respect to the class of perturbation methods considered. In the future we hope to extend our theoretical approximation bounds to the eigen-separation and principal vector optimizations, and to apply our approach to non-linear queries. \paragraph*{Acknowledgements} Li and Miklau were supported by the NSF through grants CNS-1012748 and IIS-0964094. We are grateful for the comments of the anonymous reviewers. \section{Complexity and Optimizations} \label{sec:eff} We focus next on methods to further reduce the complexity of approximate strategy selection. We first analyze the complexity of the strategy selection algorithm and show that it can be solved more efficiently for low-rank workloads, with no impact on the quality of the solution. Then we propose two approaches which can significantly speed up strategy selection by reducing the size of the input to Program~\ref{prog:eigen}. Intuitively, both approaches perform strategy selection over a summary of the workload that is constructed from its most significant eigenvectors, potentially sacrificing fidelity of the solution. We evaluate the latter two techniques in Sec.~\ref{sec:exp:tradeoff}. \subsection{Complexity Analysis} The rank of a workload matrix $\vect{W}$, denoted by $\textup{rank}(\vect{W})$, is the size of the largest linearly-independent subset of its rows (or, equivalently, columns). When $\textup{rank}(\vect{W})$ is its maximum value, $n$, we say that $\vect{W}$ has full rank, which implies that accurate answers to the workload queries in $\vect{W}$ uniquely determine every cell count in $\x$. The complexity of the strategy selection algorithm can be broken into three parts: computing the eigenvectors and eigenvalues of the matrix $\vect{W}^T\vect{W}$, solving the optimization problem, and constructing the strategy. If an eigenvalue is equal to zero, the eigenvalue and its corresponding eigenvectors are not actually involved in the optimization and strategy construction, so they can be omitted in practice.
Since the number of nonzero eigenvalues of $\vect{W}^T\vect{W}$ is equal to $\textup{rank}(\vect{W})$, the complexity of Program~\ref{prog:eigen} is $O(nm\,\textup{rank}(\vect{W})+n\,\textup{rank}(\vect{W})^3)$. This analysis indicates that the efficiency can be significantly improved when $\textup{rank}(\vect{W})\ll n$. For example, the rank of low-order marginal workloads can be bounded by the number of queries in the workload. Suppose a low-order marginal workload is defined on a $k$-dimensional space of cell conditions, each dimension of which has size $d$. If the workload only contains one-way marginals, the complexity of solving Program~\ref{prog:eigen} over this workload is bounded by $O(k^3d^{3+k})$. If the workload consists of one- and two-way marginals, the complexity is $O(k^6d^{k+6})$. Both of these bounds are much smaller than $O(d^{4k})$. \subsection{Workload Reduction Approaches} Next we propose two approaches which allow us to reduce the number of variables in the optimization problem. Both are inspired by principal component analysis (PCA), in which a matrix is characterized by the so-called principal eigenvectors, which are the eigenvectors associated with the largest eigenvalues. In our case, recall that we cannot ignore the non-principal eigenvectors since the rank of the strategy matrix $\vect{A}$ cannot be lower than that of the workload matrix $\vect{W}$. Instead, we either compute separately the weights for the principal and remaining eigenvectors, or we choose the same weight for all the remaining eigenvectors. \subsubsection*{Eigen-Query Separation} In {\em eigen-query separation}, we partition the eigen-queries into groups of a specified size according to their corresponding eigenvalues. Treating one group at a time, Program~\ref{prog:expdesign} is executed to determine the optimal weights just for the eigenvectors of that group. After the individual group optimizations are finished, another optimization can be used to calculate the best factor to be applied to all queries in each group.
If the group size is large, all of the principal eigenvectors may be contained in one group, in which case the most important weights will be computed precisely. The complexity of eigen-query separation depends on the group division. Notice that during the optimization of each group, the convex optimization problem is equivalent to setting the eigenvalues of all excluded eigenvectors to zero. Analogous to the discussion of low-rank workloads, letting the size of each group be $n_g$, the complexity of solving the optimization problem over each group is $O(nn_g^3)$. Similarly, the time complexity to combine all the groups is $O(n(n/n_g)^3)$, and therefore $O(n^2n_g^3+n(n/n_g)^3)$ in total. Asymptotically, the complexity of eigen-query separation is minimized when $n_g=O(n^{1/3})$. Then the complexity of the entire process is $O(n^{3})$, the same as the cost of standard matrix multiplication. \subsubsection*{Principal Vectors Optimization} In the {\em principal vector optimization} we use a subset of the $k$ most important eigenvectors as the design set, computing the optimal weights as usual. Instead of ignoring the less important eigenvectors (as is typical in PCA), we simply use a single common weight for each of the excluded vectors that have non-zero eigenvalues. The number of variables in the convex optimization is reduced to $k+1$, so that the time complexity is reduced to $O(nk^3)$. Experimentally we find that good results are possible with as little as $10\%$ of the eigenvectors. In Sec.~\ref{sec:exp:tradeoff} we show that both of the above approaches can improve execution time by two orders of magnitude with modest impact on solution quality. \section{An Algorithm for Efficient Strategy Selection} \label{sec:alg} In this section we present an approximation algorithm for the strategy selection problem, prove its approximation rate and other properties, and discuss adapting the algorithm to $\epsilon$-differential privacy. \subsection{Optimal Query Weighting} The main difficulty in solving $\optstrategy{\vect{W}}$ is computing (subject to complex constraints) all $n^2$ entries of a strategy matrix. To simplify the problem, we take inspiration from the related problem of {\em optimal experimental design} \cite{Pukelsheim93Optimal}. Consider a scientist who wishes to estimate the value of $n$ unknown variables as accurately as possible. The variables cannot be observed directly, but only by running one or more of a fixed set of feasible experiments, each of which returns a linear combination of the variables. The experiments suffer from observational error, but those errors are assumed independent, and it follows that the least squares method can be used to estimate the unknown variables once the results of the experiments are collected. Each experiment has an associated cost (which may represent time, effort, or financial expense) and the scientist has a fixed budget. The optimal experimental design is the subset (or weighted subset) of feasible experiments offering the best estimate of the unknown variables at a cost within the budget constraint. There is an immediate analogy to the problem of strategy selection: our strategy queries are like experiments that provide partial information about the unknown data vector $\x$, and the final result will be computed using the least squares method.
However, in our setting, we are permitted to ask any query, with a cost (arising from the increase in sensitivity) which impacts the added noise. In addition, our goal is to minimize the workload error, while experimental design always minimizes the error of the individual variables (i.e., the error metric in experimental design is equivalent to our problem only if $\vect{W}$ is the identity matrix). Despite these important differences, we adopt from experimental design the idea of limiting the selection of our strategy to weighted combinations of a set of {\em design queries} that are fixed ahead of time. Naturally, design queries with a weight of zero are omitted. For a set of design queries $\mathcal{Q}$, the following problem, denoted $\appstrategy{\mathcal{Q}}{\vect{W}}$, selects the set of weights which minimizes the workload error for $\vect{W}$. \begin{problem}[Approximate Strategy Selection] \label{prob:expdesign} Let $\vect{W}$ be a workload and $\mathcal{Q}=\{\vect{q}_1, \dots, \vect{q}_k\}$ the design queries. For weights $\vect{\Lambda}=(\lambda_1 \dots \lambda_k) \in \mathbb{R}^k$, let the matrix $\vect{A}_{\vect{\Lambda}, \mathcal{Q}}=[\lambda_1\vect{q}_1, \ldots, \lambda_k\vect{q}_k]^T$. Choose weights $\vect{\Lambda}_0\in\mathbb{R}^k$ such that: \begin{equation}\label{eqn:approxtotalerror} \error{\vect{A}_{\vect{\Lambda}_0, \mathcal{Q}}}\vect{W} = {\min}_{\vect{\Lambda}\in\mathbb{R}^k}\error{\vect{A}_{\vect{\Lambda}, \mathcal{Q}}}\vect{W}. \end{equation} \end{problem} The solution to this problem only approximates the truly optimal strategy since it is limited to selecting a strategy that is a weighted combination of the design queries. But $\appstrategy{\mathcal{Q}}{\vect{W}}$ can be computed much more efficiently than $\optstrategy{\vect{W}}$. To do so, we describe $\appstrategy{\mathcal{Q}}{\vect{W}}$ as a semi-definite program~\cite{boyd2004convex}, a special form of convex optimization in which a linear objective function is minimized over the cone of positive semidefinite matrices. Below, $\circ$ is the Hadamard (entry-wise) product of two matrices, and for a symmetric matrix $\mathbf{Q}$, $\mathbf{Q}\succeq 0$ denotes that $\mathbf{Q}$ is positive semidefinite, which means $\x^T\mathbf{Q}\x\geq 0$ for any vector $\x$. \begin{algorithm}[ht] \small \vspace{-1ex} \begin{align*} \mbox{\textbf{Given: }} & c_1,\ldots,c_n, \,\mathcal{Q}=[\vect{q}_1, \ldots, \vect{q}_n].\\ \mbox{\textbf{Choose: }} & u_1,\ldots,u_n,\,v_1,\ldots,v_n.\\ \mbox{\textbf{Minimize: }} & c_1v_1+\ldots+c_nv_n.\\ \mbox{\textbf{Subject to: }} & \left[\begin{array}{cc}u_i & 1\\1 & v_i\end{array}\right]\succeq 0, \quad i=1,\ldots, n.\\ & (\mathcal{Q}\circ \mathcal{Q})^T\mathbf{u}\leq \mathbf{1}. \end{align*} \vspace{-2ex} \caption{Optimal Query Weighting} \label{prog:expdesign} \end{algorithm} \begin{theorem} \label{thm:sdp} Given a workload $\vect{W}$ and a set of design queries $\mathcal{Q}=\{\vect{q}_1, \dots, \vect{q}_n\}$, let $c_1,\ldots, c_n$ be the squared $L_2$ norms of the columns of the matrix $\vect{W}\mathcal{Q}^{+}$. If the output of Program~\ref{prog:expdesign} is $u_1,\ldots, u_n$, then setting $\vect{\Lambda}=\{\sqrt{u_1} \dots \sqrt{u_n}\}$ achieves $\appstrategy{\mathcal{Q}}{\vect{W}}$.
\end{theorem} Algorithms for efficiently solving semidefinite programs have received considerable attention recently \cite{boyd2004convex}. Using standard algorithms, Program~\ref{prog:expdesign} can be solved in $O(n|\mathcal{Q}|^3)$ time. Recall that the complexity of computing $\optstrategy{\vect{W}}$ is $O(n^8)$. Thus, Program~\ref{prog:expdesign} offers an efficiency improvement as long as $|\mathcal{Q}|=O(n^2)$. This provides a target size for selecting the design set, which we turn to next. \subsection{Choosing the Design Queries}\label{sec:alg:choose} The potential of the above approach depends on finding a set of design queries, $\mathcal{Q}$, that is concise (containing no more than $n^2$, and preferably $n$, queries) and also expressive (so that near-optimal solutions can be expressed as weighted combinations of its elements). One straightforward idea is to adopt as the design queries one of the proposed strategy matrices from prior work. These are good strategy matrices for specific workloads such as the set of all range queries (wavelet or hierarchical strategy) or sets of low-order marginals (the Fourier strategy). Choosing one of these for $\mathcal{Q}$ would guarantee that $\appstrategy{\mathcal{Q}}{\vect{W}}$ produces a solution that improves upon the error of using that strategy. Unfortunately these strategies are not sufficiently expressive for workloads very different from their target workloads. Another possibility is to use the workload itself as the set of design queries, but there are two difficulties with this. First, there is no guarantee that a workload includes within it the components from which a high quality strategy may be formed, especially if the workload only contains a small set of queries. The workloads of all range and all predicate queries are in fact sufficiently expressive (e.g., both the hierarchical strategy and a strategy equivalent to wavelet can be constructed by applying weights to the set of all range queries). But this leads to the second issue: these workloads, and others that serve important applications, are too large and fail to meet our conciseness requirement. To avoid these pitfalls, we will derive the design set from the given workload $\vect{W}$ by applying tools of spectral analysis. Intuitively this is a good choice because the eigenvectors of a matrix often capture its most important properties. We will also show in the next section that this choice aids in the theoretical analysis of the approximation ratio because it allows us to relate the output of $\appstrategy{\mathcal{Q}}{\vect{W}}$ to a lower bound on error that is a function of the workload eigenvalues. Recall that the key part of the expression for Eqn.~(\ref{eqn:totalerror}) in Prop.
\ref{prop:totalerror} is $\mbox{trace} (\vect{W}^T\vect{W}(\vect{A}^T\vect{A})^{-1})$, and notice that the workload occurs only in the form of $\vect{W}^T\vect{W}$. It follows that there are many workloads with equivalent error, because it is easy to construct a matrix $\vect{W}_0$ such that $\vect{W}_0^T\vect{W}_0=\vect{W}^T\vect{W}$ by letting $\vect{W}_0=\mathbf{Q}\vect{W}$ for any orthogonal matrix $\mathbf{Q}$. This suggests that, as far as workload error under the matrix mechanism is concerned, the essential properties of the workload are reflected by $\vect{W}^T\vect{W}$. This motivates the following definition of the eigen-queries of a workload, which we will use as our design set. \begin{definition}[Eigen-queries of a workload] Given a workload $\vect{W}$, consider the eigen-decomposition of $\vect{W}^T\vect{W}$ into $\vect{W}^T\vect{W}=\mathbf{Q}^T\vect{D}\mathbf{Q}$, where $\mathbf{Q}$ is an orthogonal matrix and $\vect{D}$ is a diagonal matrix. The \textbf{eigen-queries} of $\vect{W}$ are the rows of $\mathbf{Q}$ (i.e., the eigenvectors of $\vect{W}^T\vect{W}$). \end{definition} Choosing the eigen-queries of $\vect{W}$ as the design set meets our conciseness requirement because there are never more than $n$ eigen-queries. Thus Program~\ref{prog:expdesign}, $\appstrategy{\mathcal{Q}}{\vect{W}}$, has complexity $O(n^4)$, which is $O(n^4)$ times faster than solving $\optstrategy{\vect{W}}$. We also find that the eigen-queries meet our expressiveness objective. We will show this next by proving a bound on the approximation ratio. In Sec.~\ref{sec:eff} we propose techniques that exploit the fact that subsets of the eigen-queries retain much of the expressiveness, and thereby increase efficiency. And in Section~\ref{sec:exp}, we show experimentally that weighted eigen-queries allow for near-optimal strategies, and also that the eigen-queries outperform other natural alternatives for the design set. \subsection{The Eigen-Design Algorithm} It remains to define the complete Eigen-Design algorithm, which is Program~\ref{prog:eigen}: \begin{algorithm}[h!] \small \begin{algorithmic}[1] \REQUIRE Workload matrix $\vect{W}$. \ENSURE Strategy matrix $\vect{A}$. \STATE Compute the eigenvalue decomposition of $\vect{W}^T\vect{W}=\mathbf{Q}^T\vect{D}\mathbf{Q}$, where $\vect{D}=diag(\sigma_1, \ldots, \sigma_n)$, and set $\mathcal{Q}=\mathbf{Q}$. \STATE Compute weights $\lambda_1,\ldots,\lambda_n$ by solving Program~\ref{prog:expdesign} for the above $\mathcal{Q}$ and with $c_i=\sigma_i$, $i\in [1..n]$. \STATE Construct matrix $\vect{A}'=\vect{\Lambda}\mathbf{Q}$ where $\vect{\Lambda}=diag(\lambda_1,\ldots,\lambda_n)$.
\STATE Let $m_{11}, \ldots, m_{nn}$ be the $L_2$ norms of the columns of $\vect{A}'$ and define $\vect{D}'=diag(\max_i\{\sqrt{m^2_{ii}-m^2_{11}}\},\ldots, \max_i\{\sqrt{m^2_{ii}-m^2_{nn}}\})$. \STATE Return $\vect{A}=\left[\begin{smallmatrix}\vect{A}'\\ \vect{D}'\end{smallmatrix}\right]$. \end{algorithmic} \caption{The Eigen-Design Algorithm} \label{prog:eigen} \end{algorithm} The algorithm performs the decomposition of $\vect{W}^T\vect{W}$ to derive the design queries (Step 1), and solves $\appstrategy{\mathcal{Q}}{\vect{W}}$ using the eigen-queries as the design set (Step 2). The matrix $\vect{A}'$ that is constructed in Step 3 is a candidate strategy but may have one or more columns whose norm is less than the sensitivity. In this case, it is possible to add queries, completing the columns, without raising the sensitivity (Steps 4 and 5). These additional queries can only provide more information about the database, and hence reduce error. \subsection{Analysis of the Eigen-Design Algorithm}\label{sec:alg:analysis} We now consider the accuracy and generality of the eigen-design algorithm, showing a bound on the worst-case approximation rate and that the accuracy of the algorithm is robust with respect to the representation of the input workload. \subsubsection*{Approximation Rate} To bound the approximation rate, we use an existing result showing a lower bound on the optimal error achievable for a workload using the $(\epsilon, \delta)$-matrix mechanism~\cite{Li11Measuring}. The existence of this bound does not imply an algorithm for achieving it, but it is a useful tool for understanding, theoretically and experimentally, the quality of the strategies produced by $\appstrategy{}{\vect{W}}$ using the eigenvalues of $\vect{W}$. \begin{theorem}{\sc(Singular Value Bound~\cite{Li11Measuring})}\label{thm:svdb} Given any $m\times n$ workload $\vect{W}$, let $\sigma_1, \ldots, \sigma_n$ be the eigenvalues of the matrix $\vect{W}^T\vect{W}$. The singular value bound of $\vect{W}$ is $\mbox{\sc svdb}(\vect{W})=\frac{1}{n}(\sqrt{\sigma_1}+\ldots+\sqrt{\sigma_n})^2$, and it bounds $\opterror{\vect{W}}$: \[\opterror{\vect{W}}\geq \sqrt{P(\epsilon, \delta)\mbox{\sc svdb}(\vect{W})}.\] \end{theorem} Intuitively, let $\vect{A}_l$ be the strategy that is defined by weighting the eigen-queries of $\vect{W}$ by $\sqrt{\sigma_1},\ldots,\sqrt{\sigma_n}$. The singular value bound comes from underestimating the sensitivity of $\vect{A}_l$ by $\sqrt{\mbox{trace}(\vect{A}_l^T\vect{A}_l)/n}$. In practice, the singular value bound may not be achieved, since there is a gap between the sensitivity of $\vect{A}_l$ and $\sqrt{\mbox{trace}(\vect{A}_l^T\vect{A}_l)/n}$; nevertheless, the idea of weighting the eigen-queries can be combined with the experimental design method to find good strategies for $\vect{W}$.
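The bound itself is inexpensive to evaluate. The following minimal Python sketch (our own illustration, not code from the paper's implementation) computes $\sqrt{P(\epsilon,\delta)\,\mbox{\sc svdb}(\vect{W})}$ directly from the eigenvalues of $\vect{W}^T\vect{W}$.
\begin{verbatim}
import numpy as np

def svd_bound(W, eps=0.5, delta=1e-4):
    # Lower bound sqrt(P(eps, delta) * svdb(W)) on the optimal workload
    # error, where svdb(W) = (1/n) * (sum_i sqrt(sigma_i))^2 and sigma_i
    # are the eigenvalues of W^T W (Singular Value Bound theorem).
    n = W.shape[1]
    sigma = np.linalg.eigvalsh(W.T @ W)   # eigenvalues of W^T W
    sigma = np.clip(sigma, 0.0, None)     # clamp tiny negative round-off
    svdb = np.sqrt(sigma).sum() ** 2 / n
    P = 2.0 * np.log(2.0 / delta) / eps**2
    return np.sqrt(P * svdb)
\end{verbatim}
Dividing the error of any candidate strategy by this quantity gives an upper estimate of its true approximation ratio, since the bound is itself a lower bound on $\opterror{\vect{W}}$.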
Notice that the strategy $\vect{A}_l$ is contained in the possible solutions of Program~\ref{prog:eigen}. Thus the approximation ratio of Program~\ref{prog:eigen} can be estimated using the approximation ratio of the singular value bound.

\begin{theorem}
Given a workload $\vect{W}$, let $\sigma_1$ be the largest eigenvalue of $\vect{W}^T\vect{W}$. Program~\ref{prog:eigen} gives a strategy that approximates $\opterror{\vect{W}}$ within a ratio of $(n\sigma_1/\mbox{\sc svdb}(\vect{W}))^{1/4}$.
\end{theorem}

This theorem shows that the approximation ratio of applying Program~\ref{prog:eigen} to a workload $\vect{W}$ can be bounded by analyzing the eigenvalues of the matrix $\vect{W}^T\vect{W}$. In practice, the ratio between the error of the eigen-strategies and the optimal error is much smaller for a wide range of common workloads. In the experiments in Sec.~\ref{sec:exp}, the largest ratio is at most $1.3$, and in a number of cases the ratio is essentially equal to 1, modulo numerical imprecision.

\subsubsection*{Representation Independence}

We say that the Eigen-Design algorithm is representation independent because its output is invariant for semantically equivalent workloads and error equivalent workloads.

Recall that the logical semantics of a workload matrix $\vect{W}$ depends on its cell conditions. For any workload matrix $\vect{W}$, reordering its cell conditions leads to a new matrix $\vect{W}'$ with accordingly reordered columns. In this case, we say $\vect{W}$ and $\vect{W}'$ are \textsl{semantically-equivalent}. Naturally, we hope for a mechanism with equal error for any two semantically-equivalent representations of a workload. Some prior approaches do not have this property. For example, the wavelet and hierarchical strategies exploit the locality present in the canonical representation of range queries; an alternative matrix representation of the range queries may result in significantly larger error. The Eigen-Design algorithm does not suffer from this pitfall:

\begin{proposition}[Semantic equivalence]
Let $\vect{W}_1$ and $\vect{W}_2$ be two semantically-equivalent workloads, and suppose Prog.~\ref{prog:eigen} computes strategy $\vect{A}_1$ on workload $\vect{W}_1$ and $\vect{A}_2$ on workload $\vect{W}_2$. Then $\error{\vect{A}_1}{\vect{W}_1}=\error{\vect{A}_2}{\vect{W}_2}$.
\end{proposition}
A related issue arises for two workloads that may be semantically different, but can be shown to have equivalent error. Since $\vect{W}$ appears as $\vect{W}^T\vect{W}$ in the expression for the error of a workload, it follows that, for any orthogonal matrix $\mathbf{Q}$, workload $\mathbf{Q}\vect{W}$ has error equal to that of $\vect{W}$ under any strategy. In particular, any two such workloads have equal minimum error. The Eigen-Design algorithm always finds the same strategies for any two error-equivalent workloads:

\begin{proposition}[Error equivalence]
Let $\vect{W}_1$ and $\vect{W}_2$ be two error-equivalent workloads (i.e.\ $\vect{W}_1=\mathbf{Q}\vect{W}_2$ for some orthogonal $\mathbf{Q}$), and suppose Program~\ref{prog:eigen} computes strategy $\vect{A}_1$ on workload $\vect{W}_1$ and $\vect{A}_2$ on workload $\vect{W}_2$. Then $\error{\vect{A}_1}{\vect{W}_1}=\error{\vect{A}_2}{\vect{W}_2}$.
\end{proposition}

This result follows from the fact that Program~\ref{prog:expdesign} takes as input the eigenvectors of $\vect{W}^T\vect{W}$, and therefore operates identically on equivalent workloads.

\subsubsection*{Optimizing for Relative Error}

The discussion above is about workload error, an {\em absolute} measure of error. Our adaptive approach can also be used to find strategies offering low {\em relative} error. However, these are two fundamentally different optimization objectives, and a single strategy matrix will not, in general, satisfy both. One major difference between computing absolute error and relative error is the impact of the $L_2$ norm of a query vector. According to Prop.~\ref{def:m-mech} and Def.~\ref{def:errors}, the query error of $\w$ under strategy $\vect{A}$ is proportional to the $L_2$ norm of $\w$. Therefore a scaled query $k\w$ has a query error $k$ times larger than that of $\w$, and thus a query with higher $L_2$ norm contributes more to workload error. But because relative error does not change with the $L_2$ norm of the query, using strategies optimized for workload error will not lead to optimal relative error.

Because the matrix mechanism is a data-independent mechanism, it is not possible to optimize for relative error directly. If the distribution of the target dataset were known, we could scale each query by its weighted $L_2$ norm, where the weight on each cell is proportional to the inverse of its probability. This scaling optimizes towards relative error by neutralizing the fact that the designed strategies are biased towards high-norm queries. Since the underlying distribution is typically unknown, we introduce a heuristic scaling, prior to applying the Eigen-Design algorithm, in which each query is normalized to make its $L_2$ norm $1$. This is equivalent to assuming a uniform distribution over the cells. In Sec.~\ref{sec:exp}, we show that, for two real datasets, this approach results in significantly lower relative error than competing techniques.
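The heuristic amounts to one line of preprocessing; a minimal sketch:

\begin{verbatim}
import numpy as np

def normalize_queries(W, eps=1e-12):
    """Rescale every workload query (row) to unit L2 norm before
    running the Eigen-Design algorithm; zero rows are left as-is."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W / np.maximum(norms, eps)
\end{verbatim}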
\subsection{Application to the $\epsilon$-Matrix Mechanism}
\label{sec:sub:l1}

There are a number of challenges to applying the optimally weighted design approach under $\epsilon$-differential privacy. Recall, once again, the formula for workload error from Prop.~\ref{prop:totalerror}: $|| \vect{A} ||_2\sqrt{\mbox{trace} (\vect{W}^T\vect{W}(\vect{A}^T\vect{A})^{-1})}$. To move to $\epsilon$-differential privacy, only the sensitivity term changes, from $L_2$ to $L_1$: $|| \vect{A} ||_1 \sqrt{\mbox{trace} (\vect{W}^T\vect{W}(\vect{A}^T\vect{A})^{-1})}$. In the former case, the sensitivity term $|| \vect{A} ||_2$ is uniquely determined by $\vect{A}^T\vect{A}$. But in the latter case, computing a near-optimal $\vect{A}^T\vect{A}$ is not enough, because $|| \vect{A} ||_1$ remains undetermined and is itself hard to optimize. As a result, it is more challenging (although still possible) to represent the optimal query weighting as a convex optimization problem. We omit its formal encoding, but note that the resulting problem is also less efficient because we can no longer rely on second-order cone programming. Furthermore, there does not seem to be a universally good design set: the eigen-queries do not outperform other bases in general, because they characterize only the properties of $\vect{W}^T\vect{W}$ but do not account for the $L_1$ sensitivity. We can nevertheless still use our algorithm to improve existing strategies. For example, using the Wavelet basis in the algorithm improves its performance on all range and random range queries by factors of $1.1$ and $1.5$, respectively; using the Fourier basis improves its performance on low-order marginals by a factor of $1.6$. Lastly, we do not know of an analogue of Thm.~\ref{thm:svdb} providing a guaranteed error bound for the $\epsilon$-matrix mechanism with which to verify the quality of the output.

These challenges motivate our choice to focus on $(\epsilon, \delta)$-differential privacy. While the two privacy guarantees are, strictly speaking, incomparable, for conservative settings of $\delta$ a user may be indifferent between the two. It is then possible to show that the asymptotic error rates for many workloads are roughly comparable between the two models.
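To fix notation, a small numpy sketch of the error formula under either sensitivity measure, taking the sensitivity of $\vect{A}$ to be its maximum column norm and omitting the constant $P(\epsilon, \delta)$ common to all strategies:

\begin{verbatim}
import numpy as np

def workload_error(W, A, l1=False):
    """||A|| * sqrt(trace(W^T W (A^T A)^{-1})) with the sensitivity
    measured as the largest column L1 or L2 norm of A."""
    sens = np.linalg.norm(A, ord=1 if l1 else 2, axis=0).max()
    tr = np.trace(W.T @ W @ np.linalg.inv(A.T @ A))
    return sens * np.sqrt(tr)
\end{verbatim}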
\section{Experimental Evaluation}
\label{sec:exp}
\subfigcapskip=-7pt

The empirical evaluation of our mechanism has three objectives: $(i.)$ to measure the solution quality of the Eigen-Design algorithm using both absolute and relative error; $(ii.)$ to measure the trade-off between speed-up and solution quality for our two performance optimizations; and $(iii.)$ to measure the effectiveness of using the eigen-queries as the design set. Experimental conclusions are presented in Sec.~\ref{sec:sub:con}.

\begin{figure*}[th]
\centering
\subfigure[\small Absolute errors on range queries]{
\includegraphics[height=100pt]{combrange.pdf}
\label{fig:range}}
\subfigure[\small Relative errors on range queries]{
\includegraphics[height=100pt]{realrange.pdf}
\label{fig:realrange}}\\
\vspace*{3pt}
\subfigure[\small Absolute errors on marginal queries]{
\includegraphics[height=96pt]{combmarginal.pdf}
\label{fig:marginal} }
\subfigure[\small Relative errors on marginal queries]{
\includegraphics[height=100pt]{realmarginal.pdf}
\label{fig:realmarginal}}
\vspace*{-4pt}
\caption{Absolute and relative error for the Eigen-Design algorithm and competitors, for range and marginal workloads, on 2048 cells. {\sf ``Lower Bound''} is a bound on the best possible error achievable by any strategy.}\label{fig:regular}
\vspace*{-8pt}
\end{figure*}

\subsubsection*{Experimental Setup}

Recall that workload error is an absolute error measure based on root mean squared error. Workload error can be computed analytically using Prop.~\ref{prop:totalerror}, and this is precisely the error that will be witnessed when running repeated trials and computing the mean deviation. Further, workload error is independent of the true counts in the data vector $\x$; that is, it is independent of the input data. These facts hold for all instances of the matrix mechanism, and therefore for each of the competing techniques we consider below. Therefore, when evaluating this absolute error measure, we do not perform repeated trials with samples of random noise, nor do we use any datasets. In addition, all measures of workload error include the same factor $P(\epsilon, \delta)$, so that changing the privacy parameters impacts each method with the same factor, leaving the ratio of their errors unchanged. Consequently, for workload error, we simply fix $\epsilon=0.5$ and $\delta=0.0001$. For workload error, all error measurements are purely a function of the workload, reflecting the hardness of simultaneously answering a set of queries under differential privacy. In addition, these error rates can be compared directly with the lower bound of Theorem~\ref{thm:svdb}, reflecting a bound on the approximation rate. (This lower bound is not known to be achievable for all workloads, but it nevertheless informs the quality of the eigen-strategy and its competitors.)

We also evaluate the relative error rates achievable using our algorithm by computing the strategy that minimizes absolute error on a scaled workload, as described in Sec.~\ref{sec:alg:analysis}. Of course, the relative error rates reported in experiments are always for the original input workload. In these experiments we vary the value of $\epsilon$, for a fixed $\delta=0.0001$, and consider two real datasets. The first dataset is the US individual census data of the past five years\footnote{Integrated Public Use Microdata Series: usa.ipums.org}, aggregated on age, occupation, and income. The second is the Adult dataset\footnote{UCI Machine Learning Repository: archive.ics.uci.edu/ml/}, in which tuples are weight-aggregated on age, work, education, and income.
The size and dimensions of the datasets are:
\vspace{-1ex}
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|}
\hline
Dataset & Dimension & \# Tuples \\ \hline
US Census & $8\times 16\times 16$ & 15M \\ \hline
Adult & $8\times 8\times 16\times 2$ & 33K \\ \hline
\end{tabular}
\vspace*{-1ex}
\caption{The size and dimensions of the datasets}
\end{table}
\vspace*{-1ex}

All experiments are executed on a quad-core $3.16$GHz Intel CPU with $8$ GB memory. Our Python implementation extends publicly-available code for the matrix mechanism \cite{mmech-code} and also uses the {\tt dsdp} solver \cite{dsdp5} in the {\tt cvxopt} \cite{cvxopt} package.

\subsubsection*{Competing Approaches}

We compare the Eigen-Design strategy with the following four alternatives. Although originally proposed in the context of $\epsilon$-differential privacy, each is easily adapted to $(\epsilon, \delta)$-differential privacy, and the shift generally improves the relationship to the optimal error rate (with the exception of the Fourier strategy, noted below).

\begin{description}
\itemsep 0in
\item[]\textbf{Fourier} is designed for workloads consisting of all $k$-way marginals, for given $k$~\cite{barak2007privacy}. The strategy transforms the cell counts with the Fourier transformation and computes the marginals from the Fourier parameters. When the workload is not full rank, the unnecessary queries of the Fourier basis are removed from the strategy to reduce sensitivity. The effectiveness of the Fourier strategy is somewhat reduced under $(\epsilon, \delta)$-differential privacy because dropping unnecessary queries results in a smaller sensitivity reduction under $L_2$.
\item[]\textbf{DataCube} is an adaptive method that supports marginal workloads~\cite{Ding:2011fk}. We implemented the BMAX algorithm, which chooses a subset of input marginals so as to minimize the maximum error when answering the input workload. To adapt the algorithm to $(\epsilon, \delta)$-differential privacy, sensitivity is measured under $L_2$ instead of $L_1$.
\item[]\textbf{Wavelet} supports multi-dimensional range workloads by applying the Haar wavelet transformation to each dimension~\cite{xiao2010differential}. When using $\epsilon$-differential privacy, Xiao et al.\ also introduced a hybrid algorithm that uses the identity strategy on dimensions of small size. This optimization is unnecessary under $(\epsilon,\delta)$-differential privacy: the hybrid algorithm does not lead to smaller error when sensitivity is measured under $L_2$.
\item[]\textbf{Hierarchical} aims to answer workloads of range queries using a binary tree structure of queries: the first query is the sum of all cells, and the rest of the queries recursively divide the first query into parts~\cite{Hay:2010Boosting-the-Accuracy}. We test binary hierarchical strategies (although higher orders are possible). The strategy in \cite{Hay:2010Boosting-the-Accuracy} supports one-dimensional range workloads, but is adapted to multiple dimensions in a manner analogous to Wavelet \cite{xiao2010differential}.
\end{description}

We do not compare with the error of the standard Gaussian mechanism, which, for the workloads considered, is far worse than all alternatives. Prior works \cite{Hay:2010Boosting-the-Accuracy,xiao2010differential,Ding:2011fk} compared the error rates of their approaches with the identity strategy. We omit this explicit comparison, since the identity is always within the space of possible strategies the Eigen-Design could choose, but is not competitive.
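For concreteness, a minimal sketch of the one-dimensional binary hierarchical strategy just described ($n$ assumed to be a power of two; the published variants differ in details such as branching factor and weighting):

\begin{verbatim}
import numpy as np

def hierarchical_strategy(n):
    """One row per node of a binary tree over n cells: the root sums
    all cells, and each node's interval is halved recursively."""
    rows, queue = [], [(0, n)]
    while queue:
        lo, hi = queue.pop(0)
        row = np.zeros(n)
        row[lo:hi] = 1.0
        rows.append(row)
        if hi - lo > 1:
            mid = (lo + hi) // 2
            queue += [(lo, mid), (mid, hi)]
    return np.array(rows)   # (2n - 1) x n strategy matrix
\end{verbatim}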
\subsection{Error of the Eigen-Design Algorithm}

We now measure the improvement in absolute and relative error offered by the Eigen-Design algorithm, along with its approximation to optimal absolute error. Below we refer to the strategy produced by the Eigen-Design algorithm, for a given workload, as the {\em eigen-strategy}. We consider three classes of workloads, beginning with workloads of range queries, then workloads of marginals, and then some alternative workloads designed to test the adaptivity of the mechanism.
\paragraph*{Workloads of Range Queries}

\textbf{Figs.~\ref{fig:regular}(a),(b)} contain experiments on workloads of all range queries and random range queries. The random ranges are sampled with the two-step sampling method of \cite{xiao2010differential}. Here the eigen-strategies are compared with the Hierarchical and Wavelet strategies. The figures are in log scale, except Fig.~\ref{fig:regular}(a) on all range queries. The results show that the eigen-design strategies reduce error by a factor of 1.2 to 2.1 in workload error, and by a factor of 1.3 to 1.5 in relative error, compared to the best competing strategies. In addition, for workload error, the eigen-design strategy is within a factor of 1.3 of the lower bound.

\paragraph*{Workloads of Marginals}

\textbf{Figs.~\ref{fig:regular}(c),(d)} contain experiments on workloads of 2-way marginal queries and random marginal queries, in which the random marginals are sampled with the sampling method of \cite{Ding:2011fk}. Here the eigen-strategies are compared with Fourier and DataCube. The figures are in linear scale for workload error and log scale for relative error. The results show that the eigen-design strategies reduce error by a factor of 1.3 to 2.2 in workload error, and by a factor of 1.1 to 2.7 in relative error, compared to the best competing strategies. In addition, the error of the eigen-design strategies matches the lower bound on workload error, indicating that our algorithm found an optimal strategy with respect to workload error.

\begin{table}[h]
\centering
\vspace*{-5pt}
\small
\begin{tabular}{|p{40pt}||p{33pt}|c|p{17pt}|c|}
\hline
\multirow{2}{46pt}{Workload}& \multicolumn{3}{c|}{Error Ratio}& \multirow{2}{35pt}{Best/Worst Competitor}\\
\cline{2-4}
& \hspace*{-1pt}Err Type & Best/Worst & \hspace*{-3.5pt}Bound & \\
\hline
\multirow{2}{46pt}{\hspace*{-3pt}1D Range\newline\hspace*{-3pt}(Permuted)}& workload & 9.62/13.16 &0.99 &Wav./Hier.\\
\cline{2-5}
& relative & 1.51/2.43 &- & Wav./Hier.\\
\hline
\multirow{2}{47pt}{\hspace*{-3pt}1Way Range\newline\hspace*{-3pt}Marginal}& workload & 1.30/7.69 & 0.98 & D.Cube/Four.\\
\cline{2-5}
& relative & 1.36/4.93 &- & D.Cube/Four.\\
\hline
\multirow{2}{47pt}{\hspace*{-3pt}2Way Range\newline\hspace*{-3pt}Marginal}& workload & 1.63/3.23 & 0.95 & Hier./Four.\\
\cline{2-5}
& relative & 1.81/2.38 &- & Wav./D.Cube \\
\hline
\multirow{2}{46pt}{\hspace*{-3pt}1D CDF}& workload & 1.01/1.01 &0.80 & Wav./Hier.\\
\cline{2-5}
& relative & 0.46/0.54 &- & Wav./Hier.\\
\hline
\multirow{2}{46pt}{\hspace*{-3pt}Predicate}& workload & 1.39/1.94 &1.00 & Wav./Four. \\
\cline{2-5}
& relative & 1.42/3.55 &- & Four./Hier.\\
\hline
\end{tabular}
\vspace*{-8pt}
\caption{The factor of error reduction for the Eigen-Design algorithm w.r.t.\ the best/worst competitor strategies, and the ratio to the theoretical bound, for alternative workloads on $2048$ cells.}\label{fig:mix}
\vspace*{-12pt}
\end{table}

\begin{figure*}[t!]
\subfigure[\small Approximation approaches over all 1D ranges]{
\includegraphics[height=90pt]{approx_range.pdf} } \quad
\subfigure[\small Approximation approaches over all 2D marginals]{
\includegraphics[height=90pt]{approx_marginal.pdf} }
\vspace*{-11pt}
\caption{Quality and efficiency of approximation methods on 8192 cell conditions}\label{fig:effvsacc}
\vspace*{-12pt}
\end{figure*}

\paragraph*{Alternative Workloads}

To demonstrate that our mechanism is adaptive over a variety of workloads, we also include other workloads that have not been studied in prior work. First we show that our mechanism adapts to semantically equivalent workloads: we repeat the experiment on the range workload but randomly permute the order of the cell conditions. The justification for this experiment comes from the fact that the user may wish to answer queries in which the order of the cell conditions is not obvious, such as predicate queries over categorical attributes. In addition, we run experiments on three other workloads: the range marginals workload, the cumulative distribution function (CDF) workload, and uniformly sampled predicate queries. The range marginals workload is important because most data analyses using marginals do not simply use individual counts, but also aggregate counts; if this is the case, simply computing the marginals workload privately is the wrong approach, because error accumulates under aggregation. Last, the CDF workload is a highly-skewed set of one-dimensional range queries in which the sensitivity of the first cell is $n$, decreasing linearly to 1 for the last cell.

We summarize the experimental results on alternative workloads in \textbf{Table~\ref{fig:mix}}. For relative errors, due to space constraints, we only present results on US census data with $\epsilon=0.5$ and $\delta=0.0001$. We present, for each workload, the factor of error reduction achieved by our algorithm compared to the best and worst competing approaches, whose names are shown in the last column of the table. (DataCube is only considered for range marginals, and Fourier is not considered on permuted range and CDF.) In addition, for workload error, we also include the ratio to the error lower bound.
The results show that the eigen-strategy can improve absolute error by as much as 13 times (on permuted range queries) and relative error by as much as 5 times (on one-way range marginals). The workload error of competing strategies is heavily impacted by the permutation, but their relative errors are not as badly affected, since queries of individual cells and small ranges dominate the workload and do not change much under permutation. On all workloads but one, the eigen-strategy beats every competitor by at least a factor of 1.3, and is very close to---or achieves---the theoretical error lower bound. The only exception is the CDF workload, on which the eigen-strategy is only slightly better than the competitors for workload error, and worse (than Hierarchical and Wavelet) for relative error. Overall, the results for workload and relative error are largely similar for range marginals and the predicate workload.

\subsection{Performance Optimizations}
\label{sec:exp:tradeoff}

\textbf{Fig.~\ref{fig:effvsacc}} illustrates the trade-off between computational speed-up and solution quality for the {\em eigen-separation} and {\em principal vector} performance optimizations described in Sec.~\ref{sec:eff}. We only present results for workload error here (the results for relative error are similar or even better). Error and computation time are plotted together using two y-axes: the left axis measures workload error and the right axis measures execution time in seconds. The baselines for error are the lower bound and the best competing technique. The running time of the standard Eigen-Design algorithm can be estimated from that of the principal vector method: the full algorithm is more than an order of magnitude slower than the principal vector method using $25\%$ of the eigenvectors. Compared with this estimated time, both methods reduce the running time by two orders of magnitude while the error they introduce is less than $12\%$ above the lower bound.

For the eigen-separation method, the computation within each group takes more time with larger group sizes, while the computation of merging groups takes more time with smaller group sizes. Theoretically, the best choice of group size for the eigen-separation method is $n^{1/3}$, which is closest to $16$ in this case. Using eigen-query separation with a group size of 16, the error is $5\%$ higher on all range queries and $11\%$ higher on all marginal queries. Using the principal vectors optimization with $6\%$ of the eigenvectors, the error is $10\%$ higher on all range queries and the same as the optimal on all marginal queries. According to these results, eigen-separation performs better on range queries, while the principal vectors method is better on marginals. In either case, the performance optimizations still produce results that are significantly better than competing techniques.
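If one assumes that the principal vector method simply restricts the design set to the eigen-queries with the largest eigenvalues (the details of the method actually used are in Sec.~\ref{sec:eff}), it reduces to a sketch like the following:

\begin{verbatim}
import numpy as np

def principal_design_set(W, frac=0.06):
    """Keep the top `frac` fraction of eigen-queries, ordered by
    decreasing eigenvalue of W^T W (assumed behavior)."""
    sigma, V = np.linalg.eigh(W.T @ W)   # eigenvalues ascending
    k = max(1, int(np.ceil(frac * len(sigma))))
    return V[:, np.argsort(sigma)[::-1][:k]].T   # k design queries (rows)
\end{verbatim}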
\subsection{The Choice of Design Queries}

To evaluate our claim from Sec.~\ref{sec:alg:choose} that the eigen-queries are an effective choice for the design queries, we compare strategies computed by Program~\ref{prog:expdesign} using the eigen-queries, the Wavelet matrix, and the Fourier matrix as the design queries. Since using the eigen-queries introduces the same error for semantically equivalent workloads, we also empirically test this property for the other sets of design queries. \textbf{Fig.~\ref{fig:basis}} shows the results of these comparisons over two structured workloads considered above, as well as the same workloads with the order of the cell conditions permuted.

\begin{figure}[ht]
\centering
\includegraphics[height=80pt]{basis.pdf}
\vspace*{-10pt}
\caption{Comparison of design queries}
\label{fig:basis}
\end{figure}

The results show that using the Fourier or the Wavelet strategy as the set of design queries introduces $20\%$ more error over all one-dimensional range queries and achieves the same error on two-way marginals. However, these design queries cannot maintain their performance for workloads represented under a permutation of the cell conditions: they are worse than the eigen-queries by more than 4 times over the permuted one-dimensional range queries.

\subsection{Experimental Conclusions}
\label{sec:sub:con}

The experimental results show that, for the workloads specifically targeted by competing techniques, those techniques achieve error that is not too far from optimal (usually a factor of about 1.2 to 3.4 times the lower bound on error). But for broader classes of workloads, or ad hoc subsets of structured workloads, existing techniques are limited, and the adaptivity of the Eigen-Design algorithm can improve relative or absolute error by a larger factor. We have confirmed the versatility of our algorithm, as it improves on all competing techniques for virtually every workload considered. The one exception is the highly skewed CDF workload; the lowest-error strategy we are aware of for this workload is produced by our design algorithm, but with an alternative basis.

\section{Introduction}

Differential privacy \cite{Dwork:2006Calibrating-Noise} guarantees that information released about participants in a data set will be virtually indistinguishable whether or not their personal data is included.
There are now many algorithms satisfying differential privacy~\cite{Dwork:2011A-firm-foundation}; however, when adopting differential privacy, users must reason carefully about alternative mechanisms and the formulation of their task. Their choices may have a significant impact on the utility of the output, for the same level of privacy. Even using the PINQ framework \cite{mcsherry2009privacy}, designed to aid uninitiated users in writing differentially-private programs, users can be faced with vastly different degrees of accuracy depending on how their task is expressed. Further, there are few results showing that proposed algorithms are optimally accurate---that is, that they introduce the least possible distortion required to satisfy the privacy criterion.\footnote{For a single numerical query, the addition of appropriately-scaled discrete Laplace noise satisfies $\epsilon$-differential privacy and has been proven optimally accurate \cite{ghosh2009universally}. For workloads of multiple queries, optimally accurate mechanisms are not known.} Thus, if the utility they achieve is unacceptable, users often do not know whether better utility is possible with a different algorithm, or whether their utility goals are fundamentally incompatible with differential privacy. In this work, we attempt to relieve the user of some of these difficulties by developing a mechanism that automatically adapts to the set of submitted queries and provides significantly improved utility over competing approaches.

We focus on batch query answering, in which a set of queries is answered at one time, in a single interaction with the private server. We call the set of queries a {\em workload}, which we allow to be any collection of linear counting queries. This general class of queries can be used to express histograms, marginals, data cubes, empirical cumulative distribution functions, common aggregation queries with grouping, and more. One of the motivations for considering batch query-answering of large workloads is to avoid the complications of online mechanisms, in which a user must carefully manage their privacy budget and, in addition, multiple users may be required to share a single privacy budget to avoid a breach of the privacy definition resulting from collusion. It is therefore appealing to structure large workloads that contain the sufficient statistics of a data mining task, or which can simultaneously support the intended tasks of a group of users. In fact, the output of our algorithms can often be treated as a synthetic data set, albeit one which is tailored specifically for accuracy on the queries in the given workload.

The standard approach for answering a workload of queries under $\epsilon$-differential privacy is the Laplace mechanism, which adds to each query a sample chosen independently at random from a Laplace distribution. The noise distribution is scaled to the sensitivity of the workload: the maximum possible change to the query answers induced by the addition or removal of one tuple. Large workloads often have high sensitivity, in which case the Laplace mechanism results in extremely noisy query answers because the noise added to {\em each} query in the workload is proportional to the sensitivity of the workload. Recently, a number of related approaches have been proposed which improve on the Laplace mechanism, sometimes allowing for low error where only unacceptably high error was possible before.
They each embody a basic (but perhaps counter-intuitive) principle: better results are possible when you {\em don't ask for what you want}. The earliest example of this approach focuses on workloads consisting of sets of $k$-way marginals, for which Barak et al.\ answer a set of Fourier basis queries using the Laplace mechanism and then derive the desired marginals~\cite{barak2007privacy}. For workloads consisting of all range-count queries over an ordered domain, two approaches have been proposed: Xiao et al.\ \cite{xiao2010differential} first answer a set of wavelet basis queries, while Hay et al.\ \cite{Hay:2010Boosting-the-Accuracy} use a hierarchical set of counting queries which recursively decompose the domain. For workloads consisting of sets of marginals, Ding et al.~\cite{Ding:2011fk} recently proposed a method for selecting an alternative set of marginals, from which the desired counts can be derived.

These techniques can each be described in the framework of the recently-proposed matrix mechanism \cite{Li:2010Optimizing-Linear}. Given a workload of queries, the matrix mechanism uses the Laplace mechanism to answer a set of {\em strategy} queries. The answers to the strategy queries are then used to derive answers to the workload queries by finding a solution that minimizes squared error. (The derivation by least squares is implicit in Barak \cite{barak2007privacy} and Xiao \cite{xiao2010differential}, but explicit in Hay \cite{Hay:2010Boosting-the-Accuracy} and Ding \cite{Ding:2011fk}.) In these terms, the four approaches described above can each be seen as providing a set of strategy queries suitable for a particular kind of workload. Ultimately, the use of the strategy queries and the derivation process result in a more complex, non-independent noise distribution which can reduce error.

The matrix mechanism makes clear that nearly any set of strategy queries can be used in this manner to answer a workload. Effective strategies have lower sensitivity than the workload and are such that the workload queries can be concisely represented in terms of the strategy queries. But the approach remains limited to specific strategies for range queries \cite{Hay:2010Boosting-the-Accuracy,xiao2010differential}, and to approaches which provide only limited choices of strategies for marginals \cite{barak2007privacy,Ding:2011fk}.

We continue this line of work in order to create a truly adaptive mechanism that can answer a wide range of workloads with low error. The key to such a mechanism is {\em strategy selection}: the problem of computing the set of strategy queries that minimizes error for a given workload. Unfortunately, exact solutions to the strategy selection problem are infeasible in practice~\cite{Li:2010Optimizing-Linear}. One of our main contributions is an approximation algorithm capable of efficiently computing a nearly optimal strategy in $O(n^4)$ time (where $n$ is the number of individual counting queries required to express the workload). The result is a mechanism that adapts the noise distribution to the set of queries of interest, relieving the user of the burden of choosing among mechanisms or carefully analyzing their workload.
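To illustrate the overall pipeline (a schematic sketch, not the exact implementation of any of the cited methods), one round of the matrix mechanism can be written as follows, assuming Gaussian noise calibrated to the $L_2$ sensitivity as is appropriate under $(\epsilon,\delta)$-differential privacy; noise_scale stands in for the privacy-dependent calibration constant:

\begin{verbatim}
import numpy as np

def matrix_mechanism(W, A, x, noise_scale, rng=np.random.default_rng()):
    """Answer the strategy queries A x with noise, derive the data
    vector by least squares, then answer the workload W."""
    sens = np.linalg.norm(A, axis=0).max()         # L2 sensitivity of A
    y = A @ x + rng.normal(0.0, noise_scale * sens, size=A.shape[0])
    x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares estimate
    return W @ x_hat
\end{verbatim}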
A few main insights underlie our contributions. First, we shift our focus to $(\epsilon,\delta)$-differential privacy, a modest relaxation of $\epsilon$-differential privacy. The standard mechanism in this case is the Gaussian mechanism, which suffers the same limitations as the Laplace mechanism and is improved by the same approaches described above. The important difference for our results is that sensitivity is measured using the $L_2$ metric (instead of $L_1$), which ultimately allows for better approximate solutions.\footnote{Our algorithm can also be adapted to $\epsilon$-differential privacy, but it is less efficient, appears to be less effective, and is significantly harder to analyze. (Please see Sec.~\ref{sec:sub:l1}.)} Second, inspired by the statistical problem of optimal experimental design~\cite{boyd2004convex,Pukelsheim93Optimal}, we formulate the strategy selection problem as a convex optimization problem which chooses $n$ coefficients to serve as weights for a fixed set of {\em design queries}. Third, we show that the eigenvectors of the workload (when represented in matrix form) capture the essential building blocks required for near-optimal strategies, and are therefore a very effective choice for the design queries underlying the above optimization problem.

Our adaptive mechanism advances the state-of-the-art in terms of accuracy, under both absolute and relative measures of error:

\begin{itemize}
\itemsep 0in
\item[--] For workloads targeted by prior approaches, our algorithm automatically computes strategies with uniformly lower error. For marginals, our error can be reduced by as much as $6.2$ times over Barak and $3.2$ times over Ding. For range queries, our error is reduced by as much as $2.6$ times over Xiao and $2.7$ times over Hay.
\item[--] The power of our adaptive approach is most obvious when applying the mechanism to ad hoc workloads (which may result from specializing a larger workload to a given task, or from combining the workloads of multiple users). Error is reduced by as much as $13$ times over alternative techniques.
\item[--] Our algorithm has a provable approximation ratio and produces strategies with near-optimal absolute error for many workloads of interest. We never witness an approximation ratio greater than $1.3$ times the optimal absolute error. For workloads of marginals, error rates consistently match the optimal achievable error rates.
\end{itemize}

Our mechanism is also significantly more general than prior work. It can be applied to any workload of linear counting queries: a much larger class of queries than marginals or range queries. In addition, the algorithm avoids a subtle limitation of some previous approaches \cite{Hay:2010Boosting-the-Accuracy,xiao2010differential,Ding:2011fk} in which achieving the promised error rates depends on finding a proper representation for the workload.

Throughout the paper, all improvements to accuracy come at {\em absolutely no cost to privacy}: accuracy is improved by constructing a better noise distribution satisfying the same privacy condition. In addition, while strategy selection is the most computationally intensive part of the mechanism, it only needs to be performed once for any workload, and need not be recomputed to re-run the mechanism on a new database instance. Once the selected strategy is preprocessed, the complexity of executing the mechanism is no higher than applying the standard Laplace mechanism to the workload.

The paper is organized as follows. We review definitions and formally describe the matrix mechanism in Sec.
\ref{sec:back}. Our algorithm is presented in Sec.~\ref{sec:alg}, along with a theoretical analysis that establishes the approximation ratio and other properties. In Sec.~\ref{sec:eff} we propose performance optimizations which significantly improve computation time with minimal impact on solution quality. In Sec.~\ref{sec:exp}, we evaluate both absolute and relative error rates of our mechanism on a range of workloads. We discuss related work and conclude in Sec.~\ref{sec:related} and Sec.~\ref{sec:conclusion}.

\section{Related Work}\label{sec:related}

The present work uses the framework of the matrix mechanism to develop an adaptive query-answering algorithm. The original work on the matrix mechanism \cite{Li:2010Optimizing-Linear} described and analyzed in a unified framework two prior techniques specifically tailored to range queries: the first used a wavelet transformation \cite{xiao2010differential}; the second used a hierarchical set of queries followed by inference \cite{Hay:2010Boosting-the-Accuracy}. Originally, the matrix mechanism focused mainly on $\epsilon$-differential privacy, although $(\epsilon,\delta)$-differential privacy was also considered briefly. Prior work on the matrix mechanism never considered strategies beyond those proposed in the previous literature, or natural candidates like the identity matrix. The convex optimization formulation in prior work only runs on small $n$ ($n<64$) and cannot be used in practice.

Low-order marginals are studied in \cite{barak2007privacy} using the Fourier transformation; that work also considers enforcing integral consistency on the output, an objective we do not consider here. Recently, Ding et al.\ proposed an adaptive algorithm to answer workloads consisting of data cube queries~\cite{Ding:2011fk}, which (described in our terms) considers strategies composed only of individual marginal queries and optimizes the workload error approximately. The algorithm adapts a known approximation algorithm for the subset-sum problem and cannot be applied to general linear queries. Most of these techniques focus on $\epsilon$-differential privacy; however, they are actually more effective under $(\epsilon,\delta)$-differential privacy, so comparisons with our algorithms are meaningful.

The error rates of the matrix mechanism are independent of the database instance. Recently, a number of data-dependent algorithms for answering linear queries under differential privacy have been proposed. Xiao et al.~\cite{xiaodifferentially} propose a method for computing a strategy matrix using KD-trees, and Cormode et al.\ \cite{Cormode11Differentially} propose a related method in which a differentially-private median computation is used to guide hierarchical range queries. While promising, these approaches appear to restrict the strategy to hierarchical structures, which we have shown are suboptimal for many workloads. Dynamic strategy selection can also increase computation cost. These tradeoffs deserve further investigation. Focusing on relative error, Xiao et al.~\cite{Xiao11iReduct:} propose a data-dependent algorithm to minimize relative error with an innovative resampling function. Data-dependent interactive (as opposed to batch) mechanisms have been considered by Roth and Roughgarden~\cite{Roth:2010The-Median-Mechanism:}, who answer predicate queries on databases with 0-1 entries. Hardt et al.~\cite{hardt2010multiplicative} provide a linear-time algorithm for the same query and database setting.
\section{Introduction}

As our ability to measure galaxy evolution has grown, so has the possibility of observing how galaxy clustering has evolved. On various linear scales, this is relevant to the merger rate of both galaxies (and perhaps their smaller progenitors) and groups, and to cosmological parameters. As reviewed recently by Cen (1998), the growth of representative cluster masses depends strongly on $\Omega_0$. This is due not to a direct relation between $\Omega_0$ and structure growth {\it per se}, but more to the fact that the calculations must be normalized to match the present-epoch mass spectrum, which introduces a coupling between the amplitude $\sigma_8$ of the power spectrum and $\Omega_0$ for viable models.

Several studies have indeed shown evidence for cluster-scale structures at redshifts $z > 2$. Our {\it HST} imaging (Pascarelle et al. 1996b, hereafter P96b; Pascarelle et al. 1998, hereafter P98), using a combination of broadband and medium-band filters to isolate Lyman $\alpha$ emission in the relevant redshift range, showed that the $z=2.4$ radio galaxy 53W002 is part of a rich assemblage of Lyman $\alpha$ emitters. Most of these are compact (effective radius $r_e \approx 0.1$" or 0.8 kpc), and are powered by star formation rather than by classical active nuclei. In P98, we showed that the surface density of such objects varies between different random lines of sight by approximately a factor of 4, with the 53W002 field being the richest we have observed thus far. A somewhat different grouping at similar redshift ($z=2.38$) was identified by Francis et al. (1996, 1997), who found four Lyman $\alpha$ emitters very close to the redshifts of Lyman $\alpha$ absorbers seen against two background QSOs. These emitters are seen over a projected span of 0.63 Mpc, and are much redder than the objects found by P96b. In an analogous way, Malkan et al. (1995, 1996) used narrow-band near-infrared imagery to find three H$\alpha$--emitting objects at $z=2.50$ in the foreground of the QSO SBS 0953+545 at $z=2.58$, closely matching the redshifts of metal-line absorption systems seen in the QSO spectrum. Starting from a sample of Lyman-break galaxies, Steidel et al. (1998, also Steidel 1999) have found a concentration of galaxies at $z=3.090 \pm 0.015$ spanning about $4 \times 8$ Mpc. And at even higher redshift, Hu \& McMahon (1996) report spectroscopically-confirmed Lyman $\alpha$ companions to the $z=4.55$ QSO BR2237--0607.

These results show that it is now possible to trace developing clusters, and other large-scale structure, at high redshift. The new generation of wide-field imagers has enabled survey strategies that can tell how common, how extensive, and of what amplitude such structures are in the galaxy distribution at various redshifts. As a first step in this direction, we present here a Lyman $\alpha$ survey of large fields around the regions we have searched with {\it HST}, to place the object counts from those fields in a larger context, and in particular to probe the spatial extent and bright end of the luminosity function of the cluster which includes 53W002. In evaluating size and luminosity, we use $\rm{H_0}=80$ km s$^{-1}$ Mpc$^{-1}$, $q_0=1/2$, which gives an angular scale of 128" per Mpc. Scaling for other values, linear sizes scale directly with $\rm{H_0}$ and luminosities as $\rm{H_0}^2$. For other values of $q_0$, as a shortcut, we note that linear sizes (luminosities) quoted here would be multiplied by 2.9 (8.2) for $q_0=0.1$ and by 0.49 (0.24) for $q_0=1$.
\section{Observations}

We observed several fields around the radio galaxy 53W002 at $z=2.39$ (Windhorst et al. 1991), which had been shown from imaging in redshifted Lyman $\alpha$ to be part of a structure containing additional AGN and star-forming galaxies (P96ab, P98). This field therefore offered a unique opportunity to probe a known structure at significant redshift. We observed two further WFPC2 fields using the same filter set, as part of a parallel survey for additional objects in the window around $z=2.4$ (P98). For comparison, we also observed a large region adjacent to the 53W002 area using a filter tuned for Lyman $\alpha$ emission at $z \sim 2.55$.

For the current wide-field extension of the {\it HST} medium-band survey, we used the PFCCD imager, with a $2048^2$ Tektronix CCD, for observing runs in the 1997 and 1998 summer seasons on the 4m Mayall telescope of Kitt Peak National Observatory. At the time, this system had significantly better throughput at 4100--4300 \AA\ than the wider-field Mosaic system. Each exposure covered a region 14.3\arcmin\ on a side with 0.420\arcsec\ pixels. We isolated Lyman $\alpha$ in the redshift ranges $z=2.32-2.45$ and $z=2.49-2.61$ with intermediate-band filters, the first of which was intended as a clone of the WFPC2 F410M filter, manufactured by Custom Scientific, Inc., to the same specifications as the HST filter set. We refer to these as F413M and F433M to avoid confusion with the WFPC2 F410M filter; WFPC2 has no close counterpart to F433M. These filters have FWHM=150 \AA\ and peak transmission at 4150 and 4330 \AA\ respectively, as measured in a parallel beam; the peak transmission moves blueward by $\sim 12$ \AA\ and the FWHM increases by $\sim 19$ \AA\ in the $f/2.7$ prime-focus beam of the Mayall telescope (Marcus 1998). In addition to the medium-band Lyman $\alpha$ filters, we also observed each field in $B$, for continuum magnitudes, and in $V$ to account for color terms in the continuum subtraction (as in P96b). As it happened, we were able to observe three contiguous fields just to the northeast of 53W002 in F433M. No Lyman $\alpha$ emission candidates in the overlapping region were common to both filters, except for the brighter of the new QSOs discussed later, whose Lyman $\alpha$ emission is so strong that it was detected even in the extreme wings of the redder filter's passband.

Total exposure times for each region and filter are listed in Table 1, with the area of full exposure extending 420\arcsec\ in each coordinate from the listed position. Individual exposures were 30 minutes for the medium passbands and 10--20 minutes in the broad bands, with dither motions of 20--30 arcseconds between successive exposures to suppress residual flat-field and cosmetic effects in the stacked combination images. The various field pointings were sometimes shifted from an exact rectangular pattern to avoid stray light from bright stars within a region extending about 5\arcmin\ outward from the CCD edge. The image stacks show image FWHM in the range 1.2--1.6\arcsec. The number of objects detected in each field depends on both seeing and total exposure times, and the number shown in Table 1 reflects detections in both $B$ and the medium-band filters. The $B$ limiting magnitude is given for each field using a $3 \sigma$ threshold. The broadband data were converted to standard Johnson $BV$ magnitudes via secondary standard stars in M92, NGC 7006, and NGC 4147 (Christian et al. 1985, Odewahn et al. 1992).
The photometric zero points were consistent to 0.02 magnitude or better from night to night. Both $B$ and $V$ magnitudes show color terms at the 0.03-magnitude level per unit change in $(B-V)$. A more important issue is that of the color correction in continuum subtraction, as outlined below.

The 1997 data suffered from an additive ghost image of the telescope pupil occupying much of the field, produced by internal reflections in the optical corrector. This ghost image was not present in the 1998 data, since the dewar had been offset from the optical axis to avoid the problem. As an additive artifact, the ghost image could be isolated by comparing medium- and broad-band sky flats, then removed by subtracting scaled versions to eliminate the ghosting in our stacked images as completely as possible. Many of the images suffered from spatially variable background structure in the ``blank sky" regions due to scattered starlight from stars both within and outside the field of view, sometimes modulated by passing cirrus clouds, which we subtracted using a $101 \times 101$-pixel (42") median filter, clipped around the brightest galaxies which would otherwise be partially subtracted. This allowed higher quality in the final average images, since pixels would not be artificially flagged for rejection because of a temporarily high background. This procedure leaves spurious residual dark halos around bright stars, but since these are additive, local background subtraction will still give accurate photometry quite close to such stars.

We identified emission- and absorption-line candidates starting with object lists and photometry generated using version 1.0a of SExtractor (Bertin \& Arnouts 1996), using visual inspection to reject putative detections which were compromised by bright stars or artifacts near the edges of individual exposures comprising the stacked mosaics. The detection parameters were: object detection threshold 2.5$\sigma$ above background over 5 contiguous pixels, and a deblending parameter of 0.005 (which turned out to be essentially irrelevant at this level of crowding). Table 1 includes the number of objects in each field appearing in the matched $B,V, m_{413}/m_{430}$ catalogs. Detections in all three bands were required in order to deal with color terms in the continuum-to-line comparison. The relative exposure depths suggest that we should not be missing comparable objects due to color effects (though the possibility of extreme colors, such as very red objects, still exists). This multiband matching requirement means that the listed detection totals do not simply reflect the relative exposure times. Coordinates were measured by fitting a celestial coordinate system to stars from the HST Guide-Star Catalog (GSC) on each frame; the formal accuracy is 0.25" rms, borne out by recovering positions of individual GSC stars.

The threshold for emission-line detection is not completely straightforward, since each object's detectability depends on both the line flux and its equivalent width. Our primary criterion was equivalent width, incorporating individual error estimates; a secondary list used the formal significance of line emission as the basis for selection. Since the F413M medium-band filter sits on the blue edge of the $B$ passband, there is a color term accounting for the continuum slope between $B$ and 4130 \AA\ (a similar but smaller term exists for the F433M filter).
We follow W91 and P96b in using the traced filter properties to compute the locus of featureless power-law spectra (a reasonable approximation for galaxies in the emitted ultraviolet) in the $(m_{413}-B)-(B-V)$ plane, as shown in Fig. 1 for the 53W002 field. This locus is well approximated by the line $$ (F413-B) = 0.32 (B-V) - 0.08, $$ which is a good fit to the observed distribution of ``field" objects in our data (the numerical constants become 0.10 for the slope and 0.0 for the intercept in the case of the F433M filter). Our primary sample of emission-line candidates consists of objects which fall more than $4 \sigma$ below this relation, where $\sigma$ applies to the scatter of points on the emission side of the distribution's ridge line (Fig. 1), which puts our threshold at 0.6 magnitude in F413M/F433M excess (observed equivalent width about 110 \AA\ , corresponding to an emitted equivalent width 30-32 \AA\ at $z=2.4-2.6$). These candidates are listed in Tables 2 and 3 for the two filters, and enlargements of the intermediate-band and $B$ images are shown in Figs. 2 and 3. Here, the tabulated $(m_{413}-B)$ and $(m_{430}-B)$ have been corrected for first-order color terms as described below; negative values indicate an excess in the narrower passband. The listed equivalent widths are in the observed frame; the emitted value will be smaller by $ (1+z) \approx 3.4-3.6$. The Lyman $\alpha$ EW and flux for the three objects previously reported by P96b -- their object numbers 18 and 19 plus 53W002 itself -- are somewhat uncertain because each has a resolved Lyman $\alpha$ emission region (see section 6 below), which produces somewhat different values depending on how the flux is extracted. There is evidence that object 19 is itself variable as well (P96a). The reliability of this sample is supported by the fact that all the members observed spectroscopically (in the 53W002 field) are indeed active nuclei at $z=2.39$. To assess whether there is an additional population of detections with lower equivalent width but comparable statistical reliability, we also considered object selection by significance in the deep 53W002 F413M data, defined as $$ S = \left[ (F413-B) - 0.32\,(B-V) + 0.08 \right] / \sigma $$ (where $\sigma$ here is the statistical error in the F413-B color) with the additional requirements of $B > 23.5$ and computed equivalent width $\ge$ 90 \AA\ to avoid spurious detections of bright objects where the formal errors are much smaller than the scatter introduced by spectral features in stars and lower-redshift galaxies. All of the detections in the primary list have significance $> 5 \sigma$ by this criterion. Within the range of $B$ and $S$ that contains all the primary detections, the 53W002 field includes an additional five candidates, thus potentially augmenting the total number by about one third; these additional candidates occur all over the field, unlike the clustered equivalent-width candidates. These are listed at the bottom of Table 2, but since this technique doesn't generate any additional objects with line flux significantly above the threshold of the original list (even while relaxing the possible error bounds), we concentrate on the equivalent-width defined list. This list includes some objects with line fluxes as low as $3.7 \times 10^{-17}$ erg cm$^{-2}$ s$^{-1}$, but a characteristic flux limit for approximate comparison with other results would be close to $5 \times 10^{-17}$ for the 53W002 field in the F413M filter.
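For concreteness, the selection just described can be transcribed schematically into a few lines of code; the input arrays (matched-catalog magnitudes and scatter estimates) are hypothetical placeholders, and strongly negative values of the excess or of $S$ indicate a medium-band (line-emission) excess:

\begin{verbatim}
import numpy as np

def emission_candidates(m413, B, V, sigma_ridge):
    """Flag objects falling >4 sigma below the power-law locus."""
    # Color-corrected medium-band excess relative to the ridge line
    # (m413 - B) = 0.32 (B - V) - 0.08; negative excess means extra
    # flux in the medium band, i.e. candidate line emission.
    excess = (m413 - B) - (0.32 * (B - V) - 0.08)
    return excess < -4.0 * sigma_ridge

def significance(m413, B, V, sigma_color):
    """Formal significance of line emission for the secondary list."""
    # sigma_color is the per-object statistical error in (m413 - B).
    return ((m413 - B) - 0.32 * (B - V) + 0.08) / sigma_color
\end{verbatim}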
The corresponding flux limits for the other fields are $1.4 \times 10^{-16}$ for HU Aqr and $1.0 \times 10^{-16}$ for NGC 6251 and the three fields observed with the F430M filter. Contamination of the Lyman $\alpha$ sample by objects with [O II] $\lambda 3727$ emission at $z \simeq 0.11$ (F413M) or $z=0.15$ (F433M) should not be important, for the following reasons. The Keck spectroscopy of several of our candidates, plus objects from the {\it HST} lists, by Armus et al. (1999), shows that all the emission-line objects either have multiple lines at $z \sim 2.4$ or a single line with equivalent width plus continuum shape inconsistent with [O II] as judged from nearby objects. Finally, these objects are all smaller than 1\arcsec\ in effective radius (that is, the blue continuum is either unresolved or almost so from the ground) and fainter than $B=23$, which would translate simultaneously into linear extent of less than 4 kpc and absolute magnitude fainter than $M_B=-15$ for objects at redshift low enough to have [O II] emission in our passbands. These data are not deep enough to recover the star-forming objects seen in Lyman $\alpha$ emission in the WFPC2 data of P96b and P98, even in the deepest Kitt Peak F413M exposure on the 53W002 field. The brightest such candidate in the HU Aqr WFPC2 field from P98 also falls slightly below our equivalent-width limit. These known $z=2.4$ objects have emission-line intensities which would correspond to Kitt Peak detection levels of typically $0.5 \sigma$, as confirmed by comparison with our ground-based detections. Therefore we are tracing structures using objects which are, as far as we can tell from the spectroscopically identified subset, fairly luminous active nuclei. For 53W002 and its two immediate neighbors, {\it HST\/} imagery shows these to be accompanied by dimmer star-forming objects. Whether these are smoothly distributed throughout the clumping or form smaller structures in the regions traced by AGN is an important question for further work. The two strongest-emission new candidates in the 53W002 field, bright enough to appear on blink inspection of the data during the observing run, were observed spectroscopically using the Ritchey-Chretien spectrograph and T2KB CCD at the Mayall telescope a week after the 1997 imaging observations. Grating KPC-10A (316 lines/mm, first-order blaze at 4000 \AA\ ) gave 2.77 \AA\ pixels with usable sensitivity over the 3700--8000 \AA\ region, and resolution typically 2.1 pixels = 5.8 \AA\ FWHM. Each object was acquired by offset from bright stars, as measured on the CCD frames. Mediocre seeing mandated a relatively wide 1.7\arcsec\ slit opening, and even so the seeing was variable enough that only single 60-minute exposures of high quality were obtained for each object. While conditions were not photometric, the data could be placed on a relative flux scale using observations of the hot standard star PG 1708+602 (and the objects' broadband magnitudes are accurately known from the multiband imagery). Both new candidates observed were found to be QSOs, as shown in Fig. 4. Redshifts were measured by taking centroids of the bright emission lines, by Gaussian fits to the profile peaks, and by cross-correlation with the mean QSO spectrum assembled by Francis et al. (1991), adopting a mean of these redshift measures and using the differences as a measure of the error. Their spectroscopic properties are given in Table 4, with redshifts measured from Lyman $\alpha$ and C IV individually and from cross-correlation.
Both have redshifts within $\Delta z = 0.005$ of the other objects in the 53W002 grouping (P96ab, Armus et al. 1999). We also include emitted-frame Lyman $\alpha$ widths, to make the point that these are fairly narrow-lined QSOs. As this paper was in revision, we received word that Pascarelle, Yahil, \& Puetter (1999) have confirmed candidate 6 from Table 2 with a redshift close to $z=2.38$ based on Keck spectroscopy, giving a total of six confirmations of the imaging candidates. \section {Emission-Line Detections and Field-to-Field Variations} Our most striking result is seen from Tables 2 and 3, and Fig. 5, where we show the distribution of candidate Lyman $\alpha$ emitters in part of the 53W002 F413M field. There is an extensive grouping of emission-line candidates including 53W002 which has no counterpart in the other fields observed at either $z=2.4$ or $z=2.55$. Fourteen objects in the 53W002 field passed our equivalent-width criterion for line emission, while only one at the edge of the HU Aqr field did, and none in the NGC 6251 field. Among the F413M fields, with differing exposure times, the field-to-field ratios to a highest common limiting line flux would be 4:1:0 (the first value rising to 6 for the intermediate limit appropriate to the empty NGC 6251 field). There are 6 candidates in the $z=2.55$ range sampled by the F433M filter, over almost three times the solid angle of the 53W002 field (and twelve times the solid angle encompassing the candidate emitters in that field). This shows that there are significant structures in place at $z=2.4$, but not necessarily over a large fraction of the sky; a simple estimate based on these data alone is that such assemblages cover less than 0.04 of the sky in a redshift range $\Delta z = 0.15$, with the limit arising from the fact that the 53W002 field was observed precisely because we already knew that some additional objects were present. The amplitude we find from field to field is even greater than that found by P98 from fainter HST detections at the center of each of these fields, which might indicate that luminous AGN are more clumped than fainter objects. This could mean, for example, that the more massive objects (thus more likely to host AGN) start life more strongly biased toward initial mass peaks. We can assess the statistical significance of the grouping around 53W002 in several ways. First, we address the reality of the clumping seen within the 53W002 field at $z=2.4$. Most simply, the probability of $n$ objects falling within a single region covering a fraction $f$ of the solid angle surveyed will be $p = f^{(n-1)}$, where the location of the region is not otherwise specified. In this case, using the circumscribed circle about the 14 candidates for the region size and the area of full exposure in the stacked F413M image as the overall field surveyed, $f=0.38$ and $p= 4 \times 10^{-6}$. A Monte Carlo simulation indicates a somewhat higher (though still small) probability of having 14 points drawn from a uniform random distribution fall within a circle of this size, $p = 0.0014$. Finally, we used the two-dimensional version of the Kolmogorov-Smirnov test as proposed by Peacock (1983) and examined in detail by Fasano \& Franceschini (1987), employing the routines presented by Press et al. (1992). This test gives a significance level of 95\% for the clustering within the 53W002 field.
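As an illustration, the circumscribed-circle Monte Carlo described above can be sketched in a few lines; here the optimal circle center is approximated crudely by the sample centroid, so the resulting probability is indicative rather than exact:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# How often do 14 points, uniform over a unit-area field, all fall
# inside a circle whose area is a fraction f = 0.38 of the field?
n_pts, n_trials, f = 14, 100_000, 0.38
r_circle = np.sqrt(f / np.pi)        # circle radius on a unit-area field
hits = 0
for _ in range(n_trials):
    xy = rng.random((n_pts, 2))
    center = xy.mean(axis=0)         # crude proxy for the best center
    if np.max(np.linalg.norm(xy - center, axis=1)) < r_circle:
        hits += 1
print(hits / n_trials)               # a small probability, of order 1e-3
\end{verbatim}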
The critical values for the two-dimensional KS test are slightly dependent on the distribution of sample points, but these points are not strongly correlated (Pearson $r=0.40$) and the effect would only act at the $\pm 1$\% level in this regime. This most conservative of the tests still shows the grouping with high significance. To test the significance of variations in the number of objects from field to field, we use combinatorics to ask how likely it is that a uniform distribution sampled with our total number of detections (to a common flux limit) would be so strongly weighted toward a specified field (since we already knew from previous data that there is an excess around 53W002). As noted above, to the highest of the three limiting fluxes, there are four objects in the 53W002 field, one in the HU Aqr field, and none in the NGC 6251 parallel field. Of the 207 ways to distribute 5 objects among three bins, 11 are at least this strongly weighted to the specified one, yielding a probability of $11/207=0.053$ of achieving this result by chance. Thus the significance of the higher number of objects in the 53W002 field is 95\%, even without taking into account their concentration within the observed area in this field. We note that there is somewhat weaker evidence for clumping of the detections in the F430M filter ($z \sim 2.55$) in the 53W002 NE field; these objects all have derived line luminosities in the QSO range. \section{Lyman-$\alpha$ absorption candidates} Many of the brightest Lyman-break galaxies observed by Steidel et al. (1996, 1998) show net absorption at Lyman $\alpha$, indeed sometimes with no significant emission. One of the factors contributing to the strength or weakness of the emission may be metallicity, through the enhanced formation of grains which can absorb resonantly scattered Lyman $\alpha$ photons (Bonilha et al. 1979), though observations of Lyman $\alpha$ in starbursts of different metallicity show that there must be more to the story than this single parameter (Giavalisco et al. 1996, Lequeux et al. 1995, Thuan \& Izotov 1997). Using imaging techniques, we are sensitive only to {\it net} emission, while some of the line emission may be cancelled by the line in absorption from stellar atmospheres and H I in the galaxy. Indeed, about 1/3 of the Lyman-break galaxies observed by Steidel et al. (1998) show net absorption at Lyman $\alpha$. Therefore, we consider here the possibility of detecting objects with strong absorption at Lyman $\alpha$. As a guide to the strength of Lyman $\alpha$ absorption expected from star-forming galaxies, we use the HST GHRS spectrum of the bright knot in NGC 4214 obtained by Leitherer et al. (1996). After excising the narrow emission component, this spectrum shows an absorption line of equivalent width $\approx 28$ \AA\ , which would be an observed value of EW=95 \AA\ at $z=2.4$. This is just within our detection threshold for objects to $B=25$ in the 53W002 field, so that we can extract absorption candidates in the same way as the emission candidates. Since even luminous star-forming galaxies are unlikely to exceed QSO luminosities, we restrict the selection to objects in the range $24<B<25$, thereby avoiding much of the potential confusion from foreground stars and galaxies which have an absorption edge between the continuum and narrowband filters, and using the equivalent-width criterion to screen out stars with strong H$\delta$ absorption.
This is a particular issue for white dwarfs, which would also be distinguished by broadband colors much bluer than expected for any high-redshift galaxies. Accordingly, we restrict the candidate absorption objects to the range $0.15 < (B-V) < 1$ and require significance of the absorption to exceed $4 \sigma$. This color range is wider than we observe for the star-forming emitters in the WFPC2 data (P96b). These criteria leave 4 candidates in the 53W002 field (Table 5, Fig. 6). Three of these are in the same spatial region as the emission candidates, but our ability to select these objects in a more precise way is limited by the fact that the scatter in the $(m_{413}-B), (B-V)$ diagram is asymmetric and larger to the absorption side, largely due to the natural signal-to-noise limitations at faint levels, so that there are many more interlopers at a given equivalent width for absorption than for emission. \section{The 53W002 ``Cluster" at $z=2.39$} These results strengthen the evidence for some sort of clustering at early cosmic times. We consider here what kind of assemblage we see in the 53W002 field, and how it might relate to the clustering we see today. This entails measures of its size, population, and dynamical state. \subsection{Cluster ``Size" and Radial Distribution} The virial radius $R_v = \frac{1}{n} \left[ \sum_{j < i} \frac{1}{\vert {\bf r}_i - {\bf r}_j \vert} \right]^{-1}$ of this assemblage of 14 objects is 157" or 1.2 Mpc in proper coordinates, which would correspond to 1.9 Mpc (a factor $\pi/2$ larger) in three dimensions for a typical projection geometry. The radial distribution is so extended that fewer than half the candidates (four) lie within this projected radius of the centroid. For a distribution this sparsely sampled, whose centroid is not well determined by a strong central concentration, it may be more enlightening to consider the fraction of objects encompassed by circumscribed (projected) circles than by such a specific physical measure as the virial radius. All fourteen candidates are contained within a radius of 327", with 2/3 (10) contained within r=218" and half (seven) inside r=164". To examine whether the distribution of these objects looks like contemporary relaxed systems, we consider how well a King-law profile with any core radius can be fitted to our observations. Because the center is ill-defined from such a sparse sample, we use as a statistic for comparison the cumulative number of objects within an encircled radius, whose center can drift to accommodate the maximum number within a given radius. From $10^4$ Monte Carlo samples of 14 objects each drawn from a King profile (in number density), we generated the bounds containing various fractions of the trials and identified these with confidence intervals. Since the $z=2.4$ structure is traced by active nuclei at B$\le$ 24.5 -- which may occur in rather low-luminosity galaxies this close to the peak redshift for QSO number density -- we use a model for number density rather than incorporating some level of mass segregation to represent the luminosity density in a typical rich cluster. Because the core radius is also determined by fitting the radial scale of the distributions, the true significance of each band may be slightly greater. The core radius was left as an adjustable parameter to be determined by the best fit to the observed cumulative distribution.
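A minimal sketch of this Monte Carlo comparison is given below; it uses the simple approximate King surface-density law $\Sigma(r) \propto [1+(r/r_c)^2]^{-1}$ with a tidal cutoff, and for brevity measures radii about the true profile center rather than the drifting center used above:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_king(n, r_c=1.0, r_t=10.0):
    """Draw n projected positions from Sigma(r) ~ 1/(1 + (r/r_c)^2)."""
    # Inverse-transform sampling: N(<r) grows as ln(1 + (r/r_c)^2),
    # truncated at the tidal radius r_t.
    u = rng.random(n)
    r = r_c * np.sqrt(np.expm1(u * np.log1p((r_t / r_c) ** 2)))
    phi = 2.0 * np.pi * rng.random(n)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi)])

# Sorted radii give the cumulative number-radius relation of each trial;
# percentiles over 10^4 trials give confidence bands as in Fig. 7.
trials = np.array([np.sort(np.linalg.norm(sample_king(14), axis=1))
                   for _ in range(10_000)])
lo, hi = np.percentile(trials, [5, 95], axis=0)
\end{verbatim}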
This exercise should tell whether the observed two-dimensional distribution is likely to be drawn from one like the relaxed profiles of nearby (rich) clusters, and if so what its radial scale (the core radius for a King model) is. As shown in Fig. 7, the 53W002 association is less centrally condensed than a King model of any core radius. Specifically, if either the inner or outer four points in the number-radius relation are used to anchor the data to the Monte Carlo predictions, some points fall outside the 90\% band (and if the inner points are fit, outside the 97\% band). The difference is such that the inner points imply a core radius of 76" (0.6 Mpc), while the outer ones imply a core radius of 42" (0.3 Mpc). At this 90-97\% confidence level, we can reject a relaxed King distribution for these objects. If this grouping is not yet relaxed, we are left with an ambiguity in interpreting its linear scale -- would it evolve more nearly in comoving or proper coordinates? If it has yet to turn around from the Hubble expansion, it will grow for some time in proper coordinates but shrink (slightly) in comoving ones until it turns around. On the other hand, it may have already turned around but not yet have virialized, in which case the proper-coordinate linear scale will remain nearly constant. Other reports of high-redshift structures have made differing assumptions on this matter -- note that Steidel et al. (1998) quote a comoving extent for their structure at $z=3.1$, while some other workers use proper length. For the 53W002 structure, some intermediate case would be most appropriate, still allowing a wide range of current length scales for comparison with the present epoch. We can address the correlation function $w(\theta)$ as measured in the 53W002 field from these data. While it is clearly a high-amplitude structure, this measurement might furnish useful information on length scales as well as just how large an amplitude could be reached by $z=2.4$. Edge effects were assessed by Monte Carlo trials, employing the expression from Landy \& Szalay (1993) as used by Neuschaefer \& Windhorst (1995). At $\theta=30$\arcsec\ (200 kpc), $w(\theta)=3.2$, and we see positive correlation out to a radius $r=5.5$\arcmin\ (2.4 Mpc) above a threshold value $w \sim 1$. Following the treatment by Neuschaefer \& Windhorst (1995), and using their sample to $g=25$, this result does indicate that the region around 53W002 (sliced in both angle and redshift) is more strongly clustered than the field by a factor $\sim 3$, since the ``field" objects have an amplitude of only about 0.03 covering a redshift range roughly $z=0.5-2$. \subsection{Galaxy and AGN Content} The objects from our emission candidate list which have been spectroscopically confirmed are all obvious AGN, with a mix of broad- and narrow-lined cases. This is not surprising given the flux constraints on spectroscopy with 4-m class telescopes, but it is already an unusual AGN population for a single group. Their absolute magnitudes are in the range associated with, for example, low-redshift PG quasars ($M_B=-21.4$ to $-22.4$ for our adopted cosmology, following Weedman 1986 in dealing with spectral slope). The brightest objects known from the WFPC2 field {\it not} to be such AGN have $M_B \sim -20.5$, so it remains unclear what the fainter KPNO detections at $B=24-25$ represent. Certainly these would not be unusual luminosities for additional AGN, but this is a regime in which star-forming objects are not unreasonable either.
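For reference, the angular correlation quoted earlier in this section was computed with the Landy \& Szalay (1993) estimator, $w = (DD - 2DR + RR)/RR$ with normalized pair counts; a minimal sketch, with the data and random catalogs as hypothetical inputs sampling the same field geometry, is:

\begin{verbatim}
import numpy as np
from scipy.spatial.distance import pdist

def landy_szalay(data_xy, rand_xy, bins):
    """w(theta) = (DD - 2 DR + RR) / RR with normalized pair counts."""
    dd = np.histogram(pdist(data_xy), bins=bins)[0].astype(float)
    rr = np.histogram(pdist(rand_xy), bins=bins)[0].astype(float)
    dr = np.zeros(len(bins) - 1)
    for p in data_xy:                      # data-random cross pairs
        dr += np.histogram(np.linalg.norm(rand_xy - p, axis=1),
                           bins=bins)[0]
    nd, nr = len(data_xy), len(rand_xy)
    dd /= nd * (nd - 1) / 2.0              # normalize the pair counts
    rr /= nr * (nr - 1) / 2.0
    dr /= float(nd * nr)
    return (dd - 2.0 * dr + rr) / rr       # bins must be populated by RR
\end{verbatim}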
One hint about the nature of these fainter candidates might come from image structure. If AGN are weaker for the fainter objects, they might appear more clearly resolved since the core is less dominant. We compared image FWHM values (from the SExtractor tables and as computed by the IRAF {\it imexamine} task) for the emission candidates with those of bright, unsaturated stellar images nearby in the $B$ frame. Except for the extended structures around 53W002 and object 18, which have substantial line-emission components, all the candidates are unresolved. This fits with the sizes of the objects in the WFPC2 field, whose typical half-light radii are 0.10\arcsec\ , but doesn't furnish any further constraints on whether these new objects are more likely to be AGN or bright star-forming systems. Even for only passive evolution of the star-forming objects, it is significant that we see none brighter than $M_B=-21$, consistent with the {\it HST} results but now covering a much larger region. A typical $L^*$ galaxy would have $M_B=-23$ at $z=2.4$ unless either active evolution continued to substantially lower redshift, or merging of these small objects continued to form today's luminous galaxies. More luminous galaxies could hide by lacking Lyman $\alpha$ emission, perhaps if they are more metal-rich and hence can suppress emission in this line, or if their star formation rate dropped quickly at early times. Near-IR line surveys could test the first possibility. \subsection{Velocity Dispersion} The five spectroscopically confirmed members of the 53W002 grouping have a velocity dispersion $\sigma_z=0.0060$, translating into $\sigma_v=532$ km s$^{-1}$ in the objects' frame. Adding the two additional faint emitters from P98 drops this value to 467 km s$^{-1}$. While one should respect the errors in estimating the velocity dispersion from such small samples, it does seem clear that we are not dealing with the dynamics of a rich virialized cluster with $\sigma_v = 1000$ km s$^{-1}$. Since many of the previously reported {\it HST} objects around 53W002 are apparently star-forming complexes, with very narrow Lyman $\alpha$, there is the possibility of introducing a systematic offset in comparison with redshift measurements of broad-lined AGN using the same line. This is less of an issue with the three narrow-lined (``type 2") AGN previously reported in this region (P96a,b). Comparison of the strong UV lines (Lyman $\alpha$, C IV, C III]) with lower-ionization species or with narrow emitted-optical lines expected to arise far from the core (especially [O II] $\lambda 3727$) has shown that substantial differences in central velocity can exist. The shifts can exceed 1000 km s$^{-1}$ for radio-loud objects, but have a mean close to zero for radio-quiet QSOs (Espey et al. 1989, Marziani et al. 1996). In addition to being radio-quiet (Richards et al. 1999), the two new QSOs have rather narrow lines compared to many of the ones studied for velocity shifts (and compared to the Francis et al. 1991 composite), so the shifts may not be as large. In fact, their close match to the redshifts of other objects in the field would be a remarkable coincidence if systematic shifts of more than a few hundred km s$^{-1}$ are present, but the possibility remains that the actual velocity range of these objects is larger than the value we measure from Lyman $\alpha$ and C IV alone.
Furthermore, since the radial distribution suggests that the structure is not virialized and may still be coupled to the Hubble flow, we consider the limiting case in which the velocity range represents the Hubble flow across the depth of the structure rather than internal motions driven by gravity. For $q_0 = 1/2$ (or its $\Omega+\Lambda$ counterpart), the Hubble parameter $H$ would have been greater at $z=2.4$ than today's $H_0$ by a factor of about 6 (scaling inversely with cosmic time for this cosmology), so that the relevant expansion rate would have been in the range 300--600 km s$^{-1}$ Mpc$^{-1}$ for ${\rm H}_0=$50--100 km s$^{-1}$ Mpc$^{-1}$. For a characteristic line-of-sight depth of 1.5 Mpc, comparable to the observed transverse extent containing most of the members, this implies a ``velocity dispersion" of 450-900 km s$^{-1}$ even for a completely unbound assemblage. Since the positional data show clearly that the grouping has decoupled from the Hubble flow to the extent of showing a density contrast of at least a factor 4, we interpret this comparison as showing that this group is still turning around from the Hubble expansion, so that the velocity data do not necessarily allow us to measure its mass (or anything else about the detailed dynamics). \section{Lyman $\alpha$ Haloes of Constituent AGN} Three of the bright AGN in this field show extended Lyman $\alpha$ structures in WFPC2 data (P96b, P98). These are either linear or roughly biconical, fitting with a general paradigm of ionizing radiation directed mostly along the poles of some disklike structure. The KPNO data have better sensitivity to large regions of low surface brightness than does {\it HST}, and reveal new aspects of the extended line emission. For 53W002 and object 19 (in the P96b nomenclature), this is an extension of the Lyman $\alpha$ structure seen in WFPC2 images (Windhorst, Keel, \& Pascarelle 1998), but for object 18, this resolved structure is not only much larger (extending more than 5\arcsec\ from the core) than the ionization or scattering cone inferred from WFPC2 data, but it is most extended in a different direction. The inner parts of these structures are detected as well in H$\alpha$ using IRTF narrowband imagery and in [O III] using NICMOS multiband and grism data (Keel et al. 1999). We can examine the structure of the Lyman $\alpha$ images by comparing both the $B$ continuum and emission-line images of each candidate emitter to stellar profiles from the same region of each image. This gives some insurance against minor PSF changes across the field, and avoids problems due to somewhat different PSF widths between the $B$ and F413M images. We consider extended emission to be detected when there is some scaling between broad- and medium-band images for which the difference is flat across the core and shows flux more extensive than the PSF. Requiring a flat central profile is conservative, to minimize the possibility of false detections at the expense of underestimating the flux in the spatially extended component. Several of the brightest emission-line objects show Lyman $\alpha$ emission more extended than their continuum structures. This analysis suggests that both scattering and local recombination play roles in these emission-line halos.
The three objects with extended emission-line regions illustrate this: \noindent Object 18 (P96b): The PSF subtraction shows that more than 75\% of the Lyman $\alpha$ flux from object 18 comes from outside the core, and recovers the gross features of the WFPC2 image. Similar results come from analysis of the $B$ image, while the relative count rates indicate that most of the $B$ light is in fact Lyman $\alpha$. Spectroscopy by Armus et al. (1999) shows that the extended cloud has almost no continuum component, consistent with these results. This accounts for the very blue color of the extended structure ($(B-V)=-0.4$), since there are no strong emission lines in the $V$ band. \noindent 53W002: For 53W002 and object 19, about half the line flux is spatially resolved, in accord with the HST PC data of Windhorst et al. (1998) and the ground-based Lyman $\alpha$ imaging from Windhorst et al. (1991). As noted earlier, the emission-line structure is approximately along the orientation of the 1\arcsec\ radio double source, but much larger. \noindent Object 19 (P96b): As in 53W002, about half the line flux is resolved, in accord with the {\it HST} data as well. In this case, the extended emission is all in Lyman $\alpha$ to our detection threshold; less than 10\% of the $B$-band flux comes from outside the core. These extended structures are illustrated in Fig. 8, comparing the medium-band image, the PSF-subtracted version, and HST imagery of the brightest regions. The large-scale line emission is well aligned with the small-scale emission observed with {\it HST}, which is well shown in the color figure of Windhorst et al. (1998) including scattered continuum components. For object 18, the KPNO data reveal that the inner emission region is identical with the two major components seen with {\it HST}, but much more extensive and amorphous material appears at this deeper surface-brightness threshold. For the two newly detected QSOs, any such resolved line-emitting region must have less than 10\% of the total Lyman $\alpha$ flux (and as low as 5\% for the brighter QSO 2). These values apply to structures that are extended on the scale resolved by the PFCCD images; as a guide, the image size in the final F413M stack has 1.2\arcsec\ FWHM. Lyman $\alpha$ emission by itself is difficult to interpret, since we lack useful density indicators and its radiative transfer is sensitive to the velocity field and dust content. At a minimum, if mechanical energy input isn't important in the extended nebulae, the number of Lyman $\alpha$ photons can give a lower limit to the number of ionizing photons reaching the gas, provided only that the situation is in a steady state. In turn, this can tell us whether the radiation field must be anisotropic to account for the structures we see - that is, whether we are correct in referring to some of these structures as ionization ``cones". The continua on our line of sight are measured from about 1100-2000 \AA\ in the emitted frame, so that we should be able to do a reasonable extrapolation to the Lyman limit and estimate the expected number of ionizing photons in the isotropic case. 
For the simple case of a photoionized cloud occupying solid angle $\Omega$ as seen from the central source, if we see the same ionizing continuum as the cloud does, the extrapolated continuum and observed Lyman $\alpha$ emission should satisfy $$ n_{LyC} \ge {{\Omega} \over {4 \pi}} {{n_{Ly \alpha}} \over {f_\alpha}} $$ where $n_{LyC}$ is the number of Lyman continuum photons per second extrapolated from the observed continuum, $n_{Ly \alpha}$ is the observed number of Lyman $\alpha$ photons per second, and $f_\alpha$ is the fraction of recombinations whose cascade includes Lyman $\alpha$ (0.64 for case A, following the tabulations in Osterbrock 1989). The luminosity distance has cancelled on both sides, though we still need to make a plausible assumption about the clouds' geometry to assign a subtended $\Omega$. The equality holds for an ionization-bounded nebula which is optically thin to all the Lyman lines, in the sense that violating these conditions increases the continuum/line ratio and therefore makes the observed continuum more sufficient to power the extended line region. Applying this test to the three resolved Lyman $\alpha$ regions shows that at least object 18, with its very extensive line emission, has an ionization source that we don't see. Extrapolating the observed continuum at its flat level in flux falls short of creating the observed Lyman $\alpha$ emission by at least a factor 2, suggesting either a bump in the ionizing spectrum or anisotropic radiation. The $B-K$ continuum shape is not unusually red, in fact quite normal for narrow-line AGN and almost identical to the other two objects with extended line emission, so that anisotropic illumination makes sense if it is not caused by material that would redden the observed continuum. A similar issue appears for many type 2 Seyfert nuclei, with a Lyman-continuum deficit implied by the observed continuum and line intensities, and blue UV continuum slopes. This has been variously attributed to scattering or reflection of radiation from a small continuum region (as in Antonucci, Hurt, \& Miller 1994), and surrounding star formation (Colina et al. 1997), with recent results suggesting that the nucleus itself may not be an important contributor to the UV flux in narrow-line objects. The geometry of the Lyman $\alpha$ cloud near object 18 offers little help; while the inner parts, as detected with {\it HST} (P96a, Windhorst et al. 1998) resemble an ionization cone, the outer regions are extended at $90^\circ$ in projection to this axis. Of course, additional energy sources might be considered, such as the radio jet interactions proposed for powerful radio galaxies. However, of these three objects, only 53W002 itself has significant resolved radio emission; the other two are both substantially weaker and unresolved by the VLA at the 1\arcsec\ level (Richards et al. 1999). The flux data alone do not require anisotropic radiation for 53W002 and object 19, though the emission-line structure at least suggests an anisotropic gas distribution, and it is suspicious that the Lyman $\alpha$ structure in 53W002 aligns with the smaller double radio source (Windhorst, Keel, \& Pascarelle 1998). Independent of the ionization mechanism, such a rich collection of large clouds around the brightest illuminating sources raises the question of whether the extended gas belongs exclusively to the AGN hosts or exists more widely throughout this cluster, where we cannot observe it so easily. 
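As a check of the photon-budget argument above, a minimal numerical sketch is given below; all input values (line flux, continuum level, high-frequency cutoff, and subtended solid angle) are hypothetical placeholders rather than measurements from this paper:

\begin{verbatim}
import numpy as np

h = 6.626e-27                     # Planck constant, erg s
z = 2.39

# Observed Lyman-alpha photon arrival rate per cm^2:
F_lya = 5.0e-17                   # line flux, erg s^-1 cm^-2 (placeholder)
E_lya = h * 2.466e15 / (1 + z)    # observed-frame Ly-alpha photon energy
n_lya = F_lya / E_lya

# Ionizing-photon rate from a flat-f_nu continuum extrapolated from the
# observed-frame Lyman limit up to an arbitrary factor-4 cutoff; for a
# flat f_nu the integral of f_nu/(h*nu) gives the log of the ratio.
f_nu = 1.0e-29                    # erg s^-1 cm^-2 Hz^-1 (placeholder)
n_lyc = (f_nu / h) * np.log(4.0)

f_alpha = 0.64                    # Ly-alpha per recombination, case A
omega_over_4pi = 0.25             # assumed cloud covering fraction
print(n_lyc >= omega_over_4pi * n_lya / f_alpha)
\end{verbatim}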
Detection of these extended Lyman $\alpha$ structures in the continuum at a level above the weak free-free emission accompanying recombination would imply the presence of dust, a tracer of the level of star formation early in the galaxies' history. Furthermore, if the nuclei are more often obscured early in cosmic time, we might expect to see ``disembodied" Lyman $\alpha$ clouds in upcoming deep surveys. \section{The Evolution of Structure: Forward to the Past} We have reported a Lyman $\alpha$ survey aimed at tracing structures in the range $z=2.3-2.6$, finding a clumping or clustering in one field, represented by 14 luminous objects spanning about 3 Mpc. This adds to the existing evidence for structure in place, if not necessarily well developed, at cosmologically early epochs. What does the 53W002 structure turn into? Based on its extent as found here, we can ask how many members might exist to the {\it HST} detection threshold. The existing WFPC2 data cover only a single 5.7-arcminute$^2$ area, while we find candidate members spread over an area of about 93 arcminutes$^2$. If the WFPC2 field is representative, there would be 16 times as many faint star-forming members as we've detected to date. With 8 objects in the {\it HST} field now spectroscopically confirmed as members (Armus et al. 1999), that means the total membership would surpass 120 if we're seeing a smooth distribution. Alternatively, if the star-forming objects are in clumps traced by the AGN that we detect from KPNO, there would still be $\sim 70$ in this structure. These numbers are lower estimates, since objects undoubtedly occur below our detection thresholds. These two cases represent rather different proposed histories for cluster and group formation -- in one case, that clumps of objects will merge into today's galaxies, and in the other, that individual objects we see at $z=2.4$ will either passively evolve as they begin to exhaust their gas or continue to acquire infalling material, with merging of initially separate galaxies a less important process. Our velocity information is largely confined to the original WFPC2 field, with the addition of the two newly identified QSOs. Thus it is not very clear how the small velocity dispersion of these objects (of order 385 km s$^{-1}$, including redshifts of members from Armus et al. 1999) should be interpreted for the whole structure. Furthermore, the extended spatial distribution suggests that the structure has not yet relaxed, and may not yet be fully decoupled from the Hubble flow. Recent simulations of galaxy formation from a clumpy medium by Haehnelt, Steinmetz, \& Rauch (1998) indicate that line-of-sight velocity measurements not only have a factor 2 dispersion as seen from various directions, but underestimate the relaxed virial velocities by $\sim 60$\%. These considerations all make a virial mass estimate very uncertain, and likely a lower limit. It may be more realistic to consider the velocity dispersion as applying to the megaparsec-scale clumping including 53W002 itself and the AGN in objects 18 and 19. The velocity range we see is comparable to the dispersion expected purely from the Hubble flow on an assemblage 3 Mpc deep at this epoch, so we may well be seeing the group near the time of turnaround from cosmological expansion, in which case the velocity dispersion tells very little about the internal dynamics. These questions suggest several potentially fruitful lines for further work.
Most notably, we need to know more about the content of this structure, especially for fainter objects both with and without strong Lyman $\alpha$ emission. Multiband imagery sufficient to derive photometric redshifts and narrowband near-infrared measurements tailored to find emission from [O II], [O III], or H$\alpha$ can help fill out our census of members. A more accurate accounting of how common such structures are in the early Universe will require wider-field multiband surveys, preferably with fine enough wavelength bands to both pick out line emitters and resolve multiple line-of-sight sheets or clusters. Eventually, dynamical studies should tell us how these early assemblages become the rich structural spectrum seen in today's Universe. \acknowledgments{We are grateful to Richard Green for approving, and the KPNO staff for implementing, a scheduling switch between the imaging and spectroscopic observing runs which allowed us to confirm the new QSO candidates. We acknowledge C. Leitherer and colleagues for making their starburst spectral templates available via WWW. Portions of this work were supported by NSF grant AST-9802963 and HST STScI grants GO-5985.0*.96A and AR-8388.0*.98A. We thank Paul Francis, the referee, for goading us into more quantitative probability assessments than we had originally incorporated, as well as some interesting suggestions on cosmology versus dynamics in the cluster redshift distribution.}
\section{Introduction} Two-dimensional (2D) transition metal dichalcogenides (TMDs) have been investigated as a possible route toward producing well-controlled optical emitters, critical components in quantum technologies and photonics \cite{gabel2021imaging,dhakal2017local,branny2017deterministic,cho2021highly,turunen2022quantum,XYZ:2020:excitonic,chowdhury2021anomalous,koo2021tip,lee2021inducing,luo2020exciton,parto2021defect,basov2022nano}. The bandgap of monolayer TMDs is uniquely sensitive to doping\cite{xiaodong2019visualizing,kim2019electrical,liu2019direct,yao2017optically,chernikov2015population}, dielectric environment\cite{Heinz2014measurement} and mechanical strain\cite{li2015optoelectronic,Drew:2020,koo2021tip,parto2021defect,trainer2019effects}, thereby providing an ideal platform to create quantum confinement by local mechanical and electrical fields. Indeed, previous research has investigated moir\'e patterns \cite{kim2021excitons,xiaodong2021moire,shabani2021deep,XYZ:2019:1d,crommie2021correlated}, wrinkles \cite{cho2021highly,dhakal2017local,koo2021tip}, bubbles \cite{gabel2021imaging,darlington2020imaging} and lithographically fabricated dielectric arrays \cite{carmesin2019quantum} as effective methods to create localized optical emitters. In spite of these notable achievements, a basic band diagram picture across such localized structures remains elusive. For example, recent theoretical calculations\cite{chirolli2019strain,carmesin2019quantum,morrow2021trapping} have predicted large bandgap changes at the edges of wrinkles and nanobubbles. However, such predictions have not been experimentally verified, and it remains unclear what the major contributors are to achieving localized optical emitters in TMD materials. In order to address these questions, high spatial resolution spectroscopic measurements across local perturbations in TMDs are necessary. Our approach is to use the local imaging and spectroscopic capabilities of scanning tunneling microscopy (STM) to measure the local band diagram across nanobubbles, and to directly correlate these measurements with near-field optical measurements at $\sim$10 nm spatial resolution. \section{Results and Discussion} Our STM measurements are conducted on nearly aligned ($61.7^{\circ}$) WSe$_2$/MoSe$_2$ heterobilayers that were stacked on graphite/hBN as a conducting electrode for STM measurements (see Fig.~1a for a schematic). The relative orientation between the two TMD monolayers was determined by second harmonic generation (SHG) (see supporting information figure S1 and accompanying discussion). The details of sample fabrication are presented in the methods. During the stacking process, nanobubbles are commonly formed between layers. Figure 1b shows a large-scale atomic force microscopy (AFM) image of such nanobubbles. The nanobubbles range in size from 20 nm to 400 nm and in height from 2 nm to 50 nm. STM topographic measurements on one such nanobubble are shown in Fig. 1c. Figure 1d shows the height profile of the nanobubble boundary corresponding to the dashed arrow in Fig.~1c. In order to obtain the STM topography, the tunneling current is held constant at a particular voltage bias through a feedback loop; however, the tunneling current depends on the integrated local density of states between the sample Fermi level and the bias voltage. This dependence of the current on the electronic structure affects the topographic data.
Thus, both electronic structure and actual topographic height influence the apparent topographic height in STM, and the apparent heights are not directly translatable into real vertical displacements. This consideration is not relevant to AFM measurements; however, AFM measurements are dictated by tip-sample force interactions, which can strongly influence the apparent height in AFM topographic scans. Nevertheless, it is clearly evident from the line profile across the boundary shown in Fig. 1d that the edge of the bubble features a sharp step. Such a sharp step indicates that there is localized strain at the edge of the bubble. Finally, a moir\'e pattern is clearly visible both on and off the bubble. Its presence in all regions shows that the WSe$_2$ and MoSe$_2$ layers are in good contact throughout, and the bubble is under the heterobilayer. As a result of the device fabrication process under ambient conditions, compounds in air such as water molecules are often trapped between layers. While the chemical identity of the material in the bubble is unknown, our data below, which show a highly doped region on the nanobubble, indicate the presence of polarized (or ionized) molecules trapped in the nanobubble during the fabrication. Such polarized molecules induce doping and an electric field in the layers above and modify the electronic structure. To characterize the electronic properties in the vicinity of the nanobubble, we perform STM $dI/dV${} spectroscopy measurements at various points on and off the bubble (see Fig. 2a). Shown in Fig.~2b and c is a sequence of such measurements taken across the edge of the bubble, color coded according to the markers on Fig. 2a. Figure 2b shows the typical spectra well inside (blue) and outside (red) the nanobubble area. It is seen that the conduction band (CB) edge shows a substantial shift from the outer edge of the bubble into the interior while the valence band (VB) is nearly unchanged. This indicates a substantial reduction in the bandgap on the bubble as well as large electron doping. Shown in Fig.~2c is a sequence of spectra taken in the transition region across the bubble edge, with a focus on the CB edge where the most dramatic changes are observed. It is seen that at the edge of the bubble (red curve), electronic states exist that are deeply bound in the semiconductor gap. As we transition from outside to inside the bubble (green curve), these states move towards the CB edge, while at the same time the CB edge moves towards the Fermi level. The entire evolution is shown as a heat map of $dI/dV${} in Fig.~2d as a function of position along the arrow overlaid on Fig.~2a. The dashed lines overlaid on Fig.~2d indicate the evolution of the localized states and the conduction band edge upon moving from the outside to the inside of the bubble. In order to confirm that the observations above are representative of the entire bubble and not specific to a particular region, we performed spectroscopic imaging experiments across the bubble interface at various energies, a subset of which are shown in Fig.~2e-g. At the valence band edge (Fig.~2e), the map does not distinguish between the interior and exterior of the bubble, showing the uniformity of the valence band edge across the bubble. Figures 2f and 2g, taken within the semiconducting gap, show the presence of the localized states clearly, and their evolution towards the interior of the bubble.
At large positive energies (not shown), these localized states merge into the conduction band, and a clear contrast is seen between the interior and exterior of the bubble. These maps confirm the basic picture provided by point spectra -- the bubble region is characterized by localized states at its edge and a shifted conduction band edge throughout. We present larger maps across the entire bubble in the supporting information, to rule out moir\'e effects on the observed phenomena. Finally, to rule out possible damage to the bubble during spectroscopic maps, we measure the topographic heights before and after spectroscopy is performed. We next proceed to compare our measurements of the single particle spectroscopic properties of individual bubbles with their optical emission. To do this, we employed hyperspectral nanoscale photoluminescence measurements to map the changes in exciton energy across the bubble. We have previously applied this technique to the imaging of localized exciton states in transition metal dichalcogenide monolayers \cite{darlington2020imaging}; however, the spatial resolution was limited due to the finite radius of curvature of the probe. To push the nano-optical resolution to scales on the order of the localized states, we prepared hetero-nanobubbles on template stripped gold (TS-Au), which allows for optical resolutions on the order of the gap formed between the nano-optic probe and the substrate \cite{Khoury2020acs}. In addition, the quenching of photoluminescence of the monolayer and heterobilayer regions in contact with the substrate greatly reduces background, improving the nanobubbles' PL contrast \cite{tyurnina2019strained}. Figure 3a shows an AFM image of a WSe$_2$/MoSe$_2$ nanobubble on TS-Au, which shows localized interlayer exciton emission (Fig. 3b) when compared with the flat heterostructure. The greater strength of the interlayer exciton emission on the nanobubble edge can be attributed to two factors. First, there is an enhancement from the strain of the nanobubble itself, which shifts the absorption of both constituent layers to the red, allowing for greater absorption of the excitation photons. Second, recent work has shown that, like that of their intralayer cousins, the interlayer excitons' transition dipole is primarily in the plane of the 2D layers\cite{sigl2022optical}. As the nano-optical probe moves across the edge of the nanobubble, more of the transition dipole is aligned with the polarization of the nano-optical field, allowing for significantly greater in- and out-coupling for optical fields. We believe this alignment is critical for near-field observation of interlayer exciton luminescence, which has substantially weaker oscillator strength. We also expect that the strain-localized PL signal is enhanced by funneling effects, where excitons are preferentially shuttled towards the lower energy states\cite{su2022dark,harats2020dynamics,lee2022drift}. Figure 3c shows point spectra at various locations across the nanobubble edges, identified by the colored markers in Fig.~3a and 3d. Black arrows show visual identification of the localized interlayer exciton peak. Clear red shifting of the emission is observed as one moves from the outer edge of the bubble into the interior. To visualize this more clearly, in Fig.~3d we plot a hyperspectral linescan along the vector defined by the colored points in Fig.~3a. The general trend is towards redder emission, shifting $\sim$200 meV over 15 nm into the bubble, after which the emission quickly falls to zero in the nanobubble interior.
In Figs.~3e-3g we show this same trend with spatial maps with energy bins of 1.40, 1.30, and 1.25 eV, respectively. The size of the localized interlayer exciton emission ``ring'' clearly shrinks for the redder energy bins. The shifting of the localized interlayer exciton emission energy reflects the energetic shift of the MoSe$_2$ conduction band observed in the STS. Indeed, comparing the STS and nano-PL hyperspectral linescans, the roll-off of the interlayer emission energy and the conduction band shows remarkable similarities at the nanobubble, but the nano-PL shows a sudden drop-off in emission intensity soon after the edge, which is not seen in STS. This drop-off, however, is expected due to the silicon detector used for the nano-PL spectroscopy (see methods for details), which rapidly loses quantum efficiency at energies lower than $\sim$1.3 eV. Similarly, we do not observe emission from the deeply localized defect states seen in Fig.~2, given that this would correspond to emission energies of $\sim$0.7 eV, which is far below the cutoff of our detector. While quantitative comparison between the STM and nano-PL data is not possible due to the sample differences, evaluation of the STS-derived band gap and exciton PL energy provides information on the binding energy for excitons associated with the conduction band near the bubble edge. Our results are consistent with a conduction band interlayer exciton binding energy with a magnitude of a couple hundred meV within the bubble\cite{kamban2020interlayer}. To understand the contributions of doping and strain to the observed spectroscopic features in experiment, we performed Schr\"odinger-Poisson (SP) simulations using the approach and code of Ref.~\cite{bussy2017strain}. The lengthscale associated with the entire bubble region is too large to model directly, but we will show that the pertinent experimental features can be captured with a significantly simplified model. Our focus is on the conduction band of the heterostructure, which is derived from the MoSe$_2$ layer. Thus, we take band parameters for MoSe$_2$ from Ref.~\cite{kormanyos2015k} and electromechanical parameters from Ref.~\cite{duerloo2012intrinsic}. Since the important physics is localized to the bubble edge, we approximate the bubble region by a 2D inclusion in monolayer MoSe$_2$ (large enough to isolate the two interfaces required by periodic boundary conditions), with a downward shift of the conduction band in the bubble region, as well as a background doping, both of which increase with strain \cite{Listrain}. The strain is assumed to be slightly larger at the edge of the bubble region. The DOS is calculated assuming a 2D parabolic dispersion in the direction parallel to the inclusion above the conduction-band minimum and below the valence band maximum. Figure 4a shows the band diagram obtained by self-consistent solution of the SP equations, focusing on the CB. By construction, the Fermi level is at the conduction band within the bubble region due to the background doping. The dips in the electrostatic potential of the conduction band at the edges of the ``bubble'' have two origins. The first is the band bending induced by the interface between the doped and undoped region. The second is that we assume a higher strain at the interface, which increases the effect of this band bending.
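To illustrate the mechanism, the sketch below solves a toy one-dimensional effective-mass Schr\"odinger problem for a conduction-band profile that is lowered inside the bubble and dips further at its edges; this is not the actual SP code of Ref.~\cite{bussy2017strain}, and all parameters are illustrative only:

\begin{verbatim}
import numpy as np

# hbar^2/2m in eV*Angstrom^2 for an assumed effective mass of 0.5 m_e:
hbar2_2m = 3.81 / 0.5
x = np.linspace(-300.0, 300.0, 1200)          # position grid, Angstrom
dx = x[1] - x[0]

V = np.zeros_like(x)
V[np.abs(x) < 150.0] -= 0.25                  # CB lowered in doped region
for x0 in (-150.0, 150.0):                    # extra dips at the edges
    V -= 0.10 * np.exp(-((x - x0) / 15.0) ** 2)

# Finite-difference Hamiltonian (hard walls at the grid boundaries):
main = hbar2_2m * 2.0 / dx**2 + V
off = -hbar2_2m / dx**2 * np.ones(len(x) - 1)
E = np.linalg.eigvalsh(np.diag(main) + np.diag(off, 1) + np.diag(off, -1))
print(E[:4])   # states below the -0.25 eV plateau sit at the edge dips
\end{verbatim}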
In the full SP simulations, we found that adding a piezoelectric polarization charge at the interface (not shown) did not change the qualitative features, other than introducing an asymmetry in the band-bending on the opposite sides of the bubble region. The red dot-dashed lines in Fig.~4a are the energies of the bound states in the bubble region, and the green dotted lines are their squared wavefunctions (each shifted for clarity so that its zero level lies at the corresponding state energy). We can see that the band bending at the interface region results in a bound state, which can also be clearly seen in Fig.~4b as spikes in the electron density at the edge of the bubble regions. In Fig.~4c we plot the DOS of the total system, including the sum of the 1D states in the bubble region and the 2D DOS in the bulk, at different $x$ points near the interface of the bubble (see the inset; positions denoted by curve color). Superimposed on the step-like increase from the 2D DOS at the conduction and valence band edges in the bubble, we can see a small enhancement in the DOS near the interface, where the localized state resides. As we move toward the center of the bubble region, this enhancement weakens, and ultimately disappears. We focused our model on the MoSe$_2$ layer of the bilayer structure since the MoSe$_2$ is the main contributor to the conduction band. The size of the bubble is much larger than the crystal unit cell, so at the atomic level the edge of the bubble can be approximated by its tangent on the surface of the MoSe$_2$ layer. Hence, using the deformation gradients as a baseline for the energies of the differently strained portions, we can accurately model the band edge with the one-dimensional model. \section{Conclusion} Our theoretical results, though obtained from a simplified model, accurately capture the essential features of the experimental observations. They indicate that sharp lateral junctions in doping are an excellent way to engineer localized states in 2D TMD semiconductors. Such localized states do not depend on the presence of specific chemical defects or specialized strain fields. Recently, several 2D material interfaces have shown the ability to realize large charge transfer at the interface \cite{rucl3,balgley2022ultra}. Our results indicate that such interfacial engineering together with nanostructuring can be employed to create optical emitters with arbitrarily desired shapes at the nanoscale. Our work provides a clear route to achieving this in the future.
\section{Introduction} In the current digital era, streaming data is ubiquitous. In the context of Industrial Internet of Things, remote health monitoring services driven by sensor driven data analytics are becoming increasingly popular. Data-driven approaches for anomaly detection, diagnostics, prognostics and optimization have been proposed to provide operational support to engineers, ensure high reliability and availability of equipment, and to optimize the operational cost (\cite{da2014internet}). Typically, a large number of sensors (order of hundreds or sometimes thousands) are installed to capture the operational behavior of complex equipment with various sub-systems interacting with each other. Recently, deep learning approaches have been proposed for various data-driven health monitoring tasks including anomaly detection (\cite{p:lstm-ad,p:icmlLSTM-AD,gugulothu2018sparse}) and prognostics (\cite{malhotra2016multi,gugulothu2017predicting,zheng2017long}), yielding state-of-the-art results for RUL estimation (\cite{gugulothu2017predicting}) using Recurrent Neural Networks (RNNs). In this work, we focus on the problem of prognostics or Remaining Useful Life (RUL) estimation of operational instances given the current and historical readings from various sensors capturing their behavior. Deep learning approaches for prognostics, and equipment health monitoring in general, have certain limitations as highlighted in \cite{gugulothu2018on,gugulothu2017predicting,khan2018review}. In this work, we address two important practical challenges in deep learning based RUL estimation approaches. The challenges addressed and the corresponding key contributions of this work are as follows: \textbf{Challenge-I}: Deep neural networks are prone to overfitting and typically require a large number of labeled training instances to avoid overfitting. If failure time for an instance is known, a target RUL can be obtained at any time before the failure time. However, labeled training instances for RUL estimation are few as failures are rare. Also, any operational instance (or any instance for which failure time is not known, or which has not failed yet) is considered to be \textit{censored} as target RUL cannot be determined for such an instance. We note that deep RNNs (\cite{heimes2008recurrent,malhotra2016multi,gugulothu2017predicting,zheng2017long,zhang2018long}) and Convolutional Neural Networks (CNNs) (\cite{babu2016deep}) based approaches formulate RUL estimation as a metric regression (MR) problem where a normalized estimate of RUL is obtained given time series of sensor data via a non-linear regression metric function learned from the data. This MR formulation of RUL estimation cannot directly leverage censored data typically encountered in RUL estimation scenarios. \textbf{Key Contribution-I }: In addition to using failed instances for training, we propose a novel approach to \textbf{leverage the censored instances} in a supervised learning setting, in turn, increasing the training data and leading to more robust RUL estimation models. We cast RUL estimation as an \textbf{ordinal regression} (\cite{harrell2001ordinal}) problem (instead of the typically used metric regression formulation) and propose LSTM-OR (Long Short Term Memory Networks based Ordinal Regression) based RUL Estimation approach. 
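To make the use of censored data concrete, one plausible ordinal encoding of RUL targets is sketched below; the thresholds, masking scheme, and shapes are illustrative placeholders, with the exact formulation detailed in Section \ref{sec:deepOR}:

\begin{verbatim}
import numpy as np

thresholds = np.arange(10, 130, 10)        # RUL thresholds t_1 < ... < t_K

def ordinal_targets(rul=None, min_remaining=None):
    """Return (targets, mask); mask = 0 marks unknown ordinal labels."""
    K = len(thresholds)
    y, m = np.zeros(K), np.ones(K)
    if rul is not None:                    # failed instance: RUL is known
        y[:] = (rul > thresholds).astype(float)
    else:                                  # censored: survived at least
        known = thresholds < min_remaining #   min_remaining more cycles
        y[known] = 1.0                     # certainly exceeds these horizons
        m[~known] = 0.0                    # beyond that: masked from the loss
    return y, m

y_f, m_f = ordinal_targets(rul=47)             # fully labeled
y_c, m_c = ordinal_targets(min_remaining=35)   # partially labeled
\end{verbatim}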
We show that \textit{partially labeled training instances} can be generated from the readily available operational (non-failed) instances to augment the labeled training data in the ordinal regression setting to build more robust RUL estimation models. We empirically show that LSTM-OR outperforms LSTM-MR by effectively leveraging censored data when the number of failed instances available for training is small. \textbf{Challenge-II}: The black-box nature of deep neural networks makes it difficult to interpret the predictions/estimates, and in turn, to gauge the reliability of the predictions. It is, therefore, desirable to \textbf{quantify the predictive uncertainty} in deep neural network based predictions of RUL - it can aid engineers and operators in risk assessment and decision making while accounting for the reliability of predictions. \textbf{Key Contribution-II}: We propose a simple yet effective approach to \textbf{quantify uncertainty based on an ensemble of LSTM-OR models} (using a similar idea to that in \cite{NIPS2017_7219}, as detailed in Section \ref{sec:uncertaintyQunatification}). An ensemble of deep LSTM-OR models leads to improved RUL estimation performance, and the empirical standard deviation (ESD) of the predictions from the LSTM-OR models provides an approximate measure of uncertainty. We empirically show that when ESD (i.e. the uncertainty in estimation) is low, the corresponding error in estimation is also low, making ESD a useful uncertainty quantification metric. \textbf{Organization of the paper}: We provide an overview of related literature in Section \ref{sec:rw}. In Section \ref{sec:lstm}, we briefly introduce deep LSTM networks as used to build our deep OR models. We provide details of the LSTM-OR and uncertainty quantification approaches in Sections \ref{sec:deepOR} and \ref{sec:uncertaintyQunatification}, respectively. We provide experimental evaluation details and observations in Section \ref{sec:exp}, and finally conclude in Section \ref{sec:conc}. \section{Related Work\label{sec:rw}} \textit{Trajectory Similarity based RUL estimation}: An important class of approaches for RUL estimation is based on trajectory similarity, e.g. \cite{wang2008similarity,khelif2014rul,lam2014enhanced,malhotra2016multi,gugulothu2017predicting}. These approaches compare the health index trajectory or trend of a test instance with the trajectories of failed train instances to estimate RUL using a distance metric such as Euclidean distance. Such approaches work well when trajectories are smooth and monotonic in nature but are likely to fail in scenarios where there is noise or intermittent disturbances (e.g. spikes, operating mode changes, etc.), as the distance metric may not be robust to such scenarios (\cite{gugulothu2017predicting}). \textit{Metric Regression based RUL estimation}: Another class of approaches is based on metric regression. Unlike trajectory similarity based methods which rely on comparison of trends, metric regression methods attempt to learn a function to directly map sensor data to RUL, e.g. \cite{heimes2008recurrent,benkedjouh2013remaining,dong2014lithium,babu2016deep,gugulothu2017predicting,zheng2017long,vishnu2018recurrent}. Such methods can better deal with non-monotonic and noisy scenarios by learning to focus on the relevant underlying trends irrespective of noise. Within metric regression methods, a few methods consider non-temporal models such as Support Vector Regression for learning the mapping from the values of sensors at a given time instance to RUL, e.g.
\cite{benkedjouh2013remaining,dong2014lithium}. \textit{Temporal models for RUL estimation}: Deep temporal models such as those based on RNNs (\cite{heimes2008recurrent,malhotra2016multi,gugulothu2017predicting,zheng2017long}) or Convolutional Neural Networks (CNNs) (\cite{babu2016deep}) can capture the degradation trends better compared to non-temporal models, and have been shown to perform better. Moreover, these models can be trained in an end-to-end manner without requiring feature engineering. Despite all these advantages of deep models, they are prone to overfitting in the often-encountered practical scenarios where the number of failed instances is small and most of the data is censored. Our approach based on ordinal regression provisions for dealing with such scenarios by using censored instances in addition to failed instances to obtain more robust models. \textit{Ordinal Regression for Survival Analysis}: Ordinal regression has been extensively used for applications such as age estimation from facial images (\cite{chang2011ordinal,yang2013automatic,niu2016ordinal,liu2017ordinal}); however, these applications are restricted to non-temporal image data using Convolutional Neural Networks. \cite{cheng2008neural,luck2017deep} use feed-forward neural networks based ordinal regression for survival analysis. To the best of our knowledge, the proposed LSTM-OR approach is the first attempt to leverage ordinal regression based training using temporal LSTM networks for RUL estimation. \textit{Deep Survival Analysis}: A set of techniques for deep survival analysis have been proposed in the medical domain, e.g. \cite{katzman2018deepsurv,luck2017deep}. On similar lines, an approach to combine deep learning and survival analysis for asset health management has been proposed in \cite{liao2016combining}. However, it is not clear how such approaches can be adapted for RUL estimation applications, as they focus on estimating the survival probability at a given point in time and cannot provide RUL estimates. Further, \cite{chapfuwa2018adversarial} proposes an approach that leverages adversarial learning for time-to-event modeling in the health domain. On the other hand, LSTM-OR is capable of providing RUL estimates using time series sensor data. \textit{Uncertainty quantification in RUL estimation models}: Uncertainty analysis in data-driven equipment health monitoring is an active area of research and an unsolved problem. The approaches described in \cite{sankararaman2013novel}, \cite{6496971} use analytical algorithms, unlike sampling-based methods, to estimate the uncertainty in prognostics. They consider various sources of uncertainty, such as the loading and operating conditions of the system at hand, inaccurate sensor measurements, etc., to quantify their combined effect on RUL predictions. The task is formulated as an uncertainty propagation problem where the various types of uncertainty are propagated through state space models until failure. Also, the future states of the system are estimated using the state space models and are used to arrive at an estimate of RUL. Unlike these approaches, we focus on estimating RUL as well as predictive uncertainty by using an ensemble of deep neural networks to model the time series of sensor data available until a given point in time, without predicting the future states of the system. Our approach does not rely on any assumptions such as those needed in a state-space model.
Further, domain knowledge of the underlying dynamics of a system is not needed to quantify uncertainty, and therefore, our approach is much simpler to adapt. \textit{Uncertainty quantification for deep neural networks}: Recently, \cite{gal2016dropout} proposed the use of dropout at inference time to provide a Bayesian approximation, which can be leveraged for uncertainty estimation in RUL prediction. Further, \cite{NIPS2017_7219} proposed the use of an ensemble of neural networks for predictive uncertainty estimation and demonstrated their use in comparison to Bayesian methods. Similarly, we also use an ensemble of LSTM networks to estimate the empirical uncertainty in RUL predictions. \section{Background: Deep LSTM Networks\label{sec:lstm}} We use a variant of LSTMs (\cite{hochreiter1997long}) as described in \cite{zaremba2014recurrent} in the hidden layers of the neural network. Hereafter, we denote column vectors by bold small letters and matrices by bold capital letters. For a hidden layer with $h$ LSTM units, the values for the input gate $\mathbf{i}_t$, forget gate $\mathbf{f}_t$, output gate $\mathbf{o}_t$, hidden state $\mathbf{z}_t$, and cell state $\mathbf{c}_t$ at time $t$ are computed using the current input $\mathbf{x}_t$, the previous hidden state $\mathbf{z}_{t-1}$, and the cell state $\mathbf{c}_{t-1}$, where $\mathbf{i}_t$, $\mathbf{f}_t$, $\mathbf{o}_t$, $\mathbf{z}_t$, and $\mathbf{c}_t$ are real-valued $h$-dimensional vectors. Consider $W_{n_1,n_2}:\mathbb{R}^{n_1} \rightarrow \mathbb{R}^{n_2}$ to be an affine transform of the form $\mathbf{z}\mapsto \mathbf{Wz}+\mathbf{b}$ for matrix $\mathbf{W}$ and vector $\mathbf{b}$ of appropriate dimensions. In the case of a multi-layered LSTM network with $L$ layers and $h$ units in each layer, the hidden state $\mathbf{z}_{t}^{l}$ at time $t$ for the $l$-th hidden layer is obtained from the hidden state at $t-1$ for that layer $\mathbf{z}_{t-1}^{l}$ and the hidden state at $t$ for the previous ($l-1$)-th hidden layer $\mathbf{z}_{t}^{l-1}$. The time series goes through the following transformations iteratively at the $l$-th hidden layer for $t=1$ through $T$, where $T$ is the length of the time series: \begin{equation}\label{eq:lstm1} \left(\begin{aligned} \mathbf{i}_t^l\\ \mathbf{f}_t^l\\ \mathbf{o}_t^l\\ \mathbf{g}_t^l \end{aligned}\right)=\left(\begin{aligned} \sigma\quad\\ \sigma\quad\\ \sigma\quad\\ \tanh\\ \end{aligned}\right)W_{2h,4h} \left(\begin{aligned} \mathbf{D(z}_{t}^{l-1})\\ \mathbf{z}_{t-1}^l\\ \end{aligned}\right) \end{equation} where the cell state $\mathbf{c}_t^l$ is given by $\mathbf{c}_t^l=\mathbf{f}_t^l\mathbf{c}_{t-1}^l+\mathbf{i}_t^l\mathbf{g}_t^l$, and the hidden state $\mathbf{z}_t^l$ is given by $\mathbf{z}_t^l=\mathbf{o}_{t}^l\tanh(\mathbf{c}_t^l)$. We use dropout for regularization (\cite{pham2014dropout}), which is applied only to the non-recurrent connections, ensuring information flow across time-steps for any LSTM unit. The dropout operator $\mathbf{D}(\cdot)$ randomly sets the dimensions of its argument to zero with probability equal to a dropout rate. The sigmoid ($\sigma$) and $\tanh$ activation functions are applied element-wise. In a nutshell, this series of transformations for $t=1\ldots T$ converts the input time series $\mathbf{x}=\mathbf{x}_1\ldots\mathbf{x}_T$ of length $T$ to a fixed-dimensional vector $\mathbf{z}_T^L \in \mathbb{R}^h$. We, therefore, represent the LSTM network by a function $f_{LSTM}$ such that $\mathbf{z}_T^L = f_{LSTM}(\mathbf{x};\mathbf{W})$, where $\mathbf{W}$ represents all the parameters of the LSTM network.
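As an illustration, a minimal sketch of $f_{LSTM}$ in PyTorch is given below; the framework choice, the class name \texttt{LSTMEncoder} and the default hyperparameters are our assumptions for illustration, not details of the original implementation. Note that \texttt{nn.LSTM} applies dropout only between layers, i.e. to the non-recurrent connections, matching the description above.
\begin{verbatim}
import torch
import torch.nn as nn

class LSTMEncoder(nn.Module):
    """Maps a time series x of shape (batch, T, p) to z_T^L."""
    def __init__(self, p, h=80, L=2, dropout=0.2):
        super().__init__()
        # dropout acts on the outputs of every layer except the
        # last, i.e. only on the non-recurrent connections
        self.lstm = nn.LSTM(input_size=p, hidden_size=h,
                            num_layers=L, batch_first=True,
                            dropout=dropout)

    def forward(self, x):
        _, (h_n, _) = self.lstm(x)   # h_n: (num_layers, batch, h)
        return h_n[-1]               # z_T^L: (batch, h)
\end{verbatim}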
\begin{figure*}[h] \subfigure[Metric Regression]{\includegraphics[trim={25cm 3cm 2cm 2cm},clip,width=0.3\textwidth]{MR.pdf}} \subfigure[Ordinal Regression]{\includegraphics[trim={2cm 3cm 25cm 0cm},clip,width=0.3\textwidth]{OR.pdf}} \subfigure[Ordinal Regression For Censored Data]{\includegraphics[trim={2cm 3cm 25cm 0cm},clip,width=0.3\textwidth]{OR-Censored.pdf}} \caption{Deep Ordinal Regression versus Deep Metric Regression.} \end{figure*} \begin{figure*}[h] \subfigure[Process overview for LSTM-OR.\label{fig:flowchart}]{\includegraphics[trim={0cm 1.5cm 0cm 1cm},clip,width=0.9\columnwidth]{flowchart.pdf}} \subfigure[RUL and Uncertainty Estimation using Ensemble of LSTM-OR models.\label{fig:flowchart-orce}]{\includegraphics[trim={0cm 1.5cm 0cm 1cm},clip,width=1.1\columnwidth]{flowchart-orce.pdf}} \caption{Steps in LSTM-OR and Ensemble of LSTM-OR.} \end{figure*} \section{Deep Ordinal Regression for RUL Estimation\label{sec:deepOR}} \subsection{Terminology} Consider a learning set $\mathcal{D} = \{\mathbf{x}^{i},r^{i}\}^n_{i=1}$ of $n$ failed instances, where $r^i$ is the target RUL, $\mathbf{x}^{i}= \mathbf{x}^i_{1} \ldots \mathbf{x}^i_{{T^i}} \in \mathcal{X}$ is a multivariate time series of length $T^i$, $\mathbf{x}^i_{t} \in \mathbb{R}^{p}$, and $p$ is the number of input features (sensors). The total operational life of an instance $i$ till the failure point is $F^i$, s.t. $T^i \leq F^i$. Therefore, $r^{i}=F^i-T^i$ is the RUL in the given unit of measurement, e.g., number of cycles or operational hours. Hereafter, we omit the superscript $i$ in this section for better readability, and provide all the formulation considering an instance (unless stated otherwise). We consider an upper bound $r_u$ on the possible values of RUL as, in practice, it is not possible to predict too far ahead into the future. So if $r > r_u$, we clip the value of $r$ to $r_u$. The usually defined goal of RUL estimation via Metric Regression (MR) is to learn a mapping $f_{MR}: \mathcal{X} \rightarrow [0,r_u]$. With these definitions, we next describe the LSTM-based Ordinal Regression (LSTM-OR) approach as summarized in Figure \ref{fig:flowchart}, and then describe how we incorporate censored data into the LSTM-OR formulation. \subsection{LSTM-based Ordinal Regression} \begin{figure}[h] \subfigure[Failed Instance\label{fig:illus_failed}]{\includegraphics[trim={12cm 2cm 2cm 2cm},clip,width=0.45\columnwidth]{failure_instance.pdf}} \subfigure[Censored Instance\label{fig:illus_cens}]{\includegraphics[trim={12cm 2.2cm 2cm 2cm},clip,width=0.45\columnwidth]{censored_instance.pdf}} \caption{Target vector creation for failed versus censored instance.\label{fig:illus}} \end{figure} Instead of mapping an input time series to a real-valued number as in MR, we break the range $[0,r_u]$ of RUL values into $K$ intervals of length $c=r_u/K$ each, where each interval is then considered as a discrete variable. The $j$-th interval corresponds to $((j-1)c, jc]$, and $r$ is mapped to the $k$-th interval with $k=\ceil*{\frac{r}{c}}$, where $\ceil*{.}$ denotes the ceiling function. We consider $K$ binary classification sub-problems for the $K$ discrete variables (intervals): a classifier $C_j$ solves the binary classification problem of determining whether $r\leq jc$.
We train an LSTM network for the $K$ binary classification tasks simultaneously by modeling them together as a multi-label classification problem: We obtain the multi-label target vector $\mathbf{y}=[y_{1},\ldots,y_{K}] \in \{0,1\}^K$ from $r$ such that \begin{equation}\label{eq:target} y_{j} = \begin{cases} 0 & j<k\\ 1 & j\geq k\\ \end{cases} \end{equation} where $j=1,2, \dots,K$. For example, consider a scenario where $K=5$, and $r$ maps to the third interval such that $k=3$. The target is then given by $\mathbf{y}=[0,0,1,1,1]$, as illustrated in Figure \ref{fig:illus_failed}. Effectively, the goal of LSTM-OR is to learn a mapping $f_{OR}: \mathcal{X} \rightarrow \{0,1\}^K$ by minimizing the loss function $\mathcal{L}_{OR}$ given by: \begin{equation}\label{eq:OR} \begin{aligned} \mathbf{z}_{T}^L&=f_{LSTM}(\mathbf{x};\mathbf{W})\\ \mathbf{\hat{y}}&= \sigma(\mathbf{W}_C\:\mathbf{z}_{{T}}^L+\mathbf{b}_C)\\ \mathcal{L}_{OR}(\mathbf{y},\mathbf{\hat{y}})&=-\frac{1}{K}\sum_{j=1}^{K}y_{j}\cdot \log(\hat{y}_{j})+(1-y_{j})\cdot \log(1-\hat{y}_{j})\\ \end{aligned} \end{equation} where $\mathbf{\hat{y}}$ is the estimate for the target $\mathbf{y}$, $\mathbf{W}$ represents the parameters of the LSTM network, and $\mathbf{W}_C$ and $\mathbf{b}_C$ are the parameters of the layer that maps $\mathbf{z}_{T}^L$ to the output sigmoid layer. \subsection{Using Censored Data for Training} For any censored instance, the data is available only till a time $T$ prior to failure and the failure time $F$ is unknown (illustrated in Figure \ref{fig:illus_cens}). Therefore, the target RUL $r$ is also unknown. However, at any time $t_0$ s.t. $1\leq t_0<T$, it is known that the RUL $r > T-t_0$ since the instance is operational at least till $T$. Considering $\mathbf{x}=\mathbf{x}_{1} \ldots \mathbf{x}_{t_0}$ as the input time series, we next show how we assign labels to a few of the dimensions $y_j$ of the target vector $\mathbf{y}$: Assuming $T-t_0$ maps to the interval $k'=\ceil*{\frac{T-t_0}{c}}$, since $T-t_0 < r$, we have $\ceil*{\frac{T-t_0}{c}} \leq \ceil*{\frac{r}{c}} \implies k^{\prime} \leq k$. Since $k$ is unknown (as $r$ is unknown) and we have $k^{\prime} \leq k$, the target vector $\mathbf{y}$ can only be partially obtained: \begin{equation}\label{eq:maskedtarget} y_{j} = \begin{cases} 0 & j<k'\\ \text{unknown} & j\geq k'\\ \end{cases} \end{equation} For all $j \geq k^{\prime}$, the corresponding binary classifier targets are masked, as shown in Figure \ref{fig:illus_cens}, and the outputs from these classifiers are not included in the loss function for the instance. The loss function $\mathcal{L}_{OR}$ given by Equation \ref{eq:OR} can thus be modified to $\mathcal{L}_{ORC}$ to include the censored instances in training: \begin{equation}\label{eq:maskedL} \mathcal{L}_{ORC}(\mathbf{y},\mathbf{\hat{y}})=-\frac{1}{K'}\sum_{j=1}^{K'}y_{j}\cdot \log(\hat{y}_{j})+(1-y_{j})\cdot \log(1-\hat{y}_{j})\\ \end{equation} where $K'=k'-1$ for a censored instance and $K'=K$ for a failed instance. \subsection{Mapping OR estimates to RUL} Once trained, each of the $K$ classifiers provides a probability $\hat{y}_j$ that the RUL is at most the upper limit of the interval corresponding to the $j$-th classifier.
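Before describing this mapping, we illustrate the target construction concretely. The following sketch is a hypothetical NumPy implementation (function names and defaults are ours, not part of the original code): it builds the full target vector of Equation \ref{eq:target}, the partial target and mask of Equation \ref{eq:maskedtarget}, and the masked loss of Equation \ref{eq:maskedL}, which reduces to Equation \ref{eq:OR} for failed instances.
\begin{verbatim}
import numpy as np

def make_target(r, r_u=130, K=10):
    # Full target vector of Eq. (2) for a failed instance with
    # RUL r; r is clipped to r_u as described above.
    c = r_u / K
    k = max(int(np.ceil(min(r, r_u) / c)), 1)
    y = np.zeros(K)
    y[k - 1:] = 1.0                  # y_j = 1 for j >= k
    return y

def make_censored_target(min_rul, r_u=130, K=10):
    # Partial target of Eq. (4): only y_j with j < k' are known
    # (all equal to 0); min_rul is the lower bound T - t_0.
    c = r_u / K
    k_prime = int(np.ceil(min_rul / c))
    y = np.zeros(K)
    mask = np.zeros(K)
    mask[:max(k_prime - 1, 0)] = 1.0   # K' = k' - 1 usable labels
    return y, mask

def masked_bce(y, y_hat, mask, eps=1e-7):
    # Loss of Eq. (5); with mask = 1 everywhere it is Eq. (3).
    ll = y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps)
    return -(mask * ll).sum() / max(mask.sum(), 1.0)
\end{verbatim}
For example, with $K=5$ and $k=3$, \texttt{make\_target} returns $[0,0,1,1,1]$, matching Figure \ref{fig:illus_failed}.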
We obtain the point-estimate $\hat{r}$ for ${r}$ from $\hat{\mathbf{y}}$ for a test instance as follows (similar to \cite{chang2011ordinal}): \begin{equation}\label{eq:pointEst} \hat{r} = r_u(1-\frac{1}{K}\sum_{j=1}^{K}\hat{y}_{j}) \end{equation} It is worth noting that, once learned, the LSTM-OR model can be used in an online manner for operational instances: at the current time instance $t$, the sensor data from the latest $T$ time instances can be input to the model to obtain the RUL estimate $\hat{r}$ at $t$. \section{Predictive Uncertainty Quantification using Ensemble of LSTM-OR Models \label{sec:uncertaintyQunatification}} Uncertainty quantification is very important in the case of RUL estimation as the equipment and operations involved are often of a critical nature, and reliable predictions close to (but, of course, prior to) failures can help avoid catastrophic failures by generating suitable alarms beforehand. A lack of sufficient training data, inherent noise in sensor readings, and uncertainty in the future usage and operation of equipment are a few sources of uncertainty in data-driven predictive models for RUL estimation. Quantifying the uncertainty in RUL estimates can assist ground engineers and operators to arrive at more informed decisions compared to scenarios where only RUL estimates are available without any metric indicating whether the model is certain about the estimate or not. In other words, uncertainty quantification of the RUL estimate enhances the reliability of data-driven models. This is even more relevant in deep neural network-based estimation models due to their otherwise black-box nature. An uncertainty metric can be considered to be reliable if: i) for low uncertainty values, i.e. whenever the model is confident about its estimates, the corresponding errors in the RUL estimates are low, while for high uncertainty values the corresponding errors tend to be high; ii) it produces RUL estimates with low uncertainty when a failure is approaching, i.e. the model should be able to precisely estimate the RUL with a high degree of certainty close to failures. To quantify the predictive uncertainty in the target vector estimate $\hat{\mathbf{y}}$ and the corresponding RUL estimate $\hat{r}$, we consider training an ensemble of LSTM-OR models. We consider an ensemble learning approach similar to that introduced in \cite{NIPS2017_7219}: For training an ensemble of LSTM-OR models, we consider all the training data while using different (random) initializations of the parameters ($\mathbf{W},\mathbf{W}_C,\mathbf{b}_C$) of the LSTM-OR models and random shuffling of the training instances to obtain $m$ different models in an ensemble. The final RUL estimate of the ensemble is given by the simple average of the RUL estimates of the $m$ models in the ensemble, and the empirical standard deviation (ESD) of the RUL estimates is used as an approximation of the predictive uncertainty in RUL estimation. More specifically, as shown in Figure \ref{fig:flowchart-orce}, we train $m$ LSTM-OR models such that we have $m$ RUL estimates $\hat{r}_{i}$ for any instance, $i=1,\ldots,m$.
We obtain the point estimate $\hat{r}$ for ${r}$ from the $\hat{r}_{i}$ for an instance as follows: \begin{equation}\label{eq:uncertaintyEst} \hat{r} = \frac{1}{m}\sum_{i=1}^{m}\hat{r}_{i} \end{equation} The uncertainty $\hat{u}$ in terms of ESD is given by: \begin{equation}\label{eq:uncertaintyEstStdDev} \hat{u}_{ESD} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}(\hat{r}_{i} - \hat{r})^2} \end{equation} We normalize the uncertainty values ($\hat{u}_{ESD}$) using the minimum and maximum uncertainty values across all instances in a hold-out validation set through min-max normalization. We also consider other measures of uncertainty quantification in terms of entropy (similar to \cite{park2015using}) as explained in Appendix \ref{apx:ent_uncertainty}, but found ESD to be the most robust measure of uncertainty. We support this with experimental evaluation in Section \ref{sec:uncertainty}. \section{Experimental Evaluation\label{sec:exp}} We evaluate the RUL estimation and uncertainty quantification approaches using the publicly available C-MAPSS aircraft turbofan engine benchmark datasets (\cite{saxena2008turbofan}). We provide an overview of the dataset in Section \ref{sec:dd}. We consider metric regression models and ordinal regression models trained only on failed instances as baseline models, and compare the following approaches for RUL estimation: i) \textbf{MR}: LSTM-MR using failed instances only (as in \cite{zheng2017long,heimes2008recurrent,gugulothu2017predicting}), ii) \textbf{OR}: LSTM-OR using failed instances only and using the loss as in Equation \ref{eq:OR}, iii) \textbf{ORC}: LSTM-OR leveraging censored data along with failed instances using the loss as in Equation \ref{eq:maskedL}, iv) \textbf{ORCE}: simple average ensemble of ORC models. We describe the RUL estimation approaches in Section \ref{sec:RUL}. Further, to evaluate the uncertainty quantification approach described in Section \ref{sec:uncertaintyQunatification}, we study the relationship of the uncertainty estimates with the error and the ground truth RUL in Section \ref{sec:uncertainty}, while also introducing novel metrics to evaluate the efficacy of uncertainty estimates in the context of prognostics. \subsection{Dataset Description\label{sec:dd}} We consider datasets FD001 and FD004 from the simulated turbofan engine datasets\footnote{\url{https://ti.arc.nasa.gov/tech/dash/groups/pcoe/\\prognostic-data-repository/\#turbofan}} (\cite{saxena2008turbofan}). The training sets (\textit{train\_FD001.txt} and \textit{train\_FD004.txt}) of the two datasets contain time series of readings for 24 sensors (21 sensors and 3 operating condition variables) of several instances (100 in FD001 and 249 in FD004) of a turbofan engine from the beginning of usage till end of life. The time series for the instances in the test sets (\textit{test\_FD001.txt} and \textit{test\_FD004.txt}) are pruned some time prior to failure, such that the instances are operational and their RUL needs to be estimated. The actual RUL values for the test instances are available in \textit{RUL\_FD001.txt} and \textit{RUL\_FD004.txt}. We randomly sample 20\% of the available training set instances, as given in Table \ref{tab:datastats}, to create a validation set for hyperparameter selection. For simulating the scenario for censored instances, a percentage $p_c \in \{0,50,70,90\}$ of the training and validation instances are randomly chosen, and the time series for each such instance is randomly truncated at one point prior to failure.
We then consider these truncated instances as censored (currently operational) and their actual RUL values as unknown. The remaining $(100-p_c)$\% of the instances are considered as failed. Further, the time series of each instance thus obtained (censored and failed) is truncated at 20 random points in the life prior to failure, and the exact RUL $r$ for the failed instances and the minimum possible RUL $T-t_0$ for the censored instances (as in Section \ref{sec:deepOR} and Figure \ref{fig:illus}) at the truncation points are used for obtaining the models. The number of instances thus obtained for training and validation for $p_c=0$ is given in Table \ref{tab:winstats}. The test set remains the same as in the benchmark dataset across all scenarios (with no censored instances). The MR and OR approaches cannot utilize the censored instances as the exact RUL targets are unknown, while ORC can utilize the lower bound on the RUL targets to obtain partial labels as per Equation \ref{eq:maskedtarget}. An engine may operate in different operating conditions and also have different failure modes at the end of its life. The number of operating conditions and failure modes for both the datasets are given in Table \ref{tab:datastats}. FD001 has only one operating condition, so we ignore the corresponding three sensors such that $p=21$, whereas FD004 has six operating conditions determined by the three operating condition variables. We map these six operating conditions to a 6-dimensional one-hot vector as in \cite{zheng2017long}, such that $p=27$. \subsection{RUL Estimation \label{sec:RUL}} In this section, we define the performance metrics used to evaluate our RUL estimation models, i.e. ORC and ORCE. Further, we discuss our experimental setup, followed by results and observations. We also draw a comparison between our proposed RUL estimation models and existing RUL estimation models. \subsubsection{Performance Metrics for Evaluating RUL Estimation Models\label{sec:metrics}} There are several metrics proposed to evaluate the performance of prognostics models (\cite{saxena2008metrics}). We measure the performance of our models in terms of Timeliness Score (S) and Root Mean Squared Error (RMSE): For a test instance $i$, the error in estimation is given by $e_{i} =\hat{r}_{i} - {r}_{i}$. The timeliness score for $N$ test instances is given by $S=\sum^N_{i=1} (\exp({\gamma\cdot|e_{i}|})-1)$, where $\gamma=1/\tau_1$ if $e_{i}<0$, else $\gamma=1/\tau_2$. Usually, $\tau_1 > \tau_2$ such that late predictions are penalized more compared to early predictions. We use $\tau_1=13$ and $\tau_2=10$ as proposed in \cite{saxena2008damage}. The lower the value of $S$, the better is the performance. The root mean squared error (RMSE) is given by: $RMSE=\sqrt{\frac{1}{N}\sum^N_{i=1}e_{i}^2}$. \begin{table}[h] \centering \footnotesize \caption{Number of train, validation and test instances. Here, OC: number of operating conditions, FM: number of fault modes. \label{tab:datastats}} \begin{tabular}{|c|c|c|c|c|c|} \hline {\bfseries Dataset}&{\bfseries Train}&{\bfseries Validation }&{\bfseries Test}&{\bfseries OC}&{\bfseries FM}\\ \hline FD001&80&20&100&1&1\\ \hline FD004&199&50&248&6&2\\ \hline \end{tabular} \end{table} \begin{table}[h] \centering \footnotesize \caption{Number of truncated instances.
\label{tab:winstats}} \begin{tabular}{|c|c|c|c|} \hline {\bfseries Dataset}&{\bfseries Train}&{\bfseries Validation }&{\bfseries Test}\\ \hline FD001&1600&400&100\\ \hline FD004&3980&1000&248\\ \hline \end{tabular} \end{table} \begin{table*}[th] \caption{Comparison of various LSTM-based approaches considered in terms of RMSE and Timeliness Score (S) for FD001 and FD004 datasets. $n_f$ and $n_c$ denote the number of failed and censored instances in the training set, respectively. \label{tab:ORvsMR}} \scalebox{0.65}{% \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c||c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c||}{\bfseries} & \multicolumn{1}{|c}{} & \multicolumn{9}{c||}{\bfseries FD001}& \multicolumn{1}{|c}{} & \multicolumn{9}{c|}{\bfseries FD004}\\ \hline & \multicolumn{2}{|c|}{Instances}& \multicolumn{4}{|c|}{RMSE} & \multicolumn{4}{|c||}{Timeliness Score (S)} & \multicolumn{2}{|c|}{Instances} & \multicolumn{4}{|c|}{RMSE} & \multicolumn{4}{|c|}{Timeliness Score (S) $\times 10^{-3}$}\\ \hline $\textit{p}_c(\%)$ & $n_f$ &$n_c$& MR & OR & ORC & ORCE & MR & OR & ORC & ORCE & $n_f$ &$n_c$ & MR & OR & ORC & ORCE &MR & OR & ORC& ORCE \\ \hline 0 & 80 & 0 & 15.62 & 15.63 & 15.63 & \textbf{14.62} & 507.2 & 367.64 & 367.64 & \textbf{292.76} & 199&0 &\textbf{26.88} & 28.33 & 28.33 & 27.47 & $\mathbf{4.92}$ & $6.44$ &$6.44$ & $5.24$\\ \hline 50 & 40 & 40 & 17.56 & 19.06& 17.60 & \textbf{15.98} & 444.1 &564.14 & 572.63 & \textbf{372.26} & 100&99 & \textbf{29.71} & 32.85& 31.48 & 30.62 & $7.97$ &$17.9$&$9.83$ & $\mathbf{7.86}$\\ \hline 70 & 24 & 56 & 19.92 & \textbf{16.48} & 18.53 & 16.57 & 713.31 & \textbf{362.21} & 561.11 & 404.94 & 60&139 & 33.17 & 33.65 &32.13 & \textbf{31.27} & $18.8$ &$17.4$ & $12.0$ & $\mathbf{9.59}$ \\ \hline 90 & 8 & 72 & 25.32 & 24.83 & 21.51 & \textbf{20.38} & $1.26\times 10^4$ & $3.07\times 10^4$ & $20.64\times 10^2$ & $\mathbf{13.57\times 10^2}$ & 20 & 179 & 41.23 & 43.88 & 39.75 & \textbf{38.41} & $102.0$ & $111.0$ & $141.13$ & $\mathbf{60.72}$ \\ \hline \end{tabular}} \end{table*} \begin{figure*}[h] \subfigure[RMSE FD001]{\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=0.24\textwidth]{plot_rmse_gain_FD001.pdf}} \subfigure[Timeliness Score (S) FD001]{\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=0.24\textwidth]{plot_score_gain_FD001.pdf}} \subfigure[RMSE FD004]{\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=0.24\textwidth]{plot_rmse_gain_FD004.pdf}} \subfigure[Timeliness Score (S) FD004\label{fig:ScoreFD004}]{\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=0.24\textwidth]{plot_score_gain_FD004.pdf}} \caption{Percentage gain of ORC and ORCE over MR with decreasing number of failed instances ($n_f$) in training.\label{fig:gain}} \end{figure*} \begin{figure*}[h] \subfigure[FD001\label{fig:PnRFD001}]{\includegraphics[trim={0.75cm 0cm 1.5cm 0cm},clip,width=0.24\textwidth]{fd001-10-recall-precision.eps}} \subfigure[FD004\label{fig:PnRFD004}]{\includegraphics[trim={0.75cm 0cm 1.5cm 0cm},clip,width=0.24\textwidth]{fd004-10-recall-precision.eps}} \subfigure[FD001\label{fig:F1FD001}]{\includegraphics[trim={0.75cm 0cm 1.5cm 0cm},clip,width=0.24\textwidth]{fd001-10-f1.eps}} \subfigure[FD004\label{fig:F1FD004}]{\includegraphics[trim={0.75cm 0cm 1.5cm 0cm},clip,width=0.24\textwidth]{fd004-10-f1.eps}} \caption{Comparison of ESD and ENT as measures of uncertainty in terms of (a)-(b) Precision Recall Curves; and (c)-(d) F1 Scores with varying $\tau_u$.
ESD is a more robust uncertainty metric compared to ENT.} \end{figure*} \begin{figure*}[h] \subfigure[Average Error with varying uncertainty threshold.\label{fig:sigma-error-plots-fd001-fd004}]{\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=\columnwidth]{sigma-error-plots-fd001-fd004-new.eps}} \subfigure[Uncertainty Evaluation with varying RUL.\label{fig:ground-sigma-error-precision}]{\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=\columnwidth]{ground-coverage-sigma-error-fd001-fd004-10.eps}} \caption{Performance evaluation of ESD as an uncertainty metric showing: (a) lower uncertainty values corresponding to low RUL estimation errors, (b) highly precise and correct uncertainty estimates close to failures, i.e. when RUL is low.} \end{figure*} \subsubsection{Experimental Setup} We consider $r_u=130$ cycles for all models, as used in \cite{babu2016deep,zheng2017long}. For OR and ORC, we consider $K=10$ such that the interval length $c=13$. For training the MR models, a normalized RUL in the range 0 to 1 (where 1 corresponds to a target RUL of 130 or more) is given as the target for each input. We use a maximum time series length of $T=360$; for any instance with more than 360 cycles, we take the most recent 360 cycles. Also, we use standard z-normalization to normalize the input time series sensor-wise, using the mean and standard deviation of each sensor from the train set. The hyperparameters $h$ (number of hidden units per layer), $L$ (number of hidden layers) and the learning rate are chosen from the sets $\{50,60,70,80,90,100\}$, $\{2,3\}$ and $\{0.001, 0.005\}$, respectively. We use a dropout rate of 0.2 for regularization, and a batch size of 32 during training. The models are trained for a maximum of 2000 iterations with early stopping. The best hyperparameters are obtained using grid search by minimizing the respective loss function on the validation set. For ORCE, we consider an ensemble of $m=6$ models (we considered up to 10 models in the ensemble, and found $m=6$ to work best across the scenarios considered). The models are trained with the best hyperparameters selected from the corresponding hyperparameter sets of ORC. While training the different models, we ensure random initializations of the parameters of the neural network and random shuffling of the training instances. For selecting $m=6$ models from the available $10$ models, we order the models in ascending order of their respective loss values on the validation set and then select the first six models. \subsubsection{Results and Observations\label{sec:ro_RUL}} As summarized in Table \ref{tab:ORvsMR}, we observe that: As the number of failed training instances ($n_f$) decreases, the performance of all models degrades (as expected). However, importantly, for scenarios with small $n_f$, ORCE significantly outperforms MR and OR. For example, as shown in Figure \ref{fig:gain}, with $p_c=90\%$ (i.e. with $n_f=8$ and 20 for FD001 and FD004, respectively), ORCE performs significantly better than MR, and shows 19.5\% and 6.8\% improvement over MR in terms of RMSE for FD001 and FD004, respectively. The gains in terms of the timeliness score $S$ are higher because of the exponential nature of $S$ (refer to Section \ref{sec:metrics}). It is evident from Figure \ref{fig:gain} that ORCE performs better than ORC and MR in terms of both RMSE and S. The performance gap between ORCE and ORC increases significantly in terms of the timeliness score (S) for the FD004 dataset when $p_c=90\%$, as shown in Figure \ref{fig:ScoreFD004}.
Due to the small number of failed training instances ($p_c=90\%$), some models in the ensemble are not trained properly and result in high errors even for the instances with lower RUL $r$. This results in very high values of $S$. In the case of ORC, the overall value of $S$ tends to be high since we report the average of the timeliness scores corresponding to the $m$ models in an ensemble. This is not the case for ORCE: since the instance-wise RUL estimates are obtained as the average of the $m$ estimates from the $m$ models in the ensemble, the performance of ORCE in terms of $S$ is better when compared to ORC. While MR and OR have access to only a small number of failed instances $n_f$ for training, ORCE and ORC have access to the $n_f$ failed instances as well as partial labels from the $n_c$ censored instances for training. Therefore, the MR and OR models tend to overfit while the ORC and ORCE models are more robust. We also provide a comparison with existing deep CNN-based (\cite{babu2016deep}) and LSTM-based (\cite{zheng2017long}) MR approaches in Table \ref{tab:litComp}. ORC (same as OR for $p_c=0\%$) performs comparably to existing MR methods. More importantly, as noted above, ORC and ORCE may be advantageous and more suitable for practical scenarios with few failed training instances. \subsection{Uncertainty Quantification\label{sec:uncertainty}} We introduce various metrics used to evaluate the performance of the proposed ensemble-based uncertainty estimation approach. Using these metrics, we demonstrate the efficacy of the proposed approach from a practical point of view. We compare the proposed ESD (Equation \ref{eq:uncertaintyEstStdDev}) and two variants of entropy (as introduced in Appendix \ref{apx:ent_uncertainty}) for uncertainty evaluation. \subsubsection{Performance Metrics for Evaluating Uncertainty \\Quantification Methods\label{sec:precision}} We expect our model to be certain (have high certainty) when the RUL estimates are correct, and less certain (have low certainty) for highly erroneous RUL estimates. We consider an RUL estimate to be correct if the absolute error $|r - \hat{r}| \leq \tau_e$, and to be certain if the corresponding uncertainty estimate $\hat{u} \leq \tau_u$. Also, for evaluating the performance of the uncertainty metrics, we restrict the target RUL $r$ to a maximum of $r_u=130$ because we train our models with a maximum target RUL of $r_u$, and so $\hat{r}$ cannot be greater than $r_u$. This is done because even if the model confidently estimates $\hat{r}$ close to $r_u$, a value of $r$ much greater than $r_u$ would lead to a high error and would not allow a proper performance evaluation of the uncertainty metrics. Under the above considerations, we measure precision and recall to evaluate the performance of the uncertainty quantification approach as follows: Precision is the fraction of test instances with uncertainty below a threshold $\tau_u$ that also have error $\leq \tau_e$. Recall is defined as the fraction of test instances having uncertainty and error below the thresholds $\tau_u$ and $\tau_e$, respectively. More specifically: \begin{equation}\label{eq:uncertaintyEstPrecison} \begin{aligned} Precision(P)&= \frac{\#(\hat{u} \leq \tau_u) \cap \#(|r - \hat{r}| \leq \tau_e)}{\#(\hat{u} \leq \tau_u)},\\ Recall(R)&= \frac{\#(\hat{u} \leq \tau_u) \cap \#(|r - \hat{r}| \leq \tau_e)}{\#(test\ instances)}, \\ F1&= 2\times \frac{P \times R} {P+R} \end{aligned} \end{equation} where $\#(X)$ denotes the number of instances satisfying the condition $X$.
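To make the evaluation pipeline concrete, the following sketch is a hypothetical NumPy implementation (function names and defaults are ours, not part of the original code) of the ensemble point estimate and ESD of Equations \ref{eq:uncertaintyEst} and \ref{eq:uncertaintyEstStdDev}, and of the precision, recall and F1 of Equation \ref{eq:uncertaintyEstPrecison}.
\begin{verbatim}
import numpy as np

def ensemble_esd(r_hats):
    # Ensemble point estimate (Eq. 7) and ESD uncertainty (Eq. 8);
    # r_hats has shape (m, n): m models' estimates for n instances.
    r_hats = np.asarray(r_hats, dtype=float)
    r_hat = r_hats.mean(axis=0)
    u_esd = np.sqrt(((r_hats - r_hat) ** 2).mean(axis=0))
    return r_hat, u_esd

def uncertainty_prf(r, r_hat, u_hat, tau_u, tau_e=10):
    # Precision, recall and F1 of Eq. (9) for given thresholds.
    r, r_hat, u_hat = (np.asarray(a, dtype=float)
                       for a in (r, r_hat, u_hat))
    certain = u_hat <= tau_u
    hits = (certain & (np.abs(r - r_hat) <= tau_e)).sum()
    p = hits / certain.sum() if certain.sum() else 0.0
    rec = hits / len(r)
    f1 = 2 * p * rec / (p + rec) if (p + rec) else 0.0
    return p, rec, f1
\end{verbatim}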
Further, it is desirable to have very certain and correct estimates close to failure to avoid fatal consequences upon failure. To evaluate performance from this point of view, we analyze the relation of uncertainty with nearness to failure. It is desirable to have low error as well as low uncertainty when $r$ is low. To evaluate this aspect, we study the variation in precision for different RUL thresholds $\tau_r$, considering test instances with low ground truth RULs. The modified precision $P_{l}$ in this context is given by: \begin{equation}\label{eq:uncertaintyCoverage} \begin{aligned} P_{l} = \frac{\#(r \leq \tau_r) \cap \#(\hat{u} \leq \tau_u) \cap \#(|r - \hat{r}| \leq \tau_e)}{\#(r \leq \tau_r) \cap \#(\hat{u} \leq \tau_u)} \end{aligned} \end{equation} For given thresholds $\tau_r$ and $\tau_u$, $P_l$ quantifies the fraction of test instances with actual RUL $r \leq \tau_r$ and uncertainty $\leq \tau_u$ that also have error $\leq \tau_e$. \begin{table}[th] \caption{Performance comparison of the proposed approach with existing approaches in terms of RMSE and Timeliness Score (S).\label{tab:litComp}} \scalebox{0.73}{% \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{} & \multicolumn{2}{c}{\bfseries FD001} & \multicolumn{2}{|c|}{\bfseries FD004}\\ \hline & RMSE & S & RMSE & S\\ \hline CNN-MR (\cite{babu2016deep}) & 18.45 &$1.29\times 10^3$ & 29.16 &$7.89\times 10^3$ \\ \hline LSTM-MR (\cite{zheng2017long}) & 16.14 &$3.38\times 10^2$ &28.17 &$5.55\times 10^3$ \\ \hline MR (ours) & 15.62 & $5.07\times 10^2$&\textbf{26.88} & $\mathbf{4.92\times 10^3}$\\ \hline ORC (proposed) & 15.63 &$3.68\times 10^2$& 28.33 &$6.44\times 10^3$ \\ \hline ORCE (proposed) & \textbf{14.62} & $\mathbf{2.93\times 10^2}$ & 27.47 & $5.24\times 10^3$ \\ \hline \end{tabular}} \end{table} \subsubsection{Results and Observations} For the sake of brevity, we restrict the results and observations to the uncensored scenario, i.e. $p_c=0\%$. Similar results and observations for the models corresponding to the censored scenarios are presented in Appendix \ref{apx:detailed}. \textit{Comparing ESD vs Entropy (ENT) as uncertainty metrics}: Precision and recall (as in Equation \ref{eq:uncertaintyEstPrecison}) are used to compare the two approaches for uncertainty estimation. Precision-recall curves are obtained by varying the threshold on uncertainty $0.1\leq \tau_u \leq 1.5$ while keeping $\tau_e = 10$. We observe that for $R \geq 0.1$, $P$ is higher in the case of ESD for the FD001 dataset, as shown in Figure \ref{fig:PnRFD001}. Similar behavior is observed in the case of the FD004 dataset for $R \geq 0.2$, as shown in Figure \ref{fig:PnRFD004}. We further plot the $F1$ score (as in Equation \ref{eq:uncertaintyEstPrecison}) while varying $\tau_u$, shown in Figures \ref{fig:F1FD001} and \ref{fig:F1FD004}, which shows that ESD is a better uncertainty quantification metric compared to ENT. (We also analyze the instances for which ESD shows unexpected behavior in terms of low uncertainty despite a high error in the RUL estimate. The observations are given in Appendix \ref{apx:detailed}.) \textit{Relation between uncertainty and error}: For a reliable model, RUL estimates with high certainty must be accurate, i.e. have low RUL estimation errors. To evaluate the performance of the uncertainty metric in this context, we consider instances with uncertainty $\hat{u}\leq \tau_u$, and compute the average error in RUL estimation for these instances.
As shown in Figure \ref{fig:sigma-error-plots-fd001-fd004}, we observe that for low values of $\tau_u$, the average error thus computed is also low, indicating that the model is more accurate when it is more certain. Further, as expected, we observe an increase in the average error with increasing $\tau_u$, suggesting that the RUL estimates tend to be more erroneous when the model is uncertain. \textit{Relation between uncertainty and actual RUL}: For quantifying the relationship between RUL and uncertainty, $P_{l}$ is calculated as in Equation \ref{eq:uncertaintyCoverage}. $P_{l}$ is computed for varying $\tau_r$, ranging from $10$ to $130$, keeping $\tau_u$ and $\tau_e$ fixed at $0.2$ and $10$, respectively. From a practical point of view, higher precision ($P_{l}$) at lower values of $\tau_r$ is desirable in order to correctly and confidently handle instances that are approaching failure. A similar trend is observed in our case as well, as shown in Figure \ref{fig:ground-sigma-error-precision}. For $\tau_r =20$, $P_l=0.917$ for the FD001 dataset and $P_l=0.857$ for the FD004 dataset, suggesting that the model is certain and accurate $91.7\%$ of the time for the FD001 dataset and $85.7\%$ of the time for the FD004 dataset. \section{Conclusion and Discussion\label{sec:conc}} In this work, we have proposed a novel approach for RUL estimation using deep ordinal regression based on multilayered LSTM neural networks. We have argued that the ordinal regression formulation is more robust compared to metric regression, as the former allows for the incorporation of more labeled data from censored instances. We found that leveraging censored instances significantly improves performance when the number of failed instances is small. In the future, it would be interesting to see if a semi-supervised approach (e.g. as in \cite{yoon2017semi,gugulothu2018on}) with initial unsupervised pre-training of LSTMs using failed as well as censored instances can further improve the robustness of the models. Further, an extension of the proposed approach to address the commonly encountered non-stationarity scenario, using approaches similar to \cite{saurav2018online}, can be considered. It is to be noted that although we have experimented with LSTMs for ordinal regression, our OR approach is generic enough to be useful for any neural network, e.g. CNNs. Further, we have proposed a simple yet effective approach to quantify uncertainty in the RUL estimates by using a simple average ensemble of the deep ordinal regression models. The proposed empirical standard deviation based metric for uncertainty provides accurate predictive uncertainty estimates: we observe low errors in RUL estimation for low uncertainty values. Further, the model is found to be accurate with high certainty when the remaining useful life is very low, i.e. when the instance is approaching failure. It will be interesting to see if the ensemble based approach for uncertainty quantification can be extended to metric regression models as well, using uncertainty methods for regression as proposed in \cite{NIPS2017_7219}. \bibliographystyle{apacite} \PHMbibliography{BibTex/phm-kdd2016,BibTex/phm-kdd2017,BibTex/sensor_analytics,BibTex/online-ad,BibTex/ijcai2017,BibTex/kdd2018,BibTex/ecml-pkdd2018,BibTex/dise}
\section{Introduction} In his book ``Proximal Flows''~\cite[Section~\RNum{2}.3, p.\ 19]{glasner1976proximal} Glasner defines the notion of a {\em strongly amenable group}: A group is strongly amenable if each of its proximal actions on a compact space has a fixed point. A continuous action $G \curvearrowright X$ of a topological group on a compact Hausdorff space is proximal if for every $x, y \in X$ there exists a net $\{g_n\}$ of elements of $G$ such that $\lim_n g_n x = \lim_n g_n y$. Glasner shows that virtually nilpotent groups are strongly amenable and that non-amenable groups are not strongly amenable. He also gives examples of amenable --- in fact, solvable --- groups that are not strongly amenable. Glasner and Weiss~\cite{glasner2002minimal} construct proximal minimal actions of the group of permutations of the integers, and Glasner constructs proximal flows of Lie groups~\cite{glasner1983proximal}. To the best of our knowledge there are no other such examples known. Furthermore, there are no other known examples of minimal proximal actions that are not also {\em strongly proximal}. An action $G \curvearrowright X$ is strongly proximal if the orbit closure of every Borel probability measure on $X$ contains a point mass measure. This notion, as well as that of the related Furstenberg boundary~\cites{furstenberg1963poisson, furstenberg1973boundary, furman2003minimal}, have been the object of a much larger research effort, in particular because a group is amenable if and only if all of its strongly proximal actions on compact spaces have fixed points. Richard Thompson's group $F$ has been alternatively ``proved'' to be amenable and non-amenable (see, e.g.,~\cite{cannon2011thompson}), and the question of its amenability is currently unresolved. In this paper we pursue the less ambitious goal of showing that it is not strongly amenable, and do so by directly constructing a proximal action that has no fixed points. This action does admit an invariant measure, and thus does not provide any information about the amenability of $F$. It is a new example of a proximal action which is not strongly proximal. \vspace{0.3in} The authors would like to thank Eli Glasner and Benjamin Weiss for enlightening and encouraging conversations. \section{Proofs} Let $F$ denote Thompson's group $F$. In the representation of $F$ as a group of piecewise linear transformations of $\mathbb{R}$ (see, e.g.,~\cite[Section 2.C]{kaimanovich2016thompson}), it is generated by $a$ and $b$ which are given by \begin{align*} a(x) &= x-1\\ b(x) &= \begin{cases} x& x \leq 0\\ x/2& 0 \leq x \leq 2\\ x-1& 2 \leq x. \end{cases} \end{align*} The set of dyadic rationals $\Gamma =\mathbb{Z}[\frac{1}{2}]$ is the orbit of $0$. The Schreier graph of the action $F \curvearrowright \Gamma$ with respect to the generating set $\{a,b\}$ is shown in Figure~\ref{fig:schreier} (see~\cite[Section 5.A, Figure 6]{kaimanovich2016thompson}). The solid lines denote the $a$ action and the dotted lines denote the $b$ action; self-loops (i.e., points stabilized by a generator) are omitted. This graph consists of a tree-like structure (the blue and white nodes) with infinite chains attached to each node (the red nodes). \begin{figure}[ht] \centering \includegraphics[scale=0.6]{schreier.pdf} \caption{\label{fig:schreier}The action of $F$ on $\Gamma$.} \end{figure} Equipped with the product topology, $\{-1,1\}^\Gamma$ is a compact space on which $F$ acts continuously by shifts: \begin{align} \label{shift-action} [f x](\gamma) = x(f^{-1}\gamma).
\end{align} \begin{proposition} \label{prop:pre_proximal} Let $c_{-1}, c_{+1} \in \{-1,1\}^{\Gamma}$ be the constant functions. Then for any $x \in \{-1,1\}^{\Gamma}$ it holds that at least one of $c_{-1},c_{+1}$ is in the orbit closure $\overline{F x}$. \end{proposition} \begin{proof} It is known that the action $F \curvearrowright \Gamma$ is highly transitive (Lemma 4.2 in~\cite{cannon1994notes}), i.e. for every finite $V, W \subset \Gamma$ of the same size there exists an $f \in F$ such that $f(V)=W$. Let $x\in \{-1,1\}^{\Gamma}$. At least one of the values $-1$ and $1$, say $\alpha$, satisfies $x(\gamma)=\alpha$ for infinitely many $\gamma \in \Gamma$. Given a finite $W \subset \Gamma$, choose a $V \subset \Gamma$ of the same size and such that $x(\gamma) = \alpha$ for all $\gamma \in V$. Then there is some $f \in F$ with $f(V) = W$, and so $f x$ takes the value $\alpha$ on $W$. Since $W$ is arbitrary, we have that $c_\alpha$ is in the orbit closure of $x$. \end{proof} Given $x_1,x_2 \in \{-1,1\}^{\Gamma}$, let $d$ be their pointwise product, given by $d(\gamma) = x_1(\gamma) \cdot x_2(\gamma)$. By Proposition~\ref{prop:pre_proximal} there exists a sequence $\{f_n\}$ of elements in $F$ such that either $\lim_n f_n d = c_{+1}$ or $\lim_n f_n d = c_{-1}$. In the first case $\lim_n f_n x_1 = \lim_n f_n x_2$, while in the second case $\lim_n f_n x_1 = -\lim_n f_n x_2$, and so this action resembles a proximal action. In fact, by identifying each $x \in \{-1,1\}^{\Gamma}$ with $-x$ one attains a proximal action, and indeed we do this below. However, this action has a fixed point --- the constant functions --- and therefore does not suffice to prove our result. We spend the remainder of this paper deriving a new action from this one. The new action retains proximality but does not have fixed points. Consider the path $(\rfrac{1}{2}, \rfrac{1}{4},\rfrac{1}{8},\ldots,\rfrac{1}{2^n},\ldots)$ in the Schreier graph of $\Gamma$ (Figure~\ref{fig:schreier}); it starts in the top blue node and follows the dotted edges through the blue nodes on the rightmost branch of the tree. The pointed Gromov-Hausdorff limit of this sequence of rooted graphs\footnote{The limit of a sequence of rooted graphs $(G_n,v_n)$ is a rooted graph $(G,v)$ if each ball of radius $r$ around $v_n$ in $G_n$ is, for $n$ large enough, isomorphic to the ball of radius $r$ around $v$ in $G$ (see, e.g.,~\cite[p.\ 1460]{aldous2007processes}).} is given in Figure~\ref{fig:schreier2}, and hence is also a Schreier graph of some transitive $F$-action $F \curvearrowright F/K$. In terms of the topology on the space $\mathrm{Sub}_F \subset \{0,1\}^F$ of the subgroups of $F$, the subgroup $K$ is the limit of the subgroups $K_n$, where $K_n$ is the stabilizer of $\rfrac{1}{2^n}$. It is easy to verify that $K$ is the subgroup of $F$ consisting of the transformations that stabilize $0$ and have right derivative $1$ at $0$ (although this fact will not be important). Let $\Lambda = F/K$. \begin{figure}[ht] \centering \includegraphics[scale=0.6]{schreier2.pdf} \caption{\label{fig:schreier2}The action of $F$ on $\Lambda$.} \end{figure} We can naturally identify with $\mathbb{Z}$ the chain of black nodes at the top of $\Lambda$ (see Figure~\ref{fig:schreier2}). Let $\Lambda'$ be the subgraph of $\Lambda$ in which the dotted edges connecting the black nodes have been removed.
Given a black node $n \in \mathbb{Z}$, denote by $T_n$ the connected component of $n$ in $\Lambda'$; this includes the black node $n$, the chain that can be reached from it using solid edges, and the entire tree that hangs from it. Each graph $T_n$ is isomorphic to the Schreier graph of $\Gamma$, and so the graph $\Lambda$ is a covering graph of $\Gamma$ (in the category of Schreier graphs). Let \begin{align*} \Psi \colon \Lambda \to \Gamma \end{align*} be the covering map. That is, $\Psi$ is a graph isomorphism when restricted to each $T_n$, with the black nodes in $\Lambda$ mapped to the black node $0 \in \Gamma$. Using the map $\Psi$ we give names to the nodes in $\Lambda$. Denote the nodes in $T_0$ as $\{(0, \gamma) \,:\, \gamma \in \Gamma\}$ so that $\Psi(0,\gamma) = \gamma$. Likewise, in each $T_n$ denote by $(n,\gamma)$ the unique node in $T_n$ that $\Psi$ maps to $\gamma$. Hence we identify $\Lambda$ with \begin{align*} \mathbb{Z} \times \Gamma = \{(n, \gamma)\,:\, n \in \mathbb{Z}, \gamma \in \Gamma\} \end{align*} and the $F$-action is given by \begin{align} \label{a-action-on-Lambda} a (n,\gamma) &= (n, a \gamma)\\ \label{b-action-on-Lambda} b (n,\gamma) &= \begin{cases} (n, b \gamma)&\mbox{if }\gamma \neq 0\\ (n+1, 0)&\mbox{if }\gamma= 0 \end{cases} \end{align} Equip $\{-1,1\}^\Lambda$ with the product topology to get a compact space. As usual, the $F$-action on $\Lambda$ (given explicitly in \eqref{a-action-on-Lambda} and \eqref{b-action-on-Lambda}) defines a continuous action on $\{-1,1\}^\Lambda$. Consider $\pi:\{-1,1\}^\Gamma \to \{-1,1\}^\Lambda$, given by $\pi(x)(n, \gamma) = (-1)^n x(\gamma)$. Let $Y = \pi(\{-1,1\}^\Gamma) \subseteq \{-1,1\}^\Lambda$. \begin{claim} \label{clm:compact-and-invariant} $Y$ is compact and $F$-invariant. \end{claim} \begin{proof} $\pi$ is injective and continuous, so $Y = \pi(\{-1,1\}^\Gamma) \subseteq \{-1,1\}^\Lambda$ is compact and homeomorphic to $\{-1,1\}^\Gamma$. Moreover, $Y$ is invariant under the action of $F$, because $a^{\pm 1}\pi(x) = \pi (a^{\pm 1}x)$ and $b^{\pm 1}\pi(x) = \pi(b^{\pm 1}\bar{x})$, where $\bar{x}(\gamma) = \begin{cases} x(\gamma)&\mbox{if }\gamma \neq 0\\ -x(\gamma)&\mbox{if } \gamma = 0 \end{cases}$. \end{proof} The last $F$-space we define is $Z$, the set of pairs of mirror image configurations in $Y$: \begin{align} \label{the-space-Z} Z = \left\{\{y, -y\}\,:\,y\in Y \right\}. \end{align} Equipped with the quotient topology, $Z$ is clearly a compact Hausdorff $F$-space. Furthermore, we now observe that $Z$ admits an invariant measure. Consider the i.i.d.\ Bernoulli $1/2$ measure on $\{-1,1\}^\Gamma$, i.e. the unique Borel measure on $\{-1,1\}^\Gamma$ for which \begin{align*} X_\gamma \colon & \{-1,1\}^\Gamma \to \{0, 1\},\quad x\mapsto \frac{x(\gamma)+1}{2} \end{align*} are independent Bernoulli $1/2$ random variables for all $\gamma \in \Gamma$. Clearly, it is an invariant measure, and hence it is pushed forward to an invariant measure on $Y$, and then on $Z$. In particular, this shows that $Z$ is not strongly proximal. \begin{claim} \label{clm:no-fixed-points} The action $F \curvearrowright Z$ does not have any fixed points. \end{claim} \begin{proof} Pick $\hat{y} = \{y, -y\}\in Z$. We have $[by](0, -1) = y(0, -1) \neq -y(0, -1)$, so $by\neq -y$. Similarly, $[b y](0, 0) = y(-1, 0) = -y(0, 0) \neq y(0, 0)$, and so $by \neq y$. Hence $b\hat{y}\neq \hat{y}$. \end{proof} \begin{proposition} \label{thm:proximal} The action $F \curvearrowright Z$ is proximal.
\end{proposition} \begin{proof} Let $\hat{y}_1=\{y_1, -y_1\}$ and $\hat{y}_2=\{y_2,-y_2\}$ be two points in $Z$, and let $y_i=\pi(x_i)$. Let $x_1 \cdot x_2$ denote the pointwise product of $x_1$ and $x_2$. By Proposition~\ref{prop:pre_proximal} there is a sequence of elements $\{f_n\}_n$ in $F$ such that $\{f_n (x_1 \cdot x_2)\}_n$ tends to either $c_{-1}$ or $c_{+1}$ in $\{-1,1\}^\Gamma$. Since $Y$ is compact, we may assume that $\{f_n y_1\}_n$ and $\{f_n y_2\}_n$ have limits, by passing to a subsequence if necessary. It is straightforward to check that $f_n y_1 \cdot f_n y_2 = f_n\pi(x_1)\cdot f_n\pi(x_2)=\pi(f_n x_1) \cdot \pi(f_n x_2)$. So, for any node $(m,\gamma)$: \begin{align*} [f_n y_1 \cdot f_n y_2](m,\gamma) &= [\pi(f_n x_1) \cdot \pi(f_n x_2)](m, \gamma)\\ &= (-1)^{2m}\;[f_n x_1](\gamma)\;[f_n x_2](\gamma)\\ &=[f_n x_1 \cdot f_n x_2](\gamma) = [f_n (x_1 \cdot x_2)](\gamma) \end{align*} So $\lim_n f_n y_1 = \pm \lim_n f_n y_2$, which implies $\lim_n f_n \hat{y}_1 = \lim_n f_n \hat{y}_2$. \end{proof} \begin{theorem} Thompson's group $F$ is not strongly amenable. \end{theorem} \begin{proof} Since the action on the space $Z$ constructed above is proximal (Proposition~\ref{thm:proximal}) and has no fixed points (Claim~\ref{clm:no-fixed-points}), we conclude that $F$ has a proximal action with no fixed points, so $F$ is not strongly amenable. \end{proof}
\section{Introduction} In the standard hierarchical paradigm, the morphological mix of massive galaxies is predicted to change from rotationally-supported discs to dispersion-dominated spheroids over cosmic time \citep[e.g.][]{Butcher1984,Dressler1997,Conselice2014}. Observations generally support this picture. While discs dominate the high redshift Universe, the morphologies of massive galaxies in the nearby Universe are mainly spheroidal in nature \citep{Bernardi2003,Wuyts2011,Kaviraj2014a,Kaviraj2014b,Conselice2014,Buitrago2014,Shibuya2015}. This disc-to-spheroid transformation is thought to be primarily driven by galaxy mergers \citep{Toomre1977,Barnes1992,Bournaud2007,DiMatteo2007,Oser2010,Kaviraj2010,Kaviraj2011,Dubois2013,Dubois2016,Lofthouse2017,Welker2018,Martin2018b}. The gravitational torques generated by mergers (especially `major' mergers, which have nearly equal mass ratios) remove stars from ordered rotational orbits and put them on random trajectories, producing dispersion-dominated remnants \citep[e.g.][]{Springel2005,Hilz2013,Font2017,Martin2018b}. The role of mergers is considered to be progressively more important for more massive galaxies. At the very highest stellar masses, i.e. beyond the knee of the mass function (M$_*$ $>$ 10$^{10.8}$ M$_\odot$; see e.g. \citet{Li2009,Kaviraj2017}), the consensus view is that mergers are essential for actually achieving the enormous stellar masses of such systems \citep[e.g.][]{Faber2007,McIntosh2008,Cattaneo2011}. Since mergers typically create dispersion-dominated stellar components, it is not surprising that massive galaxies are dominated by spheroidal systems. Interestingly, however, both observations \citep[e.g.][]{Conselice2006, Ogle2016, Ogle2019} and theory \citep[e.g.][]{Martin2018b} indicate that, even at the highest stellar masses, a significant minority of systems (e.g. more than 10 per cent at M$_*$ $>$ 10$^{11.4}$ M$_\odot$) surprisingly host significant disc components. For example, in the SDSS \citep{Abazajian2009}, \citet{Ogle2016,Ogle2019} find that a subset of the most optically luminous (M$_* = 0.3-3.4\times10^{11}$M$_{\odot}$) galaxies have clear disc components. They speculate that these `super spirals' may have formed via gas-rich major mergers between two spiral galaxies, with the gas of the two merging galaxies combining to form large gas discs which then create the discy stellar components. Other recent work supports the finding that such massive discs are relatively gas-rich \citep{Li2019} and indicates that these systems can be found in a variety of environments, including clusters \citep{Bogd2018,Li20192}. These studies suggest that such galaxies can even be the brightest galaxies in their respective groups and clusters, residing at the barycentres of these structures. Indeed, if such galaxies live in such high-density environments and host AGN \citep{Ogle2016}, they could provide a natural explanation for the minority of powerful radio AGN that appear (somewhat surprisingly) to be hosted by discy systems \citep[e.g.][]{Tadhunter2016}. In the $\Lambda$CDM model, galaxy merger histories are a strong function of stellar mass, largely regardless of the morphology of the galaxy in question. The merger histories of extremely massive discs and spheroids are, therefore, very similar, both in terms of the total number of mergers they experience and the distribution of their mass ratios \citep[e.g.][]{Martin2018b}.
Since mergers typically destroy discs and create spheroids, it is surprising that any discs exist at all in this extreme mass regime. It is likely, therefore, that some peculiarity in their merger histories causes these massive discs either to survive or to have their discy (i.e. rotationally-supported) components regenerated. This is plausible because gas-rich mergers can regenerate discs: the gas produces new, rotationally-supported stellar components during the course of the merger event \citep[e.g.][]{Springel2005,Robertson2006,Governato2009,Hopkins2009,Font2017,Martin2018b,Peschken2019}. Here, we explain the origin of extremely massive disc galaxies in the nearby Universe, by exploring how details of their merger histories create these surprising systems, using the Horizon-AGN cosmological hydrodynamical simulation. It is worth noting that a cosmological simulation, such as the one used here, is essential for this exercise. While idealised and/or zoom-in simulations of galaxy mergers have often been used to study the merger process \citep[e.g.][]{Barnes1988,Hernquist1992,Bois2011,Athanassoula2016,Sparre2016,Sparre2017}, the parameter space sampled by these experiments is typically small and not informed by a cosmological model (so that the effects of environment and gas accretion from the cosmic web cannot be fully tested). The plan for this paper is as follows. In Section \ref{sec:horizon}, we describe the Horizon-AGN simulation which underpins this study. In Section \ref{sec:disc formation}, we describe the different channels that lead to the formation of extremely massive discs and explore whether these massive discs can explain the discy hosts of powerful AGN (which are otherwise typically hosted by spheroidal galaxies). We summarise our findings in Section \ref{sec:summary}. \section{The Horizon-AGN Simulation} \label{sec:horizon} We use the Horizon-AGN cosmological hydrodynamical simulation \citep{Dubois2014}, which employs \textsc{ramses} \citep{2002A&A...385..337T}, an adaptive mesh refinement (AMR) hydrodynamics code. The simulation offers a (100 $h^{-1}$ comoving Mpc)$^3$ volume and uses WMAP7 $\Lambda$CDM initial conditions \citep{2011ApJS..192...18K}. Horizon-AGN contains $1024^3$ dark matter particles on an initial 1024$^3$ cell gas grid, which is refined using a quasi-Lagrangian criterion, whenever the mass in a cell reaches 8 times the initial mass resolution. This refinement continues down to 1 kpc in proper units (which therefore sets the minimum cell size and the spatial resolution of the simulation). The simulation has a dark-matter mass resolution of $8\times 10^7$ M$_{\odot}$, gas mass resolution of $\sim10^7$ M$_{\odot}$ and stellar mass resolution of $4\times 10^6$ M$_{\odot}$. Horizon-AGN employs prescriptions for both stellar and AGN feedback. Stellar feedback includes momentum, mechanical energy and metals from Type Ia/Type II supernovae (SNe), with the Type Ia SNe implemented following \citet{1986A&A...154..279M}, assuming a binary fraction of 5\% \citep{2001ApJ...558..351M}. Feedback from Type II SNe and stellar winds is implemented using \textsc{starburst99} \citep{1999ApJS..123....3L,2010ApJS..189..309L}, via the Padova model \citep{2000A&AS..141..371G} with thermally pulsating asymptotic giant branch stars \citep{1993ApJ...413..641V}.
Black holes (BHs) are implemented as `sink' particles and form in dense star-forming regions, where the gas density exceeds a critical threshold $\rho_{0}=1.67\times10^{-25}$ g cm$^{-3}$ (equivalent to 0.1 H cm$^{-3}$) and the stellar velocity dispersion is larger than 100 km s$^{-1}$. Initial (seed) BH masses are $10^5$M$_\odot$ and BH growth occurs either via gas accretion or via mergers with other BHs. This growth is tracked self-consistently, based on a modified Bondi accretion rate \citep{Booth2009}, which is capped at the Eddington rate. BHs impart feedback on ambient gas via two modes, depending on the accretion rate. For high Eddington ratios ($> 0.01$), 1.5 per cent of the energy is injected into the gas as thermal energy (a `quasar' mode). For Eddington ratios that are less than 0.01, bipolar jets are used with velocities of $10^4$ km s$^{-1}$, which constitutes a `radio' mode with an efficiency of 10 per cent \citep{Dubois2012,Dubois2014}. These parameters are chosen to produce agreement with the local M$_{BH}$-M$_{*}$ and M$_{BH}$-$\sigma_{*}$ relations \citep[e.g.][]{Haring2004}, as well as the local cosmic BH mass density \citep{Dubois2012,Volonteri2016}. \citet{Dubois2016} have shown that this two-channel model for AGN feedback is important in influencing the morphological evolution of massive galaxies. In particular, AGN feedback is generally instrumental in stopping the persistent accretion of large amounts of gas directly from the surrounding filaments, which would otherwise result in almost all massive galaxies exhibiting large-scale discs. On the other hand, the quasar mode, which is typically triggered by events like mergers, does not completely quench the gas brought in by the merger, allowing new stars formed from this gas to maintain or enhance the rotational component of the galaxy. This is partly responsible for the formation and maintenance of massive discy systems, as described below. Horizon-AGN reproduces an array of observational quantities that trace the evolution of stars and BHs in massive galaxies, e.g. the morphological mix of massive galaxies (M$_*$ $>$ 10$^{10.5}$ M$_\odot$) in the nearby Universe \citep{Dubois2016}, stellar mass/luminosity functions, rest-frame UV-to-near-infrared colours, the cosmic star formation history, the position of galaxies on the star formation main sequence \citep{Kaviraj2017} and the demographics of BHs over cosmic time \citep{Volonteri2016}. In each simulation snapshot, galaxy catalogues are produced by applying the \textsc{adaptahop} structure finder \citep{Aubert2004,Tweed2009} to the star particles. Structures are identified when the local density, calculated using the nearest 20 neighbours, exceeds the average matter density by a factor of 178. A minimum of 50 particles is required for the identification of a structure. This imposes a minimum limit for galaxy stellar masses of $\sim2\times10^{8}$M$_{\odot}$. Merger trees are produced for each galaxy from $z=0.06$ to $z=3$, with a typical timestep spacing of $\sim130$ Myr. It is worth noting that, in our stellar mass range of interest (M$_*$ $>$ 10$^{11.4}$ M$_\odot$), the merger histories of galaxies are highly complete. For example, given the mass resolution of the simulation stated above, even mergers with mass ratios down to 1:100 will be visible in the low-redshift Universe.
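This completeness claim is straightforward to verify from the numbers quoted above; the following back-of-the-envelope sketch (in Python, using only the masses stated in this section) confirms that the satellite in a 1:100 merger onto a galaxy at our mass cut is resolved by far more than the 50-particle detection limit:
\begin{verbatim}
# Back-of-the-envelope completeness check for the merger trees,
# using the resolution figures quoted in this section.
M_primary  = 10**11.4        # stellar mass cut of this study [Msun]
M_particle = 4e6             # stellar mass resolution [Msun]
M_min      = 50*M_particle   # 50-particle detection limit, ~2e8 Msun
M_sat      = M_primary/100   # satellite mass in a 1:100 merger

print(M_sat)                 # ~2.5e9 Msun
print(M_sat/M_particle)      # ~630 star particles, well above 50
print(M_sat > M_min)         # True
\end{verbatim}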
\subsection{Galaxy morphology} Following \citet{Martin2018b}, we define galaxy morphology using stellar kinematics, via $\sfrac{V}{\sigma}$, which is the ratio between the mean rotational velocity ($V$) and the mean velocity dispersion ($\sigma$) of the star particles in a galaxy. Objects with high values of $\sfrac{V}{\sigma}$ are rotationally-supported systems (i.e. discs), while those with lower $\sfrac{V}{\sigma}$ values are pressure-supported (spheroidal) systems. $\sfrac{V}{\sigma}$ is obtained after rotating the coordinate system, so that the $z$-axis is oriented along the stellar angular-momentum vector (calculated using all the star particles). $V$ is defined as $\bar{V_{\theta}}$, i.e. the mean tangential velocity component in cylindrical co-ordinates, while the velocity dispersion ($\sigma$) is calculated by taking the standard deviations of the radial ($\sigma_{r}$), tangential ($\sigma_{\theta}$) and vertical ($\sigma_{z}$) star particle velocities and summing them in quadrature. $\sfrac{V}{\sigma}$ is defined as: \begin{equation} \frac{V}{\sigma} = \frac{\sqrt{3} \bar{V_{\theta}}}{\sqrt{\sigma^2_r + \sigma^2_z + \sigma^2_\theta}} \end{equation} The predicted spheroid and disc fractions are compared to visual morphological classifications of massive galaxies in the low-redshift Universe \citep{Conselice2006} to calculate a threshold value of $\sfrac{V}{\sigma}$ (0.55) that separates disc galaxies from their spheroidal counterparts. In other words, galaxies with $\sfrac{V}{\sigma}>0.55$ are considered to be discs, while those with $\sfrac{V}{\sigma}<0.55$ are spheroids. In the mass range of interest in our study, the predicted morphological mix of the Universe compares well to observational values (see e.g. Figure 1 in \citet{Martin2018b}). Note that the discs we study in this paper have $\sfrac{V}{\sigma}$ values that are significantly higher than 0.55, i.e. these galaxies are firmly in the disc regime. A schematic implementation of this estimator is sketched below.
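The following minimal sketch (in Python, assuming equal-mass star particles so that particle masses do not enter as weights) illustrates the measurement defined above; the function and its inputs are our own illustrative constructs, not part of the Horizon-AGN analysis pipeline:
\begin{verbatim}
import numpy as np

def v_over_sigma(pos, vel):
    """V/sigma from star particle positions/velocities (N x 3 arrays)."""
    # Rotate so that the z-axis lies along the stellar angular momentum
    L = np.sum(np.cross(pos, vel), axis=0)
    z = L/np.linalg.norm(L)
    x = np.cross([0.0, 0.0, 1.0], z)
    if np.linalg.norm(x) < 1e-12:          # L already along z
        x = np.array([1.0, 0.0, 0.0])
    x /= np.linalg.norm(x)
    R = np.vstack([x, np.cross(z, x), z])
    p, v = pos @ R.T, vel @ R.T

    # Cylindrical velocity components
    rho     = np.hypot(p[:, 0], p[:, 1])
    v_theta = (p[:, 0]*v[:, 1] - p[:, 1]*v[:, 0])/rho
    v_r     = (p[:, 0]*v[:, 0] + p[:, 1]*v[:, 1])/rho

    sigma = np.sqrt(np.std(v_r)**2 + np.std(v_theta)**2
                    + np.std(v[:, 2])**2)
    return np.sqrt(3)*np.mean(v_theta)/sigma

# A galaxy would then be classed as a disc if v_over_sigma(...) > 0.55.
\end{verbatim}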
\begin{center} \begin{table*} \centering \begin{tabular}{|| c | c | c | c | c | c | c | c | c ||} \hline \hline 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\\ \hline Morph. & \% & $\sfrac{V}{\sigma}$ & log$_{10}$ M$_*$ & log$_{10}$ M$_{\textnormal{BH}}$ & log$_{10}$ acc. rate & log$_{10}$ M$_{\textrm{halo}}$ & log$_{10}$ M$_{\textrm{gas}}$ & log$_{10}$ M$_{\textrm{sf-gas}}$ \\ & & & [M$_\odot$] & [M$_\odot$] & [M$_\odot$ yr$^{\textnormal{-1}}$] & [M$_\odot$] & [M$_\odot$] & [M$_\odot$] \\ \hline \hline Rejuv. discs & 7.7 & 0.68$^{0.02}$ & 11.48$^{0.02}$ & 8.90$^{0.03}$ & -2.03$^{0.04}$ & 12.79$^{0.05}$ & 10.45$^{0.06}$ & 10.27$^{0.08}$ \\ \\ Const. discs & 3.3 & 0.90$^{0.06}$ & 11.45$^{0.01}$ & 8.74$^{0.04}$ & -1.91$^{0.08}$ & 12.76$^{0.09}$ & 10.49$^{0.04}$ & 10.39$^{0.04}$\\ \\ Spheroids & 89 & 0.15$^{0.01}$ & 11.57$^{0.01}$ & 8.98$^{0.02}$ & -2.08$^{0.04}$ & 13.19$^{0.06}$ & 10.29$^{0.03}$ & 9.86$^{0.03}$\\ \end{tabular} \caption{Mean properties (errors on the means are shown as superscripts) of massive galaxies. Columns: (1) morphological class, (2) percentage of a given morphological class in the massive (M$_*$ $>$ 10$^{11.4}$ M$_\odot$) galaxy population, (3) $\sfrac{V}{\sigma}$, (4) log$_{10}$ of the stellar mass, (5) log$_{10}$ of the black-hole mass, (6) log$_{10}$ of the mean black-hole accretion rate across the galaxy's lifetime, (7) log$_{10}$ of the dark-matter halo mass, (8) log$_{10}$ of the total gas mass, (9) log$_{10}$ of the star-forming gas mass (i.e. gas that is dense enough to form stars, $\rho_{\rm gas}>0.1$ H cm$^{-3}$).} \label{population properties} \end{table*} \end{center} \begin{center} \begin{table*} \centering \begin{tabular}{|| c | c | c | c | c ||} \hline \hline 1 & 2 & 3 & 4 & 5\\ \hline Morph. & Redshift & Stellar mass ratio & Massive f$_{\textnormal{gas}}$ & Sat. f$_{\textnormal{gas}}$ \\ \hline \hline Rejuv. discs & 0.32$^{0.07}$ & 4.29$^{0.36}$ & 0.17$^{0.02}$ & 0.33$^{0.02}$\\ \\ Const. discs & 0.36$^{0.11}$ & 4.06$^{0.59}$ & 0.19$^{0.02}$ & 0.32$^{0.03}$\\ \\ Spheroids & 0.49$^{0.02}$ & 4.44$^{0.11}$ & 0.09$^{0.01}$ & 0.23$^{0.01}$\\ \end{tabular} \caption{Mean properties (errors on the mean are shown as superscripts) of the most recent significant merger, defined as the last merger with a stellar mass ratio greater than 1:10. Columns: (1) morphological class, (2) last merger redshift, (3) stellar mass ratio, (4) gas fraction of the more massive galaxy, (5) gas fraction of the satellite.} \label{merger properties} \end{table*} \end{center} \subsection{Selection of extremely massive disc galaxies} We define massive disc galaxies as those with M$_*$ $>$ 10$^{11.4}$ M$_\odot$ and $\sfrac{V}{\sigma}>0.55$ at $z=0$. These systems are, therefore, well beyond the knee of the galaxy stellar mass function (M$_*$ $\sim$ 10$^{10.8}$ M$_\odot$) and are in the disc regime. The total number of galaxies with M$_*$ $>$ 10$^{11.4}$ M$_\odot$ is 569 and the fraction of discs in this mass range is around 11 per cent. This fraction is in good agreement with observational work. For example, \citet{Ogle2016,Ogle2019} find that their `super-spirals' constitute $6.5\%$ ($9.2\%$ when accounting for inclination incompleteness) of galaxies with M$_*$ $>$ 10$^{11.3}$ M$_\odot$, which is consistent with our simulated values (see Table \ref{population properties}). \subsection{Local environment} To explore local environment we utilise information about a galaxy's host dark matter halo and its group hierarchy, i.e. whether it is a `central' or a `satellite'. Satellites are systems whose host dark matter haloes have merged into a larger halo where they currently reside. For some of our analysis we also explore the vicinity of galaxies in the cosmic web produced by Horizon-AGN, via the persistence-based filament tracing algorithm \texttt{DisPerSE} \citep{Sousbie2011}, which uses the density field estimated via a Delaunay tessellation \citep{SchappetVandeWeygaert2000} of the dark matter particles. We choose a persistence threshold of 7$\sigma$. \texttt{DisPerSE} identifies ridges in the density field as special lines that connect topologically robust pairs of nodes. These ridges compose the filament network of the cosmic web, and the set of all segments defining these ridges is referred to as the `skeleton' \citep{Pogosyan2009}. We refer readers to \citet{Sousbie2011} and \citet{Sousbie2011b} for more details of the \texttt{DisPerSE} algorithm and to \citet{Dubois2014} and \citet{Laigle2018} for a description of its implementation in Horizon-AGN. We note that, out of the 64 massive discs in this study, 63 systems (i.e. more than 98 per cent) are central galaxies. This appears consistent with observational studies which indicate that many massive discs tend to be the brightest galaxies in their respective groups/clusters \citep[e.g.][]{Ledlow2001,Li20192,Bogd2018}. Figure \ref{fig:cosmic web} shows the positions of our three morphological classes (i.e. rejuvenated discs, constant discs and spheroids) in the cosmic web.
Properties used to characterise local environment are tabulated in Table \ref{population properties}. \begin{figure} \includegraphics[width=\columnwidth]{Figure/locationxy.pdf} \centering \caption{Positions of rejuvenated discs (red), constant discs (blue) and spheroids (green) in the cosmic web from Horizon-AGN. Grey dots indicate the general galaxy population, with darker regions indicating regions of higher density.} \label{fig:cosmic web} \end{figure} \section{How do extremely massive disc galaxies form?} \label{sec:disc formation} \begin{figure*} \centering \includegraphics[width=0.32\textwidth]{Figure/Reff_gals_2124_small.pdf} \includegraphics[width=0.32\textwidth]{Figure/Reff_gals_2299_small.pdf} \includegraphics[width=0.32\textwidth]{Figure/Reff_gals_901_small.pdf} \caption[]{The evolution of the properties of the progenitor system of massive galaxies. Each column shows the evolution of an individual galaxy. The left, centre and right-hand columns show the evolution of a rejuvenated disc, a constant disc and a massive spheroid respectively (see text in Section \ref{sec:channels} for details). The top row shows the evolution in $\sfrac{V}{\sigma}$, while the bottom row shows the change in the stellar mass (solid), ex-situ (i.e. accreted) stellar mass (dashed) and gas mass (dotted) of the system. The ex-situ stellar mass shows a near-step change when mergers take place, with the magnitude of the change indicating the mass brought in by the accreted satellite. The grey region highlights the most recent merger which produces the uptick in $\sfrac{V}{\sigma}$ that moves the rejuvenated disc into the disc regime. The dotted line at $\sfrac{V}{\sigma}$ = 0.55 demarcates the spheroid and disc regimes.} \label{fig:vsig} \end{figure*} \subsection{Two channels of massive disc formation} \label{sec:channels} We begin by illustrating, graphically, the two channels that create galaxies that are massive discs at the present day. In Figure \ref{fig:vsig} we describe the evolution in morphology and the stellar mass assembly in typical galaxies that form via these channels. The left and middle columns show the change in $\sfrac{V}{\sigma}$ (top) and the evolution of the stellar mass, ex-situ stellar mass and gas mass (bottom) for a typical system that represents each of the two channels of massive-disc formation. The right-hand column shows the same for a massive spheroid. The ex-situ stellar mass is defined as the stellar mass that is directly accreted from external objects via mergers, and not formed within the galaxy's main progenitor. As this figure indicates, massive discs form in one of two ways. In the first channel (left-hand column) the progenitor is initially a spheroid, until the most recent merger event causes a significant uptick in $\sfrac{V}{\sigma}$ which moves the system into the disc regime. This uptick coincides with this merger bringing in an appreciable amount of gas (since there is a coincident uptick in the gas mass) which builds a new disc component. We refer to these galaxies as `rejuvenated discs'. As we discuss below, the most recent mergers that drive this rejuvenation have significant mass ratios ($>$ 1:10). It is worth noting that, in contrast, the most recent mergers in massive galaxies that exhibit spheroidal morphology today (right-hand column) are gas-poor. These mergers do not necessarily produce an uptick in $\sfrac{V}{\sigma}$ and, when they do, these upticks are not sufficient to move the system into the disc regime.
Indeed, $\sfrac{V}{\sigma}$ typically decreases, as is expected in mergers which are gas-poor, since the only effect of the merger is to randomise the stellar orbits and reinforce the spheroidal component of the system \citep[e.g.][]{Lotz2010,Taranu2013,Naab2014,Martin2018b}. In the second channel (middle column) the galaxy retains a disc component and remains in the disc regime throughout its lifetime. We refer to these systems as `constant discs'. Visual inspection of the $\sfrac{V}{\sigma}$ evolution of all massive discs indicates that the rejuvenated disc channel accounts for $\sim70\%$ of these systems (and $\sim8\%$ of all massive galaxies), with the remaining $\sim30\%$ having maintained a disc component over cosmic time (these systems comprise $\sim3\%$ of all massive galaxies). Table \ref{population properties} presents mean properties of the three morphological classes: rejuvenated discs, constant discs and spheroids. In the following sections we explore these two channels of massive disc formation in more detail. \subsection{The dominant channel of massive disc formation: disc rejuvenation via recent mergers} We begin by exploring the principal channel for massive disc formation: the rejuvenation of a disc by a recent merger. We note first that the rejuvenation is always driven by mergers with significant stellar mass ratios that are greater than 1:10. Given that the change in morphology from spheroid to disc appears to be driven by the properties of the most recent significant merger, we study how the properties of these mergers differ between rejuvenated discs and their spheroidal counterparts. As Table \ref{merger properties} indicates, the mass ratios of the most recent significant merger, defined as the last merger a galaxy has undergone with a mass ratio greater than 1:10, are similar in both the rejuvenated discs and their spheroidal counterparts. This is not unexpected, since the merger histories of galaxies with similar stellar masses tend to be comparable, regardless of morphology \citep{Martin2018b}. The rationale for the 1:10 mass ratio threshold is that mergers below this threshold affect the system very weakly and do not produce morphological change \citep{Martin2018b}. The differences between the rejuvenated discs and spheroids are, therefore, not driven by the mass ratio of the most recent significant merger. However, differences arise when we consider both the gas content of this merger event and its redshift. The progenitor galaxies in mergers that create rejuvenated discs show higher gas fractions than those in their spheroidal counterparts. The median gas fractions are elevated by a factor of $\sim$2 in both the more massive progenitor and the accreted satellite. Since the mass ratios of the most recent significant mergers are similar, the absolute gas mass brought into the merger therefore tends to be higher in events that create these systems. As has been shown in previous work \citep[e.g.][]{Lotz2010,Naab2014, Lagos2018,Martin2018b}, gas-rich mergers will `spin up' merger remnants, as the gas creates a new rotationally-supported stellar component. As shown graphically in Figure \ref{fig:vsig} (top row, left-hand column), these gas-rich recent mergers produce an uptick in $\sfrac{V}{\sigma}$ that moves the system from the spheroid to the disc regime.\footnote{For completeness, we have checked what fraction of massive spheroids that have undergone a recent significant gas-rich (f$_{gas}>$0.3) merger remain spheroids after such an event.
We find that only 2$\%$ of massive spheroids fit this description. In other words, 98$\%$ of massive spheroids that undergo such a gas-rich merger become discs.} If rejuvenated discs, which are the dominant channel of massive-disc formation, are principally created via recent gas-rich mergers, then it stands to reason that the fraction of massive galaxies that are discs should correlate positively with the availability of gas in the Universe. Figure \ref{fig:disc fraction evolution} shows the evolution of both the gas fraction of the Universe (red) and the fraction of massive galaxies that are discs (blue) with cosmic time. At any given redshift, we define massive galaxies as those whose descendants at $z=0$ have M$_* > 10^{11.4}M_{\odot}$, with massive discs defined as massive galaxies with $\sfrac{V}{\sigma} > 0.55$ at that redshift. The inset summarises this evolution by plotting the fraction of massive galaxies that are discs against the gas fraction of the Universe. This figure confirms that, as one would expect for such a rejuvenation process, a higher gas fraction in the Universe leads to a higher fraction of massive galaxies that are discs. In other words, the frequency of massive discs, and therefore the morphological mix of galaxies at the highest stellar masses, is strongly driven by the gas fraction of the Universe. Analysis of the local environment (Table \ref{population properties}) indicates that rejuvenated discs typically reside in less-massive dark-matter haloes, i.e. they inhabit less dense environments. Galaxies in these regions will be less affected by processes like ram pressure stripping and tidal heating which can remove their constituent gas \citep[e.g.][]{Vollmer2001,Johansson2009,Martin2019}. This enables these systems to merge with more gas-rich satellites which can then drive the disc rejuvenation process. Finally, it is worth noting that the median redshift of the last significant merger event is lower in the rejuvenated discs ($z\sim0.3$, which corresponds to a look-back time of $\sim$3.5 Gyr) compared to that in their spheroidal counterparts ($z\sim0.49$, which corresponds to a look-back time of $\sim$5 Gyr). This likely assists in the survival of the discy components to the present day, because less time has elapsed since the recent merger event, reducing the possibility that further significant interactions take place which could enhance the spheroidal components of these systems. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figure/num_vs_gasfrac.pdf} \caption{Evolution of the gas fraction of the Universe (red) and the fraction of massive galaxies that are discs (blue), with cosmic time. The inset summarises this evolution by plotting these quantities against each other.} \label{fig:disc fraction evolution} \end{figure} \subsection{The secondary channel of massive disc formation: disc preservation over cosmic time} While the majority of massive discs have formed through disc rejuvenation via recent significant gas-rich mergers, a minority of this population have remained in the disc regime over cosmic time. In this section, we explore how these rare systems preserve their disc components over their lifetimes. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figure/Cumulative_merger_11_4_compare.pdf} \caption{Cumulative merger history for our three morphological classes: rejuvenated discs (dashed line), constant discs (solid line) and spheroids (dotted line).
This figure presents the average number of mergers experienced by a galaxy over its lifetime, with mass ratios less than or equal to a given value, shown on the x-axis. For example, rejuvenated discs undergo, on average, $\sim$4.2 mergers with mass ratios greater than 1:10, while constant discs undergo $\sim$1.9.} \label{fig:cumulativemerger} \end{figure} The most recent significant mergers in these constant discs have similar properties, e.g. merger mass ratios, the redshift of the last significant merger and the gas fractions of the merging progenitors, to those in their rejuvenated counterparts (Table \ref{merger properties}). The two sub-populations also occupy similar local environments in terms of their dark-matter halo masses. However, strong differences arise when we consider the cumulative merger histories of the different morphological classes across cosmic time. Figure \ref{fig:cumulativemerger} presents the average number of mergers experienced by a galaxy, over its lifetime, with mass ratios less than or equal to the value shown on the x-axis. For example, rejuvenated discs undergo, on average, 4.2 mergers with mass ratios greater than 1:10, while constant discs undergo 1.9. The rate of mergers with significant mass ratios is therefore a factor of 2.2 higher in the rejuvenated discs compared to their constant counterparts. Not unexpectedly, the rejuvenated discs have similar merger histories to their spheroidal counterparts (since they were spheroidal before their most recent significant merger). This anomalously quiet merger history enables the constant discs to maintain their discy components over their lifetimes. Furthermore, since mergers typically accelerate the consumption of gas \citep[e.g.][]{Martin2017,Jackson2019}, a quieter merger history also enables the system to better retain its gas, as indicated by both the higher total and star-forming gas masses in Table \ref{population properties}. A potential explanation for this anomalously quiet merger history is the halo bias effect \citep[e.g.][]{Borzyszkowski2017}, whereby a more massive nearby halo (typically a node in the cosmic web) effectively shields the galaxy from mergers, allowing it to continue forming stars without being disturbed by interactions. We check for the presence of a halo bias effect using the position of nodes (and other haloes) in the skeleton. However, we find no evidence that this effect may be driving the properties of the constant discs. These systems are no more likely to be close to a more massive halo/node than their rejuvenated counterparts and, in most cases, their haloes actually dominate the local environment. This is perhaps not unexpected because all galaxies in our sample (including the constant discs) are extremely massive. Thus, the likelihood of finding a more massive system nearby is very low. The quieter merger history of the constant discs therefore seems to be a stochastic effect, which aligns well with the extreme rarity of these systems. \subsection{A note about massive discy hosts of AGN} \label{sec:AGN} We complete our study by considering whether the massive discs studied here may provide a natural explanation for the minority of powerful AGN that appear to (surprisingly) inhabit massive disc galaxies \citep[e.g.][]{Tadhunter1992,Koff2000,Canalizo2001,Guyon2006,Madrid2006,Georgakakis2009,Morganti2011,Singh2015}.
Recall from the analysis above that the majority of massive discs are systems that were initially spheroidal, but in which discs have been rejuvenated via a recent gas-rich merger. Table \ref{population properties} indicates that the black-hole (BH) masses and accretion rates in the rejuvenated discs are predicted to be similar to those in their spheroidal counterparts, which is consistent with these systems originally being spheroids before the most recent significant merger. The BH masses in the constant discs are also comparable to the other morphological classes, although their BHs are slightly less massive (likely due to the quieter merger history) and their accretion rates are typically higher (due to the higher gas fractions in these systems). These theoretical predictions appear similar to what is seen in observations. For example, \citet{Tadhunter2016} show that the minority of radio AGN hosts at high stellar masses (M$_{*}>10^{11}$M$_{\odot}$) with clearly discy morphologies shows the same patterns. They exhibit similarly high BH masses as their spheroidal counterparts (M$_{BH}>10^{8}$M$_{\odot}$), with broadly similar accretion rates. It is worth noting, however, that in observed AGN hosted by massive discs the accretion rates are slightly lower than those in their spheroidal counterparts, whereas the opposite appears to be true in our theoretical analysis. This is largely explained by the different mass ranges considered, because our study is focused on galaxies that are more massive than those in observational studies like \citet{Tadhunter2016}. Indeed, if we reduce our stellar mass range to M$_*$ $>$ 10$^{11}$ M$_\odot$, we find that the massive discs then have lower accretion rates than their spheroidal counterparts, in line with the findings of \citet{Tadhunter2016}. Given the parallels between the massive discs in our theoretical study and their observed counterparts, the formation scenarios presented here appear to provide a natural explanation for the minority of powerful AGN that are observed to (surprisingly) inhabit disc galaxies at the highest stellar masses. \begin{comment} \begin{figure} \centering \includegraphics[width=\columnwidth]{Figure/bh_acc.pdf} \includegraphics[width=\columnwidth]{Figure/bh_acc_discs.pdf} \caption{Top: Histogram of the median log$_{10}$ accretion rate (over the lifetime of the galaxy) of the massive disc galaxies (blue) and control spheroids (red). Median values are given by the dashed line and the error on the mean by the dotted line. Surprisingly we find that the accretion rates of the discs are higher than those of the control spheroids, therefore these galaxies are likely to be akin to AGN hosted by disc galaxies that have been found in observations. Bottom: Histogram of the median log$_{10}$ accretion rate (over the lifetime of the galaxy) of the constant disc galaxies (blue) and rejuvenated discs (red). Median values are given by the dashed line and the error on the mean by the dotted line. We find constant discs have higher accretion rates than the rejuvenated discs, this is likely due to the constant discs high gas fraction allowing them to more easily feed their black hole.} \label{fig:bhacc} \end{figure} \end{comment} \section{Summary} \label{sec:summary} Both theory and observations indicate that the morphological mix of massive galaxies changes from being disc-dominated in the early Universe to being dominated by spheroidal systems at low redshift.
In the standard $\Lambda$CDM paradigm, this morphological transformation is thought to be driven by mergers. Galaxy merger histories correlate strongly with stellar mass, largely regardless of the morphology of the galaxy in question. The frequency of mergers typically increases with stellar mass, so that galaxies at the highest stellar masses tend to have the richest merger histories. However, while most massive galaxies have spheroidal morphology, a minority of systems at the highest stellar masses are, in fact, discs. Since mergers typically destroy discs and create spheroids, and the most massive galaxies typically have the richest merger histories, it is surprising that disc galaxies exist at all at the highest stellar masses (e.g. those well beyond the knee of the mass function). We have studied the formation mechanisms of massive (M$_*$ $>$ 10$^{11.4}$ M$_\odot$) disc galaxies in the Horizon-AGN simulation. Massive discs make up a significant minority ($\sim11\%$) of systems at such high stellar masses. We have shown that there are two channels of massive disc formation. The primary channel, which accounts for $\sim$70 per cent of these systems, is disc rejuvenation. In this channel, a massive spheroidal system experiences a recent gas-rich merger which rebuilds a disc and moves the system from the spheroid to the disc regime. The gas-rich mergers are facilitated by the fact that these systems typically inhabit less massive haloes, i.e. less dense environments, than spheroidal counterparts with similar stellar masses. Galaxies in these regions are less likely to be affected by processes which deplete gas, like ram pressure and tidal stripping, making it more likely that massive galaxies can have gas-rich mergers. In the secondary channel, a massive disc remains in the disc regime over its entire lifetime. The maintenance of the disc is the result of an anomalously quiet merger history, whereby these systems undergo a factor of $\sim$2 fewer mergers with mass ratios greater than 1:10 than other galaxies with similar stellar masses. Since mergers accelerate gas consumption, a quieter merger history also enables the galaxy to retain its gas reservoir more easily, further enabling it to maintain its disc component over its lifetime. The dominance of the rejuvenation channel means that the fraction of massive galaxies that are discs is progressively larger at higher redshift, since the Universe is more gas-rich. The morphological mix of galaxies at the very highest stellar masses (at any epoch) is therefore a strong function of the gas fraction of the Universe. Finally, we have shown that the BH masses and accretion rates of massive discs are similar to those in their spheroidal counterparts. The formation mechanisms described here therefore provide a natural explanation for the minority of powerful AGN that are (surprisingly) found in disc galaxies. \section*{Acknowledgements} We are grateful to Stas Shabala and Lorenzo Posti for many interesting discussions. RAJ acknowledges support from the STFC [ST/R504786/1]. GM acknowledges support from the STFC [ST/N504105/1]. SK acknowledges a Senior Research Fellowship from Worcester College Oxford. CL is supported by a Beecroft Fellowship. JD acknowledges funding support from Adrian Beecroft, the Oxford Martin School and the STFC. This research has used the DiRAC facility, jointly funded by the STFC and the Large Facilities Capital Fund of BIS, and has been partially supported by grant Spin(e) ANR-13-BS05-0005 of the French ANR.
This work was granted access to the HPC resources of CINES under the allocations 2013047012, 2014047012 and 2015047012 made by GENCI. This work is part of the Horizon-UK project. \bibliographystyle{mnras}
\section{Superconducting Phase Qubit Device and the Generation of Microwave Drive Signal} \label{sec1} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{fig1_supp.eps} \caption{(a) Schematic diagram of our experimental setup including the room-temperature control and the low-temperature phase qubit. (b) The optical micrograph of the phase qubit. } \label{fig:s1} \end{figure} Figure~\ref{fig:s1}(a) displays a schematic diagram of our experimental setup, including a phase qubit and external control lines~\cite{MartinisReview}. The control signals are synthesized at room temperature, and then sent down to the low-temperature stage (inside a dilution refrigerator whose base temperature is $\sim 10$ mK) to manipulate and measure the qubit state. For the phase qubit device placed in the dilution refrigerator, the main components are the qubit, a superconducting quantum interference device (SQUID) and their control lines. The optical micrograph of the phase qubit is shown in Fig.~\ref{fig:s1}(b). The phase qubit comprises a Josephson junction (with a critical current $I_0$ = 2 $\mu$A), a parallel loop inductance ($L_\mathrm{q}$ = 720 pH), and a capacitance ($C_\mathrm{q}$ = 1 pF). The qubit control signal combines the flux current bias and the microwave drive through a bias-tee. The former signal from the current source sets the qubit resonance frequency, while the latter signal drives the qubit state. A detailed description of the microwave drive signal is provided in the next paragraph. At the end of a quantum operation, the qubit state is projected to either the ground ($|0\rangle$) or excited ($|1\rangle$) state for the readout measurement~\cite{MartinisReview}. As the ground and excited states induce different fluxes in the qubit loop, the SQUID can detect the occupation probabilities of the two states through the SQUID control line. In particular, the quantum state tomography (QST) technique is applied in the readout to extract the density matrix of the final qubit state. To describe the generation of a microwave drive signal $\lambda(t)$, we start with the time-dependent Hamiltonian $H^{(\mathrm{S})}(t)$ in the Schr\"{o}dinger picture (or the lab frame). The general form of $H^{(\mathrm{S})}(t)$ is written as, \be H^{(\mathrm{S})}(t) &=& \hbar\omega_{10} |1\rangle\langle 1|+ \hbar \lambda(t) (|0\rangle\langle 1|+|1\rangle\langle 0|) \no \\ &=& \hbar\omega_{10} |1\rangle\langle 1|+ \hbar \Omega(t)\cos[\omega_\mathrm{d}t +\Phi(t)] (|0\rangle\langle 1|+|1\rangle\langle 0|), \label{eq_S1} \ee where $\omega_{10}$ is the resonance frequency of the qubit and $\omega_\mathrm{d}$ is the drive frequency of a local oscillator (LO) [Fig.~\ref{fig:s1}(a)]. The LO signal is provided by a single microwave source in our experiment. The two high frequencies, $\omega_{10}$ and $\omega_\mathrm{d}$, are of the order of GHz. In addition, $\Omega(t)$ is the drive amplitude in units of angular frequency and $\Phi(t)$ is a time-varying phase. An IQ mixer mixes two low-frequency quadratures, $I(t)$ and $Q(t)$, with the LO signal, producing the output signal $\lambda(t) = I(t)\cos\omega_\mathrm{d} t-Q(t)\sin\omega_\mathrm{d}t$. To generate the microwave drive as in Eq.~(\ref{eq_S1}), the two quadratures are given by \be I(t) = \Omega(t)\cos\Phi(t),~~~\mathrm{and}~~~Q(t) = \Omega(t)\sin\Phi(t), \label{eq_S3} \ee which are realized by two digital-to-analog-converter (DAC) outputs [Fig.~\ref{fig:s1}(a)].
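The IQ synthesis above amounts to the trigonometric identity $I(t)\cos\omega_\mathrm{d}t - Q(t)\sin\omega_\mathrm{d}t = \Omega(t)\cos[\omega_\mathrm{d}t+\Phi(t)]$. A minimal numerical sketch of this construction is given below (in Python); the envelope, phase ramp and frequencies are arbitrary illustrative values rather than the experimental settings, and the dense time grid stands in for the analog mixing stage:
\begin{verbatim}
import numpy as np

# Illustrative parameters only (not the experimental values)
omega_d = 2*np.pi*6e9                  # LO (drive) frequency
T       = 100e-9                       # pulse length
t       = np.arange(0, T, 1e-12)       # dense grid, mimicking analog mixing

Omega = 2*np.pi*20e6*np.sin(np.pi*t/T)**2   # smooth drive envelope
Phi   = 2*np.pi*5e6*t                       # example time-varying phase

# Quadratures realised by the two DAC channels
I = Omega*np.cos(Phi)
Q = Omega*np.sin(Phi)

# Output of the IQ mixer
lam = I*np.cos(omega_d*t) - Q*np.sin(omega_d*t)

# Check against the target drive Omega(t)*cos(omega_d*t + Phi(t))
assert np.allclose(lam, Omega*np.cos(omega_d*t + Phi))
\end{verbatim}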
\section{Rotating Frame of the External Field} \label{sec2} We introduce a rotating frame of the external field, in which the phase qubit can be regarded as a spin-$1/2$ particle driven by an effective magnetic field $\bm B(t)$. In Eq.~(\ref{eq_S1}), the phase $\Phi(t)$ in the microwave drive signal $\lambda(t)$ is separated into two parts, $\Phi(t)=\xi(t)-\phi(t)$, where $\phi(t)$ is considered as the phase in the $x$-$y$ plane. The additional phase $\xi(t)$ is used to construct a time-varying drive frequency, $\omega_\mathrm{d}+\delta\omega_\mathrm{d}(t)$ with $\delta\omega_\mathrm{d}(t)=\partial_t\xi(t)$. Next we introduce a rotating-reference Hamiltonian, $H_\mathrm{r}(t) = \hbar[\omega_\mathrm{d}+\delta\omega_\mathrm{d}(t)]|1\rangle\langle1|$, and its time propagator, $U_\mathrm{r}(t)=\exp[-(i/\hbar) \int_0^t H_\mathrm{r}(\tau)d\tau]$. The rotating frame with the reference frequency $\omega_\mathrm{d}+\delta\omega_\mathrm{d}(t)$ is built in the interaction picture, where the Hamiltonian and the system wavefunction are transformed into $H^{(\mathrm{R})}(t) = U_\mathrm{r}^\dag(t) [H^{(\mathrm{S})}(t) -H_\mathrm{r}(t)] U_\mathrm{r}(t)$ and $|\psi^{(\mathrm{R})}(t)\rangle = U_\mathrm{r}^\dag(t)|\psi^{(\mathrm{S})}(t)\rangle$. In particular, the Hamiltonian in the rotating frame is written as \be H^{(\mathrm{R})}(t) &=&-\hbar \Delta(t) |1\rangle\langle 1|+ \hbar\lambda(t) e^{-i [\omega_\mathrm{d}t+\xi(t)]}|0\rangle\langle 1|+\hbar\lambda(t) e^{i [\omega_\mathrm{d}t +\xi(t)]}|1\rangle\langle 0|, \label{eq_S15} \ee where $\Delta(t)=\omega_\mathrm{d}-\omega_{10}+\delta\omega_\mathrm{d}(t)$ is the detuning, which fluctuates around the fixed value $\Delta_0=\omega_\mathrm{d}-\omega_{10}$. By expressing the microwave drive signal as \be \lambda(t)=\frac{\Omega(t)}{2}\left\{\exp[i \omega_\mathrm{d}t +i\Phi(t)]+\exp[-i \omega_\mathrm{d}t -i\Phi(t)]\right\}, \ee we take the rotating wave approximation (RWA) and ignore fast oscillations around $2\omega_\mathrm{d}$. As a result, Eq.~(\ref{eq_S15}) is simplified to \be H^{(\mathrm{R})}(t) &=&-\hbar \Delta(t) |1\rangle\langle 1|+ \frac{\hbar\Omega(t)}{2} e^{-i \phi(t)}|0\rangle\langle 1|+\frac{\hbar\Omega(t)}{2} e^{i \phi(t)}|1\rangle\langle 0|. \label{eq_S04} \ee The introduction of the Pauli operators, $\sigma_x = |0\rangle\langle 1|+|1\rangle\langle 0| $, $\sigma_y = -i|0\rangle\langle 1|+i|1\rangle\langle 0|$, and $\sigma_z = |0\rangle\langle0| - |1\rangle\langle1|$, allows us to rewrite Eq.~(\ref{eq_S04}) as \be H^{(\mathrm{R})}(t) &=&\frac{\hbar}{2} \left[-\Delta(t) I + \Omega(t)\cos\phi(t) \sigma_x+ \Omega(t)\sin\phi(t) \sigma_y+ \Delta(t)\sigma_z\right], \label{eq_S04a} \ee where $I=|0\rangle\langle 0|+|1\rangle\langle 1|$ is the identity operator. After an energy shift of $-\hbar\Delta(t)/2$ for both the ground and excited states, the Hamiltonian in the rotating frame is expressed in a vector form, $H^{(\mathrm{R})}(t) = \hbar\bm B(t)\cdot \bm\sigma/2$, where $\bm{\sigma}= (\sigma_x, \sigma_y,\sigma_z)$ is the vector of Pauli operators, and \be \bm{B}(t) = (\Omega(t)\cos\phi(t), \Omega(t)\sin\phi(t), \Delta(t)) \label{eq_S05} \ee is an effective magnetic field in units of angular frequency. In our main text, we start the discussion from the adiabatic process in the rotating frame with a fixed reference frequency $\omega_\mathrm{d}$, which implies a fixed detuning $\Delta_0$ in the above derivation.
However, the counter-diabatic Hamiltonian in the `shortcut-to-adiabaticity' (STA) protocol induces a time-varying detuning $\Delta(t)$, which needs to be realized in the rotating frame with the reference frequency $\omega_\mathrm{d}+\delta\omega_\mathrm{d}(t)$. For a Hamiltonian $H^{(\mathrm{R})}(t)$ which is the same in the two rotating frames, the counterparts transformed into the lab frame are however different, i.e., \be H_1^{(\mathrm S)}(t) = U_\mathrm{r}(\omega_\mathrm{d}; t) H^{(\mathrm{R})}(t) U^\dag_\mathrm{r}(\omega_\mathrm{d}; t) + H_\mathrm{r}(\omega_\mathrm{d}; t) \label{eq_S06} \ee from the rotating frame of $\omega_\mathrm{d}$ and \be H_2^{(\mathrm S)}(t) = U_\mathrm{r}(\omega_\mathrm{d}+\delta\omega_\mathrm{d}(t); t) H^{(\mathrm{R})}(t) U^\dag_\mathrm{r}(\omega_\mathrm{d}+\delta\omega_\mathrm{d}(t); t) + H_\mathrm{r}(\omega_\mathrm{d}+\delta\omega_\mathrm{d}(t); t) \label{eq_S07} \ee from the rotating frame of $\omega_\mathrm{d}+\delta\omega_\mathrm{d}(t)$. In Eqs.~(\ref{eq_S06}) and (\ref{eq_S07}), the frequencies involved in $H_\mathrm{r}(t)$ and $U_\mathrm{r}(t)$ are explicitly provided to clarify the difference between the two Hamiltonians. Next we define the time propagators, $U^{(\mathrm S/\mathrm R)}(t)=T_+\exp[-(i/\hbar)\int_0^t H^{(\mathrm S/\mathrm R)}(\tau)d\tau]$, where $T_+$ is the forward time ordering operator and the superscript $\mathrm S$ ($\mathrm R$) denotes the lab (rotating) frame. For a given initial state $|\psi^{(\mathrm S)}(0)\rangle$, the system states in the lab frame are \be |\psi_1^{(\mathrm S)}(t)\rangle &=& U_\mathrm{r}(\omega_\mathrm{d}; t) U^{(\mathrm R)}(t) |\psi^{(\mathrm S)}(0)\rangle, \label{eq_S08} \ee and \be |\psi_2^{(\mathrm S)}(t)\rangle &=& U_\mathrm{r}(\omega_\mathrm{d}+\delta\omega_\mathrm{d}(t);t)U^{(\mathrm{R})}(t)|\psi^{(\mathrm{S})}(0)\rangle, \label{eq_S09} \ee with respect to the two Hamiltonians in Eqs.~(\ref{eq_S06}) and (\ref{eq_S07}), respectively. In deriving Eqs.~(\ref{eq_S08}) and (\ref{eq_S09}), the relation $U^{(\mathrm S)}(t)= U_\mathrm{r}(t) U^{(\mathrm R)}(t)$ is used, which then leads to \be |\psi_1^{(\mathrm S)}(t)\rangle &=& U^\dag_\mathrm{r}(\delta\omega_\mathrm{d}(t);t)|\psi_2^{\mathrm{(S)}}(t)\rangle \no\\ &=& \big\{\langle 0|\psi_2^{(\mathrm S)}(t)\rangle\big\}|0\rangle + e^{i\xi(t)}\big\{\langle 1|\psi_2^{(\mathrm S)}(t)\rangle\big\} |1\rangle. \label{eq_S11} \ee The phase shift $\exp[i\xi(t)]$ of the excited state is included in the QST, so that our experiment based on $H_2^{(\mathrm S)}(t)$ in the lab frame can be used to study the STA protocol in the rotating frame of $\omega_\mathrm{d}$. This rotating frame will be used throughout the rest of the Supplementary Material and the main text. To simplify the notation, we will drop the superscript R for the rotating frame and map the two-level system into a spin-$1/2$ particle by omitting the term $-\hbar\Delta(t) I/2$ in Eq.~(\ref{eq_S04a}). \section{Derivation of the `Shortcut-to-Adiabaticity' Protocol} \label{sec3} Here we provide a theoretical derivation of the STA protocol, which is slightly different from the original one in Ref.~\cite{Berry09} but leads to the same result. For a general quantum system, we consider a non-degenerate time-dependent Hamiltonian $H_0(t)=\sum_n \varepsilon_n(t)|n(t)\rangle\langle n(t)|$, where $|n(t)\rangle$ is the $n$th instantaneous eigenstate associated with the eigenenergy $\varepsilon_n(t)$.
Each wavefunction can be linearly decomposed into $|\psi(t)\rangle = \sum_{n} a_n(t)|n(t)\rangle$, with $a_n(t)$ the time-dependent coefficient. Following the Schr\"{o}dinger equation, the time evolution of $a_n(t)$ is given by \be \hbar\dot{a}_n(t) = -i \left[ \varepsilon_n(t)-i\hbar\langle n(t)|\partial_tn(t)\rangle\right]a_n(t) - \hbar\sum_{m(\not=n)} \langle n(t)|\partial_t m(t)\rangle a_m(t). \label{eq_S38} \ee In the adiabatic limit, the second term on the right hand side of Eq.~(\ref{eq_S38}) vanishes, resulting in \be \hbar\dot{a}_n(t) = -i \left[ \varepsilon_n(t)-i\hbar\langle n(t)|\partial_t n(t)\rangle\right]a_n(t). \label{eq_S39} \ee The amplitude of $a_n(t)$ is then unchanged with time and only a phase is accumulated, i.e., $a_n(t)=\exp[i \varphi_n(t)]a_n(0)$. However, the influence of the other eigenstates $|m(t)\rangle$ cannot be ignored if the time propagation of $H_0(t)$ is not slow enough. To achieve fast `adiabaticity', the STA protocol introduces an auxiliary counter-diabatic Hamiltonian $H_{\mathrm{cd}}(t)$. For the total Hamiltonian, $H_{\mathrm{tot}}(t)=H_0(t)+H_{\mathrm{cd}}(t)$, the wavefunction, $|\psi(t)\rangle = \sum_{n} a_n(t)|n(t)\rangle$, is still decomposed in the eigenbasis of the reference Hamiltonian $H_0(t)$. The time evolution of $a_n(t)$ is changed to \be \hbar\dot{a}_n(t) &=& -i \left[ \varepsilon_n(t)-i\hbar\langle n(t)|\partial_t n(t)\rangle \right]a_n(t) - i \langle n(t) |H_{\mathrm{cd}}(t)|n(t)\rangle a_n(t) \no \\ & & -i\sum_{m(\not=n)} \left[-i\hbar\langle n(t)|\partial_t m(t)\rangle +\langle n(t)|H_{\mathrm{cd}}(t)|m(t)\rangle\right] a_m(t). \label{eq_S40} \ee To recover the adiabatic time evolution in Eq.~(\ref{eq_S39}), the counter-diabatic Hamiltonian is required to satisfy \be \left\{\ba{ll}\langle n(t) |H_{\mathrm{cd}}(t)|n(t)\rangle = 0 &~~ \\ \langle n(t) |H_{\mathrm{cd}}(t)|m(t)\rangle = i\hbar\langle n(t)|\partial_t m(t)\rangle &~~~\mathrm{for}~~~m\neq n \ea \right. . \label{eq_S41} \ee Since the indices, $m$ and $n$, are arbitrary, the action of $H_\mathrm{cd}(t)$ applied to each $|n(t)\rangle$ must follow $H_{\mathrm{cd}}(t)|n(t)\rangle =i \hbar[|\partial_t n(t)\rangle- \langle n(t)|\partial_t n(t)\rangle |n(t)\rangle]$. The counter-diabatic Hamiltonian is thus given by \be H_{\mathrm{cd}}(t) = i \hbar\sum_n \big[ |\partial_t n(t)\rangle- \langle n(t)|\partial_t n(t)\rangle |n(t)\rangle \big] \langle n(t)|, \label{eq_S42} \ee which satisfies $\sum_{m, n}H^\ast_{0; m, n}(t)H_{\mathrm{cd}; m, n}(t)=0$. \section{The Berry Phase of a Two-Level System with the STA Protocol} \label{sec4} Here we derive the Berry phase of a two-level system with the STA protocol. As demonstrated in Supplementary Material II, the two-level system can be mapped to a spin-$1/2$ particle. The reference Hamiltonian is represented in a general form, $H_0(t) = \hbar\bm{B}_0(t)\cdot{\bm\sigma}/2$, where $\bm B_0(t)=(\Omega(t)\cos\phi(t), \Omega(t)\sin\phi(t), \Delta(t))$ is an effective magnetic field in the rotating frame. For simplicity, both $\Omega(t)$ (the amplitude in the $x$-$y$ plane) and $\Delta(t)$ (the detuning along the $z$-axis) are assumed to be positive. The vector amplitude of the control field is given by $B_0(t)=|\bm B_0(t)|=\sqrt{\Omega^2(t)+\Delta^2(t)}$. In a normalized parameter sphere of $\bm B_0(t)/B_0(t)$, we introduce the polar angle, $\theta(t)=\arctan[\Omega(t)/\Delta(t)]$, and the azimuthal angle (phase in the $x$-$y$ plane) $\phi(t)$ to define the spherical surface.
For this reference Hamiltonian $H_0(t)$, its instantaneous spin-up ($|s_\uparrow(t)\rangle$) and spin-down ($|s_\downarrow(t)\rangle$) states are expanded over the qubit states ($|0\rangle, |1\rangle$), \be \left\{\ba{lll} |s_\uparrow(t)\rangle &=& \cos\frac{\theta(t)}{2}|0\rangle + e^{i\phi(t)} \sin\frac{\theta(t)}{2}|1\rangle, \\ |s_\downarrow(t)\rangle &=& - e^{-i\phi(t)}\sin\frac{\theta(t)}{2}|0\rangle+\cos\frac{\theta(t)}{2}|1\rangle. \ea \right. \label{eq_S44} \ee The reference Hamiltonian is recast into $H_0(t) = \sum_{n=\uparrow, \downarrow}\varepsilon_n(t) |s_n(t)\rangle\langle s_n(t)|$, with the instantaneous eigenvalues $\varepsilon_{\uparrow, \downarrow}(t) = \pm\hbar B_0(t)/2$. The wavefunction is decomposed into $|\psi(t)\rangle = \sum_{n=\uparrow, \downarrow}a_n(t)|s_n(t)\rangle$, and Eq.~(\ref{eq_S44}) is rewritten as $|s_{n=\uparrow, \downarrow}(t)\rangle = \sum_{i=\uparrow,\downarrow} b_{n, i}(t) |i\rangle$ in a simplified notation. The counter-diabatic Hamiltonian in Eq.~(\ref{eq_S42}) is then written explicitly as \be H_{\mathrm{cd}}(t) = i\hbar\sum_{i, i^\pr=\uparrow,\downarrow}\left[\sum_n \partial_t b_{n,i}(t) b^\ast_{n,i^\pr}(t) -\sum_{n, j} \partial_t b_{n,j}(t) b^\ast_{n,j}(t) b_{n, i}(t)b^\ast_{n,i^\pr}(t)\right] |i\rangle\langle i^\pr|. \label{eq_S46} \ee With the help of the Pauli operators, Eq.~(\ref{eq_S46}) is organized into the compact form $H_{\mathrm{cd}}(t)= \hbar \bm B_\mathrm{cd}(t)\cdot \bm \sigma/2$, where the counter-diabatic effective magnetic field is given by \be \left\{\ba{lll} B_{\mathrm{cd}; x}(t) &= & -\dot{\theta}(t)\sin\phi(t)-\dot{\phi}(t)\sin\theta(t)\cos\theta(t)\cos\phi(t) \\ B_{\mathrm{cd}; y}(t) &= & \dot{\theta}(t)\cos\phi(t)-\dot{\phi}(t)\sin\theta(t)\cos\theta(t)\sin\phi(t) \\ B_{\mathrm{cd}; z}(t) &= &\dot{\phi}(t)\sin^2\theta(t) \ea \right. . \label{eq_S43} \ee In a vector representation, the counter-diabatic magnetic field is equal to the cross product \be \bm B_\mathrm{cd}(t) = \frac{1}{|\bm B_0(t)|^2}\bm B_0(t)\times\dot{\bm B}_0(t). \label{eq_S48} \ee In the STA protocol, the time evolution of the two-level system becomes adiabatic with respect to the reference Hamiltonian. The coefficients $a_{n=\uparrow, \downarrow}(t)$ of the two instantaneous eigenstates are governed by \be \dot{a}_n(t) &=& -i \left[\varepsilon_n(t)/\hbar-i \langle s_n(t)|\partial_t s_n(t)\rangle \right] a_n(t). \label{eq_S49} \ee For each coefficient, a phase $\varphi_n(t)$ is accumulated with time and can be separated into two parts, $\varphi_n(t)=\alpha_n(t)+\gamma_n(t)$. The first part, $\alpha_{n}(t) = -\frac{1}{\hbar}\int_0^t \varepsilon_{n}(\tau) d\tau$, relies on the time-dependent vector amplitude $B_0(t)$ and is termed the dynamic phase. The second part, $\gamma_n(t)= i \int_0^t \langle s_n(\tau)|\partial_\tau s_n(\tau)\rangle d\tau$, is a function of the polar angle $\theta(t)$ and the azimuthal angle $\phi(t)$. A curve $\bm R(t)$ is defined on the surface of the Bloch sphere (or the normalized parameter sphere) by $\bm R(t)=\{\theta=\theta(t), \phi=\phi(t)\}$. The time differential in $\gamma_n(t)$ can be changed to a spatial gradient, giving $\gamma_n(t)= i \int_{\bm R(0)}^{\bm R(t)} \langle n(\bm{R})|\nabla_{\bm{R}} n(\bm{R})\rangle \cdot d{\bm R}$. If the path $\bm R(t)$ is closed after the time evolution, there is no explicit time dependence in the phase $\gamma_n(t)$, giving \be \gamma_n = i\oint_\mathcal C \langle n(\bm{R})|\nabla_{\bm{R}} n(\bm{R})\rangle \cdot d{\bm R}.
\label{eq_S51} \ee The phase $\gamma_n$ is considered as the Berry phase with respect to the reference Hamiltonian, even though the fast STA protocol is applied. For the two-level system, the Berry phase in Eq.~(\ref{eq_S51}) can be further simplified to \be \gamma_{\uparrow, \downarrow} = \mp \frac{1}{2}\oint_\mathcal C [1-\cos\theta] d\phi, \label{eq_S53} \ee where the upper and lower signs refer to the instantaneous spin-up and spin-down states, respectively. In the above definition of the instantaneous eigenstates, we may consider a gauge transformation, i.e., $|s_n(t)\rangle \rightarrow \exp[i \zeta_n(t)]|s_n(t)\rangle$. After a straightforward re-derivation, we can demonstrate that a phase shift of $2k\pi$ ($k\in\mathbb{Z}$) is allowed in the Berry phase, i.e., $\gamma_n \rightarrow \gamma_n +2k\pi$. For convenience, the Berry phase in our experiment is assumed to follow the result in Eq.~(\ref{eq_S53}) without an additional phase shift of $2k\pi$. It is impossible to experimentally extract the absolute phase of a single quantum state. One way of indirectly extracting the Berry phase is to numerically calculate the solid angle, $\mathcal S = \oint_\mathcal C [1-\cos\theta] d\phi$, by measuring the trajectory of the qubit vector on the Bloch sphere. In a superconducting Cooper pair pump, the phase accumulation rate of the ground state can be measured through the pumped charge, which also allows an estimation of the Berry phase~\cite{MottononPhase}. Another approach relies on the measurement of the phase difference by preparing a superposition of two instantaneous eigenstates. In our experiment, a spin-echo scheme with the initial state $(|0\rangle+|1\rangle)/\sqrt{2}$ is applied. As the dynamic phase is removed by the spin-echo sequence, the phase difference of $|1\rangle$ relative to $|0\rangle$ gives rise to the difference of the Berry phases. \section{The Berry Phase Subject to a Rotating Field} \label{sec5} In this Supplementary Material, we provide the theoretical prediction of the Berry phase for the instantaneous spin-up state subject to a rotating field. At the very beginning of our experiment, the $|0\rangle$ and $|1\rangle$ states of the qubit are the instantaneous spin-up ($|s_\uparrow(t)\rangle$) and spin-down ($|s_\downarrow(t)\rangle$) states in the rotating frame, respectively. Here we discuss the behavior of $|s_\uparrow(t)\rangle$; an analogous treatment applies to $|s_\downarrow(t)\rangle$. Since the Berry phase is not accumulated in the ramping-up and ramping-down steps, because $\phi(t)$ is held constant there, we focus on the two rotating steps, where the reference magnetic field follows \be \bm B_{0}(t)= B_0(\sin\theta_0\cos\phi(t), \sin\theta_0\sin\phi(t), \cos\theta_0), \ee with $B_0=\sqrt{\Omega_0^2+\Delta_0^2}$ and $\theta_0 =\arctan(\Omega_0/\Delta_0)$. As the system evolves in the instantaneous spin-up state, the wavefunction is written as $|\psi(t)\rangle = a_\uparrow(t) |s_\uparrow(t)\rangle$, where the accumulated phase is included in the coefficient $a_\uparrow(t)$. The system wavefunction $|\psi(t)\rangle$ is represented by a Bloch vector, which points in the same direction as the reference magnetic field $\bm B_0(t)$. The trajectory of $|\psi(t)\rangle$ is characterized by \be r_\uparrow(t)=1,~~~\theta_\uparrow(t)=\theta_0, ~~~\mathrm{and}~~~ \phi_\uparrow(t)=\phi(t), \label{eq_S61} \ee where $r_\uparrow(t)$, $\theta_\uparrow(t)$ and $\phi_\uparrow(t)$ are the radius, polar and azimuthal angles on the Bloch sphere.
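As a consistency check, the compact cross-product form of Eq.~(\ref{eq_S48}) can be compared numerically against the explicit components of Eq.~(\ref{eq_S43}) for the rotating field above. The following minimal sketch (in Python) does this for a uniform rotation $\phi(t)=\omega_0 t$, as used in the experiment described below; all parameter values are arbitrary illustrative choices, not the experimental settings:
\begin{verbatim}
import numpy as np

# Illustrative parameters only
Omega0 = 2*np.pi*20e6
Delta0 = 2*np.pi*7e6
omega0 = 2*np.pi*5e6
theta0 = np.arctan2(Omega0, Delta0)

def B0(t):
    """Reference field for a uniform rotation phi(t) = omega0*t."""
    phi = omega0*t
    return np.array([Omega0*np.cos(phi), Omega0*np.sin(phi), Delta0])

def Bcd_cross(t, h=1e-12):
    """Counter-diabatic field from the cross-product form."""
    b = B0(t)
    bdot = (B0(t + h) - B0(t - h))/(2*h)   # numerical time derivative
    return np.cross(b, bdot)/np.dot(b, b)

def Bcd_explicit(t):
    """Explicit components with theta(t) = theta0 held fixed."""
    phi = omega0*t
    sc = np.sin(theta0)*np.cos(theta0)
    return omega0*np.array([-sc*np.cos(phi),
                            -sc*np.sin(phi),
                             np.sin(theta0)**2])

t_test = 37e-9
print(np.allclose(Bcd_cross(t_test), Bcd_explicit(t_test)))   # True
\end{verbatim}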
In our experiment, we consider a constant rotating speed $\omega_0$ along the counterclockwise ($\mathcal C_+$) or clockwise ($\mathcal C_-$) direction, i.e., $\phi(t)=\pm \omega_0 t$. If $|\psi(t)\rangle$ evolves over a single circular rotation, we apply Eq.~(\ref{eq_S53}) to calculate the Berry phase, \be \gamma_\uparrow = \mp \pi (1-\cos\theta_0), \label{eq_S58} \ee where the $\mp$ signs correspond to the counterclockwise and clockwise rotations, respectively. Following the same approach, we can obtain the expressions for the spin-down state. In the first part of our spin-echo scheme, the accumulated phases for $|s_\uparrow(t)\rangle$ and $|s_\downarrow(t)\rangle$ are opposite, i.e., $\alpha_\downarrow(t)=-\alpha_\uparrow(t)$ and $\gamma_\downarrow = -\gamma_\uparrow$. These two coefficients, $a_\uparrow(t)$ and $a_\downarrow(t)$, are swapped by a refocusing $\pi$-pulse. The wavefunction is changed to \be |\psi(t)\rangle &\propto& e^{i\alpha_\downarrow(t)} e^{i\gamma_\downarrow}|s_\uparrow(t)\rangle+e^{i\alpha_\uparrow(t)} e^{i\gamma_\uparrow} |s_\downarrow(t)\rangle \no \\ &=& e^{-i\alpha_\uparrow(t)} e^{-i\gamma_\uparrow}|s_\uparrow(t)\rangle+ e^{-i\alpha_\downarrow(t)} e^{-i\gamma_\downarrow}|s_\downarrow(t)\rangle. \ee For each instantaneous eigenstate subject to the second part of the spin-echo sequence, the dynamic phase is the same as that accumulated in the first part, while the Berry phase is opposite due to the reversed rotating direction. At the echo time, when the two instantaneous eigenstates return to their initial positions ($|s_\uparrow(t)\rangle=|0\rangle$ and $|s_\downarrow(t)\rangle=|1\rangle$), the wavefunction is given by \be |\psi(t)\rangle \propto e^{-2i\gamma_\uparrow} |0\rangle + e^{-2i\gamma_\downarrow}|1\rangle, \ee where $\gamma_{n=\uparrow, \downarrow}$ is the Berry phase from one cycle in the first part. The density matrix of this final qubit state, $\rho=|\psi(t)\rangle\langle\psi(t)|$, is extracted by the QST. The phase difference, $\exp(i\gamma)=\langle 1|\rho| 0\rangle/|\langle 1|\rho| 0\rangle|$, is used to measure the Berry phase, \be \gamma =\mp 4\pi (1-\cos\theta_0), \label{eq_S59} \ee where the $\mp$ signs refer to the $\mathcal C_{+-}$ and $\mathcal C_{-+}$ spin-echo procedures, respectively. \section{Analytical Prediction for a Slowly-Varying Noise in the STA Process} \label{sec7} We apply a theoretical method, similar to the approach in Ref.~\cite{NoiseTheory}, to obtain an analytical expression for a slowly-varying classical noise in the STA process. For simplicity, we ignore the intrinsic relaxation and decoherence. A classical Gaussian noise $\delta H(t)$ is considered for the total Hamiltonian, $H_\mathrm{tot}(t)=H_0(t)+H_\mathrm{cd}(t)$, during the rotation period. The total rotating field without noise is given by $\bm B_{\mathrm{tot}}(t)= (\Omega_\mathrm{tot}\cos\phi(t), \Omega_\mathrm{tot}\sin\phi(t), \Delta_\mathrm{tot})$, where $\Omega_\mathrm{tot}$ and $\Delta_\mathrm{tot}$ include the modifications of the counter-diabatic field. For convenience, we drop the symbol `rot' in the representation of an effective magnetic field in this Supplementary Material. We consider three possible fluctuations: $\delta \Delta(t)$ for the detuning along the $z$ direction, and $\delta \Omega(t)$ and $\delta \phi(t)$ for the drive amplitude and phase in the $x$-$y$ plane.
Accordingly, the three types of stochastic rotating fields are explicitly written as \be \left\{\ba{lll} \bm B_\mathrm{tot}(t)+\delta\bm B_{\Delta}(t)&=&(\Omega_\mathrm{tot}\cos\phi(t), \Omega_\mathrm{tot}\sin\phi(t), \Delta_\mathrm{tot}+\delta\Delta(t)) \\ \bm B_\mathrm{tot}(t)+\delta\bm B_\Omega(t)&=&([\Omega_\mathrm{tot}+\delta\Omega(t)]\cos\phi(t), [\Omega_\mathrm{tot}+\delta\Omega(t)]\sin\phi(t), \Delta_\mathrm{tot}) \\ \bm B_\mathrm{tot}(t)+\delta\bm B_\phi(t) &=& (\Omega_\mathrm{tot}\cos[\phi(t)+\delta\phi(t)], \Omega_\mathrm{tot}\sin[\phi(t)+\delta\phi(t)], \Delta_\mathrm{tot}) \ea \right. . \label{eq_S64} \ee Since the influences of $\delta\Delta(t)$ and $\delta \Omega(t)$ are similar, we only apply $\delta\bm B_\Omega(t)$ and $\delta\bm B_\phi(t)$ in our experiment. An Ornstein-Uhlenbeck process is assigned to $\delta \Omega(t)$ and $\delta \phi(t)$, giving \be \langle \delta\Omega(t)\rangle =0,~~~ \langle\delta\Omega(t)\delta\Omega(0)\rangle=c^2_\Omega \Omega^2_{\mathrm{tot}} \exp(-\Gamma t), \ee and \be \langle \delta\phi(t)\rangle =0, ~~~\langle\delta\phi(t)\delta\phi(0)\rangle = c_\phi^2\exp(-\Gamma t). \ee Here $c_\Omega$ and $c_\phi$ are the reduced noise strengths and $\Gamma$ is the noise bandwidth. Next we discuss the behaviors of the two noises separately. ({\bf i}) The Berry phase accumulated in the rotating period is not affected by the phase noise $\delta\phi(t)$. Since the polar angle is fixed at $\theta_0$ in the rotating step, the Berry phase is simplified to $\gamma_{n=\uparrow,\downarrow} = \mp(1/2)(1-\cos\theta_0)\oint_\mathcal C d[\phi+\delta\phi]$. If the effective magnetic field $\bm B_\mathrm{tot}(t)+\delta\bm B_\phi(t)$ traverses a closed path, the integral of $\delta\phi$ vanishes and the result for $\gamma_n$ is the same as that without noise. ({\bf ii}) For the influence of the amplitude noise, we first assume that $\delta\Omega(t)$ varies slowly with time (behaving similarly to a static disorder, which is relevant in an adiabatic process). The fluctuating magnetic field, $\bm B_\mathrm{tot}(t)+\delta\bm B_\Omega(t)$, can be factorized into a fluctuating reference field, $\bm B_0(t)+\delta\bm B_{0; \Omega}(t)$, and its counter-diabatic correction. A first-order expansion in $\delta\Omega(t)$ gives rise to \be \bm B_0(t)+\delta\bm B_{0; \Omega}(t)&=& ([B_0(t)+\delta B_0(t)]\sin[\theta_0+\delta\theta(t)]\cos\phi(t), \no \\ &&~[B_0(t)+\delta B_0(t)]\sin[\theta_0+\delta\theta(t)]\sin\phi(t), \no \\ &&~[B_0(t)+\delta B_0(t)]\cos[\theta_0+\delta\theta(t)]) \ee with \be \delta B_0(t) &=& \sin\theta_0(\Omega_\mathrm{tot} \mp \omega_0\sin\theta_0\cos\theta_0)\frac{\delta\Omega(t)}{\Omega_\mathrm{tot}}+O(\delta\Omega^2(t)), \label{eq_S65}\\ \delta \theta(t) &=& \sin\theta_0\cos\theta_0 \frac{\delta\Omega(t)}{\Omega_\mathrm{tot}}+O(\delta\Omega^2(t)), \label{eq_S66} \ee where the signs $\mp$ refer to the counterclockwise ($\mathcal{C}_+$) and clockwise ($\mathcal{C}_-$) directions, respectively. For the example of a single $\mathcal{C}_+$-rotation, the dynamic and Berry phase differences of $|s_\downarrow(t)\rangle$ relative to $|s_\uparrow(t)\rangle$ fluctuate according to \be \delta \alpha &=& \sin\theta_0(\Omega_\mathrm{tot} - \omega_0\sin\theta_0\cos\theta_0) \int_0^{T_\mathrm{rot}} \frac{\delta\Omega(\tau)}{\Omega_\mathrm{tot}} d\tau +O(\delta\Omega^2(t)), \label{eq_S67}\\ \delta \gamma &= & \omega_0\sin^2\theta_0\cos\theta_0 \int_0^{T_\mathrm{rot}} \frac{\delta\Omega(\tau)}{\Omega_\mathrm{tot}} d\tau +O(\delta\Omega^2(t)), \label{eq_S68} \ee respectively.
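The Ornstein-Uhlenbeck statistics above are straightforward to sample numerically. The following Python sketch (illustrative only; the parameter values are hypothetical placeholders) generates a stationary noise trace $\delta\Omega(t)$ with the stated mean and correlation function and accumulates the corresponding Berry-phase fluctuation of Eq.~(\ref{eq_S68}); averaging over many such realizations reproduces the variance given in Eq.~(\ref{eq_S70}) below.
\begin{verbatim}
import numpy as np

def ou_noise(n_steps, dt, sigma, bandwidth, rng=None):
    """Stationary Ornstein-Uhlenbeck trace with zero mean and
    <x(t) x(0)> = sigma**2 * exp(-bandwidth * t) (exact update rule)."""
    rng = rng or np.random.default_rng()
    decay = np.exp(-bandwidth * dt)
    kick = sigma * np.sqrt(1.0 - decay ** 2)
    x = np.empty(n_steps)
    x[0] = sigma * rng.standard_normal()
    for k in range(1, n_steps):
        x[k] = decay * x[k - 1] + kick * rng.standard_normal()
    return x

# One C_+ rotation (hypothetical parameters)
Omega_tot, c_Omega, Gamma = 2 * np.pi * 20e6, 0.05, 2 * np.pi * 2e6
T_rot, dt = 30e-9, 0.1e-9
omega_0, theta_0 = 2 * np.pi / T_rot, np.arctan(0.75)
dOmega = ou_noise(int(T_rot / dt), dt, c_Omega * Omega_tot, Gamma)
# Berry-phase fluctuation of one noisy trajectory, cf. the integral above
dgamma = (omega_0 * np.sin(theta_0) ** 2 * np.cos(theta_0)
          * np.sum(dOmega / Omega_tot) * dt)
\end{verbatim}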
In our experiment with a noisy pulse, we measure the total relative phase from QST. It is, however, hard to directly extract $\gamma$, since the noise can destroy the cancellation of the dynamic phase in the spin-echo scheme. An indirect approach is to record the input noise $\delta\Omega(t)$ and theoretically calculate the relative dynamic phase, $\alpha+\delta\alpha$, for each noisy trajectory. The corresponding relative Berry phase is estimated by $\gamma[\delta\Omega(t)]=\varphi[\delta\Omega(t)]-\alpha-\delta\alpha[\delta\Omega(t)]$, where $\varphi[\delta\Omega(t)]$ is the total relative phase. Based on the perturbative result in Eq.~(\ref{eq_S68}), the statistics of the fluctuating Berry phase is characterized by the mean $\langle\delta \gamma\rangle=0$ and the variance \be \sigma_\Omega^2 &=& \langle \delta \gamma^2 \rangle = 8 c^2_\Omega \pi^2\sin^4\theta_0\cos^2\theta_0 \frac{\Gamma {T_{\mathrm{rot}}} - 1 + \exp(-\Gamma {T_{\mathrm{rot}}})}{\Gamma^2T_{\mathrm{rot}}^2}. \label{eq_S70} \ee A Gaussian distribution is expected for $\delta \gamma$ since the underlying noise $\delta\Omega(t)$ is Gaussian. The alternative coherence parameter, $\nu=|\langle\exp(i\gamma)\rangle|$, is fully determined by the first and second moments of $\delta\gamma$, giving \be \nu &=& \exp(-\sigma^2_\Omega/2) \no \\ &=&\exp\left[-4 c^2_\Omega \pi^2\sin^4\theta_0\cos^2\theta_0 \frac{\Gamma {T_{\mathrm{rot}}} - 1 + \exp(-\Gamma {T_{\mathrm{rot}}})}{\Gamma^2T_{\mathrm{rot}}^2}\right]. \label{eq_S71} \ee \section{Estimation of a Geometric Phase Gate Fidelity with the STA Protocol} \label{sec8} The accumulated Berry phase can be utilized in the realization of a geometric phase gate. In this Supplementary Material, we provide a numerical estimation of the fidelity of a $\pi$-phase gate. In an ideal quantum operation, a unitary operator $U$ is applied to the initial state $|\psi(0)\rangle$, and the quantum state at the final time $t_{\rm f}$ is given by $|\psi(t_{\rm f})\rangle = U |\psi(0)\rangle$. For the $\mathcal C_{+-}$ spin-echo procedure in our experiment, the unitary operator $U$ is explicitly written as \be U = \left(\ba{cc} 0 & \exp[i\cal S] \\ \exp[-i\cal S] & 0 \ea \right), \label{eq_S73} \ee where the global phase is excluded and ${\cal S} = 2\pi(1-\cos\theta_0)$ is the designed solid angle. A subsequent $\pi_x$-pulse leads to the overall unitary operation, \be U_{\rm tot}=(-i\sigma_x) U=\exp[-i({\cal S}+\frac{\pi}{2})]\left(\ba{cc} 1 & 0 \\ 0 & \exp[i2\cal S]\ea \right), \label{eq_S73a} \ee which corresponds to a $2\cal S$-phase gate. In the special case of $\theta_0=\arccos(3/4)$, we obtain a $\pi$-phase gate, i.e., $U_{\rm tot}\propto \sigma_z$. \begin{table}[t!] \begin{tabular}{|c|c|c|c|c|c|}\hline & Protocol & $\Delta_0/2\pi$ & $T_\mathrm{ramp}$ & $T_{\mathrm{rot}}$ & Fidelity\\ \hline phase qubit & Adiabatic & 7 MHz & 350 ns & 1000 ns & 0.2500 \\ \cline{2-6} ( $T_1$ = 270 ns, $T_2^{\mathrm{echo}}$ = 450 ns)& STA & 7 MHz & 10 ns & 30 ns & 0.7023 \\ \hline Xmon qubit & Adiabatic & 7 MHz &350 ns & 1000 ns & 0.8465\\ \cline{2-6} ( $T_1$ = 20 $\mu$s, $T_2^{\mathrm{echo}}$ = 20 $\mu$s) & STA & 7 MHz & 10 ns & 30 ns & 0.9936\\ \hline \end{tabular} \caption{The fidelities of the $\pi$-phase gate in our phase qubit and a typical Xmon qubit. Both the STA and adiabatic protocols are studied. All the results are numerically obtained by the Lindblad simulation. } \label{tab_S1} \end{table} A practical quantum operation is limited by quantum dissipation.
Here we use the Lindblad equation, \be \partial_t \rho(t) &=& -\frac{i}{\hbar} [H(t), \rho(t)]+\frac{1}{T_1}\left[\sigma_- \rho(t)\sigma_+-\frac{1}{2}\sigma_+\sigma_-\rho(t)-\frac{1}{2}\rho(t)\sigma_+\sigma_- \right] \no \\ &&+\frac{2}{T^{\mathrm{echo}}_2}\left[\sigma_+\sigma_- \rho(t)\sigma_+\sigma_--\frac{1}{2}\sigma_+\sigma_-\sigma_+\sigma_-\rho(t)-\frac{1}{2}\rho(t)\sigma_+\sigma_-\sigma_+\sigma_- \right], \label{eq_S74} \ee to numerically simulate the time evolution of the density matrix $\rho(t)$, where $\sigma_+=|1\rangle\langle0|$ and $\sigma_-=|0\rangle\langle1|$ are the qubit raising and lowering operators. A quantum dynamical map is then defined between the initial and final density matrices ($\rho(0)$ and $\rho(t_{\rm f})$, respectively), i.e., \be \rho(t_{\rm f}) = \sum_{i, j=1}^4 \chi_{i, j} u_i \rho(0) u^\dag_j, \label{eq_S75} \ee where the four operators $\{u_1 =I, u_2 = \sigma_x, u_3 = \sigma_y, u_4 = \sigma_z\}$, i.e., the identity and the three Pauli operators, are used as the expansion basis. The $4\times 4$ $\chi$-matrix defined in Eq.~(\ref{eq_S75}) is independent of the initial density matrix $\rho(0)$. For an ideal $\pi$-phase gate, the $\chi$-matrix satisfies $\chi^{\mathrm{ideal}}_{i, j} = \delta_{i, 4}\delta_{j, 4}$. The accuracy of a practical $\pi$-phase gate can be characterized by its gate fidelity, given by~\cite{nonAbelianGate} \be F = \mathrm{Tr} \left\{ \chi^{\mathrm{ideal}} \chi \right\}. \label{eq_S76} \ee In Table~\ref{tab_S1}, we provide the numerical estimations of $F$ for our phase qubit ($T_1=270$ ns and $T^{\mathrm{echo}}_2=450$ ns) and a typical Xmon qubit ($T_1=20~\mu$s and $T^{\mathrm{echo}}_2=20~\mu$s). Both the STA ($T_{\mathrm{ramp}}=10$ ns and $T_{\mathrm{rot}}=30$ ns) and adiabatic ($T_{\mathrm{ramp}}=350$ ns and $T_{\mathrm{rot}}=1000$ ns) protocols are considered. Our numerical results show that the STA protocol can help establish a higher fidelity in an operation time much shorter than that required by the adiabatic theorem. It will be interesting to explore the STA protocol in the Xmon qubit (e.g., an STA $\pi$-phase gate with fidelity $>99\%$) in the future.
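For reference, a simulation of the kind underlying Table~\ref{tab_S1} only requires a standard master-equation integrator. The Python sketch below (a minimal illustration, not our production code; the pulse Hamiltonian \texttt{H\_of\_t} is a user-supplied function) implements the right-hand side of Eq.~(\ref{eq_S74}) with $\hbar=1$ and propagates $\rho(t)$ with a fixed-step Runge-Kutta scheme. Note that, since $\chi^{\mathrm{ideal}}_{i,j}=\delta_{i,4}\delta_{j,4}$, the fidelity in Eq.~(\ref{eq_S76}) reduces to the single element $F=\chi_{4,4}$.
\begin{verbatim}
import numpy as np

sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_- = |0><1|
sp = sm.conj().T                                 # sigma_+ = |1><0|

def lindblad_rhs(rho, H, T1, T2echo):
    """Right-hand side of the master equation (hbar = 1): coherent part
    plus relaxation (rate 1/T1) and pure dephasing (rate 2/T2echo)."""
    def dissipator(L, rate):
        LdL = L.conj().T @ L
        return rate * (L @ rho @ L.conj().T
                       - 0.5 * (LdL @ rho + rho @ LdL))
    return (-1j * (H @ rho - rho @ H)
            + dissipator(sm, 1.0 / T1)
            + dissipator(sp @ sm, 2.0 / T2echo))

def evolve(rho, H_of_t, t0, t1, dt, T1, T2echo):
    """Fixed-step fourth-order Runge-Kutta propagation of rho."""
    t = t0
    while t < t1 - 1e-15:
        k1 = lindblad_rhs(rho, H_of_t(t), T1, T2echo)
        k2 = lindblad_rhs(rho + 0.5*dt*k1, H_of_t(t + 0.5*dt), T1, T2echo)
        k3 = lindblad_rhs(rho + 0.5*dt*k2, H_of_t(t + 0.5*dt), T1, T2echo)
        k4 = lindblad_rhs(rho + dt*k3, H_of_t(t + dt), T1, T2echo)
        rho = rho + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
        t += dt
    return rho
\end{verbatim}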
\section{Introduction}\label{Intro} Recently, Greene~\cite{17G} and Howie~\cite{17Ho}, independently, established intrinsic characterizations of alternating links in terms of a pair of spanning surfaces, answering an old question of R.\ H.\ Fox. These results can be regarded as characterizations of alternating link exteriors which have marked meridians (see \cite[Theorem 3.2]{17Ho}). The purpose of this paper is to give a characterization of alternating link exteriors from the viewpoint of cubed complexes. Our starting point is a cubical decomposition of alternating link exteriors, which is originally due to Aitchison, and is used by Agol~\cite{Agol}, Adams~\cite{Adams}, Thurston~\cite{DThurston}, Yokota~\cite{Yokota1, Yokota2} and Sakuma-Yokota~\cite{16SY}. Thus we call it the \textit{Aitchison complex}. The Aitchison complex for an alternating link is actually a mapping cylinder of the natural map from the boundary of the exterior of the alternating link onto the Dehn complex. For a detailed description and historical background, see \cite{16SY}. In this paper, we introduce the concepts of a \textit{signed BW squared-complex} (or an \textit{SBW squared-complex}, for short) and a \textit{signed BW cubed-complex} (or an \textit{SBW cubed-complex}, for short), and give a combinatorial description of the Dehn complex and the Aitchison complex as an SBW squared-complex and an SBW cubed-complex, respectively. The main theorem gives a necessary and sufficient condition for a given SBW cubed-complex to be isomorphic to the Aitchison complex of some alternating link exterior (Theorem~\ref{SCCA}). This implies a characterization of alternating link exteriors in terms of cubed complexes (Corollary~\ref{main-cor}). This paper is organized as follows. In Section~\ref{desc}, we give an intuitive description of the Aitchison complex and the Dehn complex following~\cite{DThurston, Yokota1}. In Section~\ref{SC}, we introduce the SBW squared-complex and the SBW cubed-complex, and describe the Dehn complex and the Aitchison complex in terms of the SBW squared-complex and the SBW cubed-complex, respectively. In Section~\ref{chara}, we prove the main theorem. The author would like to thank his supervisor, Makoto Sakuma, for valuable suggestions. He would also like to thank Naoki Sakata and Takuya Katayama for their support and encouragement. \section{An intuitive description of the Aitchison complexes \\ and the Dehn complexes for alternating links} \label{desc} In this section, we give an intuitive description of the Aitchison complexes and the Dehn complexes following~\cite{DThurston, Yokota1}. For a detailed description, see~\cite{16SY}. Let $\Gamma \subset S^{2}$ be a connected alternating link diagram and $L\subset S^{3}$ the alternating link represented by $\Gamma$. We pick two points $P_{+}$ and $P_{-}$, one in each component of $S^{3}\setminus S^{2}$. These points are regarded as lying above and below $S^{2}$, respectively. Identify $S^{3}\setminus \{P_{+}, P_{-}\}$ with $S^{2}\times \mathbb{R}$, and assume the following. The diagram $\Gamma$ is regarded as a $4$-valent graph in $S^{2}\times \{0\}$, $L\subset \Gamma \times [-1, 1]\subset S^{2}\times [-1, 1]$, and $L$ intersects $S^{2}\times \{0\}$ transversely in $2n$ points, where $n$ is the crossing number of $\Gamma$.
For each vertex $x$ of $\Gamma$, consider a square $s$ in $S^{2}=S^{2}\times \{0\}$ which forms a relative regular neighborhood of $x$ in $(S^{2}, \Gamma)$ such that the four vertices of $s$ lie in the four germs of edges around $x$. Let $x^{+}$ and $x^{-}$ be the points of $L$ which lie above and below $x$, respectively. Consider the two pyramids, $\Delta^{\pm}$, in $S^{3}$ which are obtained as the joins $x^{\pm}*s$. We may assume $\Delta^{\pm}\cap L=\{x^{\pm}\}$ and $\Delta^{+}\cap \Delta^{-}=s$. Note that $\Delta^{+}\cup \Delta^{-}$ is an octahedron which contains the crossing arc of $L$ determined by $x$ (see Figure~\ref{crosses}(b)). Let $\{\Delta_{1}^{\pm}, \dots, \Delta_{n}^{\pm}\}$ be the set of $2n$ pyramids in $S^{3}$ located around the vertices of $\Gamma$. Pick an edge $e$ of the graph $\Gamma$, and let $x_{1}$ and $x_{2}$ be the vertices of $\Gamma$ joined by $e$, such that the arc $\tilde{e}=L\cap (e\times [-1, 1])$ in $L$ joins $x_{1}^{+}$ and $x_{2}^{-}$ (see Figure~\ref{crosses}(a)). Let $a_{i}$ be the vertex of $s_{i}$ contained in $e$ ($i = 1,2$). Let $R$ be one of the two regions of $\Gamma$ in $S^{2}$ whose boundary contains the edge $e$, and let $b_{i}$ be the vertex of $s_{i}$ such that the edge $a_{i}b_{i}$ of $s_{i}$ is contained in $R$. \begin{figure} \begin{listliketab} \begin{longtable}[c]{rcp{0.6\textwidth}} (a) & \raisebox{-\height}{\includegraphics[width=180pt]{crosssquare2.pdf}}\\ (b) & \raisebox{-\height}{\includegraphics[width=230pt]{crosscube2.pdf}} \end{longtable} \end{listliketab} \caption{} \label{crosses} \end{figure} Let $w = \tilde{e} \cap S^{2}$ be the ``middle point'' of $\tilde{e}$, and consider the vertical line segments $wP_{+}$ and $wP_{-}$. Then we have the following relative isotopies in $(S^{3}, L)$ (see Figure \ref{crosses}(b)). \begin{enumerate} \item The edges $x_{1}^{+}a_{1}$ and $x_{1}^{+}b_{1}$ of the pyramid $\Delta_{1}^{+}$ are isotopic to the vertical line segments $wP_{-}$ and $wP_{+}$, respectively. \item The edges $x_{2}^{-}a_{2}$ and $x_{2}^{-}b_{2}$ of the pyramid $\Delta_{2}^{-}$ are isotopic to the vertical line segments $wP_{+}$ and $wP_{-}$, respectively. \item The edge $a_{1}b_{1}$ of $s_{1}$ is isotopic to an almost vertical line segment which starts at $P_{-}$, passes through a point in the interior of $R$ and reaches $P_{+}$. \item The edge $b_{2} a_{2}$ of $s_{2}$ is isotopic to an almost vertical line segment which starts at $P_{-}$, passes through a point in the interior of $R$ and reaches $P_{+}$. \end{enumerate} These isotopies determine an isotopy (and so a homeomorphism) from the face $x_{1}^{+}a_{1}b_{1}$ of $\Delta_{1}^{+}$ onto the face $x_{2}^{-}b_{2}a_{2}$ of $\Delta_{2}^{-}$. In this way, we obtain a pairing of faces of the octahedra $\{\Delta_{i}^{+}\cup \Delta_{i}^{-}\}_{i}$ and a homeomorphism between the faces in each pair. Let $O_{i}^{\pm}$ be the cubes obtained from the pyramids $\Delta_{i}^{\pm}$ by chopping off small regular neighborhoods of $x_{i}^{\pm}$. Then the above pairing and homeomorphisms determine gluing instructions for the cubes $\{O_{i}^{\pm}\}_{i}$. Let $\mathcal{A}(\G)$ be the resulting cubed complex and $\mathcal{D}(\G)$ the subcomplex of $\mathcal{A}(\G)$ obtained by gluing the squares $\{\Delta_{i}^{+}\cap \Delta_{i}^{-}\}_{i}$. Then we have the following. \begin{prop} For a connected alternating diagram $\Gamma$, $\mathcal{A}(\G)$ gives a cubical decomposition of the exterior $E(L)$ of the link $L$ represented by $\Gamma$.
Moreover, there is a deformation retraction of $\mathcal{A}(\G)$ onto $\mathcal{D}(\G)$, and so, $\mathcal{D}(\G)$ is a spine of $E(L)$. \end{prop} In fact, $\mathcal{D}(\G)$ is isotopic to the Dehn complex of the diagram $\Gamma$. (For the definition of the Dehn complex, see~\cite{BH, 13BH, Wise}.) We call $\mathcal{A}(\G)$ the \textit{Aitchison complex} of $\Gamma$. \section{Signed BW complexes}\label{SC} In this section, we introduce the concept of a \textit{signed BW squared-complex} and that of a \textit{signed BW cubed-complex}, and then describe the Dehn complexes and the Aitchison complexes for alternating links by using these concepts. By a \textit{signed BW square} (or an \textit{SBW-square}, for short), we mean the square $s:=[0,1]^{2}$ with the following information: \begin{enumerate} \item The vertices $(0, 0)$ and $(1, 1)$ are endowed with the sign $-$, and the vertices $(0, 1)$ and $(1, 0)$ are endowed with the sign $+$. \item The horizontal edges $I\times \{0\}$ and $I\times \{1\}$ are endowed with the color $B$ (Black), and the vertical edges $\{0\} \times I$ and $\{1\} \times I$ are endowed with the color $W$ (White). \end{enumerate} For an SBW-square $s$, we assume that each edge of $s$ is oriented so that the initial point and the terminal point have the signs $-$ and $+$, respectively. Now consider a set $S=\{s_{1}, \dots, s_{n}\}$ of $n$ copies of the SBW-square, and let $V_{+}(S)$ and $V_{-}(S)$, respectively, be the sets of positive vertices and negative vertices of the SBW-squares in $S$. For each bijection $\varphi \colon V_{+}(S)\to V_{-}(S)$, we construct a squared complex (i.e.\ two-dimensional cubed complex), $\mathcal{C}^{2}(S,\bi)$, as follows. Let $E_{B}(S)$ and $E_{W}(S)$, respectively, be the sets of black edges and white edges of the SBW-squares in $S$. Then $\varphi$ induces a bijection $\Bib \colon E_{B}(S)\to E_{B}(S)$ as follows. For a black edge $e \in E_{B}(S)$, let $v$ be the positive vertex which forms the terminal point of $e$. Then $\Bib(e)$ is defined to be the unique black edge whose initial vertex is $\varphi(v)$ (see Figure~\ref{scsquare2}). \begin{figure} \includegraphics[width=200pt]{scsquare2.pdf} \caption{} \label{scsquare2} \end{figure} Similarly, $\varphi$ induces a bijection $\Biw \colon E_{W}(S)\to E_{W}(S)$, such that $\Biw(e)$, for $e\in E_{W}(S)$, is the unique white edge whose initial vertex is the image of the terminal vertex of $e$ by $\varphi$. Thus we obtain a bijection $\Bi:=\Bib \sqcup \Biw$ from $E(S):=E_{B}(S)\sqcup E_{W}(S)$ to itself. For each $e \in E(S)$, let $f_{e}\colon e\to \Bi(e)$ be the unique orientation-preserving linear homeomorphism. We regard the family $\{f_{e}\colon e\to \Bi(e)\}_{e\in E(S)}$ as gluing instructions for the SBW-squares $S=\{s_{1}, \dots, s_{n}\}$, and denote the resulting squared complex by $\mathcal{C}^{2}(S,\bi)$. We call it the \textit{signed BW squared-complex} (or the \textit{SBW squared-complex}, for short) determined by the bijection $\varphi \colon V_{+}(S)\to V_{-}(S)$. \begin{rem}\upshape A signed BW squared-complex is a special case of a VH-complex introduced by Wise \cite{Wise}, which is defined to be a squared complex whose edges are partitioned into two classes V (vertical) and H (horizontal). Motivated by black/white checkerboard surfaces, we use W and B, instead of V and H. \end{rem} For the SBW squared-complex $\mathcal{C}^{2}(S,\bi)$, we define the associated SBW cubed-complex, $\mathcal{C}^{3}(S,\bi)$, as follows.
For the set $S=\{s_{1}, \dots, s_{n}\}$ of the SBW-squares, consider the set of the ``upper SBW-cubes'' $\{s_{i}\times [0,1]\}_{i=1}^{n}$ and the ``lower SBW-cubes'' $\{s_{i}\times [-1,0]\}_{i=1}^{n}$. Consider also the set of ``upper side-faces'' $F_{+}:=\{e\times [0,1]\}_{e\in E(S)}$ and the set of ``lower side-faces'' $F_{-}:=\{e\times [-1,0]\}_{e\in E(S)}$. Then the bijection $\Bi \colon E(S)\to E(S)$ induces the bijection $\BI \colon F_{-}\to F_{+}$ defined by $\BI(e\times [-1, 0])=\Bi(e)\times [0,1]$. Moreover, the linear homeomorphism $f_{e}\colon e\to \Bi(e)$ induces the linear homeomorphism $\hat{f}_{e}\colon e\times [-1,0] \to \Bi(e)\times [0,1]$ defined by $\hat{f}_{e}(x,t)=(f_{e}(x),-t)$. By gluing the side-faces of the cubes $\{e\times [-1,0]\cup e\times [0,1]\}_{e\in E(S)}$ by the family of homeomorphisms $\{\hat{f}_{e}\colon e\times [-1,0]\to \Bi(e)\times [0,1]\}_{e\in E(S)}$, we obtain a three-dimensional cubed complex. We denote it by $\mathcal{C}^{3}(S,\bi)$, and call it the \textit{signed BW cubed-complex} (or the \textit{SBW cubed-complex}, for short) determined by $\varphi$. It should be noted that $\mathcal{C}^{2}(S,\bi)$ is a subcomplex of $\mathcal{C}^{3}(S,\bi)$, and there is a natural deformation retraction of $\mathcal{C}^{3}(S,\bi)$ onto $\mathcal{C}^{2}(S,\bi)$. For an alternating link $L \subset S^{3}$ represented by a connected alternating diagram $\Gamma \subset S^{2}$, the Dehn complex $\mathcal{D}(\G)$ and the Aitchison complex $\mathcal{A}(\G)$ are identified with the SBW squared-complex $\mathcal{C}^{2}(S,\bi)$ and the SBW cubed-complex $\mathcal{C}^{3}(S,\bi)$, respectively, where $S$ and $\varphi$ are defined as follows. Consider the checkerboard coloring of $(S^{2}, \Gamma)$ such that the associated black surface for $L$ has a positive half-twist at each crossing (see Figure~\ref{fig8}(a)). \begin{figure} \centering \begin{subfigure}{0.3\columnwidth} \centering \includegraphics[width=\columnwidth]{fig8_2_check_slide.pdf} \caption*{(a)} \end{subfigure} \hspace{15mm} \begin{subfigure}{0.3\columnwidth} \centering \includegraphics[width=\columnwidth]{fig8_2_dehn_slide.pdf} \caption*{(b)} \end{subfigure} \caption{} \label{fig8} \end{figure} Let $\{x_{1}, \dots, x_{n}\}$ be the vertex set of $\Gamma$ and let $S =\{s_{1}, \dots, s_{n}\}$ be the set of SBW-squares as illustrated in Figure~\ref{fig8}(b). To be precise, \begin{enumerate} \item $s_{i}$ is a square in $S^{2}$ which forms a relative regular neighborhood of $x_{i}$ in $(S^{2}, \Gamma)$, and the vertices of $s_{i}$ are contained in $\Gamma$. \item Each vertex of $s_{i}$ has the sign $+$ or $-$ according to whether it lies in an underpass or an overpass. \item Each edge of $s_{i}$ is colored $B$ or $W$ according to whether it lies in a black region or a white region. \end{enumerate} Observe that $\Gamma \setminus \bigcup_{i=1}^{n}\inter (s_{i})$ is a disjoint union of arcs, the boundary of each of which consists of a vertex in $V_{+}(S)$ and a vertex in $V_{-}(S)$. This determines a bijection $\varphi \colon V_{+}(S)\to V_{-}(S)$. Then the following proposition is obvious from the construction of the Aitchison complex. \begin{prop}\label{SCSD} Under the above setting, the SBW squared-complex $\mathcal{C}^{2}(S,\bi)$ and the SBW cubed-complex $\mathcal{C}^{3}(S,\bi)$ are isomorphic to the Dehn complex $\mathcal{D}(\G)$ and the Aitchison complex $\mathcal{A}(\G)$, respectively.
\end{prop} \section{Main results}\label{chara} Proposition~\ref{SCSD} shows that the Aitchison complex $\mathcal{A}(\G)$ of a connected alternating diagram $\Gamma$ can be described as the SBW cubed-complex $\mathcal{C}^{3}(S,\bi)$. In this section, we prove Theorem~\ref{SCCA}, which gives a characterization of the Aitchison complexes of connected alternating diagrams among the SBW cubed-complexes. Let $S=\{s_{1}, \dots, s_{n}\}$ be a set of SBW-squares, and let $\varphi \colon V_{+}(S)\to V_{-}(S)$ be a bijection, where $V_{\pm}(S)$ are the sets of positive and negative vertices of the SBW-squares in $S$. Let $\Bi =\Bib \sqcup \Biw$ be the bijection from $E(S)=E_{B}(S)\sqcup E_{W}(S)$ to itself determined by $\varphi$. \begin{thm}\label{SCCA} Under the above setting, the SBW cubed-complex $\mathcal{C}^{3}(S,\bi)$ is isomorphic to the Aitchison complex $\mathcal{A}(\G)$ of a connected alternating diagram $\Gamma$, if and only if the bijection $\Bi$ satisfies \begin{equation*} |E(S)/\langle \Bi \rangle |=|S|+2, \end{equation*} where $E(S)/\langle \Bi \rangle$ denotes the set of orbits of the cyclic group action on $E(S)$ induced by $\Bi$, and $|\cdot|$ denotes the cardinality of a set. \end{thm} \begin{proof} We first prove the only if part. Suppose that an SBW cubed-complex $\mathcal{C}^{3}(S,\bi)$ is isomorphic to the Aitchison complex $\mathcal{A}(\G)$ of a connected alternating diagram $\Gamma \subset S^{2}$. Then we may assume $S$ and $\varphi$ are constructed from $\Gamma$ as in Section~\ref{SC}. Observe that there is a one-to-one correspondence between $E_{B}(S)/\langle \Bi \rangle$ (resp.\ $E_{W}(S)/\langle \Bi \rangle$) and the set of black (resp.\ white) regions of $\Gamma$. Consider the cell decomposition of the projection plane $S^{2}$ obtained from $\Gamma$. Then the above observation implies that $|E(S)/\langle \Bi \rangle |$ is equal to the number of 2-cells of the cell decomposition. Since the cell decomposition has $n$ vertices and each vertex has degree four, the number of $1$-cells is equal to $2n$, where $n = |S|$. Hence, we have \[2=\chi(S^{2})=n-2n+|E(S)/\langle \Bi \rangle |.\] This implies $|E(S)/\langle \Bi \rangle |=|S|+2$, completing the proof of the only if part. Next, we prove the if part. Suppose $|E(S)/\langle \Bi \rangle |=|S|+2$. By using this condition, we construct a connected alternating diagram $\Gamma$ such that $\mathcal{C}^{3}(S,\bi) \cong \mathcal{A}(\G)$. Consider the two-dimensional complex, $X$, obtained from the set $S=\{s_{1}, \dots, s_{n}\}$ of SBW-squares by attaching a 1-cell $\gamma =\langle v, \varphi(v) \rangle$ for each $v\in V_{+}(S)$. We now attach black/white 2-cells to $X$ as follows. Consider the action of the cyclic group $\langle \Bib \rangle$ on $E_{B}(S)$, and pick one of its orbits $\{e, \Bib(e), \dots, \Bib^{k-1}(e)\}$, where $\Bib^{k}(e)=e$. Then for each $i\in \{0, \dots, k-1\}$, the terminal vertex of $\Bib^{i}(e)$ is mapped by $\varphi$ to the initial vertex of $\Bib^{i+1}(e)$, and so there is an edge, $\gamma_{i}$, of $X$ joining these two points. Then, \[e+\gamma_{0}+\Bib(e)+\gamma_{1}+\dots +\Bib^{k-1}(e)+\gamma_{k-1}\] determines a simple 1-cycle, where $\gamma_{i}$ is given a natural orientation. We attach a black 2-cell to $X$ along the 1-cycle. Similarly, each orbit of the action of $\langle \Biw \rangle$ on $E_{W}(S)$ determines a simple 1-cycle, and we attach a white 2-cell to $X$ along the 1-cycle. Let $M$ be the two-dimensional cell complex obtained from $X$ by attaching black/white 2-cells in this way.
We can easily observe that $M$ is an orientable $2$-manifold. To compute the Euler characteristic $\chi(M)$, observe the following. \begin{enumerate} \item The number of vertices of $M$ is $4n$ with $n=|S|$, since each vertex is contained in an SBW-square. \item The edge set of $M$ consists of $4n$ edges of the SBW-squares and $2n$ ``connecting'' edges. So, the number of edges of $M$ is equal to $6n$. \item The face set of $M$ consists of the $n$ squares, $|E_{B}(S)/\langle \Bib \rangle |$ black 2-cells and $|E_{W}(S)/\langle \Biw \rangle |$ white 2-cells. So, the number of faces of $M$ is $n + | E(S)/\langle \Bi \rangle |$. \end{enumerate} Hence, \[\chi(M)=4n-6n+(n+|E(S)/\langle \Bi \rangle |)=-n+|E(S)/\langle \Bi \rangle |, \] and so, by the assumption, it is equal to $2$. Therefore, $M$ is homeomorphic to $S^{2}$. In each SBW-square, add an overpass connecting the two negative vertices and an underpass connecting the two positive vertices. The union of the connecting edges, overpasses and underpasses gives a connected link diagram $\Gamma$, which is clearly alternating. Moreover, it is obvious from the construction that $\mathcal{C}^{3}(S,\bi)$ is isomorphic to $\mathcal{A}(\G)$. \end{proof} \begin{cor}\label{main-cor} A compact $3$-manifold $M$ is homeomorphic to the exterior of an alternating link $L$ represented by a connected alternating diagram, if and only if $M$ is homeomorphic to the underlying space of an SBW cubed-complex $\mathcal{C}^{3}(S,\bi)$ such that $|E(S)/\langle \Bi \rangle |=|S|+2$. \end{cor} \begin{rem}\upshape If the identity in Theorem~\ref{SCCA} is not satisfied, then the surface $M$ in the proof is a closed orientable surface of genus $\ge 1$, and $\Gamma$ is an alternating diagram in the surface $M$. In this case the underlying space of $\mathcal{A}(\G)$ is homeomorphic to the \textit{Dehn space} of the link $L$ in $M\times [-1, 1]$ represented by the diagram $\Gamma$, namely the space obtained from the exterior of $L$ by coning off $M \times \{\pm 1\}$ (see~\cite{13BH}). We note that the ``Dehn complexes'' of these spaces and related spaces are studied extensively in the works \cite{03HR, 12Ha, 13BH} by Harlander, Rosebrock, and Byrd. \end{rem}
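The criterion of Theorem~\ref{SCCA} is easy to test by machine. The following Python sketch (illustrative only; the encoding of vertices and edges is our own bookkeeping for the conventions fixed in Section~\ref{SC}) computes the induced permutations $\Bib$ and $\Biw$ from a bijection $\varphi$ and counts their orbits.
\begin{verbatim}
from itertools import product

def cycle_count(perm):
    """Number of orbits of a permutation given as a dictionary."""
    seen, orbits = set(), 0
    for start in perm:
        if start in seen:
            continue
        orbits += 1
        x = start
        while x not in seen:
            seen.add(x)
            x = perm[x]
    return orbits

def satisfies_criterion(phi, n):
    """Test |E(S)/<Phi>| == |S| + 2.  Vertices are pairs (i, c): for the
    positive vertices of square i, c = 0, 1 encodes (1,0), (0,1); for the
    negative vertices, c = 0, 1 encodes (0,0), (1,1).  Black edges (i, 0/1)
    are bottom/top, white edges (i, 0/1) are left/right, oriented - to +."""
    # terminal (+) vertex of black edge (i, b) is (i, b); the black edge
    # starting at the negative vertex (j, q) is (j, q)
    black = {(i, b): phi[(i, b)] for i, b in product(range(n), (0, 1))}
    # terminal (+) vertex of white edge (i, w) is (i, 1 - w); the white
    # edge starting at the negative vertex (j, q) is (j, q)
    white = {(i, w): phi[(i, 1 - w)] for i, w in product(range(n), (0, 1))}
    return cycle_count(black) + cycle_count(white) == n + 2

# one SBW-square with phi exchanging the two diagonals:
# 1 black orbit + 2 white orbits = 3 = |S| + 2
print(satisfies_criterion({(0, 0): (0, 1), (0, 1): (0, 0)}, 1))   # True
\end{verbatim}
The accepted example is consistent with the count for a one-crossing diagram, which has three regions.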
\section{Introduction} Consider a sequence $S=(s_1,\ldots,s_N)$ with $s_i=\pm 1$. The autocorrelations of $S$ are defined as \begin{equation} \label{eq:def-Ck} C_k(S) = \sum_{i=1}^{N-k} s_i s_{i+k} \end{equation} for $k=0,1,\ldots,N-1$, and the ``energy'' of $S$ is defined as the sum of the squares of all off-peak correlations, \begin{equation} \label{eq:def-E} E(S) = \sum_{k=1}^{N-1} C_k^2(S)\,. \end{equation} The \emph{low-autocorrelation binary sequence} (LABS) problem is to find a sequence $S$ of given length $N$ that minimizes $E(S)$ or, equivalently, maximizes the \emph{merit factor} \begin{equation} \label{eq:def-merit} F(S) = \frac{N^2}{2E(S)}\,. \end{equation} The LABS problem arises in practical applications in communications engineering, where low autocorrelation sequences are used for example as modulation pulses in radar and sonar ranging \cite{golay:72,beenker:etal:85,pasha:etal:00}. A particularly exciting application is the interplanetary radar measurement of spacetime curvature \cite{shapiro:etal:68}. In mathematics, the LABS problem appears in terms of the Littlewood problem \cite{littlewood:68,borwein:02}, the problem of constructing polynomials with coefficients $\pm1$ that are ``flat'' on the unit circle in the complex plane. In statistical physics, $E(S)/N$ can be interpreted as the energy of $N$ interacting Ising spins $s_i = \pm 1$. This is the Bernasconi model \cite{bernasconi:87}. It has long-range 4-spin interactions and is completely deterministic, i.e.\ there is no explicit or quenched disorder like in spin-glasses. Ne\-ver\-the\-less the ground states are highly disordered -- quasi by definition. This self-induced disorder closely resembles the situation in real glasses. In fact, the Bernasconi model exhibits features of a glass transition like a jump in the specific heat and slow dynamics and aging \cite{krauth:mezard:95}. A clever variation of the replica method allows an ana\-ly\-ti\-cal treatment of the Bernasconi model in the high-temperature regime \cite{bouchaud:mezard:94,marinari:parisi:ritort:94a}. For the low-temperature re\-gime, analytical results are rare -- especially the ground states are not known. Due to this connection to physics we refer to the $s_i$ as spins throughout the paper. These examples illustrate the importance of the LABS problem in various fields. For more applications and the history of the problem we refer to existing surveys \cite{jedwab:survey,hoholdt:06}. In this contribution we focus on algorithms to solve the LABS problem. But before we discuss algorithms, we will give a brief survey of what is known about solutions. \section{What is known} The correlation $C_k$ is the sum of $N-k$ terms $\pm 1$, hence the value of $|C_k|$ is bounded from below by \begin{equation} \label{eq:bk} |C_k| \geq b_k = (N-k) \bmod 2\,. \end{equation} A binary sequence with $|C_k| = b_k$ is called a Barker sequence \cite{barker:53}. The merit factor of a Barker sequence is \begin{equation} \label{eq:merit-barker} F_N^{\text{Barker}} = \cases{ N & for $N$ even, \\ \frac{N^2}{N-1} & for $N$ odd. } \end{equation} If it exists, a Barker sequence is a solution of the LABS problem. Barker sequences exist for $N=2,3,4,5,7,11$ and $13$, but probably for no other values of $N$. In fact it can be proven that there are no Barker sequences for odd values of $N>13$ \cite{turyn:storer:61,schmidt:willms:15}. For even values of $N$, the existence of Barker sequences can be excluded for $4 < N\leq\numprint{2e30}$ \cite{leung:schmidt:12}.
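The basic quantities \eqref{eq:def-Ck}--\eqref{eq:def-merit} are trivial to compute; the following Python sketch (a short illustration, not the optimized code used in our searches) transcribes them directly and checks the length-13 Barker sequence.
\begin{verbatim}
import numpy as np

def autocorrelations(s):
    """Aperiodic autocorrelations C_k for k = 1, ..., N-1."""
    s = np.asarray(s)
    N = len(s)
    return np.array([np.dot(s[:N - k], s[k:]) for k in range(1, N)])

def energy(s):
    """E(S), the sum of the squared off-peak correlations."""
    return int(np.sum(autocorrelations(s) ** 2))

def merit_factor(s):
    """F(S) = N^2 / (2 E(S))."""
    return len(s) ** 2 / (2.0 * energy(s))

# Barker sequence of length 13
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
print(energy(barker13), merit_factor(barker13))   # 6  14.0833...
\end{verbatim}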
Let $F_N$ denote the maximum merit factor for sequences of length $N$. It is an open problem to prove (or disprove) that $F_N$ is bounded. For Barker sequences, $F_N\propto N$, and the same is true more generally for sequences such that $|C_k| \leq C^\star$ for some constant $C^\star$ that does not depend on $N$ or $k$. The common belief is that no such sequences exist and that $F_N$ is bounded by some constant. A non-rigorous argument for $F_N$ being bounded was given by Golay \cite{golay:82}. Assuming that the correlations $C_k$ are independent, he argued that asymptotically $F_N\lesssim 12.3248$, or more precisely, that \begin{equation} \label{eq:golay-bound} F_N \lesssim \frac{12.3248}{(8\pi N)^{\frac{3}{2N}}}\,. \end{equation} There are some rigorous results for lower bounds on $F_N$. The mean value of $1/F$, taken over all binary sequences of length $N$, is $(N-1)/N$ \cite{newmann:byrnes:90}. Hence we expect $F_N \geq 1$. In fact one can explicitly construct sequences for all values of $N$ that have merit factors larger than $1$. The current record is set by so-called appended rotated Legendre sequences with an asymptotic merit factor of $6.342061\ldots$ \cite{jedwab:katz:schmidt:13,jedwab:katz:schmidt:13a}. \begin{figure} \centering \includegraphics[width=\columnwidth]{records} \caption{Largest known merit factors. Black symbols are exact solutions from exhaustive searches, grey symbols are lower bounds from heuristic or partial searches. The solid line is the rigorous asymptotic lower bound $6.342061\ldots$ from appended rotated Legendre sequences \cite{jedwab:katz:schmidt:13a}, the dashed line is Golay's non-rigorous asymptotic upper bound \eqref{eq:golay-bound}. Data from the tables in Section~\ref{sec:results} and from \cite{boskovic:etal:14} and references therein.} \label{fig:records} \end{figure} Beyond that, our knowledge about solutions of the LABS problem is based on computer searches. Figure~\ref{fig:records} shows the best merit factors known for $N < 300$. For small values of $N$, we can exhaustively search through all sequences to find the sequences with the maximum merit factor $F_N$. An evaluation of $E(S)$ from scratch takes time $\Theta(N^2)$, but one can loop through all sequences such that any two successive sequences differ by exactly one spin, an arrangement known as Gray code \cite{savage:97}. The corresponding update of $E(S)$ takes only linear time, and the total time complexity of exhaustive enumeration is then given by $\Theta(N\,2^N)$. In this paper we will discuss a class of exact enumeration algorithms with time complexity $\Theta(N\,b^N)$ with $b < 2$ that we used to solve the LABS problem up to $N\leq 66$. For larger values of $N$ exhaustive enumeration is not feasible and one has to resort to either partial enumerations or heuristic searches. In both cases one obtains sequences with large but not necessarily maximal merit factors. Partial enumerations are exhaustive enumerations of a well-defined subset of sequences. A particularly promising subset is given by skewsymmetric sequences of odd length $N=2n-1$. These sequences satisfy \begin{equation} \label{eq:skew-symmetry} s_{n+\ell} = (-1)^\ell s_{n-\ell} \qquad (\ell=1,\ldots,n-1)\,, \end{equation} which implies that $C_k=0$ for all odd $k$. The restriction to skewsymmetric sequences reduces the size of the search space from $2^N$ to $2^{N/2}$.
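A skewsymmetric sequence is conveniently generated from its first $n$ elements. Continuing the Python sketch above (again illustrative only), the rule \eqref{eq:skew-symmetry} reads:
\begin{verbatim}
def skewsymmetric(first_half):
    """Extend s_1, ..., s_n to length N = 2n-1 using the
    skewsymmetry rule s_{n+l} = (-1)**l * s_{n-l}."""
    n = len(first_half)
    s = list(first_half) + [0] * (n - 1)
    for l in range(1, n):
        s[n + l - 1] = (-1) ** l * s[n - l - 1]   # shift to 0-based indices
    return s

# The Barker sequence of length 13 happens to be skewsymmetric:
print(skewsymmetric([1, 1, 1, 1, 1, -1, -1]) == barker13)   # True
\end{verbatim}
One can also verify with the autocorrelation routine above that all odd-indexed correlations of such a sequence vanish.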
Sequences with maximum merit factor are often, but not always, skewsymmetric: of the 31 LABS problems for odd $N\leq 65$, 21 have skewsymmetric solutions (Section \ref{sec:results}). For the other values of $N$, skewsymmetric sequences provide lower bounds for $F_N$. We used our enumeration algorithm to compute the optimal skewsymmetric sequences for all $N\leq 119$. Enumerative algorithms (complete or partial) are limited to small values of $N$ by the exponential size of the search space. Heuristic algorithms use some plausible rules to locate good sequences more quickly. Examples are simulated annealing, evolutionary algorithms, tabu search---the list of heuristic algorithms that have been applied to the LABS problem is much longer, see \cite{groot:etal:92}. The state of the art is defined by the solvers described in \cite{boskovic:etal:14}, which have found many of the merit factors shown in Figure \ref{fig:records}. The figure shows a significant drop of the merit factors for $N > 200$. This is generally attributed to the fact that even sophisticated search heuristics fail for LABS problems of larger size. This hardness has earned the LABS problem a place in CSPLIB, a library of test problems for constraint solvers \cite[problem 005]{csplib}. \section{Algorithm} According to the current state of knowledge, the only way to get exact solutions for the LABS problem is exhaustive search. With a search space that grows like $2^N$, this approach is limited to rather small values of $N$, however. The exponential complexity calls for a method to restrict the search to smaller subspaces without missing the exact solutions. This is where branch\&bound comes in, a powerful and versatile method from combinatorial optimization \cite{noc}. All exact solutions of the LABS problem for $N>32$ have been obtained with variations of a branch\&bound algorithm proposed in \cite{mertens:96a} that reduces the size of the search space from $2^N$ to $b^N$ with $b < 2$. In this section we review these algorithms and we present a new variant which has $b=1.72$, the best value to date. The idea of branch\&bound is to solve a discrete optimization problem by breaking up its feasible set into successively smaller subsets ({\em branch}), calculating bounds on the objective function value over each subset, and using them to discard certain subsets from further consideration ({\em bound}) \cite{noc}. The procedure ends when each subset has either produced a feasible solution, or has been shown to contain no better solution than the one already in hand. The best solution found during this procedure is a global optimum. The goal is of course to discard many subsets as early as possible during the branching process, i.e.\ to discard most of the feasible solutions before actually evaluating them. The success of this approach depends on the branching rule and, even more, on the quality of the bound; the resulting savings can be quite substantial. For the LABS problem we specify a set of feasible solutions by fixing the $m$ leftmost and the $m$ rightmost spins of the sequence. The $N-2m$ centre spins are not specified, i.e.\ the set contains $2^{N-2m}$ feasible solutions. Given a feasible set specified by the $2m$ outer elements, four smaller sets are created by fixing the elements $s_{m+1}$ and $s_{N-m}$ to $\pm 1$ and increasing $m$ by $1$. This is applied recursively until all elements have been fixed. This is the branching rule introduced by the original branch\&bound algorithm \cite{mertens:96a}, and it is shared by all later versions.
It has the nice property that the long-range correlations are fixed early in the recursion process. Specifically, if the $m$ left- and rightmost spins are fixed, all $C_k$ for $k\geq N-m$ are fixed. In addition, this branching rule supports the computation of lower bounds very well, as we will see below. The branching process can be visualized as a tree in which nodes represent subsets. Each node has four children corresponding to the four possible ways to set the two spins in the $(m+1)$th shell. The branch\&bound algorithm traverses this tree and tries to exclude as many branches as possible by computing a bound on the energy that can be achieved in a branch. The number of nodes actually visited is a measure of quality for the bound. \subsection{Bounds} Bounds are usually obtained by replacing the original problem over a given subset with an easier (relaxed) problem such that the solution value of the latter bounds that of the former. A good relaxation is one that is easy and fast to solve and yields strong lower bounds. Most often these are conflicting goals. An obvious relaxation of the LABS problem is given by the problem of minimizing all values $C_k^2$ \emph{independently}. Hence we replace the original problem \begin{equation} \label{eq:original} E_{\text{min}} = \min_{\text{free}}\left(\sum_{k=1}^{N-1} C_k^2\right) \end{equation} by the relaxed version \begin{equation} \label{eq:relaxation} E_{\text{min}}^* = \sum_{k=1}^{N-1} \min_{\text{free}}(C_k^2) = \sum_{k=1}^{N-1} \left(\min_{\text{free}}(|C_k|)\right)^2 \leq E_{\text{min}}\,, \end{equation} where ``free'' refers to the $N-2m$ center elements of $s$ that have not yet been assigned. All previous branch\&bound approaches to LABS considered $E_{\text{min}}^*$ to be too expensive to compute and replaced it by a weaker, but easily computable bound $E_b \leq E_{\text{min}}^*$ obtained from bounding $\min_{\text{free}} |C_k|$ from below. \subsubsection{The original bound.} In the original algorithm \cite{mertens:96a} the bound $E_b$ is computed by assigning (arbitrary) values to all free spins, thereby fixing the values for all correlations to $C_k^\star$. Since flipping a free spin can decrement $|C_k|$ at most by $2$, a lower bound for $|C_k|$ is given by \begin{equation} \label{eq:C-bound-mertens} \min_{\text{free}} |C_k| \geq \max(b_k, |C_k^\star|-2\hat{f}_k)\,, \end{equation} where \begin{equation} \label{eq:spin-fk} \hat{f}_k = \cases{ 0 & if $k \geq N-m$,\\ 2(N-m-k) & if $N/2 \leq k < N-m$,\\ N-2m & otherwise } \end{equation} denotes the number of free spins that appear in $C_k$ and $b_k$ is given by \eqref{eq:bk}. The running time of this algorithm scales like $\bigo{1.85^N}$. A parallelized version of the algorithm was used to solve the LABS problem up to $N=60$ \cite{bauke:labs}. \subsubsection{The Prestwich bound.} The quality of the bound \eqref{eq:C-bound-mertens} depends on the values of $C_k^\star$ and hence on the arbitrary values assigned to the free spins. In principle, these values should be chosen to maximize $|C_k^\star|$, but this requires the solution of another optimization problem for each bound. This can be avoided by considering free \emph{products} instead of free spins: a product $s_i s_{i+k}$ is free if $s_i$ or $s_{i+k}$ is a free spin. Products $s_i s_{i+k}$ in which both spins are fixed are called fixed. Let $c_k(s)$ denote the sum of all fixed products that contribute to $C_k$. Note that $c_k = C_k$ for $k \geq N-m$.
Then \begin{equation} \label{eq:C-bound-prestwich} \min_{\text{free}} |C_k| \geq \max(b_k, |c_k(s)|-f_k)\,, \end{equation} where \begin{equation} \label{eq:product-fk} f_k = (N-k) - 2\max(m-k,0) - \max(k-N+2m,0) \end{equation} denotes the number of free products in $C_k$, and $b_k$ is given by \eqref{eq:bk}. The reasoning behind \eqref{eq:C-bound-prestwich} is that the sum $c_k$ of fixed products may be offset by the sum of free products, which is no greater than $f_k$. If $|c_k(s)| > f_k$ then $|c_k(s)|-f_k$ is a lower bound for $|C_k|$. If $|c_k(s)| \leq f_k$, this bound is useless and we have to resort to the trivial lower bound $|C_k(s)|\geq b_k$. The bound \eqref{eq:C-bound-prestwich} was used by Prestwich to prune parts of the search space in a local search algorithm for the LABS problem \cite{prestwich:07}. For his recent branch\&{}bound algorithm for LABS, Prestwich \cite{prestwich:13} improved that bound by taking into account some of the interactions between fixed and free spins. Suppose that $s_i$ is a free spin while $s_{i-k}$ and $s_{i+k}$ are fixed. If $s_{i-k}\neq s_{i+k}$, the contributions \begin{equation} \label{eq:cancellations} s_{i-k}s_i + s_i s_{i+k} = s_i (s_{i-k}+s_{i+k}) \end{equation} of $s_i$ to $C_k$ are zero, no matter what the value of $s_i$ is. For each such \emph{cancellation}, the number $f_k$ in \eqref{eq:C-bound-prestwich} can be decreased by two. For $s_{i-k}=s_{i+k}$, the contribution of the term \eqref{eq:cancellations} is $\pm 2$, a situation referred to as \emph{reinforcement} by Prestwich. Now, if all free contributions to $C_k$ are either cancellations or reinforcements, then $f_k$ must be even. If the sum of the fixed contributions $c_k$ is also even and $c_k\bmod 4 \neq f_k\bmod 4$, we can set $b_k=2$ in \eqref{eq:C-bound-prestwich}. With this bound, Prestwich reports a running time that scales like $\bigo{1.80^N}$. Since Prestwich didn't parallelize his algorithm, this estimate was based on enumerations only up to $N\leq 44$. \subsubsection{The Wiggenbrock bound.} A different bound was used by Wiggenbrock in his branch\&{}bound algorithm \cite{wiggenbrock:10}. Flipping a spin changes the sum $C_k+C_{N-k}$ by $0$ or $\pm 4$ because every spin occurs twice in that sum. Taking the all $+1$ configuration as a reference, we get \begin{equation} \label{eq:willms} (N - C_{N-k}) \equiv C_k \pmod{4}\,. \end{equation} For $k \geq N-m$, the $C_k$ are completely fixed. For other values of $k$, the correlations can be bounded by \begin{equation} \label{eq:C-bound-wiggenbrock} |C_k| \geq \cases{ |(N-C_{N-k}) \bmod 4| & if $k \leq m$, \\ b_k & if $m < k < N-m$, } \end{equation} where we assumed the residue system $\{-1,0,1,2\}$ for the mod 4 operation. The Wiggenbrock bound seems to be weak since it bounds $|C_k|$ by small numbers $0,1,2$ only. Yet it is surprisingly efficient: Wiggenbrock reported a running time of $\bigo{1.79^N}$, slightly better than the scaling of Prestwich's bound. Using a parallelized implementation and running it on 18 GPUs, Wiggenbrock solved the LABS problem for $N\leq 64$ \cite{wiggenbrock:10}. \subsubsection{The combined bound.} High up in the search tree, where $m$ is small, the contributions of the free products overcompensate the fixed contributions and the Prestwich bound \eqref{eq:C-bound-prestwich} reduces to $b_k$. The Wiggenbrock bound \eqref{eq:C-bound-wiggenbrock} provides a better bound in exactly these situations.
The fact that it yields such a good running time indicates that even this weak bound is efficient because it applies high up in the search tree: a branch, that can be pruned at this level, is usually very large. The Prestwich bound with the free products applies for larger values of $m$, on the other hand. An obvious idea is to combine these complementary bounds and use \begin{equation} \label{eq:C-bound-combined} |C_k| \geq \cases{ \max \big(|(N-C_{N-k}) \bmod 4|, |c_k(s)|-f_k\big) & if $k \leq m$, \\ \max (b_k, |c_k(s)|-f_k) & if $m < k < N-m$, } \end{equation} as a bound. \begin{figure} \centering \includegraphics[width=\columnwidth]{calls.pdf}\\ \includegraphics[width=\columnwidth]{time_per_call.pdf} \caption{Branch\&{}bound algorithm with the combined bound \eqref{eq:C-bound-combined} (circles) and with the tight bound \eqref{eq:min-Ck} (triangles). The number of recursive calls (top) scales like $\Theta(b^N)$. A numerical fit to the existing data yields $b=1.729$ (solid line) for the combined bound and $b=1.727$ (dashed line) for the tight bound, but this small difference is caused by a non-exponential reduction of the number of calls, see Figure~\ref{fig:ratio}. The CPU time per call (bottom) is linear in $N$ for both bounds. \label{fig:performance}} \end{figure} When we measure the number of recursive calls (i.e. the number of nodes visited in the search tree) and the CPU time per call (Figure \ref{fig:performance}), we find that the running time of the branch\&bound algorithm with the combined bound \eqref{eq:C-bound-combined} scales like $\Theta(N\,1.729^N)$. \subsubsection{The tight bound.} So far, all bounds are lower bounds for $\min|C_k|$ that yield only a lower bound for $E_{\text{min}}^*$, which is already a lower bound for $E_{\text{min}}$. We will now show that $\min|C_k|$ and hence $E_{\text{min}}^*$ can be computed exactly, thereby avoiding the ``second relaxation'' to $E_b$ and providing the best lower bound possible from the Ansatz \eqref{eq:relaxation}. We write $C_k$ as \begin{equation} \label{eq:c-plus-u} C_k = c_k + u_k\,, \end{equation} where $c_k$ is the sum of all fixed terms $s_i s_{i+k}$ (as above) and $u_k$ sums up all terms in which at least one spin is free. Let \begin{equation} \label{eq:granularity} g_k = \cases{ 4 & if $k \leq m$, \\ 2 & otherwise. } \end{equation} We will show below that there exist easy-to-compute integers $U_k^{\text{min}}$ and $U_k^{\text{max}}$ such that the free contribution $u_k$ can take on all values in \begin{equation} \label{eq:u-values} \{U_k^{\text{min}}, U_k^{\text{min}}+g_k, U_k^{\text{min}}+2g_k, \ldots, U_k^{\text{max}}-g_k, U_k^{\text{max}}\}\,. \end{equation} All we need to know are the values of $c_k$, $U_k^{\text{min}}$ and $U_k^{\text{max}}$ to compute \begin{equation} \label{eq:min-Ck} \min |C_k| = \cases{ c_k + U_k^{\text{min}} & if $-c_k \leq U_k^{\text{min}}$, \\ c_k+U_k^{\text{max}} & if $-c_k \geq U_k^{\text{max}}$, \\ |(-c_k-U_k^{\text{min}}) \bmod g_k| & otherwise, } \end{equation} and then $E_{\text{min}}^* = \sum_k (\min|C_k|)^2$. To prove \eqref{eq:granularity} and \eqref{eq:u-values}, we rearrange the sum \eqref{eq:def-Ck} for $C_k$ a little bit. For $C_3$ and $N=12$, for example, we can write \begin{eqnarray*} C_3 &=& s_1s_4 + s_2s_5 + s_3s_6 + s_4s_7 + s_5s_8 + s_6s_9 + s_7s_{10} + s_8s_{11} + s_9s_{12}\\ &=& (s_1s_4 + s_4s_7+s_7s_{10}) + (s_2s_5+s_5s_8+s_8s_{11}) + (s_3s_6+s_6s_9+s_9s_{12})\,. \end{eqnarray*} We call every sum in parentheses a \emph{chain}.
For general values of $k$ and $N$ we write \begin{equation} \label{eq:Ck-chains} C_k = \sum_{j=1}^k \sum_{q=1}^{\lfloor\frac{N-j}{k}\rfloor} s_{j+(q-1)k} s_{j+qk}\,. \end{equation} The chains are the sums over $q$. For $k<N-m$, each chain contains a subchain of free terms \begin{equation} \label{eq:chain} s_a s_{a+k} + s_{a+k}s_{a+2k} + \cdots + s_{b-k} s_b\, \end{equation} where only the spins $s_a$ and $s_b$ may be fixed. We refer to these subchains as free chains. The sum of all free chains equals $u_k$. Let us first prove the ``granularity'' \eqref{eq:granularity}. If both spins $s_a$ and $s_b$ are fixed, then every free spin appears exactly in two terms, and flipping any free spin changes the sum \eqref{eq:chain} by $0$ or $\pm 4$. If either $s_a$ or $s_b$ (or both) are free, then flipping this spin changes the sum \eqref{eq:chain} by $\pm 2$. Hence the granularity $g_k$ is $4$ if and only if all contributing free chains have both $s_a$ and $s_b$ fixed, and $2$ otherwise. Now $s_a$ can be free and the leftmost member of a free chain if and only if $a>m$ and if it has no left partner, i.e. if $a-k \leq 0$. Together, both conditions imply $k>m$. Hence by argumentum e contrario, $k\leq m$ implies that $s_a$ is fixed and, by similar reasoning, also that $s_b$ is fixed. This proves that $g_k = 4$ for $k\leq m$. If $k > m$, we only need to find a single free chain that starts with a free spin. Consider the spin $s_{m+1}$: it is free and it has no left neighbor. Hence it is the leftmost spin of a free chain that contributes to $u_k$. Therefore $g_k=2$ for $k > m$. Note that for $k>m$ there can be free chains with both $s_a$ and $s_b$ fixed. All we have proven is that for $k > m$ this can't happen for all free chains. Now we will prove \eqref{eq:u-values}. Let $n$ denote the number of terms $s_j s_{j+k}$ in a free chain \eqref{eq:chain}, and let $u$ denote its value. If $s_a$ or $s_b$ (or both) are free, then $u$ can take on all values between $-n$ and $n$ with granularity $2$: \begin{equation} \label{eq:u-values-free} u \in [-n, -n+2, \ldots, n-2, n] \qquad \mbox{($s_a$ or $s_b$ free).} \end{equation} If both spins $s_a$ and $s_b$ are fixed, the granularity is $4$ and the range of values varies with $s_a$, $s_b$ and the parity of $n$ according to \begin{equation} \label{eq:u-values-fixed} u \in \cases{ [-n,\ldots,n] & if $s_a=s_b$ and $n$ even, \\ [-(n-2),\ldots,n] & if $s_a=s_b$ and $n$ odd, \\ [-(n-2),\ldots,(n-2)] & if $s_a\neq s_b$ and $n$ even, \\ [-n,\ldots,(n-2)] & if $s_a\neq s_b$ and $n$ odd. } \end{equation} This can be proven by induction on $n$. For $n$ odd, the base case is $n=3$, i.e. \begin{displaymath} u = s_a s_{a+k} + s_{a+k} s_{b-k} + s_{b-k}s_b\,. \end{displaymath} The value of $u$ is maximized by setting the free spins $s_{a+k}=s_a$ and $s_{b-k}=s_b$. If $s_a=s_b$, the center term is $1$ and $u_{\text{max}}=3$. For $s_a\neq s_b$, the center term is $-1$ and $u_{\text{max}}=1$. The value of $u$ is minimized by setting $s_{a+k}=-s_a$ and $s_{b-k}=-s_b$. If $s_a=s_b$, the center term is $1$ and $u_{\text{min}}=-1$. If $s_a\neq s_b$, the center term is $-1$ and $u_{\text{min}}=-3$. Now let us assume that \eqref{eq:u-values-fixed} holds for some odd $n \geq 3$ and consider a free chain \begin{displaymath} u = s_a s_{a+k} + s_{a+k} s_{a+2k} + \cdots + s_{b-2k}s_{b-k} + s_{b-k}s_b \end{displaymath} with $n+2$ terms. To maximize $u$, we set $s_{a+k}=s_a$ and $s_{b-k}=s_b$, and the remaining free chain has $n$ terms.
Applying \eqref{eq:u-values-fixed}, we get $u_{\text{max}} = n+2$ if $s_a=s_b$ and $u_{\text{max}} = n$ if $s_a\neq s_b$. The induction step for $u_{\text{min}}$ is obvious. Since the proof for even $n$ is very similar, it is omitted here. We only mention that the base case ($n=2$) corresponds to the ``cancellation'' and ``reinforcement'' used by Prestwich to improve the bound \eqref{eq:C-bound-prestwich}. Now \eqref{eq:u-values-free} and \eqref{eq:u-values-fixed} tell us how to compute $u_{\text{min}}$ and $u_{\text{max}}$ for each individual free chain. The corresponding values $U_k^{\text{min}}$ and $U_k^{\text{max}}$ are obtained by summing over all free chains that contribute to $u_k$. \begin{figure} \centering \includegraphics[width=\columnwidth]{call-ratio.pdf}\\ \caption{Number of calls for the combined bound \eqref{eq:C-bound-combined} divided by the number of calls for the tight bound \eqref{eq:min-Ck}. For the values of $N$ considered here, the speedup due to the tight bound seems to grow linearly with $N$. \label{fig:ratio}} \end{figure} Every branch of the search tree that can be pruned according to the combined bound \eqref{eq:C-bound-combined} (or any other relaxation of \eqref{eq:relaxation}) is also pruned by the tight bound \eqref{eq:min-Ck}, but the tight bound allows us to prune additional branches. Hence the number of recursive calls with the tight bound cannot be larger than the number of calls with any other bound based on \eqref{eq:relaxation}. What we observe is that for $N\leq 66$ the number of calls for the tight bound is in fact strictly smaller than that for the combined bound. A numerical fit to the existing data yields a scaling of $\Theta(1.727^N)$ for the tight bound, compared to $\Theta(1.729^N)$ for the combined bound, see Figure \ref{fig:performance}. This difference is too small to tell whether the tight bound actually provides an exponential speedup or not. In fact, if one looks at the ratio of the number of calls for the combined bound divided by the number of calls for the tight bound, one observes that the speedup factor grows linearly with $N$, not exponentially (Figure~\ref{fig:ratio}). Since the time per call scales linearly for both bounds (Figure~\ref{fig:performance} bottom), a reduction of the number of calls that grows with $N$ implies that the tight bound will asymptotically outperform the combined bound. For the values of $N$ considered in this paper, however, the absolute computational costs per call matter. And here the simpler combined bound \eqref{eq:C-bound-combined} is faster, see Figure~\ref{fig:performance} (bottom). If we extrapolate the number of calls and the time per call to $N=66$, we get a running time of roughly 12600 CPU days for the combined bound but 14300 CPU days for the tight bound. This is why we used the weaker combined bound for all the new solutions (exact and skewsymmetric) reported in this paper. Note that the time per call depends considerably on the implementation. It might well be possible to implement the tight bound such that it outperforms the combined bound already for the values of $N$ considered here. In any case, the measured running times illustrate that we need to parallelize the computation if we don't want to wait 35 years for the $N=66$ LABS solution. \subsection{Symmetry and Parallelization} The correlations $C_k$ \eqref{eq:def-Ck} are unchanged when the sequence is complemented or reversed.
When alternate elements of the sequence are complemented, the even-indexed correlations are not affected, while the odd-indexed correlations only change sign. Hence, with the exception of a small number of symmetric sequences, the $2^N$ sequences come in classes of eight equivalent sequences. The total number of nonequivalent sequences is slightly larger than $2^{N-3}$. The $m$ left- and $m$ rightmost elements of the sequence can be used to parameterize the symmetry classes. The total number $c(m)$ of symmetry classes that can be distinguished by $m$ left- and $m$ right-border elements reads \begin{equation} \label{eq:m-classes} c(m) = 2^{2m-3} + 2^{m-2+(N\bmod2)}\,. \end{equation} We derive this formula in \ref{sec:group-theory}, where we also describe how to compute the values of the $2m$ boundary spins that represent each symmetry class. The symmetry classes can be enumerated independently, which allows us to parallelize the computation. For our largest system ($N=66$) we used $c(m=10)=\numprint{131328}$ symmetry classes that we searched in parallel on various computers with core counts ranging from \numprint{8} to \numprint{5700}. In principle, the branch\&bound algorithm requires some communication between the parallel tasks since every task should know the lowest energy found so far by other tasks to compare it to the bound. We avoid this communication completely by using a static value for this reference energy: the lowest energy found by heuristic searches. In all cases we considered, this value turned out to be the true minimum energy. \section{Results and Conclusions} \label{sec:results} We have used the branch\&bound algorithm with the combined bound to compute all sequences with maximum merit factor for $N\leq 66$, see Tables \ref{tab:labs-1} and \ref{tab:labs-2}. The previous record was $N\leq 64$, obtained with the Wiggenbrock bound \eqref{eq:C-bound-wiggenbrock} and using 18 GPUs \cite{wiggenbrock:10}. For the performance measurements for $40\leq N \leq 64$ shown in Figure~\ref{fig:performance} we have used a Linux cluster with a collection of Intel\textsuperscript{\textregistered}\ Xeon\textsuperscript{\textregistered}\ CPUs: $10\times$ E5-2630 (at 2.30 GHz), $10 \times$ E5-2630 v2 (at 2.60 GHz) and $2\times$ E5-1620 (at 3.60 GHz) with a total of 248 (virtual) cores. On this machine, the computation for $N=64$ took about a week (wallclock time). As one can see in Figure~\ref{fig:performance}, the solution of $N=63$ and $N=64$ involves a surprisingly low number of calls and therefore took less time than expected. Note that with our algorithm systems of size $N\leq 43$ can be solved in less than an hour on a laptop. For $N=65$ and $N=66$ we used a variety of computing machinery that makes an accurate determination of ``single CPU time'' impossible. For $N=65$ and 66, the equivalent wallclock time on our benchmark cluster is roughly 32 and 55 days.
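The sequences in Tables~\ref{tab:labs-1} and \ref{tab:labs-2} are written in run-length notation: each digit gives the length of a run of equal spins, with the sign alternating from run to run and the letter a standing for a run of length 10; the leading sign is arbitrary modulo the symmetries discussed above. The following Python sketch (illustrative only) decodes this notation; together with the energy routine sketched in the introduction it can be used to check the tabulated values of $E$.
\begin{verbatim}
def decode_runs(runs):
    """Decode run-length notation into a +-1 sequence;
    'a' denotes a run of length 10, the leading sign is arbitrary."""
    s, sign = [], 1
    for c in runs:
        s += [sign] * (10 if c == 'a' else int(c))
        sign = -sign
    return s

seq = decode_runs('5221111')      # the N = 13 entry (Barker sequence)
print(len(seq), energy(seq))      # 13 6, matching the table
\end{verbatim}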
\begin{table} \parbox{0.23\linewidth}{ \centering \small \begin{tabular}{rrrlc} $N$ & $E$ & $F_N$ & sequences & skew \\[1ex] 3 & 1 & 4.500 & 21 & $\times$ \\ 4 & 2 & 4.000 & 112 & \\ 5 & 2 & 6.250 & 311 & $\times$ \\ 6 & 7 & 2.571 & 141 & \\ & & & 123 & \\ & & & 312 & \\ & & & 1113 & \\ 7 & 3 & 8.167 & 1123 & $\times$ \\ 8 & 8 & 4.000 & 32111 & \\ & & & 31121 & \\ 9 & 12 & 3.375 & 311121 & \\ & & & 42111 & $\times$ \\ & & & 32211 & $\times$ \\ & & & 31122 & \\ 10 & 13 & 3.846 & 42211 & \\ & & & 52111 & \\ & & & 311122 & \\ & & & 41122 & \\ & & & 33121 & \\ 11 & 5 & 12.100 & 112133 & $\times$ \\ 12 & 10 & 7.200 & 4221111 & \\ & & & 4111221 & \\ 13 & 6 & 14.083 & 5221111 & $\times$ \\ 14 & 19 & 5.158 & 41112221 & \\ & & & 6221111 & \\ & & & 5222111 & \\ & & & 33111212 & \\ & & & 41111222 & \\ & & & 42211112 & \\ & & & 5221112 & \\ & & & 5311121 & \\ 15 & 15 & 7.500 & 52221111 & $\times$ \\ & & & 33131211 & $\times$ \\ 16 & 24 & 5.333 & 225111121 & \\ & & & 6322111 & \\ & & & 313311211 & \\ & & & 2131441 & \\ 17 & 32 & 4.516 & 252211121 & $\times$ \\ & & & 44121311 & \\ & & & 4221211112 & \\ & & & 36111221 & \\ & & & 2122411112 & \\ & & & 2112113132 & \\ 18 & 25 & 6.480 & 441112221 & \\ & & & 511211322 & \end{tabular} } \hfill \parbox{0.47\linewidth}{ \centering \small \begin{tabular}{rrrlc} $N$ & $E$ & $F_N$ & sequences & skew \\[1ex] 19 & 29 & 6.224 & 4111142212 & \\ 20 & 26 & 7.692 & 5113112321 & \\ 21 & 26 & 8.481 & 27221111121 & $\times$ \\ 22 & 39 & 6.205 & 51221111233 & \\ & & & 632111112211 & \\ & & & 511111212232 & \\ 23 & 47 & 5.628 & 212121111632 & \\ & & & 83211112211 & \\ & & & 314121131132 & \\ 24 & 36 & 8.000 & 2236111112121 & \\ 25 & 36 & 8.681 & 337111121221 & \\ 26 & 45 & 7.511 & 21212111116322 & \\ & & & 63231111121211 & \\ & & & 32361111121211 & \\ 27 & 37 & 9.851 & 34313131211211 & $\times$ \\ 28 & 50 & 7.840 & 34313131211212 & \\ 29 & 62 & 6.782 & 212112131313431 & $\times$ \\ & & & 323711111212211 & $\times$ \\ 30 & 59 & 7.627 & 551212111113231 & \\ & & & 461212111113231 & \\ 31 & 67 & 7.172 & 7332212211112111 & \\ 32 & 64 & 8.000 & 71112111133221221 & \\ 33 & 64 & 8.508 & 742112111111122221 & \\ 34 & 65 & 8.892 & 842112111111122221 & \\ 35 & 73 & 8.390 & 7122122111121111332 & \\ 36 & 82 & 7.902 & 3632311131212111211 & \\ 37 & 86 & 7.959 & 844211211111122221 & \\ 38 & 87 & 8.299 & 8442112111111122221 & \\ 39 & 99 & 7.682 & 82121121234321111111 & $\times$ \\ & & & 23241171111141122121 & $\times$ \\ 40 & 108 & 7.407 & 44412112131121313131 & \\ 41 & 108 & 7.782 & 343111111222281211211 & $\times$ \\ 42 & 101 & 8.733 & 313131341343112112112 & \\ 43 & 109 & 8.482 & 1132432111117212112213 & $\times$ \\ 44 & 122 & 7.934 & 525313113111222111211121 & \\ 45 & 118 & 8.581 & 82121121231234321111111 & $\times$ \\ 46 & 131 & 8.076 & 823431231211212211111111 & \\ & & & 821211212312343211111111 & \\ & & & 73235111112132122112121 & \\ 47 & 135 & 8.181 & 923431231211212211111111 & $\times$ \\ & & & 429422222112111111122111 & $\times$ \\ & & & 411121114131131312421242 & \\ & & & 383422132211212111111211 & $\times$ \\ & & & 236331611113121211112121 & $\times$ \\ & & & 212a21121234211111111231 & $\times$ \end{tabular} } \caption{All optimal low autocorrelation binary sequences for $N\leq 47$ modulo symmetries.} \label{tab:labs-1} \end{table} \begin{table} \centering {\small \begin{tabular}{rrrlc} $N$ & $E$ & $F_N$ & sequences & skew \\[1ex] 48 & 140 & 8.229 & 3111111832143212221121121 & \\ 49 & 136 & 8.827 & 215131311224112241141141 & \\ & & & 3337313221312111112121211 & $\times$ \\ 50 & 153 
& 8.170 & 215131311224112241141142 & \\ & & & 72542221311111132111211211 & \\ & & & 4337313221312111112121211 & \\ 51 & 153 & 8.500 & 23432111141313116212112121 & $\times$ \\ 52 & 166 & 8.145 & 51161212121111131223123332 & \\ 53 & 170 & 8.262 & 4511311133251312221112111121 & \\ & & & 22b442222112112111111111221 & $\times$ \\ 54 & 175 & 8.331 & 356225141212112222111111121 & \\ 55 & 171 & 8.845 & 9212123212114321233211111111 & $\times$ \\ & & & 3232a41124112111111112212211 & $\times$ \\ 56 & 192 & 8.167 & 7612231123241111132112122111 & \\ 57 & 188 & 8.641 & 33232631111127121111221221211 & $\times$ \\ 58 & 197 & 8.538 & 1111131232138142121132432112 & \\ 59 & 205 & 8.490 & 772412242112231122111112111111 & $\times$ \\ & & & 6132123121111113112341221121242 & \\ 60 & 218 & 8.257 & 761112141111131124211322211222 & \\ & & & 222222111311114244161121161121 & \\ 61 & 226 & 8.232 & 314162331211111131112125621211 & \\ 62 & 235 & 8.179 & 323232111117111541121511222122 & \\ & & & a23223212135311221111112113112 & \\ 63 & 207 & 9.587 & 212212212711111511121143111422321 & \\ 64 & 208 & 9.846 & 212212212711111511121143111422322 & \\ 65 & 240 & 8.802 & 323224111341121115111117212212212 & \\ 66 & 257 & 8.475 & 2112111211222b2221111111112224542 & \end{tabular} } \caption{All optimal low autocorrelation binary sequences for $48 \leq N\leq 66$ modulo symmetries.} \label{tab:labs-2} \end{table} Tables~\ref{tab:labs-1} and \ref{tab:labs-2} show all sequences (except those related by symmetries) with maximum merit factors up to $N=66$ in run-length encoding, i.e., the digits specify the lengths of runs of equal spins. We use $a=10$, $b=11$, etc.\ for runs of spins that are longer than $9$. We have also used our branch\&bound algorithm to find all skewsymmetric sequences with maximum merit factor up to $N=119$. The previous record was $N\leq 89$ \cite{prestwich:13}. Table \ref{tab:skew} shows the skewsymmetric sequences with maximum merit factor that are not already listed in Tables \ref{tab:labs-1} and \ref{tab:labs-2}. Skewsymmetric merit factors marked with $\star$ are known to be not maximal. We know this either from exhaustive enumerations (for $N\leq 65$) or from heuristic searches that have yielded non-skewsymmetric sequences with larger merit factors. \begin{figure} \centering \includegraphics[width=\linewidth]{F-ratio.pdf} \caption{Ratio of maximum merit factors: skewsymmetric $F_N^{\text{skew}}$ versus general $F_N$. Black symbols are exact, gray symbols are based on lower bounds for $F_N$, which are believed to be exact.} \label{fig:F-ratio} \end{figure} Figure~\ref{fig:F-ratio} shows the ratio of the maximum merit factors of skewsymmetric and general sequences for $N\leq 119$. In 20 out of 58 cases the skewsymmetric subset does not contain a maximum merit factor sequence. The values of $F_N$ for $N>66$ stem from heuristic searches; we believe that they are the true maximum merit factors, but strictly speaking the gray symbols in Figure~\ref{fig:F-ratio} are only upper bounds for the ratio $F_N^{\text{skew}}/F_N$. The available data seem to indicate that roughly two-thirds of all odd values of $N$ have skewsymmetric maximum merit factor sequences. Figure~\ref{fig:F-ratio} also suggests that \begin{equation} \label{eq:F-ration} \liminf_{N\to\infty} \frac{F_N^{\text{skew}}}{F_N} = 1\,. \end{equation} We think that the branch\&bound approach based on the relaxation \eqref{eq:relaxation} can be used to solve the LABS problem for $N>66$ by devoting more compute cores and more CPU time.
Improving the implementation to reduce the constant factor in the $\Theta(N\,b^N)$ scaling can also help. Solving systems significantly larger than $N=66$, however, requires a stronger bound than \eqref{eq:relaxation}, i.e., a bound that takes into account the fact that the $C_k$ are not independent. Or a completely new approach other than branch\&bound. \begin{table} \centering {\small \begin{tabular}{rrrl} $N$ & $E$ & $F_N^{\text{skew}}$ & sequences \\[1ex] 19 & 33 & 5.470$^\star$ & 2113114141, 3513111211 \\ 23 & 51 & 5.186$^\star$ & 272221111121, 336111121211, 343131211211, 732212111111 \\ 25 & 52 & 6.010$^\star$ & 6332121211111 \\ 31 & 79 & 6.082$^\star$ & 6212211423211111 \\ 33 & 88 & 6.188$^\star$ & 84212321121111111, 22742211211111221 \\ 35 & 89 & 6.882$^\star$ & 472322122111112111, 552212232211121111 \\ 37 & 106 & 6.458$^\star$ & 2492222111111121121 \\ 61 & 230 & 8.089$^\star$ & 2121111221121411111122811342631 \\ 63 & 271 & 7.323$^\star$ & a1121112112221222322454111111111 \\ & & & 21242131111311112112461613211231 \\ & & & 23111131111323531211121221616121 \\ 65 & 272 & 7.767$^\star$ & 414411126121313133111125112113111 \\ & & & 231134321111114222211821211214121 \\ & & & 221111121211111132311224122183721 \\ & & & 2112111211222b2221111111112224541 \\ 67 & 241 & 9.313\phantom{$^\star$} & b412323441121121221231121111111111 \\ & & & 6216121225331212111223311113211111 \\ & & & 2454222111111111222b22211211121121 \\ 69 & 282 & 8.441$^\star$ & 211111111121132122121121144323214b1 \\ 71 & 275 & 9.165\phantom{$^\star$} & 241244124172222111113112311211231121 \\ 73 & 348 & 7.657$^\star$ & 2111211211111221131113213132151427451 \\ & & & 22c7442222221121121111121111111111221 \\ 75 & 341 & 8.248$^\star$ & 23231233481611113111111211212123122121 \\ 77 & 358 & 8.281\phantom{$^\star$} & 512174112122112221322423411211111331111 \\ 79 & 407 & 7.667$^\star$ & 4361113231311213321213413122151111212111 \\ & & & 3121312121411112131112112451361133313311 \\ & & & 2131211221311121211121131141453513243131 \\ & & & 2129214121112121311241335311321111111231 \\ & & & 2111213111121123314261111221131212461351 \\ 81 & 400 & 8.201$^\star$ & 53611132313112133212134131221511112121111 \\ 83 & 377 & 9.137\phantom{$^\star$} & 323633231172611112211111412212121111212211 \\ 85 & 442 & 8.173$^\star$ & 3912523121213351112121333122111231111111211 \\ 87 & 451 & 8.391$^\star$ & 43114242215111132131313216111322112211412111 \\ 89 & 484 & 8.183$^\star$ & 231143113311111143233221212212118121412114121 \\ & & & 231433161111121421112123521137111131212113121 \\ 91 & 477 & 8.680\phantom{$^\star$} & 2121416112211211111211321222321474241111311331 \\ 93 & 502 & 8.615$^\star$ & 91252112312341122322122411212312421112311111111 \\ & & & 21121213121261171226221111223111114111123313341 \\ & & & 25523581113122413112231511111121112122111211121 \\ 95 & 479 & 9.421\phantom{$^\star$} & 322322358115111351112151114111111211121222122211 \\ 97 & 536 & 8.777\phantom{$^\star$} & 5111415321132221132143121132142221421211131151111 \\ 99 & 577 & 8.493\phantom{$^\star$} & 5255212212a311224112241211111111232321112111221111 \\ 101 & 578 & 8.824\phantom{$^\star$} & 6255212212a3112241122412111111112323211121112211111 \\ 103 & 555 & 9.558\phantom{$^\star$} & 2452681222213111225111225132223111111211112211121121 \\ 105 & 620 & 8.891\phantom{$^\star$} & a1211121121411213112132221223222134134113453111111111 \\ 107 & 677 & 8.456\phantom{$^\star$} & 227311831111224113342221121214112261211111141211111221 \\ 109 & 662 & 8.974\phantom{$^\star$} & 
3341111112431141111133222251112222212171141211281121211 \\ 111 & 687 & 8.967\phantom{$^\star$} & 21111323331321111135114211332121421141112172131212122161 \\ 113 & 752 & 8.490$^\star$ & 4555122142121212c1222311111111112333211323111211121112111 \\ & & & 231332171323311541212112134331121114121221311111321213121 \\ 115 & 745 & 8.876\phantom{$^\star$} & 5511135145311122113121222222233142512111211311121511121111 \\ 117 & 786 & 8.708\phantom{$^\star$} & 37117312111221111133222424112211222212172531211111411111211 \\ 119 & 835 & 8.480\phantom{$^\star$} & 312161412122123411121111314111321511316511212323311311113311 \end{tabular} } \caption{All optimal skewsymmetric low autocorrelation binary sequences for $N\leq 119$ that are not already listed in Table \ref{tab:labs-1} or \ref{tab:labs-2}. Merit factors marked with $\star$ are known to be not maximal, either from exhaustive enumeration (for $N\leq 65$) or from heuristic searches (for $N\geq 67$). } \label{tab:skew} \end{table}
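Readers who wish to verify individual table entries can do so with the following short sketch, which decodes the run-length notation and recomputes the energy and the merit factor. It assumes the standard definitions $E=\sum_{k=1}^{N-1} C_k^2$ and $F_N = N^2/(2E)$, which are consistent with all tabulated values:
\begin{verbatim}
import numpy as np

def decode(runs):
    """Decode run-length notation; 'a' = 10, 'b' = 11, 'c' = 12."""
    s, spin = [], 1
    for ch in runs:
        s.extend([spin] * int(ch, 36))  # '1'-'9' -> 1-9, 'a' -> 10, ...
        spin = -spin
    return np.array(s)

def merit_factor(s):
    N = len(s)
    E = sum(int(np.dot(s[:N - k], s[k:]))**2 for k in range(1, N))
    return N, E, N**2 / (2 * E)

print(merit_factor(decode("21")))      # (3, 1, 4.5), first row of Table 1
print(merit_factor(decode("112133")))  # (11, 5, 12.1), the N = 11 optimum
\end{verbatim}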
\section{Introduction} The response of many soft matter materials to deformations includes a viscous and an elastic component, thereby giving rise to the category of viscoelastic materials. Many experimental techniques have been developed throughout the years to investigate these unique properties, in both the linear and the non-linear regime, including the combination of confocal microscope imaging with other techniques to monitor microstructural changes \cite{Smith1724}. Despite these new developments, many experimental challenges remain \cite{Wilson2011}, such as probing shorter time and length scales. Optical \cite{Tassieri2019} and magnetic tweezers \cite{Rich2011} enable the monitoring of very small forces and displacements, but remain restricted to small observation windows. Computer simulations represent an interesting alternative to experimental observations for soft condensed matter. In Molecular Dynamics (MD) simulations, the movement of atoms is governed by Newton's laws of motion. One can thus access the coordinates of the atoms and compute all relevant observables~\cite{Frenkel2002,Allen2017}. The size and duration of MD simulations are mainly restricted by the available computational power. Dedicated computational methods have been developed to address larger system sizes and durations than atomistic methods allow, such as the coarse-graining of atoms into ``effective atoms'' that represent atomic ensembles (e.g. water molecules or polymeric units), colloidal particles, or fluid elements. For the latter, methods include techniques such as dissipative particle dynamics (DPD)\cite{Hoogerbrugge1992} and multi-particle collision dynamics (MPCD).\cite{malevanets1999, kapral2008} Another important feature that needs to be considered is the interaction of e.g. colloidal particles with their surroundings.\cite{Lee2008} Thermostats can also be used to achieve a set temperature, as opposed to constant-energy systems, using for instance the Langevin thermostat, whose properties are well known~\cite{kubo_fluctuation-dissipation_1966}. In practice, the thermostatting is achieved by adding a random force, the noise, and a dissipative force, the friction, whose magnitudes are related by the fluctuation-dissipation theorem. Such simulations, using a Langevin thermostat, do not represent fluid flows and can only mimic liquid-like behavior. More specifically, they do not take collective effects into account and do not conserve momentum. Recent developments related to the fluctuation-dissipation theorem could lift some of these restrictions, making the linear modulus of more complex systems, including yield-stress systems, accessible.\cite{Wittmer2015} Despite these improvements, the fact that the Langevin thermostat breaks momentum conservation makes it a poor choice for pseudorheological measurements. The DPD method was introduced for the simulation of thermostatted particle-based soft-matter systems~\cite{Hoogerbrugge1992,Espanol1995,Soddemann2003}. DPD simulations use only pairwise forces, including for noise and friction, are not restricted to the linear regime, and provide direct access to nonequilibrium situations for which Green-Kubo methods would not apply. Due to momentum conservation, DPD can be used to study hydrodynamic phenomena. Applications of DPD include polymer solutions \cite{Symeonidis2005}, colloidal suspensions \cite{Pan2009}, multiphase flow phenomena \cite{Pan2014} and biological systems \cite{Li2013}.
Recent investigations \cite{Leimkuhler2016} have shown further ways to improve the accuracy of the DPD method and pointed out its possible use to investigate non-linear material behavior. To simulate the non-linear behavior using pseudorheological measurements, several solutions are available. The simplest way to introduce a deformation of the simulated volume element is to abandon the periodicity in the shear plane. Replacing these boundaries with moving walls leads to a shear flow \cite{Khare1996}. Applications of such simulations can be found in, e.g., the migration of polymer in small gaps as present in bearings \cite{Kreer2001}. The drawback of this method is the loss of periodicity and the appearance of boundary effects. Large systems can only be simulated by increasing the size of the primary simulation box, which entails a corresponding increase of the computational workload. Two interesting techniques have been developed to avoid using walls for driving the shear flow. The first is the SLLOD technique, which consists of modified equations of motion in which the flow velocity is added to the particles' motion~\cite{Edberg1987, Travis1995, Evans2008}. When using the SLLOD equations of motion, thermostatting is done on the peculiar velocity of the particles (the velocity in the laboratory reference frame minus the assigned flow velocity). In practice, this means that the flow profile is imposed via a bias in the thermostat~\cite{Shang2017, Imperio2008, Hess2002}. As the flow velocity is different at the boundaries of the shear plane, the simulation boundaries must be adjusted by deforming the box according to the shear velocity. Furthermore, there are several possibilities to implement the SLLOD method, with differing theoretical backgrounds. There is no consensus on which implementation should be preferred and on the limitations that each one entails. These discussions include the limitations of each technique in terms of applicable flow patterns and the need for artificial external forces.\cite{edwards2005, daivis2006,edwards2006} The SLLOD technique is available in the LAMMPS package (Large-scale Atomic/Molecular Massively Parallel Simulator~\cite{plimpton1995}), for instance. The second method consists of establishing, via an applied periodic external force, a periodic flow profile~\cite{Hess2002}. The periodic flow method is convenient to implement as it does not require modifications to the box geometry or to the boundary conditions. Whereas the results of SLLOD simulations can be used to simulate materials in a simple shear flow, they do not provide any feedback between the material structure and the flow profile, which means they fail with regard to more complex systems such as yield-stress fluids~\cite{Hess2002}. Shear banding is one of the many effects that cannot be observed with such a technique \cite{Cao2012}. Furthermore, it is not possible to achieve correct hydrodynamics, since the bulk fluid is only modelled implicitly. Viscous losses cannot be measured and hence the loss modulus of any material is inaccessible. A limitation of periodic flow simulations is that one cannot use them for linear shear profiles. The most promising technique to combine the advantages of the previously mentioned approaches is Lees-Edwards boundary conditions (LEbc).
Introduced in 1972 \cite{Lees1972}, they are a technique to address non-linear material behavior during flow, and are distinguished from other non-linear simulation methods in that they do not require a biased thermostat \cite{Evans1986, Evans2008} or non-periodic walls to initiate a shear flow, but rely on specific rules at the boundary that lead to a translationally invariant system. LEbc are sometimes referred to as sliding brick boundary conditions \cite{Leimkuhler2016}. Despite the method having been developed more than four decades ago, there is presently no open-source implementation of the Lees-Edwards boundary condition. There is a clear need for such flow phenomena simulations, and simulations using the Lees-Edwards boundary conditions are of broad interest in academic as well as industrial research. In this paper, we present the principle of the Lees-Edwards method and its implementation in the ESPResSo molecular simulation package in section~\ref{sec:implementation}. We provide the corresponding code under the same open-source license as ESPResSo. The source code availability, as well as the parameter and analysis files, are discussed in appendix \ref{sec:reproc}. We describe the simulation methods, including the details of the dissipative particle dynamics method, in section~\ref{sec:methods}. We present our results on the self-diffusion of DPD particles and on the viscosity of the DPD fluid in section~\ref{sec:results} and conclude in section~\ref{sec:conclusions}. \section{Lees-Edwards boundary conditions} \label{sec:implementation} \subsection{Principle of Lees Edwards boundary conditions} \label{subsec:LE} Lees-Edwards boundary conditions (LEbc) are a generalisation of the periodic boundary conditions for systems undergoing shear \cite{Lees1972}. With periodic boundary conditions, a particle exiting the simulation cell is replaced at its periodic location inside the cell and the computation of distances across the boundaries uses the minimal image convention~\cite{Allen2017}. When using LEbc, a particle crossing the shear plane is also replaced in the simulation box. The position and velocity of the particle, however, are shifted so that the trajectory of the particle is compatible with its image in the adjacent moving cell. The LEbc thus allow the simulation of infinitely extended systems with a prescribed shear, as with periodic boundary conditions, using a finite simulation cell. In the stationary regime, a constant shear flow in the shear direction, here the $x$-direction, is obtained and the simulation has translational invariance in the gradient ($y$) and vorticity ($z$) directions. The change in position $x'$ as a function of time $t$ for a particle that leaves the computational domain in the velocity gradient direction normal to the shear plane is \begin{equation} x'(t) = x(t) + x_{\text{LE}} \end{equation} where the Lees-Edwards offset $x_{\text{LE}}$, the displacement of the adjacent simulation cell with respect to the primary cell, is \begin{equation} x_{\text{LE}} = v_{\text{LE}} \cdot t \end{equation} for steady shear, with $v_{\text{LE}}$ the Lees-Edwards velocity. The change in velocity is \begin{equation} v_x'(t) = v_x(t) + v_{\text{LE}} \end{equation} based on the drift velocity $v_{\text{LE}}$ of the periodic images. This can be seen in Figure~\ref{fig:integrator}(a) where a particle leaves the primary simulation box and is re-introduced at position $p''$ instead of $p'$.
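As a minimal illustration of these rules, the following standalone sketch (not the ESPResSo code path, which splits the velocity jump over the two half steps of the integrator, see below) applies the position and velocity jumps to a particle that has crossed the shear plane; we adopt the convention that the image cell above the primary cell is displaced by $+x_{\text{LE}}$:
\begin{verbatim}
import numpy as np

def apply_lees_edwards(pos, vel, box, v_le, x_le):
    """Re-insert a particle that crossed the shear plane (normal: y).
    pos, vel: length-3 arrays; box = [Lx, Ly, Lz];
    v_le: Lees-Edwards velocity; x_le: current offset v_le * t."""
    if pos[1] >= box[1]:      # left through the upper shear plane
        pos[1] -= box[1]
        pos[0] -= x_le
        vel[0] -= v_le
    elif pos[1] < 0.0:        # left through the lower shear plane
        pos[1] += box[1]
        pos[0] += x_le
        vel[0] += v_le
    pos[0] %= box[0]          # ordinary periodic wrap in x and z
    pos[2] %= box[2]
    return pos, vel
\end{verbatim}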
The updated position is then wrapped into the primary simulation cell. When the periodic boundary conditions in the other directions remain unaltered, these modifications result in a shear flow of magnitude $\dot\gamma = v_\text{LE} / h$, where $h$ is the height of the simulation box (in $y$). \subsection{Application of Lees-Edwards boundary conditions to the velocity Verlet integrator} To implement the principle of the Lees-Edwards boundary conditions in a molecular dynamics (MD) program, it is necessary to specify practical details: the computation of distances, of relative velocities, and the combination with the velocity Verlet algorithm \cite{Verlet1967, Swope1982}. We implemented the LEbc method in the ESPResSo package with the goal to provide a reference implementation of LEbc and a user-friendly interface for steady shear and for sinusoidal shear, which is useful to determine the dynamic moduli $G'$ and $G''$. The update of a particle's coordinates in the velocity Verlet integrator occurs in the following order: \begin{align} v^\ast &= v(t) + \frac{\Delta t}{2 m} f(t) \tag{VV 1} \label{vv1} \\ x(t+\Delta t) &= x(t) + \frac{1}{2} \left( v(t) + v^\ast \right) \Delta t \tag{VV 2} \label{vv2} \\ &\textrm{update all forces at time } t+\Delta t \textrm{, using } x(t+\Delta t) \nonumber \\ v(t+\Delta t) &= v^\ast + \frac{\Delta t}{2 m} f(t+\Delta t) \tag{VV 3} \label{vv3} \end{align} where $\Delta t$ is the time step, $m$ is the particle mass, and $f$ is the force on the particle. To implement the LEbc, we must apply the rules set out above within this framework, which leads to a two-step approach for the application of the velocity and position jumps, which we illustrate in Figure~\ref{fig:integrator}(b). A one-step approach for the integration is not suitable as it can lead to numerical instability. In the first step, we determine the Lees-Edwards velocity at the present simulation time $t$, i.e. $v_{\text{LE}}(t)$, and the Lees-Edwards offset at one half time-step $\Delta t$ ahead of the simulation time: $x_{\text{LE}}(t+ \frac{\Delta t}{2})$. After the position update in step \eqref{vv2}, we check if the particle has left the primary computational domain $0 \leq y(t+\Delta t) < h$. If it has, we apply the position jump \begin{equation} x(t+\Delta t) \to x(t+\Delta t) - x_\text{LE}\left(t + \frac{\Delta t}{2}\right) \end{equation} and apply half the velocity jump \begin{equation} v^{\ast} \to v^\ast - v_\text{LE}(t) / 2 ~, \end{equation} following the structure of the velocity Verlet scheme. The forces, including the DPD dissipative and random forces, are updated in the middle of the integration time step. It is thus necessary to have current velocities to ensure the correct thermalisation of the particles. Therefore, we tag the particle as undergoing the Lees-Edwards transformation, so that we can apply the second part of the jump after step \eqref{vv3}, using $v_\text{LE}(t+\Delta t)$. \begin{figure*}[htbp] \begin{center} \includegraphics[width=\textwidth]{figs/newintegrationroute.pdf} \caption{{\bf (a) A simplified representation of the particle movement including the Lees-Edwards boundary conditions. The particle is reintroduced at position $p''$ instead of $p'$ due to the shift in the image boxes. This has to be captured in the distance function as indicated by the arrows crossing the box boundary.
(b) Changes, introduced by the inclusion of the Lees-Edwards boundary conditions, to the commonly used velocity Verlet integrator, which is displayed in the central box. As indicated by the surrounding boxes, the Lees-Edwards velocity is updated at $t$ and $t+\Delta t$, and the offset is included at $t+\Delta t /2 $. }} \label{fig:integrator} \end{center} \end{figure*} As simulation cells move with respect to each other due to the shear, the distance and the relative velocity between particles across the shear plane must include the Lees-Edwards offset. This is shown in Figure~\ref{fig:integrator}(a) where the arrows crossing the boundary represent the applied distance function. While particles might experience a positive offset while crossing the box, the distance function in fact must include the negative offset to reflect these changes correctly. Accordingly, we modified the distance function in ESPResSo, so that the appropriate offset distance is used for all actions, such as computing the forces, as well as building the neighbour lists. The modification of the relative velocities is necessary for the computation of the velocity-dependent DPD forces. Thus the velocity difference function is also modified in ESPResSo. The trajectories in LEbc simulations will display discontinuous jumps whenever a particle crosses the shear plane. Since LEbc simulations represent an infinitely extended system, it is possible to reconstruct physically meaningful trajectories. For this purpose, we store the accumulated offset in position during the numerical integration $x_{\text{LE}}$ of a particle and its movement in a periodic image $i(t)$, along the shear direction, according to \begin{equation} \label{eq:accumulated-offset} x_{\text{part, LE}} = \sum_j x_{\text{LE}}(t_{\text{jump}}) + \sum_t \Delta t \cdot v_{\text{LE}}(t) \cdot i(t), \end{equation} where $j$ runs over the jumps occurring at the LE boundary. $x_{\text{part, LE}}$ represents the displacement of the particle as it moves outside of the primary simulation cell. This data, which is necessary for the reconstruction of the full trajectory of the particle, must be computed as the simulation proceeds and cannot be obtained later using only recorded positions and velocities. \subsection{Modification of the cell system} \label{sec:cell-system} In principle, the number of pairs in a system of $N$ particles is $\mathcal{O}(N^2)$. Such a high computational cost of the force calculation is impractical. It is, however, possible to compute only $\mathcal{O}(N)$ pair forces as long as only short-range forces are used, which can be cut off beyond a certain distance. This can be accomplished via neighbor lists, and it is often practical to sort the particles into cells. This is realised in ESPResSo with the technique of domain decomposition, where the system is partitioned into cubic cells for the purpose of storing the particles' coordinates and for spatially sorting the particles\cite{Smith1991}. The sliding nature of the boundary in shear flow simulations breaks the periodicity assumption on which the domain decomposition is based and requires an appropriate modification. To keep the computational advantage of domain decomposition, we introduce a columnar domain decomposition: we treat all cells in the layer adjacent to the boundary of the primary simulation box as neighbours, as shown using the orange-red colors in Figure~\ref{fig:columnarcellsystem}. It does not influence the domain decomposition in the gradient and vorticity directions.
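Concretely, the offset-aware minimum image convention for distances and relative velocities described above, which the columnar cell system also relies on, can be sketched as follows (a simplified sketch with the gradient direction along $y$; the names are ours, not ESPResSo's internal API):
\begin{verbatim}
import numpy as np

def le_minimum_image(dr, dv, box, v_le, x_le):
    """Minimum-image distance dr = r_i - r_j and relative velocity dv,
    including the Lees-Edwards offset across the shear plane."""
    if dr[1] > 0.5 * box[1]:      # nearest image of j lies one cell up
        dr[1] -= box[1]
        dr[0] -= x_le
        dv[0] -= v_le
    elif dr[1] < -0.5 * box[1]:   # nearest image of j lies one cell down
        dr[1] += box[1]
        dr[0] += x_le
        dv[0] += v_le
    for d in (0, 2):              # ordinary minimum image in x and z
        dr[d] -= box[d] * np.rint(dr[d] / box[d])
    return dr, dv
\end{verbatim}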
A special node grid $[x, y, z] = [m, n, o]$, i.e. $N_{\text{nodes}} = m \cdot n \cdot o$ nodes, has to be chosen. This grid must be chosen such that it has exactly one node in the shear direction, i.e. with the $x$-direction as shear direction this leads to a $[1, n, o]$ node grid. This guarantees that no possible particle interactions are lost or considered twice due to the Lees-Edwards offset. In this way, a ``re-wiring'' of the cell-neighbor relations during a running simulation can be avoided. Figure \ref{fig:columnarcellsystem} shows a representation of this system and illustrates the columns as well as the communication directions used. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\textwidth]{figs/ColumnarCellSystem.pdf} \caption{\bf{Communication pattern for the columnar cell system. The arrows shown on the $x$-$y$ surface show how the local cells communicate with all other cells of the same $x$-column and $y \pm 1$. For local cells in the other directions the usual domain decomposition method is used, as shown here.}} \label{fig:columnarcellsystem} \end{center} \end{figure} The cells located near the Lees-Edwards boundary, oriented along a column in the shear direction, communicate with all other cells in the $x$-column as well as with the cells in the columns directly above and below ($y \pm 1$). Thus, as shown in the $x$-$y$ plane with arrows, all teal cells are considered in the neighbor list. These interactions are superimposed onto the usual domain decomposition between local cells, as shown for the grey box on the $y$-$z$ plane, with the limitation that all cells in a column communicate with each other. Communications in the shear direction are carried out via all cells in a column, whereas for communications in the other two directions a regular domain decomposition is used. During all distance calculations the modified minimum image vector is used because it accounts for jumps across the box. The strategy chosen here minimizes the changes to the source code of the existing simulation package ESPResSo. \subsection{Spurious discontinuity of the velocity profile} Some articles about the Lees-Edwards method have reported discontinuous velocity profiles near the shear plane boundary. We consider here the work of Chatterjee \cite{Chatterjee2007}: in order to mitigate the discontinuity in the velocity profile, Chatterjee proposed to disable the thermostatting (pairwise friction and noise terms) in the vicinity of the simulation cell boundary that corresponds to the shear plane. Since the LEbc method is invariant under translation, Leimkuhler and Shang argued that the strategy of Chatterjee was only necessary in order to counteract programming errors in the simulation code~\cite{Leimkuhler2016}. In their article, Leimkuhler and Shang verify their hypothesis by deliberately introducing the suspected bug in their own simulation code. In the presence of this bug, they are able to reproduce the spurious profile observed by Chatterjee. In our implementation, we have not introduced any local change to the thermostat and find a continuous and linear velocity profile for the DPD particles. We present those results in Section~\ref{subsec:flowprofile}. \section{Simulation methods} \label{sec:methods} We implemented the Lees-Edwards method, from scratch, within the software package ESPResSo (Extensible Simulation Package for Research on Soft Matter) \cite{Arnold2013a, Arnold2013b, Limbach2006, Weik2019}.
We provide the details on the specific version in appendix~\ref{sec:reproc}. \subsection{Dissipative particle dynamics} We consider a bulk fluid consisting of point particles with a mass $m$, a position $\vec{r}_\mathrm{i}$ and a velocity $\vec{v}_\mathrm{i}$. We use Molecular Dynamics (MD), which solves Newton's equation for all particles, subject to interaction forces: \begin{equation}\label{newton} m \frac{\mathrm{d}^2 \vec{r}_\mathrm{i}}{\mathrm{d} t^2} = \vec{f}_\mathrm{i} \end{equation} where $\vec{f}_\text{i}$ is the total force on the $i$-th particle. For dissipative particle dynamics (DPD) \cite{groot1997}, there are two modifications from this starting point: the particles are coarse-grained to represent effective fluid elements instead of atoms, and the relative velocity of particle pairs is thermostatted to introduce thermal motion and damping. The pairwise thermostatting method in DPD implies that the method conserves linear momentum and can be used for hydrodynamic simulations, which sets it apart from Langevin dynamics, another common choice in coarse-grained MD simulations. We follow the presentation of DPD by Groot and Warren\cite{groot1997}, to which we refer the readers for more background about the method. The total DPD force $\vec{f}^{\textrm{DPD}}_\text{i}$ on particle $i$ can be decomposed as: \begin{equation} \vec{f}_\text{i}^{\textrm{DPD}} = \sum_{j \neq i} \left( \vec{f}^{\text{R}}_{\text{ij}} + \vec{f}^{\text{D}}_{\text{ij}} + \vec{f}^{\text{C}}_{\text{ij}} \right) , \label{eqn:forces} \end{equation} where the random force $\vec{f}^{\text{R}}_{\text{ij}}$ is \begin{equation} \vec{f}^{\text{R}}_{\text{ij}} = \sigma\ w^{\text{R}}(r_{\text{ij}}) \ \theta_{\text{ij}} \ \hat{\vec{r}}_{\text{ij}} ~, \end{equation} the dissipative force $\vec{f}^{\text{D}}_{\text{ij}}$ is \begin{equation} \vec{f}^{\text{D}}_{\text{ij}} = - \gamma\ w^{\text{D}}(r_{ij})\ (\hat{\vec{r}}_{\text{ij}} \cdot \vec{v}_{\text{ij}} ) \ \hat{\vec{r}}_{\text{ij}} ~, \end{equation} and $\vec{f}^{\text{C}}_{\text{ij}}$ is a conservative force, which we define in Eq.~\eqref{fc}. $\vec{r}_{\text{ij}}$ is the distance vector, $r_{\text{ij}}$ is the distance, $\hat{\vec{r}}_{\text{ij}}$ is the unit vector along the direction of the distance vector, and $\vec{v}_{\text{ij}}$ is the relative velocity. $\theta_{ij}$ is a white noise with the following properties: \begin{equation} \left< \theta_{\text{ij}} \right> = 0 \end{equation} \begin{equation} \left< \theta_{\text{ij}}(t) \theta_{\text{kl}}(t') \right> = (\delta_{\text{ik}} \delta_{\text{jl}} + \delta_{\text{il}} \delta_{\text{jk}}) \delta(t-t') \end{equation} where $\delta_{\text{ij}}$ is Kronecker's delta and $\langle \bullet \rangle$ denotes averaging with respect to time. The factors $\sigma$ and $\gamma$ characterize the strength of the random and dissipative force, subject to the weight functions $w^{\text{R}}$ and $w^{\text{D}}$. In the DPD simulation model, all the components of the force are short-ranged with a cutoff distance $r_\textrm{cut}$. This is typical of particle-based simulation models and necessary for the technique of domain decomposition (see Sec. \ref{sec:cell-system}). Long-range forces, which must be taken into account for electrostatic or dipolar interactions between colloids, are computed with dedicated routines in ESPResSo for the corresponding simulation scenarios.
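For concreteness, a standalone sketch of these pair forces (including the soft conservative repulsion defined in Eq.~\eqref{fc} below) could look as follows; in ESPResSo the random and dissipative parts are applied internally by the DPD thermostat, and the noise is discretized with the usual $1/\sqrt{\Delta t}$ scaling:
\begin{verbatim}
import numpy as np

def dpd_pair_force(r_ij, v_ij, gamma, kT, a_ij, r_cut, dt, rng):
    """DPD force on particle i due to j (r_ij = r_i - r_j,
    v_ij = v_i - v_j). The same noise theta must enter the
    reaction force on j to conserve momentum."""
    r = np.linalg.norm(r_ij)
    if r >= r_cut:
        return np.zeros(3)
    r_hat = r_ij / r
    w_r = 1.0 - r / r_cut                 # w^R(r); w^D = (w^R)^2
    sigma = np.sqrt(2.0 * kT * gamma)     # fluctuation-dissipation
    theta = rng.normal() / np.sqrt(dt)    # discretized white noise
    f_r = sigma * w_r * theta * r_hat
    f_d = -gamma * w_r**2 * np.dot(r_hat, v_ij) * r_hat
    f_c = a_ij * w_r * r_hat              # soft repulsion, Eq. (fc)
    return f_r + f_d + f_c
\end{verbatim}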
For DPD simulations---as is the case for the Langevin thermostat---we fix the relation between the intensity of the noise and the friction parameter by using the fluctuation-dissipation theorem:\cite{Espanol1995statistical} \begin{equation} \label{eq:fdt} \sigma^2 = 2 k_B T \gamma ~, \end{equation} where $k_B$ is Boltzmann's constant and $T$ is the temperature. In practice, we pick a value for the temperature $T$ and for the friction $\gamma$, and the noise amplitude $\sigma$ is set by ESPResSo's DPD thermostat according to Eq.~\eqref{eq:fdt}. It is furthermore required to select the weight functions $w^{\text{R}}$ and $w^{\text{D}}$ so that \begin{equation} \left[ w^{\text{R}}(r)\right] ^2 = w^{\text{D}} (r) ~. \end{equation} We chose $w^{\text{R}}$ as \begin{equation} w^{\text{R}}(r) = 1 - \frac{r}{r_{\text{cut}}} ~, \end{equation} which is a common choice.\cite{groot1997} In DPD simulations, it is customary to use a soft repulsive interaction with amplitude $a_{ij}$ to account for the conservative (C) forces $\vec{f}^{\text{C}}_{\text{ij}}$. We use \begin{equation}\label{fc} \vec{f}^{\text{C}}_{\text{ij}}= a_{\text{ij}} \left( 1 - \frac{r}{r_{\text{cut}}} \right) \hat{\vec{r}}_{\text{ij}} \end{equation} for $r < r_{\text{cut}}$ and zero otherwise. This soft potential allows the use of larger time steps in the simulation. \subsection{Green-Kubo techniques} \label{subsec:GK} The simplest way to evaluate bulk properties in MD simulations is to use Green-Kubo relations. They connect transport coefficients to the equilibrium fluctuations of the corresponding microscopic fluxes. In the present study, we focus on the self-diffusion coefficient $D$ and the shear viscosity $\eta$. Thus we evaluate the fluctuations of the particle velocities and of the shear stress. The characteristic equation for the self-diffusion coefficient is \begin{equation} D = \frac{1}{3} \int_0^\infty \langle \vec{v}_i(t) \cdot \vec{v}_i(t+\tau) \rangle_{|t} \,d\tau \label{eqn:GK_diff} \end{equation} where $\vec{v}_i$ represents the velocity of an individual particle $i$ in three dimensions. The angular brackets $\langle \ \rangle_{|t}$ represent the average over all time origins $t$ present in the simulation; $\tau$ is the lag time. For the shear viscosity in the unsheared case, we use \begin{equation} \label{eq:gk} \eta = \frac{V}{k_BT} \int_0^\infty \langle \sigma_{xy}(t) \sigma_{xy}(t+\tau) \rangle_{|t} \,d\tau \end{equation} where $k_B T$ is the thermal energy, $V$ is the box volume, and $\sigma_{xy}$ is the off-diagonal element of the instantaneous virial stress tensor, also known as the Irving-Kirkwood stress tensor, \begin{equation} \label{eq:stress} \sigma_{k,l} = \frac{\sum_i m_i v_i^{(k)} v_i^{(l)}}{V} + \frac{\sum_{j>i} F_{ij}^{(k)} r_{ij}^{(l)} }{V} \end{equation} where $k, l \in \{x, y, z\}$ denote the Cartesian components. \subsection{Brownian motion} \label{subsec:MSD} The migration of the fluid particles with time is evaluated using the mean-square displacement (MSD) \begin{equation} \text{MSD}(\tau) = \langle \left( \vec{x}(t+\tau) - \vec{x}(t) \right)^2 \rangle \label{eq:pure_msd} \end{equation} where $\vec{x}$ describes the position of a particle in three dimensions. In the absence of shear flow, the MSD allows the diffusion coefficient to be calculated using the relation $\text{MSD}(t) = 6Dt$. The computation of the mean square displacement is also relevant in shearing and non-steady-state regimes. Furthermore, it can be evaluated for several directions independently, allowing more detailed insight.
This is of particular interest for the simulations with shear flow.\cite{Foister1980} Several predictions can be made for the diffusion of Brownian particles under shear flow, and the MSD of our DPD fluid particles should follow these predictions. In steady shear flow, the displacement of particles along the shear direction $x$ is given by \cite{Orihara2011} \begin{equation} \langle \left( x(t) - x(0) - \dot \gamma z(0) t \right)^2 \rangle = 2 Dt \left[1 + \frac{1}{3}(\dot\gamma t)^2 \right] \label{eq:msd} \end{equation} with a cubic dependence on time, indicating an enhanced diffusion due to the particles migrating through regions with different shear velocities \cite{Foister1980}. The term $-\dot\gamma z(0) t$ on the left-hand side of Eq.~\eqref{eq:msd} removes the mean advective drift that stems from the flow velocity at the particle's initial position. By measuring the MSD in this manner, it is possible to observe the cubic dependency also observed by Orihara and Takikawa~\cite{Orihara2011}. For oscillatory shear flow, the MSD in the shearing direction $x$ follows the relation $\langle \Delta x(t)^2 \rangle = 2 D_{\text{eff}}t$ with \begin{equation} D_{\text{eff}} = D \left[1 + \frac{\gamma_0^2}{2} \left(2 \sin^2\Phi +1 \right) \right] \label{eq:msd_osc} \end{equation} where $\Phi$ represents the phase and $\gamma_0$ the deformation amplitude\cite{Takikawa2012}. The motion of the Brownian particles in an oscillatory shear flow combines a periodic and a diffusive component. Following Takikawa and Orihara\cite{Takikawa2012}, we evaluate the position of the particles in a stroboscopic manner---that is, with a time interval that is a multiple of the forced oscillation period---so that the resulting motion appears as purely diffusive with the modified diffusion coefficient $D_{\text{eff}}$ that depends on the amplitude of the shear flow and on the phase of the oscillatory movement. \subsection{Calculation of correlations} \label{sec:correl} We rely on two different procedures to compute formulas of the form \begin{equation} \label{eq:corr} \langle X(t_i) X(t_j) \rangle \end{equation} found in Eqs.~\eqref{eqn:GK_diff} and \eqref{eq:pure_msd}. A logarithmic correlator is available in ESPResSo for the set of built-in observables. Such a correlator samples the term $X(t_i) X(t_j)$ for fixed time differences $[0, M^m\Delta t, 2M^m\Delta t, \dots, N\cdot M^m\Delta t]$, for consecutive values of the exponent $m$, taking the form of blocks whose time intervals increase by a factor $M$ between successive blocks. Extending the covered lag times by a factor of $M$ thus requires the addition of only $N$ samples instead of $M$ times more samples: storing samples up to a time lag $\tau_{max} = N\cdot M^{m_{max}}\Delta t$ requires only $m_{max} N$ entries, which is $\mathcal{O}(\log \tau_{max})$, hence the name logarithmic correlator. This technique, which is useful when the number of samples would otherwise exceed the available memory, is presented in the book by Frenkel and Smit~\cite{Frenkel2002}. Another technique is the Fast Correlation Algorithm (FCA), which relies on Fourier transforms~\cite{nmoldyn_1995} to speed up the computation. We use the implementation provided by the Python package tidynamics~\cite{tidynamics_2018} for autocorrelations and mean-square displacements. We refer to this method as a linear correlator as it requires data samples linearly spaced in time.
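As an illustration, the Green-Kubo integral of Eq.~\eqref{eq:gk} can be evaluated from linearly spaced stress samples along the following lines (a hedged sketch: \texttt{tidynamics.acf} is the actual library call, all other names are ours):
\begin{verbatim}
import numpy as np
import tidynamics

def gk_viscosity(sigma_xy, dt_s, volume, kT, n_cut):
    """Green-Kubo shear viscosity, Eq. (eq:gk), from samples of the
    off-diagonal stress sigma_xy recorded every dt_s."""
    acf = tidynamics.acf(sigma_xy)       # linear (FCA) autocorrelation
    running = np.cumsum(acf) * dt_s      # running integral of the ACF
    return volume / kT * running[n_cut]  # plateau value at the cut-off
\end{verbatim}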
To use the FCA method, we store the variables of interest to disk (the position for the mean squared displacement or selected components of the stress tensor for the viscosity). The logarithmic correlator and the FCA only differ in their statistical sampling. The FCA method is equivalent to the computation of the pairwise correlation for all time intervals available and provides the same results, up to rounding errors, as computing the correlations with a naive $\mathcal{O}(N_{samples}^2)$ loop, where $N_{samples}$ is the total number of sample times. For the computation of Eq.~\eqref{eq:msd_osc}, we do not perform an average over time. The correlation is not of the form \eqref{eq:corr} and the two techniques presented above do not apply. \section{Results and discussion} \label{sec:results} We carried out all of our simulations with 10,000 particles and densities of $\rho = [3, 4, 5, 6, 7]$. We varied the strength of the repulsive parameter from $a_{ij} = 0$, i.e. no repulsive force, up to $a_{ij}=175$. These parameters are similar to the ones used by Zohravi {\em et al.}~\cite{Zohravi2018}, as theirs was the most complete study concerning the influence of the density $\rho$ and the strength of the conservative interaction parameter $a_{ij}$ on the shear viscosity, thus providing a good reference point to benchmark our method. The temperature is set to $k_BT=1$ and the friction constant $\gamma$ to 4.5. All simulations use a time step of $\Delta t = 0.005$. The results are available in full in the analysis notebooks, see appendix \ref{sec:reproc} for details. \subsection{Self-diffusion coefficient and viscosity of DPD fluids} In this section, we use the mean square displacement, Green-Kubo techniques, and Lees-Edwards boundary conditions to evaluate the equilibrium and non-equilibrium properties of the DPD fluid. We start by measuring the self-diffusion coefficient $D$ via the mean square displacement and then investigate the shear viscosity $\eta$ and its various contributions using the two other mentioned methods. Simulations using Green-Kubo relations were conducted under quiescent conditions, whereas samples under shear used the LEbc method. \subsubsection{Self-diffusion coefficient} We show the diffusion coefficient that was obtained from the mean square displacement, equation \eqref{eq:pure_msd}, of 10,000 particles at $k_BT = 1.0$ in Figure~\ref{fig:diff_coeff_quiescent}. Each individual particle trajectory was correlated using the logarithmic correlator from ESPResSo, followed by a fit of $\mathrm{MSD} = 6 D t$. This results in 10,000 individual diffusion coefficients per simulation run. The results, shown in Figure \ref{fig:diff_coeff_quiescent}, are obtained from an average of three individual quiescent runs. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\textwidth]{figs/diff_msd.pdf} \caption{\bf{The self-diffusion coefficient of a quiescent DPD fluid as a function of the repulsive force parameter $a_{ij}$} for the densities $\rho = [3, 4, 5, 6, 7]$ and $k_B T =1.0$. The lines connecting the symbols are to guide the eye.} \label{fig:diff_coeff_quiescent} \end{center} \end{figure} Our results show that the diffusion coefficient decreases with increasing repulsive strength, as expected, since particles are hindered in their movement by the surrounding particles and their repulsive interaction.
By increasing either the number density of the particles or their repulsive strength, the overlap between the repulsive spheres increases; hence, the movement is hindered through ``caging'' of particles by other DPD particles, due to the higher number of possible interaction partners or the higher interaction strength. This leads to the lower effective diffusion coefficients with increasing density and increasing repulsive strength shown in Figure \ref{fig:diff_coeff_quiescent}. The diffusion coefficient decreases monotonically with increasing repulsive strength for all number densities. Furthermore, the figure shows that the diffusion coefficient decreases monotonically as a function of particle density, an effect present over the entire range of repulsive forces. \subsubsection{Viscosity} The viscosity of the DPD fluid is connected, on the one hand, to the inter-particle forces described in equation~\eqref{eqn:forces} and, on the other hand, to the overall momentum of the particles, which results in the kinetic contribution. The force contributions can be further divided into the viscosities based on the random force $\vec{f}^{\text{R}}$, the dissipative force $\vec{f}^{\text{D}}$, and the conservative force $\vec{f}^{\text{C}}$. The sum of the kinetic and the conservative viscosity is referred to as the total viscosity. We measured the viscosity by two methods: first, via the Green-Kubo formula~\eqref{eq:gk}, i.e., from the stress fluctuations in quiescent simulations, and second, by directly measuring the instantaneous stress at the ``wall'' in a fluid sheared with the Lees-Edwards method, which will be explained later in this section. In both cases we include the noise in the measurements, as it generates a non-negligible contribution to the integral of the autocorrelation. Here, we start by discussing the results of the quiescent simulations. In order to fully describe our methods, we first show how we obtained these results. Green-Kubo results are calculated with the logarithmic correlator of ESPResSo. We also sampled the data in a trajectory file at linear time intervals and used the \texttt{acf} method of tidynamics. The online correlator collects data up to $\tau = 100,000$. Following the initial warm-up of 1,000,000 integration steps, we run the simulation for a total of 500,000 time steps of $\Delta t = 0.005$. We then plot the integral of the autocorrelation function and choose a uniform cut-off for all the simulations in order to avoid any bias between the iterations. The convergence of the autocorrelation function as needed by the Green-Kubo method is illustrated in Figure~\ref{fig:acf_convergence}, where vertical red lines show the time cutoff and horizontal red lines show the corresponding plateau value. \begin{figure*}[htbp] \begin{center} \includegraphics[width=1.0\textwidth]{figs/acf_convergence.pdf} \caption{\bf{Integral of the autocorrelation (ACF) of the DPD fluid ($\rho = 6.0$ and $a_{ij} = 25$) as measured by the Green-Kubo method. Data obtained using a linear correlator are shown in green and a logarithmic correlator in blue. Red lines indicate the time cutoff and the corresponding plateau value.}} \label{fig:acf_convergence} \end{center} \end{figure*} Depending on the contribution, we show the two possible correlation methods.
For the kinetic component, a linear correlator was used (shown in green in the left plot of Figure~\ref{fig:acf_convergence}). This linear correlator was necessary because the kinetic component cannot be extracted from the simulation package directly. A distinct plateau beginning at $t=2.0$ can be found in this data. For the conservative stress, Figure~\ref{fig:acf_convergence} (center), we used a linear (green) as well as a logarithmic (blue) correlator, to illustrate that the differences between them are only due to the sampling. Once more, a distinct plateau beginning at the cut-off is visible. The dissipative part of the viscosity as shown in the right plot of Figure~\ref{fig:acf_convergence} could only be evaluated using the logarithmic correlator. The dominant part of this viscosity contribution originates in the delta peak around $\tau = 0$ of the autocorrelation due to the sampling of the random noise. The fluctuating contributions to the autocorrelation function are very small. We decided to cut off the integral at this very initial point where the noise starts to dominate. The results for the Lees-Edwards experiments were obtained from three simulation runs per data point. The warm-up of our fluid consists of 100,000 integration steps. After this warm-up, we turn on the shear flow with $\dot \gamma = 1.0$ and re-equilibrate two runs for another 100,000 integration steps. As a test, one run was re-equilibrated for 500,000 integration steps, but showed the same results. The resulting equilibration time is larger than what is necessary to reach the linear flow regime predicted by the Navier-Stokes equation, using an approximation for the fluid viscosity. Then we start the data recording and obtain 200,000 stress values, 100 integration steps apart from each other (20,000,000 integration steps in total). We applied the blocking method \cite{flyvbjerg1989} to obtain the mean and standard deviation of this data, using the pyblock Python module \cite{pyblock}, and present the results in the right column of Figure \ref{fig:viscosity_comparisson}. For this analysis, we only used the last $2^{17} = 131,072$ measurement values in order to ensure that a steady shear profile had been reached. Using more or fewer data points, e.g. the last $2^{16}$ or 170,000, did not change the results. Therefore, we are confident in the assumption that a stationary regime was obtained. We also chose the number of sampling points as a power of 2 because it is most efficient to apply the blocking method on such a data set. In Figure~\ref{fig:viscosity_comparisson}, we show the collected results from all simulations. The left column shows the quiescent results from the Green-Kubo method and the right column shows the shear results from the experiments using the Lees-Edwards method. We also show a superimposed view of this data in the analysis notebook, part of the supplementary information to this paper. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\textwidth]{figs/comp_gk_le.pdf} \caption{\bf{The viscosity of a DPD fluid for various repulsive force parameters $a_{ij}$} for the densities $\rho = [3, 4, 5, 6, 7]$, at $k_B T = 1.0$. The left column shows the results of the quiescent Green-Kubo analysis and the right column shows the same values as obtained with shearing Lees-Edwards boundary conditions. The dashed line shown for the DPD viscosity represents the approximation found by Groot and Warren \cite{groot1997}.
The dotted lines are shown to guide the eye.} \label{fig:viscosity_comparisson} \end{center} \end{figure} The kinetic viscosity decreases with increasing repulsive strength for both methods. This is in agreement with the reduced diffusion constant of the particles (Figure~\ref{fig:diff_coeff_quiescent}). Samples with a lower density $\rho$ have a lower kinetic viscosity, except for the case of $a_{ij} = 0.0$ (no repulsion). Here, the kinetic viscosity is the highest for all samples, with a value of around $\eta_{kin} = 1.2$. The same trend can be observed for the viscosity measured via the Lees-Edwards technique. However, the reduction in the kinetic viscosity is more pronounced, with the quiescent method always showing higher viscosities than Lees-Edwards at high repulsive forces. Since the total viscosity consists of the kinetic and the conservative contributions, it takes the effect of the increasing repulsive strength into account. The results are consistent with the case of no repulsive force present, where all samples show the same kinetic and total viscosity. We have noticed small negative values for the Lees-Edwards result, for $\rho=3$ and the higher values of $a_{ij}$, but the fluctuations of the kinetic stress are much larger than the actual value, which makes it a difficult measurement. While there is a difference between the kinetic component of the viscosity between the quiescent and shearing samples for some values of the parameters, the similarity between the Green-Kubo and Lees-Edwards results is remarkable overall. There is only a maximal difference of $\frac{\Delta \eta}{\eta} = \frac{\eta_{le} - \eta_{gk}}{\eta_{gk}} = 0.07$ between the two methods. In our experiments, we also directly measure the dissipative part of the viscosity and compare these values to the prediction by Groot and Warren \cite{groot1997}. Figure \ref{fig:viscosity_comparisson} shows that, for no repulsive force present, the data points from Green-Kubo lie exactly on the predicted line, whereas for higher repulsive forces all points seem to be slightly shifted downwards but remain at a constant value. For the measurements obtained by the Lees-Edwards technique, a bigger separation between the theoretical prediction and the obtained values can be observed. Even in the absence of a repulsive force, the values already deviate from the prediction. A possible reason for this discrepancy is the linear interpolation of the velocity differences between the particles at the boundary. This might underestimate the real velocity difference and, hence, also the real stress caused by the relative movement. Furthermore, the introduction of additional energy via the shear could change the system in a way that makes it effectively different from the quiescent one. The conservative viscosity shows a trend similar to that of the total viscosity. It starts from $\eta_{\textit{cons}} = 0.0$ in the cases without conservative force and increases monotonically thereafter. The higher the density of a sample, the steeper this increase. Error bars in this plot are based on the sum of the errors from the kinetic viscosity and the total one. Overall, our experiments clearly show an agreement between the static and the dynamic measuring technique, even though the trends and numerical values for the Lees-Edwards measurements are less obvious and show a larger error.
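The error bars on the Lees-Edwards stress data were obtained with the blocking analysis mentioned above; a hedged sketch of that step, assuming pyblock's \texttt{reblock} and \texttt{find\_optimal\_block} interface, reads:
\begin{verbatim}
import numpy as np
import pyblock

def blocked_stats(samples):
    """Mean and standard error of correlated samples via reblocking."""
    stats = pyblock.blocking.reblock(np.asarray(samples))
    opt = pyblock.blocking.find_optimal_block(len(samples), stats)[0]
    best = stats[opt]
    return float(best.mean), float(best.std_err)
\end{verbatim}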
It would be useful in later work to study in more detail the dynamics of the sheared fluid to provide a better assessment of the difference between the quiescent fluid and the sheared fluid. \subsection{Flow profile} \label{subsec:flowprofile} We perform simulations for a DPD fluid with $N=10000$ particles, a number density $\rho = 3$, a friction coefficient $\gamma = 4.5$, a cut-off radius $r_{\text{cut}} = 1.0$ and $F_{\text{max}} = 25.0$. The shear velocities in our simulations were $v = 0.1$, $1.0$, and $1.5$; they represent the velocity added to a particle when it crosses the lower boundary or, respectively, subtracted from a particle when it crosses the upper boundary. The established shear gradient leads to flow velocities of $- v/2$ at the bottom of the simulation box and of $+v/2$ at the top of the simulation box. We investigated the height dependence of the flow velocity in the gradient direction to check whether the flow profile was properly equilibrated and linear across the box. For this purpose we divided the box into 50 horizontal slabs, oriented along the gradient direction, and determined the average velocity of the DPD fluid particles in each slab after a start-up time of $t=50,000$. We show the average and standard deviation based on three different, independent snapshots for three different shear velocities in Figure \ref{fig:flowprofile}. The expected linear flow profile is also included as dashed lines for comparison. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\textwidth]{figs/flowprofile.pdf} \caption{\bf{Height-dependent velocity profile of the DPD fluid for 3 different shear velocities $v = [0.1, 1.0, 1.5]$. The expected linear shear profile is shown as a dashed line.}} \label{fig:flowprofile} \end{center} \end{figure} The resulting shear gradient is linear and in good agreement with the expected shape. Furthermore, an increase in the number of bins to improve the sampling resolution along the box height did not have an impact on the linearity or slope. We find that our implementation does not show any discontinuity or spurious flows in the shear profile near the simulation boundary. Therefore, the correction proposed by Chatterjee~\cite{Chatterjee2007} of omitting dissipative forces at the boundary is not required here; such discontinuities are not inherent to Lees-Edwards boundary conditions, which confirms the previous research by Leimkuhler and Shang~\cite{Leimkuhler2016}. \subsection{Brownian motion with shear flow} The analysis of the mean-square displacement (MSD) enables the identification of Brownian motion by the linear dependence of the MSD on time. Whereas the viscosity of the DPD fluid is of Newtonian character, the MSD is influenced by the shear. In sheared systems, there exists a cubic-in-time contribution (Eq.~\eqref{eq:msd}) to the MSD that was observed experimentally for polystyrene spheres by Orihara and Takikawa~\cite{Orihara2011}. In computer simulations using the Lees-Edwards method, the study of diffusion depends on the ability to reconstruct the physical trajectories of the particles even though they experience ``jumps'' when crossing the boundaries. As in the case of periodic boundary conditions, the coordinates are wrapped into the primary simulation box.
\subsection{Brownian motion with shear flow} The analysis of the mean-square displacement (MSD) enables the identification of Brownian motion through the linear dependence of the MSD on time. Whereas the viscosity of the DPD fluid is of Newtonian character, the MSD is influenced by the shear: in sheared systems, there exists a cubic-in-time contribution (Eq.~\eqref{eq:msd}) to the MSD, which was observed experimentally for polystyrene spheres by Orihara and Takikawa~\cite{Orihara2011}. In computer simulations using the Lees-Edwards method, the study of diffusion depends on the ability to reconstruct the physical trajectories of the particles even though they experience ``jumps'' when crossing the boundaries. As in the case of periodic boundary conditions, the coordinates are wrapped into the primary simulation box. Instead of using the plain unwrapped coordinates based on the number of jumps in each direction, we use the accumulated offset defined in Eq.~\eqref{eq:accumulated-offset} to obtain physically consistent trajectories. The study of Brownian motion thus serves as an additional verification of the correctness of our implementation. Once more we perform simulations for a DPD fluid with the parameters mentioned in subsection \ref{subsec:flowprofile}. A repulsive force of $F_{\text{max}} = 25.0$ was used for the continuous shear simulations and $F_{\text{max}} = 5.0$ for the oscillatory shear simulations. We used the lower value of $F_\text{max}$ in the oscillatory case to obtain a higher diffusivity of the DPD particles and hence a better signal-to-noise ratio: the effective diffusion coefficient $D_{\textit{eff}}$ for oscillatory shear depends on the strain and on the phase of the movement, and as both are constrained by the simulation (e.g.\ the time the shear wave needs to travel through the box), we had to enhance the diffusion of the particles to show the effect clearly. \subsubsection{Continuous shear} We chose the same simulation conditions as reported in subsection \ref{subsec:flowprofile} and five different shear velocities between $v = 0.1$ and $v = 1.5$, resulting in shear rates ranging from $\dot \gamma \approx 0.003$ to $\dot \gamma \approx 0.05$. The mean-squared displacement (MSD) of the particles was measured after equilibration of the shear flow. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\textwidth]{figs/adv_diff_contishear.pdf} \caption{\bf{Results for the MSD and diffusion coefficient $D$ in continuous shear. The upper panel shows the development of the MSD over time for the neutral direction (black), which is linear, and for the shearing direction, where the leading term is cubic at large times, for 5 different shear rates $\dot \gamma$. To obtain the diffusion coefficients in the lower panel, we fitted the MSD in the neutral direction (with a linear function) and in the shear direction with Eq.~\eqref{eq:msd} with a set value for $\dot\gamma$. The fitted curves, in black, match the simulation data. In the lower panel, the black solid line indicates the value of $D$ obtained from the quiescent simulation (see Fig.~\ref{fig:diff_coeff_quiescent}), with the dotted lines at $\pm$ one standard deviation. The round (square) symbols show the value of $D$ in the neutral (shear) direction.}} \label{fig:msd_conti} \end{center} \end{figure} Figure \ref{fig:msd_conti} shows the MSD in the neutral and in the shearing direction, with one example per shear rate. The colored curves show the actual measurement data, while the black dotted lines show curves fitted to these data; we comment on the fitting procedure and the relation between the curves in the caption. Following the relations presented in subsection \ref{subsec:MSD}, the linear-in-time character of the MSD in the neutral direction is unaffected by the shearing, whereas the MSD in the shearing direction shows a gradual transition to a cubic time dependence, as we expect from equation~\eqref{eq:msd}. The data and the fits superimpose and are indistinguishable in the figure. The lower part of Figure~\ref{fig:msd_conti} shows the measured diffusion coefficient $D$ and its standard deviation for the neutral and the shearing direction, as determined from three independent runs. We obtain these values by fitting the theoretical expressions of subsection \ref{subsec:MSD} to the measured values; a sketch of this fitting procedure is given below.
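A minimal version of this fit, assuming Eq.~\eqref{eq:msd} takes the standard advection-diffusion form $\langle \Delta x^2(t)\rangle = 2Dt + \tfrac{2}{3}D\dot\gamma^2 t^3$ in the shear direction and $2Dt$ in the neutral direction (function and variable names are ours, for illustration only):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def msd_shear_model(t, D, gdot):
    # assumed standard form of Eq. (msd); gdot is set by the protocol
    return 2.0 * D * t + (2.0 / 3.0) * D * gdot**2 * t**3

def fit_diffusion(t, msd_neutral, msd_shear, gdot):
    # neutral direction: slope of a linear fit gives 2 D
    D_neutral = np.polyfit(t, msd_neutral, 1)[0] / 2.0
    # shear direction: fit only D, with the shear rate held fixed
    (D_shear,), _ = curve_fit(lambda t, D: msd_shear_model(t, D, gdot),
                              t, msd_shear, p0=[D_neutral])
    return D_neutral, D_shear
\end{verbatim}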
The diffusion coefficients obtained in the neutral and in the shear direction differ by at most around 2.5\%. We thus confirm numerically the validity of Eq.~\eqref{eq:msd}. This measurement, in a particle-based simulation using the Lees-Edwards method, is only possible thanks to the trajectories reconstructed via Eq.~\eqref{eq:accumulated-offset}. The diffusion of simple particles in shear flow depends only on the diffusion coefficient and on the shear rate, so that the same analysis holds for the experiments of Orihara and Takikawa~\cite{Orihara2011} and for our simulations. \subsubsection{Oscillatory shear} We use the same settings as for the continuous shear flow experiments but a reduced conservative force of $F_{\text{max}} = 5.0$, in order to enhance the diffusion, and an oscillation period of 500. The diffusion coefficient in the neutral direction during the oscillatory flow is $D = 0.61 \pm 0.01$. We then plot the expected effective diffusion coefficient $D_{\text{eff}}$ following equation~\eqref{eq:msd_osc}. The results shown were obtained from fits to 299 periods of oscillatory shear; by sliding a window over the trajectories, we obtain results for different phases (see the sketch at the end of this subsection). Fits of the MSDs were cut off at $\tau = 10^4$, as the MSD at larger times results from too few averaging points. \begin{figure}[h!] \begin{center} \includegraphics[width=0.5\textwidth]{figs/osc_shear_phase.pdf} \includegraphics[width=0.5\textwidth]{figs/osc_shear_strain.pdf} \caption{\bf{Results for the MSD and diffusion coefficient $D_{\text{eff}}$ in oscillatory shear. The upper part illustrates the phase dependent diffusion coefficient for two different strain amplitudes $\gamma_0 = 0.25$ and $\gamma_0 = 0.5$. The lower part shows the strain dependence of the diffusion coefficient $D_{\text{eff}}$ for two phases $\phi$. All predictions are based on Equation \eqref{eq:msd_osc} and the diffusion coefficient measured in the neutral direction.}} \label{fig:msd_osc} \end{center} \end{figure} For oscillatory shear, we can obtain a phase dependent diffusion coefficient, as shown in Figure \ref{fig:msd_osc} for two different strain amplitudes $\gamma_0$. Furthermore, we show the strain dependence of $D_{\text{eff}}$ at two fixed phases $\phi$, which follows a quadratic dependence on the strain. Both results indicate the correct handling of jumps across a boundary and the correct handling of interactions. For large strains, the measured $D_{\text{eff}}$ deviates and is significantly higher than the prediction; this can be explained by the higher shear velocity at the boundaries in this case. Hence, Lees-Edwards boundary conditions are indeed translationally invariant and do not require any special modification at the boundary in order to avoid spurious discontinuities in the flow. As long as the thermalization and the velocity differences are calculated correctly, taking the shear velocity into account, one can even model shear-flow phenomena with a shear velocity that changes over time.
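The phase-resolved analysis mentioned above can be sketched as follows (illustrative names; \texttt{x} is a shear-direction coordinate unwrapped via Eq.~\eqref{eq:accumulated-offset}, sampled with spacing \texttt{dt}):
\begin{verbatim}
import numpy as np

def phase_resolved_msd(x, dt, period, phase, max_lag):
    """MSD restricted to time origins at a fixed phase of the
    oscillation, averaged over all available periods."""
    first = int(round(phase / (2.0 * np.pi) * period / dt))
    step = int(round(period / dt))
    # one time origin per period, all at the requested phase
    origins = np.arange(first, len(x) - max_lag, step)
    lags = np.arange(1, max_lag)
    msd = np.array([np.mean((x[origins + k] - x[origins]) ** 2)
                    for k in lags])
    return lags * dt, msd
\end{verbatim}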
\section{Conclusions} \label{sec:conclusions} We have implemented the method proposed by Lees and Edwards in 1972 for the simulation of linear shear flows in Molecular Dynamics. In section \ref{sec:implementation} we provide the information needed for a practical implementation in simulation software, specifically the distance function, the velocity-difference function, the cell system, and the storage of the trajectory offset; this should be a useful starting point for other scientists. In addition, our code is publicly available under an open-source license. We demonstrated the Lees-Edwards method with a dissipative particle dynamics (DPD) fluid, a common choice in mesoscopic fluid simulations, and obtained a linear velocity profile. We find good agreement between the equilibrium and non-equilibrium properties of DPD fluids as evaluated by Green-Kubo relations in quiescent experiments and by experiments under shear flow with Lees-Edwards boundary conditions. Further work on the low shear-rate limit of the Lees-Edwards method would be of interest here: while requiring longer simulation runs, it should shed light on the remaining difference in numerical value seen in the comparison of Fig.~\ref{fig:viscosity_comparisson}. Next, we were able to reconstruct continuous trajectories from the shear simulations, as if the system were infinitely extended, as is typically done for periodic simulation boxes. Using the reconstructed trajectories, we observe the diffusion of particles via the mean-square displacement and the diffusion coefficient, in equilibrium as well as in non-equilibrium situations. We recover the predicted enhanced diffusion of Brownian particles in shear flow, which would be impossible without the quantity $x_{\text{part, LE}}$ defined in Eq.~\eqref{eq:accumulated-offset}. These results are of special interest as they allow for a direct comparison to the experiment of Orihara and Takikawa under steady shear~\cite{Orihara2011} and to the one of Takikawa and Orihara under oscillatory shear~\cite{Takikawa2012}. As the results depend only on the diffusion coefficient and on the shear rate, they are promising for in-silico preparatory work for other types of colloids, such as non-spherical colloids or polymers. Our work opens up new possibilities for numerical experiments that require an explicit solvent undergoing shear flow within the convenient simulation package ESPResSo. We confirm the results of Leimkuhler and Shang~\cite{Leimkuhler2016} that the combination of DPD and Lees-Edwards yields a translationally invariant system, which ensures a sound basis for further research with this simulation setup. Other methods have been devised to simulate the motion of particles in shear flow, such as the combination of the Lees-Edwards method with the ``Smoothed Profile Method'' (SPM)~\cite{kobayashi2011}. It is possible, instead, to use DPD to represent the fluid in such applications; this can, for instance, be useful to control the solvent quality of sheared polymer solutions. Another prospective use case is the yielding of gels, where periodic boundaries are necessary to capture the macroscopic behavior of extended gel systems: highly localized restructuring can lead to feedback between the applied shear deformation and the network structure that would not be captured by other methods. This work also allows us to capture the shear-induced orientation of, e.g., soft particles or liquid crystals, where many neighboring interactions must be considered.
\begin{acknowledgments} We acknowledge funding from the Research Foundation - Flanders (FWO) Odysseus Program (grant agreement number G0H9518N) and from the International Fine Particle Research Institute (IFPRI). Pierre de Buyl was a postdoctoral fellow of the Research Foundation - Flanders (FWO) while preparing most of this work. The resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government. \end{acknowledgments}
\section{Introduction} \label{sec: intro} Given polynomials $p, q \in {\mathbb{R}}[\mathsf{x}] := {\mathbb{R}}[x_1, \dots, x_n]$ where all the coefficients of $p$ are nonnegative, Handelman \cite{Handelman86} gave a necessary and sufficient condition (reproduced as Theorem \ref{thm: HandelmanCharacterization} below) for there to exist a nonnegative integer $m$ such that the coefficients of $p^m q$ are all nonnegative. In another paper \cite{Handelman92}, Handelman showed that, given a polynomial $p \in {\mathbb{R}}[\mathsf{x}]$ such that $p(1, \dots, 1) > 0$, if the coefficients of $p^m$ are all nonnegative for some $m > 0$, then the coefficients of $p^m$ are all nonnegative for every sufficiently large $m$. In the case where $p$ is a form (i.e.\ a~homogeneous polynomial), there is a stronger positivity condition that $p$ may satisfy. If $p(x) = \sum_{|w| = d} a_wx^w \in {\mathbb{R}}[\mathsf{x}]$ is homogeneous of degree $d$ (with $a_w\in{\mathbb{R}}$), we say that $p$ {\emph{has strictly positive coefficients}} if $a_w > 0$ for all $|w| = d$. Here we use standard multi-index notation, where an $n$-tuple $w = (w_1,\ldots, w_n)$ of nonnegative integers has length $|w| := w_1 + \cdots + w_n$ and $x^{w} = x_1^{w_1} x_2^{w_2} \cdots x_n^{w_n}$. Denote the closed positive orthant of real $n$-space by $${\mathbb{R}}_+^n\>:=\>\{x=(x_1,\dots,x_n)\in{\mathbb{R}}^n\colon x_1,\dots,x_n \ge0\}.$$ Our main result is as follows: \begin{theorem} \label{thm: strictPositivstellensatz} Let $p \in {\mathbb{R}}[\mathsf{x}]$ be a nonconstant real form. The following are equivalent: \begin{enumerate}[(A)] \item \label{thm: someOddPos} The form $p^m$ has strictly positive coefficients for some odd $m\ge1$. \item \label{thm: somePos} The form $p^m$ has strictly positive coefficients for some $m\ge1$, and $p(x) > 0$ at some point $x \in {\mathbb{R}}_+^n$. \item \label{thm: eventualPos} For each real form $q \in {\mathbb{R}}[\mathsf{x}]$ strictly positive on ${\mathbb{R}}_+^n \setminus\{0\}$, there exists a positive integer $m_0$ such that $p^m q$ has strictly positive coefficients for all $m \ge m_0$. \end{enumerate} \end{theorem} Theorem \ref{thm: strictPositivstellensatz} can be derived from an isometric embedding theorem for holomorphic bundles, due to Catlin-D'Angelo \cite{CD99}. The argument is sketched in an appendix at the end of this paper. Another condition equivalent to each of the three conditions of Theorem \ref{thm: strictPositivstellensatz} was given by the second author and To in \cite{TanTo}. The line of argumentation in \cite{TanTo} is analytic in nature, and the proof therein invokes Catlin-D'Angelo's isometric embedding theorem. As the statement of Theorem \ref{thm: strictPositivstellensatz} involves only real polynomials, it is desirable to give a purely algebraic proof, which is what we shall do below. We will in fact give two proofs of very different nature. Both are independent of Catlin-D'Angelo's proof in \cite{CD99}, which uses compactness of the $\bar\partial$-Neumann operator on pseudoconvex domains of finite type in ${\mathbb{C}}^n$ and an asymptotic expansion of the Bergman kernel function by Catlin \cite{Catlin99}. We remark that in the case when $n = 2$, Theorem \ref{thm: strictPositivstellensatz} follows from De~Angelis' work in \cite{deAngelis03} and has been independently observed by Handelman \cite{HandelmanMO}. Our first proof of Theorem \ref{thm: strictPositivstellensatz} uses the criterion of Handelman \cite{Handelman86} mentioned above.
Our second proof reduces Theorem \ref{thm: strictPositivstellensatz} to the archimedean local-global principle due to the first author, in a spirit similar to \cite{Scheiderer12}. For a real form, having strictly positive coefficients is a certificate for being strictly positive on ${\mathbb{R}}_+^n\setminus\{0\}$. Therefore Theorem \ref{thm: strictPositivstellensatz} can be seen as a Positivstellensatz for forms $q$, relative to ${\mathbb{R}}_+^n\setminus \{0\}$. In particular, the case where $p = x_1 + \cdots + x_n$ specializes to the classical P\'olya Positivstellensatz \cite{Polya28} (reproduced in \cite[pp. 57--60]{HLP52}). For any $n \ge 2$ and even $d \ge 4$, there are examples of degree-$d$ $n$-ary real forms $p$ with some negative coefficient that satisfy the equivalent conditions of Theorem \ref{thm: strictPositivstellensatz} (see Example \ref{eq: DV} below). \medbreak \paragraph{{\textbf{Acknowledgements.}}} The second author would like to thank his PhD supervisor Professor Wing-Keung To for his continued guidance and support. We would also like to thank David Handelman for his answer on MathOverflow \cite{HandelmanMO}, and the anonymous referee for pointing out reference \cite{kr}. \section{A theorem of Handelman} Let $p = \sum_{w \in {\mathbb{Z}}^n} c_w x^w\in{\mathbb{R}}[\mathsf{x},\mathsf{x}^{-1}] := {\mathbb{R}}[x_1, \ldots, x_n, x_1^{-1}, \ldots, x_n^{-1}]$ be a Laurent polynomial. Following Handelman \cite{Handelman86} we introduce the following terminology. The {\emph{Newton diagram}} of $p$ is the set $\Log(p) := \{w \in {\mathbb{Z}}^n : c_w \ne0\}$. A subset $F$ of $\Log(p)$ is a \emph{relative face} of $\Log(p)$ if there exists a face $K$ of the convex hull of $\Log(p)$ in ${\mathbb{R}}^n$ such that $F = K \cap \Log(p)$. In particular, the subset $\Log(p)$ is itself a relative face of $\Log(p)$, called the \emph{improper relative face}. Given a set $F\subset{\mathbb{Z}}^n$, an integer $k\ge1$ and a point $z\in{\mathbb{Z}}^n$, we write $kF+z:=\{w^{(1)}+\cdots+w^{(k)}+z\colon w^{(1)},\dots,w^{(k)}\in F\}\subset{\mathbb{Z}}^n$. For a subset $E$ of ${\mathbb{Z}}^n$ and the above Laurent polynomial $p$ we write $p_E:=\sum_{w\in E}c_wx^w$. \begin{definition}\label{dfnstratum} Let $p\in{\mathbb{R}}[\mathsf{x},\mathsf{x}^{-1}]$ be a nonzero Laurent polynomial. Given a relative face $F$ of $\Log(p)$ and a finite subset $S$ of ${\mathbb{Z}}^n$, a {\emph{stratum of $S$ with respect to $F$}} is a nonempty subset $E\subset S$ such that \begin{enumerate}[(i)] \item\label{cond: stratumOne} there exist $k\ge1$ and $z\in{\mathbb{Z}}^n$ such that $E\subset kF+z$; and \item\label{cond: stratumTwo} whenever $E\subset kF+z$ for some $z\in{\mathbb{Z}}^n$ and some $k\ge1$, it follows that $E=(kF+z)\cap S$. \end{enumerate} A stratum $E$ of $S$ with respect to $F$ is \emph{dominant} if, in addition, the following holds: \begin{enumerate}[(i)] \setcounter{enumi}{2} \item \label{cond: stratumDominant} If $E\subset(k\Log(p)+z)\setminus(kF+z)$ for some $k\ge1$ and some $z\in{\mathbb{Z}}^n$, then $(kF+z)\cap S=\emptyset$. \end{enumerate} \end{definition} \begin{theorem}[Handelman {\cite[Theorem A]{Handelman86}}] \label{thm: HandelmanCharacterization} Let $p$ and $q$ be Laurent polynomials in ${\mathbb{R}}[\mathsf{x},\mathsf{x}^{-1}]$, where $p$ has nonnegative coefficients. 
Then $p^m q$ has nonnegative coefficients for some positive integer $m$ if, and only if, both the following conditions hold: \begin{enumerate}[\indent(a)] \item \label{cond: HandelmanConditionOne} For each dominant stratum $E$ of $\Log(q)$ with respect to the improper relative face $\Log(p)$, the polynomial $q_{E}$ is strictly positive on the interior of ${\mathbb{R}}_+^n$. \item \label{cond: HandelmanConditionTwo} For each proper relative face $F$ of $\Log(p)$, and each dominant stratum $E$ of $\Log(q)$ with respect to $F$, there exists a positive integer $m$ such that $p_F^m q_E$ has nonnegative coefficients. \end{enumerate} \end{theorem} \noindent Here, for a Laurent polynomial $f$, by ``$f$ has nonnegative coefficients'', we mean that all coefficients of $f$ are nonnegative. As observed in \cite{Handelman86}, the product of a suitable monomial with $p_F$ (resp.\ $q_E$) is a Laurent polynomial involving fewer than $n$ variables (when $F$ is proper), so that the condition \eqref{cond: HandelmanConditionTwo} is inductive. \section{First proof of Theorem \ref{thm: strictPositivstellensatz}} \label{sec: proofOfMainTheorem} We fix an integer $n\ge1$ and use the notation $\mathsf{x}=(x_1,\dots,x_n)$ and $[n]=\{1,\dots,n\}$. Given $z\in{\mathbb{Z}}^n$ and a subset $J$ of $[n]$, let $z_J:=(z_j)_{j\in J}\in{\mathbb{Z}}^J$ denote the corresponding truncation of~$z$. For a nonnegative integer $d$, we write $({\mathbb{Z}}^n_+)_d=\{w\in{\mathbb{Z}}^n\colon w_1\ge0,\dots,w_n\ge0$, $w_1+\cdots+w_n=d\}$. \begin{lemma}\label{domstrat} Let $p\in{\mathbb{R}}[\mathsf{x}]$ be a form of degree $d\ge1$ with strictly positive coefficients. Let $e\ge0$, and let $S\subset({\mathbb{Z}}^n_+)_e$ be a nonempty subset. \begin{itemize} \item[(a)] The relative faces of $\Log(p)$ are the sets $F_J:=\{w\in ({\mathbb{Z}}^n_+)_d \colon w_J=0\}$, where $J\subset[n]$ is a subset. \item[(b)] Let $J\subset [n]$. For each stratum $E$ of $S$ with respect to $F_J$, there exists $\beta\in {\mathbb{Z}}_+^J$ satisfying $|\beta|\le e$ such that $$E\>=\>E_{J,\beta}\>:=\>\{w\in S\colon w_J=\beta\}.$$ \item[(c)] If $S=({\mathbb{Z}}_+^n)_e$ and $\emptyset\ne J\subsetneq[n]$, the stratum $E_{J,\beta}$ of $S$ with respect to $F_J$ is dominant if and only if $\beta=0$. \end{itemize} \end{lemma} In particular, $E=S$ is the only stratum of $S$ with respect to the improper relative face $\Log(p)$ of $\Log(p)$, by (b). Note that this stratum is dominant for trivial reasons. \begin{proof} By assumption we have $\Log(p)=({\mathbb{Z}}^n_+)_d$. Denote this set by $F$. Assertion (a) is clear. Note that $J=\emptyset$ resp.\ $J=[n]$ gives $F_J=F$ resp.\ $F_J=\emptyset$. To prove (b), fix a subset $J\subset[n]$, and let $E\subset S$ be a stratum of $S$ with respect to $F_J$. So there exist $k\ge1$ and $z\in{\mathbb{Z}}^n$ such that $E=(kF_J+z)\cap S$. By the particular shape of $F$ we have $$kF_J+z\>=\>\{w\in{\mathbb{Z}}^n\colon|w|=ke+|z|,\ w\ge z,\ w_J=z_J\},$$ where $w\ge z$ means $w_i\ge z_i$ for $i=1,\dots,n$. Therefore $E\subset E_{J,\beta}$ with $\beta:=z_J$. Note that $E_{J,\beta}$ can be nonempty only when $|\beta|\le e$. The proof of (b) will be completed if we show that $E_{J,\beta}\subset lF_J+y$ holds for suitable $l\ge1$ and $y\in{\mathbb{Z}}^n$. To this end it suffices to observe that there exist $l\ge1$ and $y\in{\mathbb{Z}}^n$ such that $ld\ge e$, $|y|=e-ld$, $y_J=\beta$ and $y_i\le0$ for $i\in[n]\setminus J$. These $l$ and $y$ will do the job. It remains to prove (c), so assume now that $S=({\mathbb{Z}}^n_+)_e$.
Let $J\subsetneq [n]$ be a proper subset, and let $\beta\in{\mathbb{Z}}_+^J$ be such that $E_{J,\beta}$ is nonempty (hence a stratum of $S$). First assume $\beta\ne0$. There exist $k\ge1$ and $z\in{\mathbb{Z}}^n$ such that $0\le z_J\le\beta$ and $z_J\ne\beta$, such that $z_{[n]\setminus J} \le0$, and such that $|z|=e-kd$. Then $E_{J,\beta}\subset kF+z$ and $E_{J,\beta}\cap(kF_J+z)=\emptyset$. But there exists $w\in S$ with $w_J=z_J$, showing that $(kF_J+z) \cap S\ne\emptyset$, whence $E_{J,\beta}$ is not dominant. On the other hand, $E_{J,\beta}$ is easily seen to be dominant when $\beta=0$. \end{proof} We now give a first proof of Theorem \ref{thm: strictPositivstellensatz}. The implications \eqref{thm: someOddPos} $\Rightarrow$ \eqref{thm: somePos} and \eqref{thm: eventualPos} $\Rightarrow$ \eqref{thm: someOddPos} are trivial. To prove \eqref{thm: somePos} $\Rightarrow$ \eqref{thm: eventualPos}, it suffices to show the following apparently weaker statement: \begin{lemma}\label{weakerstatement} Given forms $f,\,g\in{\mathbb{R}}[x_1,\dots,x_n]$, where $f$ is nonconstant with strictly positive coefficients and where $g$ is strictly positive on ${\mathbb{R}}^n_+\setminus\{0\}$, there exists $l\ge1$ such that $f^lg$ has nonnegative coefficients. \end{lemma} Assuming that Lemma \ref{weakerstatement} has been shown, we immediately obtain a stronger version of it: under the same assumptions, $f^lg$ actually has strictly positive coefficients for suitable $l\ge1$. Indeed, choose a form $g'$ with $\deg(g')=\deg(g)$ such that $g'$ has strictly positive coefficients and the difference $h:=g-g'$ is strictly positive on ${\mathbb{R}}_+^n\setminus\{0\}$, for instance $g'=c(x_1+\cdots+x_n)^{\deg(g)}$ with sufficiently small $c>0$. Applying Lemma \ref{weakerstatement} to $(f,h)$ instead of $(f,g)$ gives $l\ge1$ such that $f^lh$ has nonnegative coefficients. Since $f^lg'$ has strictly positive coefficients, the same is true for $f^lg=f^lg'+f^lh$. Now assume that condition \eqref{thm: somePos} of Theorem \ref{thm: strictPositivstellensatz} holds. Then the form $p$ is strictly positive on ${\mathbb{R}}_+^n\setminus\{0\}$. In order to prove \eqref{thm: eventualPos}, let $q$ be a form strictly positive on ${\mathbb{R}}_+^n\setminus\{0\}$, and apply the strengthened version of Lemma \ref{weakerstatement} to $(f,g)=(p^m,\,p^iq)$ for $0\le i\le m-1$. This gives $l\ge1$ such that $p^{lm+i}q$ has strictly positive coefficients for $0\le i\le m-1$, and hence for all $i\ge0$, since multiplying by further powers of the positive-coefficient form $p^m$ preserves strict positivity of the coefficients. This is \eqref{thm: eventualPos} with $m_0=lm$. So indeed it suffices to prove Lemma \ref{weakerstatement}. \begin{proof}[Proof of Lemma \ref{weakerstatement}] The case $n = 1$ is trivial. Suppose that $n>1$ and that the above statement holds in fewer than $n$ variables. Let $\deg(f)=d\ge1$ and $\deg(g)=e$. As before, choose a form $g'$ with $\deg(g')=e$ and with strictly positive coefficients such that $h:=g-g'$ is strictly positive on ${\mathbb{R}}^n_+\setminus\{0\}$. This can be done in such a way that $\Log(h)=({\mathbb{Z}}^n_+)_e$, i.e.\ all coefficients of $h$ are nonzero. We shall verify that the pair $(f,h)$ satisfies the conditions in Theorem \ref{thm: HandelmanCharacterization}. Since $f$ has strictly positive coefficients, the only (dominant) stratum of $S=\Log(h)$ with respect to $F=\Log(f)$ is $E=S$, by Lemma \ref{domstrat}. Thus $h_E=h$ is strictly positive on ${\mathbb{R}}_+^n\setminus\{0\}$, so that condition \eqref{cond: HandelmanConditionOne} is satisfied. Next, let $J\subset[n]$ be a proper nonempty subset.
Using the notation of Lemma \ref{domstrat}, the only dominant stratum of $S=\Log(h) = ({\mathbb{Z}}_+^n)_e$ with respect to the proper relative face $F_J$ of $F=\Log(f)$ is $E:=E_{J,0}=\{w\in S\colon w_J=0\}$, according to Lemma \ref{domstrat}(c). Without loss of generality we may assume $J=\{r+1,\dots,n\}$ for some $1\le r<n$, where $J$ has cardinality $n-r$. Then $h_E$ is a form in ${\mathbb{R}}[x_1,\dots,x_r]$ that is strictly positive on ${\mathbb{R}}_+^r\setminus\{0\}$, since $h_E(x_1,\dots,x_r)=h(x_1,\dots,x_r,0,\dots,0)>0$ for all $(x_1,\dots,x_r)\in{\mathbb{R}}_+^r\setminus\{0\}$. Moreover, $f_{F_J}$ is a form in ${\mathbb{R}}[x_1,\dots,x_r]$ with strictly positive coefficients. By the inductive hypothesis there exists $m\ge1$ such that all coefficients of $(f_{F_J})^mh_E$ are nonnegative, which shows that $(f,h)$ satisfies condition \eqref{cond: HandelmanConditionTwo}. Therefore, by Theorem \ref{thm: HandelmanCharacterization}, there exists $l\ge1$ such that $f^lh$ has nonnegative coefficients. Since $f^lg'$ has nonnegative coefficients as well, so does $f^lg=f^lg'+f^lh$. \end{proof} \section{Archimedean local-global principle for semirings} \label{sec:archlgpsemiring} Let $A$ be a (commutative unital) ring, and let $T\subset A$ be a subsemiring of $A$, i.e.\ a subset containing $0,\,1$ and closed under addition and multiplication. Recall that $T$ is said to be \emph{archimedean} if for any $f\in A$ there exists $n\in{\mathbb{Z}}$ with $n+f\in T$, i.e.\ if $T+{\mathbb{Z}}=A$. The real spectrum $\mathrm{Sper}(A)$ of $A$ (see e.g.\ \cite{bcr} 7.1, \cite{ma} 2.4) can be defined as the set of all pairs $\alpha=(\mathfrak{p},\le)$ where $\mathfrak{p}$ is a prime ideal of $A$ and $\le$ is an ordering of the residue field of~$\mathfrak{p}$. Given a semiring $T\subset A$, let $X_A(T)\subset\mathrm{Sper}(A)$ be the set of all $\alpha\in\mathrm{Sper}(A)$ such that $f\ge_\alpha0$ for every $f\in T$. We say that $f\in A$ satisfies $f\ge0$ (resp.\ $f>0$) on $X_A(T)$ if $f\ge_\alpha0$ (resp.\ $f>_\alpha0$) for every $\alpha\in X_A(T)$. We recall the archimedean Positivstellensatz in the following form. In a weaker form, this result was already proved by Krivine \cite{kr}. \begin{theorem}[{\cite{sw}} Corollary~2] \label{archpss} Let $A$ be a ring, and let $T\subset A$ be an archimedean semiring containing $\frac1n$ for some integer $n>1$. If $f\in A$ satisfies $f>0$ on $X_A(T)$, then $f\in T$. \end{theorem} We will need to apply Theorem \ref{archlgpsemiring} below, which is a local-global principle for archimedean semirings. A slightly weaker version of this result was already proved in \cite{bss} Theorem 6.5. We give a new proof which is considerably shorter than the one in \cite{bss}. \begin{theorem}\label{archlgpsemiring} Let $A$ be a ring, let $T\subset A$ be an archimedean semiring containing $\frac1n$ for some integer $n>1$, and let $f\in A$. Assume that for any maximal ideal $\mathfrak{m}$ of $A$ there exists an element $s\in A\setminus\mathfrak{m}$ such that $s\ge0$ on $X_A(T)$ and $sf\in T$. Then $f\in T$. \end{theorem} \begin{proof} By assumption, the set of all elements $s$ with $s\ge0$ on $X_A(T)$ and $sf\in T$ is contained in no maximal ideal of $A$; hence there exist an integer $k\ge1$ and elements $s_1,\dots,s_k\in A$ with $\langle s_1,\dots,s_k\rangle=\langle1\rangle$, and with $s_if\in T$ and $s_i\ge0$ on $X_A(T)$ for $i=1,\dots,k$. By \cite{sch:surf} Prop.\ 2.7 there exist $a_1,\dots,a_k\in A$ with $\sum_{i=1}^ka_is_i=1$ and with $a_i>0$ on $X_A(T)$ ($i=1,\dots,k$). Since $T$ is archimedean, the last condition implies $a_i\in T$, by the Positivstellensatz \ref{archpss}. It follows that $f=\sum_{i=1}^k a_i(s_if)\in T$.
\end{proof} \section{Second proof of Theorem \ref{thm: strictPositivstellensatz}} \label{sec: 2ndproofOfMainTheorem} As in the first proof, it suffices to prove Lemma \ref{weakerstatement}. So let $f\in{\mathbb{R}}[\mathsf{x}]={\mathbb{R}}[x_1,\dots,x_n]$ be a form of degree $\deg(f)=d\ge1$ with strictly positive coefficients, say $f=\sum_{|\alpha|=d}c_\alpha x^\alpha$. Let $S\subset{\mathbb{R}}[\mathsf{x}]$ be the semiring consisting of all polynomials with nonnegative coefficients. We shall work with the ring $$A\>=\>\Bigl\{\frac p{f^r}\colon r\ge0,\ p\in{\mathbb{R}}[\mathsf{x}]_{dr}\Bigr\}$$ of homogeneous fractions of degree zero, considered as a subring of the field ${\mathbb{R}}(\mathsf{x})$ of rational functions. Let $V\subset\P^{n-1}$ be the complement of the projective hypersurface $f=0$. Then $V$ is an affine algebraic variety over ${\mathbb{R}}$, with affine coordinate ring ${\mathbb{R}}[V]=A$. As a ring, $A$ is generated by ${\mathbb{R}}$ and by the fractions $y_\alpha=\frac{x^\alpha}f$ where $|\alpha|=d$. Let $T$ be the subsemiring of $A$ generated by ${\mathbb{R}}_+$ and by the $y_\alpha$ ($|\alpha|=d$). So the elements of $T$ are precisely the fractions $\frac p{f^r}$, where $r\ge0$ and $p\in S$ is homogeneous of degree~$dr$. The semiring $T$ is archimedean, as follows from the identity $\sum_{|\alpha|=d}c_\alpha y_\alpha=1$ and from $c_\alpha>0$ for all $\alpha$ (\cite{BerrWormann} Lemma~1). First we prove Lemma \ref{weakerstatement} under an extra condition. \begin{lemma} \label{lem: assumeDegreeCondition} Let $f,\,g\in{\mathbb{R}}[x_1,\dots,x_n]$ be forms where $f$ is nonconstant with strictly positive coefficients and $g$ is strictly positive on ${\mathbb{R}}^n_+\setminus\{0\}$. If $\deg(f)$ divides $\deg(g)$, there exists $m\ge1$ such that $f^mg$ has nonnegative coefficients. \end{lemma} \begin{proof} Suppose that $r$ is a positive integer such that $\deg(g)=r\deg(f)$. Then the fraction $\frac g{f^r}$ lies in $A$ and is strictly positive on $X_A(T)$, since $g$ is positive on ${\mathbb{R}}^n_+$. Hence the archimedean Positivstellensatz (Theorem \ref{archpss}) gives $\frac g{f^r}\in T$, and clearing denominators we get the desired conclusion. \end{proof} \begin{remark} When $\deg(f)=1$, Lemma \ref{lem: assumeDegreeCondition} is in fact P\'olya's Positivstellensatz \cite{Polya28}. In this case, our proof above becomes essentially the same as the proof of \cite{Polya28} given by Berr and W\"ormann in \cite{BerrWormann}. \end{remark} For the general case when $\deg(f)$ does not necessarily divide $\deg(g)$, we need a more refined argument, as follows. It is similar to the approach in \cite{Scheiderer12}. \begin{proof} [Proof of Lemma \ref{weakerstatement}] Fix integers $k\ge0$, $r\ge0$ such that $k+e=dr$, where $e:=\deg(g)$, and consider the fraction $\varphi:=\frac{x_1^kg}{f^r}\in A$. It suffices to show $\varphi\in T$. Indeed, this means that there are $s\ge0$ and $p\in S$, homogeneous of degree $ds$, such that $\varphi=\frac p{f^s}$. We may assume $s\ge r$; then $f^{s-r}x_1^kg$ has nonnegative coefficients. Clearly this implies that $f^{s-r}g$ has nonnegative coefficients. We prove $\varphi\in T$ by applying the local-global principle \ref{archlgpsemiring}. So let $\mathfrak{m}$ be a maximal ideal of $A$. Then $\mathfrak{m}$ corresponds to a closed point $z$ of the scheme $V$, and hence of $\P^{n-1}$. There exist real numbers $t_1,\dots,t_n>0$ such that the linear form $l=\sum_{i=1}^nt_ix_i$ does not vanish at $z$. Hence the element $\psi:=\frac{l^d}f$ of $A$ does not lie in $\mathfrak{m}$.
On the other hand, $\psi>0$ on $X_A(T)$, since $l$ and $f$ are strictly positive on ${\mathbb{R}}^n_+\setminus\{0\}$. By Lemma \ref{lem: assumeDegreeCondition}, applied to $l$ and $g$, there exists an integer $N\ge1$ for which $l^Ng\in S$. Choose an integer $m\ge1$ so large that $md\ge N$. Then $$\psi^m\varphi\>=\>\frac{l^{md}x_1^kg}{f^{m+r}}$$ lies in $T$. From Theorem \ref{archlgpsemiring} we therefore deduce $\varphi\in T$, as desired. \end{proof} We conclude with an example, as promised in the introduction. \begin{example} \label{eq: DV} For $n \ge 2$ and even $d = 2k \ge 4$, the form $$p_{\lambda} = (x_1+x_2)^{2k} - \lambda x_1^kx_2^k + \sum_{\stackrel{|w| = 2k}{w_i \neq 0 {\text{ for some } i \ge 3}}} x^w\>\in{\mathbb{R}}[x_1,\dots,x_n]$$ of degree $d$ satisfies the equivalent conditions of Theorem \ref{thm: strictPositivstellensatz} and has a negative coefficient (of the monomial $x_1^kx_2^k$) whenever $\binom{2k}{k}<\lambda < 2^{2k - 1}$. Indeed, it suffices to check the case when $n = 2$, in which case the verification proceeds similarly to a result of D'Angelo-Varolin \cite[Theorem 3]{DV04}. \end{example}
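For a concrete instance, take $n=2$, $k=2$ and $\lambda=7$, so that $p_7=x_1^4+4x_1^3x_2-x_1^2x_2^2+4x_1x_2^3+x_2^4$. A quick machine check (our script, not part of the argument; binary forms are multiplied by convolving their coefficient vectors) searches for the smallest $m$ with $p_7^m$ strictly positive:
\begin{verbatim}
import numpy as np

p = np.array([1, 4, -1, 4, 1])   # coefficients of p_7 (n = 2, k = 2)
q, m = p.copy(), 1
while (q <= 0).any():            # seek strictly positive coefficients
    q, m = np.convolve(q, p), m + 1   # q holds coefficients of p_7^m
print(m)
\end{verbatim}
For this choice of $\lambda$ the loop stops at $m=4$: one checks that $p_7^2$ still has vanishing coefficients and $p_7^3$ even a negative one, while all coefficients of $p_7^4$ are strictly positive.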
\section*{Appendix: Proof of Theorem \ref{thm: strictPositivstellensatz} from Catlin-D'Angelo's Theorem} In this appendix, we sketch how the results of Catlin-D'Angelo \cite{CD99} can be used to deduce Theorem \ref{thm: strictPositivstellensatz}. As in the first and second proofs of Theorem \ref{thm: strictPositivstellensatz}, it suffices to prove Lemma \ref{weakerstatement}. Let $\mathsf{z} = (z_1, \ldots, z_n)$. Denote by ${\mathbb{C}}[\mathsf{z}, \conj{\mathsf{z}}]$ the complex polynomial algebra in the indeterminates $z_1, \dots, z_n, \conj{z_1}, \dots, \conj{z_n}$. Equipped with conjugation, ${\mathbb{C}}[\mathsf{z}, \conj{\mathsf{z}}]$ has the structure of a commutative complex $\ast$-algebra. A polynomial $P \in {\mathbb{C}}[\mathsf{z}, \conj{\mathsf{z}}]$ is said to be {\emph{Hermitian}} if $P$ equals its conjugate $\conj{P}$. Equivalently, $P$ is Hermitian if and only if $P(z, \conj{z})$ is real for all $z \in {\mathbb{C}}^{n}$. A Hermitian polynomial $P\in {\mathbb{C}}[\mathsf{z}, \conj{\mathsf{z}}]$ is said to be \emph{positive on ${\mathbb{C}}^{n}\setminus\{0\}$} if $P(z, \conj{z}) > 0$ for all $z \in {\mathbb{C}}^{n}\setminus\{0\}$. The {\emph{bidegree}} of a monomial $z^\alpha \conj{z}^{\beta} = z_1^{\alpha_1} z_2^{\alpha_2} \cdots z_n^{\alpha_n} \conj{z}_1^{\beta_1} \conj{z}_2^{\beta_2} \cdots \conj{z}_n^{\beta_n} \in {\mathbb{C}}[\mathsf{z},\conj{\mathsf{z}}]$ is $(|\alpha|,|\beta|) = (\alpha_1+ \cdots + \alpha_n, \beta_1 + \cdots+ \beta_n)$. A {\emph{bihomogeneous polynomial}} is a complex linear combination of monomials of the same bidegree. If a bihomogeneous polynomial $P = \sum_{|\alpha| = d, |\beta| = e} a_{\alpha\beta} z^\alpha \conj{z}^\beta$ is Hermitian, then $d = e$, i.e. $P$ has bidegree $(d, d)$. From \cite[Definition 2]{CD99}, a Hermitian bihomogeneous polynomial $P$ is said to satisfy the \emph{strong global Cauchy-Schwarz} (in short, \emph{SGCS}) \emph{inequality} if $|P(z, \conj{w})|^2 < P(z, \conj{z}) P(w, \conj{w})$ whenever $z, w\in {\mathbb{C}}^{n}$ are linearly independent. The following result is a special case of \cite[Corollary of Theorem 1]{CD99} (where the matrix $M$ of bihomogeneous polynomials in \cite[Corollary of Theorem 1]{CD99} has size $1 \times 1$). \begin{theorem}[Catlin-D'Angelo {\cite[Corollary of Theorem 1]{CD99}}] \label{thm: CDThm} Let $R \in {\mathbb{C}}[\mathsf{z},\conj{\mathsf{z}}]$ be a nonconstant Hermitian bihomogeneous polynomial such that $R$ is positive on ${\mathbb{C}}^n\setminus\{0\}$, the domain $\{z \in {\mathbb{C}}^{n} : R(z, \conj{z}) < 1\}$ is strongly pseudoconvex, and $R$ satisfies the SGCS inequality. Then for each Hermitian bihomogeneous polynomial $Q \in {\mathbb{C}}[\mathsf{z},\conj{\mathsf{z}}]$ positive on ${\mathbb{C}}^n \setminus\{0\}$, there exist $l \ge 1$ and polynomials $h_1,\ldots, h_N \in {\mathbb{C}}[\mathsf{z}] \subset{\mathbb{C}}[\mathsf{z},\conj{\mathsf{z}}]$ such that $R^l Q = \sum_{k = 1}^N h_k \conj{h_k}$. \end{theorem} \begin{proof}[Proof sketch of Lemma \ref{weakerstatement} from Theorem \ref{thm: CDThm}] Let $f = \sum_{|\alpha|=d} c_\alpha x^\alpha \in {\mathbb{R}}[\mathsf{x}]$ be a form of degree $d$. Suppose that $f$ is nonconstant with strictly positive coefficients. One verifies that $R := \sum_{|\alpha|=d} c_\alpha z^\alpha \conj{z}^\alpha$ is a nonconstant Hermitian bihomogeneous polynomial that is positive on ${\mathbb{C}}^n\setminus\{0\}$, the domain $\{z \in {\mathbb{C}}^{n} : R(z, \conj{z}) < 1\}$ is strongly pseudoconvex, and that $R$ satisfies the SGCS inequality. Now suppose that $g = \sum_{|\beta|= e} b_\beta x^\beta \in {\mathbb{R}}[\mathsf{x}]$ is a form of degree $e$ which is strictly positive on ${\mathbb{R}}^n_+\setminus\{0\}$. This implies that $Q := \sum_{|\beta|=e} b_\beta z^\beta \conj{z}^\beta$ is a Hermitian bihomogeneous polynomial that is positive on ${\mathbb{C}}^n\setminus\{0\}$. Thus we may apply Theorem \ref{thm: CDThm} to obtain $l \ge 1$ such that $R^l Q = \sum_{k = 1}^N h_k \conj{h_k}$ for some polynomials $h_1,\ldots, h_N \in {\mathbb{C}}[\mathsf{z}] \subset{\mathbb{C}}[\mathsf{z},\conj{\mathsf{z}}]$. Hence $R^l Q = \sum_{|\alpha| = |\beta| = ld + e} a_{\alpha\beta} z^\alpha \conj{z}^\beta$ for some positive semidefinite Hermitian matrix $A = (a_{\alpha\beta})_{|\alpha| = |\beta| = ld + e}$. Writing $f^lg = \sum_{|\gamma| = ld + e} a_{\gamma}^\prime x^\gamma$, we see that $A$ is in fact the diagonal matrix $\mathrm{diag}(a_{\gamma}^\prime)_{|\gamma| = ld + e}$. Since $A$ is positive semidefinite, all the coefficients $a_{\gamma}^\prime$ of $f^l g$ are nonnegative. This completes the proof of Lemma \ref{weakerstatement}. \end{proof}
\section{The goal of the present paper.} \label{sec1} \noindent Let \(A\in\mathfrak{H}_n\) and \(B\in\mathfrak{M}_n\). In the present paper we consider the matrix function \(L(t)=e^{tA+B}\) of the complex variable \(t\). We show that this function is representable as the bilateral Laplace transform of some matrix valued measure \(M(d\lambda)\): \begin{equation} \label{LR} e^{tA+B}=\int{}e^{t\lambda}\,M(d\lambda),\quad t\in\mathbb{C}, \end{equation} the values of the measure \(M\) belong to the set \(\mathfrak{M}_n\). Our considerations are based on the functional calculus for the matrix~\(A\). We associate the following objects with the matrix \(A\):\\[0.5ex] 1.\,The spectrum \(\sigma(A)\) of the matrix \(A\), that is the set \(\{\lambda_1,\,\ldots\,,\lambda_l\}\) of all its eigenvalues taken without multiplicities, i.e. \(\lambda_p\not=\lambda_q,\,\forall\,p\not=q,1\leq p,q\leq l\). Since \(A\in\mathfrak{H}_n\), \(\sigma(A)\subset\mathbb{R}\). (The number \(l\) is the cardinality of the set \(\sigma(A)\), \(l\leq{}n.\) If the spectrum \(\sigma(A)\) is simple, then \(l=n\).) \\ 2. The set \(\{E_{\lambda_1},\,\ldots\,,E_{\lambda_l}\}\) of spectral projectors of the matrix \(A\): \begin{gather} \label{SP} AE_{\lambda_j}=\lambda_jE_{\lambda_j},\quad 1\leq j\leq l,\\ E_{\lambda_1}+\,\cdots\,+E_{\lambda_l}=I_n. \end{gather} If \(f(\lambda)\) is a function defined on the spectrum \(\sigma(A)\), then \begin{equation} \label{f(A)} f(A)=\sum\limits_{1\leq j\leq l}f(\lambda_j)E_{\lambda_j}. \end{equation} In particular, \begin{equation} \label{e(A)} e^{tA}=\sum\limits_{1\leq j\leq l}e^{t\lambda_j}E_{\lambda_j}. \end{equation} If the matrices \(A\) and \(B\) commute, that is if \begin{equation} \label{com} AB=BA, \end{equation} then \begin{equation} \label{eom} e^{tA+B}=e^{tA}\cdot e^{B}. \end{equation} From \eqref{e(A)} and \eqref{eom} it follows that under the condition \eqref{com} the equality \begin{equation} \label{ir} e^{tA+B}=\sum\limits_{1\leq j\leq l}e^{t\lambda_j}M(\{\lambda_j\}) \end{equation} holds, where \begin{equation} \label{am} M(\{\lambda_j\})=E_{\lambda_j}e^{B}E_{\lambda_j}. \end{equation} \emph{The equality \eqref{ir} can be interpreted as the representation of the matrix function \(e^{tA+B}\) in the form of the bilateral Laplace transform \eqref{LR} of a very special matrix valued measure \(M\). This measure \(M\) is discrete and is supported on the spectrum \(\sigma(A)\) of the matrix \(A\). The point \(\{\lambda_j\}\in\sigma(A)\) carries the ``atom'' \(M(\{\lambda_j\})\)}. \emph{The goal of the present paper is to obtain the representation of the matrix function \(e^{tA+B}\) in the form \eqref{LR} without assuming that the matrices \(A\) and \(B\) commute.} The representation of the form \eqref{LR} was suggested by the following \begin{theo} \textup{(H.Stahl)}. Let matrices \(A\) and \(B\) be given, \(A\in\mathfrak{H}_n\), \(B\in\mathfrak{H}_n\). Let \(\lambda_{\textup{min}}\) and \(\lambda_{\textup{max}}\) be the smallest and the largest eigenvalues of the matrix~\(A\). Then the function \(\tr{}e^{tA+B}\) is representable in the form \begin{equation}\label{StR} \tr{}e^{tA+B}=\!\!\int\limits_{[\lambda_{\textup{min}},\lambda_{\textup{max}}]}\!\! e^{t\lambda}\mu(d\lambda), \end{equation} where \(\mu(d\lambda)\) is a non-negative Borel measure. \end{theo} The first arXiv version of Stahl's Theorem appeared in \cite{S1}, the latest arXiv version in \cite{S2}, and the journal publication in \cite{S3}.
The proof of Stahl is based on ingenious considerations related to Riemann surfaces of algebraic functions. In \cite{E1},\cite{E2} a simplified version of Stahl's proof is presented. The proof presented in \cite{E1},\cite{E2} preserves all the main ideas of Stahl; the simplification lies in the technical details. In the paper \cite{K} a proof of Stahl's Theorem for the special case \(\textup{rank}\,A=1\) is presented. This proof is based on an elementary argument which does not require complex analysis. The main result of the present paper is Theorem \ref{MaT}. Stahl's Theorem does not follow from our Theorem \ref{MaT}. If \(A\in\mathfrak{H}_n\) and \(B\in\mathfrak{H}_n\), then the measure \(M(d\lambda)\) in \eqref{LR} is \(\mathfrak{H}_n\)-valued but is not necessarily non-negative. An appropriate example is given in Section \ref{NNN}. \section{The approximant \(\boldsymbol{L_{N}(t)}\).} \label{LAp} If the matrices \(A\) and \(B\) do not commute, then the equality \eqref{eom} breaks down. However the Lie product formula, which is a kind of surrogate for the formula \eqref{eom}, holds regardless of the condition \eqref{com}.\\ \noindent \textbf{Lie Product Formula.} \emph{Let \(X\in\mathfrak{M}_n\) and \(Y\in\mathfrak{M}_n\). Then} \begin{equation} \label{LPrFo} e^{X+Y}=\lim\limits_{N\to\infty}\big(e^{X/N}e^{Y/N}\big)^{N}. \end{equation} Versions of the proof of the Lie Product Formula\footnote{ It should be mentioned that the Lie Product Formula can be extended to certain unbounded linear operators \(X\) and \(Y\) acting in a Hilbert space. The first such extension was obtained by Trotter, \cite{Tr}. A version of the Trotter-Lie product formula was obtained by T.Kato, \cite{Ka1}. We refer also to the book of B.\,Simon, \cite{Si}. See Theorems 1.1 and 1.2 there. In \cite{Si}, the Lie-Trotter formula is used for the path integral formulation of quantum mechanics. } can be found in \cite{H}, \cite[Section 6.5]{HJ}, \cite[Theorem 2.10]{Ha}, \cite[Section 2.12, Corollary 2.12.5]{Va}.\\[2.0ex] \begin{lem} Let \(A\in\mathfrak{M}_n,\,B\in\mathfrak{M}_n\) and \(t\in\mathbb{C}\). Then \begin{equation} \label{LPF} e^{tA+B}=\lim\limits_{N\to\infty}\Big(e^{tA/N}e^{B/N}\Big)^{N}. \end{equation} \end{lem} \begin{proof} The formula \eqref{LPF} is a special case of the formula \eqref{LPrFo} corresponding to the choice \(X=tA,\,Y=B\). \end{proof} \begin{defn} The expression \begin{equation} \label{LA} L_N(t)=\Big(e^{tA/N}e^{B/N}\Big)^{N} \end{equation} which appears on the right hand side of \eqref{LPF} is said to be the \emph{N-approximant for the matrix function \(L(t)=e^{tA+B}\)}. \end{defn} Assuming that \(A\in\mathfrak{H}_n\), we express the matrix function \(e^{tA/N}\) in terms of the spectrum \(\sigma(A)\) of the matrix \(A\) and its spectral projectors: \begin{equation} \label{es} e^{tA/N}=\sum\limits_{1\leq j\leq l}e^{t\frac{\lambda_j}{N}}E_{\lambda_j}. \end{equation} Substituting \eqref{es} into \eqref{LA}, we represent the approximant \(L_{N}(t)\) as a multiple sum which contains \(l^N\) summands: \begin{equation} \label{EEL} L_{N}(t)=\sum\limits_{k_1,\ldots,k_N}\textup{exp}\, \Big\{t\,\tfrac{\lambda_{k_1}+\,\ldots\,+\lambda_{k_N}}{N}\Big\}M_{k_1,\,\ldots,\,k_N}. \end{equation} In \eqref{EEL}, the summation is extended over all integers \(k_1,\ldots,k_N\) such that \(1\leq~k_p\leq~l\), \(p=1,2,\ldots,N.\) The matrix \(M_{k_1,\,\ldots,\,k_N}\) is the product \begin{equation} \label{Pr} M_{k_1,\,\ldots,\,k_N}=E_{\lambda_{k_1}}e^{B/N}E_{\lambda_{k_2}}e^{B/N}\,\ldots\, E_{\lambda_{k_N}}e^{B/N}.
\end{equation} Let us consider the numbers \(\tfrac{\lambda_{k_1}+\,\ldots\,+\lambda_{k_N}}{N}\) which appear in the exponents of the exponentials in \eqref{EEL}. \begin{lem} Let \(k_1,\ldots,k_N\) be integers satisfying the conditions \(1\leq k_p\leq l\), \(p=1,2,\ldots,N\). Then \begin{equation} \label{rlc} \frac{\lambda_{k_1}+\,\ldots\,+\lambda_{k_N}}{N}=\frac{n_1}{N}\lambda_1+ \frac{n_2}{N}\lambda_2+\,\cdots\,+\frac{n_l}{N}\lambda_l, \end{equation} where \begin{equation} \label{dnj} n_j(k_1,\ldots,k_N)=\#\{p:\,1\leq p\leq N,\,k_p=j\},\quad 1\leq j\leq l. \end{equation} The numbers \begin{equation} \xi_j=\frac{n_j}{N},\quad j=1,2,\,\ldots\,,l, \end{equation} where the \(n_j\) are defined by \eqref{dnj}, satisfy the conditions \begin{equation} \label{CC} \xi_j\geq0,\,j=1,2,\,\ldots\,,l,\quad\sum\limits_{1\leq{}j\leq{}l}\xi_j=1. \end{equation} \end{lem} \begin{proof}The lemma is evident. \end{proof} The linear combination \(\xi_1\lambda_1+\xi_2\lambda_2+\,\cdots\,+\xi_l\lambda_l\) which appears on the right hand side of \eqref{rlc} is a \emph{convex} linear combination of the numbers \(\lambda_1,\lambda_2,\,\ldots\,,\lambda_l\). However, it is a very special convex linear combination: its coefficients \(\xi_1,\,\ldots\,,\xi_l\) are numbers of the form \(\xi_j=\frac{n_j}{N}\), where the \(n_j\) are non-negative integers. \begin{defn}\label{dNch} Let \(\lambda_1,\,\ldots\,,\lambda_l\) be real numbers, \(N\) be a positive integer. \emph{The \(N\)-convex hull of the set} \(\{\lambda_1,\,\ldots\,,\lambda_l\}\) is the set of all convex linear combinations \(\xi_1\lambda_1+\xi_2\lambda_2+\,\cdots\,+\xi_l\lambda_l\) with coefficients of the form \(\xi_j=\frac{n_j}{N}\), where the \(n_j\) are non-negative integers. (Since the considered linear combinations are convex, the equality \(n_1+n_2+\cdots+n_l=N\) must hold.) \end{defn} In what follows, the numbers \(\lambda_1,\,\ldots\,,\lambda_l\) form the spectrum \(\sigma(A)\) of the matrix \(A\). The \(N\)-convex hull of the spectrum \(\sigma(A)\) is denoted by \(ch_N(\sigma(A))\). The convex hull of the spectrum \(\sigma(A)\) is denoted by \(ch(\sigma(A))\). \begin{rem} \label{con} It is clear that the convex hull \(ch(\sigma(A))\) is the closed interval \([\lambda_{\textup{min}},\lambda_{\textup{max}}]\), where \(\lambda_{\textup{min}}=\min\limits_{1\leq{}j\leq{}l}\lambda_j\), \(\lambda_{\textup{max}}=\max\limits_{1\leq{}j\leq{}l}\lambda_j\). It is also clear that \begin{equation} \label{inc} ch_N(\sigma(A))\subset {}ch(\sigma(A)),\quad \forall\,N. \end{equation} The union \(\bigcup\limits_{N}ch_N(\sigma(A))\) of the sets \( ch_N(\sigma(A))\) is dense in the set \(ch(\sigma(A))\). \end{rem} The numbers \(\tfrac{\lambda_{k_1}+\,\ldots\,+\lambda_{k_N}}{N}\) which appear in the exponents of the exponentials in \eqref{EEL} belong to the set \(ch_N(\sigma(A))\). Collecting like terms, we rewrite \eqref{EEL} in the form \begin{equation} \label{EEa} L_N(t)=\sum\limits_{\lambda\in{}ch_N(\sigma(A))}e^{t\lambda}\,M_N(\{\lambda\}), \end{equation} where \begin{equation} \label{EEb} M_N(\{\lambda\})=\sum\limits_{k_1,\ldots,k_N}M_{k_1,\ldots,k_N}, \end{equation} the matrices \(M_{k_1,\ldots,k_N}\) are defined by \eqref{Pr}. For each \(\lambda\in{}ch_N(\sigma(A))\), the sum in \eqref{EEb} is extended over all those \(k_1,\ldots,k_N\) for which \(\tfrac{\lambda_{k_1}+\,\ldots\,+\lambda_{k_N}}{N}=\lambda\).
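For small \(N\) and small matrices, the atoms \(M_N(\{\lambda\})\) can be computed directly from the definitions \eqref{EEa}-\eqref{EEb}; the following sketch (our illustration, with hypothetical names; eigenvalues are grouped by rounding, which suffices for a demonstration) also allows one to check \eqref{EEa} and the convergence in \eqref{LPF} numerically:
\begin{verbatim}
import itertools
import numpy as np
from scipy.linalg import expm

def approximant_atoms(A, B, N, tol=1e-9):
    """Atoms M_N({lambda}) of the discrete measure in (EEa)-(EEb),
    built from the spectral projectors of the Hermitian matrix A."""
    evals, U = np.linalg.eigh(A)
    lams = np.unique(np.round(evals, 9))
    # spectral projectors E_j of A, one per distinct eigenvalue
    E = [(U[:, np.abs(evals - l) < tol]
          @ U[:, np.abs(evals - l) < tol].conj().T) for l in lams]
    eB = expm(B / N)
    atoms = {}
    for ks in itertools.product(range(len(lams)), repeat=N):
        lam = round(sum(lams[k] for k in ks) / N, 9)  # in ch_N(sigma(A))
        M = np.eye(len(A), dtype=complex)
        for k in ks:            # E_{k_1} e^{B/N} ... E_{k_N} e^{B/N}
            M = M @ E[k] @ eB
        atoms[lam] = atoms.get(lam, 0) + M
    return atoms

# check: sum over lambda of exp(t*lambda) * atoms[lambda] reproduces
# (expm(t*A/N) @ expm(B/N))**N and tends to expm(t*A + B) as N grows.
\end{verbatim}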
We interpret the equality \eqref{EEa} as the representation of the approximant \(L_N(t)\) in the form of the bilateral Laplace transform of a matrix valued measure \(M_N(d\lambda)\): \begin{equation} \label{LRN} L_N(t)=\int\limits_{\lambda\in{}ch_N(\sigma(A))} e^{t\lambda}\,M_N(d\lambda). \end{equation} The measure \(M_N(d\lambda)\) is discrete and is supported on the finite set \(ch_N(\sigma(A))\). The point \(\{\lambda\}\in{}ch_N(\sigma(A))\) carries the ``atom'' \(M_N(\{\lambda\})\). According to \eqref{LPF}, \begin{equation} \label{LRI} e^{tA+B}=\lim\limits_{N\to\infty}\int e^{t\lambda}\,M_N(d\lambda),\quad \forall t\in\mathbb{C}. \end{equation} \section{The norm in the set \(\mathfrak{M}_n\).} \label{sec3} We have to prove that the sequence \(\big\{M_{N}(d\lambda)\big\}_{1\leq{}N<\infty}\) of matrix measures is weakly convergent. To prove this, we have to bound the total variations of these measures from above. To express such a bound, we have to provide the set \(\mathfrak{M}_n\) with some norm. We provide the set \(\mathfrak{M}_{n}\) with the usual \emph{operator norm}. Let \(S=(s_{pq})_1^n\in\mathfrak{M}_{n}\). The norm \(\|S\|\) is defined as follows: \begin{equation} \label{DON} \|S\|\stackrel{\textup{\tiny{def}}}{=} \max\limits_{\xi,\eta}\frac{\Big|\sum\limits_{1\leq p,q\leq n}s_{pq}\xi_q\eta_p\Big|} {\sqrt{|\xi_1|^2+\,\cdots\,+|\xi_n|^2}\sqrt{|\eta_1|^2+\,\cdots\,+|\eta_n|^2}}, \end{equation} where \(\max\) is taken over all complex numbers \(\xi_1,\,\ldots\,,\xi_n\) and \(\eta_1,\,\ldots\,,\eta_n\), each of the two \(n\)-tuples being nonzero. \begin{lem} \label{maj1} Let \(S=(s_{pq})_1^n\in\mathfrak{M}_{n}\). Then the inequality \begin{equation} \label{In1} \|S\|\leq\sum\limits_{1\leq{}p,q\leq n}|s_{pq}| \end{equation} holds. \end{lem} \begin{proof} The inequality \eqref{In1} is a direct consequence of the inequality \begin{equation*} \Big|\sum\limits_{1\leq p,q\leq n}s_{pq}\xi_q\eta_p\Big|\leq \Big(\sum\limits_{1\leq{}p,q\leq n}|s_{pq}|\Big)\cdot \max\limits_{1\leq{}p,q\leq{}n}\big|\xi_q\eta_p\big| \end{equation*} and of the inequality \begin{equation*} \max\limits_{1\leq{}p,q\leq{}n}\big|\xi_q\eta_p\big|\leq{}\sqrt{|\xi_1|^2+\,\cdots\,+|\xi_n|^2} \sqrt{|\eta_1|^2+\,\cdots\,+|\eta_n|^2}. \qedhere \end{equation*} \end{proof} \begin{lem} \label{maj2} Let \(S=(s_{pq})_1^n\in\mathfrak{M}_{n}^{+}\). Then the inequality \begin{equation} \label{In2} \sum\limits_{1\leq{}p,q\leq n}s_{pq}\leq{}n\cdot\|S\| \end{equation} holds. \end{lem} \begin{proof} The ratio \(\dfrac{\sum\limits_{1\leq{}p,q\leq n}s_{pq}}{n}\) can be considered as the ratio \begin{equation*} \frac{\sum\limits_{1\leq p,q\leq n}s_{pq}\xi_q\eta_p} {\sqrt{|\xi_1|^2+\,\cdots\,+|\xi_n|^2}\sqrt{|\eta_1|^2+\,\cdots\,+|\eta_n|^2}} \end{equation*} with \(\xi_1=1,\,\ldots\,,\xi_n=1\) and \(\eta_1=1,\,\ldots\,,\eta_n=1\). \end{proof} The inequality expressed by the following Lemma can be considered as \emph{an inverse triangle inequality.} It holds for matrices with \emph{non-negative} entries. The total number of summands can be arbitrarily large. \begin{lem} \label{ITI} Let \(S_r\in\mathfrak{M}_n^{+},\,r=1,2,\,\ldots\,,m\). Then the inequality \begin{equation} \label{iti} \sum\limits_{1\leq r\leq{}m}\|S_r\|\leq{}n\cdot\|\!\!\!\sum\limits_{1\leq r\leq{}m}S_r\| \end{equation} holds. \end{lem} \begin{proof} Lemma \ref{ITI} is a direct consequence of Lemmas \ref{maj1} and \ref{maj2}. \end{proof}
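The inequality \eqref{iti} is easy to test numerically; a minimal sketch (ours, for illustration) with random matrices from \(\mathfrak{M}_3^{+}\):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
S = rng.random((5, 3, 3))        # five matrices with entries in [0, 1)
lhs = sum(np.linalg.norm(Sr, 2) for Sr in S)  # sum of operator norms
rhs = 3 * np.linalg.norm(S.sum(axis=0), 2)    # n * norm of the sum
assert lhs <= rhs                # inequality (iti) with n = 3, m = 5
\end{verbatim}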
The following technical result is used later in Section \ref{BTV}. \begin{lem} \label{TeL} Let the following objects be given: \begin{enumerate} \item Matrices \(F_j\in\mathfrak{M}_n^{+},\, j=1,\,\ldots\,l,\) which satisfy the condition \begin{equation} \label{Mcc} \sum\limits_{1\leq j\leq l}F_j=I_n; \end{equation} \item A matrix \(R\in\mathfrak{M}_n^{+}\) and a number \(N\in\mathbb{N}\). \end{enumerate} Then the inequality \begin{equation} \label{FMI} \sum\limits_{k_1,k_2,\ldots,k_N} \big\|F_{k_1}e^{R/N}F_{k_2}e^{R/N}\,\ldots\,F_{k_N}e^{R/N}\big\|\leq n\cdot\big\|e^{R}\big\| \end{equation} holds. The summation in \eqref{FMI} is extended over all integers \(k_1,k_2,\ldots,k_N\) which satisfy the conditions\footnote{ So the sum in \eqref{FMI} contains \(l^N\) summands. } \(1\leq{}k_1\leq{}l,\,1\leq{}k_2\leq~l, \, \ldots\,,1\leq{}k_N\leq{}l.\) \end{lem} \begin{proof} According to Lemma \ref{ITI}, the inequality \begin{multline*} \sum\limits_{k_1,k_2,\ldots,k_N} \big\|F_{k_1}e^{R/N}F_{k_2}e^{R/N}\,\ldots\,F_{k_N}e^{R/N}\big\|\leq\\ n\cdot\Big\|\sum\limits_{k_1,k_2,\ldots,k_N} F_{k_1}e^{R/N}F_{k_2}e^{R/N}\,\ldots\,F_{k_N}e^{R/N}\,\Big\| \end{multline*} holds. From the condition \eqref{Mcc} it follows that \begin{equation*} \sum\limits_{k_1,k_2,\ldots,k_N} F_{k_1}e^{R/N}F_{k_2}e^{R/N}\,\ldots\,F_{k_N}e^{R/N}= e^{R/N}\cdot{}e^{R/N}\cdot\,\cdots\,\cdot{}e^{R/N}=e^{R}. \end{equation*} \end{proof} \begin{rem} If \(F_j\in\mathfrak{M}_n^{+},\,\forall\, j=1,\,\ldots\,l,\) and the equality \eqref{Mcc} holds, then \(F_j\in\mathfrak{D}_n,\,\forall\, j=1,\,\ldots\,l\). The diagonal entries of each of the matrices \(F_j\) belong to the interval \([0,1]\). \end{rem} \section{An expression for the total variation of the measure \(\boldsymbol{M_N(d\lambda)}\).} Since the measure \(M_N(d\lambda)\) is discrete and its support is the finite set \(ch_N(\sigma(A))\), the total variation of this measure is expressed by the sum \[\sum\limits_{\lambda\in{}ch_N(\sigma(A))}\|{}M_N(\{\lambda\})\|. \] To prove that the family of measures \(\big\{M_{N}(d\lambda)\big\}_{1\leq{}N<\infty}\) is weakly convergent, we have to obtain an estimate of the form \begin{equation} \label{EV} \sum\limits_{\lambda\in{}ch_N(\sigma(A))}\|{}M_N(\{\lambda\})\|\leq C, \quad \forall\, N, \end{equation} where \(C<\infty\) does not depend on \(N\). \begin{lem} \label{reg} Let \(M_N(\{\lambda\})\), \(\lambda\in{}ch_N(\sigma(A))\), be the matrices which appear in the representation \eqref{EEa}-\eqref{EEb} of the \(N\)-approximant \(L_{N}(t)\). Then the inequality \begin{equation} \label{IE} \sum\limits_{\lambda\in{}ch_N(\sigma(A))}\|{}M_N(\{\lambda\})\|\leq{} \sum\limits_{k_1,\ldots,k_N}\| M_{k_1,\ldots,k_N} \|{}, \end{equation} holds, where \(M_{k_1,\ldots,k_N}\) are the same as in \eqref{Pr}. On the right hand side of \eqref{IE}, the summation is extended over all integers \(k_1,\ldots,k_N\) satisfying the conditions \(1\leq{}k_1\leq{}l,\,\ldots\,,1\leq{}k_N\leq{}l\). \end{lem} \begin{proof} Applying the triangle inequality to \eqref{EEb}, we obtain the inequality \begin{equation} \label{IEe} \|{}M_N(\{\lambda\})\|\leq{} \sum\limits_{k_1,\ldots,k_N}\| M_{k_1,\ldots,k_N} \|,\qquad \forall\,\lambda\in{}ch_N(\sigma(A)). \end{equation} On the right hand side of \eqref{IEe}, the summation is extended over all those integers \(k_1,\,\ldots\,,k_N\) for which \(\tfrac{\lambda_{k_1}+\,\ldots\,+\lambda_{k_N}}{N}=\lambda\).
Adding the inequalities \eqref{IEe} over all \(\lambda\in{}ch_N(\sigma(A))\), we come to the inequality \begin{equation} \label{IEf} \sum\limits_{\lambda\in{}ch_N(\sigma(A))}\|{}M_N(\{\lambda\})\|\leq{} \sum\limits_{\lambda\in{}ch_N(\sigma(A))}\Big(\sum\limits_{k_1,\ldots,k_N}\| M_{k_1,\ldots,k_N} \|\Big). \end{equation} Regrouping summands on the right hand side of \eqref{IEf}, we come to the inequality \eqref{IE}. \end{proof} \section{The subordination relation.} \begin{defn} Let \(M=(m_{pq})_1^n\in\mathfrak{M}_n\), \(S=(s_{pq})_1^n\in\mathfrak{M}_n^{+}\). We say that \emph{the matrix \(M\) is subordinated to the matrix \(S\)} and use \emph{the notation \(M\preceq{}S\) for the subordination relation} if the inequalities \begin{equation} \label{So} |m_{pq}|\leq{}s_{pq},\quad 1\leq{}p,q\leq n, \end{equation} hold for the entries \(m_{pq}\), \(s_{pq}\) of the matrices \(M,S\), respectively. \end{defn} \begin{lem} \label{NEs} We assume that \(M\in\mathfrak{M}_n, S\in\mathfrak{M}_n^{+}\), and \(M\preceq{}S\). Then \begin{equation} \label{InE} \|M\|\leq{}\|S\|. \end{equation} \end{lem} \begin{proof} Let \(m_{pq},s_{pq}\) be the entries of the matrices \(M\) and \(S\), and \(\xi_1,\,\ldots\,,\xi_n\), \(\eta_1,\,\ldots\,,\eta_n\) be arbitrary complex numbers. Then the inequality \begin{multline*} \big|\sum\limits_{1\leq p,q\leq n}m_{pq}\xi_p\eta_q\big|\leq \sum\limits_{1\leq p,q\leq n}|m_{pq}||\xi_p||\eta_q|\leq \sum\limits_{1\leq p,q\leq n}s_{pq}|\xi_p||\eta_q|\\ \leq{}\|S\|\sqrt{|\xi_1|^2+\,\cdots\,+|\xi_n|^2} \sqrt{|\eta_1|^2+\,\cdots\,+|\eta_n|^2} \end{multline*} holds. \end{proof} \begin{defn} \label{USN} Given a matrix \(B\in\mathfrak{M}_n\), we associate the matrix \(R(B)\) with \(B\). By definition, \begin{equation} \label{SoR} R(B)=(r_{pq})_{1}^{n},\qquad r_{pq}=\|B\|,\quad 1\leq{}p,q\leq n. \end{equation} \end{defn} \begin{lem} \label{Sur} The matrix \(B\) is subordinated to the matrix \(R(B)\). \end{lem} \begin{proof} The entry \(b_{pq}\) of the matrix \(B\) satisfies the inequality \(|b_{pq}|\leq\|B\|=r_{pq},\,1\leq p,q\leq{}n\). \end{proof} \begin{lem}{\ } \label{Au} \begin{enumerate} \item The matrix \(R(B)\) is a Hermitian matrix of rank one. \item The norms of the matrix \(R(B)\) and its exponential \(e^{R(B)}\) are \begin{equation} \label{NeB} \|R(B)\|=n\|B\|,\quad \|e^{R(B)}\|=e^{n\|B\|}. \end{equation} \end{enumerate} \end{lem} \begin{proof} The only non-zero eigenvalue of the matrix \(R(B)\) is the number \(n\|B\|\). \end{proof} \begin{lem} \label{sol} Let \(\Psi_k\in\mathfrak{M}_n,\,\Phi_k\in\mathfrak{M}_n^{+},\,k=1,\,\ldots\,,m\). Assume that for each \(k=1,\,\ldots\,,m\), the matrix \(\Psi_k\) is subordinated to the matrix \(\Phi_k\): \[\Psi_{k}\preceq\Phi_k,\quad k=1,\,\ldots\,,m.\] Then the subordination relations \begin{align*} \Psi_1+\Psi_2+\,\cdots\,+\Psi_m &\preceq\Phi_1+\Phi_2+\,\cdots\,+\Phi_m,\\ \Psi_1\cdot\Psi_2\cdot\,\cdots\,\cdot\Psi_m &\preceq\Phi_1\cdot\Phi_2\cdot\,\cdots\,\cdot\Phi_m \end{align*} hold for the sum and the product of these matrices. \begin{proof} The assertion of the Lemma is a direct consequence of the definition of matrix addition and multiplication and of elementary properties of numerical inequalities. \end{proof} \end{lem} \begin{lem} \label{SuE} Let \(X\in\mathfrak{M}_n^{+}\). Then \(e^{X}\in\mathfrak{M}_n^{+}\). If \(Y\in\mathfrak{M}_n\), \(Y\preceq{}X\), then \(e^{Y}\preceq{}e^{X}\).
\end{lem} \begin{proof} According to Lemma \ref{sol}, the subordination relations \(\frac{1}{m!}Y^{m}\preceq{}\frac{1}{m!}X^{m}\) hold for every \(m=0,1,2,\,\ldots\). Using Lemma \ref{sol} once more, we conclude that \(\sum\limits_{0\leq m<\infty}\frac{1}{m!}Y^{m}\preceq\sum\limits_{0\leq m<\infty}\frac{1}{m!}X^{m}\). \end{proof} \section{A bound for the total variation of the measure \(\boldsymbol{M_N(d\lambda)}\).} \label{BTV} \begin{lem} \label{CruE} Let \(A\in\mathfrak{H}_n\), \(B\in\mathfrak{M}_n\), \(N\in\mathbb{N}\), and let the matrices \(M_{k_1,\,\ldots\,,k_N}\) be defined according to \eqref{Pr}. Then the inequality \begin{equation} \label{crue} \sum\limits_{1\leq{}k_1,\ldots,k_N\leq{}l}\| M_{k_1,\ldots,k_N} \|\leq{}ne^{n\|B\|} \end{equation} holds. \end{lem} \begin{proof} {\ }\\ \textbf{1.} We impose the additional condition: the matrix \(A\) is diagonal. So \begin{equation} \label{AC} A\in\mathfrak{H}_n\cap\mathfrak{D}_n. \end{equation} \emph{Then all spectral projectors \(E_{\lambda_j}\) are diagonal matrices}. Hence \(E_{\lambda_j}\in\mathfrak{M}_n^{+}\); in particular \begin{equation} \label{SpP} E_{\lambda_j}\preceq{}E_{\lambda_j},\quad j=1,\,\ldots\,,l. \end{equation} Let the matrix \(R(B)\) be defined according to Definition \ref{USN}. By Lemma \ref{Sur}, \(B\preceq{}R(B)\). By Lemma \ref{SuE}, \begin{equation} \label{Sue} e^{B/N}\preceq{}e^{R(B)/N}. \end{equation} By Lemma \ref{sol}, the subordination relation \begin{equation*} M_{k_1\ldots{}k_N}\preceq{} E_{\lambda_{k_1}}e^{R(B)/N}E_{\lambda_{k_2}}e^{R(B)/N}\,\ldots\, E_{\lambda_{k_N}}e^{R(B)/N} \end{equation*} is satisfied for every \(k_1,\,\ldots\,,k_N\). By Lemma \ref{NEs}, the inequality \begin{equation*} \|M_{k_1\ldots{}k_N}\|\leq\|E_{\lambda_{k_1}}e^{R(B)/N}E_{\lambda_{k_2}}e^{R(B)/N}\,\ldots\, E_{\lambda_{k_N}}e^{R(B)/N} \| \end{equation*} holds. Adding the above inequalities, we obtain the inequality \begin{multline} \label{rhs} \sum\limits_{k_1,\ldots,k_N}\| M_{k_1,\ldots,k_N} \|\leq\\ \sum\limits_{k_1,\ldots,k_N}\big\|E_{\lambda_{k_1}}e^{R(B)/N}E_{\lambda_{k_2}}e^{R(B)/N}\,\ldots\, E_{\lambda_{k_N}}e^{R(B)/N}\big \|. \end{multline} To estimate the sum on the right hand side of \eqref{rhs}, we apply Lemma \ref{TeL} with \(F_j=E_{\lambda_j}, R=R(B)\) and obtain the inequality \begin{equation} \label{subs} \sum\limits_{k_1,\ldots,k_N}\big\|E_{\lambda_{k_1}}e^{R(B)/N}E_{\lambda_{k_2}}e^{R(B)/N}\,\ldots\, E_{\lambda_{k_N}}e^{R(B)/N}\big \|\leq{}n\|e^{R(B)}\|. \end{equation} Now we refer to Lemma \ref{Au}. The inequality \eqref{crue} is a consequence of \eqref{rhs}, \eqref{subs} and \eqref{NeB}.\\ \textbf{2}. The inequality \eqref{crue} has so far been proved under the extra assumption that the matrix \(A\) is diagonal. Now we remove this assumption. Let \(A\) be an arbitrary matrix from \(\mathfrak{H}_n\). There exists a unitary matrix \(U\) such that the matrix \begin{equation} \label{diA} A^d=UAU^{\ast} \end{equation} is diagonal. Of course \(A^d\in\mathfrak{H}_n\). Then we define the matrices \begin{equation} \label{diM} B^d=UBU^{\ast}, \quad E_{\lambda_{j}}^d=UE_{\lambda_{j}}U^\ast,\quad M_{k_1,\ldots,k_N}^d=U M_{k_1,\ldots,k_N}U^{\ast}. \end{equation} The matrices \(E_{\lambda_{j}}^d\) are the spectral projectors of the matrix \(A^d\).
The matrices \(M_{k_1,\ldots,k_N}^d\) can be represented in the form \begin{equation} \label{MdR} M_{k_1,\,\ldots,\,k_N}^d=E_{\lambda_{k_1}}^de^{B^d/N}E_{\lambda_{k_2}}^de^{B^d/N}\,\ldots\, E_{\lambda_{k_N}}^de^{B^d/N}. \end{equation} Since the matrix \(A^d\) is diagonal, the inequality \begin{equation} \label{crued} \sum\limits_{k_1,\ldots,k_N}\| M_{k_1,\ldots,k_N}^d \|\leq{}ne^{n\|B^d\|} \end{equation} holds. It remains to note that \begin{equation*} \|M_{k_1,\,\ldots,\,k_N}^d\|=\|M_{k_1,\,\ldots,\,k_N}\|,\quad \|B^d\|=\|B\|.\qedhere \end{equation*} \end{proof} \begin{lem} \label{BVM} Given the matrices \(A\in\mathfrak{H}_n,\,B\in\mathfrak{M}_n\), let \(M_N(d\lambda)\) be the matrix valued measure which appears in the representation \eqref{LRN} of the \(N\)-approximant \(L_N(t)\) of the matrix function \(e^{tA+B}\). The total variation of the measure \(M_N(d\lambda)\) admits the bound \begin{equation}\label{AIn} \sum\limits_{\lambda\in{}ch_N(\sigma(A))}\|{}M_N(\{\lambda\})\|\leq{}ne^{n\|B\|}. \end{equation} \end{lem} \begin{proof} We combine the inequalities \eqref{IE} and \eqref{crue}. \end{proof} \section{The representation of the matrix function \(\boldsymbol{e^{tA+B}}\) in the form of the Laplace transform of a matrix valued measure.} \begin{thm} \label{MaT} Let matrices \(A\) and \(B\) be given, \(A\in\mathfrak{H}_n\), \(B\in\mathfrak{M}_n\). Let \(\lambda_{\textup{min}}\) and \(\lambda_{\textup{max}}\) be the smallest and the largest eigenvalues of the matrix \(A\), \(\mathfrak{B}\) be the class of Borel sets of the closed interval \([\lambda_{\textup{min}},\lambda_{\textup{max}}]\). Then there exists a set function \(M:\mathfrak{B}\to\mathfrak{M}_n\) such that: \begin{enumerate} \item \(M\) is countably additive and regular on \(\mathfrak{B}\), i.e. \(M\) is a regular Borel \(\mathfrak{M}_n\)-valued measure on \([\lambda_{\textup{min}},\lambda_{\textup{max}}]\). \item The equality \begin{equation} \label{MaEq} e^{tA+B}=\int\limits_{[\lambda_{\textup{min}},\lambda_{\textup{max}}]} e^{t\lambda}M(d\lambda), \quad \forall\,t\in\mathbb{C}, \end{equation} holds. \item If \(B\in\mathfrak{H}_n\), then the measure \(M\) is \(\mathfrak{H}_n\)-valued, i.e. \(M:\mathfrak{B}\to\mathfrak{H}_n\). \end{enumerate} \end{thm} \begin{proof} Let us consider the Banach space \(C([\lambda_{\textup{min}},\lambda_{\textup{max}}])\) of \(\mathbb{C}\)-valued continuous functions on the interval \([\lambda_{\textup{min}},\lambda_{\textup{max}}]\) equipped with the standard norm \begin{equation*} \|x(\lambda)\|=\max\limits_{\lambda\in[\lambda_{\textup{min}},\lambda_{\textup{max}}]} |x(\lambda)|, \quad x(\lambda)\in{}C([\lambda_{\textup{min}},\lambda_{\textup{max}}]). \end{equation*} The support of each of the measures \(M_N(d\lambda)\) is contained in the closed interval \([\lambda_{\textup{min}},\lambda_{\textup{max}}]\). (See Remark \ref{con}.) The total variation of the measures \(M_N(d\lambda)\) is bounded from above by some finite value which does not depend on \(N\). (See Lemma \ref{BVM}.) According to \eqref{LRI}, for each \(t\in\mathbb{R}\) there exists the limit \begin{equation*} \lim\limits_{N\to\infty} \int\limits_{[\lambda_{\textup{min}},\lambda_{\textup{max}}]}e^{t\lambda}M_N(d\lambda). \end{equation*} The system of functions \(\{e^{t\lambda}\}_{t\in\mathbb{R}}\) is complete in the space \(C([\lambda_{\textup{min}},\lambda_{\textup{max}}])\).
Therefore for each \(x(\lambda)\in{}C([\lambda_{\textup{min}},\lambda_{\textup{max}}])\) the limit \begin{equation} \label{FE} J(x)=\lim\limits_{N\to\infty} \int\limits_{[\lambda_{\textup{min}},\lambda_{\textup{max}}]}x(\lambda)\,M_N(d\lambda) \end{equation} exists. The mapping \(J:C([\lambda_{\textup{min}},\lambda_{\textup{max}}])\to\mathfrak{M}_n\) is a continuous linear mapping. Let \(M(d\lambda)\) be the weak limit of the sequence of measures \(M_N(d\lambda)\). The \(\mathfrak{M}_n\)-valued measure \(M(d\lambda)\) gives the integral representation of the mapping~\(J\): \begin{equation} \label{FEr} J(x)= \int\limits_{[\lambda_{\textup{min}},\lambda_{\textup{max}}]}x(\lambda)\,M(d\lambda), \quad x(\lambda)\in{}C([\lambda_{\textup{min}},\lambda_{\textup{max}}]). \end{equation} In view of \eqref{LRI}, \[J(e^{t\lambda})=e^{tA+B}.\] Thus the representation \eqref{MaEq} is established. If the matrix \(B\) is Hermitian: \(B\in\mathfrak{H}_n\), then \begin{equation*} e^{tA+B}=\big(e^{tA+B}\big)^{\ast},\quad \forall\,t\in\mathbb{R}. \end{equation*} Hence \[\int\limits_{[\lambda_{\textup{min}},\lambda_{\textup{max}}]} e^{t\lambda}M(d\lambda)=\int\limits_{[\lambda_{\textup{min}},\lambda_{\textup{max}}]} e^{t\lambda}(M(d\lambda))^{\ast},\quad \forall\,t\in\mathbb{R}.\] Since the system \(\{e^{t\lambda}\}_{t\in\mathbb{R}}\) is complete in the space \(C([\lambda_{\textup{min}},\lambda_{\textup{max}}])\), the measures \(M(d\lambda)\) and \((M(d\lambda))^{\ast}\) must coincide. In other words, the measure \(M(d\lambda)\) is \(\mathfrak{H}_n\)-valued. \end{proof} \section{The measure \(\boldsymbol{M(d\lambda)}\) is not necessarily non-negative.} \label{NNN} \begin{defn}\label{nNeg} Let \(S=(s_{pq})_1^n\in\mathfrak{M}_{n}\). The matrix \(S\) is said to be \emph{non-negative} if the inequality \begin{equation*} \sum\limits_{1\leq p,q\leq n}s_{pq}\xi_p\overline{\xi_q}\geq 0 \end{equation*} holds for all complex numbers \(\xi_1,\,\ldots\,,\xi_n\). \end{defn} \begin{defn}\label{nNeMe} Let \(\mathfrak{B}\) be the class of Borel sets of \(\mathbb{R}\), \(M(d\lambda):\mathfrak{B}\to\mathfrak{M}_n\) be a matrix valued measure. The measure \(M(d\lambda)\) is said to be \emph{non-negative} if the matrix \(M(\delta)\) is non-negative for every set \(\delta\in\mathfrak{B}\). \end{defn} Comparing the equalities \eqref{StR} and \eqref{MaEq}, we conclude that \begin{equation} \mu(d\lambda)=\tr{}M(d\lambda). \end{equation} The following question arises naturally.\\[2.0ex] \textbf{Question}. \emph{Let \(A\in\mathfrak{H}_n\), \(B\in\mathfrak{H}_n\), and \(M(d\lambda)\) be the \(\mathfrak{H}_n\)-valued measure which appears in the representation \eqref{MaEq} of the function \(e^{tA+B}\). Is the measure \(M(d\lambda)\) non-negative}? The following example shows that the answer to this question is negative already for \(n=2\).\\[2.0ex] \textbf{Example}. Let \begin{equation} A= \begin{bmatrix} 2&0\\ 0&0 \end{bmatrix}, \qquad B= \begin{bmatrix} 0&1\\ 1&0 \end{bmatrix} \cdot \end{equation} The eigenvalues of the matrix \(tA+B\) are \begin{equation} \lambda_1(t)=t+\sqrt{t^2+1},\qquad \lambda_2(t)=t-\sqrt{t^2+1}.
\end{equation} The spectral projectors of the matrix \(tA+B\) corresponding to these eigenvalues are \begin{equation} E_1(t)= \begin{bmatrix} \frac{\sqrt{t^2+1}+t}{2\sqrt{t^2+1}}&\frac{1}{2\sqrt{t^2+1}}\\[1.5ex] \frac{1}{2\sqrt{t^2+1}}&\frac{\sqrt{t^2+1}-t}{2\sqrt{t^2+1}} \end{bmatrix} \mathpunct{\raisebox{0.5ex}{,}} \quad E_2(t)= \begin{bmatrix} \frac{\sqrt{t^2+1}-t}{2\sqrt{t^2+1}}&-\frac{1}{2\sqrt{t^2+1}}\\[1.5ex] -\frac{1}{2\sqrt{t^2+1}}&\frac{\sqrt{t^2+1}+t}{2\sqrt{t^2+1}} \end{bmatrix} \cdot \end{equation} The matrix \(e^{tA+B}\) can be calculated explicitly: \begin{equation} e^{tA+B}=e^{\lambda_1(t)}E_1(t)+e^{\lambda_2(t)}E_2(t). \end{equation} In particular, \begin{equation}\label{EEM} \Big(\frac{d{\ }}{dt}e^{tA+B}\Big)_{|_{t=0}}=D, \end{equation} where \begin{equation} D= \begin{bmatrix} e& \dfrac{e-e^{-1}}{2}\\[2.0ex] \dfrac{e-e^{-1}}{2}&e^{-1} \end{bmatrix} \cdot \end{equation} The matrix \(D\) is not non-negative: \begin{equation} \textup{det}\,D=\frac{6-e^2-e^{-2}}{4}<0. \end{equation} From \eqref{MaEq} it follows that \begin{equation} D=\int\limits_{[0,2]}\lambda{}M(d\lambda). \end{equation} If the measure \(M(d\lambda)\) were non-negative, then the matrix \(D\) would be non-negative as well. Since \(\textup{det}\,D<0\), this is not the case; hence the measure \(M(d\lambda)\) is not non-negative, and the answer to the above question is negative.
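The example is also easily double-checked numerically. The following is a minimal sketch (not part of the argument), using NumPy/SciPy and a central finite difference for the derivative in \eqref{EEM}:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

# Central difference approximation of (d/dt) e^{tA+B} at t = 0.
h = 1e-6
D_num = (expm(h * A + B) - expm(-h * A + B)) / (2 * h)

# Closed form of D from the text.
e = np.e
D = np.array([[e, (e - 1 / e) / 2], [(e - 1 / e) / 2, 1 / e]])

assert np.allclose(D_num, D, atol=1e-5)
print(np.linalg.det(D))        # (6 - e^2 - e^{-2})/4, approximately -0.381 < 0
print(np.linalg.eigvalsh(D))   # one eigenvalue is negative
\end{verbatim}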
\section{Introduction and problem delineation}\label{section:intro} A process $N=(N_t)_{t\in [0,\infty)}$ in continuous time is a homogeneous Poisson process (HPP) of intensity $c\in (0,\infty)$, by definition, if it is a counting process (i.e. if it has values in $\mathbb{N}_0\cup\{\infty\}$ and right-continuous nondecreasing paths) that is finite a.s., has jumps of size $1$ a.s., starts in $N_0=0$ a.s., and has independent increments, whose distribution is Poisson: $(N_t-N_s)\mathbbm{1}_{\{N_s<\infty\}}\sim \Pois(c(t-s))$ for $ \{s,t\}\subset [0,\infty)$, $s\leq t$. (Here, for $\lambda\in [0,\infty)$, $\Pois(\lambda)$ is the law on $\mathbb{N}_0$ that has $\Pois(\lambda)(\{k\})=\lambda^ke^{-\lambda}/k!$ for $k\in \mathbb{N}_0$.) We write: $N\sim \HPP(c)$. Homogeneous Poisson processes represent a fundamental type (in law a one-parametric family) of processes in continuous time and with a discrete state space, lying on the intersection of (at least) counting processes, L\'evy processes (hence strong Markov processes), renewal processes and (inhomogeneous) Poisson processes (hence continuous-time Markov chains). Consequently there has been, and continues to be, considerable interest in various characterizations of HPPs. An incomplete but illustrative list of such characterizations follows. (For unexplained terms the reader is referred to the cited works.) For $c\in (0,\infty)$ and a counting process $N$ with $N_0=0$ a.s., the assertion ``$N\sim \HPP(c)$'' is equivalent to each of the following: \begin{itemize} \item $N$ is a L\'evy process with jumps of size $1$ a.s. and with its L\'evy measure having mass $c$ \cite[Theorem~2.2.13]{applebaum}. \item $N$ has jumps of size $1$ a.s. and its inter-arrival times are independent and identically distributed, with the exponential law of mean $c^{-1}$ (notation: $\Exp(c)$) \cite[Theorem~6.5.5(d)]{cinlar}. \item (Watanabe) $N$ has jumps of size $1$ a.s. and $(N_t-c t)_{t\in [0,\infty)}$ is a martingale \cite[Theorem~6.5.5(c)]{cinlar}. \item $N$ is an ordinary (i.e. non-delayed non-defective no-simultaneous-arrivals) renewal process that is stationary, with its inter-renewal times having mean $c^{-1}$: follows from \cite[Corollary~V.3.6]{asmussen} coupled with the elementary observation that invariance under the transformation of the integrated tail characterizes the exponential distribution. \item (Srivastava) A non-trivial Bernoulli marking (thinning) of $N$ results in independent marked processes and $\EE N_t=ct$ for $t\in [0,\infty)$: this is a particular case of the more general characterization of Poisson processes in Euclidean space \cite[Theorem~2.1]{assuncao}, originally due to Fichtner; see \cite[Theorem~1]{nehring} for further extensions. \item (Samuels) $N$ is an ordinary renewal process with mean inter-renewal time $c^{-1}$ that results as the superposition of two independent ordinary renewal processes \cite[Theorem on p. 73]{samuels}. \label{samuels} \end{itemize} For still further characterizations see \cite[Sections~2.2,~2.3 and~4.3]{vere-jones} dealing with the property of `complete randomness', distribution form and various operations on stationary renewal processes, respectively; Slivnyak-Mecke's characterization of Poisson processes \cite[Proposition 13.1.VII]{vere-jones-II} \cite[Lemma~6.15]{kallenberg} in the context of Palm theory of random measures; \cite{liberman,gan} for the order statistics property; \cite{jagers,li,erickson} concerning age (a.k.a.
spent or current life) and residual life; finally \cite{huang,chandramohan,disney} that deal with marking (thinning) of renewal processes. \label{lit} In this paper we present another characterization of HPPs in the context of marking (thinning) a general renewal process. To this end we first fix some notation. Let, on a probability space $(\Omega,\FF,\PP)$, $T=(T_i)_{i\in \mathbb{N}}$ be a sequence of independent random variables with values in $[0,\infty]$. Let $T_j$, $j\in \mathbb{N}_{\geq 2}$, be identically distributed. Define $S_n:=\sum_{i=1}^nT_i$ for $n\in \mathbb{N}$, and then $N_t:=\sum_{n\in \mathbb{N}}\mathbbm{1}(S_n\leq t)$ for $t\in [0,\infty)$ -- the associated renewal process (allowing delay: $T_1$ does not necessarily have the same distribution as $T_2$; defect: $T_i$, $i\in \mathbb{N}$, can take on the value $\infty$; and multiple simultaneous arrivals: $T_i$, $i\in \mathbb{N}$, can take on the value $0$). Let furthermore $p\in (0,1)$, and let $X=(X_i)_{i\in \mathbb{N}}$ be a sequence of independent, identically distributed (i.i.d.) random variables taking values in $\{0,1\}$, independent of $T$, and with $X_1\sim \Ber(p)$, where $\Ber(p)$ is the Bernoulli law: $\Ber(p)(\{1\})=1-\Ber(p)(\{0\})=p$. Define the marked processes $N^1$ and $N^0$ as follows: $$N_t^i:=\sum_{n\in \mathbb{N}}\mathbbm{1}(S_n\leq t,X_n=i),\quad t\in [0,\infty),\quad i\in \{0,1\}.$$ The strong Markov property for i.i.d. sequences implies that $N^0$ and $N^1$ are again renewal processes (with delay and defect): for $i\in \{0,1\}$, if one defines $S^i_0:=0$ and inductively $S^i_{n+1}:=\inf\{m>S^i_n:X_m=i\}$, $n\in \mathbb{N}_0$, then $(S^i_{n+1}-S^i_n)_{n\in \mathbb{N}_0}$ (where we set e.g. $\infty-\infty=\infty$ on the negligible event on which such a difference may occur) is an i.i.d. sequence, independent of $T$, and a.s. the sequence of the inter-renewal epochs of $N^i$ is given by $(\sum_{k=S^i_{n-1}+1}^{S^i_n}T_k)_{n\in \mathbb{N}}$. Finally define, for $i\in \{0,1\}$, $R_i:=\inf\{t\in [0,\infty):N^i_t\geq 1\}$, the time of the first renewal of the process $N^i$, and $L_i:=\inf\{j\in \mathbb{N}:X_j=i\}$. Then, on $\{L_i<\infty\}$ and hence a.s., $R_i=\sum_{j=1}^{L_i}T_j$, $i\in \{0,1\}$.\label{ns} We observe: if $N\sim \HPP(\theta)$ for some $\theta\in (0,\infty)$, then $N^0$ and $N^1$, in particular $R_0$ and $R_1$, are independent \cite[Theorem~4.4.1]{resnick}. It is then natural and interesting to ask, whether or not the latter property of the independence of $R_0$ and $R_1$ already characterizes HPPs. Indeed we will demonstrate the validity of\label{natural} \begin{theorem}\label{theorem} Assume $\PP(T_1<\epsilon)>0$ for all $\epsilon>0$, and that either $T_2$ is non-arithmetic or else $\PP(T_1=0)=0$. Then $R_0$ and $R_1$ are independent if and only if $N\sim\HPP(\theta)$ for some $\theta\in (0,\infty)$. The same equivalence obtains if $N$ is assumed to be ordinary (i.e. non-delayed, non-defective, and not having multiple simultaneous arrivals) instead. \end{theorem} Here: \begin{definition}\label{definition} $T_2$ is non-arithmetic, if there is no $\alpha\in (0,\infty)$ with $\PP(T_2\in \{\alpha n:n\in \mathbb{N}_0\cup \{\infty\}\})=1$. \end{definition} Theorem~\ref{theorem}, whose proof is given at the end of Section~\ref{section:result}, is most closely related to the findings of \cite{chandramohan,disney,huang}. Let us see how it compares. 
On the one hand, \cite[Corollaries~2.3 and~3.2]{disney} (respectively, \cite[Theorem~2.1]{chandramohan}; \cite[Corollary~2]{huang}) give that, when either $T_1$ has the distribution of the integrated tail of $T_2$ with (implicitly) $\EE T_2\in (0,\infty)$, or else when $T_1$ has the same distribution as $T_2$, $\PP(T_2<\infty)=1$ and $T_2$ is non-arithmetic (respectively, when $\PP(0<T_1)=1$ and (as implicitly assumed in the proof; not all the assumptions appear to be given explicitly) $\PP(T_1=\infty)<1$; when $\PP(0<T_1,0<T_2)=1$ and $\PP(T_1<\epsilon)>0$ for all $\epsilon>0$), then $\cov(N^0_t,N^1_t)=0$ for all $t\in (0,\infty)$ implies $N\sim \HPP(\theta)$ for some $\theta\in (0,\infty)$. (Strictly speaking the quoted result of \cite{chandramohan} is false. For, given a $\kappa\in (0,\infty)$, we can take independent $T_1-\kappa\sim \Exp(\lambda)$ and $T_j\sim \Exp(\lambda)$ for $j\in \mathbb{N}_{\geq 2}$ (a deterministically delayed HPP). This situation is however precluded by the conditions of \cite{disney,huang}.) On the other hand, the condition of Theorem~\ref{theorem} is one on the independence of \emph{the first renewal epochs $R_0$ and $R_1$ only}, and \emph{not} (\emph{a priori}) on the absence of correlation of the processes $N^0$ and $N^1$ at all deterministic times (viz. the condition of \cite{chandramohan,disney,huang}). In a similar vein, Theorem~\ref{theorem} is not subsumed in the result of Samuels described above (final bullet point on p.~\pageref{samuels}): in the case that $N$ is an ordinary renewal process, for sure $N^0$ and $N^1$ are ordinary renewal processes that superpose into $N$, but the condition of Theorem~\ref{theorem} is \emph{not} (\emph{a priori}) on $N^0$ and $N^1$ being independent. Thus Theorem~\ref{theorem} is a complement to existing characterizations of HPPs in the context of marked renewal processes. In fact we shall prove slightly more than the content of Theorem~\ref{theorem}. Specifically, we shall provide a precise characterization of the independence of $R_0$ and $R_1$ (see Proposition~\ref{proposition:karakterizacija} in the section following), whose immediate corollary will be Theorem~\ref{theorem}. Excepting degenerate and trivial cases, and modulo deterministic time delay and scaling, we obtain here besides HPPs also what are continuous-time-embedded discrete-time stationary ``geometric'' renewal processes. It is interesting that these yield independence of the first renewal epochs of the two marked processes, but not the independence of the marked processes in their entirety (see Remark~\ref{remark}\ref{remark:c}). In addition to its theoretical appeal, our result appears to have some potential practical (statistical) relevance as well. We mean here a situation in which, for some reason, $N$ may be assumed to satisfy the assumptions of Theorem~\ref{theorem}, but it is not clear whether $N$ is an HPP. Then this can be statistically tested, via independent trials, based (also) on the hypothesis of the independence of the first renewal times $R_0$ and $R_1$ of a non-trivial Bernoulli marking (possibly of unknown parameter $p$) of $N$. This might in particular be useful when data is limited to $R_0$ and $R_1$. Compare the study \cite{assuncao} of a test for Poisson processes, based on the characterization result of Fichtner alluded to above.
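To indicate what such a procedure might look like in the simplest setting, here is a minimal Monte Carlo sketch (an illustration only: it simulates an HPP, marks it, and estimates the correlation of $R_0$ and $R_1$, which tests merely a necessary consequence of their independence; the parameter values are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
theta, p, trials, n_max = 1.0, 0.3, 10_000, 200

R0 = np.empty(trials)
R1 = np.empty(trials)
for i in range(trials):
    T = rng.exponential(1 / theta, size=n_max)  # inter-arrival times of an HPP(theta)
    S = np.cumsum(T)                            # renewal epochs S_n
    X = rng.random(n_max) < p                   # i.i.d. Bernoulli(p) marks
    # With n_max = 200 both marks occur with overwhelming probability.
    R1[i] = S[np.argmax(X)]                     # first renewal carrying mark 1
    R0[i] = S[np.argmax(~X)]                    # first renewal carrying mark 0

# For an HPP the sample correlation should be close to 0.
print(np.corrcoef(R0, R1)[0, 1])
\end{verbatim}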
On a more pure level, note that $R_0$ and $R_1$ are relatively simple functionals of the paths of $N^0$ and $N^1$, and as such, in some given context, their independence may be more easily susceptible to analysis, than that of the whole of the processes $N^0$ and $N^1$, or of $N$. \label{practical} \section{The result and its proof}\label{section:result} So as to be able to state the precise result of this paper succinctly, let us agree on the following pieces of notation: $\LL(V)$ denotes the law of a random element $V$; for $x_0\in [0,\infty]$, $\delta_{x_0}$ is the Dirac measure at $x_0$; then for $r\in (0,1)$, $\geom_{\mathbb{N}}(r):=\sum_{k=1}^\infty r(1-r)^{k-1}\delta_k$, respectively $\geom_{\mathbb{N}_0}(r):=\sum_{k=0}^\infty r(1-r)^{k}\delta_k$, is the geometric law on $\mathbb{N}$, respectively $\mathbb{N}_0$, with success parameter $r$. Now the result of this paper follows. \begin{proposition}\label{proposition:karakterizacija} $R_1$ and $R_0$ are independent if and only if (precisely) one of the conditions below holds true. \begin{enumerate}[(a)] \item\label{kara:4} $\PP(T_1=\infty)=1$. \item\label{kara:3} There exists $\kappa\in [0,\infty)$ such that $\PP(T_2=0)=\PP(T_1=\kappa)=1$. \item\label{kara:0} There exist $\kappa\in [0,\infty)$ and $q_0\in (0,1)$ such that $\LL(T_1)=(1-q_0^2)\delta_\kappa+q_0^2\delta_\infty$ and $\LL(T_2)=(1-q_0)\delta_0+q_0\delta_\infty$ \item\label{kara:2} There exist $\kappa\in [0,\infty)$ and $\theta\in (0,\infty)$ such that $T_1-\kappa\sim \Exp(\theta)$ and $T_2\sim \Exp(\theta)$. \item\label{kara:1} There exist $q_0\in (0,1)$, $\kappa\in [0,\infty)$ and $\alpha\in (0,\infty)$ such that $\LL((T_1-\kappa)/\alpha)=\geom_{\mathbb{N}_0}(1-q_0^2)$ and $\LL(T_2/\alpha)=(1-q_0)\delta_0+q_0\geom_\mathbb{N}(1-q_0^2)$. \end{enumerate} \end{proposition} \begin{remark} \label{remark} \leavevmode \begin{enumerate}[(i)] \item The conditions of the proposition are clearly mutually exclusive. \item\label{remark:ii} In cases \ref{kara:4}, \ref{kara:3} and \ref{kara:2} even the processes $N^1$ and $N^0$ in their entirety are independent. \item\label{remark:c} In cases \ref{kara:1} and \ref{kara:0}, $N^1$ and $N^0$ are not independent. This may be seen as follows. Let $B_i:=\{N^i_\kappa=1\}$, $i\in \{0,1\}$. We compute $\PP(B_0)=\sum_{k=1}^\infty (1-q_0^2)(1-q_0)^{k-1}q_0{k\choose 1}p^{k-1}(1-p)=q_0(1-q_0^2)(1-p)/(1-(1-q_0)p)^2$, and similarly $\PP(B_1)=q_0(1-q_0^2)p/(1-(1-q_0)(1-p))^2$, finally $\PP(B_1\cap B_0)=2(1-q_0^2)(1-q_0)q_0p(1-p)$. Let furthermore $A_0:=\{N^0_\kappa=0\}$. We compute $\PP(A_0)=q_0^2+\sum_{k=1}^\infty (1-q_0^2)(1-q_0)^{k-1}q_0p^k=q_0^2+(1-q_0^2)q_0p/(1-(1-q_0)p)=q_0(q_0+p-pq_0)/(1-p+pq_0)$ and also $\PP(A_0\cap B_1)=(1-q_0^2)pq_0$. Elementary simplifications reveal that $\PP(B_0\cap B_1)=\PP(B_0)\PP(B_1)$ is equivalent to $q_0(1+q_0)=2[q_0+p-pq_0]^2[1-p+q_0p]^2$, whilst $\PP(A_0\cap B_1)=\PP(A_0)\PP(B_1)$ is equivalent to $[q_0+p-pq_0][1-p+q_0p]=q_0$. Both equalities together yield $q_0(1+q_0)=2q_0^2$, contradicting $q_0\in (0,1)$ \item The last case (item ~\ref{kara:1}) is (modulo the deterministic time delay by $\kappa$ and the scaling $\alpha$) a stationary renewal process in discrete time that has been embedded into continuous time. 
For, if $U_1\sim \geom_{\mathbb{N}_0}(1-q_0^2)$ and $U_k\sim (1-q_0)\delta_0+q_0\geom_\mathbb{N}(1-q_0^2)$ for $k\in \mathbb{N}_{\geq 2}$ are independent, then we can easily convince ourselves that, for $k\in \mathbb{N}_0$, $\PP(U_1=k)=\PP(U_2>k)/\EE U_2$ ($U_1$ has the distribution of the ``summed tail'' of $U_2$), and consequently (e.g. via moment functions) that the expected number of renewals at time $k\in \mathbb{N}_0$, i.e. $\EE \sum_{n=1}^\infty\mathbbm{1}(\sum_{l=1}^nU_l=k)$, is equal to $1/\EE U_2=(1-q_0^2)/q_0$, and hence does not depend on $k$ (property of stationarity; cf. \cite[pp. 34--36]{limnios}). For completeness' sake: the renewal process $L$ in discrete time, associated to the sequence $(U_k)_{k\in \mathbb{N}}$, is of course given by $L_n:=\sum_{k=1}^\infty\mathbbm{1}(\sum_{l=1}^kU_l\leq n)$ for $n\in \mathbb{N}_0$. \end{enumerate} \end{remark} The proof of Proposition~\ref{proposition:karakterizacija} will be via Laplace transforms. Let us recall this notion. \begin{definition} For a law $\LL$ on the Borel subsets of $[0,\infty]$ we define the Laplace transform of $\LL$ as the function $[0,\infty)\ni \lambda\mapsto \int_{[0,\infty)}e^{-\lambda x}\LL(dx)\in [0,\LL([0,\infty))]$. We denote it by $L_\LL$. \end{definition} \begin{remark} Such a Laplace transform is by bounded convergence continuous with limit $\lim_\infty L_\LL=\LL(\{0\})$ at $\infty$, its value at $0$ is $L_\LL(0)=\LL([0,\infty))$, and it is nonincreasing. It also determines the law (``injectivity of the Laplace transform''): if for two laws $\LL_1$ and $\LL_2$ on the Borel subsets of $[0,\infty]$, $L_{\LL_1}\vert_{[a,\infty)}=L_{\LL_2}\vert_{[a,\infty)}$ for some $a\in [0,\infty)$, then $\LL_1=\LL_2$. For, the restrictions of the measures $\LL_1(\cdot\cap [0,\infty))$ and $\LL_2(\cdot\cap [0,\infty))$ to the Borel subsets of $[0,\infty)$ then have the same (finite) Laplace transform on some neighborhood of $\infty$, and are thus the same \cite[Theorem~8.4]{bhattacharya}. It follows that $\LL_1=\LL_2$. \end{remark} We now prove Proposition~\ref{proposition:karakterizacija}. Briefly, the idea is to reduce the independence of $R_0$ and $R_1$ to the factorization of their Laplace transforms. This yields a functional equation for the Laplace transforms of the laws of $T_1$ and $T_2$. The latter is in turn analysed using the methods of regular variation, a technique that may be of independent interest. \begin{proof}\label{proof} Let $\fii:=L_{\LL(T_1)}$, respectively $\phi:=L_{\LL(T_2)}$, be the Laplace transform of the law of $T_1$, respectively $T_2$. \label{indeps} It follows from the relevant independences, i.e. from the fact that the law $\LL((X,T))$ of $(X,T)$ on the space $(\prod_{i\in \mathbb{N}}\{0,1\})\times (\prod_{i\in \mathbb{N}}[0,\infty])$ is given by the product law $\LL((X,T))=(\bigtimes_{i\in \mathbb{N}}\Ber(p))\times (\LL(T_1)\times (\bigtimes_{i\in \mathbb{N}_{\geq 2}}\LL(T_2)))$, from the a.s. 
equality $\Omega=(\cup_{k\in \mathbb{N}}\{X_1=0\}\cap \{L_1=k+1\})\cup (\cup_{k\in \mathbb{N}}\{X_1=1\}\cap \{L_0=k+1\})$, and from the countable additivity of mathematical expectation, that, for $\{\lambda,\mu\}\subset [0,\infty)$, on the one hand: $$\EE[e^{-\lambda R_1-\mu R_0}\mathbbm{1}(R_1<\infty,R_0<\infty)]$$ $$=\fii(\lambda+\mu)\left[(1-p)\sum_{k=1}^\infty(1-p)^{k-1}p\phi(\lambda)^k+p\sum_{k=1}^\infty p^{k-1}(1-p)\phi(\mu)^k\right]$$ $$=\fii(\lambda+\mu)p(1-p)\frac{\phi(\lambda)+\phi(\mu)-\phi(\lambda)\phi(\mu)}{(1-(1-p)\phi(\lambda))(1-p\phi(\mu))},$$ and on the other hand: $$\EE[e^{-\lambda R_1}\mathbbm{1}(R_1<\infty)]=\fii(\lambda)\left[p+(1-p)\sum_{k=1}^\infty (1-p)^{k-1}p\phi(\lambda)^k\right]=\frac{p\fii(\lambda)}{1-(1-p)\phi(\lambda)},$$ analogously: $$\EE[e^{-\mu R_0}\mathbbm{1}(R_0<\infty)]=\frac{(1-p)\fii(\mu)}{1-p\phi(\mu)}.$$ The class of bounded functions $\mathcal{K}:=\{e^{-\lambda\cdot}\mathbbm{1}_{[0,\infty)}:\lambda\in [0,\infty)\}$ is closed under multiplication and generates the Borel $\sigma$-field on $[0,\infty]$. The functional monotone class theorem hence implies that the independence of $R_1$ and $R_0$ is equivalent to \footnotesize \begin{equation}\label{eq:independence-cond} \EE[e^{-\lambda R_1-\mu R_0}\mathbbm{1}(R_1<\infty,R_0<\infty)]=\EE[e^{-\lambda R_1}\mathbbm{1}(R_1<\infty)]\EE[e^{-\mu R_0}\mathbbm{1}(R_0<\infty)],\quad \{\lambda,\mu\}\subset [0,\infty). \end{equation}\normalsize Indeed, the latter condition is clearly necessary. To see how the functional monotone class theorem intervenes in the proof of the sufficiency, note that for a fixed $\mu\in [0,\infty)$, the class of bounded measurable functions $f:[0,\infty]\to \mathbb{R}$ for which $\EE[f(R_1)e^{-\mu R_0}\mathbbm{1}(R_0<\infty)]=\EE[f(R_1)]\EE[e^{-\mu R_0}\mathbbm{1}(R_0<\infty)]$ is a vector space over $\mathbb{R}$ closed under nondecreasing limits of nonnegative functions. Since it contains the class $\mathcal{K}$ and also $\mathbbm{1}_{[0,\infty]}$, so by monotone class, $\EE[f(R_1)e^{-\mu R_0}\mathbbm{1}(R_0<\infty)]=\EE[f(R_1)]\EE[e^{-\mu R_0}\mathbbm{1}(R_0<\infty)]$ prevails for all bounded measurable $f:[0,\infty]\to\mathbb{R}$. With this having been established, it remains to repeat the preceding argument essentially \emph{verbatim}, except that now with a fixed bounded measurable $f:[0,\infty]\to \mathbb{R}$, and for the class of bounded measurable $g:[0,\infty]\to \mathbb{R}$ for which $\EE[f(R_1)g(R_0)]=\EE[f(R_1)]\EE[g(R_0)]$. Using the computations from the beginning of the proof, after some algebraic rearrangement, \eqref{eq:independence-cond} rewrites into \begin{equation}\label{eq:functional-1} \fii(\lambda+\mu)\left[\phi(\lambda)+\phi(\mu)-\phi(\lambda)\phi(\mu)\right]=\fii(\lambda)\fii(\mu),\quad \{\lambda,\mu\}\subset [0,\infty). \end{equation} The sufficiency of the conditions of the proposition may now be checked as follows. Under \ref{kara:4} $N=0$ a.s., hence $R_0$ and $R_1$ are equal to $\infty$ a.s. and so trivially independent. Under \ref{kara:3}, a.s. $N$ is zero up to $\kappa$ and then jumps to $\infty$ at $\kappa$, whence $R_0$ and $R_1$ are both equal to $\kappa$ a.s., again trivially independent. \ref{kara:2} is the case of a deterministically delayed (by $\kappa$) HPP, in which case the independence of $R_0$ and $R_1$ is well-known, as we have noted.\footnote{Indeed, in all the previous three cases, we see that even $N^0$ and $N^1$ are independent (viz. Remark~\ref{remark}\ref{remark:ii}).} \ref{kara:0}. 
The deterministic delay by $\kappa$ does not affect independence; we may assume $\kappa=0$. Then $\fii\equiv 1-q_0^2$, $\phi\equiv1-q_0$ and \eqref{eq:functional-1} becomes $(1-q_0^2)[2-2q_0-(1-q_0)^2]=(1-q_0^2)^2$, which holds true. \ref{kara:1}. Again without loss of generality $\kappa$ is set equal to $0$; similarly the scaling of time by the factor $\alpha$ is immaterial to independence, and we may assume $\alpha=1$. In that case we identify $\fii(\lambda)=\frac{1-q_0^2}{1-q_0^2e^{-\lambda}}$ and $\phi(\lambda)=1-q_0+q_0e^{-\lambda}\frac{1-q_0^2}{1-q_0^2e^{-\lambda}}=\frac{(1-q_0)(1+q_0e^{-\lambda})}{1-q_0^2e^{-\lambda}}$, $\lambda\in [0,\infty)$. Tedious but straightforward algebraic manipulations then yield \eqref{eq:functional-1}. We now prove necessity of the conditions. Set $q_0:=\PP(T_2>0)$. Assume $\PP(T_1<\infty)>0$ (otherwise we get \ref{kara:4}) and hence $\fii>0$. We see from \eqref{eq:functional-1} that then $\phi\not\equiv 0$, hence $\PP(T_2<\infty)>0$, equivalently $\phi>0$. Also from \eqref{eq:functional-1}, for each $\mu\in [0,\infty)$, there exists the limit $$\lim_{\lambda\to\infty}\frac{\fii(\lambda+\mu)}{\fii(\lambda)}=\frac{\fii(\mu)}{1-q_0+\phi(\mu)q_0}\in (0,1].$$ From the characterization of regular variation \cite[Theorem~1.4.1]{bgt} for the function $\fii\circ \ln\vert_{[1,\infty)}$ it follows that there exists a $\rho \in \mathbb{R}$, for which $$\frac{\fii(\ln r)}{1-q_0+\phi(\ln r)q_0}=r^\rho\text{ for all }r \in [1,\infty);$$ necessarily $\kappa:=-\rho \in [0,\infty)$. In other words $$\fii(\mu)=e^{-\kappa \mu}(1-q_0+\phi(\mu)q_0)\text{ for }\mu\in [0,\infty). $$ If $\PP(T_2=\infty)=q_0$, equivalently if $\LL(T_2)=(1-q_0)\delta_0+q_0\delta_\infty$, then $\phi\equiv 1-q_0$, and we obtain $\fii=e^{-\kappa \cdot}(1-q_0^2)$, whence from the injectivity of the Laplace transform, $\LL(T_1)=(1-q_0^2)\delta_\kappa+q_0^2\delta_\infty$ -- that is to say, we obtain \ref{kara:3} and \ref{kara:0}, according as to whether $q_0=0$ or $q_0>0$. Assume now $\PP(T_2=\infty)\ne q_0$, equivalently $\PP(T_2=\infty)<q_0$, in particular $q_0>0$ and $\phi>1-q_0$. The functional equation \eqref{eq:functional-1} may then be rewritten in the form \footnotesize \begin{equation*} (1-q_0+q_0\phi(\lambda+\mu))\left[\phi(\lambda)+\phi(\mu)-\phi(\lambda)\phi(\mu)\right]=(1-q_0+q_0\phi(\lambda))(1-q_0+q_0\phi(\mu)),\quad \{\lambda,\mu\}\subset [0,\infty). \end{equation*}\normalsize Introducing the substitution $\xi:=(\phi-(1-q_0))/q_0$, we obtain \begin{equation*} \xi(\lambda)\xi(\mu)=\xi(\lambda+\mu)(1-q_0^2+q_0^2(\xi(\lambda)+\xi(\mu)-\xi(\lambda)\xi(\mu))),\quad \{\lambda,\mu\}\subset [0,\infty). \end{equation*} Another substitution $\psi:=\xi^{-1}-1$ yields \begin{equation}\label{eq:functional-2} \psi(\lambda+\mu)=\psi(\lambda)+\psi(\mu)+\psi(\lambda)\psi(\mu)(1-q_0^2),\quad \{\lambda,\mu\}\subset [0,\infty). \end{equation} When $q_0=1$, this is Cauchy's functional equation for $\psi$. The latter being a monotone function, it follows that there exists an $\alpha\in \mathbb{R}$, such that $\psi(\mu)=\alpha\mu$ for all $\mu\in [0,\infty)$. From $\PP(T_2>0)=q_0>0$ we have of course $\alpha\ne 0$ and $\theta:=\alpha^{-1}\in (0,\infty)$. So in this case, for all $\mu\in [0,\infty)$, $\phi(\mu)=\xi(\mu)=(1+\psi(\mu))^{-1}=\theta/(\theta+\mu)$. Then the injectivity of the Laplace transform implies that $T_2\sim\Exp(\theta)$. Similarly it follows that $T_1-\kappa\sim \Exp(\theta)$. In other words, we have case \ref{kara:2}. Finally we are left with the case of $q_0<1$. 
When so, we get from \eqref{eq:functional-2} and from $\lim_\infty\psi=\infty$ (which obtains since $\lim_\infty\phi=\PP(T_2=0)=1-q_0$, so that $\lim_\infty\xi=0$) that for each $\mu\in [0,\infty)$ there exists the limit $$\lim_{\lambda\to\infty}\frac{\psi(\lambda+\mu)}{\psi(\lambda)}=1+\psi(\mu)(1-q_0^2)\in [1,\infty).$$ The characterization of regular variation gives the existence of an $\alpha\in \mathbb{R}$, such that $$1+\psi(\mu)(1-q_0^2)=e^{\alpha\mu}\text{ for }\mu\in [0,\infty);$$ necessarily $\alpha\in (0,\infty)$. In other words we obtain $$\phi(\mu)=1-q_0+q_0\frac{1-q_0^2}{e^{\alpha \mu}-q_0^2}\text{ and }\fii(\mu)=e^{-\mu\kappa}\frac{1-q_0^2}{1-q_0^2e^{-\alpha \mu}}\text{ for }\mu\in [0,\infty).$$ The injectivity of the Laplace transform finally implies that $\LL(T_2/\alpha)=(1-q_0)\delta_0+q_0\geom_\mathbb{N}(1-q_0^2)$ and $\LL((T_1-\kappa)/\alpha)=\geom_{\mathbb{N}_0}(1-q_0^2)$, viz. case \ref{kara:1}. \end{proof} As mentioned, Proposition~\ref{proposition:karakterizacija} has as its corollary Theorem~\ref{theorem}. \noindent \emph{Proof of Theorem~\ref{theorem}.} The conditions that $0$ belong to the support of the law of $T_1$, and that $T_2$ be non-arithmetic or else the law of $T_1$ have no atom in zero, exclude the cases \ref{kara:4}-\ref{kara:3}-\ref{kara:0}-\ref{kara:1} and force $\kappa=0$ in \ref{kara:2} of Proposition~\ref{proposition:karakterizacija}. Clearly the same transpires also when $N$ is ordinary. \qed\label{proof:thm}
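As a final remark, the ``tedious but straightforward'' algebra establishing \eqref{eq:functional-1} in case \ref{kara:1} of the sufficiency part is easily double-checked numerically; a minimal sketch (with an arbitrary $q_0\in(0,1)$ and, as in the proof, $\kappa=0$, $\alpha=1$):
\begin{verbatim}
import numpy as np

q0 = 0.4  # arbitrary value in (0,1)
fii = lambda s: (1 - q0**2) / (1 - q0**2 * np.exp(-s))   # Laplace transform of L(T_1)
phi = lambda s: ((1 - q0) * (1 + q0 * np.exp(-s))
                 / (1 - q0**2 * np.exp(-s)))             # Laplace transform of L(T_2)

for lam in np.linspace(0.0, 5.0, 11):
    for mu in np.linspace(0.0, 5.0, 11):
        lhs = fii(lam + mu) * (phi(lam) + phi(mu) - phi(lam) * phi(mu))
        rhs = fii(lam) * fii(mu)
        assert abs(lhs - rhs) < 1e-12
\end{verbatim}
\bibliographystyle{amsplain}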
\section{Introduction} \label{intro} Contrary to the heteronuclear alkali diatomic molecules (e.g. \cite{Pashov:05,Docenko:06,Staanum:07}), the lowest triplet state a$^3\Sigma^+_{u}$ of the homonuclear ones is much less accurately characterized. The experimental data in this case are either fragmentary or from low resolution spectroscopy. This situation is mainly due to the presence of the gerade/ungerade symmetry in the homonuclear diatomics, which makes the spectroscopic techniques with single-photon excitation inapplicable. On the other hand the demand for accurate data on both states, \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ and \mbox{a$^3\Sigma_{\mathrm{u}}^+$}, correlated to the lowest s + s asymptote of the alkalies, is high because of the very active research in the field of ultracold collisions on alkali species. The first spectroscopic observation of the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state in K$_2$ with partially resolved rotational structure was reported in Ref.~\cite{Li:90}. There, blue fluorescence to the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state was induced with the optical-optical double resonance (OODR) technique and resolved with a 0.85~m dual grating monochromator. The highest observed vibrational level of the ground triplet state in $^{39}$K$_2$ was v~=~17. In a further paper \cite{Zhao:96} the same group reported additional OODR measurements on the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state in order to resolve the problem that the derived potential curve of the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state crossed that of the \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ state taken from Ref.~\cite{Amiot:95}. The paper contains a few term energies of low vibrational levels of the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state with rotational quantum numbers other than those of the levels observed in \cite{Li:90}. The lowest atomic asymptote of K$_2$ was studied by Wang et al.~\cite{Wang:00} through two-color photoassociation spectroscopy of ultracold $^{39}$K atoms. In the range between 1500 and 4600 MHz below the asymptote a total of 12 term energies of near asymptotic levels with high triplet character were determined. Thus the analysis of the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state level structure presented in these three publications \cite{Li:90,Zhao:96,Wang:00} could be performed as single channel cases, ignoring the singlet-triplet mixing due to the hyperfine interactions with the \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ state. In another group of papers \cite{Loftus:02,Regal:03,Ticknor:04,Regal:04,Gaebler:07} s- and p-wave Feshbach resonances in $^{40}$K were measured and just recently s-wave Feshbach resonances in $^{39}$K were reported \cite{Errico:07}, followed by an application for successful Bose-Einstein condensation of $^{39}$K \cite{Roati:07}. Bose-Einstein condensation was observed earlier for the isotope $^{41}$K by Modugno et al. \cite{Modugno:01}, leading to an independent estimate of the triplet scattering length of that isotope. In their recent work Chu et al. \cite{Chu:05} examined a series of two-step two-color and two-photon single-color laser excitations in K$_2$ devoted to a study of its 2$^{3}\Pi_{g}$ state. Along with the main subject of their study, the authors also observed laser-induced fluorescence to the triplet a$^3\Sigma^+_{u}$ state. Unfortunately, this was done at low resolution and thus gave no additional spectroscopic data for a reliable characterization of the a$^3\Sigma^+_{u}$ state.
The purpose of our present study is to record high resolution spectra for the lowest triplet state in K$_2$ and to construct potential energy curves accurate enough to model cold collisions between two potassium atoms in the coupled system of \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ and \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ states. We will investigate the importance of the singlet-triplet mixing for relatively deeply bound and asymptotic vibrational levels, since our experience on the heavier alkali compounds has shown that by ignoring it one is not able to reproduce satisfactorily the whole set of experimental observations (see e.g. \cite{Pashov:05}). Finally, the presence of accurate experimental data for several potassium isotopes gives the opportunity to look for the possible breakdown of the Born-Oppenheimer approximation and, consequently, the widely used mass-scaling for cold collisions. \section{Experiment} \begin{figure*} \centering \epsfig{file=SPECTR_1.eps,width=0.69\linewidth} \caption{The fluorescence progression following the excitation to the ($v'$=6, $J'$=29) rovibrational level in the 2$^{3}\Pi_{g}$ state. The weak lines around the strongest ones are rotational satellites, caused by the presence of the buffer gas in the heat pipe.} \label{spectr} \end{figure*} The experimental setup for the production of K$_2$ molecules is similar to that described in our previous papers \cite{Pashov:05,Docenko:06}. A single-section heat-pipe was filled with about 10 g of potassium (natural isotopic composition) and heated to about 600 K. Ar was used as buffer gas at a pressure of about 1-2 mbar. For excitation of the potassium molecules we applied the laser lines listed in Ref.~\cite{Chu:05} and then used a Fourier-transform spectrometer to resolve the induced fluorescence to the a$^3\Sigma^+_{u}$ state with a typical resolution of 0.05 cm$^{-1}$. Two diode laser heads with an external grating cavity (DL 100 from Toptica) and the accompanying electronics were supplied with laser diodes delivering about 50 mW at 850 nm or 100 mW at 980 nm, respectively. The lasers were superimposed collinearly by a dichroic mirror and focused in the central part of the heat-pipe oven. The frequency of the lasers was controlled with a wavemeter (type Highfinesse WS7), which was calibrated against the He-Ne/I$_2$ frequency standard in our lab in Hannover. \subsection{Two-photon single-color excitations} For the two-photon transitions we applied only the 980 nm laser tuned to the frequencies of Table II and Table III of Ref.~\cite{Chu:05}. In order to increase the detected signal we applied a Doppler-free excitation scheme, since then all molecules, independent of their velocity class, contribute to the fluorescence intensity. The laser beam was back reflected and refocused after its first pass through the heat-pipe. When the laser frequency was tuned to the center of the two-photon transition, a narrow Doppler-free peak was observed on the Doppler-broadened pedestal, which stems from two-photon processes driven by a single laser beam direction. In this way we were able to increase the intensity of the induced fluorescence by about a factor of 5-7 and also eliminated the possible Doppler shift of the recorded fluorescence frequencies of the 2$^3\Pi_{g}\rightarrow$~a$^3\Sigma^+_{u}$ system.
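The Doppler-free gain rests on the standard first-order argument: a molecule with velocity component \(v\) along the beam axis sees the two counter-propagating photons shifted in opposite directions, so that for equal photon energies the two-photon resonance condition \begin{equation*} \omega\Big(1-\frac{v}{c}\Big)+\omega\Big(1+\frac{v}{c}\Big)=2\omega \end{equation*} is independent of \(v\) to first order, whereas absorption of two photons from the same beam direction retains the full shift \(2\omega v/c\) and produces the Doppler-broadened pedestal.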
From the whole list of transitions given in \cite{Chu:05} we registered strong discrete spectra to the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state for only 5 (10199.200 \mbox{cm$^{-1}$}, 10226.307 \mbox{cm$^{-1}$}, 10251.126 \mbox{cm$^{-1}$}, 10258.240 \mbox{cm$^{-1}$}, and 10291.740 \mbox{cm$^{-1}$}) out of the 28 that fell within the wavelength region we could cover in the present experiment. These 5 excitations gave sufficiently strong discrete fluorescence whereas the others mainly gave continuum fluorescence. We found a new two-photon excitation at 10210.991 \mbox{cm$^{-1}$}. While scanning the laser we frequently observed yellow-orange fluorescence also at other frequencies, but only in a few cases were we able to record discrete spectral lines. We believe that such fluorescence comes from bound-free transitions to the repulsive branch of the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state. In some spectra (excitations at 10210.991~\mbox{cm$^{-1}$}\ and 10199.200~\mbox{cm$^{-1}$}) we observed in addition to the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state fluorescence also fluorescence to the b$^{3}\Pi_{u}$ state. This could be helpful in a future analysis of the coupled system of b$^{3}\Pi_{u}$ and A$^{1}\Sigma_{u}$ states. \subsection{Two-step two-color excitations} The main body of experimental data comes from the two-step excitations, which were selected from Table I of Ref. \cite{Chu:05}. The signals in this case were much stronger than the two-photon ones, and higher levels of the 2$^{3}\Pi_{g}$ state were excited, which allowed longer progressions to the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state to be observed. A typical progression following the excitation to the ($v'$=6, $J'$=29) level in the 2$^{3}\Pi_{g}$ state and reaching up to $v''$=21 is shown in Fig.~\ref{spectr}. The mutual stability of both laser frequencies with respect to each other was somewhat critical. Therefore, we usually tuned first the 850~nm laser to the desired transition frequency of the first step (\mbox{X$^1\Sigma_{\mathrm{g}}^+$} - (A$^1\Sigma_{\mathrm{u}}^+\sim$ b$^3\Pi_{\mathrm{u}}$)), then stabilized the frequency of the 980~nm laser (the second step by the transition b$^3\Pi_{\mathrm{u}}$ - 2$^3\Pi_{\mathrm{g}}$) on the maximum of the yellow-orange fluorescence appearing due to double resonance. For the stabilization of the laser frequency in the second step, the current of the 980 nm laser was modulated and the error signal was created by lock-in detection on the modulation frequency. Finally, the frequency of the 850 nm laser was fine-tuned in order to maximize the fluorescence. During a typical scan of the Fourier spectrometer (about 20 min.) the stability of the 850 nm laser cavity was sufficient to keep its frequency to within a few tens of MHz without active stabilization. The second-step laser followed the slow drifts of the first one via the feedback loop used for stabilization. This setup was sufficient to ensure stable conditions during the recording with the Fourier-transform spectrometer. \section{Analysis} \label{analysis} Initially, our identification of the observed two-photon progressions was based on the data (transition frequencies and assignments) from Ref.~\cite{Chu:05} and the Dunham coefficients for the a state from Ref.~\cite{Li:90}.
After collecting several clear progressions we tried to fit a potential energy curve for the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state, but we found that the rotational numbering N, suggested with the help of the Dunham coefficients, was most likely incorrect at least for one of the transitions since it turned out to be impossible to describe these progressions with a single potential curve. The identification of the two-step processes in Ref.~\cite{Chu:05} is much more reliable; therefore we used it to establish the assignment of the transitions to the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state. With a potential curve fitted to only two such progressions (using the pointwise potential presentation from Ref.~\cite{ipaasen}) we were able to fix the rotational assignment also of the two-photon transitions. In Table~\ref{excit} we present the list of the assigned transitions and corresponding laser frequencies used in the present two-photon and two-step excitations. Most frequencies were reported already in Ref.~\cite{Chu:05} and the vibrational assignment of the levels of the 2$^3\Pi_g$ state follows this reference, but the rotational quantum numbers are reassigned. The excitation at 10210.991 \mbox{cm$^{-1}$}\ was detected in our study, and the vibrational numbering of the upper state is based on the Dunham coefficients of the 2$^3\Pi_g$ state reported in Ref.~\cite{Chu:05}. \begin{table} \fontsize{8pt}{12pt}\selectfont \caption{List of the assigned transitions excited by the laser frequencies used in the present experiment. In the first four columns the quantum numbers for the 2$^3\Pi_g$ and the \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ states are given, respectively. The vibrational assignment of the 2$^3\Pi_g$ levels is taken from Ref.\cite{Chu:05}, except for the last two-photon transition, which was detected only in this study. In the last column the laser frequencies for the two-photon (one value) and the two-step excitations (two values) are listed. The uncertainties are less than 0.010 $\mathrm{cm}^{-1}$.} \label{excit} \begin{tabular}{rr|rr|r} \hline $v'$ & $J'$ & $v''$ & $J''$ & Laser frequency ($\mbox{cm$^{-1}$}$) \\ \hline 0 & 53 & 13 & 53 & 10199.200 \\ 1 & 55 & 11 & 55 & 10291.740 \\ 2 & 74 & 12 & 72 & 10258.240 \\ 2 & 78 & 12 & 78 & 10251.125 \\ 5 & 62 & 15 & 62 & 10226.307 \\ 6 & 109 & 14 & 109 & 10210.991 \\ \hline 6 & 25 & 0 & 23 & 11641.184 + 10241.062 \\ 6 & 27 & 0 & 25 & 11644.249 + 10235.844 \\ 6 & 29 & 0 & 27 & 11643.657 + 10234.098 \\ 6 & 31 & 0 & 29 & 11643.237 + 10231.966 \\ 6 & 33 & 0 & 31 & 11642.946 + 10229.528 \\ 7 & 25 & 0 & 25 & 11644.248 + 10285.797 \\ 7 & 25 & 0 & 23 & 11641.185 + 10294.354 \\ 7 & 27 & 0 & 25 & 11644.249 + 10289.130 \\ 7 & 31 & 0 & 31 & 11642.948 + 10278.712 \\ 8 & 25 & 0 & 23 & 11641.185 + 10347.338 \\ 8 & 27 & 0 & 27 & 11643.657 + 10336.777 \\ 8 & 29 & 0 & 27 & 11643.657 + 10340.363 \\ \hline \end{tabular} \end{table} We estimated the experimental uncertainty of the Fourier-transform data conservatively to be 0.005 \mbox{cm$^{-1}$}\ from the applied resolution of 0.05 \mbox{cm$^{-1}$}. However, the dimensionless standard deviation of the preliminary potential fit, being about 0.5, suggests that the primary uncertainty is somewhat overestimated. The majority of the observed transitions were from the most abundant isotopic combination $^{39}$K$^{39}$K. Only in one spectrum (the two-photon excitation at 10251.125 \mbox{cm$^{-1}$}) did we also find lines from $^{39}$K$^{41}$K.
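Such isotopologue assignments can be cross-checked through the mass scaling of the vibrational structure, which within the Born-Oppenheimer approximation goes as \(\mu^{-1/2}\) in the reduced mass \(\mu\); a minimal sketch of this bookkeeping (the atomic masses are the standard values, rounded here to three decimals):
\begin{verbatim}
# Approximate atomic masses of 39K and 41K in atomic mass units.
m39, m41 = 38.964, 40.962

def mu(m1, m2):
    """Reduced mass of a diatomic molecule."""
    return m1 * m2 / (m1 + m2)

# Vibrational spacings scale as mu**(-1/2) between isotopologues:
scale = (mu(m39, m39) / mu(m39, m41)) ** 0.5
print(scale)  # about 0.988, i.e. 39K41K spacings are roughly 1.2% smaller
\end{verbatim}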
The data field of all observed rotational and vibrational quantum numbers is given in Fig.~\ref{dataset}. The full list of excitation frequencies, their new assignments, and the observed progressions containing 639 transitions to 238 levels of the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state can be found in the supplementary material \cite{sup}. \begin{figure} \centering \epsfig{file=datafield_1.eps,width=0.99\linewidth} \caption{The range of vibrational and rotational quantum numbers $v''$ and $N''$ of the energy levels of the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state, observed in the present study.} \label{dataset} \end{figure} \section{Coupled channels treatment} \label{CC} The initial pointwise potential for the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state derived in the section above is based only on the spectroscopic data of our experiment. As a second step of our analysis we also included the progressions to the \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ state measured by Amiot et al. \cite{Amiot:95} and fitted the complete data set to two potentials having the same long range behavior determined by the dispersion coefficients C$_6$, C$_8$ and C$_{10}$ and opposite exchange terms. The fit applied the analytic representation as described in our recent work on KRb \cite{Pashov:07}. In order to fix the absolute position of the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state with respect to the \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ state, we need a common origin with respect to which the energies of levels of both these states are known. As such origins we used the term energies of the upper 2$^3\Pi$ state levels involved in the two-step and the two-photon processes. The transition energy to the corresponding \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ state levels is given by the sum of the two laser frequencies, whereas the transition frequencies to the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state levels were measured directly by the FTS. For an easy understanding and a full definition of the parameters contained in later tables we repeat the relevant formulas of the analytic potential representation. The representation of the potentials is split into three regions: the repulsive wall (R$<$R$_{inn}$), the asymptotic region (R$>$R$_{out}$), and the intermediate region in between. The analytic form of each potential in the intermediate range is described by a finite power expansion with a nonlinear variable function $\xi$ of internuclear separation R: \begin{equation} \label{xv} \xi(R)=\frac{R - R_m}{R + b\,R_m} \end{equation} \begin{equation} \label{uanal} \mbox{U}_{\mathrm {IR}}(R)=\sum_{i=0}^{n}a_i\,\xi(R)^i \end{equation} \noindent where the \{a$_i$\} are fitting parameters and $b$ and $R_m$ are chosen during the transformation process from the pointwise representation to the analytic form of equation (\ref{uanal}); $R_m$ is close to the value of the equilibrium separation. The potential is extrapolated for R $< \mbox{R}_{inn}$ with: \begin{equation} \label{rep} \mbox{U}_{\mathrm {SR}}(R)= A + B/R^{N_s} \end{equation} \noindent by adjusting the $A$ and $B$ parameters to get a continuous transition at $\mbox{R}_{inn}$; N$_s$ was 12 and 6 for \mbox{X$^1\Sigma_{\mathrm{g}}^+$}~and \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ states, respectively.
For large internuclear distances (R $> \mbox{R}_{out}$) we adopted the standard long range form of molecular potentials: \begin{equation} \label{lrexp} U_{\mathrm {LR}}(R)=U_{\infty}-C_6/R^6-C_8/R^8-C_{10}/R^{10}\pm E_{\mathrm{exch}} \end{equation} \noindent where the exchange contribution is given by \begin{equation} \label{exch} E_{\mathrm{exch}}=A_{\mathrm{ex}} R^\gamma \exp(-\beta R) \end{equation} with U$_{\infty}$ set to zero to fix the energy reference. These potentials were applied in a coupled channels calculation including the hyperfine parameters and the electronic and nuclear g-factors of the potassium atoms \cite{Arimondo} and the magnetic spin-spin coupling of the two atomic doublet states. The full Hamiltonian was already described in several publications, e.g. in Ref. \cite{Mies:00,Laue:02}. The Feshbach resonances reported in \cite{Regal:03,Regal:04,Gaebler:07,Errico:07} and the two-color photoassociation data from Ref. \cite{Wang:00} were included in the fit using the published error limits to determine the weighting. These data give information on asymptotic bound levels of the two isotopomers $^{39}$K$_2$ and $^{40}$K$_2$ and, especially the Feshbach resonances, on the singlet/triplet coupling, while the levels from photoassociation work turned out to be mainly of triplet character. For the Feshbach resonances on $^{40}$K$_2$ we selected the results from Ref. \cite{Regal:04,Gaebler:07} because these are the most precise ones and should be closely related to the two-body collision process, while those from Ref. \cite{Regal:03} could be influenced by three-body effects, as studied in Ref. \cite{Smirne:07} for Rb. The fit was performed iteratively. First, the fit of the asymptotic levels of the photoassociation spectroscopy and of the magnetic fields of the Feshbach resonances varies only the lowest order dispersion term and the exchange term, keeping all other parameters fixed for a preliminary potential representation. For the second fit step the preliminary results were used to calculate the binding energies of those levels to which the Feshbach resonances and the photoassociation levels correlate for the uncoupled case. These calculated energies with their quantum numbers, derived directly from the calculations, were then added as data points to the data field for the full potential fit and a new fit, now for all free parameters of the three regions of each potential, was performed. The procedure is iterated twice to reach convergence. The standard deviation of the coupled channels fit is $\sigma$ = 0.84 for the Feshbach resonances and the photoassociation data, and $\sigma$ = 0.82 for the full potential step for both potentials together, showing the good consistency of this approach. For the potential fit of the single channel case the standard deviation is 0.48 for the manifold of triplet levels alone and 0.82 for that of the singlet levels. At the end of the evaluation, the scattering calculations were extended by including d-waves for the s-wave resonances and f-waves for the p-wave resonance; the coupling is made possible by spin-spin interaction and higher order spin-orbit interaction. The influence of the higher partial waves turned out to be insignificant with respect to the experimental uncertainty of the magnetic field determination in these cases.
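For orientation, the piecewise representation of equations \eqref{xv}-\eqref{exch} can be evaluated as in the following sketch; the parameter names mirror Tables \ref{tabX} and \ref{taba} (whose values have to be supplied, with \(A_{\mathrm{ex}}\) taken with the sign listed there), and the code is meant as an illustration rather than as the fitting program itself:
\begin{verbatim}
import numpy as np

def potential(R, a, b, Rm, A, B, Ns, Rinn, Rout,
              C6, C8, C10, Aex, gamma, beta):
    """Analytic potential (cm^-1) at internuclear distances R (Angstrom)."""
    R = np.asarray(R, dtype=float)
    xi = (R - Rm) / (R + b * Rm)                    # nonlinear variable xi(R)
    U_IR = np.polynomial.polynomial.polyval(xi, a)  # intermediate range: sum_i a_i xi^i
    U_SR = A + B / R**Ns                            # short-range extrapolation
    U_LR = (-C6 / R**6 - C8 / R**8 - C10 / R**10    # long range, U_inf = 0,
            + Aex * R**gamma * np.exp(-beta * R))   # plus signed exchange term
    return np.where(R < Rinn, U_SR, np.where(R > Rout, U_LR, U_IR))
\end{verbatim}
With the tabulated parameters such a routine should reproduce, for instance, the \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ minimum of about $-4450.9$~$\mathrm{cm}^{-1}$\ near $R_e=3.924$~\AA.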
The potential results are listed in Table \ref{tabX}~for the \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ state and in Table \ref{taba}~for the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state; the number of digits given for the potential parameters has not been checked for being strictly necessary with respect to round-off errors, but is copied from the computer output. It is quite certain that fewer digits would be sufficient in several cases to reproduce all observations within experimental uncertainty. The spectroscopic data on \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ contain levels which have outer turning points up to 16.83~\AA, about 0.934 $\mathrm{cm}^{-1}$\ below the asymptote, whereas the levels derived from the Feshbach resonances start at outer turning points of 27.03~\AA\ and are 0.051 $\mathrm{cm}^{-1}$\ below the asymptote. This shows directly the remaining energy gap between the two data sets. The situation is similar for the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state: outer turning points from spectroscopic data up to 15.27~\AA~ and from photoassociation data starting from 23.00~\AA, and these correspond to energies about 1.716 $\mathrm{cm}^{-1}$\ and 0.136 $\mathrm{cm}^{-1}$\ below the asymptote. At such separations the exchange energy is already negligible compared to the long range contribution of the dispersion terms. The small energy gaps of about 1~$\mathrm{cm}^{-1}$~in both cases assure a reliable extrapolation to the dissociation energy. One should also note here that the derived dispersion coefficients agree closely with the theoretical values reported by Derevianko et al. \cite{Derevianko:99,Porsev:03}: C$_8$ and C$_{10}$ agree within the digits shown in Ref. \cite{Porsev:03}, and for C$_6$ the present value is larger by twice the error given in Ref. \cite{Derevianko:99}. The hyperfine structure of $^{39}$K$_2$ is the largest one relevant to our spectroscopic observations; isotopomers with $^{40}$K were not detected in our spectroscopic investigation because of low natural abundance. The total hyperfine structure of a single rotational state of \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ spans 923 MHz with the widest spacing between adjacent levels of about 150 MHz. Thus no hyperfine structure could be resolved within the resolution of our spectra. We checked with coupled channels calculations that the general turnover from ``pure'' singlet/triplet character to mixed spin states begins for binding energies smaller than 7.0 GHz or 0.23 $\mathrm{cm}^{-1}$; thus in our spectroscopic data set only accidental local perturbations by closely spaced singlet-triplet levels could give observable energy shifts of a singlet and a triplet group. We did not find any within the present data set. \begin{table} \fontsize{8pt}{13pt}\selectfont \caption{Parameters of the analytic representation of the \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ state potential. The energy reference is the dissociation asymptote. Parameters with $^\ast$ are set for continuous extrapolation of the potential.
} \label{tabX}
\begin{tabular*}{1.0\columnwidth}{@{\extracolsep{\fill}}|lr|}
\hline
\multicolumn{2}{|c|}{$R < R_\mathrm{inn}=$ 2.870 \AA} \\
\hline
$A^\ast$ & -0.265443197$\times 10^{4}$ $\mathrm{cm}^{-1}$ \\
$B^\ast$ & 0.820372803$\times 10^{9}$ $\mathrm{cm}^{-1}$ \AA$^{12}$ \\
\hline
\multicolumn{2}{|c|}{$R_\mathrm{inn} \leq R \leq R_\mathrm{out}=$ 12.000 \AA} \\
\hline
$b$ & $-0.40$ \\
$R_\mathrm{m}$ & 3.92436437 \AA \\
$a_{0}$ & -4450.906205 $\mathrm{cm}^{-1}$\\
$a_{1}$ & 0.70355350020116 $\mathrm{cm}^{-1}$\\
$a_{2}$ & 0.13671174694653$\times 10^{5}$ $\mathrm{cm}^{-1}$\\
$a_{3}$ & 0.10750698806556$\times 10^{5}$ $\mathrm{cm}^{-1}$\\
$a_{4}$ & -0.20932329414778$\times 10^{4}$ $\mathrm{cm}^{-1}$\\
$a_{5}$ & -0.19384823376156$\times 10^{5}$ $\mathrm{cm}^{-1}$\\
$a_{6}$ & -0.49209429682855$\times 10^{5}$ $\mathrm{cm}^{-1}$\\
$a_{7}$ & 0.11026750296026$\times 10^{6}$ $\mathrm{cm}^{-1}$\\
$a_{8}$ & 0.72867383247088$\times 10^{6}$ $\mathrm{cm}^{-1}$\\
$a_{9}$ & -0.29310771189374$\times 10^{7}$ $\mathrm{cm}^{-1}$\\
$a_{10}$ & -0.12407064957537$\times 10^{8}$ $\mathrm{cm}^{-1}$\\
$a_{11}$ & 0.40333954923169$\times 10^{8}$ $\mathrm{cm}^{-1}$\\
$a_{12}$ & 0.13229846082365$\times 10^{9}$ $\mathrm{cm}^{-1}$\\
$a_{13}$ & -0.37617672560621$\times 10^{9}$ $\mathrm{cm}^{-1}$\\
$a_{14}$ & -0.95250412147591$\times 10^{9}$ $\mathrm{cm}^{-1}$\\
$a_{15}$ & 0.24655585672079$\times 10^{10}$ $\mathrm{cm}^{-1}$\\
$a_{16}$ & 0.47848258035225$\times 10^{10}$ $\mathrm{cm}^{-1}$\\
$a_{17}$ & -0.11582132128030$\times 10^{11}$ $\mathrm{cm}^{-1}$\\
$a_{18}$ & -0.17022518278642$\times 10^{11}$ $\mathrm{cm}^{-1}$\\
$a_{19}$ & 0.39469335089283$\times 10^{11}$ $\mathrm{cm}^{-1}$\\
$a_{20}$ & 0.43141949807984$\times 10^{11}$ $\mathrm{cm}^{-1}$\\
$a_{21}$ & -0.97616955371081$\times 10^{11}$ $\mathrm{cm}^{-1}$\\
$a_{22}$ & -0.77417530660299$\times 10^{11}$ $\mathrm{cm}^{-1}$\\
$a_{23}$ & 0.17314133620597$\times 10^{12}$ $\mathrm{cm}^{-1}$\\
$a_{24}$ & 0.96118849014390$\times 10^{11}$ $\mathrm{cm}^{-1}$\\
$a_{25}$ & -0.21425463052972$\times 10^{12}$ $\mathrm{cm}^{-1}$\\
$a_{26}$ & -0.78513081744374$\times 10^{11}$ $\mathrm{cm}^{-1}$\\
$a_{27}$ & 0.17539493137145$\times 10^{12}$ $\mathrm{cm}^{-1}$\\
$a_{28}$ & 0.37939637130987$\times 10^{11}$ $\mathrm{cm}^{-1}$\\
$a_{29}$ & -0.85271868544557$\times 10^{11}$ $\mathrm{cm}^{-1}$\\
$a_{30}$ & -0.82123528497789$\times 10^{10}$ $\mathrm{cm}^{-1}$\\
$a_{31}$ & 0.18626451763727$\times 10^{11}$ $\mathrm{cm}^{-1}$\\
\hline
\multicolumn{2}{|c|}{$R_\mathrm{out} < R$}\\
\hline
${U_\infty}$ & 0.0 $\mathrm{cm}^{-1}$ \\
${C_6}$ & 0.1889676057$\times 10^{8}$ $\mathrm{cm}^{-1}$\AA$^6$ \\
${C_{8}}$ & 0.5527948928$\times 10^{9}$ $\mathrm{cm}^{-1}$\AA$^8$ \\
${C_{10}}$ & 0.2185553504$\times 10^{11}$ $\mathrm{cm}^{-1}$\AA$^{10}$ \\
${A_{ex}}$ & 0.21698263$\times 10^{5}$ $\mathrm{cm}^{-1}$\AA$^{-\gamma}$ \\
${\gamma}$ & 5.19500 \\
${\beta}$ & 2.13539 \AA$^{-1}$ \\
\hline
\multicolumn{2}{|c|}{Derived constants:} \\
\hline
\multicolumn{2}{|l|}{equilibrium distance:\hspace{2.2cm} $R_e^X$= 3.92436(5) \AA} \\
\multicolumn{2}{|l|}{electronic term energy:\hspace{1.6cm} $T_e^X$= -4450.906(50) $\mathrm{cm}^{-1}$}\\
\hline
\end{tabular*}
\end{table}
\begin{table} \fontsize{8pt}{13pt}\selectfont \caption{Parameters of the analytic representation of the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state potential. The energy reference is the dissociation asymptote. Parameters with $^\ast$ are set for continuous extrapolation of the potential.
} \label{taba}
\begin{tabular*}{1.0\columnwidth}{@{\extracolsep{\fill}}|lr|}
\hline
\multicolumn{2}{|c|}{$R < R_\mathrm{inn}=$ 4.750 \AA} \\
\hline
$A^\ast$ & -0.559417167$\times 10^{3}$ $\mathrm{cm}^{-1}$ \\
$B^\ast$ & 0.6432888245$\times 10^{7}$ $\mathrm{cm}^{-1}$ \AA$^{6}$ \\
\hline
\multicolumn{2}{|c|}{$R_\mathrm{inn} \leq R \leq R_\mathrm{out}=$ 12.000 \AA} \\
\hline
$b$ & $-0.300$ \\
$R_\mathrm{m}$ & 5.73392370 \AA \\
$a_{0}$ & -255.016965 $\mathrm{cm}^{-1}$\\
$a_{1}$ & -0.44746842073489 $\mathrm{cm}^{-1}$\\
$a_{2}$ & 0.20951803151410$\times 10^{4}$ $\mathrm{cm}^{-1}$\\
$a_{3}$ & -0.17131183698021$\times 10^{4}$ $\mathrm{cm}^{-1}$\\
$a_{4}$ & -0.17772657861768$\times 10^{4}$ $\mathrm{cm}^{-1}$\\
$a_{5}$ & 0.29413668239428$\times 10^{4}$ $\mathrm{cm}^{-1}$\\
$a_{6}$ & -0.20171041930434$\times 10^{5}$ $\mathrm{cm}^{-1}$\\
$a_{7}$ & -0.35711976066048$\times 10^{5}$ $\mathrm{cm}^{-1}$\\
$a_{8}$ & 0.59856336996119$\times 10^{6}$ $\mathrm{cm}^{-1}$\\
$a_{9}$ & -0.71043946542935$\times 10^{6}$ $\mathrm{cm}^{-1}$\\
$a_{10}$ & -0.61713401161663$\times 10^{7}$ $\mathrm{cm}^{-1}$\\
$a_{11}$ & 0.19365677976135$\times 10^{8}$ $\mathrm{cm}^{-1}$\\
$a_{12}$ & 0.67930464983208$\times 10^{7}$ $\mathrm{cm}^{-1}$\\
$a_{13}$ & -0.12020038974090$\times 10^{9}$ $\mathrm{cm}^{-1}$\\
$a_{14}$ & 0.21603950703685$\times 10^{9}$ $\mathrm{cm}^{-1}$\\
$a_{15}$ & -0.63530871042880$\times 10^{8}$ $\mathrm{cm}^{-1}$\\
$a_{16}$ & -0.52391336483017$\times 10^{9}$ $\mathrm{cm}^{-1}$\\
$a_{17}$ & 0.15913325190081$\times 10^{10}$ $\mathrm{cm}^{-1}$\\
$a_{18}$ & -0.24792577649852$\times 10^{10}$ $\mathrm{cm}^{-1}$\\
$a_{19}$ & 0.20325982754798$\times 10^{10}$ $\mathrm{cm}^{-1}$\\
$a_{20}$ & -0.68043793785293$\times 10^{9}$ $\mathrm{cm}^{-1}$\\
\hline
\multicolumn{2}{|c|}{$R_\mathrm{out} < R$}\\
\hline
${U_\infty}$ & 0.0 $\mathrm{cm}^{-1}$ \\
${C_6}$ & 0.1889676057$\times 10^{8}$ $\mathrm{cm}^{-1}$\AA$^6$ \\
${C_{8}}$ & 0.5527948928$\times 10^{9}$ $\mathrm{cm}^{-1}$\AA$^8$ \\
${C_{10}}$ & 0.2185553504$\times 10^{11}$ $\mathrm{cm}^{-1}$\AA$^{10}$ \\
${A_{ex}}$ & -0.21698263$\times 10^{5}$ $\mathrm{cm}^{-1}$\AA$^{-\gamma}$ \\
${\gamma}$ & 5.19500 \\
${\beta}$ & 2.13539 \AA$^{-1}$ \\
\hline
\multicolumn{2}{|c|}{Derived constants:} \\
\hline
\multicolumn{2}{|l|}{equilibrium distance:\hspace{2.2cm} $R_e^a$= 5.7344(1) \AA} \\
\multicolumn{2}{|l|}{electronic term energy:\hspace{1.6cm} $T_e^a$= -255.017(50) $\mathrm{cm}^{-1}$}\\
\hline
\end{tabular*}
\end{table}
\section{Discussion and conclusion} \subsection{Potentials and dissociation energies} The potentials determined in the present work describe the spectroscopic observations and the results from cold collisions within the experimental accuracy. Only the very few data obtained by \cite{Li:90,Zhao:96} from fluorescence progressions using a grating spectrometer show deviations beyond the reported accuracy. The standard deviations of these series, derived with the help of the new potentials, are 0.70 $\mathrm{cm}^{-1}$~and 0.37~$\mathrm{cm}^{-1}$, respectively, while the reported experimental accuracies are 0.17~$\mathrm{cm}^{-1}$\ and 0.05~$\mathrm{cm}^{-1}$. We obtain an averaged shift between the two series of these reports of 2.54~$\mathrm{cm}^{-1}$, which is close to the shift derived in \cite{Zhao:96} and interpreted in that paper as a calibration difference between the two grating instruments. Thus only the unusually large scatter in both series remains unexplained. Trials of reassigning these spectra in the N and v quantum numbers remained unsuccessful.
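To aid reimplementation, the following Python sketch assembles the three regions of each potential from the tabulated parameters. The short range wall $A+B/R^{n}$ (with $n=12$ for the X state and $n=6$ for the a state, as suggested by the units of $B^\ast$) and the mid range power expansion in $\xi=(R-R_\mathrm{m})/(R+b\,R_\mathrm{m})$ are the forms we assume for the representation used in this work; they must be checked against the definitions given with the potential representation before use.
\begin{verbatim}
# Sketch: piecewise potential from Tables tabX/taba (units: cm^-1, Angstrom).
# ASSUMED forms (check against the representation defined in the text):
#   R < R_inn          : U = A + B / R**n       (n = 12 for X, 6 for a)
#   R_inn <= R <= R_out: U = sum_i a_i * xi**i, xi = (R - Rm)/(R + b*Rm)
#   R > R_out          : Eqs. (lrexp)/(exch), cf. the long range sketch above.
import math

def potential(R, *, A, B, n, b, Rm, a, Rinn, Rout,
              C6, C8, C10, A_ex, gamma, beta, U_inf=0.0):
    if R < Rinn:
        return A + B / R**n
    if R <= Rout:
        xi = (R - Rm) / (R + b * Rm)
        return sum(ai * xi**i for i, ai in enumerate(a))
    disp = C6 / R**6 + C8 / R**8 + C10 / R**10
    return U_inf - disp - A_ex * R**gamma * math.exp(-beta * R)

# Illustration only: a truncated coefficient list; the full a_0..a_31 of
# Table tabX are required to reproduce the X state potential.
print(potential(4.0, A=-0.265443197e4, B=0.820372803e9, n=12, b=-0.40,
                Rm=3.92436437, a=[-4450.906205, 0.70355350020116],
                Rinn=2.870, Rout=12.0, C6=0.1889676057e8,
                C8=0.5527948928e9, C10=0.2185553504e11,
                A_ex=0.21698263e5, gamma=5.195, beta=2.13539))
\end{verbatim}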
From the potentials of the ground states one can read off the dissociation energies D$_e$: 4450.906(50) $\mathrm{cm}^{-1}$\ for the \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ state and 255.017(50) $\mathrm{cm}^{-1}$\ for the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state. Zhao et al. \cite{Zhao:96} reported values for these states of 4450.674(72) $\mathrm{cm}^{-1}$\ and 252.74(12) $\mathrm{cm}^{-1}$, respectively. For the \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ state both values almost agree, but for the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state a clear discrepancy is found, which is certainly related to the calibration problem and the surprisingly large scatter of the results from the grating spectrographs. Because of the large new body of data on the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}~state, with an accuracy better than 0.005~$\mathrm{cm}^{-1}$, we clearly recommend adopting the new result. Because the derived position of the potential minimum depends in principle on the mathematical representation of the potential curve, and this dependence might show up at the present level of accuracy, we prefer to give the dissociation energy with respect to an observable bound level, e.g. v=0, J=0, conventionally denoted D$_0$; this value is then isotope dependent. For the main isotopomer $^{39}$K$_2$ we obtain D$_0= 4404.816(50)$~$\mathrm{cm}^{-1}$\ for the singlet state and D$_0=244.523(50)$~$\mathrm{cm}^{-1}$\ for the triplet state; these values are better suited for comparison with future studies of the expected high level of accuracy. Recently, high resolution molecular beam spectroscopy of asymptotic levels of the state A$^1\Sigma^+_u$ was reported by our group \cite{Falke:06}. With the help of these data a very reliable value of D$_0$ of the \mbox{X$^1\Sigma_{\mathrm{g}}^+$}~state of the main isotopomer $^{39}$K$_2$ was derived, namely 4404.808(4) $\mathrm{cm}^{-1}$, which agrees with the new value above and is an order of magnitude more precise than the present, completely independently derived value. The good agreement between the two experimental results confirms the conclusion drawn in the paragraph above. Furthermore, incorporating the precise dissociation energy in the fit as a data point for the level v=0, J=0 of \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ with respect to the dissociation limit allows us to reduce the error limit of the dissociation energy of the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state significantly, because we measured the relative position of the triplet and singlet level schemes in our two-photon and two-step investigations, as given in Tab. \ref{excit}. This results in D$_e= 255.017(10)$~$\mathrm{cm}^{-1}$\ or D$_0=244.523(10)$~$\mathrm{cm}^{-1}$\ for $^{39}$K$_2$. \subsection{Cold collisions and Feshbach resonances} In the data evaluation three different isotopomers of potassium are included, namely $^{39}$K$_2$, $^{40}$K$_2$, and $^{39}$K$^{41}$K. Thus it might be possible to get a first answer as to whether mass scaling is applicable in the case of potassium at the present level of accuracy. At first glance the obtained standard deviations are below 1.0 (see section \ref{CC}), so the evaluation is within the reported experimental accuracies.
However, looking more closely at the deviations of the highly precise Feshbach resonances, where a magnetic field uncertainty of 0.05 G corresponds to an uncertainty on the order of 100 kHz on the frequency scale, the fit for $^{40}$K$_2$ is excellent, but the scatter of the deviations for $^{39}$K$_2$ is fairly large and, for such a small set of data, too often at the limit of the experimental accuracy. This is not so obvious in the published fit of Ref. \cite{Errico:07}, because the authors used the less precise data on $^{40}$K from Ref. \cite{Regal:03} in their fit. We also note that their standard deviation, given as a reduced $\chi ^2$ with the value 0.52, is probably in error; with their data we obtain 1.04. Also for some data of the few photoassociation measurements on the isotope $^{39}$K the deviations come close to the reported experimental uncertainties. Thus we recommend new experiments to improve the results from two-color photoassociation, which are presently reported with a 40 MHz uncertainty limit, and to extend the measurements of Feshbach resonances, for which we make predictions below (Fig. \ref{resonances}). In the same spirit we are presently preparing high precision molecular beam studies on potassium, as we performed some years ago for Na$_2$ \cite{Elbs:99,Samuelis:01}. All this together should give the proper limit for mass scaling, i.e. for the Born-Oppenheimer approximation, for the ground states of potassium atom pairs. For the excited asymptote s + p we have already reported experimental evidence of necessary corrections to the Born-Oppenheimer approximation \cite{Falke:07}. \begin{figure} \centering \epsfig{file=res_39_39.eps,width=0.99\linewidth} \epsfig{file=res_41_41.eps,width=0.99\linewidth} \epsfig{file=res_39_41.eps,width=0.99\linewidth} \caption{s-wave Feshbach resonances of (a) $^{39}$K, (b) $^{41}$K, and (c) their combination. In all cases the atomic angular momentum is f = 1; the projection m$_f$ on the space fixed axis is given in each graph. The unit of the scattering length is the Bohr radius $a_0 = 0.5292 \times 10^{-10}$~m.} \label{resonances} \end{figure} Assuming the validity of the Born-Oppenheimer approximation, i.e. mass scaling, the potentials allow reliable calculations of scattering lengths for the full manifold of isotopomers. The results are given in Table \ref{length}, along with the maximum vibrational quantum number within each potential for the lowest rotational state J=N=0. These results agree with the latest determinations from cold collision experiments; references are cited in the column of the isotopomer for whose derivation the experimental data were directly used. Other predictions exist in the literature and are close to the ones given in Table \ref{length}. The predictions of Table \ref{length} are homogeneous, because they are all derived from the same potential model. The slight difference in the triplet scattering length of $^{39}$K between Ref. \cite{Errico:07} and the present value originates from differences in the magnitude of the exchange force \cite{Simoni:07} used in the two approaches. The combination of spectroscopic and Feshbach resonance data results in the increased value given in the potential tables above. Ultracold potassium ensembles are often used for modeling condensed matter physics or for cooling processes in connection with other species.
To guide new experiments we calculated Feshbach resonances for the species $^{39}$K, $^{41}$K, and their combination at the lowest atomic asymptote m$_f$ = 1 + 1 and at the low field seeking asymptote within a MOT, m$_f$ = (-1) + (-1). The results are collected in Figure \ref{resonances} and show very promising structures at fairly low fields, which are easily accessible in experiments. The calculations were done with a step size of 1 Gauss; thus, for sharp resonances the curves do not extend to $\pm \infty$. Additionally, the scale of the vertical axis in Fig. \ref{resonances} does not extend to very large positive and negative scattering lengths; instead it is chosen to illustrate the behavior of the scattering length near the bottom of the resonance profile, which is important for fine tuning of the two-body interaction in experiments. In Ref. \cite{Errico:07} similar predictions are reported which are mainly consistent with ours. The reader should note that figures 5 and 6 are interchanged in \cite{Errico:07}. In Fig. \ref{resonances} (a), the broad resonance at about 400 G was used by Roati et al. \cite{Roati:07} to obtain Bose-Einstein condensation of $^{39}$K. For the two homonuclear cases, resonances are calculated at low fields around 40 to 50 G, which allow the tuning of the two-body interaction in convenient field ranges. The resonance structure in the heteronuclear case is especially rich and would allow a very careful study of the validity of mass scaling. Fig. \ref{resonances} gives only examples, but we present in this paper all information needed for further calculations of collision properties at other atomic asymptotes. From the present fitting results we conclude that predictions of Feshbach resonances with our model potentials should be accurate to better than 1 Gauss. For the effective spin-spin coupling, only the magnetic dipole-dipole contribution of the atom pair was needed in the analysis by \cite{Ticknor:04} of the splitting of the p-wave resonance in $^{40}$K. Further studies of such resonances, or two-color photoassociation spectroscopy with improved resolution, could yield the missing information for deriving the second order spin-orbit contribution to the effective spin-spin coupling, as was obtained for Na$_2$ by de Araujo et al. \cite{Fatemi:03}, giving further improvement in the prediction of collision properties. \begin{table} \fontsize{7pt}{12pt}\selectfont \caption{Scattering lengths (unit $a_0=0.5292$~\AA) and maximum vibrational quantum numbers within each potential for different isotopomers of potassium.
} \label{length}
\begin{tabular}{r|rr|rr|rr}
\hline
isotope & \multicolumn{2}{c|}{$a_{singlet}$} & \multicolumn{2}{c|}{$a_{triplet}$} & \multicolumn{2}{c}{ $v_{max}$} \\
 & others & present & others & present & singlet & triplet \\
\hline
$39/39$ & $138.90(15)$\cite{Errico:07} & $ 138.85$ & $-33.3(3)$\cite{Errico:07} & $ -33.15 $ & $85$ & $ 26 $ \\
$39/40$ & $ $ & $ -2.53$ & $ $ & $ -1926 $ & $85$ & $ 26 $ \\
$39/41$ & $ $ & $ 113.16$ & $ $ & $ 177.1 $ & $86$ & $ 27 $ \\
$40/40$ & $104.8 (4)$\cite{Loftus:02} & $ 104.45$ & $174 (7)$\cite{Loftus:02} & $ 169.6 $ & $86$ & $ 27 $ \\
$40/41$ & $ $ & $ -54.17 $ & $ $ & $ 97.26 $ & $86$ & $ 27 $ \\
$41/41$ & $ $ & $ 85.43 $ & $78 (20)$ \cite{Modugno:01} & $ 60.35 $ & $87$ & $ 27 $ \\
\hline
\end{tabular}
\end{table}
\subsection{Summary} From high resolution Fourier-transform spectroscopy, new spectroscopic information was obtained for the \mbox{a$^3\Sigma_{\mathrm{u}}^+$}\ state of K$_2$. It is combined with results from the most recent cold collision studies \cite{Regal:04,Gaebler:07,Errico:07} and photoassociation spectroscopy \cite{Wang:00} by other laboratories, and with previous spectroscopic results on the \mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ state \cite{Amiot:95}, to obtain potential curves for the coupled system (\mbox{X$^1\Sigma_{\mathrm{g}}^+$}\ -- \mbox{a$^3\Sigma_{\mathrm{u}}^+$}). With this overall homogeneous approach, the derived potentials, in connection with the atomic hyperfine parameters and magnetic g-factors \cite{Arimondo}, reliably model ultracold collisions. Corrections to the Born-Oppenheimer approximation, i.e. to the so-called mass scaling, are not yet seen within the experimental accuracy. New measurements are proposed from which a new limit on the validity of mass scaling could be derived and a deviation might become evident. Furthermore, new highly resolved measurements of deeply bound triplet states will give important information with which the assumption of using atomic parameters for describing the molecular hyperfine splitting can be checked. \section{Acknowledgments} The work is supported by the DFG through SFB 407 and GRK 665. A.P. acknowledges partial support from the Bulgarian National Science Fund Grants MUF 1506/05 and VUF 202/06 and from Sofia University through grants 72/2006 and 21/2007.
\section{Introduction} Automatic skin disease detection would be valuable for both patients and doctors, and there has been success in applying deep supervised learning and CNNs to the field of dermatology \cite{esteva2017dermatologist}. These models have a large number of parameters and require large-scale labeled datasets for the different kinds of diseases. Nevertheless, human beings seem to be able to detect an abnormal skin lesion even if they are not trained, provided that they have enough experience with what a healthy mole looks like. Making a machine exhibit this behavior is interesting in itself, and it also provides practical advantages. By observing only normal skin image data, the algorithm is able to generalize to multiple diseases or even rare diseases, which saves time and money in data collection. Motivated by these aspects, we focus on the problem of unsupervised anomaly detection for skin disease. Doing unsupervised learning over the space of images is challenging because of the curse of dimensionality, but recent developments in deep generative models can address this issue. \\ There are two related families of models, the Generative Adversarial Network (GAN) and the Variational Autoencoder (VAE). Both VAE and GAN have been applied to anomaly detection \cite{kiran2018overview}. \cite{an2015variational} proposes using a direct ``reconstruction probability'' $E_{q(z|x)}\left[p(x|z)\right]$ for detection and shows that a VAE outperforms a PCA baseline on the MNIST dataset. \cite{chen2018unsupervised} applies an adversarial autoencoder to the unsupervised detection of lesions in brain MRI and improves the detection AUC on the BRATS challenge dataset. Our major contribution is not to propose any fundamentally new method, but to emphasize the potential usefulness of deep generative models in dermatology. We investigate VAE-based methods instead of GANs for the following reasons: 1) Even with recent tricks like the gradient penalty, GAN training is still unstable and highly dynamic. By contrast, VAE training is more stable and therefore more suitable as a proof of concept. 2) Most GAN-based methods require training an additional network which maps from the image space to the noise space in order to obtain the reconstruction \cite{kiran2018overview}, but it is unclear what the theoretical justification of this additional network is. On the contrary, the VAE has a well-defined mathematical framework and is therefore more interpretable. \section{Methods} We first give a brief introduction to the background of VAEs and generative models. Then we propose different ways to use a trained VAE for anomaly detection. \subsection{Variational Autoencoder} VAE can be viewed as a directed probabilistic graphical model with the joint distribution defined as $p(\bm{x},\bm{z};\bm{\theta})=p(\bm{z})p(\bm{x}|\bm{z};\bm{\theta})$, where $\bm{x}\in\mathbb{R}^N$ is the data, $\bm{z}\in\mathbb{R}^M$ is the latent variable and $p(\bm{z})$ is the prior. We choose the prior to be $\mathcal{N}(0, I)$ in this work. When the true posterior $p(\bm{z}|\bm{x})$ is intractable, one can use a parametric distribution $q(\bm{z}|\bm{x};\bm{\phi})$ to approximate the posterior.
Then, since direct maximum likelihood estimation is intractable, one maximizes the evidence lower bound: \begin{equation} \log p(\bm{x}) \geq - KL(q(\bm{z}|\bm{x};\bm{\phi})||p(\bm{z})) + E_{\bm{z}\sim q(\bm{z}|\bm{x};\bm{\phi})}\left[\log p(\bm{x}|\bm{z};\bm{\theta}) \right] \label{eqn:elbo} \end{equation} We choose $q(\bm{z}|\bm{x};\bm{\phi})=\mathcal{N}(\bm{\mu}_{enc}(\bm{x};\bm{\phi}),\mathrm{diag}(\bm{\sigma}_{enc}^{2}(\bm{x};\bm{\phi})))$ to be a Gaussian distribution with diagonal covariance, where $\bm{\mu}_{enc}, \bm{\sigma}_{enc}^{2}$ are the outputs of a neural network. Then, by the reparameterization trick, the evidence lower bound becomes \begin{equation} \log p(\bm{x}) \geq E_{\bm{\epsilon}\sim \mathcal{N}(0, I)}\left[\log p(\bm{x}|\bm{z};\bm{\theta}) \right]- KL(q(\bm{z}|\bm{x};\bm{\phi})||p(\bm{z})) \label{eqn:elbo_repara} \end{equation} where $\bm{z} = \bm{\mu}_{enc}(\bm{x};\bm{\phi}) + \bm{\epsilon} \bm{\sigma}_{enc}(\bm{x};\bm{\phi})$. Eqn. (\ref{eqn:elbo_repara}) is differentiable w.r.t. both $\bm{\theta}$ and $\bm{\phi}$, so the model can be trained end to end. In this paper we choose $p(\bm{x}|\bm{z};\bm{\theta})=\mathcal{N}(\bm{\mu}_{dec}(\bm{z};\bm{\theta}), \sigma^{2}I)$ where $\sigma$ is pre-determined. Then maximizing Eqn. (\ref{eqn:elbo_repara}) is equivalent to minimizing $$ E_{\bm{\epsilon}\sim \mathcal{N}(0, I)}\left[ \frac{(\bm{x} - \bm{\mu}_{dec}(\bm{z}))^T(\bm{x} - \bm{\mu}_{dec}(\bm{z}))}{2\sigma^2} \right] + KL(q(\bm{z}|\bm{x};\bm{\phi})||p(\bm{z})) $$ One can observe that the role of $\sigma$ here is just to adjust the relative weight between the reconstruction term and the KL term; as a result, the final loss function to be minimized is \begin{equation} E_{\bm{\epsilon}\sim \mathcal{N}(0, I)}\left[ (\bm{x} - \bm{\mu}_{dec}(\bm{z}))^T(\bm{x} - \bm{\mu}_{dec}(\bm{z})) \right] + \beta KL(q(\bm{z}|\bm{x};\bm{\phi})||p(\bm{z})) \label{eqn:objective} \end{equation} The resulting training objective can be viewed as a special case of $\beta$-VAE, but our derivation is not from an optimization perspective as in \cite{higgins2016beta}. \subsection{Anomaly Score} The degree of anomaly can be characterized by how likely $x$ is to appear under the distribution $p(x)$. Therefore computing the anomaly score is essentially estimating $s(x) = -\log p(x)$. Once we have a trained VAE, there are several ways to use it to generate an anomaly score $s(x)$ for a new image $x$. \subsubsection{VAE Based Score} One choice is to use the negative of Eqn. (\ref{eqn:elbo}) as an anomaly score, that is \begin{equation} s_{vae}(x) = KL(q(z|x)||p(z)) - \frac{1}{L}\sum_{i=1}^L \log p(x|z_i) \label{eqn:s_vae} \end{equation} where $z_i\sim q(z | x)$. If $s_{vae}(x)$ is large, then $x$ has a high loss and thus is more likely to be an outlier. Since the loss decomposes into a reconstruction term and a KL term, we can define the corresponding anomaly scores: \begin{equation} s_{vae}^{kl} = KL(q(z|x)||p(z)) \label{eqn:s_vaekl} \end{equation} \begin{equation} s_{vae}^{reconst} = - \frac{1}{L}\sum_{i=1}^L \log p(x|z_i) \label{eqn:s_vaer} \end{equation} The motivation of the decomposition is to investigate how useful each term in the VAE loss is for anomaly detection.
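To make these scores concrete, the following PyTorch sketch computes $s_{vae}$, $s_{vae}^{kl}$ and $s_{vae}^{reconst}$ of Eqns. (\ref{eqn:s_vae})--(\ref{eqn:s_vaer}) for a batch of inputs. The tiny linear modules are hypothetical stand-ins for the convolutional encoder and decoder described later; the reconstruction score is computed only up to an additive constant, which does not affect the ranking of inputs.
\begin{verbatim}
# Sketch: VAE anomaly scores for a Gaussian decoder with fixed sigma.
# enc_mu/enc_logvar/dec_mu are placeholders for the real networks.
import torch

D, M, L, sigma = 16, 4, 15, 1.0  # data dim, latent dim, samples, decoder std
enc_mu, enc_logvar = torch.nn.Linear(D, M), torch.nn.Linear(D, M)
dec_mu = torch.nn.Linear(M, D)

def vae_scores(x):
    mu, logvar = enc_mu(x), enc_logvar(x)
    # closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), per sample
    s_kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=1)
    # Monte Carlo estimate of -E_q[log p(x|z)], L reparameterized samples,
    # dropping the additive normalization constant of the Gaussian
    s_rec = torch.zeros(x.shape[0])
    for _ in range(L):
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        s_rec += ((x - dec_mu(z)) ** 2).sum(dim=1) / (2 * sigma**2)
    s_rec /= L
    return s_kl + s_rec, s_kl, s_rec  # s_vae, s_vae^kl, s_vae^reconst

with torch.no_grad():
    s, s_kl, s_rec = vae_scores(torch.randn(8, D))
print(s.shape)  # torch.Size([8])
\end{verbatim}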
\subsubsection{Importance Weighted Autoencoder (IWAE) Based Score} The Importance Weighted Autoencoder \cite{burda2015importance} proposes a tighter lower bound on $\log p(x)$, namely \begin{equation} \log p(x) \geq E_{z_1,...,z_K \sim q(z|x)}\left[ \log \frac{1}{K}\sum_{i=1}^K \frac{p(x|z_i)p(z_i)}{q(z_i|x)} \right] \label{eqn:iwae_elbo} \end{equation} When $K=1$, we recover the ELBO used by the VAE. When $K$ becomes larger, it is proved in \cite{burda2015importance} that Eqn. (\ref{eqn:iwae_elbo}) becomes a tighter bound than Eqn. (\ref{eqn:elbo}), resulting in more accurate inference. Similarly, we can use the negative of Eqn. (\ref{eqn:iwae_elbo}) to compute the anomaly score as \begin{equation} s_{iwae}(x) = -\log \left(\frac{1}{L}\sum_{i=1}^L \frac{p(x|z_{i})p(z_{i})}{q(z_{i}|x)}\right) \label{eqn:s_iwae} \end{equation} where $z_{i} \sim q(z|x)$. The corresponding KL score and reconstruction score are \begin{equation} s_{iwae}^{kl}(x) = -\log \left(\frac{1}{L}\sum_{i=1}^L \frac{p(z_{i})}{q(z_{i}|x)}\right) \label{eqn:s_iwaekl} \end{equation} \begin{equation} s_{iwae}^{reconst}(x) = -\log \left(\frac{1}{L}\sum_{i=1}^L p(x|z_{i})\right) \label{eqn:s_iwaer} \end{equation} Although it is unclear whether a tighter lower bound estimate helps with outlier detection, we introduce these scores for the sake of comparison. \section{Experiment} \subsection{Model Architecture} We use an architecture similar to DCGAN \cite{Radford2015umsupervised}. For the encoder, we avoid using a linear layer to produce the mean and log-variance, and instead use two separate convolution layers. This architecture is fully convolutional, and the number of convolution blocks depends on the input image size. In our implementation the image size is 128, which makes the encoder consist of 5 convolutional blocks and the decoder of 5 deconvolutional blocks. ADAM is used as the optimizer with default settings. Hyperparameters are set as below. \begin{itemize} \item{$\beta$ (weight for the KL term): 0.01} \item{learning rate: 1e-4} \item{$L$ (number of samples for calculating scores): 15} \item{batch size: 32} \item{training epochs: 40} \item{latent dimension: 300} \end{itemize} \subsection{Dataset and Preprocessing} We use the ISIC2018 Challenge dataset (Task 3) \cite{codella2018skin,DVN/DBW86T_2018}, which contains images from 7 diseases. Detailed dataset information can be found in Table \ref{table:isic_dataset}. For training the VAE, we use 6369 images as the training set and 336 as the validation set. For anomaly detection, we select 250 images from the validation set and 100 images from the other diseases. We normalize our data to the range from $-1$ to $1$ and resize each image to size $128\times 128$.
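As an illustration of the architecture described above, a minimal PyTorch sketch of the fully convolutional encoder is given below: five stride-2 convolutional blocks map a $3\times128\times128$ image to a $4\times4$ feature map, and two separate convolution layers output the posterior mean and log-variance of the 300-dimensional latent code. The channel widths, normalization, and activation choices are plausible DCGAN-style defaults, not necessarily the exact values of our implementation.
\begin{verbatim}
# Sketch of the DCGAN-style fully convolutional encoder (Section 3.1).
import torch
import torch.nn as nn

def block(c_in, c_out):
    # stride-2 convolution halves the spatial resolution
    return nn.Sequential(nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
                         nn.BatchNorm2d(c_out), nn.LeakyReLU(0.2))

class Encoder(nn.Module):
    def __init__(self, latent=300, ch=64):
        super().__init__()
        self.features = nn.Sequential(block(3, ch), block(ch, 2 * ch),
                                      block(2 * ch, 4 * ch),
                                      block(4 * ch, 8 * ch),
                                      block(8 * ch, 8 * ch))  # 128 -> 4
        self.to_mu = nn.Conv2d(8 * ch, latent, 4)       # 4x4 -> 1x1
        self.to_logvar = nn.Conv2d(8 * ch, latent, 4)   # 4x4 -> 1x1

    def forward(self, x):
        h = self.features(x)
        return self.to_mu(h).flatten(1), self.to_logvar(h).flatten(1)

mu, logvar = Encoder()(torch.randn(2, 3, 128, 128))
print(mu.shape, logvar.shape)  # torch.Size([2, 300]) twice
\end{verbatim}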
\begin{table}[] \centering \begin{tabular}{ l c c c c c c c } \hline Disease & MEL & NV & BCC & AKIEC & BKL & DF & VASC \\ \hline \# Images & 1113 & 6705 & 514 & 327 & 1099 & 115 & 142 \\ \hline \\ \end{tabular} \caption{ISIC2018 Challenge Task 3 Dataset} \label{table:isic_dataset} \end{table} \begin{table}[] \centering \begin{tabular}{lccccccc} & AKIEC & BCC & BKL & DF & MEL & VASC & All Disease \\ \hline $s_{vae}^{reconst}$ & \textbf{0.872} & \textbf{0.803} & 0.792 & \textbf{0.682} & 0.862 & \textbf{0.662} & \textbf{0.779} \\ $s_{iwae}^{reconst}$ & 0.871 & 0.802 & \textbf{0.793} & 0.678 & \textbf{0.864} & 0.657 & 0.777 \\ \hline $s_{vae}^{kl}$ & 0.441 & 0.454 & 0.472 & 0.398 & 0.690 & 0.487 & 0.491 \\ $s_{iwae}^{kl}$ & 0.406 & 0.431 & 0.441 & 0.383 & 0.677 & 0.477 & 0.469 \\ \hline $s_{vae}$ & 0.864 & 0.795 & 0.783 & 0.671 & 0.861 & 0.651 & 0.771 \\ $s_{iwae}$ & 0.864 & 0.795 & 0.784 & 0.670 & 0.861 & 0.648 & 0.771 \\ \hline \\ \end{tabular} \caption{AUC ROC results of disease detection. For each column $x$, we show the AUC of the different anomaly scores when $x$ is the abnormal class. The last column tests against all diseases. Results are averages over 5 runs.} \label{table:auc_results} \end{table} The AUC results are summarized in Table \ref{table:auc_results}. Our best results are obtained by the reconstruction scores, with an overall AUC of 0.77. In addition, the disease detection AUC for AKIEC and MEL reaches 0.87 and 0.86 respectively, even though the model has never seen a single image from these two diseases before. We notice that the KL score is not very discriminative between normal and abnormal data. This is caused by using a small $\beta=0.01$ for the KL term, so the model essentially ignores the KL loss during training. We also try a larger $\beta=1$, but it results in poorer AUC. We also try an even smaller $\beta=0.001$, but it causes some numerical instability and the improvement is not significant. These results imply that the current prior is not expressive enough: enforcing the approximate posterior $q(z|x)$ to be close to the prior $p(z)$ hurts the model's expressiveness, which leads to worse AUC performance. We also find that the IWAE score variants make little difference compared with the VAE variants, which suggests that even though the bound is theoretically tighter \cite{burda2015importance}, its practical benefit for anomaly detection might not be large. A sample of reconstruction images is shown in Figure \ref{fig: reconstruction}. \indent We tried to compare our method with traditional baselines like PCA or Kernel-PCA for anomaly detection, but our image size ($3\times128\times128$) is far too large for these methods to be applied without resorting to feature engineering. This also demonstrates the advantage of using a VAE to cope with the curse of dimensionality in anomaly detection. \begin{figure} \centering {\includegraphics[width=.45\linewidth]{orig.png}} {\includegraphics[width=.45\linewidth]{reconst.png}} \caption{\small{A non-cherry-picked reconstruction result on the validation set. \textit{left:} original images. \textit{right:} reconstruction images.} } \label{fig: reconstruction} \end{figure} \section{Future Work} Based on our current experimental results, there are several future research directions worth pursuing. \subsection{Improve VAE} As shown above, our VAE faces a performance bottleneck because of the constraint to match the posterior with a simple prior.
One potential improvement would be to add a more expressive decoder like PixelVAE \cite{gulrajani2016pixelvae}. PixelVAE uses an expressive autoregressive structure for the decoder, which decouples the lower level features from the higher level semantics. When the latent variable is left to model only the higher level features, the simple Gaussian prior might be sufficient. From the reconstruction results we find that the model still outputs blurry images. This could be improved by using a more flexible posterior family or by performing hierarchical variational inference \cite{sonderby2016ladder}.\\ \subsection{Improve Detection Methods} In this work we have not fully explored the ways to use a VAE for anomaly detection; we simply use different outputs of the VAE to compute the scores. One could fit a probability distribution (e.g. a Gamma distribution) to the distribution of normal scores and use standard statistical tests for anomaly detection. The latents of the VAE can also be used for anomaly detection in several ways. One could train a one-class SVM using the latents as features. The latent space can also be used as a metric space, so that the distance between two images is defined by their inner product in the latent space. This enables one to develop a method similar to metric learning based anomaly detection \cite{du2014discriminative}. \section{Conclusion} In this paper we apply the Variational Autoencoder (VAE) to the problem of anomaly detection in dermatology. The VAE based anomaly detection method has a solid theoretical framework and is able to cope with high dimensional data such as raw image pixels. Our objective is a special case of $\beta$-VAE, but obtained from a different derivation. We experiment on the ISIC 2018 Challenge Task 3 dataset \cite{codella2018skin, DVN/DBW86T_2018}. By training only on normal data (nevus), the model is able to detect abnormal diseases with 0.77 AUC. In particular, the model is able to detect AKIEC and MEL with 0.87 and 0.86 AUC respectively. This is, to our knowledge, the first work applying the Variational Autoencoder to dermatology, and we argue that although there have been successful applications of supervised learning and CNN based methods in dermatology, applying deep unsupervised learning in dermatology is a fruitful yet not fully explored research direction. \bibliographystyle{splncs04}
\section{Introduction} The growth-optimal portfolio (GOP) is a portfolio which has a maximal expected growth rate (namely, log-return) over any time horizon. As the GOP can usually be traced back to the work of \cite{K1956}, it is also called the ``Kelly criterion''. The GOP can also be obtained by maximizing log-utility, which has an even longer history. As the name implies, it can be used to maximize the expected growth rate of a portfolio. Indeed, it performs in some sense better than any other significantly different strategy as the time horizon increases. Over the past half century, many papers have investigated the GOP. In theory and practice, the GOP has wide applications in a large number of areas, including portfolio theory, utility theory, game theory, information theory, asset pricing theory, and insurance theory. For instance, to name a few recent studies in the literature, \cite{A2000} study asset pricing problems in incomplete markets; \cite{R2004} considers optimal investment problems; \cite{T2000} applies it to casino games. We want to emphasize that the GOP ignores the relevant risk when maximizing the expected growth rate of a portfolio. It is the seminal work of \cite{markowitz1952portfolio} that takes the trade-off between the portfolio return and its risk into consideration when an investor chooses a portfolio. \cite{markowitz1952portfolio} suggests using variance to measure the risk. Since then, mean-variance theory has become one of the most dominant financial theories in the realm of portfolio choice. Besides variance, alternative risk measures have been proposed to measure the risk for portfolio choice. Research along this line includes \cite{rockafellar2000optimization}, \cite{campbell2001optimal}, \cite{rockafellar2002conditional}, \cite{alexander2002economic}, \cite{alexander2004comparison}, \cite{jin2006note}, and \cite{adam2008spectral}, where the authors study single-period mean-risk portfolio selection with various risk measures, such as semi-variance, value-at-risk (VaR), expected shortfall (ES), and spectral risk measures. There are also numerous extensions of mean-risk portfolio optimization from the single-period setting to the dynamic, continuous-time one \citep[e.g.][]{zhou2000continuous,bielecki2005continuous,jin2005continuous,basak2010dynamic,he2015dynamic,zhou2017dynamic,gao2017dynamic,dai2021dynamic,he2021mean}. In particular, \cite{he2015dynamic} study continuous-time mean-risk portfolio choice when risk is measured by the weighted Value-at-Risk (WVaR), but their results are rather pessimistic. The WVaR is a quantile-based risk measure that generalizes VaR and ES, two popular risk measures in quantitative risk management. They find that, when using WVaR (including VaR and ES) on terminal wealth to measure portfolio risk, the mean-risk model is prone to be ill-posed (i.e., the optimal value is infinite), and the investor tends to take infinite leverage on risky assets, leading to extremely risk-taking behaviors. Furthermore, the optimal risk is independent of the expected terminal target, so the efficient frontier is a vertical line on the mean-WVaR plane. Their results suggest that the mean-WVaR model is an improper model of the trade-off between return and risk when WVaR is applied to terminal wealth. This paper proposes a continuous-time portfolio choice model with a mean-WVaR criterion for portfolio log-returns, as opposed to the mean-WVaR criterion for terminal wealth in \cite{he2015dynamic}. The motivation is two-fold.
First, the mean-risk criterion for log-returns is consistent with Markowitz's original idea of using the mean and variance of portfolio returns. We consider a growth-optimal problem with risk control. Moreover, many single-period mean-risk models in the literature use risk measures on portfolio returns \citep[e.g.][]{alexander2002economic,alexander2004comparison,alexander2006does,jin2006note,adam2008spectral}. However, there is a discrepancy between single-period and dynamic portfolio choice models, as the latter typically consider mean-risk criteria for terminal wealth; an exception is \cite{dai2021dynamic}, who study continuous-time mean-variance portfolio choice for portfolio log-returns. We similarly adopt the mean-WVaR criterion for log-returns, which naturally generalize single-period returns when returns are continuously compounded. Second, such a criterion conquers the ill-posedness of the model in \cite{he2015dynamic}. As noted in \cite{he2015dynamic}, the mean-WVaR criterion for terminal wealth is essentially a linear program in the quantile function of terminal wealth. This linearity, in turn, forces the quantile function of the optimal terminal wealth to sit at ``corner points'', leading to extreme risk-taking behaviors. By contrast, our mean-WVaR criterion for log-returns is not linear in the quantile function of terminal wealth, and thus resolves the ill-posedness. In a continuous-time, complete market framework, we solve the mean-WVaR portfolio choice problem for log-returns with the help of the so-called quantile formulation, developed in a series of papers \citep[e.g.][]{schied2004neyman,carlier2006law,jin2008behavioral,he2011portfolio,carlier2011optimal,xia2016arrow,xu2016note}. When risk is measured by a general WVaR risk measure, we characterize the optimal terminal wealth up to the convex envelope of a certain function through a detailed and involved analysis. When risk is measured by VaR or ES, two special cases of WVaR, we derive analytical expressions for the optimal terminal wealth and portfolio policy. The optimal terminal wealth turns out to be closely related to the growth-optimal portfolio: the investor classifies market scenarios into different states, in which the terminal payoff is a constant, a multiple, or a fraction of the growth-optimal portfolio. Furthermore, we obtain the efficient frontier, which is a concave curve connecting the minimum-risk (min-risk) portfolio with the growth-optimal portfolio, as opposed to the vertical line in \cite{he2015dynamic}. Our model allows for a meaningful characterization of the risk-return trade-off and may serve as a guideline for investors to set a reasonable investment target. Although \cite{he2015dynamic} provides a critique of using WVaR to measure risk, our results advocate that it is more appropriate to apply WVaR, in particular VaR and ES, to portfolio log-returns instead of terminal wealth for dynamic portfolio choice. The rest of the paper is organized as follows. In Section \ref{sec:model}, we propose a mean-WVaR portfolio choice problem for portfolio log-returns. We solve the problem in Section \ref{sec:solution} by the quantile optimization method. Section \ref{sec:examples} presents optimal solutions and efficient frontiers when risk is measured by VaR or ES; some new financial insights and a comparison with the existing work are presented as well. Concluding remarks are given in Section \ref{sec:conclusion}. Appendix \ref{sec:A1} contains three useful lemmas.
All remaining proofs are given in Appendix \ref{sec:A2}. \section{Mean-WVaR portfolio choice model}\label{sec:model} \subsection{Financial market} Let $T>0$ be a given terminal time and $(\Omega,\mathcal{F}, \{ \mathcal{F}_t \}_{0 \le t \le T} ,\mathbb{P})$ be a filtered probability space, on which is defined a standard one-dimensional Brownian motion $\{ W_t \}_{0\le t\le T}$. It is assumed that $\mathcal{F}_t=\sigma \{ W_s, 0\le s\le t \}$, augmented by all $\mathbb{P}$-null sets, and that $\mathcal{F}=\mathcal{F}_T$ is $\mathbb{P}$-complete. We consider a Black-Scholes market in which there are a risk-free asset and a risky asset (called the stock). The risk-free asset pays a constant interest rate $r>0$ and the stock price $S$ follows a geometric Brownian motion $$\frac{dS_t}{S_t}=\mu dt+\sigma dW_t, $$ where $\mu$ and $\sigma$, the appreciation rate and volatility of the stock, are positive constants. There exists a unique positive state price density (pricing kernel) process $\xi$\footnote{With additional assumptions on $\xi_T$, our main results can be extended to a general complete market with stochastic investment opportunities.} satisfying \begin{equation}\label{eq:xi} \frac{d\xi_t}{\xi_t}=-r dt-\theta dW_t, \quad \xi_{0}=1, \end{equation} where $\theta=(\mu-r)/\sigma$ is the market price of risk in the economy. Therefore the market is complete. Consider an economic agent with an initial endowment $x>0$ facing the investment horizon $[0,T]$. The agent chooses a dynamic investment strategy $\pi_t$, which represents the dollar amount invested in the stock at time $t$. Assume that trading is continuous and self-financing, and that there are no transaction costs. The agent's wealth process $X_t$ then follows the stochastic differential equation \begin{equation}\label{eq:budget} dX_t=\left[ rX_t+(\mu-r) \pi_t \right] dt+\sigma \pi_t dW_t, ~ X_0=x. \end{equation} The portfolio process $\pi_t$ is called admissible if it is $\{ \mathcal{F}_t \}_{0 \le t \le T}$ progressively measurable with $\int_0^T \pi_t ^2 dt < \infty, a.s.$, and the corresponding terminal wealth satisfies $X_T \ge 0, a.s.$ Let $R_T$ be the continuously compounded return (log-return) over the horizon $[0,T]$, i.e., \begin{equation}\label{eq:log-return} R_T=\frac{1}{T} \ln \frac{X_T}{x}. \end{equation} By convention, we define $$\ln 0=\lim_{s \downarrow 0} \ln s=-\infty \mbox{ and } e^{-\infty}=\lim_{s \downarrow-\infty}e^s=0.$$ \subsection{Risk measure} We now introduce the risk measure that will be used in the portfolio choice model. In this paper, we focus on the weighted VaR (WVaR) risk measure proposed by \cite{he2015dynamic}, which generalizes Value-at-Risk (VaR) and Expected Shortfall (ES) and encompasses many well-known risk measures that are widely used in finance and actuarial science, such as spectral risk measures and distortion risk measures; see \cite{wei2018risk} for a review. For any $\mathcal{F}_T$-measurable random variable $X$, let $F_X$ denote its cumulative distribution function, and let $G_X$ denote its quantile function defined by \begin{equation*} G_X(z)=\inf \{x \in \mathbb{R} : F_X(x) > z \}=\sup \{x\in \mathbb{R} : F_X(x) \le z \}, ~z \in [0, 1), \end{equation*} with the convention $G_X(1)=\lim_{z \uparrow 1} G_X(z)$. The quantile function $G_X$ is non-decreasing and right-continuous with left limits (RCLL).
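In numerical work one usually handles $G_X$ through its empirical counterpart. The following Python sketch (sample size and distribution are illustrative only) estimates the quantile function from Monte Carlo samples; the risk measures defined next can then be evaluated directly on it.
\begin{verbatim}
# Sketch: empirical quantile function G_X from samples of X.
# G_X(z) = inf{x : F_X(x) > z}; on sorted samples this is (up to
# tie-breaking conventions) the floor(z*n)-th order statistic.
import numpy as np

def empirical_quantile(samples):
    xs = np.sort(np.asarray(samples))
    n = len(xs)
    return lambda z: xs[min(int(np.floor(z * n)), n - 1)]

rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.05, sigma=0.2, size=100_000)  # stand-in payoff
G = empirical_quantile(X)
print(G(0.05), G(0.5))  # e.g. -G(0.05) approximates the VaR at level 0.05
\end{verbatim}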
The WVaR risk measure for $X$ is defined as \begin{equation}\label{eq:wvar} \rho _{\Phi} (X)=-\int _{[0,1]} G_X(z) \Phi (dz), \end{equation} where $\Phi\in P[0,1]$ and $P[0,1]$ is the set of all probability measures on $[0,1]$. The WVaR is a law-invariant, comonotonic additive risk measure, and it covers many law-invariant coherent risk measures; see \cite{he2015dynamic} for a more detailed discussion. If $\Phi$ is the Dirac measure at $\alpha$, i.e., $\Phi (A)=\mathbf{1}_{\alpha \in A}$ for all $A \subset [0,1]$, then the corresponding WVaR measure becomes the VaR at $\alpha$; in other words, \begin{equation*} \rho _{\Phi} (X)=\text{VaR}_{\alpha}(X)=-G_{X}(\alpha). \end{equation*} If $\Phi$ admits the density $\phi (z)=\frac{1}{\alpha}\mathbf{1}_{z \le \alpha}, ~ \forall z \in [0,1]$, then the corresponding WVaR measure becomes the ES, i.e., \begin{equation*} \rho _{\Phi} (X)=\text{ES}_{\alpha} (X)=-\frac{1}{\alpha} \int_0^{\alpha} G_{X} (z)dz. \end{equation*} In the original paper \cite{he2015dynamic}, WVaR is applied to measure the risk of a portfolio's terminal wealth. In this paper, we propose to apply WVaR to the portfolio's log-return instead of its terminal wealth. Let $X_T$ be the terminal wealth of a portfolio and $R_T$ the log-return of $X_T$. By the monotonicity of the logarithm, the quantile function of $R_T$ is \begin{equation*} G_{R_T}(z)=\frac{1}{T} \ln \frac{ G_{X_T}(z)}{x}, ~ z \in [0,1]. \end{equation*} Therefore, the WVaR of $R_T$ can be expressed as \begin{equation}\label{eq:wvar log-return} \rho _{\Phi} (R_T)=-\int _{[0,1]} \frac{1}{T} \ln \frac{ G_{X_T}(z)}{x} \Phi (dz)=-\frac{1}{T} \int _{[0,1]} \ln G_{X_T}(z) \Phi (dz)+\frac{1}{T} \ln x. \end{equation} However, the extension from terminal wealth to log-return is not straightforward, as the integral in \eqref{eq:wvar log-return} may not be well-defined, since $X_{T}$ may take the value 0 with positive probability. Let \begin{equation*} \begin{aligned} \mathbb{G}=\Big\{G(\cdot) \colon [0, 1] \to [0,+\infty], ~ &G\text{ is nondecreasing and RCLL on [0,1], }\\ & \text{ left-continuous at } 1, \text{ and finite-valued on } [0,1)\Big\} \end{aligned} \end{equation*} be the set of quantile functions of all non-negative random variables, which includes the terminal wealths of all admissible portfolios. For any $G \in \mathbb{G}$ and $\Phi\in P[0,1]$, the integral $\int _{[0,1]} \ln G(z) \Phi (dz)$ is not well-defined if $G(s)=0$ for some $s \in[0,1]$ such that $\Phi ( [0,s] )>0$. Define \begin{equation*} \mathbb{G}_{\Phi}=\big\{G \in \mathbb{G} \colon G(s)>0 \text{ if } \Phi ( [0, s] )>0, ~ \forall s \in[0,1] \big\}. \end{equation*} We set \begin{equation}\label{eq:-infty integral} \int _{[0,1]} \ln G(z) \Phi (dz)=-\infty, ~ \forall G \in \mathbb{G} \setminus \mathbb{G}_{\Phi}. \end{equation} Intuitively, if the terminal wealth of a portfolio is $0$ (so that the log-return is $-\infty$) in some states, and the weighting measure $\Phi$ assigns non-zero weight to these states, then the WVaR of the log-return is set to $-\infty$. In particular, ${\mathbb{E}} [ R_T]=-\infty$ if $\mathbb{P} (X_T=0)>0$.\footnote{It is straightforward to verify ${\mathbb{E}} \left[ \max \left(R_T,0 \right) \right]<\infty$, given that $\xi_T$ is log-normally distributed. } \subsection{Portfolio choice model} We assume the agent chooses a dynamic portfolio strategy over the period $[0,T]$ to maximize the expected log-return while minimizing the risk of the portfolio's log-return.
The risk is evaluated by a WVaR risk measure $\rho _{\Phi}$ on the portfolio's log-return $R_T$. Specifically, we consider the following dynamic portfolio choice problem \begin{equation}\label{prob:original} \begin{aligned} \max _{\pi_t} ~ &~ \lambda {\mathbb{E}} [ R_T]-\rho _{\Phi} (R_T)\\ \text{subject to} ~ &~dX_t=\left[ rX_t+(\mu-r) \pi_t \right]dt+\sigma \pi_t dW_t, ~X_{T}\geq 0, ~ X_0=x,\\ &R_T=\frac{1}{T} \ln \frac{X_T}{x}, \end{aligned} \end{equation} where $\lambda \ge 0$ is a ``risk-tolerance'' parameter that reflects the investor's trade-off between return and risk. This is a stochastic control problem, but a nonstandard one (unlike those in \cite{yongzhou1999}) due to the presence of the nonlinear probability measure $\Phi$. In view of the standard martingale method, e.g., \cite{karatzas1998methods}, we can first solve the following static optimization problem\footnote{This formulation implies that the optimal log-return $R_{T}$ is independent of $x$. } \begin{equation}\label{prob:martingale} \begin{aligned} \max _{X_T \in \mathcal{F}_T} ~ &~ \lambda {\mathbb{E}} [ R_T]-\rho _{\Phi} (R_T) \\ \text{subject to} ~ &~{\mathbb{E}} [\xi_T X_T] \le x, ~X_{T}\geq 0, \\ &~R_T=\frac{1}{T} \ln \frac{X_T}{x}, \end{aligned} \end{equation} where $\xi_{T}$ is given by \eqref{eq:xi}. Then we apply backward stochastic control theory to derive the corresponding optimal portfolio strategy $\pi_{t}$. The optimization problem \eqref{prob:martingale} nests two special cases. \begin{description} \item[Case $\lambda=0$.] In this case the investor minimizes the risk without any consideration of the expected log-return, and solves the following minimum-risk problem \begin{equation}\label{prob:min risk} \begin{aligned} \min _{X_T \in \mathcal{F}_T} ~ &~ \rho _{\Phi} (R_T)\\ \text{subject to} ~ &~{\mathbb{E}} [\xi_T X_T] \le x, ~X_{T}\geq 0, \\ &~R_T=\frac{1}{T} \ln \frac{X_T}{x}. \end{aligned} \end{equation} The resulting portfolio is termed the min-risk portfolio. \item[Case $\lambda=\infty$.] In this case the investor maximizes the expected log-return without any consideration of the risk. This is the so-called growth-optimal problem \begin{equation}\label{prob:growth} \begin{aligned} \max _{X_T \in \mathcal{F}_T} ~ & {\mathbb{E}} [ R_T] \\ \text{subject to} ~ &{\mathbb{E}} [\xi_T X_T] \le x, ~X_{T}\geq 0, \\ &R_T=\frac{1}{T} \ln \frac{X_T}{x}. \end{aligned} \end{equation} The optimal solution to \eqref{prob:growth} is well known in the literature: the growth-optimal portfolio (the \cite{K1956} strategy) given by \begin{equation}\label{eq:growth} X_{\textrm{Kelly}}=\frac{x}{\xi_T}. \end{equation} The corresponding log-return is \begin{equation*} R_{\textrm{Kelly}}=\frac{1}{T} \ln \frac{X_{\textrm{Kelly}}}{x}=-\frac{1}{T} \ln \xi_T, \end{equation*} and its expected value is \begin{equation*} {\mathbb{E}} [R_{\textrm{Kelly}}]=-\frac{1}{T} {\mathbb{E}} [\ln \xi_T]=r+\frac{\theta^2}{2}. \end{equation*} \end{description} \section{Quantile formulation and optimal solution}\label{sec:solution} In this section, we solve the optimization problem \eqref{prob:martingale} for $0\le \lambda<\infty$. If $\Phi (\{ 1 \})>0$, then $\rho _{\Phi} (R_{T})=-\infty$. If $\Phi$ is the uniform measure on $[0,1]$, then $\rho _{\Phi} (R_T)=-{\mathbb{E}} [R_T]$ and the growth optimal portfolio \eqref{eq:growth} is optimal to \eqref{prob:martingale}. To exclude these trivial cases, we make the following assumption on $\Phi$ from now on.
\begin{Assumption}\label{assumption:phi} $\Phi (\{ 1 \})=0$ and $\Phi$ is not the uniform measure on $[0,1]$. \end{Assumption} The objective in \eqref{prob:martingale} is based on the quantile function of the log-return; thus, the standard convex duality method is not readily applicable. To overcome this difficulty, we employ the quantile formulation, developed in a series of papers including \cite{schied2004neyman}, \cite{carlier2006law}, \cite{jin2008behavioral}, \cite{he2011portfolio}, \cite{carlier2011optimal}, \cite{xia2016arrow}, and \cite{xu2016note}, to change the decision variable in \eqref{prob:martingale} from the terminal wealth $X_T$ to its quantile function. This allows us to recover the hidden convexity of the problem and solve it completely. We first show that the budget constraint in \eqref{prob:martingale} must bind at the optimum, since the objective improves with a higher level of initial wealth. \begin{lemma}\label{lemma:3.1} If $X_T^{*}$ is an optimal solution to the problem \eqref{prob:martingale}, then $ {\mathbb{E}} [\xi_T X_T^{*}]=x$. \end{lemma} \noindent All the proofs of our results are given in Appendix \ref{sec:A2}. Denote by $F_\xi$ and $G_\xi$ the distribution and quantile functions of $\xi_T$, respectively. With slight abuse of notation, we suppress the subscript $T$ when there is no need to emphasize the dependence on $T$. Since $\xi_{T}$ is log-normally distributed, both $F_\xi$ and $G_\xi$ are $C^{\infty}$ functions. The following lemma can be found in \cite{jin2008behavioral}. \begin{lemma}[\cite{jin2008behavioral}]\label{lemma:3.2} We have ${\mathbb{E}} \left[ \xi_T G_X \left(1-F_{\xi}(\xi_T) \right) \right] \le {\mathbb{E}}[\xi_T X_T]$ for any lower bounded random variable $X_T$ whose quantile function is $G_X$. Furthermore, if ${\mathbb{E}} [\xi_T G_X(1-F_{\xi}(\xi_T))] < \infty$, then the inequality becomes equality if and only if $X_T=G_X \left(1-F_{\xi}(\xi_T) \right), ~a.s.$ \end{lemma} From Lemmas \ref{lemma:3.1} and \ref{lemma:3.2}, we know that if $X_T$ is optimal to \eqref{prob:martingale}, then $X_T=G_X(1-F_{\xi}(\xi_T))$, where $G_X$ is the quantile function of $X_T$. Let $R_T$ be the log-return of $X_T$. We have \begin{equation*} {\mathbb{E}} [R_T]=\int_{[0,1)} \frac{1}{T} \ln \frac{ G_{X}(z)}{x} dz, \end{equation*} \begin{equation*} \rho _{\Phi} (R_T)=-\int _{[0,1)} \frac{1}{T} \ln \frac{ G_{X}(z)}{x} \Phi (dz), \end{equation*} and \begin{equation*} {\mathbb{E}}[\xi_T X_T]=\int_{[0,1)} G_X(z) G_{\xi} (1-z)dz. \end{equation*} Therefore, we can consider the following quantile formulation of \eqref{prob:martingale} \begin{equation}\label{prob:quantile} \begin{aligned} \max _{G \in \mathbb{G} } ~ & \lambda \int_{[0,1)} \frac{1}{T} \ln \frac{ G(z)}{x} dz+\int _{[0,1)} \frac{1}{T} \ln \frac{ G(z)}{x} \Phi (dz) \\ \text{subject to} ~ & \int_{[0,1)} G(z) G_{\xi} (1-z)dz=x, \end{aligned} \end{equation} where the decision variable $G$ is the quantile function of the terminal wealth. Once we obtain the optimal solution $G^{*}$ to \eqref{prob:quantile}, the optimal solution to \eqref{prob:martingale} is given by $$X_T^{*}=G^{*} \left( 1-F_{\xi}(\xi_T) \right).$$ Define \begin{equation} w(s)=\frac{ \int_{[0,s)} G_{\xi} (1-z)dz }{{\mathbb{E}} [\xi_T]}, ~ s \in [0,1]. \end{equation} Because $\xi_{T}$ is log-normally distributed, $w$ is a $C^{\infty}$ function with $w(0)=0$, $w(1)=1$, and $w'>0$, $w''<0$ on $(0,1)$.
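Since $\xi_T$ is log-normal, $w$ admits a closed form. Solving \eqref{eq:xi} explicitly gives $\ln \xi_T \sim \mathcal{N}(-(r+\theta^2/2)T,\, \theta^2 T)$, hence ${\mathbb{E}}[\xi_T]=e^{-rT}$, and substituting the log-normal quantile into the definition of $w$ and integrating yields $w(s)=N(N^{-1}(s)+\theta\sqrt{T})$, where $N$ denotes the standard normal distribution function. The following Python sketch (with illustrative parameter values) checks this identity against the defining integral:
\begin{verbatim}
# Sketch: closed form of w(s) for the log-normal pricing kernel, checked
# against the defining integral by numerical quadrature.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

r, theta, T = 0.02, 0.4, 1.0
m, s_xi = -(r + theta**2 / 2) * T, theta * np.sqrt(T)  # ln xi ~ N(m, s_xi^2)

G_xi = lambda q: np.exp(m + s_xi * norm.ppf(q))        # quantile of xi_T
E_xi = np.exp(-r * T)                                  # E[xi_T] = e^{-rT}

w_closed = lambda s: norm.cdf(norm.ppf(s) + s_xi)
w_quad = lambda s: quad(lambda z: G_xi(1 - z), 0, s, limit=200)[0] / E_xi

for s in (0.1, 0.5, 0.9):
    print(f"s = {s}: closed = {w_closed(s):.6f}, quadrature = {w_quad(s):.6f}")
\end{verbatim}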
Let $w^{-1}$ be the inverse function of $w$ and define $$H(s)=G \left( w^{-1} (s) \right), ~ s \in [0,1].$$ Then $w^{-1}$ is a $C^{\infty}$ function with $w^{-1}(0)=0$, $w^{-1}(1)=1$, and $(w^{-1})'>0$, $(w^{-1})''>0$ on $(0,1)$. It is easy to see that $G \in \mathbb{G}$ if and only if $H \in \mathbb{G}$, and $G \in \mathbb{G}_{\Phi}$ if and only if $H \in \mathbb{H}_{\Phi}$, where \begin{equation*} \mathbb{H}_{\Phi}=\Big\{H \in \mathbb{G} \colon H\left( w(s) \right)>0 \text{ if } \Phi ( [0,s] )>0, ~ \forall s \in[0,1] \Big\}. \end{equation*} In terms of the new notation, we have \begin{equation*} \int_{[0,1)} G(z) G_{\xi} (1-z)dz={\mathbb{E}} [\xi_T] \int_{[0,1)} H(s) ds, \end{equation*} \begin{equation*} \int_{[0,1)} \frac{1}{T} \ln \frac{ G(z)}{x} dz=\int_{[0,1)} \frac{1}{T} \ln \frac{ H(s)}{x} dw^{-1} (s) , \end{equation*} and \begin{equation*} \int _{[0,1)} \frac{1}{T} \ln \frac{ G(z)}{x} \Phi (dz)=\int _{[0,1)} \frac{1}{T} \ln \frac{ H(s)}{x} d \Phi ([0,w^{-1} (s)]). \end{equation*} Consequently, solving \eqref{prob:martingale} reduces to solving the following quantile optimization problem (after dropping constant terms) \begin{equation}\label{prob:quantile H} \begin{aligned} \max _{H \in \mathbb{G} } ~ &~ \lambda \int_{[0,1)} \ln H(s) dw^{-1} (s)+\int _{[0,1)} \ln H(s) d \Phi ([0,w^{-1} (s)]) \\ \text{subject to} ~ &~ \int_{[0,1)} H(s)ds=\frac{x}{{\mathbb{E}} [\xi_T]}. \end{aligned} \end{equation} This is a concave optimization problem, so it can be tackled by the Lagrange method. Define the Lagrangian \begin{equation*} L(H(\cdot) ; \lambda, \eta)=\lambda \int_{[0,1)} \ln H(s) dw^{-1} (s)+\int _{[0,1)} \ln H(s) d \Phi ([0,w^{-1} (s)])-\eta \int_{[0,1)}H (s) ds, \end{equation*} where $\eta > 0$ is a Lagrange multiplier chosen to fit the budget constraint in \eqref{prob:quantile H}. Define \begin{equation*} \varphi ( s ; \lambda)=\frac{ \Phi ([0,w^{-1} (s)])+\lambda w^{-1} (s)}{1+\lambda}, ~ s \in [0,1], \end{equation*} and its left-continuous version \begin{equation*} \varphi ( s-; \lambda)=\frac{ \Phi ([0,w^{-1} (s)))+\lambda w^{-1} (s)}{1+\lambda}, ~ s \in (0,1]. \end{equation*} We additionally set $\varphi ( 0-; \lambda)=0$. We then have \begin{equation*} L(H(\cdot) ; \lambda, \eta)=(1+\lambda) \int _{[0,1)} \ln H(s) d\varphi ( s ; \lambda)-\eta \int_{[0,1)} H (s) ds, \end{equation*} and we can consider the following optimization problem \begin{equation}\label{prob:Lagrangian} \max_{H \in \mathbb{G} }~ L(H(\cdot) ; \lambda, \eta). \end{equation} Inspired by \cite{rogers2009optimal}, \cite{xu2016note}, and \cite{wei2018risk}, we introduce $\delta (s; \lambda)$, the convex envelope of $\varphi ( s-; \lambda)$ on $[0,1]$, given by \begin{equation}\label{eq:concave envelope} \delta (s; \lambda)=\inf_{0 \le a \le s \le b \le 1} \frac{(b-s)\varphi (a-; \lambda)+(s-a)\varphi (b-; \lambda)}{b-a}, ~s \in [0,1]. \end{equation} The convex envelope $\delta (s; \lambda)$ is the largest convex function dominated by $\varphi ( s-; \lambda)$, and it is affine on the set $\big\{s \in (0,1) \colon \delta (s; \lambda) < \varphi ( s-; \lambda)\big\}.$ The following proposition presents the optimal solution to \eqref{prob:Lagrangian}. \begin{proposition}\label{prop:3.1} The optimal solution to \eqref{prob:Lagrangian} is given by $$H^{*} (s; \lambda , \eta)=\frac{1+\lambda}{\eta} \delta' (s; \lambda), ~ s \in [0,1],$$ where $ \delta' (s; \lambda)$ is the right derivative of $\delta (s; \lambda)$ with respect to $s$.
\end{proposition} We want to find a Lagrange multiplier $\eta$ such that $H^{*} (s; \lambda , \eta)$ satisfies the budget constraint in \eqref{prob:quantile H}. Clearly, \begin{equation*} \int_{[0,1)} H^{*} (s; \lambda , \eta)ds=\int_{[0,1)} \frac{1+\lambda}{\eta} \delta' (s; \lambda)ds=\frac{1+\lambda}{\eta}=\frac{x}{{\mathbb{E}} [\xi_T]}. \end{equation*} and consequently $$\eta=\frac{1+\lambda}{x}{\mathbb{E}} [\xi_T].$$ We are ready to state the optimal solution to \eqref{prob:quantile H}. \begin{proposition}\label{prop:3.2} The optimal solution to \eqref{prob:quantile H} is given by \begin{equation*} H^{*} \left(s; \lambda , \frac{1+\lambda}{x}{\mathbb{E}} [\xi_T] \right)=\frac{x}{{\mathbb{E}} [\xi_T]}\delta' (s; \lambda) . \end{equation*} \end{proposition} Finally, the optimal solution to \eqref{prob:martingale} is given by \begin{equation*} X^{*}_{T,\lambda}=H^{*} \left( w(1-F_{\xi}(\xi_T); \lambda , \frac{1+\lambda}{x}{\mathbb{E}} [\xi_T] \right)=\frac{x}{{\mathbb{E}} [\xi_T]}\delta' \left(w(1-F_{\xi}(\xi_T); \lambda \right). \end{equation*} In particular, we can obtain the min-risk portfolio by setting $\lambda=0$: \begin{equation*} X_{T,0}^{*}=\frac{x}{{\mathbb{E}} [\xi_T]} \delta' (w (1-F_{\xi}(\xi_T)); 0), \end{equation*} which solves \eqref{prob:min risk}. We summarize the main results of the paper in the following proposition. \begin{proposition}[Efficient portfolio]\label{prop:efficient} The efficient portfolio, i.e., the optimal solution to \eqref{prob:martingale} is \begin{equation*} X^{*}_{T,\lambda}=\frac{x}{{\mathbb{E}} [\xi_T]} \delta' (w (1-F_{\xi}(\xi_T)); \lambda ), \end{equation*} and the corresponding log-return is $$R^{*}_{T,\lambda}=\frac{1}{T} \ln \left( \frac{ \delta' (w (1-F_{\xi}(\xi_T)); \lambda ) }{{\mathbb{E}} [\xi_T]} \right).$$ In particular, the min-risk portfolio, i.e., the optimal solution to \eqref{prob:min risk} is \begin{equation*} X_{T,0}^{*}=\frac{x}{{\mathbb{E}} [\xi_T]} \delta' (w (1-F_{\xi}(\xi_T)); 0), \end{equation*} and the corresponding log-return is $$R^{*}_{T,0}=\frac{1}{T} \ln \left( \frac{ \delta' (w (1-F_{\xi}(\xi_T));0 ) }{{\mathbb{E}} [\xi_T]} \right).$$ \end{proposition} \section{Examples with explicit solution}\label{sec:examples} In this section, we present two examples to illustrate our general results. In particular, we consider the optimization problem \eqref{prob:martingale} when the WVaR risk measure is given by either VaR or ES, two popular risk measures. \subsection{Mean-VaR efficient portfolio} In this subsection, we specialize our setting to the mean-VaR optimization problem. In particular, we consider the optimization problem \eqref{prob:martingale} when the WVaR risk measure is given by the VaR at a confidence level $0<\alpha<1$, namely \begin{equation*} \rho _{\Phi} (X)=\text{VaR}_{\alpha}(X)=-G_{X}(\alpha). \end{equation*} In other words, $\Phi$ is given by the Dirac measure at $\alpha$. \begin{proposition}\label{prop:4.1} When $\Phi$ is given by the Dirac measure at $\alpha\in (0,1)$, we have the following assertions. \begin{description} \item[Case $\lambda=0$.] \begin{enumerate} \item The minimum-VaR (min-VaR) terminal wealth is \begin{equation*} X^{\text{VaR}}_{T,0}= \begin{cases} 0, ~ & \xi_T > \xi_{\alpha},\\ \underline{X}_{\text{VaR}} , ~ & \xi_T \le \xi_{\alpha}, \end{cases} \end{equation*} where \begin{equation*} \begin{aligned} \underline{X}_{\text{VaR}} &=\frac{x}{{\mathbb{E}} [\xi_T]} \cdot \frac{ 1}{ 1-w(\alpha)} ,\quad \xi_{\alpha}=G_{\xi} (1-\alpha). 
\end{aligned} \end{equation*} \item The optimal log-return is $$R^{\text{VaR}}_{T,0}=\frac{1}{T} \ln \frac{X^{\text{VaR}}_{T,0}}{x}.$$ \item The expected optimal log-return is ${\mathbb{E}} [R^{\text{VaR}}_{T,0}]=-\infty.$ \item The VaR of the optimal log-return is $$\text{VaR}_{\alpha}(R^{\text{VaR}}_{T,0})=\frac{1}{T} \ln \frac{\underline{X}_{\text{VaR}}}{x}.$$ \end{enumerate} \item[Case $0<\lambda<\infty$.] \begin{enumerate} \item The mean-VaR efficient terminal wealth is \begin{equation*} X^{\text{VaR}}_{T,\lambda}= \begin{cases} \frac{\lambda}{1+\lambda} \cdot \frac{x}{\xi_T} , ~ & \xi_T > \xi_{\alpha},\\ \underline{X}_{\text{VaR}} , ~ & \underline{\xi}_{\text{VaR}} < \xi_T \le \xi_{\alpha} ,\\ \frac{\lambda}{1+\lambda} \cdot \frac{x}{\xi_T} , ~ & \xi_T \le \underline{\xi}_{\text{VaR}}, \end{cases} \end{equation*} where \begin{equation*} \begin{aligned} \underline{X}_{\text{VaR}} &=\frac{\lambda}{1+\lambda} \cdot \frac{x}{ \underline{\xi}_{\text{VaR}}} ,\\ \xi_{\alpha} &=G_{\xi} (1-\alpha),\\ \underline{\xi}_{\text{VaR}} &=G_{\xi} (1-w^{-1}(s^{*}(\lambda) )) , \end{aligned} \end{equation*} and $s^{*}(\lambda)$ is given in Lemma \ref{lemma: f1}. \item The optimal log-return is $$R^{\text{VaR}}_{T,\lambda}=\frac{1}{T} \ln \frac{X^{\text{VaR}}_{T,\lambda}}{x} .$$ \item The VaR of the optimal log-return is $$\text{VaR}_{\alpha}(R^{\text{VaR}}_{T,\lambda})=\frac{1}{T} \ln \frac{ \underline{X}_{\text{VaR}} }{x} .$$ \end{enumerate} \end{description} \end{proposition} Figure \ref{figure:VaR wealth1} depicts the optimal terminal payoff of the min-VaR portfolio ($\lambda=0$), which resembles a digital option. Essentially, the investor invests all the money in a digital option that pays $\underline{X}_{\text{VaR}}$ in the good states of the market ($\xi < \xi_{\alpha}$) and 0 otherwise. The probability of winning the option depends solely on the confidence level of VaR and is given by ${\mathbb{P}} (\xi \le \xi_{\alpha})=1-\alpha$. \begin{figure}[H] \centering \centering \includegraphics[width=0.6\textwidth]{./Figures/VaR/VaR_min.eps} \caption{The Min-VaR Efficient Terminal Wealth ($\lambda=0$)}\label{figure:VaR wealth1} \end{figure} Figure \ref{figure:VaR wealth2} displays the optimal terminal payoff of the mean-VaR efficient portfolio ($0<\lambda<\infty$). The investor classifies market scenarios into three subsets: in the good states ($\xi \le \underline{\xi}_{\text{VaR}}$) and in the bad states ($\xi > \xi_{\alpha}$), the terminal payoff is a fraction ($\lambda/(1+\lambda)$) of the growth optimal portfolio; in the intermediate states ($\underline{\xi}_{\text{VaR}} < \xi \le \xi_{\alpha}$), the investor receives a constant payoff $\underline{X}_{\text{VaR}}$. Moreover, the terminal wealth has a jump discontinuity at $\xi=\xi_{\alpha}$ and the corresponding log-returns are always finite (but can be extremely large or small). \begin{figure}[H] \centering \includegraphics[width=0.6\textwidth]{./Figures/VaR/VaR_efficient.eps} \caption{The Mean-VaR Efficient Terminal Wealth ($0<\lambda<\infty$)}\label{figure:VaR wealth2} \end{figure} Figure \ref{figure:VaR efficient} plots the mean-VaR efficient frontiers for different confidence levels $\alpha$. The efficient frontier is a concave curve that connects the growth optimal portfolio (colored dots) with the min-VaR portfolio (not shown in the graph). The growth optimal portfolio has the highest expected log-return but also the highest VaR. 
By contrast, the min-VaR portfolio has the smallest expected log-return (negative infinity) but also the lowest VaR. Figure \ref{figure:VaR efficient} also displays a sensitivity analysis of the efficient frontier with respect to $\alpha$, the confidence level of VaR. As $\alpha$ increases, the efficient frontier shifts to the left: for a given level of the expected log-return, the VaR of the corresponding efficient portfolio decreases as $\alpha$ increases. \begin{figure}[H] \centering \includegraphics[width=0.6\textwidth]{./Figures/VaR/VaR_efficient_frontier.eps}\\ \caption{The Mean-VaR Efficient Frontier with $0<\lambda<\infty$, $T=1$, $r=0.05$, and $\theta=0.4$ }\label{figure:VaR efficient} \end{figure} As the optimal terminal wealth is known, we can solve for the optimal time-$t$ wealth and portfolio policy. \begin{corollary}\label{coro:4.1} We have the following assertions. \begin{description} \item[Case $\lambda=0$.] \begin{enumerate} \item The min-VaR efficient wealth at time $t$ is \begin{equation*} X^{\text{VaR}}_{t,0}=\underline{X}_{\text{VaR}} e^{-r (T-t)} N \left( d_2 ( t, \xi_t, \xi_{\alpha}) \right). \end{equation*} \item The optimal portfolio policy at time $t$ is \begin{equation*} \pi^{\text{VaR}}_{t,0}=\frac{\underline{X}_{\text{VaR}} e^{-r (T-t)} \nu \left( d_2 (t, \xi_t, \xi_{\alpha}) \right)}{ \sigma \sqrt{T-t} } . \end{equation*} \end{enumerate} \item[Case $0<\lambda<\infty$.] \begin{enumerate} \item The mean-VaR efficient wealth at time $t$ is \begin{equation*} \begin{aligned} X^{\text{VaR}}_{t,\lambda} =&\frac{\lambda}{1+\lambda} \cdot \frac{x}{\xi_t} \left( N \left(-d_1 (t, \xi_t, \xi_{\alpha}) \right)+N \left( d_1 (t, \xi_t, \underline{\xi}_{\text{VaR}} ) \right) \right) \\ &+\underline{X}_{\text{VaR}} e^{-r(T-t)} \left( N \left( d_2 (t, \xi_t, \xi_{\alpha}) \right)-N \left( d_2 (t, \xi_t, \underline{\xi}_{\text{VaR}} ) \right) \right). \end{aligned} \end{equation*} \item The optimal portfolio policy at time $t$ is \begin{equation*} \begin{aligned} \pi^{\text{VaR}}_{t,\lambda}=& \frac{\lambda}{1+\lambda} \cdot \frac{x}{\xi_t} \cdot \left( N\left(-d_1(t, \xi_t, \xi_{\alpha}) \right)+N\left(d_1(t, \xi_t, \underline{\xi}_{\text{VaR}}) \right) \right) \frac{\theta}{\sigma} \\ &+\frac{ e^{-r (T-t)} \nu \left( d_2 (t, \xi_t, \xi_{\alpha}) \right) }{\sigma \sqrt{T-t} } \cdot \left( \underline{X}_{\text{VaR}}-\frac{\lambda}{1+\lambda} \cdot \frac{x}{\xi_{\alpha}} \right). \end{aligned} \end{equation*} \end{enumerate} \end{description} Here and hereafter \begin{equation*} \begin{aligned} d_1 (t, \xi_t, y) &=\frac{\ln \frac{y}{\xi _t}+(r+\frac{\theta ^2 }{2} )(T-t) }{\theta \sqrt{T-t}}, \\ d_2 (t, \xi_t, y) &=d_1 (t, \xi_t, y)-\theta \sqrt{T-t}, \end{aligned} \end{equation*} and $N (\cdot)$ is the standard normal distribution function, and $\nu (\cdot)$ is the standard normal probability density function. \end{corollary} \subsection{Mean-ES efficient portfolio} In this subsection, we specialize our setting to the mean-ES optimization problem. In particular, we consider the optimization problem \eqref{prob:martingale} when the WVaR risk measure is given by the ES at a confidence level $0<\alpha<1$, namely \begin{equation*} \rho _{\Phi} (X)=\text{ES}_{\alpha} (X)=-\frac{1}{\alpha} \int_0^{\alpha} G_{X} (z)dz. \end{equation*} In other words, $\Phi$ admits a density $\phi (z)=\frac{1}{\alpha}\mathbf{1}_{z \le \alpha}$, for all $z \in [0,1]$. 
\begin{proposition}\label{prop:4.2} When $\Phi$ admits a density $\phi (z)=\frac{1}{\alpha}\mathbf{1}_{z \le \alpha}$ with $0<\alpha< 1$, we have the following assertions. \begin{description} \item[Case $\lambda=0$.] \begin{enumerate} \item The min-ES efficient terminal wealth is \begin{equation*} X^{\text{ES}}_{T,0}= \left \{ \begin{aligned} & \frac{x}{\alpha \xi_T} , ~ & \xi_T > \overline{\xi}_{\text{ES}},\\ & \underline{X}_{\text{ES}}, ~ & \xi_T \le \overline{\xi}_{\text{ES}}, \end{aligned} \right. \end{equation*} where \begin{equation*} \begin{aligned} \underline{X}_{\text{ES}} &=\frac{x}{\alpha \overline{\xi}_{\text{ES}} },\\ \overline{\xi}_{\text{ES}} &=G_{\xi} (1-w^{-1}(t_0)), \end{aligned} \end{equation*} and $t_0$ is given in Lemma \ref{lemma: f2}. \item The optimal log-return is $$R^{\text{ES}}_{T,0}=\frac{1}{T} \ln \frac{X^{\text{ES}}_{T,0}}{x}.$$ \item The ES of the optimal log-return is \begin{align*} \text{ES}_{\alpha}(R^{\text{ES}}_{T,0})&=\frac{1}{\alpha T} \left[ \ln \alpha \cdot N \left( - \frac{\ln \xi_{\alpha} + \left( r + \frac{\theta^2}{2} \right)T}{ \theta \sqrt{T}} \right) \right. \\ &\quad\;+ \ln \overline{\xi}_{\text{ES}} \cdot \left( N \left( \frac{\ln \overline{\xi}_{\text{ES}} + \left( r + \frac{\theta^2}{2} \right)T}{ \theta \sqrt{T}} \right) - N \left( \frac{\ln \xi_{\alpha} + \left( r + \frac{\theta^2}{2} \right)T}{ \theta \sqrt{T}} \right) \right) \\ &\quad\;\left. + \frac{\theta \sqrt{T}}{\sqrt{2 \pi}} e^{ - \frac{\left( \ln \overline{\xi}_{\text{ES}} + \left( r + \frac{\theta^2}{2} \right)T \right)^2}{2 \theta^2 T}} - \left( r + \frac{\theta^2}{2} \right) T N \left( - \frac{\ln \overline{\xi}_{\text{ES}} + \left( r + \frac{\theta^2}{2} \right)T}{ \theta \sqrt{T}} \right) \right] , \end{align*} where $\xi_{\alpha} =G_{\xi} (1-\alpha).$ \end{enumerate} \item[Case $0<\lambda<\infty$.] \begin{enumerate} \item The mean-ES efficient terminal wealth is \begin{equation*} X^{\text{ES}}_{T,\lambda}= \begin{cases} \frac{\frac{1}{\alpha}+\lambda}{1+\lambda} \cdot \frac{x}{\xi_T} , ~ & \xi_T > \overline{\xi}_{\text{ES}},\\ \underline{X}_{\text{ES}} , ~ & \underline{\xi}_{\text{ES}} < \xi_T \le \overline{\xi}_{\text{ES}} ,\\ \frac{\lambda}{1+\lambda} \cdot \frac{x}{\xi_T} , ~ & \xi_T \le \underline{\xi}_{\text{ES}}, \end{cases} \end{equation*} where \begin{equation*} \begin{aligned} \underline{X}_{\text{ES}} &=\frac{\frac{1}{\alpha}+\lambda}{1+\lambda} \cdot \frac{x}{\overline{\xi}_{\text{ES}} }=\frac{\lambda}{1+\lambda} \cdot \frac{x}{\underline{\xi}_{\text{ES}}},\\ \overline{\xi}_{\text{ES}} &=G_{\xi} (1-w^{-1}( t_1(\lambda) )),\\ \underline{\xi}_{\text{ES}} &=\frac{\lambda}{\frac{1}{\alpha}+\alpha} \overline{\xi}_{\text{ES}}, \end{aligned} \end{equation*} and $t_1 (\lambda)$ is given in Lemma \ref{lemma: f3}. \item The optimal log-return is $$R^{\text{ES}}_{T,\lambda}=\frac{1}{T} \ln \frac{X^{\text{ES}}_{T,\lambda}}{x} .$$ \item The ES of the optimal log-return is \begin{align*} \text{ES}_{\alpha}(R^{\text{ES}}_{T,\lambda})&=\frac{1}{\alpha T} \left[ \ln \left( \frac{1+\lambda}{\frac{1}{\alpha} + \lambda} \right) \cdot N \left( - \frac{\ln \xi_{\alpha} + \left( r + \frac{\theta^2}{2} \right)T}{ \theta \sqrt{T}} \right) \right. \\ &\quad\;+ \ln \overline{\xi}_{\text{ES}} \cdot \left( N \left( \frac{\ln \overline{\xi}_{\text{ES}} + \left( r + \frac{\theta^2}{2} \right)T}{ \theta \sqrt{T}} \right) - N \left( \frac{\ln \xi_{\alpha} + \left( r + \frac{\theta^2}{2} \right)T}{ \theta \sqrt{T}} \right) \right) \\ &\quad\;\left. 
+ \frac{\theta \sqrt{T}}{\sqrt{2 \pi}} e^{ - \frac{\left( \ln \overline{\xi}_{\text{ES}} + \left( r + \frac{\theta^2}{2} \right)T \right)^2}{2 \theta^2 T}} - \left( r + \frac{\theta^2}{2} \right) T N \left( - \frac{\ln \overline{\xi}_{\text{ES}} + \left( r + \frac{\theta^2}{2} \right)T}{ \theta \sqrt{T}} \right) \right], \end{align*} where $\xi_{\alpha} =G_{\xi} (1-\alpha).$ \end{enumerate} \end{description} \end{proposition} Figure \ref{figure:ES wealth1} depicts the optimal terminal payoff of the min-ES portfolio ($\lambda=0$). The investor classifies market scenarios into two subsets: in the good states ($\xi \le \overline{\xi}_{\text{ES}}$), the investor receives a constant payoff $\underline{X}_{\text{ES}}$; in the bad states ($\xi > \overline{\xi}_{\text{ES}}$), the payoff is a multiple ($1/\alpha$) of the growth optimal portfolio. \begin{figure}[H] \centering \centering \includegraphics[width=0.6\textwidth]{./Figures/ES/ES_min.eps} \caption{The Min-ES Efficient Terminal Wealth ($\lambda=0$)}\label{figure:ES wealth1} \end{figure} Figure \ref{figure:ES wealth2} displays the optimal terminal payoff of the mean-ES efficient portfolio ($0<\lambda<\infty$). The investor classifies market scenarios into three subsets: in the good states ($\xi \le \underline{\xi}_{\text{ES}}$), the terminal payoff is a fraction ($\lambda/(1+\lambda)$) of the growth optimal portfolio; in the intermediate states ($\underline{\xi}_{\text{ES}} < \xi \le \overline{\xi}_{\text{ES}}$), the investor receives a constant payoff $\underline{X}_{\text{ES}}$; in the bad states ($\xi > \overline{\xi}_{\text{ES}}$), the terminal payoff is a multiple ($(\frac{1}{\alpha}+\lambda)/(1+\lambda)$) of the growth optimal portfolio. In contrast to the mean-VaR efficient portfolio, the terminal payoff of the mean-ES efficient portfolio is continuous in the state price density. \begin{figure}[H] \centering \centering \includegraphics[width=0.6\textwidth]{./Figures/ES/ES_efficient.eps} \caption{The Mean-ES Efficient Terminal Wealth ($0<\lambda<\infty$)}\label{figure:ES wealth2} \end{figure} Figure \ref{figure:ES efficient} shows the mean-ES efficient frontiers for different confidence levels $\alpha$. The efficient frontier is a concave curve that connects the growth optimal portfolio (colored dots) with the min-ES portfolio (colored crosses). The growth optimal portfolio has the highest expected log-return but also the highest ES. By contrast, the min-ES portfolio has the smallest expected log-return but also the lowest ES. In contrast to the min-VaR portfolio, the risk of the min-ES portfolio is finite and thus the mean-ES efficient frontier is a finite curve. Figure \ref{figure:ES efficient} also displays a sensitivity analysis of the efficient frontier with respect to $\alpha$, the confidence level of ES. As $\alpha$ increases, the efficient frontier shifts to the left: for a given level of the expected log-return, the ES of the corresponding efficient portfolio decreases as $\alpha$ increases. In particular, the minimum ES that the investor can achieve is decreasing in $\alpha$. \begin{figure}[H] \centering \includegraphics[width=0.6\textwidth]{./Figures/ES/ES_efficient_frontier.eps} \caption{The Mean-ES Efficient Frontier with $0<\lambda<\infty$, $T=1$, $r=0.05$, and $\theta=0.4$. }\label{figure:ES efficient} \end{figure} The following corollary presents the optimal time-$t$ wealth and portfolio policy. The proof is similar to that of Corollary \ref{coro:4.1} and thus we omit it. 
\begin{corollary} We have the following assertions. \begin{description} \item[Case $\lambda=0$.] \begin{enumerate} \item The min-ES efficient wealth at time $t$ is \begin{equation*} X^{\text{ES}}_{t,0}=\frac{x}{\alpha \xi_t} N \left(-d_1 (t, \xi_t, \overline{\xi}_{\text{ES}} ) \right)+\underline{X}_{\text{ES}} e^{-r (T-t)} N \left( d_2 (t, \xi_t, \overline{\xi}_{\text{ES}} ) \right). \end{equation*} \item The efficient portfolio policy at time $t$ is \begin{equation*} \pi^{\text{ES}}_{t,0}=\frac{x}{\alpha \xi_t} N \left(-d_1 (t, \xi_t, \overline{\xi}_{\text{ES}} ) \right) \frac{\theta}{\sigma} . \end{equation*} \end{enumerate} \item[Case $0<\lambda<\infty$.] \begin{enumerate} \item The mean-ES efficient wealth at time $t$ is \begin{align*} X^{\text{ES}}_{t,\lambda} &= \frac{\frac{1}{\alpha}+\lambda}{1+\lambda} \cdot \frac{x}{\xi_t} N \left(-d_1 ( t, \xi_t, \overline{\xi}_{\text{ES}} ) \right)\\ &\quad\;+\underline{X}_{\text{ES}} e^{-r(T-t)} \left( N \left( d_2 (t, \xi_t, \overline{\xi}_{\text{ES}} ) \right)-N \left( d_2 (t, \xi_t, \underline{\xi}_{\text{ES}} ) \right) \right)\\ &\quad\;+\frac{\lambda}{1+\lambda} \cdot \frac{x}{\xi_t} N \left( d_1 (t, \xi_t, \underline{\xi}_{\text{ES}} ) \right). \end{align*} \item The efficient portfolio policy at time $t$ is \begin{align*} \pi^{\text{ES}}_{t,\lambda}& =\frac{x}{\xi_t} \cdot \frac{\theta}{\sigma} \left(\frac{\frac{1}{\alpha}+\lambda}{1+\lambda} N \left(-d_1 (t, \xi_t, \overline{\xi}_{\text{ES}} ) \right)+\frac{\lambda}{1+\lambda} N \left( d_1 (t, \xi_t, \underline{\xi}_{\text{ES}} ) \right) \right). \end{align*} \end{enumerate} \end{description} \end{corollary} \subsection{Comparison with \cite{he2015dynamic}} \cite{he2015dynamic} consider a continuous-time mean-risk portfolio choice problem in which the risk is measured by WVaR. They assume the decision-maker minimizes the risk of terminal wealth, while maintaining the expected terminal wealth above a prescribed target. They find that the model can lead to extreme risk-taking behaviors. When bankruptcy is allowed, the optimal terminal wealth is binary, i.e., the investor invests a small amount of money in an extremely risky digital option and saves the rest of the money in the risk-free asset. When bankruptcy is prohibited, the terminal wealth can be three-valued and the optimal strategy is to invest a small amount of money in an extremely risky digital option and put the rest in an asset with moderate risk. These strategies are not commonly seen in practice and are not appropriate for many investors. Furthermore, the optimal value (the risk) is independent of the expected terminal wealth target. Therefore the efficient frontier is a vertical line in the mean-risk plane and there is no explicit trade-off between risk and return. They conclude that using the WVaR on terminal wealth is not an appropriate model of risk for portfolio choice. In contrast to \cite{he2015dynamic}, our model uses the expected target and risk measure on log-returns instead of terminal wealth. When the risk is evaluated by the VaR or ES, two popular risk measures, we find that the investor classifies market scenarios into different states, in which the terminal payoff is a multiple or fraction of the growth optimal portfolio, or constant. Furthermore, the efficient frontier is a concave curve that connects the min-risk portfolio with the growth optimal portfolio. 
Our model allows for an explicit characterization of the risk-return trade-off and may serve as a guideline for investors to set reasonable investment targets. Our results demonstrate that it is more appropriate to use the WVaR, in particular, the VaR and ES, on the log-return instead of the terminal wealth for portfolio choice. \section{Conclusion}\label{sec:conclusion} We have proposed and solved a dynamic mean-WVaR portfolio choice problem with risk measured to log-returns, as opposed to terminal wealth in \cite{he2015dynamic}. Our model conquers the ill-posedness of the mean-WVaR criterion for terminal wealth in \cite{he2015dynamic}, and allows for an explicit and meaningful characterization of the trade-off between return and risk. We have demonstrated that our proposed mean-WVaR criterion for log-returns is more appropriate and tractable than the mean-WVaR criterion for terminal wealth in serving as a guideline for dynamic portfolio choice. \newpage
2,869,038,155,395
arxiv
\section{Introduction} \noindent In this paper, we consider the following nonlinear Schr\"{o}dinger equation \begin{equation}\label{equ:double} \begin{cases} i\partial_tu+\Delta u+|u|^{\frac{4}{3}}u +\mu\left(|x|^{-2}*|u|^2\right)u=0,\,\,t\in\mathbb{R},\,x\in\mathbb{R}^3,\\ u(0,x)=u_0(x)\in H^1(\mathbb{R}^3), \end{cases} \end{equation} where $u=u(t,x)$ is complex-valued function in time-space $\mathbb{R}\times\mathbb{R}^3$. { It is well - known that the classical Schr\"odinger - Poisson - Slatter equation \begin{equation*} \begin{cases} i\partial_tu+\Delta u+|u|^{p-1}u -\mu Au=0,\,\,t\in\mathbb{R},\,x\in\mathbb{R}^3,\\ -\Delta A = 4\pi |u|^2, ~~ u(0,x)=u_0(x)\in H^1(\mathbb{R}^3), \end{cases} \end{equation*} is a model derived from Poisson - Newton interaction \cite{SS2004JSP}. This equation can be considered as a generalization of \eqref{equ:double} with $\mu<0$ and it is intensively studied (see for example \cite{R2010ARMA,Schlein:book,GPV2012Poincare,I2013TMNA} and references there). The equation \eqref{equ:double} can be rewritten as \begin{equation*} \begin{cases} i\partial_tu+\Delta u+|u|^{p-1}u -\mu A u=0,\,\,t\in\mathbb{R},\,x\in\mathbb{R}^3,\\ (-\Delta)^{1/2} A = 4\pi |u|^2,~~ u(0,x)=u_0(x)\in H^1(\mathbb{R}^3), \end{cases} \end{equation*} can be considered as a modification of the Poisson equation for the gravitational potential that is typical in the study of fractional Newtonian gravity as an alternative to standard Newtonian gravity ( \cite{I2013TMNA,G2020,GGV2020,V2020FP,V2021EP}). Another, case of nonlinear Schr\"odinger type equation with Hartree type nonlinearity is the following one \begin{equation}\label{equH1} i\partial_tu+\Delta u+Vu -(w*|u|^2) u=0,\,\,t\in\mathbb{R},\,x\in\mathbb{R}^3. \end{equation} Among the several contexts of relevance of \eqref{equH1}, one is surely the quantum dynamics of large Bose gases, where particles are subject to an external potential $V$ and interact through a two-body potential $w$. In this case \eqref{equH1} emerges as the effective evolution equation, rigorously obtained through the limit $N \to \infty$, where $N$ is the number of particles. The precise meaning of the control of the many-body wave function is in the sense of one-body reduced density matrices. This model is also intensively studied for sufficiently large class of cases, ranging from bounded to locally singular potentials w, and through a multitude of techniques to control the limit of infinitely many particles (see, e.g., \cite[Chapter 2]{Schlein:book} and the references therein). If we assume $w$ to be homogeneous function and the potential $V$ is of self interacting type, $V(u)=|u|^{p-1}$, then we arrive at the model \eqref{equ:double} with $\mu>0.$ } Let us recall some basic facts about the Cauchy problem \eqref{equ:double}. From \cite{Cazenave:book}, it is known that \eqref{equ:double} is local well-posedness of \eqref{equ:double}. That is, given $u_0\in H^1(\mathbb{R}^3)$, there exists a unique maximal solution $u\in C\left((-T_{min},T_{max});H^1(\mathbb{R}^3)\right)$ to \eqref{equ:double} and there holds the blowup alternative: \begin{align}\label{intro:blowup:alternative} T<+\infty\,\,\,\text{implies}\,\,\,\lim_{t\rightarrow T}\|u(t)\|_{H^1}=+\infty. 
\end{align} Furthermore, the $H^1$ flow admits the conservation laws: \begin{align*} &\text{Mass:}~~M(u)(t)=\int|u(t,x)|^2=M(u_0);\\ &\text{Energy:}~E_{\mu}(u)(t)=\frac{1}{2}\int|\nabla u(x,t)|^2-\frac{3}{10}\int|u(x,t)|^{\frac{10}{3}}-\frac{\mu}{4}\int \frac{|u(t,x)|^2|u(t,y)|^2}{|x-y|^2}=E_{\mu}(u_0);\\ &\text{Momentum:}~~ P_{\mu}(u(t))=\Im\int \bar{u}(t,x)\nabla{u}(t,x)dx=P_{\mu}(u_0). \end{align*} First, we recall the structure of the mass critical problem. In this case, the scaling symmetry \begin{align*} u_a(t,x)=a^{\frac{3}{2}}u(a^2t,ax),\,\,\text{where}\,\,a>0, \end{align*} acts on the set of solutions and leaves the mass invariant \begin{align*} \|u_a(t,\cdot)\|_{L^2}=\|u(a^2t,\cdot)\|_{L^2}. \end{align*} \subsection{The case \texorpdfstring{$\mu=0$}{mu=0}} A criterion of global-in-time existence for $H^1$ initial data is derived by using the Gagliardo-Nirenberg inequality with the best constant \begin{align*} \|u\|_{L^{\frac{10}{3}}}^{\frac{10}{3}}\leq C\|u\|_{L^2}^{\frac{4}{3}}\|\nabla u\|_{L^2}^2, \end{align*} where $C=\frac{5}{3}\frac{1}{\|Q\|_{L^2}^{\frac{4}{3}}}$, and $Q$ is the unique (\cite{BL1983ARMA,K1989ARMA}) up to symmetries solution to the positive ground state equation \begin{align*} -\Delta Q+Q-|Q|^{\frac{4}{3}}Q=0,\,\,Q(x)>0,\,\,Q\in H^1(\mathbb{R}^3). \end{align*} So that for all $u\in H^1(\mathbb{R}^3)$, we have \begin{align}\notag E_0(u)\geq\frac{1}{2}\|\nabla u\|_{L^2}^2\left[1-\left(\frac{\|u\|_{L^2}}{\|Q\|_{L^2}}\right)^{\frac{4}{3}}\right], \end{align} which together with the conservation of mass, energy and the blowup criterion \eqref{intro:blowup:alternative} implies that the global existence of all solution with initial data $\|u_0\|_{L^2}<\|Q\|_{L^2}$. At the mass critical level $\|u_0\|_{L^2}=\|Q\|_{L^2}$, the pseudo-conformal symmetry of \eqref{equ:double} yields an explicit minimal blowup solution: \begin{align}\notag S(t,x)=\frac{1}{|t|^{\frac{3}{2}}}Q\left(\frac{x}{t}\right)e^{-i\frac{|x|^2}{4t}}e^{\frac{i}{t}},\,\,\|S(t)\|_{L^2}=\|Q\|_{L^2},\,\,\|\nabla S(t)\|_{L^2}\stackrel{t\rightarrow0^-}{\sim}\frac{1}{|t|}. \end{align} Merle \cite{M1993Duke} obtained the classification in the energy space of the minimal blowup elements; the only $H^1$ finite time blowup solution with mass $\|u\|_{L^2}=\|Q\|_{L^2}$ is given by above up to the symmetries of the flow. Note that the minimal blow up dynamic can be extended to the super-critical mass case $\|u_0\|_{L^2}>\|Q\|_{L^2}$ and that is corresponds to an unstable threshold dynamics between global in time scattering solutions and finite time blow up solutions in the stable blow up regime \begin{align*} \|\nabla u(t)\|_{L^2}\sim\sqrt{\frac{\log|\log|T^*-t||}{T^*-t}},\,\,\text{as}\,\,t\sim T^*. \end{align*} For results about the existing literature for the $L^2$ critical blow up problem, one can see\cite{MR2004Invent,MR2005Ann,MRGFA2003,MR2005CMP,MR2006JAMS,MRS2013Amer} and references therein. \subsection{The case \texorpdfstring{$\mu<0$}{mu<0}} Let us now consider the case of a defocusing perturbation. At the threshold, we claim: \begin{lemma}\label{lemma1dou} \textbf{(Global existence for $\mu<0$).} Let $\mu<0$ and $u_0\in H^1(\mathbb{R}^3)$ with $\|u_0\|_{L^2}=\|Q\|_{L^2}$. Then the solution of \eqref{equ:double} is global and bounded in $H^1(\mathbb{R}^3)$. \end{lemma} The proof follows from the standard concentration compactness argument, see Appendix \ref{sectionlemma}, The similar results for the double power nonlinear Schr\"odinger equation can be found in \cite{LMR2016RMI}. 
The global existence criterion of lemma \ref{lemma1dou} is sharp in the sense that for all $\delta>0$, we can build an $H^{1}(\mathbb{R}^3)$ finite time blw-up solution to \eqref{equ:double} with the initial data $\|u_0\|_{L^2}=\|Q\|_{L^2}+\delta$. Now, we state the following blow up result. \begin{lemma}\label{lemma12dou} Let $\mu<0$ and close to $0$. For any $\delta>0$ there exists $u_0\in H^1_{rad}(\mathbb{R}^3)$ such that $$xu_0 \in L^2(\mathbb{R}^3), \ \|u_0\|_{L^2}=\|Q\|_{L^2}+\delta$$ and the solution $u$ of \eqref{equ:double} blowup in finite time. \end{lemma} By the Virial argument, we can obtain that the blowup solution with mass arbitrary close to (but larger than) the critical mass, see Appendix \ref{sectionlemma}. \begin{remark} Similar questions can be addressed for the nonlocal perturbation of the classical mass critical problem \begin{align}\notag iu_t+\Delta u+|u|^\frac{4}{3}u+\mu\left(|x|^{-\gamma}*|u|^p\right)|u|^{p-2}u=0\ \ \ \text{in $\mathbb{R}^3$,} \end{align} with $0<\gamma\leq2$ and $2\leq p\leq 2+\frac{5-\gamma}{3}$. (i) If $\mu<0$ and initial data $u_0\in H^1(\mathbb{R}^3)$ with $\|u_0\|_{L^2}=\|Q\|_{L^2}$, then the solution is global. (ii) If $\mu<0$, $3p+\gamma\leq8$ and the initial data $u_0\in H^1_{rad}(\mathbb{R}^3)$ such that $xu_0\in L^2(\mathbb{R}^3)$, $\|u_0\|_{L^2}=\|Q\|_{L^2}+\delta$. Then the solution blowup in finite time. The analysis could also be extended to the higher dimensional case. \end{remark} \subsection{The case \texorpdfstring{$\mu>0$}{mu>0}} In this section and what follows. For simplicity, we introduce the notation $$ A(u)(x) = \left(|x|^{-2}*|u|\right). $$ We now turn to the case $\mu>0$ for the rest of paper, i.e., we consider the model \begin{align}\label{equ1:double} i\partial_tu+\Delta u+|u|^{\frac{4}{3}}u+\mu A(u^2)u=0. \end{align} Now, we state our first main result. For the small $L^2$ solutions, there exist arbitrarily small solitary waves. \begin{thm}\label{Theorem1} \textbf{(Small solitary waves).} Let $\mu>0$ be small enough. There exists $\delta=\delta(\mu)>0$ such that for all $a\in\left(0,\|Q\|_{L^2}^2-\delta(\mu)\right)$, where $Q$ is the unique radial positive ground state solution of equation \begin{equation}\label{equation:mu0} -\Delta Q+Q=|Q|^{\frac{4}{3}}Q \end{equation} and the best constant $C_*$ in the Gagliardo-Nirenberg's inequality \begin{align}\label{GN:nonlocaldou} \|A(|u|^2)|u|^2\|_{L^1} \leq C_* \|\nabla u\|_{L^2}^2 \|u\|_{L^2}^2. \end{align} Then there exists a positive Schwartz radially symmetric solution $Q_{\mu}$ of \begin{align*} \Delta Q_{\mu}- Q_{\mu}+Q_{\mu}^{\frac{7}{3}}+\mu A(Q_{\mu}^2)Q_{\mu}=0,\,\,\|Q_{\mu}\|_{L^2}^2=a. \end{align*} In addition, define the linear operator $L_{+,\mu}$ and $L_{-,\mu}$ associated to $Q_{\mu}$ by \begin{align}\label{linear:operatordouble} L_{+,\mu}\xi=&-\Delta\xi+\xi-\frac{7}{3}Q_{\mu}^{\frac{4}{3}}\xi- 2\mu A(Q_{\mu}\cdot\xi)Q_{\mu}-\mu A\left(Q_{\mu}^2\right)\xi,\notag\\ L_{-,\mu}\xi=&-\Delta\xi+\xi-Q_{\mu}^{\frac{4}{3}}\xi-\mu A\left(Q_{\mu}^2\right)\xi, \end{align} acting on $L^2(\mathbb{R}^3)$ with form domain $H^1(\mathbb{R}^3)$, where $\xi\in H^1(\mathbb{R}^3)$. We have the following non-degeneracy result. \begin{align*} \ker L_{+,\mu}=\{\nabla Q_{\mu}\},~~ \ker L_{-,\mu}=\{Q_{\mu}\}. \end{align*} \end{thm} \textbf{Comments on Theorem \ref{Theorem1}:} 1. Existence. From the standard variational argument, we can easily obtain the existence. 2. The kernel of $L_{-,\mu}$. By using the Sturm argument, we can obtain the $\ker L_{-,\mu}=\{Q_{\mu}\}$. 
Here we do not need to assume that the parameter $\mu$ is small enough. 3. The kernel of $L_{+,\mu}$. This case seems more difficult, First, we restrict our attention to the case of radial Sobolev space $H^1_{rad}(\mathbb{R}^3)$. Here we develop an appropriate perturbation approach, together with the kernel of linear operators $L_{+,0}$ and $L_{-,0}$ to prove this result. On the other hand, we can easily obtain that $Q_{\mu}\rightarrow Q_0$ in $H^1_{rad}(\mathbb{R}^3)$ as $\mu \to 0$, but this is not enough. We need a more precise estimate on the rate of convergence, namely we prove \begin{align*} \|Q_{\mu}-Q\|_{H^2}\lesssim \mu. \end{align*} Here we assume that $\mu$ is small enough, the estimate is more delicate problem for $\mu$ large. \begin{remark} So far, only a few articles have considered the uniqueness and non-degeneracy of the non-local nonlinear Schr\"{o}dinger equation, see \cite{KLR2009poincare,L2009APDE,X2016CVPDE}. The uniqueness problem without nondegeneracy is treated in \cite{GS2018PD,GTV2019NA,L1976SAM,MZ2010ARMA}. For the general non-local nonlinear Schr\"{o}dinger equation or Choquard equation, the non-degeneracy property is still an open. \end{remark} A second main result is the existence of a minimal mass blowup solution for \eqref{equ1:double}. \begin{thm}\label{theorem:minimialD} \textbf{(Existence of minimal mass blowup elements).} Let $u_0\in H^1(\mathbb{R}^3)$ and $\mu>0$ be small enough. For $E_{\mu}(u_0)\in\mathbb{R}^*_+$, $P_{\mu}\in\mathbb{R}^3$, there exist $t^*<0$ and a minimal mass solution $u\in\mathcal{C}\left([t^*,0);H^1(\mathbb{R}^3)\right)$ of equation \eqref{equ1:double} with \begin{align*} \|u\|_{L^2}=\|Q_{\mu}\|_{L^2},\,\,E_{\mu}(u)=E_{\mu}(u_0),\,\,P_{\mu}(u)=P_{\mu}(u_0), \end{align*} which blows up at time $T=0$. More precisely, it holds that \begin{align*} u(t,x)-\frac{1}{\lambda^{\frac{3}{2}}(t)}Q_{\mu}\left(\frac{x-\alpha(t)}{\lambda(t)}\right)e^{i\gamma(t)}\rightarrow0\,\,\text{in}\,\,L^2(\mathbb{R}^3)\,\,\text{as}\,\,t\rightarrow0^-, \end{align*} where \begin{align*} \lambda(t)=\lambda^*t+\mathcal{O}(t^3),\,\,\,\gamma(t)=\frac{1}{\lambda^*|t|}+\mathcal{O}(t^2),\,\,\alpha(t)=x_0+\mathcal{O}(t^3), \end{align*} with some constant $\lambda^*>0$, and the blowup speed is given by \begin{align*} \|\nabla u(t)\|_{L^2}\sim\frac{C(u_0)}{|t|},\,\,\text{as}\,\,t\rightarrow0^-, \end{align*} where $C(u_0)$ is a constant only depend on the initial data $u_0$. \end{thm} \textbf{Comments on the result.} 1. $\mu>0$ is small. In the present work, we assume that $\mu>0$ is small enough. This condition guarantee the existence of $Q_{\mu}$ and the radial non-degeneracy of the linearized operator $L_{+,\mu}$. On the other hand, $\mu>0$ is small plays an important role in the refine energy estimate. 2. On the minimal elements. For an inhomogeneous problem \begin{align*} i\partial_tu+\Delta u-V(x)u+k(x)|u|^\frac{4}{N}u=0. \end{align*} When $V(x)=0$, $k(x)\neq0$ and $N=2$, Rapha\"{e}l and Szeftel \cite{RS2011JAMS} obtained the existence and uniqueness of the minimal mass blowup solution under a necessary and sufficient condition on $k(x)$, in the absence of pseudo-conformal transformation. When $V(x)\neq0$, $k(x)\neq0$ and $N=1,2$, Banica, Carles and Duyckaerts \cite{BCD2011CPDE} proved the existence of the minimal mass blowup solution. 
On the other hand, Le Coz, Martel and Rapha\"{e}l \cite{LMR2016RMI} also considered the double power nonlinear Schr\"{o}dinger equation \begin{align}\notag i\partial_tu+\Delta u+|u|^{4/d}u+|u|^{p-1}u=0,\,\,1<p<1+\frac{4}{d},\,\,d\leq3, \end{align} and obtained the existence of finite time blow up minimal solutions in the radial case. For the mass critical nonlocal problem such as half wave equation \begin{align}\label{equ:hw} i\partial_tu+\sqrt{-\Delta}u+|u|^{\frac{2}{N}}=0, \end{align} (the energy space for \eqref{equ:hw} is $H^{\frac{1}{2}}(\mathbb{R}^N)$), when $N=1$, an existence result similar to Theorem \ref{theorem:minimialD} was proved by Krieger, Lenzmann and Rapha\"{e}l \cite{KLR2013ARMA}; when $N=2,3$, the existence result in the Sobolev space was proved by the authors in the present paper \cite{GL2021CPDE,GL2021JFA}. Also, Lan \cite{Lan2021IMRN} obtained the blowup solution for the general fractional Schr\"odinger equation in the one dimension case. For the other constructions of minimal mass solutions for dispersive equations, such as Martel and Pilod \cite{MP2017MA} addressed the case of the modified Benjamin-Ono equation, which also involves the nonlocal operator $D$. This paper is organized as follows: in Section 2, we prove the Theorem \ref{Theorem1}; in section 3, we use the result of Theorem \ref{Theorem1} to construct the approximate blowup profile $R_{\mathcal{P}}$; In section 4, we establish the energy, modulation estimates and the refined energy/Morawetz type estimate, which will be a key ingredient in the compactness argument to construct minimal mass blowup solutions; In section 5, we prove the main result Theorem \ref{theorem:minimialD}; and the finally section is the Appendix. \textbf{Notations and definitions}\\ - $(f,g)=\int \bar{f}g$ as the inner product on $L^2(\mathbb{R}^3)$.\\ - $\|\cdot\|_{L^p}$ denotes the $L^p(\mathbb{R}^3)$ norm for $p\geq 1$.\\ - $\widehat{f}$ denotes the Fourier transform of function $f$.\\ - We shall use $X\lesssim Y$ to denote that $X\leq CY$ holds, where the constant $C>0$ may change from line to line, but $C$ is allowed to depend on universally fixed quantities only.\\ - Likewise, we use $X\sim Y$ to denote that both $X\lesssim Y$ and $Y\lesssim X$ hold.\\ - $\Re{f}$ and $\Im{f}$ denote the real part and imaginary part of function $f$, respectively. \section{Existence and Non-degeneracy} In this section, we consider the existence and non-degeneracy of the ground state solution of \eqref{equ1:double}. Now, we introduce the minimization problem \begin{align}\label{min:double3D} e_{\mu}=\inf_{\|u\|_{L^2}^2=a}\{E_{\mu}(u):u\in H^1(\mathbb{R}^3)\}. \end{align} This lemma will give the existence and properties of the minimizer. \begin{lemma} Let $\mu>0$ be small enough. There exists $\delta=\delta(\mu)>0$ such that the constrained minimization problem \eqref{min:double3D} with $ a <\|Q\|_{L^2}^2-\delta(\mu)$, where $C_*$ is the best constant of the Gagliardo-Nirenberg's inequality \eqref{GN:nonlocaldou}, has a minimizer $\phi_{\mu}\in H^1(\mathbb{R}^3)$, that after rescaling satisfies \eqref{equ1:double} and it is radial and symmetry decreasing function. \end{lemma} \begin{proof} First, we show that $E_{\mu}(u)$ is bounded from below, when $u$ obeys the constraint $\|u\|_{L^2}^2=a <\|Q\|_{L^2}^2-\delta(\mu)$. 
Indeed, by the Gagliardo-Nirenberg's inequalities and Hardy-Littlewood-Sobolev inequality, we have \begin{align}\notag E_{\mu}(u)\geq&\frac{1}{2}\|\nabla u\|_{L^2}^2-\frac{1}{2}\frac{\|u\|_{L^2}^{\frac{4}{3}}}{\|Q\|_{L^2}^{\frac{4}{3}}}\|\nabla u\|_{L^2}^2-\mu C_*\|\nabla u\|_{L^2}^2\|u\|_{L^2}^{2} =\frac{1}{2}\left(1-\frac{\|u\|_{L^2}^{\frac{4}{3}}}{\|Q\|_{L^2}^{\frac{4}{3}}}-2\mu C_*a\right)\|\nabla u\|_{L^2}^2. \end{align} From the assumption of $a$, we deduce that $E_{\mu}(u) \geq 0$. Next, we discuss the existence and the properties of the constrained minimizers. Taking a minimizing sequence $\{u_n\}$ and $\lim\limits_{n\rightarrow\infty}E_{\mu}(u_n)=e_{\mu}$. By the Riesz rearrangement inequality, we have \begin{align*} &\|\nabla u_n\|_{L^2}\geq\|\nabla u_n^*\|_{L^2},~~\|u_n\|_{L^q}=\|u_n^*\|_{L^q},\,\,\text{where}\,\,q\in[2,6]\\ &\int A(u_n^2)(x)u_n^2(x)dx\leq \int A((u_n^*)^2)(x)(u_n^*)^2(x)dx. \end{align*} Combining the above relations, we have $E(u_n)\geq E(u_n^*)$ while $\|u_n^*\|_{L^2}^2=a$. Hence, $\mathop{\lim}\limits_ {n\rightarrow\infty}E_{\mu}(u_n^*) =e_{\mu}$ and $u_n^*$ is uniformly bounded sequence in $H^1(\mathbb{R}^3)$. Moreover, $u_n^*$ are radial symmetry functions in the unit sphere of $L^2(\mathbb{R}^3)$. So we have a weakly convergent subsequence converging weakly in $L^2(\mathbb{R}^3)$. By the lower semi-continuity of the norm $\|\phi_{\mu}\|_{L^2}\leq a$ and \begin{align}\notag \liminf_{n\rightarrow\infty} \|\nabla u_n^*\|_{L^2}\geq\|\nabla \phi_{\mu}\|_{L^2}. \end{align} We also have that for every $|x|\geq0$, \begin{align}\notag a=\int|u_n^*|^2\geq\int_{|y|\leq|x|}|u_n^*(y)|^2dy\geq C |\cdot|^3|u_n^*(x)|^2, \end{align} whence $|u_n^*(x)|\leq C|x|^{-3/2}$ for every $x\in\mathbb{R}^3$. It follows that $\{u_n^*\}$ is a compact sequence $L^q(\mathbb{R}^3)$, $2\leq q<6$ (Rellich-Kondrachov's). Hence, we can assume (after taking subsequence), $\lim\limits_{n\rightarrow\infty}\|u_n^*-\phi\|_{L^q}=0$ for any $2\leq q<6$. As a consequence, by the triangle inequality and the Hardy-Littlewood-Sobolev inequality or see \cite[Lemma 2.1]{LZW2019ZAMP}, we can obtain \begin{align}\notag \lim_{n\rightarrow\infty}\int A(|u_n^*|^2)|u_n^*(x)|^2dx=\int A(\phi^2)\phi(x)^2dx. \end{align} Next, we will show that $\{u_n^*\}$ converges to $\phi$ in $H^1(\mathbb{R}^3)$. In fact, we have \begin{align*} \|\nabla u_n^*\|_{L^2}^2-\|\nabla\phi\|_{L^2}^2=&2E_{\mu}(u_n^*)-2E_{\mu}(\phi)+\frac{20}{3}\int|u_n^*|^{\frac{10}{3}}-\frac{20}{3}\int|\phi|^{\frac{10}{3}}\\ &+\frac{\mu}{2}\int A(|u_n^*|^2)|u_n^*(x)|^2dx-\frac{\mu}{2}\int A(\phi^2)\phi(x)^2dx\\ \leq&2E_{\mu}(u_n^*)-2e_{\mu}(\lambda)+\frac{20}{3}\int|u_n^*|^{\frac{10}{3}}-\frac{20}{3}\int|\phi|^{\frac{10}{3}}\\ &+\frac{\mu}{2}\int A(|u_n^*|^2)|u_n^*(x)|^2dx-\frac{\mu}{2}\int A(\phi^2)\phi(x)^2dx\rightarrow0, \end{align*} as $n\rightarrow\infty$. From this we have $\limsup\limits_{n\to\infty}\|\nabla u_n^*\|_{L^2}^2\leq\|\nabla\phi\|_{L^2}^2$. Combining the weakly lower semi-continuity, we can obtain that $\|u_n^*-\phi\|_{H^1}\rightarrow0$. And now this lemma is proved. \end{proof} Since $\phi_{\mu}$ is a minimizer of \eqref{min:double3D}, it satisfies the Euler-Lagrange equation \begin{equation*} -\Delta\phi_{\mu}+\beta_{\mu}\phi_{\mu}=|\phi_\mu|^{\frac{4}{3}}\phi_\mu +\mu A(\phi_{\mu}^2)|\phi_{\mu}. \end{equation*} Multiplying both side by $\phi_{\mu}$ and then integrate by part, we obtain \begin{align*} \beta_{\mu}\int|\phi_{\mu}|^2=-\int|\nabla\phi_{\mu}|^2+\int|\phi_{\mu}|^{\frac{10}{3}}+\mu\int A(\phi_{\mu}^2)|\phi_{\mu}|^2. 
\end{align*} On the other hand, we have the Pohozaev identity \begin{align}\label{Pohozaev:identitydou} \frac{1}{2}\int|\nabla\phi_{\mu}|^2+\frac{3}{2}\int|\phi_{\mu}|^2=\frac{9}{10}\int|\phi_{\mu}|^{\frac{10}{3}}+\mu\int A(\phi_{\mu}^2)|\phi_{\mu}|^2. \end{align} Combining the above two identities, we can obtain \begin{align*} \beta_{\mu}\int|\phi_{\mu}|^2=\frac{2}{5}\int|\phi_{\mu}|^{\frac{10}{3}}+\mu\frac{ 1}{2}\int A(\phi_{\mu}^2)|\phi_{\mu}|^2>0. \end{align*} This implies that $\beta_{\mu}>0$. Let $\phi_{\mu}(x)=\beta_{\mu}^{\frac{3}{4}}Q_{\mu}\left(\sqrt{\beta_{\mu}}x\right)$, then $\|\phi_{\mu}\|_{L^2}^2=\|Q_{\mu}\|_{L^2}^2$ and \begin{equation}\label{equ:ell1} -\Delta Q_{\mu}+Q_{\mu}=|Q_{\mu}|^{\frac{4}{3}}Q_{\mu}+\mu A(Q_{\mu}^2)Q_{\mu}. \end{equation} Throughout this paper, we denote the linearized operator (with respect to complex-valued functions) close to the ground state $Q_{\mu}$ by \begin{align}\notag L_{\mu}=\left[\begin{array}{cc}L_{+,\mu}&0\\0&L_{-,\mu}\end{array}\right], \end{align} with the scalar self-adjoint operators $L_{+,\mu},\,L_{-,\mu}$ defined in \eqref{linear:operatordouble}. For radial $\xi\in L^2(\mathbb{R}^3)$ in the kernel of $L_{-,\mu}$ we can write \begin{equation}\notag - \xi^{\prime\prime}(r) - \frac{2}{r} \xi^\prime(r) + \xi(r) - Q_{\mu}^{\frac{4}{3}}\xi -\mu A(Q_{\mu}^2)\xi =0. \end{equation} Using the argument from \cite[Lemma 1]{GS2018PD} and assuming $\xi \perp Q_{\mu}$ we easily obtain $\xi =0.$ Also we have \begin{align*} ( L_{-,\mu} u,u )_{L^2} \geq \delta \|u\|_{L^2}^2~~\text{for}~~u \perp Q_{\mu}. \end{align*} Indeed, we first prove that the operator $L_{-,\mu}$ is non-negative. Assume that $L_{-,\mu}$ has a negative eigenvalue, say $-\sigma^2.$ Without loss of generality, we may assume that it is the smallest eigenvalue, so that, \begin{align}\label{sec2min} -\sigma^2=\inf_{\|\psi\|_{L^2}^2=1}(L_{-,\mu}\psi,\psi). \end{align} The corresponding eigenfunction, say $\phi$ can be constructed as a minimizer of the minimization problem \eqref{sec2min}. It standard implication that the symmetric-decreasing rearrangement $\phi^*$ of $\phi$ is also minimizer, since we have \begin{align*} \|\nabla \psi\|_{L^2}\geq\|\nabla \psi^*\|_{L^2},~ \int A(\psi^2)|\psi(x)|^2dx\leq \int A((\psi^*)^2)|\psi^*(x)|^2dx,~ \|\psi\|_{L^q}=\|\psi^*\|_{L^q}. \end{align*} Thus $(L_{-,\mu}\psi,\psi)\geq(L_{-,\mu}\psi^*,\psi^*)$. It follows that $\phi^*$ is also minimizer and $\phi^* \geq 0$. But if such eigenfunction corresponds to a negative eigenvalue, then it must be perpendicular to the eigenfunction $Q_{\mu}$ corresponds to eigenvalue zero. However, both $\phi>0$ and $Q_{\mu}>0$, so we have a contradiction. It follows that $L_{-,\mu}\geq 0$. Next, we prove that $\ker L_{-,\mu}$ is one dimensional space generated by $Q_\mu.$ Indeed, if $\xi \in \ker L_{-,\mu}$ and $\xi\perp Q_{\mu}$, we have the equation $$-\Delta\xi(r)+\xi(r)-V(r) \xi(r) = 0, \ V(r) =Q_{\mu}^{\frac{4}{3}}+\mu A\left(Q_{\mu}^2\right).$$ Since $Q_\mu$ satisfies the same equation, the Sturm argument shows that between any two zeros of $\xi$ there is a zero of $Q_\mu$ and this is a contradiction. \subsection{Limit of~\texorpdfstring{$Q_\mu$}{Q-mu} and radial non-degeneracy of \texorpdfstring{$L_{+,\mu}$}{Lmu}} To show that the kernel of $L_{+,\mu}$ in $H^1_{rad}(\mathbb{R}^3)$ is trivial we have to show that the system \begin{equation}\notag - \xi^{\prime\prime}(r) - \frac{2}{r} \xi^\prime(r)+\xi - \frac{7}{3}Q_{\mu}^{\frac{4}{3}}\xi -\mu A(Q_{\mu}^2)\xi - 2\mu A(Q_{\mu}\cdot\xi))Q_{\mu}=0. 
\end{equation} has only trivial solutions. To prove the triviality of $ \ker L_{+,\mu},$ we shall make appropriate expansions of $Q_\mu$ and $ L_{+,\mu}$ around $\mu =0.$ We know that $Q_\mu$ is a solution to \begin{equation}\notag - Q_\mu^{\prime\prime}(r) - \frac{2}{r} Q_\mu^\prime(r)+Q_{\mu} - Q_\mu^{\frac{7}{3}} -\mu A(Q_{\mu}^2)Q_\mu =0. \end{equation} Next lemma we will give the relations between $Q_{\mu}$ and $Q$, where $Q$ is the ground state of equation \eqref{equation:mu0}. \begin{lemma} One can show that \begin{align*} Q_\mu \rightarrow Q_0=Q,~~\text{as $\mu \to 0$ in $H^1(\mathbb{R}^3)$}. \end{align*} \end{lemma} \begin{proof} Since $\|Q_{\mu}\|_{L^2}\leq \|Q_0\|_{L^2} $ is uniformly bounded, we only have to derive a uniformly bounded for $\|\nabla Q_{\mu}\|_{L^2}$, which can be done as follows. Note that $Q_{\mu}$ satisfies the equation \begin{align}\notag -\Delta Q_{\mu}+Q_{\mu}=|Q_{\mu}|^{4/3}Q_{\mu}+\mu A(Q_{\mu}^2)Q_{\mu}. \end{align} Multiplying both sides by $Q_{\mu}$ and then integrate by part \begin{align}\label{eq:ie1} \int|\nabla Q_{\mu}|^2+\int|Q_{\mu}|^2=\int|Q_{\mu}|^{\frac{10}{3}}+\mu \int A(Q_{\mu}^2)Q_{\mu}^2 \end{align} and the Pohozaev identity we can obtain that \begin{align}\notag \int|Q_{\mu}|^2=\frac{2}{5}\int|Q_{\mu}|^{\frac{10}{3}}+\frac{\mu}{2}\int A(Q_{\mu}^2)Q_{\mu}^2> \frac{2}{5}\int|Q_{\mu}|^{\frac{10}{3}}. \end{align} Hence, we have \begin{align}\notag \int |\nabla Q_{\mu}|^2+ |Q_{\mu}|^2<& \frac{5}{2}\int|Q_{\mu}|^2+\mu\int A( Q_{\mu}^2) Q_{\mu}^2 \leq \frac{5}{2}\int|Q_{\mu}|^2+\mu C\|Q_{\mu_n}\|_{L^2}^2\int|\nabla Q_{\mu}|^2. \end{align} Hence, $\|\nabla Q_{\mu_n}\|_{L^2}$ is uniformly bounded for $\mu$ sufficiently small. The above argument implies that $\{Q_{\mu_n}\}$ is uniformly bounded in $H^1_{rad}(\mathbb{R}^3)$. Therefore, we can assume that, up to a subsequence, still denote $Q_{\mu_n}$, such that $Q_{\mu_n}$ converges weakly to a non-negative radial function $Q_0\in H^1_{rad}(\mathbb{R}^3)$, that is \begin{align}\notag Q_{\mu_n}\rightharpoonup Q_0\,\,\text{weakly in}\,\,H^1_{rad}(\mathbb{R}^3). \end{align} Moreover, by the compact embedding $H^1_{rad}(\mathbb{R}^3)\hookrightarrow L^q(\mathbb{R}^3)$ for any $2<q<6$ (see Strauss \cite{S1977CMP}), we can assume that \begin{align*} &Q_{\mu_n}\rightarrow Q_0\,\,\text{in}\,\,L^q(\mathbb{R}^3)~~\text{for any $2<q<6$}\\ &Q_{\mu_n}\rightarrow Q_0,\,\,\,a.e.\,\text{in}\,\,\mathbb{R}^3. \end{align*} From the Hardy-Littlewood-Sobolev inequality (see \cite{LZW2019ZAMP,L2001:book}), we easily deduce that \begin{align*} \lim_{n\rightarrow+\infty}\int A(Q_{\mu_n}^2)Q_{\mu_n}^2=\int A(Q_{0}^2)Q_{0}^2. \end{align*} Furthermore, from \eqref{eq:ie1} and the above considerations we obtain \begin{align*} \|Q_{\mu_n}\|_{H^1}\rightarrow\|Q_0\|_{H^1}. \end{align*} Combining this with the weak convergence of $Q_{\mu_n}$, we obtain the strong convergence of $Q_{\mu_n}\rightarrow Q_0$ in $H^1(\mathbb{R}^3)$. In particular from $Q_{\mu_n}\rightarrow Q_0$ in $L^2(\mathbb{R}^3)$ we see that $\|Q_0\|_{L^2}^2=\|Q\|_{L^2}^2.$ Finally, we will show that $Q_0=Q$ is the ground state solution of equation \begin{equation}\label{equ:mu0} -\Delta u+u=|u|^{\frac{4}{3}}u. \end{equation} Indeed, from above argument, $Q_0$ is a (weak) solution of equation \eqref{equ:mu0}. 
In fact, the identity $-\Delta Q_0 + Q_0-|Q_0|^{\frac{4}{3}}Q_0=0$ is fulfilled in $H^{-1}$ sense, since $Q_0 \in H^1.$ From this fact we obtain that $Q_0 \in H^2.$ And by the weakly lower semicontinuous $$\|Q\|_{L^2}^2 = \|Q_0\|_{L^2}^2\leq\liminf_{\mu_n\rightarrow0}\|Q_{\mu_n}\|_{L^2}^2\leq\|Q\|_{L^2}^2.$$ This implies that $Q_0$ is a radial positive $L^2$ normalized solution of equation \eqref{equ:mu0} and using the uniqueness of the solution of equation \eqref{equ:mu0} we deduce $Q_0=Q.$ \end{proof} Next lemma we will give the regularity an estimate concerning the linearized operators $L_{+,0}$ and $L_{-,0}$. \begin{lemma}\label{lemma:operatorinverse} Let $f,g\in H^1(\mathbb{R}^3)$ and suppose $g\perp Q$ and $f\perp\nabla Q$. Then we have the regularity bound \begin{align}\notag \|L_{-,0}^{-1}g\|_{H^2}\lesssim \|g\|_{L^2}\,\,\text{and}\,\, \|L_{+,0}^{-1}f\|_{H^2}\lesssim \|f\|_{L^2}. \end{align} In particular, if $f,g\in H^1_{rad}(\mathbb{R}^3)$ and $g\perp Q$, we also have the same estimate. \end{lemma} \begin{proof} From \cite{CGN2007SIAM,W1985SIAM}, it is known that \begin{align*} \ker L_{+,0}=span\{\nabla Q\},\,\,\ker L_{-,0}=span\{Q\}. \end{align*} By the standard argument, we can easily obtain this lemma, here we omit the details. \end{proof} Since $Q_{\mu}$ satisfies \begin{align}\notag L_{-,\mu}Q_{\mu} - \left( L_{-,0}Q_{\mu}+\mu A(Q_{\mu}^2)Q_{\mu}\right) = Q_\mu^{7/3}-Q_0^{4/3}Q_\mu-\mu A(Q_{\mu}^2)Q_{\mu} = \mathcal{O}(\mu). \end{align} Let $Q_{\mu}=Q+\mu\xi_{\mu}$, where $\xi_{\mu}\perp Q$. Then \begin{align}\notag L_{+,0}\xi_{\mu}+A(Q_{\mu}^2)Q_{\mu}+\mathcal{O}(\mu)=0. \end{align} From this and the above lemma \ref{lemma:operatorinverse}, we have \begin{align*} \|\xi_{\mu}\|_{H^2}\lesssim \|L_{+,0}^{-1}\left(A(Q_{\mu}^2)Q_{\mu}+\mathcal{O}(\mu)\right)\|_{H^2}\lesssim\|A(Q_{\mu}^2)Q_{\mu}+\mathcal{O}(\mu)\|_{L^2} \lesssim \|A(Q_{\mu}^2)Q_{\mu}\|_{L^2}+\mathcal{O}(\mu). \end{align*} By the H\"{o}lder inequality and Young inequality, we have \begin{align}\notag \|A(Q_{\mu}^2)Q_{\mu}\|_{L^2}\leq \|A(Q_{\mu}^2)\|_{L^6}\|Q_{\mu}\|_{L^3}\leq \|Q_{\mu}\|_{L^4}^2\|Q_{\mu}\|_{L^3}. \end{align} Therefore, we deduce that \begin{align}\notag \|Q_{\mu}-Q\|_{H^2}=\mu\|\xi_{\mu}\|_{H^2}\leq C\mu. \end{align} As an important result, we prove the so-called non-degeneracy of $L_{+,\mu}$ on $L^2_{rad}(\mathbb{R}^3)$; that is, the triviality of its kernel. \begin{lemma}\label{lemma:raidalnondegeneracy} The linear operator $L_{+,\mu}$ given by \eqref{linear:operatordouble} satisfies the estimate \begin{align*} \ker L_{+,\mu}=\{0\}, \end{align*} when $L_{+,\mu}$ is restricted to $L^2_{rad}(\mathbb{R}^3)$ and $\mu$ is sufficiently small. \end{lemma} \begin{proof} Assume that $\xi_{\mu}\in L^2_{rad}(\mathbb{R}^3)$ and $\xi_{\mu}\perp Q_{\mu}$ satisfies \begin{align*} L_{+,\mu}\xi_{\mu}=0. \end{align*} Without loss of generality, we assume that $\|\xi_{\mu_k}\|_{L^2}=1$ and \begin{align*} \xi_{\mu_k}=\alpha_{\mu_k}Q+\eta_{\mu_k}, \end{align*} where $\eta_{\mu_k}\in H^1_{rad}(\mathbb{R}^3)$ and $\eta_{\mu_k}\perp Q$. Now, we claim that $\alpha_{\mu_k}\rightarrow0$ and $\|\eta_{\mu_k}\|_{L^2}\rightarrow1$ as $\mu_k\rightarrow0$. Indeed, if it is not true, then there exist $\alpha_0\neq 0$ and $\eta_0 \in H^1_{rad} (\mathbb{R}^3)$ so that (choosing suitable subsequences) we have $\alpha_{\mu_k}\rightarrow\alpha_0\neq0,$ $\eta_{\mu_k}\rightharpoonup \eta_0\neq 0,$ $\eta_0\perp Q$ so we have \begin{align*} \xi_{\mu_k}=\alpha_{\mu_k}Q+\eta_{\mu_k}\rightharpoonup\alpha_0 Q+\eta_0 \neq 0. 
\end{align*} From this, we deduce that \begin{align*} 0=L_{+,\mu}\xi_{\mu_k}\rightharpoonup L_{+,0}(\alpha_0Q+\eta_0)=0 \end{align*} and we see that $\alpha_0 Q+\eta_0 \in H^2_{rad} (\mathbb{R}^3).$ On the other hand, the kernel of $L_{+,0}$ in $H^2_{rad} (\mathbb{R}^3)$ is trivial, so we arrive at a contradiction and the claim is true. Now, we can rewrite the equation $ L_{+,\mu}\xi_{\mu}=0$ as \begin{align*} \alpha_{\mu_k}\left(-\Delta+1-\frac{7}{3}Q_{\mu_k}^{\frac{4}{3}}\right)Q-\alpha_{\mu_k}\mu_k\mathcal{K}(Q)+\left(-\Delta+1-\frac{7}{3}Q_{\mu_k}^{\frac{4}{3}}\right)\eta_{\mu_k}-\mu_k\mathcal{K}(\eta_{\mu_k})=0, \end{align*} where \begin{align*} \mathcal{K}(\xi)= A(Q_{\mu_k}^2)\xi+2 A(Q_{\mu_k}\cdot\xi)Q_{\mu_k}. \end{align*} That is \begin{align*} L_{+,0}\eta_{\mu_k}+\alpha_{\mu_k}L_{+,0}Q-\alpha_{\mu_k}\mu_k\mathcal{K}(Q)-\mu_k\mathcal{K}(\eta_{\mu_k})+\frac{7}{3}\left(Q^{\frac{4}{3}}-Q_{\mu_k}^{\frac{4}{3}}\right)(\alpha_{\mu_k}Q+\eta_{\mu_k})=0. \end{align*} Since $Q$ and $\eta_{\mu_k}$ are the radial functions, by the lemma \ref{lemma:operatorinverse}, we have \begin{align*} \|\eta_{\mu_k}\|_{H^2}\leq&\alpha_{\mu_k}\|Q\|_{H^2}+\alpha_{\mu_k}\mu_k\|\mathcal{K}(Q)\|_{L^2}+\mu_k\|\mathcal{K}(\eta_{\mu_k})\|_{L^2}+\left\|\left(Q^{\frac{4}{3}}-Q_{\mu_k}^{\frac{4}{3}}\right)(\alpha_{\mu_k}Q+\eta_{\mu_k})\right\|_{L^2}\\ \leq&\alpha_{\mu_k}\|Q\|_{H^2}+\alpha_{\mu_k}\mu_k C+\mu_k C\|\eta_{\mu_k}\|_{H^2}+\left\|\left(Q^{\frac{4}{3}}-Q_{\mu_k}^{\frac{4}{3}}\right)\eta_{\mu_k}\right\|_{L^2}+C\mu_k^{\frac{4}{3}}\alpha_{\mu_k}. \end{align*} Let $\mu_{k}$ be sufficiently small, by the above we have \begin{align*} \|\eta_{\mu_k}\|_{H^2}\leq\alpha_{\mu_k}\|Q\|_{H^2}+\alpha_{\mu_k}\mu_k C(p)+C\mu_k^{\frac{4}{3}}\alpha_{\mu_k}. \end{align*} Hence, $ \|\eta_{\mu_k}\|_{H^2}\rightarrow0$. This is a contradiction and the proof of this lemma is complete. \end{proof} Next lemma we shall give the general non-degeneracy property. \begin{lemma}\label{nondegeneracy} For the linear operator $L_{+,\mu}$ be given by \eqref{linear:operatordouble}, we have \begin{align*} \ker L_{+,\mu}=span\{\nabla Q_{\mu}\}. \end{align*} \end{lemma} \begin{proof} By the similar argument as \cite{L2009APDE}, we can obtain this lemma. For the reader's convenience, we will give the detail in the Appendix \ref{appendixnondegeneracy}. \end{proof} \section{Construction the approximate profile} In this section, we aim to construct the approximate blowup profile $R_{\mathcal{P}}$ with the parameter $b,\alpha$. For a sufficiently regular function $f:\mathbb{R}^3\rightarrow \mathbb{C}$, we define the generator of $L^2$ scaling given by \begin{align}\notag \Lambda f:=\frac{3}{2}f+x\cdot\nabla f. \end{align} Note that the operator $\Lambda$ is skew-adjoint on $L^2(\mathbb{R}^3)$, that is, we have $(\Lambda f,g)=-(f,\Lambda g)$. By the elementary calculation, we have the following algebraic identities, which very important in the this section. \begin{align}\label{identities:algebraic} L_{+,\mu}\Lambda Q_{\mu}=-2Q_{\mu},~~ L_{+,\mu}(\nabla Q_{\mu})=0,~~ L_{-,\mu}(Q_{\mu})=0. \end{align} From \cite{Cazenave:book,MV2013JFA}, we can obtain that $Q_{\mu}$ and its derivatives are exponentially decay: \begin{align}\notag |\nabla Q_{\mu}|+Q_{\mu}(x)\lesssim \frac{e^{-|x|}}{|x|},\,\,|x|\geq1. 
\end{align} By the above section (see Theorem \ref{Theorem1}), we have the following properties \begin{align}\label{solvability:conditiondou} \begin{cases} \forall g\in L^2(\mathbb{R}^3),~(g,\nabla Q)=0\,~\exists\, f_+\in L^2(\mathbb{R}^3),\,L_{+,\mu}f_+=g,\\ \forall g\in L^2(\mathbb{R}^3),~(g,Q_{\mu})=0,\,~\exists\, f_-\in L^2(\mathbb{R}^3),\,L_{-,\mu}f_-=g. \end{cases} \end{align} To construct minimal mass blowup solutions for problem \eqref{equ1:double}, we first renormalize the flow \begin{align}\notag u(t,x)=\frac{1}{\lambda^{\frac{3}{2}}(t)}v\left(s,\frac{x-\alpha(t)}{\lambda(t)}\right)e^{i\gamma(t)},\, \,\,\frac{ds}{dt}=\frac{1}{t^2}, \end{align} which leads the renormalized equation: \begin{equation}\label{equ:renormalizedD} i\partial_sv+\Delta v-v+|v|^{\frac{4}{3}}v+ \mu A(v^2)v=i\frac{\lambda_s}{\lambda}\Lambda v+i\frac{\alpha_t}{\lambda}\cdot\nabla v+\tilde{\gamma}_sv, \end{equation} where we have defined $\tilde{\gamma}_s=\gamma_s-1.$ Let the pseudo-conformal drift: \begin{align}\notag v=we^{-ib|y|^2/4}, \end{align} which leads to the slowly modulated equation \begin{align*} &i\partial_sw+\Delta w-w+|w|^{\frac{4}{3}}w+ \mu A(w^2)w +\frac{1}{4}(b_s+b^2)|y|^2w\notag\\ =&i\left(\frac{\lambda_s}{\lambda}+b\right)\Lambda w+b\left(b+\frac{\lambda_s}{\lambda}\right)\frac{|y|^2}{2}w+i\frac{\alpha_t}{\lambda}\left(\nabla w+\frac{iby}{2}\right)+ \tilde{\gamma}\tilde{w}. \end{align*} Since we look for blowup solutions, the parameter $\lambda(s)$ should converge to zero as $s\rightarrow\infty$. Therefore, we now proceed to the slow modulated ansatz construction as in \cite{KMR2009CPAM,RS2011JAMS,MRS2014Duke,LMR2016RMI,KLR2013ARMA}. We freeze the modulation equations \begin{align}\notag \frac{\lambda_s}{\lambda}=-b,\,\,\,\,\, b_s=-b^2,~~~\frac{\alpha_s}{\lambda}=d,~~~d_s=-2bd. \end{align} To have more clear information about the behaviour of these remodulation functions, we note that \begin{equation}\notag b(s) \sim s^{-1}, \lambda \sim s^{-1}, |d| \sim s^{-2}, |\alpha| \sim s^{-2} \end{equation} We look for an approximate solution to \eqref{equ:renormalizedD} of the form \begin{align}\notag v(s,y)=R_{(b(s),d(s))}(y) \end{align} with an expansion \begin{align}\label{eq:gex1} R_{\mathcal{P}}(y)=Q_{\mu}+\sum_{k+j\geq1}b^jd^k R_{j,k},~~\text{where}~~\mathcal{P}(s)=(b(s),d(s)),~~R_{j,k}=\left(T_{j,k}+iS_{j,k}\right). \end{align} This allows us to construct a high order approximation $R_{\mathcal{P}}$ solution to \begin{align}\label{equ:approximate:double} -ib^2\partial_bR_{\mathcal{P}}-ibd\cdot\partial_dQ_{\mathcal{P}}+\Delta R_{\mathcal{P}}&-R_{\mathcal{P}}+|R_{\mathcal{P}}|^{\frac{4}{3}}R_{\mathcal{P}} +\mu A(R_{\mathcal{P}}^2)R_{\mathcal{P}}\notag\\ &+ib\Lambda R_{\mathcal{P}}-id\cdot\nabla R_{\mathcal{P}}=-\Psi_{\mathcal{P}}, \end{align} where $\Psi_{\mathcal{P}}=\mathcal{O}(b^5+|d||\mathcal{P}|^2)$ is some small error term. \begin{lemma}\label{lemma:approximatedouble} (Approximate Blowup Profile) Let $\mathcal{P}=(b,d)\in\mathbb{R}\times\mathbb{R}^3$. There exists a smooth function $R_{\mathcal{P}}=R_{\mathcal{P}}(x)$ of the form \begin{align}\notag R_{\mathcal{P}}(y)=&Q_{\mu}+ibS_{1,0}+id\cdot S_{0,1}+bd\cdot T_{1,1}+b^2T_{2,0}+d^2\cdot T_{0,2}+ib^3S_{3,0}+b^4T_{4,0}+ib^2d\cdot S_{2,1} \end{align} is a solution satisfies \eqref{equ:approximate:double}. 
Here, the function $\{R_{k,l}\}_{0\leq k\leq 4,0\leq l\leq 2}$ satisfy the following regularity and decay bounds: \begin{align*} &\|R_{\mathcal{P}}\|_{H^2}+\|\Lambda R_{\mathcal{P}}\|_{H^2}\lesssim 1,~~ |R_{\mathcal{P}}(x)|+|\Lambda R_{\mathcal{P}}(x)|\lesssim e^{-|x|}. \end{align*} The remainder $\Psi$ in \eqref{equ:approximate:double} satisfies the estimate \begin{align}\label{approximate:decay:dou} \sup_{y\in\mathbb{R}^3}\left(|\Psi(y)|+|\nabla\Psi(y)|\right)\lesssim (b^5+|d|^2)e^{-c|y|},\,\,\text{where}\,\,0<c<1. \end{align} \end{lemma} \begin{proof} We shall use the complete expression \eqref{eq:gex1} and we divide the rest of the proof of this lemma as follows. \textbf{Step 1.} Determining the functions $\{T_{j,k}, S_{j,k}\}$. We discuss our ansatz for $R_{\mathcal{P}}$ to solve \eqref{equ:approximate:double} order by order. \textbf{Order} $\mathcal{O}(1):$ Clearly, we have that \begin{align}\notag -\Delta Q_{\mu}+Q_{\mu}-Q_{\mu}^{\frac{4}{3}}Q_{\mu}-\mu A\left(Q_{\mu}^2\right)(y)Q_{\mu}=0. \end{align} Since $Q_{\mu}=Q_{\mu}(|x|)>0$ being the ground state solution. \textbf{Order:} $\mathcal{O}(b)$: By the Taylor expansion, we have \begin{align*} |R_{\mathcal{P}}|^{\frac{4}{3}}R_{\mathcal{P}}=&|Q_{\mu}+b(T_{1,0}+iS_{1,0})|^{\frac{4}{3}}(Q_{\mu}+b(T_{1,0}+iS_{1,0}))\\ =&\left(Q_{\mu}^{\frac{4}{3}}+\frac{4}{3}bQ_{\mu}^{\frac{1}{3}}T_{1,0}\right)\left(Q_{\mu}+b(T_{1,0}+iS_{1,0})\right)+\mathcal{O}(b^2)\\ =&Q_{\mu}^{\frac{7}{3}}+\frac{7}{3} bQ_{\mu}^{\frac{4}{3}}T_{1,0}+bi Q_{\mu}^{\frac{4}{3}}S_{1,0} +\mathcal{O}(b^2), \end{align*} and the non-local term \begin{align*} A\left(|R_{\mathcal{P}}|^2\right)R_{\mathcal{P}}=&A\left(|Q_{\mu}+b(T_{1,0}+iS_{1,0})|^2\right)(Q_{\mu}+b(T_{1,0}+iS_{1,0}))\\ =&A\left(Q_{\mu}^2\right)Q_{\mu}+2bA\left(|Q_{\mu}T_{1,0}|\right)Q_{\mu}+bA\left(Q_{\mu}^2\right)T_{1,0}+ibA\left(Q_{\mu}^2\right)S_{1,0}+\mathcal{O}(b^2). \end{align*} Hence, we can obtain the equation \begin{align}\notag \begin{cases} L_{+,\mu}T_{1,0}=0,\\ L_{-,\mu}S_{1,0}=\Lambda Q_{\mu}. \end{cases} \end{align} Hence, choosing $T_{1,0}=0$ and note that $\Lambda Q_{\mu}\perp \ker L_{-,\mu}$ due to the fact that $(\Lambda Q_{\mu}, Q_{\mu})=0$. By \eqref{solvability:conditiondou}, there exists a unique $S_{1,0}$ satisfies the equation. \textbf{Order} $\mathcal{O}(d)$: By the similar expansion as above, so we have \begin{align}\notag \begin{cases} L_{+,\mu}T_{0,1}=0,\\ L_{-,\mu}S_{0,1}=-\nabla Q_{\mu}. \end{cases} \end{align} Choosing $T_{0,1}=0$, since $(\nabla Q_{\mu}, Q_{\mu})=0$, then there exists a solution $S_{0,1}\perp \ker L_{-,\mu}$. \textbf{Order}\ $\mathcal{O}(bd)$: We find that $R_{1,1}$ has to solve the equation \begin{align*} \begin{cases} L_{+,\mu}T_{1,1}=S_{0,1}-\Lambda S_{0,1}+\nabla S_{1,0}+\frac{4}{3}S_{1,0}S_{0,1}Q_{\mu}^{\frac{1}{3}}+2\mu A(S_{1,0}S_{0,1})Q_{\mu},\\ L_{-,\mu}S_{1,1}=0. \end{cases} \end{align*} Hence, { choosing $S_{1,1}=0$, we see that } the existence of solution to this equation is guaranteed if $$ S_{0,1}-\Lambda S_{0,1}+\nabla S_{1,0}+\frac{4}{3}S_{1,0}S_{0,1}Q_{\mu}^{\frac{1}{3}}+2\mu A(S_{1,0}S_{0,1})Q_{\mu} \perp Ker L_{+,\mu}^* $$ so Lemma \ref{nondegeneracy} and self - adjointness of $L_{+,\mu}$ require to check the following orthogonality condition \begin{align}\label{construct:bdclaim} \left(S_{0,1}-\Lambda S_{0,1}+\nabla S_{1,0}+\frac{4}{3}S_{1,0}S_{0,1}Q_{\mu}^{\frac{1}{3}}+2\mu A(S_{1,0}S_{0,1})Q_{\mu},\nabla Q_{\mu}\right)=0. 
\end{align} To verify it we use the commutator formula $[\Lambda,\nabla]=-\nabla$ and integrating by parts, we find \begin{align}\label{construct:bd1} -(\nabla Q_{\mu},\Lambda S_{0,1})=&(\Lambda\nabla Q,S_{0,1})=(\nabla\Lambda Q_{\mu},S_{0,1})-(\nabla Q_{\mu},S_{0,1})\notag\\ =&(\nabla L_{-,\mu}S_{1,0},S_{0,1})-(\nabla Q{\mu},S_{0,1}). \end{align} Next, since $L_{-,\mu}$ is self-adjoint, we observe that for any $f\in L^2(\mathbb{R}^3)$, \begin{align}\label{construct:bd2} (\nabla L_{-,\mu}f,S_{0,1})+(\nabla Q_{\mu},\nabla f)=&-(L_{-,\mu}f,\nabla S_{0,1})-(L_{-,\mu}S_{0,1},\nabla f)=(f,[\nabla,L_{-,\mu}]S_{0,1})\notag\\ =&-(f,(\nabla (Q_{\mu}^{\frac{4}{3}}+\mu A(Q_{\mu}^2)))\cdot S_{0,1})\notag\\ =&-\left(f,\frac{4}{3}Q_{\mu}^{\frac{1}{3}}\nabla Q_{\mu}\cdot S_{0,1}+\mu\nabla A(Q_{\mu}^2)\cdot S_{0,1}\right)\notag\\ =&-(\nabla Q_{\mu},\frac{4}{3}Q_{\mu}^{\frac{1}{3}}S_{0,1}f)-(f,2\mu A(Q_{\mu}\nabla Q_{\mu})\cdot S_{0,1})\notag\\ =&-(\nabla Q_{\mu},\frac{4}{3}Q_{\mu}^{\frac{1}{3}}S_{0,1}f)-(\nabla Q_{\mu},2\mu A(fS_{0,1})Q_{\mu}). \end{align} Combining \eqref{construct:bd1} and \eqref{construct:bd2}, we conclude that \eqref{construct:bdclaim} holds. Hence there exists a solution $T_{1,1}\perp\ker L_{+,\mu}$ and $T_{1,1} \in H^2.$ \textbf{Order} $\mathcal{O}(b^2):$ By the Taylor expansion, we can obtain the equation \begin{align*} \begin{cases} L_{+,\mu}T_{2,0}=\frac{2}{3}Q_{\mu}^{\frac{1}{3}}S_{1,0}^2+S_{1,0}-\Lambda S_{1,0}+\mu A(S_{1,0}^2)Q_{\mu},\\ L_{-,\mu}S_{2,0}=0 , \ \ { \text{hence \ \ $S_{2,0}=0$.}} \end{cases} \end{align*} The solvability condition reduce to \begin{align}\notag \left(\frac{2}{3}Q_{\mu}^{\frac{1}{3}}S_{1,0}^2+S_{1,0}-\Lambda S_{1,0}+\mu A(S_{1,0}^2)Q_{\mu},\nabla Q_{\mu}\right)=0. \end{align} { Now we can use the simple observation that $\left( f, \nabla g\right) =0 $ for any couple of radial $H^1$ functions, then we observe that $Q_{\mu}$ and $S_{1,0}$ are radial functions,so the orthogonality condition is true and there exists $ T_{2,0}\perp \ker L_{+,\mu}$ that satisfies the equation.} \textbf{Order} $\mathcal{O}(d^2):$ We have the following system \begin{align*} \begin{cases} L_{+,\mu}T_{0,2}=\nabla S_{0,1}+\frac{2}{3}S_{0,1}^2Q_{\mu}^{\frac{1}{3}}+\mu A(S_{0,1}^2)Q_{\mu},\\ L_{-,\mu}S_{0,2}=0. \end{cases} \end{align*} The solvability conditions reads \begin{align}\notag \left(\nabla S_{0,1}+\frac{2}{3}S_{0,1}^2Q_{\mu}^{\frac{1}{3}}+\mu A(S_{0,1}^2)Q_{\mu},\nabla Q_\mu\right)=0. \end{align} Obviously, this is true, since $Q_{\mu}$ is radial function and each component of $S_{0,1}$ is odd function in $x$. Hence, there exists a $T_{0,2}\perp \ker L_{+,\mu}$. \textbf{Order} $\mathcal{O}(b^3)$: By the Taylor expansion, we can obtain the equation \begin{equation}\notag \begin{cases} L_{+,\mu}T_{3,0}=0,\\ L_{-,\mu}S_{3,0}=\frac{4}{3}Q_{\mu}^{\frac{1}{3}}T_{2,0}S_{1,0}+\frac{2}{3}Q_{\mu}^{-\frac{2}{3}}S_{1,0}^3+\Lambda T_{2,0}-2T_{2,0}+\mu A(S_{1,0}^2)S_{1,0}+2\mu A(Q_{\mu}T_{2,0})S_{1,0}. 
\end{cases} \end{equation} Hence, choosing $T_{3,0}=0,$ we see that the solvability condition for $S_{3,0}$ is equivalent to \begin{align}\label{construction:S3dou} &-2(Q_{\mu},T_{2,0})+(Q_{\mu},\Lambda T_{2,0})+\frac{4}{3}\left(Q_{\mu},Q_{\mu}^{\frac{1}{3}}T_{2,0}S_{1,0}\right)+\frac{2}{3}\left(Q_{\mu},Q_{\mu}^{-\frac{2}{3}}S_{1,0}^3\right)\notag\\ &+\mu(Q_{\mu},A(S_{1,0}^2)S_{1,0})+2\mu A(Q_{\mu}T_{2,0})S_{1,0})=0, \end{align} where the functions $S_{1,0}$ and $T_{2,0}$ satisfy \begin{align*} L_{-,\mu}S_{1,0}=&\Lambda Q_{\mu},~~ L_{+,\mu}T_{2,0}=\frac{2}{3}Q_{\mu}^{\frac{1}{3}}S_{1,0}^2+S_{1,0}-\Lambda S_{1,0}+\mu A(S_{1,0}^2)Q_{\mu}. \end{align*} To see that \eqref{construction:S3dou} holds, we first note that \begin{align*} &\text{The left-hand side of \eqref{construction:S3dou}}\\ =&-2(Q_{\mu},T_{2,0})-(\Lambda Q_{\mu},T_{2,0})+\frac{4}{3}\left(T_{2,0},Q_{\mu}^{\frac{4}{3}}S_{1,0}\right)+\frac{2}{3}\left(Q_{\mu}^{\frac{1}{3}},S_{1,0}^3\right)\\ &+\mu(Q_{\mu},A(S_{1,0}^2)S_{1,0})+2\mu(Q_{\mu},A(Q_{\mu}T_{2,0})S_{1,0})\\ =&-2(Q_{\mu},T_{2,0})-(L_{-,\mu}S_{1,0},T_{2,0})+\frac{4}{3}(T_{2,0},Q_{\mu}^{\frac{4}{3}}S_{1,0})+\frac{2}{3}(Q_{\mu}^{\frac{1}{3}},S_{1,0}^3)\\ &+\mu(Q_{\mu},A(S_{1,0}^2)S_{1,0})+2\mu(Q_{\mu},A(Q_{\mu}T_{2,0})S_{1,0})\\ =&-2(Q_{\mu},T_{2,0})-(L_{+,\mu}S_{1,0},T_{2,0})+\frac{2}{3}\left(Q_{\mu}^{\frac{1}{3}},S_{1,0}^3\right)+\mu(Q_{\mu},A(S_{1,0}^2)S_{1,0})\\ =&-2(Q_{\mu},T_{2,0})-(S_{1,0},S_{1,0})+(S_{1,0},\Lambda S_{1,0})-\frac{2}{3}(Q_{\mu}^{\frac{1}{3}},S_{1,0}^3)+\frac{2}{3}(Q_{\mu}^{\frac{1}{3}},S_{1,0}^3)\\ &+\mu(Q_{\mu},A(S_{1,0}^2)S_{1,0})-\mu(S_{1,0},A(S_{1,0}^2)Q_{\mu})\\ =&-2(Q_{\mu},T_{2,0})-(S_{1,0},S_{1,0}), \end{align*} where in the last step we used that $(S_{1,0},\Lambda S_{1,0})=0$, since $\Lambda^*=-\Lambda$. Thus it remains to show that \begin{align}\label{construction:relation} -2(Q_{\mu},T_{2,0})=(S_{1,0},S_{1,0}). \end{align} Indeed, by using $L_{+,\mu}\Lambda Q_{\mu}=-2Q_{\mu}$ (see \eqref{identities:algebraic}) and the equations for $T_{2,0}$ and $S_{1,0}$ above, we deduce \begin{align}\label{construction3dou} -2(Q_{\mu},T_{2,0})=&(L_{+,\mu}\Lambda Q_{\mu},T_{2,0})\notag\\ =&\left(\Lambda Q_{\mu},\frac{2}{3}Q_{\mu}^{\frac{1}{3}}S_{1,0}^2+S_{1,0}-\Lambda S_{1,0}+\mu A(S_{1,0}^2)Q_{\mu}\right)\notag\\ =&(L_{-,\mu}S_{1,0},S_{1,0})-(L_{-,\mu}S_{1,0},\Lambda S_{1,0})+\frac{2}{3}(\Lambda Q_{\mu},Q_{\mu}^{\frac{1}{3}}S_{1,0}^2) +\mu\left(\Lambda Q_{\mu},A(S_{1,0}^2)Q_{\mu}\right)\notag\\ =&(S_{1,0},-\Delta S_{1,0})+(S_{1,0},S_{1,0})-(S_{1,0},Q_{\mu}^{\frac{4}{3}}S_{1,0})-\mu\left( A(Q_{\mu}^2)S_{1,0},S_{1,0}\right)\notag\\ &-(L_{-,\mu}S_{1,0},\Lambda S_{1,0})+\frac{2}{3}(\Lambda Q_{\mu},Q_{\mu}^{\frac{1}{3}}S_{1,0}^2)+\mu\left(\Lambda Q_{\mu},A(S_{1,0}^2)Q_{\mu}\right). \end{align} Next, we have the commutator formula $(L_{-,\mu}f,\Lambda f)=\frac{1}{2}(f,[L_{-,\mu},\Lambda]f),$ which show that \begin{align}\label{construction4dou} &(L_{-,\mu}S_{1,0},\Lambda S_{1,0})\notag\\ =&\frac{1}{2}(S_{1,0},[L_{-,\mu},\Lambda]S_{1,0})\notag\\ =&\frac{1}{2}(S_{1,0},[-\Delta,\Lambda]S_{1,0})-\frac{1}{2}\left(S_{1,0},[Q_{\mu}^{\frac{4}{3}},\Lambda]S_{1,0}\right)-\frac{1}{2}\mu\left(S_{1,0},[A(|Q_{\mu}|^2),\Lambda]S_{1,0}\right)\notag\\ =&(S_{1,0},-\Delta S_{1,0})+\frac{2}{3}\left(S_{1,0},(x\cdot\nabla Q_{\mu})Q_{\mu}^{\frac{1}{3}}S_{1,0}\right)-\frac{1}{2}\mu\left(S_{1,0},[A(|Q_{\mu}|^2),\Lambda]S_{1,0}\right) \end{align} using that $[-\Delta,\Lambda]=-2\Delta$ holds. 
Moreover, we have the pointwise identity \begin{align}\label{construction5dou} -(x\cdot\nabla Q_{\mu})Q_{\mu}^{\frac{1}{3}}+Q_{\mu}^{\frac{1}{3}}\Lambda Q_{\mu}=\frac{3}{2}Q^{\frac{4}{3}}. \end{align} Furthermore, we have \begin{align}\label{construction6dou} (S_{1,0},[A(|Q_{\mu}|^2),\Lambda]S_{1,0})=&(S_{1,0},x\cdot\nabla A(Q_{\mu}^2)S_{1,0})=(S_{1,0},(x\cdot\nabla(-\Delta)^{-\frac{1}{2}}Q_{\mu}^2)S_{1,0})\notag\\ =&(S_{1,0},((-\Delta)^{-\frac{1}{2}}x\cdot\nabla Q_{\mu}^2)S_{1,0})+(S_{1,0},(-\Delta)^{-\frac{1}{2}}Q_{\mu}^2S_{1,0})\notag\\ =&2(S_{1,0},(|x|^{-2}*(Q_{\mu}x\cdot\nabla Q_{\mu})S_{1,0})+(S_{1,0},A(Q_{\mu}^2)S_{1,0})\notag\\ =&2(\Lambda Q_{\mu},A(S_{1,0}^2)Q_{\mu}). \end{align} Here we also used the commutator formula $[(-\Delta)^{-\frac{1}{2}},x\cdot\nabla]=-(-\Delta)^{-\frac{1}{2}}.$ Now if we insert \eqref{construction4dou}, \eqref{construction5dou} and \eqref{construction6dou} into \eqref{construction3dou}, we can obtain the desire relation \eqref{construction:relation}, and thus the solvability condition \eqref{construction:S3dou} holds as well. \textbf{Order} $\mathcal{O}(b^4)$: By the Taylor expansion, we have the following equation \begin{equation*} \begin{cases} L_{+,\mu}T_{4,0}=-\frac{4}{3}Q_{\mu}^{\frac{1}{3}}S_{1,0}S_{3,0}+\frac{14}{9}Q_{\mu}^{\frac{1}{3}}T_{2,0}^2-\frac{1}{9}(Q_{\mu}^{-\frac{5}{3}}S_{1,0}^4+4Q_{\mu}^{-\frac{2}{3}}T_{2,0}S_{1,0}^2)+3S_{3,0}-\Lambda S_{3,0}+\mu B_1,\\ L_{-,\mu}S_{4,0}=0, \end{cases} \end{equation*} where $B_1=A(2Q_{\mu}T_{2,0}+S_{1,0}^2)T_{2,0}+A(T_{2,0}^2+2S_{1,0}S_{3,0})Q_{\mu}.$ By the above construction, we can obtain that $S_{1,0}$, $S_{3,0}$, $T_{2,0}$ are radial functions. Hence, there exists $T_{4,0}\perp\ker L_{+,\mu}$ satisfies the above equation. \textbf{Order} $\mathcal{O}(b^2d)$: By the calculation, we can obtain the following equation \begin{align*} \begin{cases} L_{+,\mu}T_{2,1}=0,\\ L_{-,\mu}S_{2,1}=\frac{4}{3}Q_{\mu}^{\frac{1}{3}}T_{1,1}S_{1,0}+\frac{4}{3}Q_{\mu}^{\frac{1}{3}}T_{2,0}S_{0,1}+2Q_{\mu}^{-\frac{2}{3}}S_{1,0}^2S_{0,1} -3T_{1,1}+\Lambda T_{1,1}-\nabla T_{2,0}+\mu B_1, \end{cases} \end{align*} where $B_2=A(2Q_{\mu}T_{2,0}+S_{1,0}^2)S_{0,1}+A(S_{1,0}S_{0,1})S_{1,0}$. Since $Q_{\mu}$, $S_{1,0}$, $T_{2,0}$ are radial functions and each components in $S_{0,1}$, $T_{1,1}$ are odd functions. Hence, there is $S_{2,1}\perp \ker L_{-,\mu}$. \begin{lemma}\label{lemma1decaydou} Let $f,g\in L^2(\mathbb{R}^3)$ and suppose that $f\perp Q_{\mu}$, $g\perp\nabla Q_{\mu}$. Then we have the regularity and decay estimate \begin{align*} \|L_{-,\mu}^{-1}f\|_{H^{2}}\lesssim\|f\|_{L^2},\,\, \|L_{+,\mu}^{-1}g\|_{H^{2}}\lesssim\|g\|_{L^2},\\ \|e^{c|x|}L_{-,\mu}^{-1}f\|_{H^2}\lesssim\|e^{c|y|}f\|_{L^2},\,\, \|e^{c|x|}L_{+,\mu}^{-1}g\|_{H^2}\lesssim\|e^{c|y|}g\|_{L^2}\,\,\,\text{where}\,\,0<c<1. \end{align*} \end{lemma} \begin{proof} It suffices to prove the lemma for $L_{-,\mu}^{-1}$, since the estimates for $L_{+,\mu}^{-1}$ follow in the same fashion. To show the decay estimate, we argue as follows. Assume that $\|e^{c|y|}f\|_{L^2}<+\infty$, because otherwise there is nothing to prove. Let $u=L_{-,\mu}^{-1}f$, and rewrite the equation satisfied by $u$ in resolvent form: \begin{align*} (-\Delta+1) u= Q_{\mu}^{\frac{4}{3}}u+\mu A(Q_{\mu}^2)u+ f. \end{align*} In fact, by the elliptic regularity theorem (see \cite{Cazenave:book,MV2013JFA}), we have $Q_{\mu}\in W^{2,p}(\mathbb{R}^3)$, where $p\geq1$. And $Q_{\mu}(|x|)=\frac{e^{-|x|}}{|x|}(c_0+\mathcal{O}(\frac{1}{|x|}))$ as $|x|\geq R$. 
Hence, we have \begin{align*} \|e^{c|y|}u\|_{H^2(|y|\geq R)}\sim&\|(-\Delta+1)(e^{c|y|}u)\|_{L^2(|y|\geq R)}\lesssim\|e^{c|y|}(-\Delta+1)u\|_{L^2(|y|\geq R)}\\ =&\|e^{c|y|}Q_{\mu}^{\frac{4}{3}}u\|_{L^2(|y|\geq R)}+\mu\|e^{c|y|}A(Q_{\mu}^2)u\|_{L^2(|y|\geq R)}+\|e^{c|y|}f\|_{L^2(|y|\geq R)}. \end{align*} From this inequality, we can deduce that \begin{align*} \|e^{c|y|}u\|_{H^2(|y|\geq R)}\lesssim \|e^{c|y|}f\|_{L^2}. \end{align*} On the other hand, we have \begin{align*} \|e^{c|y|}u\|_{H^2(|y|\leq R)}\lesssim\|u\|_{H^2(|y|\leq R)}\lesssim\|f\|_{L^2(|y|\leq R)}\lesssim\|e^{c|y|}f\|_{L^2}. \end{align*} From this, we can obtain the desired result. \end{proof} \textbf{Step 2.} Now, we turn back to the prove \eqref{approximate:decay:dou}. By the above lemma \ref{lemma1decaydou} and the similar argument as \cite{LMR2016RMI}, we can easily obtain \eqref{approximate:decay:dou}. For the regularity and decay estimate, this can be obtain by the following lemma \ref{lemmlast}. \end{proof} The following lemma show that the approximate profile $Q_{\mathcal{P}}$ is well-define. \begin{lemma}\label{lemmlast} By the definition of $R_{\mathcal{P}}$, we have \begin{align*} |R_{\mathcal{P}}|\lesssim Q_{\mu}. \end{align*} \end{lemma} \begin{proof} Since $R_{\mathcal{P}}=Q_{\mu}+\sum_{0\leq k\leq 4,0\leq l\leq2}R_{k,l}$, we need to prove \begin{align}\notag \left|\frac{R_k}{Q_{\mu}}\right|\lesssim1. \end{align} For any $f_k\in L^2(\mathbb{R}^3)$ and $|f_k|\lesssim e^{-c|x|}$, $c\geq1$, we assume that \begin{align*} L_{-,\mu}F_{k,l}=f_{k,l},\,\,\,\,\text{or}\,\,L_{+,\mu}F_{k,l}=f_{k,l}. \end{align*} In other words, we have to prove \begin{align*} L_{+,\mu}^{-1}: L_{G^c}^{\infty}:=\left\{f:\,\frac{f}{G^c}\in L^{\infty}\right\}\rightarrow L_{G^1}^{\infty}=:\left\{f:\,\frac{f}{G}\lesssim1\right\}, \end{align*} where $G(x)=\frac{e^{-|x|}}{|x|}$ and $c\geq1$. \textbf{Step~1}: We first prove the following holds: \begin{align*} G^{-1}(1-\Delta)^{-1} G^c: L^{\infty} \rightarrow L^{\infty}, \ c \geq 1. \end{align*} Indeed, we have \begin{align*} \left|G*f_k\right|\lesssim\left|\int\frac{e^{-|x-y|}}{|x-y|}\frac{e^{-c|y|}}{|y|^c}\frac{f_k}{G^c}\right|\lesssim G(x). \end{align*} \textbf{Step~2}: Let $L_+ = -\Delta+1-\frac{7}{3}Q^{\frac{4}{3}}.$ Our goal is to show that \begin{align*} G^{-1}(L_+)^{-1} G^c: L^{\infty} \rightarrow L^{\infty}, \ c \geq 1. \end{align*} Indeed, let $L_+u=g$, i.e., \begin{align} \label{eq:st2} (-\Delta+1)u=\frac{7}{3}Q_{\mu}^{\frac{4}{3}}u+g. \end{align} Using the $H^2$ estimate for $L_+$ we can write $$ \|e^{d|x|}L_{+}^{-1}g\|_{H^2}\lesssim\|e^{d|y|}g\|_{L^2}, ~~0 \leq d <1.$$ So from the Strauss estimate we get $$\|e^{c|x|}u\|_{L^\infty(|x|\geq 1)} \lesssim 1.$$ Plugging this estimate in the right hand side of \eqref{eq:st2} and using Step 1, we get $|u|\lesssim G.$ \textbf{Step~3}: The operator $K: f\rightarrow A(Q_{\mu}f)$ maps $ L^{\infty}_G$ into $ L^{\infty}_{G^{1-\epsilon}}$. Since $\mu>0$ is small enough, then we deduce that \begin{align*} G^{-1}(L_{+,\mu})^{-1} G^c: L^{\infty} \rightarrow L^{\infty}, \ c \geq 1. \end{align*} By the similar argument, we can obtain that $L_{-,\mu}^{-1}$ also satisfies this property. The proof of this lemma is now complete. \end{proof} \begin{remark}\label{remark1} (i) Note that $L_{-,\mu}>0$ on $Q_{\mu}^{\perp}$ and we have $S_{1,0}\perp Q_{\mu}$, $S_{0,1}\perp Q_{\mu}$. 
(ii) The proof of lemma \ref{lemma:approximatedouble} actually show that $R_{\mathcal{P}}$ satisfy \begin{align*} R_{\mathcal{P}}&=(Q_{\mu}+bd\cdot T_{1,1}+b^2T_{2,0}+d^2\cdot T_{0,2}+b^4T_4)+i(bS_{1,0}+d\cdot S_{0,1}+b^3S_{3,0}+b^2d\cdot S_{2,1})\\ &=R_1+iR_2. \end{align*} \end{remark} Let us compute the $L^2$-norm and energy of $R_{\mathcal{P}}$, which will appear as important quantities in the analysis. \begin{lemma} The mass, energy and momentum of $R_{\mathcal{P}}$ satisfy: \begin{align*} &\int|R_{\mathcal{P}}|^2=\int|Q_{\mu}|^2+\mathcal{O}(b^4+|d|^2+|d\mathcal{P}|^2 ),\\ &E_{\mu}(R_{\mathcal{P}})=b^2e_{\mu}+\mathcal{O}(b^4+|d|^2+|d\mathcal{P}|^2),\\ &P(R_{\mathcal{P}})=p_{\mu}d+\mathcal{O}(b^4+|d|^2+|d\mathcal{P}|^2), \end{align*} where $e_{\mu}=\frac{1}{2}(L_{-,\mu}S_{1,0},S_{1,0})>0$ and $p_{\mu}=2(L_{-,\mu}S_{0,1},S_{0,1})$ are constants and $S_{1,0}$, $S_{0,1}$ satisfy $L_{-,\mu}S_{1,0}=\Lambda Q_{\mu}$, $L_{-,\mu}S_{0,1}=-\nabla Q_{\mu}$, respectively. \end{lemma} \begin{proof} From the lemma \ref{lemma:approximatedouble} and the Remark \ref{remark1}, we deduce that \begin{align*} \int|R_{\mathcal{P}}|^2=&\int Q_{\mu}^2+b^2(S_{1,0},S_{1,0})+2b^2(Q_{\mu},T_{2,0})+\mathcal{O}(b^4+|d|^2+|d\mathcal{P}|^2)\\ =&\int Q_{\mu}^2+\mathcal{O}(b^4+|d|^2+|d\mathcal{P}|^2), \end{align*} where we use the relation $(S_{1,0},S_{1,0})+(Q_{\mu},T_{2,0})=0$, see \eqref{construction:relation}. To calculate the expansion of the energy, we first recall that $E_{\mu}(Q_{\mu})=0$, this can be obtained by the Pohozaev identity \eqref{Pohozaev:identitydou} and the equation \eqref{equ:ell1}. Moreover, from the remark \ref{remark1}, we have $(Q_{\mu},S_{1,0})=0$, we obtain \begin{align*} E_{\mu}(R_{\mathcal{P}}) =&b^2\Bigg[\left(T_{2,0},-\Delta Q_{\mu}-Q_{\mu}^{\frac{7}{3}}-\mu A(Q_{\mu}^2)Q_{\mu}\right)+\frac{1}{2}(S_{1,0},-\Delta S_{1,0})-\frac{1}{2}b^2(Q_{\mu}^{\frac{4}{3}},S_{1,0}^2)\\ &-\mu\frac{1}{2}\int A(Q_{\mu}^2)S_{1,0}^2\Bigg]+\mathcal{O}(b^4+|d|^2+|d\mathcal{P}|^2)\\ =&b^2\Bigg[-(T_{2,0}, Q_{\mu})+\frac{1}{2}(S_{1,0},-\Delta S_{1,0})-\frac{1}{2}(Q_{\mu}^{\frac{4}{3}},S_{1,0}^2)-\mu\frac{1}{2}\int A(Q_{\mu}^2)S_{1,0}^2\Bigg]\\ &+\mathcal{O}(b^4+|d|^2+|d\mathcal{P}|^2)\\ =&b^2\frac{1}{2}\Bigg[(S_{1,0},S_{1,0})+(S_{1,0},-\Delta S_{1,0})-(Q_{\mu}^{\frac{4}{3}},S_{1,0}^2)-\mu\int A(Q_{\mu}^2)S_{1,0}^2\Bigg]+\mathcal{O}(b^4+|d|^2+|d\mathcal{P}|^2)\\ =&b^2\frac{1}{2}(L_{-,\mu}S_{1,0},S_{1,0})+\mathcal{O}(b^4+|d|^2+|d\mathcal{P}|^2). \end{align*} For the linear momentum functional, we notice that $P(f)=2\int f_1\nabla f_2$ for $f=f_1+if_2$. Hence \begin{align*} P(R_{\mathcal{P}})=&2\int bQ_{\mu}\nabla S_{1,0}+2d\int Q_{\mu}\nabla S_{0,1}+2\int b^2d\cdot T_{1,1}\nabla S_{1,0}+2\int b^3T_{2,0}\nabla S_{1,0}\\ &+\mathcal{O}(b^4+|d|^2+|d\mathcal{P}|^2)\\ =&2d(L_{-,\mu}S_{0,1},S_{0,1})+\mathcal{O}(b^4+|d|^2+|d\mathcal{P}|^2). \end{align*} Here we used the fact that $L_{-,\mu}S_{0,1}=-\nabla Q_{\mu}$ and $S_{1,0}$, $T_{1,1}$, $T_{2,0}$ are radial function. The proof of this lemma is now complete. \end{proof} \section{Energy Estimates} \subsection{Nonlinear decomposition of the wave and modulation equations} Let $u(t)\in H^1(\mathbb{R}^3)$ be a solution of equation \eqref{equ1:double} on some time interval $[t_0,t_1]$ with $t_1<0$. 
Assume that $u(t)$ admits a geometrical decomposition of the form \begin{align}\label{decom:solutiondou} u(t,x)=\frac{1}{\lambda^{\frac{3}{2}}(t)}\big[R_{\mathcal{P}}+\epsilon\big]\left(t,\frac{x-\alpha(t)}{\lambda(t)}\right)e^{i\gamma(t)}, \end{align} with a uniform smallness bound on $[t_0,t_1]$: \begin{align}\notag b^2(t)+|d(t)|+\|\epsilon(t)\|_{H^1}^2\lesssim\lambda^2(t)\ll1. \end{align} Moreover, we assume that $u(t)$ has almost critical mass in the sense: $\forall\,t\in[t_0,t_1]$, \begin{align}\notag \left|\|u(t)\|_{L^2}^2-\|Q_{\mu}\|_{L^2}^2\right|\lesssim \lambda^{4}(t). \end{align} From a standard modulation argument, see e.g.\cite{RS2011JAMS,MR2005Ann}, the uniqueness of the nonlinear decomposition \eqref{decom:solutiondou} may be ensured by imposing a suitable set of orthogonality condition on $\epsilon=\epsilon_1+i\epsilon_2\in H^1(\mathbb{R}^3)$; namely, \begin{align}\label{orthogonality1dou} \begin{array}{c} (\epsilon_2,\Lambda R_1)-(\epsilon_1,\Lambda R_2)=0,\\ (\epsilon_2,\partial_bR_1)-(\epsilon_1,\partial_bR_2)=0,\\ (\epsilon_2,\rho_1)-(\epsilon_1,\rho_2)=0,\\ (\epsilon_2,\nabla R_1)-(\epsilon_1,\nabla R_2)=0,\\ (\epsilon_2,\partial_dR_1)-(\epsilon_1,\partial_dR_2)=0, \end{array} \end{align} where $R_1$ and $R_2$ are given by Remark \ref{remark1} and $\rho=\rho_1+i\rho_2$ is the unique function defined by \begin{align}\notag \begin{cases} L_{+,\mu}\rho_1=S_{1,0},\\ L_{-,\mu}\rho_2=\frac{4}{3}bQ_{\mu}^{\frac{1}{3}}S_{1,0}\rho_1+b\Lambda\rho_1-2bT_{2,0}+\mu \left(2A(Q_{\mu}\rho_1)S_{1,0}\right)\\ ~~~~+\frac{4}{3}dQ_{\mu}^{\frac{1}{3}}S_{0,1}\rho_1+d\cdot\nabla\rho_1+d \cdot T_{1,1}+2\mu A(Q_{\mu}\rho_1)S_{0,1}. \end{cases} \end{align} By \eqref{solvability:conditiondou}, $\rho_1$ is well-defined. Moreover, it is easy to see that the right-hand side in the equation for $\rho_2$ is perpendicular to $Q_{\mu}$. Hence, $\rho_2$ is well-defined, too. The orthogonality conditions \eqref{orthogonality1dou} correspond exactly in the cases $\mathcal{P}=0$ to the null space of the linearized operator close to $Q_{\mu}$ see \eqref{identities:algebraic}. From a standard argument, the obtained modulation parameters are $C^1$ functions of time; see \cite{RS2011JAMS,MR2005Ann},for related statements. Let \begin{align}\notag s(t)=\int_{t_0}^{t_1}\frac{d\tau}{\lambda^2(\tau)}, \end{align} be the rescaled time. Then, for $s\in[s_0,s_1]$, the function $\epsilon$ satisfies the system \begin{align}\label{equ:mod1} &(b_s+b^2)\partial_b R_1+(d_s+bd)\cdot\partial_dR_1+\partial_s\epsilon_1-M_2(\epsilon)+b\Lambda\epsilon_1-d\cdot\nabla\epsilon_1\notag\\ =&\left(\frac{\lambda_s}{\lambda}+b\right)(\Lambda R_1+\Lambda\epsilon_1)+\left(\frac{\alpha_s}{\lambda}-d\right)\cdot(\nabla R_1+\nabla\epsilon_1)+\tilde{\gamma}_s(R_2+\epsilon_2)+\Im{\Phi_b}-P_2(\epsilon),\\\label{equ:mod2} &(b_s+b^2)\partial_b R_2+(d_s+bd)\cdot\partial_dR_2+\partial_s\epsilon_2+M_1(\epsilon)+b\Lambda\epsilon_2-d\cdot\nabla\epsilon_2\notag\\ =&\left(\frac{\lambda_s}{\lambda}+b\right)(\Lambda R_2+\Lambda\epsilon_2)\left(\frac{\alpha_s}{\lambda}-d\right)\cdot(\nabla R_2+\nabla\epsilon_2)-\tilde{\gamma}_s(R_1+\epsilon_1)-\Re{\Phi_b}+P_1(\epsilon). 
\end{align} Here $\Phi_b$ denotes the error term and $M=(M_1,M_2)$ are small deformations of the linearized operator $L=(L_{+,\mu},L_{-,\mu})$ close to $Q_{\mu}$: \begin{align}\label{define:M1dou} M_1(\epsilon)=-\Delta\epsilon_1+\epsilon_1-|R_{\mathcal{P}}|^{\frac{4}{3}}\epsilon_1-\frac{4}{3}|R_{\mathcal{P}}|^{-\frac{2}{3}}(R_1R_2\epsilon_2+R_1^2\epsilon_1)-\mu D_1(\epsilon),\\\label{define:M2dou} M_2(\epsilon)=-\Delta\epsilon_2+\epsilon_2-|R_{\mathcal{P}}|^{\frac{4}{3}}\epsilon_2-\frac{4}{3}|R_{\mathcal{P}}|^{-\frac{2}{3}}(R_1R_2\epsilon_1+R_2^2\epsilon_2)-\mu D_2(\epsilon), \end{align} where $D_1$ and $D_2$ are given by \begin{align*} D_1{\epsilon}=A(2R_1\epsilon_1-2R_2\epsilon_2)R_1+A(R_1^2-R_2^2)\epsilon_1,~ D_2{\epsilon}=A(2R_1\epsilon_1-2R_2\epsilon_2)R_2+A(R_1^2-R_2^2)\epsilon_2. \end{align*} In addition, $P_1(\epsilon)$ and $P_2(\epsilon)$ are the high order terms respect to $\epsilon$. Let us collect the standard preliminary estimates on this decomposition which rely on the conservation laws and the explicit choice of orthogonality conditions. \begin{lemma}\label{lemma:modestimate} (\textbf{Preliminary estimates on the decomposition.}) For $t\in[t_0,t_1]$, it holds that 1. Energy and Momentum bound: \begin{align}\notag b^2+|d|+\|\epsilon\|_{H^1(\mathbb{R}^3)}^2\lesssim \lambda^2(|E_{0,\mu}|+|P_{0,\mu}|)+\mathcal{O}(\lambda^{4}+b^4+|d|^2+|d\mathcal{P}|^2). \end{align} Here $E_{0,\mu}=E_{\mu}(u_0)$ and $P_{0,\mu}=P_{\mu}(u_0)$ denote the conserved energy and momentum of $u=u(t,x)$, respectively. 2. Control of the geometrical parameters: Let the vector of modulation equations be \begin{align}\notag Mod(t)=\left(b_s+b^2,\frac{\lambda_s}{\lambda}+b,\tilde{\gamma_s},\frac{\alpha_s}{\lambda}-d,d_s+bd\right). \end{align} Then the modulation equations are to leading order: \begin{align}\notag |Mod(t)|\lesssim\lambda^4+b^4+|d|^2+|d\mathcal{P}|^2+b^2\|\epsilon\|_{L^2}+\|\epsilon\|_{L^2}^2+\|\epsilon\|_{H^1}^3, \end{align} with the improvement \begin{align}\notag \left|\frac{\lambda_s}{\lambda}+b\right|\lesssim b^5+b^2\|\epsilon\|_{L^2}+\|\epsilon\|_{L^2}^2+\|\epsilon\|_{H^1}^3. \end{align} \end{lemma} \begin{proof} \textbf{Step 1.} By the similar argument as \cite{RS2011JAMS,GL2021CPDE,GL2021JFA,KLR2013ARMA}, we can obtain the energy and momentum estimates. \textbf{Step 2.} Estimate the modulation parameters. 1. \textbf{Inner products.} We compute the inner products needed to compute the law of the parameters from the $R_{\mathcal{P}}$, where $M_1,M_2$ are given by \eqref{define:M1dou} and \eqref{define:M2dou}, respectively. The following estimates hold. 
\begin{align}\label{estimateM1dou} &(M_2(\epsilon)-b\Lambda \epsilon_1+d\cdot\nabla \epsilon_1,\Lambda R_2)+(M_1(\epsilon)+b\Lambda \epsilon_2-d\cdot\nabla \epsilon_1,\Lambda R_1)=-\Re(\epsilon,R_{\mathcal{P}})+\mathcal{O}(\mathcal{P}^2\|\epsilon\|_{L^2}),\\\label{estimateM2dou} &(M_2(\epsilon)-b\Lambda \epsilon_1+d\cdot\nabla \epsilon_1,\partial_bR_2)+(M_1(\epsilon)+b\Lambda \epsilon_2-d\cdot\nabla \epsilon_1,\partial_bR_1)=\mathcal{O}(\mathcal{P}^2\|\epsilon\|_{L^2}),\\\label{estimateM3dou} &(M_2(\epsilon)-b\Lambda \epsilon_1+d\cdot\nabla \epsilon_1,\rho_2)+(M_1(\epsilon)+b\Lambda \epsilon_2-d\cdot\nabla \epsilon_1,\rho_1)=\mathcal{O}(\mathcal{P}^2\|\epsilon\|_{L^2}),\\\label{estimateM4dou} &(M_2(\epsilon)-b\Lambda \epsilon_1+d\cdot\nabla \epsilon_1,\nabla R_2)+(M_1(\epsilon)+b\Lambda \epsilon_2-d\cdot\nabla \epsilon_1,\nabla R_1)=\mathcal{O}(\mathcal{P}^2\|\epsilon\|_{L^2}),\\\label{estimateM5dou} &(M_2(\epsilon)-b\Lambda \epsilon_1+d\cdot\nabla \epsilon_1,\partial_dR_2)+(M_1(\epsilon)+b\Lambda \epsilon_2-d\cdot\nabla \epsilon_1,\partial_d R_1)=\mathcal{O}(\mathcal{P}^2\|\epsilon\|_{L^2}). \end{align} We notice that the identity \begin{align}\label{identityS1dou} &L_{-,\mu}\Lambda S_{1,0}=-2(S_{1,0}-\Lambda Q_{\mu})+\frac{4}{3}(\Lambda Q_{\mu})Q_{\mu}^{\frac{1}{3}}S_{1,0}+\Lambda^2Q_{\mu}+2\mu A(\Lambda Q_{\mu}Q_{\mu})S_{1,0},\\\notag &L_{-,\mu}\Lambda S_{0,1}=-S_{0,1}-\nabla Q_{\mu}+\frac{4}{3}(\Lambda Q_{\mu})Q_{\mu}^{\frac{1}{3}}S_{0,1}+2\mu A(\Lambda Q_{\mu}Q_{\mu})S_{0,1}-\Lambda\nabla Q_{\mu}. \end{align} This two identities can deduce from $L_{-,\mu}S_{1,0}=\Lambda Q_{\mu}$ and $L_{-,\mu}S_{0,1}=-\nabla Q_{\mu}$. Next, recall that \begin{align*} \Lambda R_1=\Lambda Q_{\mu}+\mathcal{O}(\mathcal{P}^2),\,\,\Lambda R_2=b\Lambda S_{1,0}+d\Lambda S_{0,1}+\mathcal{O}(\mathcal{P}^2). \end{align*} Using the equality \eqref{identityS1dou} and \eqref{identities:algebraic}, we find that \begin{align*} &\text{Left-hand side of \eqref{estimateM1dou}}\\ =&(\epsilon_1,L_{+,\mu}\Lambda Q_{\mu})+b(\epsilon_2,L_{-,\mu}\Lambda S_{1,0})+d\cdot(\epsilon_2,L_{-,\mu}\Lambda S_{0,1})-\frac{4}{3}b(Q_{\mu}^{\frac{1}{3}}S_{1,0}\epsilon_2,\Lambda Q_{\mu})\\ &-\frac{4}{3}d\cdot(Q_{\mu}^{\frac{1}{3}}S_{0,1}\epsilon_2,\Lambda Q_{\mu})-b(\epsilon_2,\Lambda^2Q_{\mu})-d\cdot(\epsilon_2,\nabla\Lambda Q_{\mu})\\ &-2b (A(Q_{\mu}\Lambda Q_{\mu})S_{1,0},\epsilon_2)-2d\cdot(A(Q_{\mu}\Lambda Q_{\mu})S_{0,1},\epsilon_2)+\mathcal{O}(\mathcal{P}^2\|\epsilon\|_{L^2})\\ =&-2(\epsilon_1,Q_{\mu})-b(\epsilon_2,S_{1,0})+b(\epsilon_2,\Lambda Q_{\mu})-d\cdot(\epsilon_2,S_{0,1})-d\cdot(\epsilon_2,\nabla Q_{\mu})+\mathcal{O}(\mathcal{P}^2\|\epsilon\|_{L^2})\\ =&-2\Re{(\epsilon,R_{\mathcal{P}})}+\mathcal{O}(\mathcal{P}^2\|\epsilon\|_{L^2}). \end{align*} Here, we also used that $b(\epsilon_2,\Lambda Q_{\mu})=\mathcal{O}(\mathcal{P}^2\|\epsilon\|_{L^2})$, $d\cdot(\epsilon_2,\nabla Q_{\mu})=\mathcal{O}(\mathcal{P}^2\|\epsilon\|_{L^2})$, which follows from the orthogonality conditions \eqref{orthogonality1dou}. This completes the proof \eqref{estimateM1dou}. 
\textbf{Estimate \eqref{estimateM2dou}.} From the lemma \ref{lemma:approximatedouble} we recall that \begin{align*} \partial_bR_1=2bT_{2,0}+d\cdot T_{1,1}+\mathcal{O}(\mathcal{P}^2),\,\,\,\partial_bR_2=S_{1,0}+\mathcal{O}(\mathcal{P}^2), \end{align*} where \begin{align*} L_{+,\mu}T_{2,0}=&\frac{2}{3}Q_{\mu}^{\frac{1}{3}}S_{1,0}^2+S_{1,0}-\Lambda S_{1,0}+\mu A(S_{1,0}^2)Q_{\mu},\\ L_{+,\mu}T_{1,1}=&S_{0,1}-\Lambda S_{0,1}+\nabla S_{1,0}+\frac{4}{3}S_{1,0}S_{0,1}Q_{\mu}^{\frac{1}{3}}+2\mu A(S_{1,0}S_{0,1})Q_{\mu}. \end{align*} Using this fact, we compute \begin{align*} &\text{Left-hand side of \eqref{estimateM2dou}}\\ =&(\epsilon_2,L_{-,\mu}S_{1,0})-\frac{4}{3}b(Q_{\mu}^{\frac{1}{3}}S_{1,0}\epsilon_1,S_{1,0})-\frac{4}{3}d\cdot(Q_{\mu}^{\frac{1}{3}}S_{0,1}\epsilon_1,S_{1,0})-2\mu b(A(Q_{\mu}\epsilon_1)S_{1,0},S_{1,0})\\ &-\mu d\cdot (A(Q_{\mu}\epsilon_1)S_{0,1},S_{1,0})+b(\epsilon_1,\Lambda S_{1,0})-d\cdot(\epsilon_1,\nabla S_{1,0})+2b(\epsilon_1,L_{+,\mu}T_{2,0})\\ &+d\cdot(\epsilon_1,L_{+,\mu}T_{1,1})+\mathcal{O}(\mathcal{P}^2\|\epsilon\|_{L^2})\\ =&(\epsilon_2,L_{-,\mu}S_{1,0})-\frac{4}{3}b(Q_{\mu}^{\frac{1}{3}}S_{1,0}\epsilon_1,S_{1,0})-\frac{4}{3}d\cdot(Q_{\mu}^{\frac{1}{3}}S_{0,1}\epsilon_1,S_{1,0})-2\mu b(A(Q_{\mu}\epsilon_1)S_{1,0},S_{1,0})\\ &-\mu d\cdot (A(Q_{\mu}\epsilon_1)S_{0,1},S_{1,0})+b(\epsilon_1,\Lambda S_{1,0})-d\cdot(\epsilon_1,\nabla S_{1,0})\\ &+2b\left(\epsilon_1,\frac{2}{3}Q_{\mu}^{\frac{1}{3}}S_{1,0}^2+S_{1,0}-\Lambda S_{1,0}+\mu A(S_{1,0}^2)Q_{\mu}\right)\\ &+d\cdot\left(\epsilon_1,S_{0,1}-\Lambda S_{0,1}+\nabla S_{1,0}+\frac{4}{3}S_{1,0}S_{0,1}Q_{\mu}^{\frac{1}{3}}+2\mu A(S_{1,0}S_{0,1})Q_{\mu}\right)+\mathcal{O}(\mathcal{P}^2\|\epsilon\|_{L^2})\\ =&(\epsilon_2,\Lambda Q_{\mu})-b(\epsilon_1,\Lambda S_{1,0})-d\cdot(\epsilon_1,\Lambda S_{0,1})+d\cdot(\epsilon_1,S_{0,1})+\mathcal{O}(\mathcal{P}^2\|\epsilon\|_{L^2})\\ =&(\epsilon_2,\Lambda R_{\mathcal{P}})-(\epsilon_1,\Lambda R_{\mathcal{P}})+\mathcal{O}(\mathcal{P}^2\|\epsilon\|_{L^2}). \end{align*} Here we use the orthogonality conditions \eqref{orthogonality1dou}. This completes the proof of \eqref{estimateM2dou}. By the similar argument as above, we can obtain \eqref{estimateM3dou}, \eqref{estimateM4dou} and \eqref{estimateM5dou}. Here we omit the details. 2. \textbf{The law for $b$.} We take the inner product of the equation \eqref{equ:mod1} of $\epsilon_1$ with $-\Lambda R_2$ and we sum it with the inner product of equation \eqref{equ:mod2} of $\epsilon_2$ with $\Lambda R_1$. We obtain after integrating by parts: \begin{align*} &-(b_s+b^2)\big[(\partial_bR_1,-\Lambda R_2)+(\partial_bR_2,\Lambda R_1)\big]+(\partial_s\epsilon_1,-\Lambda R_2)+(\partial_s\epsilon_2,\Lambda\mathbb{R}_1)\\ &+[(M_2(\epsilon)-b\Lambda\epsilon_1,\Lambda R_2)+(M_1(\epsilon)+b\Lambda\epsilon_2,\Lambda R_1)]+\left(\frac{\alpha_s}{\lambda}-d\right)\mathcal{O}(\mathcal{P})\\ =&\left(\frac{\lambda_s}{\lambda}+b\right)\big[(\Lambda R_1+\Lambda\epsilon_1,-\Lambda R_2)+(\Lambda R_2+\Lambda\epsilon_2,\Lambda R_1)\big]\\ &+\tilde{\gamma}_s\big[(R_2+\epsilon_2,-\Lambda R_2)+(R_1+\epsilon_1,\Lambda R_1)\big]+(\Im\Phi_b,-\Lambda R_2)-(\Re{\Phi}_b,\Lambda R_1)\\ &+(P_2(\epsilon),\Lambda R_2)+(P_1(\epsilon),\Lambda R_1). 
\end{align*} From \eqref{estimateM1dou} and the orthogonality condition \eqref{orthogonality1dou}, we deduce that \begin{align*} & -(b_s+b^2)((L_{-,\mu}S_{1,0},S_{1,0})+\mathcal{O}(b^2))+\left(\frac{\alpha_s}{\lambda}-d\right)\mathcal{O}(\mathcal{P})\\ =&\Re{(\epsilon,R_{\mathcal{P}})}+(\Im\Phi_b,-\Lambda R_2)-(\Re{\Phi}_b,\Lambda R_1)+(P_2(\epsilon),\Lambda R_2)+(P_1(\epsilon),\Lambda R_1)\\ &+\mathcal{O}((\mathcal{P}^{\frac{4}{3}}+|Mod(t)|)(\|\epsilon\|_{L^2}+\mathcal{P}^2)). \end{align*} Hence, by using the fact that $2\Re{(\epsilon,R_{\mathcal{P}})}=-\int|\epsilon|^2+\int(|u|^2-|Q_{\mu}|^2)+\mathcal{O}(b^4+|d|^2+|d\mathcal{P}^2|)$, we have \begin{align*} &-(b_s+b^2)2e_{\mu}+\mathcal{O}(b^2))+\left(\frac{\alpha_s}{\lambda}-d\right)\mathcal{O}(\mathcal{P})\\ =&-\frac{1}{2}\int|\epsilon|^2+(P_2(\epsilon),\Lambda R_2)+(P_1(\epsilon),\Lambda R_1)\\ &+\mathcal{O}\left((\mathcal{P}^{\frac{4}{3}}+|Mod(t)|)(\|\epsilon\|_{L^2}+\mathcal{P}^2)+|\|u\|_{L^2}^2-\|Q_{\mu}\|_{L^2}|+b^4+|d|^2+|d\mathcal{P}^2|\right). \end{align*} 3. \textbf{The law for $\lambda$.} We multiply both sides of the equation \eqref{equ:mod1} and \eqref{equ:mod2} by $-\partial_bR_2$ and $\partial_bR_1$, respectively. Adding this and using \eqref{estimateM2dou} yields, we can obtain \begin{align*} &\left(\frac{\lambda_s}{\lambda}+b\right)(2e_{\mu}+\mathcal{O}(b^2))+(d_s+bd)\mathcal{O}(\mathcal{P})\\ =&(R_2(\epsilon),\partial_bR_1)+(R_1(\epsilon),\partial_bR_2)+\mathcal{O}\left((\mathcal{P}^{\frac{4}{3}}+|Mod(t)|)(\|\epsilon\|_{L^2}+\mathcal{P}^2)+b^5+|d|^2\mathcal{P}\right). \end{align*} Here we also used that $(R_2,\partial_bR_2)+(R_1,\partial_bR_1)=b(S_{1,0},S_{1,0})+2b(Q_{\mu},T_{2,0})+d\cdot(T_{1,1},Q_{\mu})+\mathcal{O}(b^2)=\mathcal{O}(b^2)$, since $(S_{1,0},S_{1,0})+(Q_{\mu},T_{2,0})=0$ and $(T_{1,1},Q_{\mu})=0$. 4. \textbf{The law for $\tilde{\gamma}$.} We multiply both sides of the equation \eqref{equ:mod1} and \eqref{equ:mod2} by $-\rho_2$ and $\rho_1$, respectively. Adding this and using \eqref{estimateM3dou} yields, we can obtain \begin{align*} \tilde{\gamma}_s\big((Q_{\mu},\rho_1)+\mathcal{O}(b^2)\big)=-(b_s+b^2)\big((S_{1,0},\rho_1)+\mathcal{O}(b^2)\big)+\left(\frac{\lambda_s}{\lambda}+b\right)\mathcal{O}(b)\\ +(P_2(\epsilon),\rho_2)+(P_1(\epsilon),\rho_1)+\mathcal{O}\left((b^{\frac{4}{3}}+|Mod(t)|)\|\epsilon\|_{L^2}+b^5\right). \end{align*} 5. \textbf{The law for $d$.} We project \eqref{equ:mod1} and \eqref{equ:mod2} onto $-\nabla R_2$ and $\nabla R_1$, respectively. Then we have \begin{align*} &(d_s+bd)(-p_{\mu}+\mathcal{O}(\mathcal{P}^2))+(b_s+b^2)\mathcal{O}(\mathcal{P})\\ =&(P_2,\nabla R_1)+(P_1,\nabla R_2)+\mathcal{O}((\mathcal{P}^2+|Mod(t)|)\|\epsilon\|_{L^2}+b^5+|d^2\mathcal{P}|). \end{align*} 6. \textbf{The law for $\alpha$.} We project \eqref{equ:mod1} and \eqref{equ:mod2} onto $-\partial_dR_2$ and $\partial_dR_1$, respectively. Then we deduce that \begin{align*} &(b_s+b^2)\mathcal{O}(\mathcal{P})+\left(\frac{\alpha_s}{\lambda}-d\right)(p_{\mu}+\mathcal{O}(\mathcal{P}^2))\\ =&(P_2,\partial_dR_1)+(P_1,\partial_dR_2)+\mathcal{O}((\mathcal{P}^2+|Mod(t)|)\|\epsilon\|_{L^2}+b^4+d^2+|d\mathcal{P}^2|). \end{align*} 7. \textbf{Conclusion.} We collect the results in previous points 2,3,4,5,6 and estimate the nonlinear terms in $\epsilon$ by Sobolev inequalities. This gives us \begin{align*} (A+B)Mod(t)=\mathcal{O}\Big((\mathcal{P}^{\frac{4}{3}}+|Mod(t)|)\|\epsilon\|_{L^2}+\|\epsilon\|_{L^2}^2+\|\epsilon\|_{H^1}^3\\ +|\|u\|_{L^2}^2-\|Q_{\mu}\|_{L^2}^2|+b^4+d^2+|d\mathcal{P}^2|\Big). 
\end{align*} Here $A=\mathcal{O}(1)$ is invertible $9\times9$ matrix, whereas $B=\mathcal{O}(b)$ is some $9\times9$-matrix that is polynomial in $b$. Inverting this relation to compute $Mod(t)$ and computing the Taylor expansion of $(A+B)^{-1}$ to sufficiently high order yields the desired result. This completes the proof of lemma \ref{lemma:modestimate}. \end{proof} \subsection{Refined energy identity} In this subsection, our aim is to derive a general refined mixed/Morawetz type estimate which allow us to derive a Lyapunov function for critical mass blow-up solutions. Let $u\in H^1(\mathbb{R}^3)$ be a solution to \eqref{equ1:double} on $[t_0,0)$ and let $w\in H^1(\mathbb{R}^3)$ be an approximate solution to \eqref{equ1:double}: \begin{align}\notag i\partial_tw+\Delta w+|w|^{\frac{4}{3}}w+\mu A(w^2)w=\psi, \end{align} with the a-priori bounds \begin{align}\label{priori:estimate1dou} \|w\|_{L^2}\lesssim1,\,\,\|\nabla w\|_{L^2}\lesssim\lambda^{-1},\,\,\|w\|_{\dot{H}^{2}}\lesssim\lambda^{-2}, \end{align} where $\delta>0$ is small enough. We then decompose $u=w+\tilde{u}$ so that $\tilde{u}\in H^1(\mathbb{R}^3)$ satisfies: \begin{align}\label{equ:approxiamteE2} i\partial_t\tilde{u}+\Delta\tilde{u}+\left(|u|^{\frac{4}{3}}u-|w|^{\frac{4}{3}}w\right)+\mu \left(A(u^2)u-A(w^2)w\right)=-\psi, \end{align} and assume the a-priori bounds on $\tilde{u}$: \begin{align}\label{priori:estimate2dou} \|\tilde{u}\|_{L^2}\lesssim\lambda^{2},\,\,\,\|\nabla\tilde{u}\|_{L^2}\lesssim\lambda, \end{align} and on the geometrical parameters: \begin{align}\label{priori:estimate3dou} |\lambda\lambda_t+b|\lesssim\lambda^4,\,\,b\sim\lambda,\,\,|b_t|\lesssim1,\,\,|\alpha_t|\lesssim\lambda^2 \end{align} for some non - negative parameters $0<\lambda,b\ll1$. Let $M>0$ be a large enough constant, which will be chosen later, and let $\phi:\mathbb{R}^3\rightarrow\mathbb{R}$ be a smooth radially symmetric cutoff function with \begin{align}\notag \phi^\prime(r)=\begin{cases} r\,\,\,&\text{for}\,\,\, r\leq1,\\ 3-e^{-r}\,\,\, &\text{for}\,\,\, r\geq2, \end{cases} \end{align} and the convexity condition \begin{align}\notag \phi^{\prime\prime}(r)\geq0 \,\,\text{for}\,\,r\geq0. \end{align} Let \begin{align*} &F(u)=\frac{3}{10}|u|^{\frac{10}{3}},\,\,\,f(u)=|u|^{\frac{4}{3}}u,\,\,\,F^\prime(u)\cdot h=\Re{(f(u)\bar{h})};\\ &G(u)=\frac{1}{4}A(u^2)|u|^2,\,\,\,g(u)=A(u^2)u\,\,\, G^\prime(u)\cdot h=\Re{(g(u)\bar{h})}. \end{align*} Now we give the following generalized energy estimate. \begin{lemma}\label{lemma:energyestimatedou} \textbf{(Generalized energy estimate).} Let \begin{align*} J=&\frac{1}{2}\int|\nabla\tilde{u}|^2+\frac{1}{2}\int\frac{|\tilde{u}|^2}{\lambda^2}-\int[F(w+\tilde{u})-F(w)-F^{\prime}(w)\cdot\tilde{u}]\notag\\ &-\mu\int[G(w+\tilde{u})-G(w)-G^{\prime}(w)\cdot\tilde{u}]+\frac{1}{2}\frac{b}{\lambda}\Im\left(\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\cdot\nabla\tilde{u}\tilde{u}\right). 
\end{align*} Then the following holds: \begin{align}\label{refine:energyestiamte} \frac{dJ}{dt}=&\Im{\left(\Delta\psi-\frac{1}{\lambda^2}\psi+f^{\prime}(w)\cdot\psi+\mu g^{\prime}(w)\cdot\psi,\bar{\tilde{u}}\right)}+\frac{b}{\lambda^4}\int|\tilde{u}|^2\notag\\ &-\frac{1}{\lambda^2}\Im\int\left(\frac{2}{3}|w|^{-\frac{2}{3}}w^2\bar{\tilde{u}}^2+\mu(A(w^2)\tilde{u}+A(2\Re(\bar{w}\tilde{u}))\bar{\tilde{u}} \right)\notag\\ &-\Re{\Big(\partial_tw,\overline{(f(\tilde{u}+w)-f(w)-f^{\prime}(w)\cdot\tilde{u}-f(\tilde{u})}\Big)}\notag\\ &-\Re{(\partial_tw,A(\tilde{u}^2)w+A(2\Re(w\bar{\tilde{u}})\tilde{u})}\notag\\ &+\frac{b}{\lambda^2}\Re\left(\int\nabla^2\phi\left(\frac{x-\alpha}{M\lambda}\right)(\nabla\tilde{u},\overline{\nabla\tilde{u}})\right)-\frac{1}{4}\frac{b}{M^2\lambda^4}\left(\int\Delta^2\phi\left(\frac{x-\alpha}{M\lambda}\right)|\tilde{u}|^2\right)\notag\\ &+\frac{b}{\lambda}\Re\left(\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\left(\frac{10}{9}|w|^{-\frac{2}{3}}w|\tilde{u}|^2-\frac{1}{9}|w|^{-\frac{8}{3}}\bar{w}^3\tilde{u}^2+\frac{5}{9}|w|^{-\frac{2}{3}}\bar{w}\tilde{u}^2\right)\cdot\overline{\nabla w}\right)\notag\\ &+\frac{\mu b}{\lambda}\Re\left(\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\left(A(w\nabla\bar{w}+\bar{w}\nabla w)\tilde{u}+A(u\nabla\bar{w}+\bar{\tilde{u}}\nabla w)w+A(2\Re(w\bar{\tilde{u}}))\cdot\nabla w\right)\right)\notag\\ &+\Im\left(\int\left[i\frac{b}{\lambda}M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\cdot\nabla\psi+i\frac{b}{2\lambda^2}\Delta\phi\left(\frac{x-\alpha}{M\lambda}\right)\psi\right]\bar{\tilde{u}}\right)\notag\\ &+\mathcal{O}\left(\lambda^2\|\psi\|_{L^2}+\|\tilde{u}\|_{H^1}^{\frac{2}{3}}\right). \end{align} \end{lemma} \begin{proof} \textbf{Step 1.} Algebraic derivation of the energy part. 
Using \eqref{equ:approxiamteE2}, a computation shows that \begin{align}\label{refine:energy11dou} &\frac{d}{dt}\Bigg\{\frac{1}{2}\int|\nabla\tilde{u}|^2+\frac{1}{2}\int\frac{|\tilde{u}|^2}{\lambda^2}-\int[F(w+\tilde{u})-F(w)-F^{\prime}(w)\cdot\tilde{u}]\notag\\ &-\mu\int[G(w+\tilde{u})-G(w)-G^{\prime}\cdot\tilde{u}]\Bigg\}\notag\\ =&-\Re{\left(\partial_t\tilde{u},\overline{\Delta\tilde{u}-\frac{1}{\lambda^2}\tilde{u}+(f(u)-f(w))+\mu(g(u)-g(w))}\right)}-\frac{\lambda_t}{\lambda^3}\int|\tilde{u}|^2\notag\\ &-\Re{\Big(\partial_tw,\overline{(f(\tilde{u}+w)-f(w)-f^{\prime}(w)\cdot\tilde{u}+\mu(g(\tilde{u}+w)-g(w)-g^{\prime}(w)\cdot\tilde{u})}\Big)}\notag\\ =&\Im{\left(\psi,\overline{\Delta\tilde{u}-\frac{1}{\lambda^2}\tilde{u}+(f(u)-f(w))+\mu(g(u)-g(w))}\right)}-\frac{\lambda_t}{\lambda^3}\int|\tilde{u}|^2\notag\\ &-\Re{\Big(\partial_tw,\overline{(f(\tilde{u}+w)-f(w)-f^{\prime}(w)\cdot\tilde{u}+\mu(g(\tilde{u}+w)-g(w)-g^{\prime}(w)\cdot\tilde{u})}\Big)}\notag\\ &-\frac{1}{\lambda^2}\Im{\Big(f(u)-f(w)+\mu(g(u)-g(w)),\bar{\tilde{u}}\Big)}\notag\\ =&\Im{\left(\psi,\overline{\Delta\tilde{u}-\frac{1}{\lambda^2}\tilde{u}+f^{\prime}(w)\cdot\tilde{u}+\mu g^{\prime}(w)\cdot\tilde{u}}\right)}-\frac{\lambda_t}{\lambda^3}\int|\tilde{u}|^2\notag\\ &-\frac{1}{\lambda^2}\Im{\Big(f(u)-f(w)+\mu(g(u)-g(w)),\bar{\tilde{u}}\Big)}\notag\\ &+\Im{\left(\psi-\frac{1}{\lambda^2}\tilde{u},\overline{f(u)-f(w)-f^{\prime}(w)\cdot\tilde{u}-\mu(g(u)-g(w)-g^{\prime}(w)\cdot\tilde{u})}\right)}\notag\\ &-\Re{\Big(\partial_tw,\overline{(f(\tilde{u}+w)-f(w)-f^{\prime}(w)\cdot\tilde{u}+\mu(g(\tilde{u}+w)-g(w)-g^{\prime}(w)\cdot\tilde{u})}\Big)}, \end{align} where we used that \begin{align*} &f^{\prime}(w)\cdot\tilde{u}=\frac{5}{3}|w|^{\frac{4}{3}}\tilde{u}+\frac{2}{3}|w|^{-\frac{2}{3}}w^2\bar{\tilde{u}},~~g^{\prime}(w)\cdot\tilde{u}=A(w^2)\tilde{u}+A(2\Re(\bar{w}\tilde{u}))w. \end{align*} From \eqref{priori:estimate3dou}, we obtain that \begin{align}\label{refine:energy1dou} -\frac{\lambda_t}{\lambda^3}\int|\tilde{u}|^2=\frac{b}{\lambda^4}\int|\tilde{u}|^2-\frac{1}{\lambda^4}(\lambda\lambda_t+b)\|\tilde{u}\|_{L^2}^2=\frac{b}{\lambda^4}\int|\tilde{u}|^2+\mathcal{O}(\|\tilde{u}\|_{H^1}^2). \end{align} Next, we estimate \begin{align}\label{refine:energy2dou} &\left|\Im{\left(\psi-\frac{1}{\lambda^2}\tilde{u},\overline{f(u)-f(w)-f^{\prime}(w)\cdot\tilde{u}}\right)}\right|\notag\\ \lesssim&(\|\psi\|_{L^2}+\lambda^{-2}\|\tilde{u}\|_{L^2})\big(\|f(u)-f(w)-f^{\prime}(w)\cdot\tilde{u}\|_{L^2}\big)\notag\\ \lesssim&(\|\psi\|_{L^2}+\lambda^{-2}\|\tilde{u}\|_{L^2})(\||\tilde{u}|^{\frac{7}{3}}+|w|^{\frac{1}{3}}|\tilde{u}|^2\|_{L^2})\notag\\ \lesssim&(\|\psi\|_{L^2}+\lambda^{-2}\|\tilde{u}\|_{L^2})\left[\|\nabla\tilde{u}\|_{L^2}^{2}\|\tilde{u}\|_{L^2}^{\frac{1}{3}}+\|w\|_{L^2}^{\frac{1}{3}}\|u\|_{L^6}^{2}\right]\notag\\ \lesssim&\lambda^2\|\psi\|_{L^2}+\mathcal{O}(\|\tilde{u}\|_{H^1}^2). \end{align} Here we also used the following inequality $|h(u+v)-h(u)-h^{\prime}(u)\cdot v|\lesssim|v|^{p}+|u|^{p-2}|v|^2,$ for $p>2$, $h(u)=|u|^{p-1}u$. 
On the other hand, we have \begin{align}\label{refine:energy22dou} &\left|\Im{\left(\psi-\frac{1}{\lambda^2}\tilde{u},\overline{g(u)-g(w)-g^{\prime}(w)\cdot\tilde{u}}\right)}\right|\notag\\ =&\left|\Im{\left(\psi-\frac{1}{\lambda^2}\tilde{u},\overline{A(\tilde{u}^2)(w+\tilde{u})+A(w\bar{\tilde{u}}+\bar{w}\tilde{u})\tilde{u}}\right)}\right|\notag\\ \lesssim&\left(\|\psi\|_{L^2}+\lambda^{-2}\|\tilde{u}\|_{L^2}\right)\left(\|\tilde{u}\|_{\dot{H}^1}^2(\|w\|_{L^2}+\|\tilde{u}\|_{L^2})+\|A(w\tilde{u})\|_{L^4}\|u\|_{L^4}\right)\notag\\ \lesssim&\left(\|\psi\|_{L^2}+\lambda^{-2}\|\tilde{u}\|_{L^2}\right)\left(\|\tilde{u}\|_{\dot{H}^1}^2(1+\lambda^2)+\|u\|_{L^{12/5}}|u\|_{L^4}\right)\notag\\ \lesssim&\lambda^2\|\psi\|_{L^2}+\|\tilde{u}\|_{{H}^1}^2. \end{align} Here we use the Hardy inequality, H\"older inequality and the priori estimate \eqref{priori:estimate2dou}. For the term that contain the $\partial_tw$, we replace $\partial_tw$ using \eqref{equ:approximate:double}, integrate by parts and then rely on \eqref{priori:estimate1dou} to estimate \begin{align}\label{refine:energy3dou} \left|\int \partial_tw |\tilde{u}|^{\frac{7}{3}}\right|\lesssim&\left(\|w\|_{\dot{H}^2}+\|w^{\frac{7}{3}}\|_{L^2}+\mu \|A(|w|^2)|w|\|_{L^2}+\|\psi\|_{L^2}\right)\|\tilde{u}^{\frac{7}{3}}\|_{L^2}\notag\\ \lesssim&\lambda^{-2}\|\tilde{u}\|_{L^{\frac{14}{3}}}^{\frac{7}{3}}+\|\psi\|_{L^2}\|\tilde{u}\|_{L^{\frac{14}{3}}}^{\frac{7}{3}}\notag\\ \lesssim &\|\tilde{u}\|_{H^1}^{\frac{2}{3}}+\lambda^2\|\psi\|_{L^2}. \end{align} Here we also used the H\"{o}lder inequality and Hardy inequality. And \begin{align}\label{refine:energy4dou} \left|\int\partial_tw|\overline{A(|\tilde{u}|^2)|\tilde{u}|}\right|\lesssim&\|w\|_{\dot{H}^2}\|A(|\tilde{u}|^2)|\tilde{u}|\|_{L^2}+\|w\|_{L^{\frac{14}{3}}}^{\frac{7}{3}}\|A(|\tilde{u}|^2)|\tilde{u}|\|_{L^2}\notag\\ &+\mu\|A(|w|^2)|w|\|_{L^2}\|A(|\tilde{u}|^2)|\tilde{u}|\|_{L^2}+\|\psi\|_{L^2}\|A(|\tilde{u}|^2)|\tilde{u}|\|_{L^2}\notag\\ \lesssim&\frac{1}{\lambda^{2}}\|\tilde{u}\|_{\dot{H}^1}^2\|\tilde{u}\|_{L^2}+\|w\|_{\dot{H}^1}^2\|w\|_{L^2}\|\tilde{u}\|_{\dot{H}^1}^2\|\tilde{u}\|_{L^2}\notag\\ &+\mu\|w\|_{L^{\frac{14}{3}}}^{\frac{7}{3}}\|w\|_{L^4}^{\frac{4}{3}}\|\tilde{u}\|_{\dot{H}^1}^2\|\tilde{u}\|_{L^2}+\|\psi\|_{L^2}\|\tilde{u}\|_{\dot{H}^1}^2\|\tilde{u}\|_{L^2}\notag\\ \lesssim&\lambda^2\|\psi\|_{L^2}+\mathcal{O}\left(\|\tilde{u}\|_{H^1}^2\right). \end{align} We now insert \eqref{refine:energy1dou}, \eqref{refine:energy2dou}, \eqref{refine:energy22dou},\eqref{refine:energy3dou} and \eqref{refine:energy4dou} into \eqref{refine:energy11dou}, we have \begin{align*} &\frac{d}{dt}\Bigg\{\frac{1}{2}\int|\nabla\tilde{u}|^2+\frac{1}{2}\int\frac{|\tilde{u}|^2}{\lambda^2}-\int[F(w+\tilde{u})-F(w)-F^{\prime}(w)\cdot\tilde{u}]-\mu\int[G(w+\tilde{u})-G(w)-G^{\prime}\cdot\tilde{u}]\Bigg\}\notag\\ =&\Im{\left(\Delta\psi-\frac{1}{\lambda^2}\psi+f^{\prime}(w)\cdot\psi+\mu g^{\prime}(w)\cdot\psi,\bar{\tilde{u}}\right)}+\frac{b}{\lambda^4}\int|\tilde{u}|^2\notag\\ &-\frac{1}{\lambda^2}\Im\int\left(\frac{2}{3}|w|^{-\frac{2}{3}}w^2\bar{\tilde{u}}^2+\mu(A(w^2)\tilde{u}+2A(\Re\bar{w}\tilde{u})\bar{\tilde{u}} \right)-\Re{\Big(\partial_tw,\overline{(f(\tilde{u}+w)-f(w)-f^{\prime}(w)\cdot\tilde{u}-f(\tilde{u})}\Big)}\notag\\ &-\Re{(\partial_tw,A(\tilde{u}^2)w+A(w\bar{\tilde{u}}+\bar{w}\tilde{u})\tilde{u})}+\mathcal{O}\left(\lambda^2\|\psi\|_{L^2}+\|\tilde{u}\|_{H^1}^{\frac{2}{3}}\right). \end{align*} \textbf{Step 2.} Algebraic derivation of the localized virial part. 
Let \begin{align*} \nabla\tilde{\phi}(t,x)=\frac{b}{\lambda}M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right). \end{align*} Then \begin{align}\label{virial1dou} &\frac{1}{2}\frac{d}{dt}\left( \frac{b}{\lambda}\Im\left(\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\cdot\nabla\tilde{u} \bar{\tilde{u}}\right)\right)\notag\\ =&\frac{1}{2}\Im\left(\int\partial_t\nabla\tilde{\phi}\cdot\nabla\tilde{u}\bar{\tilde{u}}\right)+\Re{\left(\int i\partial_t\tilde{u}\overline{\left(\frac{1}{2}\Delta\tilde{u}+\nabla\tilde{\phi}\cdot\nabla\tilde{u}\right)}\right)}. \end{align} Using \eqref{priori:estimate3dou}, we estimate \begin{align*} |\partial_t\nabla\tilde{\phi}|\lesssim\frac{1}{\lambda^3}\left(|\lambda^2b_t|+|\lambda\lambda_t+b|\right)+\frac{b}{\lambda}|\alpha_t|\lesssim\frac{1}{\lambda}, \end{align*} from which \begin{align}\label{virial:est1dou} \left|\Im\left(\int\partial_t\nabla\tilde{\phi}\cdot\nabla\tilde{u}\bar{\tilde{u}}\right)\right|\lesssim\frac{1}{\lambda}\|\tilde{u}\|_{L^2}\|\nabla\tilde{u}\|_{L^2}=\mathcal{O}\left(\frac{1}{\lambda^2}\|\tilde{u}\|_{L^2}^2+\|\tilde{u}\|_{H^1}^2\right). \end{align} The second term in \eqref{virial1dou} corresponds to the localized Morawetz estimate, and from \eqref{equ:approxiamteE2} and integration by parts, we get \begin{align}\label{virial2dou} &\Re{\left(\int i\partial_t\tilde{u}\overline{(\frac{1}{2}\Delta\tilde{u}+\nabla\tilde{\phi}\cdot\nabla\tilde{u})}\right)}\notag\\ =&\frac{b}{\lambda^2}\Re\left(\int\nabla^2\phi\left(\frac{x-\alpha}{M\lambda}\right)(\nabla\tilde{u},\bar{\nabla\tilde{u}})\right)-\frac{1}{4}\frac{b}{M^2\lambda^4}\left(\int\Delta^2\phi\left(\frac{x-\alpha}{M\lambda}\right)|\tilde{u}|^2\right)\notag\\ &-\frac{b}{\lambda}\Re\left(\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\left(\left(|u|^{\frac{4}{3}}u-|w|^{\frac{4}{3}}w\right)+\mu\left(A(|u|^2)u-A(w^2)w\right)\right)\cdot\overline{\nabla\tilde{u}}\right)\notag\\ &-\frac{1}{2}\frac{b}{\lambda^2}\Re\left(\int \Delta\phi\left(\frac{x-\alpha}{M\lambda}\right)\left(\left(|u|^{\frac{4}{3}}u-|w|^{\frac{4}{3}}w\right)+\mu\left(A(u^2)u-A(w^2)w\right)\right)\bar{\tilde{u}}\right)\notag\\ &-\frac{b}{\lambda}\Re\left(\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\psi\cdot\bar{\nabla\tilde{u}}\right)-\frac{1}{2}\frac{b}{\lambda^2}\Re\left(\int\Delta\phi\left(\frac{x-\alpha}{M\lambda}\right)\psi\bar{\tilde{u}}\right). 
\end{align} We now estimate the nonlinear terms \begin{align}\label{virial:est2dou} &\Bigg|-\frac{b}{\lambda}\Re\left(\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\left((f(u)-f(w)-f^{\prime}(w)\cdot\tilde{u})+\mu(g(u)-g(w)-g\prime(w)\cdot\tilde{u})\right)\cdot\overline{\nabla\tilde{u}}\right)\notag\\ &-\frac{1}{2}\frac{b}{\lambda^2}\Re\left(\int\Delta\phi\left(\frac{x-\alpha}{M\lambda}\right)(f(u)-f(w)-f^{\prime}(w)\cdot\tilde{u})\bar{\tilde{u}}+\mu(g(u)-g(w)-g\prime(w)\cdot\tilde{u})\bar{\tilde{u}}\right)\Bigg|\notag\\ \lesssim&\frac{b}{\lambda}\Re\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\left(|\tilde{u}|^{\frac{4}{3}+1}+|w|^{\frac{1}{3}}|\tilde{u}|^2+\mu(A(\tilde{u}^2)(w+\tilde{u})+A(w\bar{\tilde{u}}+\bar{w}\tilde{u})\tilde{u})\right)\cdot\overline{\nabla\tilde{u}}\notag\\ &+\frac{1}{2}\frac{b}{\lambda^2}\Re\left(\int\Delta\phi\left(\frac{x-\alpha}{M\lambda}\right)\left(|\tilde{u}|^{\frac{4}{3}+1}+|w|^{\frac{1}{3}}|\tilde{u}|^2+\mu(A(\tilde{u}^2)(w+\tilde{u})+A(w\bar{\tilde{u}}+\bar{w}\tilde{u})\tilde{u})\right)\bar{\tilde{u}}\right)\notag\\ \lesssim&\left(\|\tilde{u}\|_{L^{\frac{14}{3}}}^{\frac{7}{3}}+\|w\|_{L^2}^{\frac{1}{3}}\|\tilde{u}\|_{L^6}^2+\mu\|\tilde{u}\|_{\dot{H}^1}^2(\|w\|_{L^2}+\|\tilde{u}\|_{L^2})+\|w\|_{L^2}\|\tilde{u}\|_{L^{12/5}}\|\tilde{u}\|_{L^4} \right)\|\nabla\tilde{u}\|_{L^2}\notag\\ &+\frac{1}{\lambda}\left(\|\tilde{u}\|_{L^{\frac{10}{3}}}^{\frac{10}{3}}+\|w\|_{L^2}^{\frac{1}{3}}\|\tilde{u}\|_{L^{\frac{18}{5}}}^3+\mu\|\tilde{u}\|_{\dot{H}^1}^2(\|w\|_{L^2}+\|\tilde{u}\|_{L^2})+\|w\|_{L^2}\|\tilde{u}\|_{L^{12/5}}\|\tilde{u}\|_{L^4}\|\tilde{u}\|_{L^2}\right)\notag\\ \lesssim&\|\tilde{u}\|_{H^1}^2. \end{align} The remaining terms in \eqref{virial2dou} are integrated by parts: \begin{align}\label{virial:est3dou} &-\frac{b}{\lambda}\Re\left(\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\psi\cdot\bar{\nabla\tilde{u}}\right)-\frac{1}{2}\frac{b}{\lambda^2}\Re\left(\int\Delta\phi\left(\frac{x-\alpha}{M\lambda}\right)\psi\bar{\tilde{u}}\right)\notag\\ =&\Im\left(\int\left[i\frac{b}{\lambda}M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\cdot\nabla\psi+i\frac{b}{2\lambda^2}\Delta\phi(\frac{x-\alpha}{M\lambda})\psi\right]\bar{\tilde{u}}\right). \end{align} For the local term, we have \begin{align}\label{virial:est4dou} &-\frac{b}{\lambda}\Re\left(\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)(f^{\prime}(w)\cdot\tilde{u})\cdot\overline{\nabla\tilde{u}}\right)-\frac{1}{2}\frac{b}{\lambda^2}\Re\left(\int \Delta\phi\left(\frac{x-\alpha}{M\lambda}\right)(f^{\prime}(w)\cdot\tilde{u})\bar{\tilde{u}}\right)\notag\\ =&\frac{b}{\lambda}\Re\left(\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\left(\frac{10}{9}|w|^{-\frac{2}{3}}w|\tilde{u}|^2-\frac{1}{9}|w|^{-\frac{8}{3}}\bar{w}^3\tilde{u}^2+\frac{5}{9}|w|^{-\frac{2}{3}}\bar{w}\tilde{u}^2\right)\cdot\overline{\nabla w}\right). \end{align} For the non-local term, we have \begin{align}\label{virial:est5dou} &-\frac{b}{\lambda}\Re\left(\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)(g^{\prime}(w)\cdot\tilde{u})\cdot\overline{\nabla\tilde{u}}\right)-\frac{1}{2}\frac{b}{\lambda^2}\Re\left(\int \Delta\phi\left(\frac{x-\alpha}{M\lambda}\right)(g^{\prime}(w)\cdot\tilde{u})\bar{\tilde{u}}\right)\notag\\ =&\frac{b}{\lambda}\Re\left(\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\left(2A(\Re w\nabla\bar{w})\tilde{u}+2A(\Re\tilde{u}\nabla\bar{w})w+2A(\Re w\bar{\tilde{u}})\cdot\nabla w\right)\right). 
\end{align} Injecting \eqref{virial:est1dou}, \eqref{virial:est2dou}, \eqref{virial:est3dou}, \eqref{virial:est4dou},\eqref{virial:est5dou} into \eqref{virial2dou} yields after a further integration by parts \begin{align*} &\Re{\left(\int i\partial_t\tilde{u}\overline{\left(\frac{1}{2}\Delta\tilde{u}+\nabla\tilde{\phi}\cdot\nabla\tilde{u}\right)}\right)}\\ =&\frac{b}{\lambda^2}\Re\left(\int\nabla^2\phi\left(\frac{x-\alpha}{M\lambda}\right)(\nabla\tilde{u},\overline{\nabla\tilde{u}})\right)-\frac{1}{4}\frac{b}{M^2\lambda^4}\left(\int\Delta^2\phi\left(\frac{x-\alpha}{M\lambda}\right)|\tilde{u}|^2\right)\\ &+\frac{b}{\lambda}\Re\left(\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\left(\frac{10}{9}|w|^{-\frac{2}{3}}w|\tilde{u}|^2-\frac{1}{9}|w|^{-\frac{8}{3}}\bar{w}^3\tilde{u}^2+\frac{5}{9}|w|^{-\frac{2}{3}}\bar{w}\tilde{u}^2\right)\cdot\overline{\nabla w}\right)\\ &+\frac{\mu b}{\lambda}\Re\left(\int M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\left(2A(\Re w\nabla\bar{w})\tilde{u}+2A(\Re\tilde{u}\nabla\bar{w})w+2A(\Re w\bar{\tilde{u}})\cdot\nabla w\right)\right)\\ &+\Im\left(\int\left[i\frac{b}{\lambda}M\nabla\phi\left(\frac{x-\alpha}{M\lambda}\right)\cdot\nabla\psi+i\frac{b}{2\lambda^2}\Delta\phi\left(\frac{x-\alpha}{M\lambda}\right)\psi\right]\bar{\tilde{u}}\right)\\ &+\mathcal{O}\left(\|\tilde{u}\|_{H^1}^2\right). \end{align*} This completes the proof of lemma \ref{lemma:energyestimatedou}. \end{proof} \subsection{Backwards propagation of smallness} In this subsection, we first application of the energy estimate \eqref{refine:energyestiamte} is a bootstrap control on critical mass solution to \eqref{equ1:double}. More precisely, let $u\in H^1(\mathbb{R}^3)$ be a solution to \eqref{equ1:double} defined on $[t_0,0)$. Let $t_0<t_1<0$ and assume that $u$ admits on $[t_0,t_1]$ a geometrical decomposition of the form: \begin{align*} u(t,x)=\frac{1}{\lambda^{\frac{3}{2}}(t)}[R_{\mathcal{P}}+\epsilon]\left(t,\frac{x-\alpha(t)}{\lambda(t)}\right)e^{i\gamma(t)}, \end{align*} where $\epsilon=\epsilon_1+i\epsilon_2\in H^1(\mathbb{R}^3)$ satisfies the orthogonality conditions \eqref{orthogonality1dou} and $\|\epsilon(t)\|_{H^1}+|b(t)|+|d(t)|\ll1$. Let \begin{align}\notag \tilde{u}(t,x)=\frac{1}{\lambda^{\frac{3}{2}}(t)}\epsilon\left(t,\frac{x-\alpha(t)}{\lambda(t)}\right)e^{i\gamma(t)}. \end{align} Assume that the energy $E_{0,\mu}$ satisfies the $E_{0,\mu}=E_{\mu}(u)>0$ and define the constant \begin{align}\label{backBdefine} B_\mu=\sqrt{\frac{e_{\mu}}{E_{0,\mu}}}, \end{align} with the constant $e_\mu=\frac{1}{2}(L_{-,\mu}S_{1,0},S_{1,0})>0$. Moreover, Let $P_{0,\mu}=P_{\mu}(u_0)$ be the linear momentum and define the vector \begin{align}\label{backDdefine} D_{\mu}=\frac{P_{0,\mu}}{p_{\mu}}, \end{align} with the universal constant $p_{\mu}=2(L_{-,\mu}S_{0,1},S_{0,1})$. We claim the following backwards propagation estimates: \begin{lemma}\label{lemmabackwardsdou} \textbf{(Backwards propagation of smallness).} Assume that there holds for some $t_1<0$ close enough to $0$: \begin{align*} &\left|\|u\|_{L^2}^2-\|Q_{\mu}\|_{L^2}^2\right|\lesssim\lambda^4(t_1),~~\|\nabla\tilde{u}(t_1)\|_{L^2}^2+\frac{\|\tilde{u}(t_1)\|_{L^2}^2}{\lambda^2(t_1)}\lesssim\lambda^2(t_1),\\ & \left|\lambda(t_1)+\frac{t_1}{B_\mu}\right|\lesssim\lambda^2(t_1),\,\,\left|\frac{b(t_1)}{\lambda(t_1)}-\frac{1}{B_\mu}\right|\lesssim\lambda^2(t_1),\,\,\left|\frac{d(t_1)}{\lambda^2(t_1)}-D_{\mu}\right|\lesssim\lambda^2(t_1). 
\end{align*} Then there exists a backwards time $t_0$ depending only on $B_\mu$ such that for any $t\in[t_0,t_1]$, \begin{align*} &\|\nabla\tilde{u}(t)\|_{L^2}^2+\frac{\|\tilde{u}(t)\|_{L^2}^2}{\lambda^2(t)}\lesssim\|\nabla\tilde{u}(t_1)\|_{L^2}^2+\frac{\|\tilde{u}(t_1)\|_{L^2}^2}{\lambda^2(t_1)}+\lambda^{6}(t),\\\notag &\left|\frac{b}{\lambda}(t)-\frac{1}{B_\mu}\right|\lesssim\lambda^2(t),\,\, \left|\lambda(t)+\frac{t}{B_\mu}\right|\lesssim\lambda^2(t),\,\,\left|\frac{d(t)}{\lambda^2(t)}-D_{\mu}\right|\lesssim\lambda^2(t). \end{align*} \end{lemma} \begin{proof} By the similar argument as \cite{RS2011JAMS,GL2021CPDE,GL2021JFA,KLR2013ARMA}, we can obtain this Lemma \ref{lemmabackwardsdou}. Here we omit the details. \end{proof} \section{Existence of critical mass blow-up solutions} In this section, we prove the following result, which in particular yields Theorem \ref{theorem:minimialD}. \begin{lemma} \textbf{(Existence of critical mass blow-up solution).} Let $\gamma_0\in\mathbb{R}$, $x_0\in\mathcal{R}^3$, $B_\mu$ and $D_{\mu}$ be given by \eqref{backBdefine} and \eqref{backDdefine}, respectively. Then there exist $t_0<0$ and a solution $u_c\in \mathcal{C}([t_0,0),H^1(\mathbb{R}^3))$ to \eqref{equ1:double} which blows up at $T=0$ with \begin{align*} E_{\mu}(u_c)=E_{0,\mu}(u_0),\,\,P_{\mu}(u_c)=P_{0,\mu}(u_0)\,\,\text{and}\,\,\|u_c\|_{L^2}=\|Q_{\mu}\|_{L^2}. \end{align*} Furthermore, the solution admits on $[t_0,0)$ a geometrical decomposition: \begin{align}\notag u_c(t,x)=\frac{1}{\lambda_c^{\frac{3}{2}}(t)}[R_{\mathcal{P}_c}+\epsilon_c]\left(t,\frac{x-\alpha_c(t)}{\lambda_c(t)}\right)e^{i\gamma_c(t)}=\tilde{R}_{\mathcal{P}_c}+\tilde{u}_c, \end{align} where $\epsilon_c$ satisfies the orthogonality conditions \eqref{orthogonality1dou} and the following bounds hold: \begin{align*} &\|\tilde{u}_c\|_{L^2}^2\lesssim\lambda_c^4,\,\,\|\tilde{u}_c\|_{H^1}^2\lesssim\lambda^{2}_c,\\ &\lambda_c+\frac{t}{B_\mu}=\mathcal{O}(\lambda_c^3),\,\,\frac{b_c}{\lambda_c}-\frac{1}{B_\mu}=\mathcal{O}(\lambda_c^2),\,\,\frac{d_c}{\lambda^2_c}-\frac{1}{D_{\mu}}=\mathcal{O}(\lambda_c^2)\,\,\gamma_c=-\frac{B_\mu^2}{t}+\gamma_0+\mathcal{O}(\lambda_c). \end{align*} \end{lemma} \begin{proof} By the similar argument as \cite{RS2011JAMS,MRS2013Amer,M1990CMP,MRS2014Duke,LMR2016RMI} \cite{KMR2009CPAM}. We can obtain this result. Here we only show the uniform $\dot{H}^{\frac{3}{2}}(\mathbb{R}^3)$ bound; \begin{align}\label{minimal5:2dou} \|\tilde{u}_n\|_{L^{\infty}\left([t,t_n],\dot{H}^{\frac{3}{2}}(\mathbb{R}^3)\right)}\lesssim\lambda_n^{\frac{1}{2}}(t). \end{align} Indeed, our point is again the identity \begin{align*} i\partial_t\tilde{u}_n+\Delta\tilde{u}_n=-\psi_n-|\tilde{u}_n|^{\frac{4}{3}}\tilde{u}_n-\mu A(|\tilde{u}_n|^2)|\tilde{u}_n-H \end{align*} with \begin{align*} i\partial_t\tilde{R}_{\mathcal{P}_n }+\Delta\tilde{R}_{\mathcal{P}_n }+|\tilde{R}_{\mathcal{P}_n }|^{\frac{4}{3}}\tilde{R}_{\mathcal{P}_n }+\mu A(|\tilde{R}_{\mathcal{P}_n }|^2)\tilde{R}_{\mathcal{P}_n}=\psi_n,~~ H=H_1+H_2, \end{align*} where \begin{align*} H_1=&|\tilde{R}_{\mathcal{P}_n}-\tilde{u}_n|^{\frac{4}{3}}(\tilde{R}_{\mathcal{P}_n}+\tilde{u})-|\tilde{R}_{\mathcal{P}_n}|^{\frac{4}{3}}\tilde{R}_{\mathcal{P}_n}+\mu A(|\tilde{R}_{\mathcal{P}_n}+\tilde{u}_n|^2)(\tilde{R}_{\mathcal{P}_n}+\tilde{u}_n)-\mu A(|\tilde{R}_{\mathcal{P}_n}|^2)|\tilde{R}_{\mathcal{P}_n},\\ H_2=&-|\tilde{u}_n|^{\frac{4}{3}}\tilde{u}_n-\mu A(|\tilde{u}_n|^2)\tilde{u}_n. 
\end{align*} Hence, from the standard Strichartz bounds and the smoothing effect, we have \begin{align}\label{minimal:step2dou} \|\nabla^{\frac{3}{2}}\tilde{u}_n\|_{L^{\infty}([t,t_n])L^2(\mathbb{R}^3)} \lesssim&\|\nabla^{\frac{3}{2}}\psi_n\|_{L^{2}_{[t,t_n]}L^{6/5}(\mathbb{R}^3)}+\|\langle x\rangle \nabla H_1\|_{L^2_{[t,t_n]}L^2(\mathbb{R}^3)}\notag\\ &+\|\nabla^{\frac{3}{2}}(|\tilde{u}_n|^{\frac{4}{3}}\tilde{u}_n)\|_{L^{3/2}_{[t,t_n]}L^{18/13}(\mathbb{R}^3)}+\mu\|\nabla^{\frac{3}{2}}(A(\tilde{u}_n^2)\tilde{u}_n)\|_{L^{8/5}_{[t,t_n]}L^{4/3}(\mathbb{R}^3)}. \end{align} The error term $\psi_n$ is estimated from \eqref{equ:approximate:double} and Lemma \ref{lemma:modestimate}, which yields the bound \begin{align*} \|\nabla^{\frac{3}{2}}\psi_n\|_{L^{6/5}(\mathbb{R}^3)}\lesssim\|\nabla^2\psi_n\|_{L^2}^{\frac{1}{4}}\|\psi_n\|_{L^2}^{\frac{3}{4}}\lesssim\lambda_n^{\frac{3}{2}}, \end{align*} where we used the Gagliardo--Nirenberg inequality. Hence, \begin{align}\label{minimal22dou} \|\nabla^{\frac{3}{2}}\psi_n\|_{L^{2}_{[t,t_n]}L^{6/5}(\mathbb{R}^3)}\lesssim\lambda_n^{2}. \end{align} For the term $H_1$, we have \begin{align*} |H_1|\lesssim |\tilde{R}_{\mathcal{P}_n}^{\frac{4}{3}}\tilde{u}_n|+|\tilde{u}_{n}^{\frac{4}{3}}\tilde{u}_n|+\mu\left[A\left(|\tilde{R}_{\mathcal{P}_n}|^2\right)\tilde{u}_n+A\left(|\tilde{R}_{\mathcal{P}_n}\tilde{u}_n|\right)|\tilde{R}_{\mathcal{P}_n}|+A\left(|\tilde{u}_n|^2\right)|\tilde{u}_n|\right] =I+II, \end{align*} where \begin{align*} I=|\tilde{R}_{\mathcal{P}_n}^{\frac{4}{3}}\tilde{u}_n|+\mu\left(A\left(|\tilde{R}_{\mathcal{P}_n}|^2\right)\tilde{u}_n+A\left(|\tilde{R}_{\mathcal{P}_n}\tilde{u}_n|\right)|\tilde{R}_{\mathcal{P}_n}|\right),~~ II=|\tilde{u}_{n}^{\frac{4}{3}}\tilde{u}_n|+\mu A\left(|\tilde{u}_n|^2\right)|\tilde{u}_n|. \end{align*} We now estimate the local term of $I$ that is linear in $\tilde{u}_n$: \begin{align*} \|\langle x\rangle (|\tilde{R}_{\mathcal{P}_n}|^{\frac{4}{3}}\tilde{u}_n)\|_{H^1(\mathbb{R}^3)} \lesssim& \|\langle x\rangle (|\tilde{R}_{\mathcal{P}_n}|^{\frac{4}{3}}\tilde{u}_n)\|_{L^2}+\|\langle x\rangle |\tilde{R}_{\mathcal{P}_n}|^{\frac{1}{3}}\tilde{u}_n\nabla\tilde{R}_{\mathcal{P}_n}\|_{L^2}+ \|\langle x\rangle |\tilde{R}_{\mathcal{P}_n}|^{\frac{4}{3}}\nabla\tilde{u}_n\|_{L^2}\notag\\ \lesssim&\frac{1}{\lambda_n}\|\tilde{u}_n\|_{L^2}+\frac{1}{\lambda_n^2}\|\tilde{u}_n\|_{L^2}+\frac{1}{\lambda_n}\|\nabla\tilde{u}_n\|_{L^2} \lesssim\lambda_n^2, \end{align*} where we used $\tilde{u}_n(t_n)=0$, Lemma \ref{lemmabackwardsdou}, and the decay estimate of $R_{\mathcal{P}_n}$. Next, we estimate the non-local terms of $I$ by using the following estimate: \begin{align*} A(|f|)=&\int_{|y|\leq\frac{|x|}{2}}\frac{|f(y)|}{|x-y|^2}dy+\int_{\frac{|x|}{2}<|y|<2|x|}\frac{|f(y)|}{|x-y|^2}dy+\int_{|y|\geq2|x|}\frac{|f(y)|}{|x-y|^2}dy\\ \lesssim&|x|^{-2}\|f\|_{L^1}+|x|^{-1}\int_{\frac{|x|}{2}<|y|<2|x|}\frac{|y|}{|x-y|^2}|f(y)|dy+|x|^{-2}\int_{|y|\geq2|x|}\frac{|y|^2}{|x-y|^2}|f(y)|dy\\ \lesssim&|x|^{-2}\|f\|_{L^1}+|x|^{-1}\left\|\frac{|y|}{|x-y|^2}\right\|_{L^2(|x|/2<|y|<2|x|)}\|f\|_{L^2}+|x|^{-2}\|f\|_{L^1}\\ \lesssim&|x|^{-2}\|f\|_{L^1}+|x|^{-1}\left\||y|^{\frac{1}{2}}f\right\|_{L^2}.
\end{align*} From this, we can obtain \begin{align*} &\left\|\langle x\rangle \left[A\left(|\tilde{R}_{\mathcal{P}_n}|^2\right)\tilde{u}_n+A\left(|\tilde{R}_{\mathcal{P}_n}|\tilde{u}_n\right)|\tilde{R}_{\mathcal{P}_n}|\right]\right\|_{H^1}\\ \lesssim&\left\|\langle x\rangle\left(\tilde{u}_n\nabla A\left(|\tilde{R}_{\mathcal{P}_n}|^2\right)\right)\right\|_{L^2}+\left\|\langle x\rangle A\left(|\tilde{R}_{\mathcal{P}_n}|^2\right)\nabla\tilde{u}_n\right\|_{L^2}\\ &+\left\|\langle x\rangle\left[|\tilde{R}_{\mathcal{P}_n}|\nabla A\left(|\tilde{R}_{\mathcal{P}_n}|\tilde{u}_n\right)+A\left(|\tilde{R}_{\mathcal{P}_n}|\tilde{u}_n\right)\nabla|\tilde{R}_{\mathcal{P}_n}|\right]\right\|_{L^2}\\ &+\left\|\langle x\rangle\left(\tilde{u}_n A\left(|\tilde{R}_{\mathcal{P}_n}|^2\right)\right)\right\|_{L^2}+\left\|\langle x\rangle|\tilde{R}_{\mathcal{P}_n}| A\left(|\tilde{R}_{\mathcal{P}_n}|\tilde{u}_n\right)\right\|_{L^2}\\ \lesssim&\|\langle x\rangle\nabla A(|\tilde{R}_{\mathcal{P}_n}|^2)\|_{L^6}\|\tilde{u}_n\|_{L^3}+\|\langle x\rangle A(\tilde{R}_{\mathcal{P}_n}^2)\|_{L^{\infty}}\|\nabla \tilde{u}_n\|_{L^2}+\frac{1}{\lambda_n^{1/2}}\|\nabla A(\tilde{R}_{\mathcal{P}_n}\tilde{u}_n)\|_{L^2}\\ &+\frac{1}{\lambda_n^{3/2}}\|A(\tilde{R}_{\mathcal{P}_n}\tilde{u}_n)\|_{L^2}+\|\langle x\rangle A(|\tilde{R}_{\mathcal{P}_n}|^2)\|_{L^6}\|\tilde{u}_n\|_{L^3}+\frac{1}{\lambda_n^{1/2}}\|A(\tilde{R}_{\mathcal{P}_n}\tilde{u}_n)\|_{L^2}\\ \lesssim&\frac{1}{\lambda_n^{1/2}}\|\nabla \tilde{R}_{\mathcal{P}_n}\|_{L^2}\|\tilde{u}_n\|_{L^3}+\left(\|\tilde{R}_{\mathcal{P}_n}\|_{L^2}^2+\||y|^{\frac{1}{2}}\tilde{R}_{\mathcal{P}_n}^2\|_{L^2}\right)\|\nabla \tilde{u}_n\|_{L^2}+\frac{1}{\lambda_n^{1/2}}\|\nabla(\tilde{R}_{\mathcal{P}_n}\tilde{u}_n)\|_{L^{6/5}}\\ &+\frac{1}{\lambda_n^{3/2}}\|(\tilde{R}_{\mathcal{P}_n}\tilde{u}_n)\|_{L^{6/5}}+\frac{1}{\lambda_n^{1/2}}\|\nabla \tilde{R}_{\mathcal{P}_n}\|_{L^2}\|\tilde{u}_n\|_{L^3}+\frac{1}{\lambda_n^{1/2}}\|(\tilde{R}_{\mathcal{P}_n}\tilde{u}_n)\|_{L^{6/5}}\\ \lesssim&\frac{1}{\lambda_n^{\frac{3}{2}}}\|\tilde{u}_n\|_{L^2}+\left(1+\frac{1}{\lambda_n}\|\tilde{R}_{\mathcal{P}_n}\|_{L^3}^{\frac{3}{2}}\right)\|\nabla \tilde{u}_n\|_{L^2}+\frac{1}{\lambda_n^{1/2}}\left(\|\nabla \tilde{R}_{\mathcal{P}_n}\|_{L^2}\|\tilde{u}_n\|_{L^3}+\|\tilde{R}_{\mathcal{P}_n}\|_{L^3}\|\nabla\tilde{u}_n\|_{L^2}\right)\\ &+\frac{1}{\lambda_n^{3/2}}(\|\tilde{R}_{\mathcal{P}_n}\|_{L^3}\|\tilde{u}_n\|_{L^2})+\frac{1}{\lambda_n^{1/2}}\|\nabla \tilde{R}_{\mathcal{P}_n}\|_{L^2}\|\tilde{u}_n\|_{L^3}+\frac{1}{\lambda_n^{1/2}}\|\tilde{R}_{\mathcal{P}_n}\|_{L^3}\|\tilde{u}_n\|_{L^2}\\ \lesssim&\lambda_n^{\frac{1}{2}}. \end{align*} Thus, we have \begin{align}\label{minimal23dou} \|\langle x\rangle \nabla I\|_{L^2_{[t,t_n]}L^2(\mathbb{R}^3)}\lesssim\lambda_n. \end{align} The local nonlinear term is estimated from Sobolev embedding and standard nonlinear estimates in Besov spaces: \begin{align}\label{minimal24dou} \|\nabla^{\frac{3}{2}}(|\tilde{u}_n|^{\frac{4}{3}}\tilde{u}_n)\|_{L^{18/13}(\mathbb{R}^3)}\lesssim\||\tilde{u}_n|^{\frac{4}{3}}\|_{L^{9/2}}\|\nabla^{\frac{3}{2}}\tilde{u}_n\|_{L^2}\lesssim\lambda_n^4\|\nabla^{\frac{3}{2}}\tilde{u}_n\|_{L^2}.
\end{align} For the non-local nonlinear term, we have \begin{align}\label{minimal25dou} \|\nabla^{\frac{3}{2}}(A(\tilde{u}_n^2)\tilde{u}_n)\|_{L^{4/3}(\mathbb{R}^3)}\lesssim&\|\nabla^{\frac{3}{2}}A(\tilde{u}_n^2)|\tilde{u}_n|\|_{L^{4/3}}+\|A(\tilde{u}_n^2)\nabla^{\frac{3}{2}}\tilde{u}_n\|_{L^{4/3}}\notag\\ \lesssim&\|\nabla^{\frac{3}{2}}A(\tilde{u}_n^2)\|_{L^{12/7}}\|\tilde{u}_n\|_{L^6}+\|A(\tilde{u}_n^2)\|_{L^4}\|\nabla^{\frac{3}{2}}\tilde{u}_n\|_{L^2}\notag\\ \lesssim&(\|\tilde{u}_n\|_{L^{12/5}}\|\tilde{u}_n\|_{L^6}+\|\tilde{u}_n\|_{L^{24/7}}^2)\|\nabla^{\frac{3}{2}}\tilde{u}_n\|_{L^2}\notag\\ \lesssim&\lambda_n^3\|\nabla^{\frac{3}{2}}\tilde{u}_n\|_{L^2}. \end{align} Injecting \eqref{minimal22dou}, \eqref{minimal23dou}, \eqref{minimal24dou} and \eqref{minimal25dou} into \eqref{minimal:step2dou}, we deduce \begin{align*} \|\nabla^{\frac{3}{2}}\tilde{u}_n\|_{L^{\infty}([t,t_n])L^2(\mathbb{R}^3)}\lesssim\lambda_n+\lambda_n^5\|\nabla^{\frac{3}{2}}\tilde{u}_n\|_{L^{\infty}([t,t_n])L^2(\mathbb{R}^3)}. \end{align*} Since $\lambda_n^5\ll1$ for $t_n$ close enough to $0$, the last term can be absorbed into the left-hand side, so that $\|\nabla^{\frac{3}{2}}\tilde{u}_n\|_{L^{\infty}([t,t_n])L^2(\mathbb{R}^3)}\lesssim\lambda_n\lesssim\lambda_n^{\frac{1}{2}}$, and \eqref{minimal5:2dou} holds. \end{proof}
\section{Tracking Data Intellectual Property Using Adversarial Perturbation in Fourier Domain} \begin{figure*}[t!] \centering \includegraphics[width=.9\textwidth]{figure/overview.pdf} \caption{Overview of \textsc{DeepTaster}\xspace.} \label{fig:overview1} \end{figure*} \begin{figure*}[t!] \centering \includegraphics[width=.9\textwidth]{figure/Attack.pdf} \caption{\centering{Considered Attack Scenarios.}} \label{fig:attack} \end{figure*} \section{\textsc{DeepTaster}\xspace System Design}\label{sec:system_design} In this section, we present \textsc{DeepTaster}\xspace, a dataset IP tracking tool that verifies whether an attacker's model has stolen knowledge from a victim's dataset or model. We first discuss the design requirements before presenting the system design overview of \textsc{DeepTaster}\xspace. We then examine the three major components of the system: adversarial perturbation generation and transformation, meta-classifier generation, and verification. \subheading{Design Requirements.} Protecting dataset IP presents several challenges compared to protecting model-dependent IP, which may be solved by existing watermarking and fingerprinting schemes. To solve its unique challenges, we identify the following criteria for a reliable copyright protection and verification method for protecting dataset IP. \begin{enumerate} \item \textbf{Robustness.} The protection should capture the dataset ownership IP and be resilient to model architecture changes. To the best of our knowledge, this is the first work that tackles this design challenge. The protection should also be generalisable to ensure robustness even when applied to protect various datasets. \item \textbf{Fidelity.} The ownership protection and verification process should not impact the normal model utility. \item \textbf{Efficacy.} The verification should have high accuracy and recall in detecting stolen dataset intelligence, even across multiple model architectures. \item \textbf{Efficiency.} The verification process should be efficient and lightweight, e.g., taking only a few samples to verify. \end{enumerate} \subsection{\textsc{DeepTaster}\xspace Overview} As depicted in Figure~\ref{fig:overview1}, \textsc{DeepTaster}\xspace consists of the following 3-step process: (a) the generation of adversarial perturbation samples and their translation to the Fourier frequency domain using the Discrete Fourier Transform (DFT), (b) the creation of a meta-classifier that is trained on the spectra (i.e., DFT samples) in order to distinguish the dataset intelligence, and (c) verification of the suspect model by generating adversarial perturbation samples from it and then testing them using the meta-classifier. Details of each step are described in the following subsections. \subsubsection{Adversarial Perturbation Generation and Transformation}\label{sec:adversarial} \begin{algorithm} \caption{Adversarial Perturbation Generation and Transformation.}\label{alg:adveserial} \hspace*{\algorithmicindent} \textbf{Input}: Sample image $I$ and target model $M$\\ \hspace*{\algorithmicindent} \textbf{Output}: Adversarial DFT image $Adv$ \begin{algorithmic}[1] \Procedure{$GenerateAdv$}{$M$,$I$} \State $Adv_{raw} \gets FGSM(M,I)$ \State $Adv_{per} \gets Adv_{raw} - I$ \State $Adv_{Fourier} \gets FourierTransform(Adv_{per})$ \State $Adv \gets ShiftLog(Adv_{Fourier})$ \State return $Adv$ \EndProcedure \end{algorithmic} \end{algorithm} \begin{figure}[h!]
\centering \begin{tabular}{cc} \includegraphics[width=.9\linewidth]{figure/adversarial_generation.pdf}& \end{tabular} \caption{Adversarial Generation and Transformation.} \label{fig:advGenTransform} \centering \end{figure} Given a victim dataset and models trained on it, the model owner uses Algorithm~\ref{alg:adveserial} to generate adversarial images from those models to capture the dataset intelligence. The FGSM attack is executed on the target model $M$ to create adversarial images that capture the characteristics of the victim model, according to Equation~\ref{eq.fgsm}. We use the Foolbox tool~\cite{foolbox} with an epsilon value of $0.03$ and the $l^2$-norm to conduct the attack. The FGSM attack was carried out on all victim models using the same seed images. In the case where the seed image domain is different from the victim model's image domain, the attack was carried out by re-labeling the seed images with the prediction value of the model. We only select adversarial images for which FGSM succeeded in forcing the model to give a wrong/different prediction. The adversarial perturbation $Adv_{per}$ is the pixel-wise difference between the original image $I$ and its adversarial image $Adv_{raw}$. The adversarial perturbation $Adv_{per}$ is then transformed into the frequency domain, resulting in the adversarial DFT image $Adv_{Fourier}$. To better capture the characteristics of the dataset intelligence using the adversarial DFT images, we generate the final DFT images $Adv$ by applying a shift and log to the adversarial DFT images $Adv_{Fourier}$. The process of generating adversarial DFT images is summarised in Figure~\ref{fig:advGenTransform}. \subsubsection{Meta-Classifier Generation}\label{sec:meta_classifier} \begin{algorithm} \caption{Meta-classifier Generation.}\label{alg:cap} \hspace*{\algorithmicindent} \textbf{Input}: The subset of victim dataset $D'$, victim models ${M_1,..., M_{n}}$ trained on victim dataset $D$.\\ \hspace*{\algorithmicindent} \textbf{Output}: Meta-classifier $Model_{meta}$, Threshold $\tau$ \begin{algorithmic}[1] \State Split $D'$ into $D'_{train}, D'_{val}, D'_{test}$ \State $Adv_{train} \gets \bigcup_{k=1}^{n} GenerateAdv({M_k, D'_{train}})$ \State $Adv_{val} \gets \bigcup_{k=1}^{n} GenerateAdv({M_k, D'_{val}})$ \State Train $Model_{meta}$ on $Adv_{train}$ \State $output \gets Model_{meta}(Adv_{val})$ \State Sort $output$ \State $\tau \gets output[0.04*length(output)]$ \State return $Model_{meta}$ and $\tau$ \end{algorithmic} \end{algorithm} To ensure robust dataset intelligence characteristics are captured, we develop a one-class meta-classifier that is trained on adversarial DFT images generated from multiple model architectures, each of which is trained on the victim dataset. The intuition here is to \textit{build a resilient detector that can efficiently recognise the stolen dataset intelligence, even when the adversary changes the model architecture or transfers the intelligence to another model as an adaptive attack strategy.} We choose a Deep Support Vector Data Description (DeepSVDD)~\cite{Ruff2018Deep} model as the meta-classifier from among the various types of one-class classification models. SVDD~\cite{Tax2004Support} tries to extract the common characteristics of data variation to conduct the classification. In particular, DeepSVDD trains a neural network to minimize the volume of a hypersphere that encloses the network representations of the data.
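As an illustration of this mechanism, the following is a minimal PyTorch-style sketch of a DeepSVDD-style objective over the adversarial DFT images; the encoder architecture, helper names, and hyperparameters are illustrative assumptions on our part, not the exact implementation behind \textsc{DeepTaster}\xspace.
\begin{verbatim}
import torch
import torch.nn as nn

# Illustrative encoder for 1-channel DFT spectrum images. The final layer
# is bias-free, since DeepSVDD removes bias terms to avoid a collapsed
# (trivial) hypersphere solution.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(64, bias=False))

def train_deep_svdd(loader, epochs=20, lr=1e-3):
    # Fix the hypersphere center c as the mean embedding of Adv_train.
    with torch.no_grad():
        c = torch.cat([encoder(x) for (x,) in loader]).mean(dim=0)
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(epochs):
        for (x,) in loader:
            # Objective: mean squared distance to c, a proxy for the
            # volume of the hypersphere enclosing the representations.
            loss = ((encoder(x) - c) ** 2).sum(dim=1).mean()
            opt.zero_grad(); loss.backward(); opt.step()
    return c

def score(x, c):
    # Lower score = closer to the victim dataset's spectral fingerprint.
    return ((encoder(x) - c) ** 2).sum(dim=1)
\end{verbatim}
A threshold $\tau$ over the resulting scores is then calibrated on $Adv_{val}$, exactly as in lines 5-7 of Algorithm~\ref{alg:cap}.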
Therefore, we utilize this mechanism of DeepSVDD to extract the common fingerprint across the different models trained on the same dataset. Algorithm~\ref{alg:cap} describes the procedure to generate the \textsc{DeepTaster}\xspace meta-classifier. The input to our algorithm is a subset, $D'$, of the victim dataset $D$ ($|D'|\ll |D|$), as well as the $n$ different victim models $M_1$, ..., $M_{n}$ trained on the victim dataset $D$. The output is the meta-classifier $Model_{meta}$ and the corresponding decision threshold $\tau$. First, we split the sub-dataset $D'$ into a training dataset $D'_{train}$, a validation dataset $D'_{val}$, and a test dataset $D'_{test}$. The training dataset is used to train the meta-classifier, the validation dataset $D'_{val}$ is used to calculate the threshold $\tau$, and $D'_{test}$ is used to evaluate the suspect model. The training dataset $D'_{train}$ and validation dataset $D'_{val}$ are used to generate the adversarial samples $Adv_{train}$ and $Adv_{val}$, respectively, from the victim models using $GenerateAdv(\cdot)$, as in lines 1-3. The second step is to train the one-class classifier on the training adversarial samples $Adv_{train}$, as in line 4. The third step is to define a threshold value using the output of the meta-classifier on the validation adversarial samples $Adv_{val}$, as in lines 5-7. The classification decision threshold is selected so as to balance the true positive and true negative rates. Specifically, the threshold is chosen so that 96\% of the validation set's samples lie below the meta-classifier's threshold (and therefore the misclassified validation samples account for at most 4\%). The selected threshold is used for classifying the suspect model based on the measurement of adversarial DFT samples generated from it via the meta-classifier. The details of the model verification process are given in Section \ref{sec:verification}. Note that the threshold is meta-classifier dependent, instead of suspect model dependent. The more victim models with different architectures are used to train the meta-classifier, the more knowledge it acquires. As a result, the threshold value might be slightly different, even though the victim dataset is the same. Furthermore, the threshold could also be adaptively adjusted for different preferences. \subsubsection{Verification}\label{sec:verification} \begin{algorithm} \caption{Validation using \textsc{DeepTaster}\xspace.}\label{alg:val} \hspace*{\algorithmicindent} \textbf{Input}: Meta-classifier $Model_{meta}$, the threshold $\tau$, the test dataset $D'_{test}$, and the suspect model $S$. \\ \hspace*{\algorithmicindent} \textbf{Output}: Verification results \begin{algorithmic}[1] \State $Adv_{test} \gets GenerateAdv({S, D'_{test}})$ \State $X \gets 0$ \State $k \gets len(Adv_{test})$ \While{$k \neq 0$} \State $X \gets X + (Model_{meta}(Adv_{test}[k])\leq\tau)$ \State $k \gets k - 1$ \EndWhile \If{$X > len(Adv_{test}) * \frac{1}{2}$} \State $S$ is a stolen model \Else \State $S$ is a benign model \EndIf \end{algorithmic} \end{algorithm} \begin{figure}[h!] \centering \begin{tabular}{cc} \includegraphics[width=\linewidth]{figure/threshold.pdf}& \end{tabular} \caption{Plot of the meta-classifier output of the CIFAR10 validation set and test set using 3 model architectures vs the ImageNet test set.
The threshold line cleanly separates the dataset intelligence even in the presence of model architecture changes.} \label{fig:threshold} \centering \end{figure} Algorithm~\ref{alg:val} describes the verification procedure using \textsc{DeepTaster}\xspace. The input to our algorithm is the meta-classifier $Model_{meta}$, the threshold value $\tau$, the test dataset $D'_{test}$, and the suspect model $S$. The datasets $D'_{train}$ and $D'_{val}$, which are used in Algorithm~\ref{alg:cap}, and $D'_{test}$ are subsets of $D'$ with no intersection, so that there is no bias in the validation and testing steps. The output is the verification result, indicating whether the suspect model is stolen or not --- \textit{i.e., contains stolen dataset intelligence from a victim dataset}. To test the suspect model, we generate the test adversarial DFT samples using the steps in Section~\ref{sec:adversarial} and feed the output to the meta-classifier one-by-one, as in lines 1-7. If more than half of the samples fall below the classifier's threshold $\tau$, more than half of the samples are discerned as stolen, and the suspect model is judged to be stolen, as in lines 8-12. Figure~\ref{fig:threshold} shows the CIFAR10 meta-classifier's threshold value and the results of the CIFAR10 validation and test sets versus the ImageNet test set. Given the protected dataset CIFAR10, Figure~\ref{fig:threshold} demonstrates that our meta-classifier is capable of distinguishing suspect models (trained on the validation set of CIFAR10, carrying the intelligence of CIFAR10) from benign models (trained on ImageNet), with high accuracy and across a variety of model architectures (DenseNet, ResNet, and VGG). \section{Introduction} Deep neural networks (DNNs) have recently gained much attention from academia and industry because they have proved useful in numerous applications, including image recognition~\cite{wang2018cosface}, autonomous driving~\cite{luo2017traffic}, and medical image classification~\cite{zhang2019medical}. One of the reasons for their success and widespread utilization in various domains is that IT giants such as Google, IBM, Microsoft, and OpenAI have released their pre-trained DNN models to the scientific community to promote further research and scientific advancement. In many cases, pre-trained models have been built on huge datasets collected, processed, organized, and labeled by the organisation. Organisations that wish to commercialise the use of their proprietary DNN model can now do so via a cloud provider that offers Machine Learning as a Service (MLaaS). However, DNN models or datasets can potentially be stolen when they are used for MLaaS~\cite{sun2021mind}. In particular, the dataset for MLaaS could be accessed and misused by a malicious insider. For example, a recent data breach incident at ``Capital One'' showed that an unauthorized insider attacker could access users' data on the cloud server~\cite{Murphy19:cloud}, demonstrating the possibility of dataset misuse by MLaaS providers -- a malicious MLaaS provider can steal a proprietary dataset and use it for her own DNN models without the dataset owner's permission. Another possibility is the theft of a DNN model by external attackers by querying the model via MLaaS APIs. Recent studies (e.g., \cite{papernot2017practical,yu2020cloudleak,Truong2021data}) have shown that DNN model stealing attacks can effectively be launched even in real-world services.
Therefore, it is necessary for DNN model owners to protect the intellectual property (IP) of their own models from stealing attacks. Existing DNN IP protection mechanisms are categorized into \emph{DNN watermarking} and \emph{DNN fingerprinting}. DNN watermarking embeds the information of the model owner (i.e., a watermark) into a proprietary model~\cite{uchida2017embedding,darvish2019deepsigns,adi2018turning,zhang2018protecting,chen2018deepmarks,chen2019blackmarks,abuadbba2021deepisign,jia2021entangled,szyller2021dawn}. The model ownership can be verified by retrieving the identical or a similar watermark from a suspect model. There have been many proposals for developing effective DNN watermarking schemes. However, DNN watermarking has two limitations: (a) DNN watermarking is inherently invasive by design because this approach requires modifying the original DNN model to embed a watermark, which may change the DNN model's behavior~\cite{zhang2018protecting,wang2019neural}. (b) DNN watermarking is not sufficiently resilient against adversarial attacks~\cite{Xue2021DNN, Yan2022Cracking}. Aiken \textit{et al.}~\cite{Aiken2020Neural} showed that, for most state-of-the-art DNN watermarking schemes, attackers could effectively manipulate the neurons or channels in DNN layers that contribute to the embedded watermark. Lukas \textit{et al.}~\cite{lukas2021sok} recently demonstrated that transfer learning could remove nearly all of the 11 tested watermarking schemes. Unlike DNN watermarking, DNN fingerprinting is \textit{non-invasive} by design because this approach uses the unique characteristics (i.e., fingerprinting features) of each DNN model without modifying the model itself. A verifier can identify a model by examining its fingerprinting features~\cite{lukas2019deep,cao2021ipguard}. Generally, a single fingerprinting feature is insufficient to identify a model built through model stealing and adaptive attacks~\cite{chen2021copy}. Chen \textit{et al.}~\cite{chen2021copy} recently introduced the state-of-the-art fingerprinting scheme dubbed \textsc{DeepJudge}\xspace, which relies on multiple fingerprinting features to protect the copyright of a model. However, \textsc{DeepJudge}\xspace uses fingerprinting features associated with the model's parameters. This indicates that \textsc{DeepJudge}\xspace would not be effective in identifying the unauthorised use of the protected model's training dataset when a suspect DNN model is composed of different parameters or uses a different model architecture. Additionally, in this paper, we found that \textsc{DeepJudge}\xspace is not sufficiently effective in detecting models constructed through \textit{transfer learning}~\cite{Torrey2010transfer}, which is a method of reusing a pre-trained model for another task~\cite{Yan2022Cracking}. Our experimental results show that \textsc{DeepJudge}\xspace's detection accuracy is significantly degraded for models built through transfer learning. In particular, \textsc{DeepJudge}\xspace is designed to detect the unauthorized use of a victim's DNN model where a suspect model's architecture is the same as the victim model's. Therefore, \textsc{DeepJudge}\xspace would fail to detect the case where a victim's data is illegally used to build a suspect model whose architecture is different from the victim's original model architecture.
To cover such an attack scenario (see Figure~\ref{fig:motivation}), we present a novel DNN fingerprinting scheme dubbed \textsc{DeepTaster}\xspace. \begin{figure}[h!] \centering \includegraphics[width=.8\linewidth]{figure/intro.pdf} \caption{New attack scenario in which a victim's dataset is stolen to build a suspect model.} \label{fig:motivation} \end{figure} In this paper, we show that the characteristics of a specific dataset used to build a DNN model can be uniquely determined from the spectra of gradient-based adversarial examples taken with respect to the decision boundaries of a target model. Interestingly, adversarial examples generated for different DNN models, which were all trained on the same dataset, show statistically similar patterns in the Discrete Fourier Transform (DFT) domain, even when their model architectures are different. Motivated by these findings, we propose \textsc{DeepTaster}\xspace as a scheme to detect data theft attacks (see Figure~\ref{fig:motivation}). \textsc{DeepTaster}\xspace generates a few adversarial images with perturbations, transforms them into the DFT domain, and uses their statistical properties as the features of a meta-classifier to identify the dataset used in a suspect model. According to our experimental results, \textsc{DeepTaster}\xspace can identify, with high accuracy, a dataset used to build a suspect model even when the suspect model's architecture differs from the victim model's. To the best of our knowledge, \textsc{DeepTaster}\xspace is the first attempt to detect this new type of model stealing attack. We summarize our key contributions as follows: \begin{itemize} \item We propose a novel DNN fingerprinting scheme, \textsc{DeepTaster}\xspace, particularly to detect data theft attacks. \textsc{DeepTaster}\xspace uses a meta-classifier to determine whether a suspect model is built on a proprietary dataset within a small number of queries (see Section~\ref{sec:system_design}). \item We introduce six new attack scenarios, including multi-architecture, data augmentation, retraining, transfer learning, fine-tuning, and pruning attacks, in which a victim's data is stolen to build a suspect model -- a malicious cloud service provider or insider attacker can steal user data and use it to build her own model (see Section~\ref{sec:threat_model}). Our experimental results demonstrate that the state-of-the-art DNN fingerprinting scheme, \textsc{DeepJudge}\xspace, would be ineffective in preventing these attacks, especially when a suspect model architecture differs from a victim's original model architecture. We discuss the root cause of \textsc{DeepJudge}\xspace's limitation in detecting data theft attacks (see Section~\ref{sec:Discussion}). \item We comprehensively evaluate the effectiveness of \textsc{DeepTaster}\xspace under the six attack scenarios with three datasets (CIFAR10, MNIST, and Tiny-ImageNet) and three model architectures (VGG16, ResNet18, and DenseNet161). Overall, \textsc{DeepTaster}\xspace achieves a \emph{balanced accuracy} of 94.95\%, 94.95\%, and 93.60\% for the CIFAR10, MNIST, and Tiny-ImageNet datasets, respectively, which outperforms \textsc{DeepJudge}\xspace in the same settings (see Section~\ref{sec:Experiments}). \end{itemize} \section{Background} This section provides the background on deep neural networks, adversarial perturbations, and the Discrete Fourier Transform (DFT).
\subsection{Deep Neural Networks (DNNs)} A DNN classifier is a function $f: X \to Y$ that maps the input $x \in X$ to the probability $y \in Y$ that the input belongs to each class~\cite{wang2022octopus}. A DNN classifier consists of layers $\{l_0, l_1, ..., l_L\}$, each of which is a set of neurons $\{n_{l,0}, n_{l,1}, ..., n_{l,N_l}\}$. Here, the first layer $l_0$ is called the input layer, the last layer $l_L$ is called the output layer, and the rest $l_1,..., l_{L-1}$ are called the hidden layers. The parameters within the hidden layers are called weights and biases. The neurons that compose each layer calculate their output by sequentially applying a linear function followed by a non-linear function, called the activation function, to the input. We then apply a softmax activation function $\sigma(\cdot)$ to the output layer $f_L(\cdot)$ to convert likelihoods into probabilities for each predicted class. Training the above DNN classifier requires a loss function that can be optimised by gradient descent on all trainable weights and biases. An example of a loss function is cross-entropy. A black-box deployment of a DNN classifier only exposes the API of the model. The user sends an input element $x \in X$, and the server queries the model internally and responds with a confidence vector $\sigma(f_L(x)) \in Y$. \subsection{Adversarial Perturbation and Attack} In the computer vision domain, an adversarial perturbation is a maliciously crafted perturbation of the input sample (image) that can lead to misclassification~\cite{FGSM,PGD} by the model. One known perturbation-generating mechanism is gradient-based adversarial attacks, such as the fast gradient sign method (FGSM)~\cite{FGSM}. FGSM applies a minimal modification to the input image in the direction that most affects the target classifier's prediction. A ``small modification'' (perturbation), for instance, changing a single pixel's color, may be enough to push the input across the model's decision boundary. We observe that adversarial algorithms craft perturbations that are correlated with the dataset ownership IP encoded in the hidden layers, and thus the perturbations likely carry sufficient information about the learned knowledge to be used as an IP protection mechanism. We use Foolbox~\cite{foolbox}, a standard library that implements various adversarial attacks, and select FGSM as the best option. FGSM is a gradient-based adversarial algorithm proposed by Goodfellow~\cite{FGSM}. Assume the original image is $x$ and a slight perturbation applied to $x$ produces the adversarial sample $\bar{x}$. The attack seeks to maximize the loss function $J(f(x),y)$ to obtain the adversarial sample $\bar{x}$: maximizing $J$ means the noise-added sample no longer belongs to class $y$, thus accomplishing the goal. Throughout the optimisation process, the $L_{\infty}$ constraint $\left \| \bar{x}-x \right \|_{\infty} \leq \epsilon$ must be satisfied. In summary, FGSM adversarial examples can be obtained by the following equation: \begin{equation} \label{eq.fgsm} \bar{x} = x+ \epsilon \cdot \mathrm{sgn}\left ( \nabla_x J (f(x),y)\right ) \end{equation} \subsection{Discrete Fourier Transform (DFT)} The Discrete Fourier Transform (DFT) transforms a sequence of numbers $\{x_0, x_1, ..., x_{N-1}\}$ in the time domain into another sequence of numbers $\{y_0, y_1, ..., y_{N-1}\}$ in the frequency domain using the equation $y_k=\sum_{n=0}^{N-1} x_n \cdot e^{-\frac{i2\pi}{N}kn}$.
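To make the generation step concrete, the following minimal sketch crafts a sign-step FGSM perturbation as in Equation~\ref{eq.fgsm} and converts it into the centered log-magnitude spectrum used by Algorithm~\ref{alg:adveserial}. It assumes a PyTorch classifier and NumPy's FFT; all names and defaults are illustrative, and the experiments themselves rely on Foolbox's attack implementation with the $l^2$-norm rather than this plain sign step.
\begin{verbatim}
import numpy as np
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, eps=0.03):
    # FGSM update: one signed-gradient step on the input image.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()     # Adv_raw
    return (x_adv - x).detach()         # Adv_per = Adv_raw - I

def log_spectrum(perturbation):
    # 2-D DFT of the perturbation, shifted so the zero frequency is
    # centered, then log-scaled (the shift-and-log step of Algorithm 1).
    p = perturbation.squeeze().cpu().numpy()
    if p.ndim == 3:
        p = p.mean(axis=0)              # collapse color channels
    spec = np.fft.fftshift(np.fft.fft2(p))
    return np.log1p(np.abs(spec))
\end{verbatim}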
Applying the DFT to an image allows the spectrum, which is the intensity of each frequency component, to be represented like an image. Observing the spectrum of an image allows us to gather more concentrated noise information that reflects the DNN dataset ownership IP. \textit{The intuition is that we aim to leverage those DFTs to track the dataset ownership IP across architectures.} This image processing technique is widely known, and various methods, such as the Fast Fourier Transform (FFT), have been proposed to compute the Fourier transform of an image quickly. \section{Conclusion} In this paper, we proposed a novel fingerprinting technique dubbed \textsc{DeepTaster}\xspace which tracks dataset IP using adversarial perturbations in the Fourier domain. We discovered that the learned knowledge of DNNs from a specific dataset can be exposed by the spectra of the gradient-based adversarial perturbations of the DNNs. This is then leveraged to identify whether a suspect DNN contains intelligence from that particular dataset. The steps are as follows. \textsc{DeepTaster}\xspace generates a few adversarial images using adversarial perturbations, and transforms them into the Fourier frequency domain before training a meta-classifier that can be used to verify whether a target dataset has been used in the training of a DNN model. To demonstrate the effectiveness of \textsc{DeepTaster}\xspace, we evaluated its detection accuracy on three datasets, with three model architectures, under various attack scenarios --- including mutating the model architectures, transfer learning, pruning, fine-tuning, and data augmentation. Our results suggest that \textsc{DeepTaster}\xspace is robust against all of these attacks. \section{Threat Model}\label{sec:threat_model} For the evaluation of \textsc{DeepTaster}\xspace, we assume different levels of adversarial settings to execute a DNN IP stealing attack. In all scenarios, the adversary aims to steal the dataset ownership IP either from the dataset itself or from the DNN model trained on it. \\ \subheading{Overview.} We consider the leakage of the dataset and/or the DNN model. From the dataset perspective, it has been shown that the MLaaS ecosystem enables dataset access and misuse by malicious insiders, as shown recently in the ``Capital One'' data breach incident. With the aim of avoiding IP violation detection, an adversary may (a) use the leaked dataset to train a different DNN architecture, or (b) augment the leaked dataset with more samples before training. We are not aware of any existing work that addresses these dataset intelligence IP violations. On the other hand, from the DNN model perspective, an adversary can steal models from the victim's private cloud or execute a model extraction attack using the MLaaS API of the victim model. In the former case, the adversary can fine-tune, prune, or transfer-learn the stolen model to increase performance and to hide the fact that it was stolen. In addition, the adversary might commercially use DNN models released for education, or leak models they have stolen from the private cloud. We designed and tested the following threat models.\\ \subheading{Assumptions.} We consider the following assumptions. (a) Capacity: the adversary could steal the dataset and/or the model. (b) Goal: the adversary aims to steal intelligence of the dataset and fool the copyright verification.
(c) Assumption: the surrogate model developed by the adversary is well-trained, with sufficient accuracy that the adversary stands to profit from its sale or commercialisation. \subheading{Settings.} In our experiments, we consider the following adversarial settings. Table~\ref{tb:adveserialsettings} summarises these attacks along with the access-level assumptions.\\ \subheading{(1) Multi-Architecture Attack (MAA).} The adversary steals the victim's dataset and uses it to train a model with an architecture different from the original victim model's. None of the existing fingerprinting or watermarking IP protection schemes has considered this attack. \subheading{(2) Data Augmentation Attack (DAA).} The attacker in this case steals the victim's dataset. They then create a new dataset by combining the stolen data with data from the same domain, with the aim of either hiding the stolen data or achieving better model learning. The attacker either trains a different DNN model on the combined dataset, or transfer-learns a stolen model pretrained on the victim dataset to the combined dataset, and uses it commercially. In both cases, the attacker's model contains some dataset intelligence obtained from the stolen dataset. Here, our goal is to show that \textsc{DeepTaster}\xspace can detect the dataset intelligence obtained through the DAA. \subheading{(3) Model Retraining Attack (MRA).} The adversary has part of the victim's dataset. They also know the structure of the victim's model. The adversary uses the dataset they have to retrain a model of the same structure as the victim's model in order to avoid IP detection, and then uses the retrained model commercially. \subheading{(4) Transfer Learning Attack (TLA).} The adversary steals the victim's model. Then the adversary uses transfer learning to fine-tune the model on another dataset that the adversary has, in order to use the stolen model in another field. This neutralizes various attempts to detect that the model is stolen, and allows the model to work in the desired domain. \subheading{(5) Model Fine-tuning Attack (MFA).} The adversary knows the structure and parameters of the victim's model. They also have a portion of the dataset used by the victim for model training. To conceal the fact that the model was stolen, the adversary fine-tunes the model on the portion of the dataset that they possess, and then uses it commercially. \subheading{(6) Model Pruning Attack (MPA).} The adversary has the victim's model. However, the adversary does not have any information on the dataset used for training. The adversary aims to prune and redistribute the stolen model.
\begin{table}[] \caption{Summary of Adversarial Settings.} \begin{tabular}{llcc} \hline \multirow{2}{*}{N} & \multicolumn{1}{c}{\multirow{2}{*}{{\ul \textbf{Attack}}}} & \multicolumn{2}{c}{{\ul \textbf{Access}}} \\ \cline{3-4} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{Dataset} & Model \\ \hline 1 & Multi-Architecture Attack (MAA) & \multicolumn{1}{c}{Full} & Without \\ \hline 2-1 & Data Augmentation Attack (DAA) & \multicolumn{1}{c}{Full} & Without \\ \hline 2-2 & Data Augmentation Attack (DAA) & \multicolumn{1}{c}{Full} & Full \\ \hline 3 & Model Retraining Attack (MRA) & \multicolumn{1}{c}{Partial} & Full \\ \hline 4 & Transfer Learning Attack (TLA) & \multicolumn{1}{c}{Without} & Full \\ \hline 5 & Model Fine-tuning Attack (MFA) & \multicolumn{1}{c}{Partial} & Full \\ \hline 6 & Model Pruning Attack (MPA) & \multicolumn{1}{c}{Without} & Full \\ \hline \end{tabular} \label{tb:adveserialsettings} \end{table} \section{Experiments} \label{sec:Experiments} We implemented \textsc{DeepTaster}\xspace as a self-contained toolkit in Python. In this section, we evaluate the performance of \textsc{DeepTaster}\xspace against the extensive list of six different attacks mentioned in Section~\ref{sec:threat_model}. Some of these attacks, such as fine-tuning and pruning, are well studied in watermarking. We also examine \textsc{DeepTaster}\xspace against more challenging adaptive attack scenarios such as transfer learning, retraining, and the most challenging one, multi-architecture, which has never been considered in the literature before. To ensure the generalizability of \textsc{DeepTaster}\xspace, we generate three meta-classifiers which track CIFAR10, MNIST, and Tiny-ImageNet, respectively. We also compare our results to the state-of-the-art fingerprinting technique \textsc{DeepJudge}\xspace~\cite{chen2021copy}. \subsection{Experimental Setup} \subheading{Datasets and Victim Models.} We use four datasets: CIFAR10\cite{cifar10}, MNIST\cite{Lecun1998Gradient}, Tiny-ImageNet\cite{le2015tiny}, and ImageNet\cite{ImageNet}. The first three datasets are used as victim datasets, each used to train a meta-classifier that tracks the respective dataset. The ImageNet dataset is used to check the True Negative Rate (TNR). All datasets are image classification datasets with a varying number of classes, ranging from 10 classes in CIFAR10 and MNIST up to 1000 in ImageNet, as described in Table~\ref{tb:dataset}. We point out that we use only half of the Tiny-ImageNet dataset (i.e., 100 classes) for running the experiments in order to reduce the experimental compute time. \begin{table}[h!] \centering \caption{Experiment dataset.} \begin{tabular}{lll} \hline Dataset & \# Classes & Usage \\ \hline CIFAR10 & 10& Victim / Suspect\\ MNIST & 10 & Victim / Suspect\\ Tiny-ImageNet & 100*& Victim / Suspect\\ ImageNet& 1000& Suspect \\ \hline \end{tabular} \label{tb:dataset} \end{table} We use three commonly used DNN architectures to train the victim models on each of the victim datasets: VGG16\cite{vgg16}, ResNet18\cite{ResNet18}, and DenseNet161\cite{DenseNet161}. The details of each model are described in Table~\ref{tb:model_details}. We note that the accuracy of the Tiny-ImageNet-based models is fairly low. However, we use them as a generalisability case study to investigate whether we can still track the ownership of a portion of a large dataset, such as ImageNet, when it is used in deep neural networks. \begin{table}[h!]
\centering \caption{Datasets, models, parameters used, and their baseline accuracy.} \begin{tabular}{llll} \hline Dataset & Architecture & \# Params & Accuracy \\ \hline & VGG16& 134301514& 81.67\%\\ \cline{2-4} CIFAR10 & ResNet18 & 11181642& 72.98\%\\ \cline{2-4} & DenseNet161& 26494090& 76.80\%\\ \hline & VGG16& 134301514 & 99.28\%\\ \cline{2-4} MNIST & ResNet18 & 11181642& 99.37\%\\ \cline{2-4} & DenseNet161& 26494090& 99.03\%\\ \hline & VGG16& 134301514& 36.2\%\\ \cline{2-4} Tiny-ImageNet & ResNet18 & 11181642& 40.00\%\\ \cline{2-4} & DenseNet161& 26494090& 51.12\%\\ \hline \end{tabular} \label{tb:model_details} \end{table} \subsection{Meta-Classifier Evaluation Settings} \subheading{Training Configuration.} We create a meta-classifier that tracks the knowledge of a victim dataset using the method presented in Section~\ref{sec:meta_classifier}. We generate 2176 adversarial DFT images for each victim model and then divide them into train/val/test datasets as follows --- 1600 images as the training set for the meta-classifier, 288 images as the validation set to obtain the classifier thresholds, and the remaining 288 images to conduct the evaluation of verification. In this case, the threshold has been set so that 96\% of validation samples fall below the threshold. The meta-classifiers' balanced accuracy, i.e., (TPR + TNR)/2, is 94.00\%, 95.40\%, and 94.47\% for CIFAR10, MNIST, and Tiny-ImageNet, respectively. This indicates the reliability of using the meta-classifier to detect the existence of dataset intelligence within a suspect model. \subheading{Metrics.} We calculate three metrics: True Positive Rate (TPR), True Negative Rate (TNR), and Balanced Accuracy (BA), and additionally compute the Area Under the Receiver Operating Characteristic curve (ROC AUC) score. The TPR is the fraction of correct answers (i.e., detecting a stolen model as ``Stolen'') when we test 288 adversarial samples of a \textit{stolen} model with the meta-classifier. The TNR is the fraction of correct answers (i.e., labelling a benign model as ``Benign'') when we test 288 adversarial samples of a \textit{benign} model. Balanced Accuracy (BA) is calculated as the average of the TPR and TNR. The ROC AUC is calculated using the adversarial samples of both a \textit{stolen} and a \textit{benign} model. \subsection{Defending Against Various Data IP Attacks} In the following, we focus on the feasibility of our \textsc{DeepTaster}\xspace against the six attack scenarios presented in the threat model in Section~\ref{sec:threat_model}. In Section~\ref{sec:MAA}, we check the performance of the three meta-classifiers against the Multi-Architecture Attack (MAA). For the other five attacks, without loss of generality, the meta-classifier is built to protect the dataset intelligence of CIFAR10. We consider the generalisability of \textsc{DeepTaster}\xspace in protecting the two other datasets in Section~\ref{sec:generalisation}. \subsubsection{Multi-Architecture Attack (MAA)}\label{sec:MAA} \begin{table*}[t!] \centering \caption{MAA results for CIFAR10, MNIST, Tiny-ImageNet meta-classifiers. The copy field values below indicate the classification results (Yes indicates ``Stolen'') and (No indicates ``Benign'').
The {\colorbox[HTML]{BDF3BD}{green}} indicates correct classification.} \begin{tabular}{clcccccc} \hline \multicolumn{1}{c|}{Victim} & \multicolumn{1}{c|}{Ground Truth} & \multicolumn{1}{c|}{Suspect} & \multicolumn{1}{c|}{ResNet} & \multicolumn{1}{c|}{VGG} & \multicolumn{1}{c|}{DenseNet} & \multicolumn{1}{c|}{Copy?} & \multicolumn{1}{l}{Balanced Accuracy (\%)} \\ \hline \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Stolen} & \multicolumn{1}{c|}{CIFAR10} & \multicolumn{1}{c|}{95.14} & \multicolumn{1}{c|}{90.97} & \multicolumn{1}{c|}{96.53} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}Yes} & \multicolumn{1}{c}{} \\ \cline{2-6} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{MNIST} & \multicolumn{1}{c|}{89.58} & \multicolumn{1}{c|}{100} & \multicolumn{1}{c|}{81.6} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}No} & \multicolumn{1}{c}{} \\ \cline{3-6} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Tiny-ImageNet} & \multicolumn{1}{c|}{87.85} & \multicolumn{1}{c|}{85.76} & \multicolumn{1}{c|}{97.57} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}No} & \multicolumn{1}{c}{} \\ \cline{3-6} \multicolumn{1}{c|}{\multirow{-4}{*}{CIFAR10}} & \multicolumn{1}{c|}{\multirow{-3}{*}{Benign}} & \multicolumn{1}{c|}{ImageNet} & \multicolumn{1}{c|}{99.31} & \multicolumn{1}{c|}{100} & \multicolumn{1}{c|}{98.61} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}No} & \multicolumn{1}{c}{\multirow{-4}{*}{94.95}} \\ \hline \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Stolen} & \multicolumn{1}{c|}{MNIST} & \multicolumn{1}{c|}{93.75} & \multicolumn{1}{c|}{97.57} & \multicolumn{1}{c|}{98.61} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}Yes} & \multicolumn{1}{c}{} \\ \cline{2-6} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{CIFAR10} & \multicolumn{1}{c|}{100} & \multicolumn{1}{c|}{100} & \multicolumn{1}{c|}{89.58} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}No} & \multicolumn{1}{c}{} \\ \cline{3-6} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Tiny-ImageNet} & \multicolumn{1}{c|}{76.39} & \multicolumn{1}{c|}{98.96} & \multicolumn{1}{c|}{99.65} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}No} & \multicolumn{1}{c}{} \\ \cline{3-6} \multicolumn{1}{c|}{\multirow{-4}{*}{MNIST}} & \multicolumn{1}{c|}{\multirow{-3}{*}{Benign}} & \multicolumn{1}{c|}{ImageNet} & \multicolumn{1}{c|}{100} & \multicolumn{1}{c|}{64.93} & \multicolumn{1}{c|}{100} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}No} & \multicolumn{1}{c}{\multirow{-4}{*}{94.95}} \\ \hline \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Stolen} & \multicolumn{1}{c|}{Tiny-ImageNet} & \multicolumn{1}{c|}{94.44} & \multicolumn{1}{c|}{96.53} & \multicolumn{1}{c|}{99.31} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}Yes} & \multicolumn{1}{c}{} \\ \cline{2-6} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{CIFAR10} & \multicolumn{1}{c|}{100} & \multicolumn{1}{c|}{73.26} & \multicolumn{1}{c|}{83.33} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}No} & \multicolumn{1}{c}{} \\ \cline{3-6} \multicolumn{1}{c|}{\multirow{-3}{*}{Tiny-ImageNet}} & \multicolumn{1}{c|}{\multirow{-2}{*}{Benign}} & \multicolumn{1}{c|}{MNIST} & \multicolumn{1}{c|}{97.57} & \multicolumn{1}{c|}{98.26} & \multicolumn{1}{c|}{99.65} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}No} & \multicolumn{1}{c}{\multirow{-3}{*}{93.60}} \\ \hline \end{tabular} \label{tb:MAA_results} \end{table*} \hfill\\ \subheading{Attack Strategies.} We evaluate \textsc{DeepTaster}\xspace against MAA to investigate if 
\textsc{DeepTaster}\xspace can detect whether the suspect model contains dataset intelligence from the stolen dataset. Here the attacker trains the stolen dataset on multiple different model architectures to subvert IP detection. We select CIFAR10, MNIST, and Tiny-ImageNet as the victim datasets and train three different architectures (VGG16, ResNet18, and DenseNet161) on each dataset. The same architectures trained on ImageNet~\cite{ImageNet} are used as the benign case for MNIST and CIFAR10. For each case, we target one dataset as the victim across the three models and the other two datasets as benign across the same three models. Table~\ref{tb:model_details} shows the accuracy of those models. To calculate the TPR and TNR of \textsc{DeepTaster}\xspace, we generate 288 adversarial samples with 12 models trained on four different datasets and three different architectures. Then, we use Algorithm~\ref{alg:val} to test the adversarial DFT samples against the meta-classifier we built to detect stolen intelligence from the victim dataset.\newline \subheading{Efficacy.} As shown in Table~\ref{tb:MAA_results}, \textsc{DeepTaster}\xspace exhibits high efficacy against MAA in both the stolen and benign scenarios, regardless of the victim dataset. \textsc{DeepTaster}\xspace can distinguish all models with at least 64\% accuracy. The BAs of the meta-classifiers for CIFAR10, MNIST, and Tiny-ImageNet are high, at 94.95\%, 94.95\%, and 93.60\%, respectively. \newline \observ{ \textsc{DeepTaster}\xspace is effective and efficient in identifying cross-architecture dataset intelligence copies} \subsubsection{Data Augmentation Attack (DAA)} \hfill\\ \subheading{Attack Strategies.} Here the target stolen dataset is CIFAR10, and we assume that the attacker creates a CIFAR15 dataset by adding five extra classes of images from the CIFAR100 dataset so as to claim a dataset different from CIFAR10. Such a strategy aims to obtain better model utility while bypassing IP verification for stealing the intelligence of the CIFAR10 dataset. We select the following 5 random classes from CIFAR100 that are not within CIFAR10: `apples', `bicycle', `can', `roses', and `clock'. We consider two attack cases. (a) The attacker uses a stolen pre-trained ResNet model that was trained on the target dataset CIFAR10, and then further fine-tunes that model on the CIFAR15 dataset they have created. (b) The attacker trains a model such as ResNet from scratch on the CIFAR15 dataset. In both cases, we investigate \textit{how \textsc{DeepTaster}\xspace performs against those two attacks at various epochs (20, 60, and 100).} We also use the same architecture trained on the MNIST dataset as the benign case. The mean accuracy of these attack models is about 72.48\% (see Table~\ref{tb:daa_results} for complete accuracy results). \newline \subheading{Efficacy.} As presented in Table~\ref{tb:daa_results}, \textsc{DeepTaster}\xspace is capable of detecting that the suspect models contain stolen knowledge from the victim dataset CIFAR10. For the first scenario, where the attacker transfer-learns from a stolen model pretrained on CIFAR10, the average and SD of the TPR are 71.53\% and 7.11, respectively. For the second scenario, where the attacker trains a model from scratch, the average and SD of the TPR are 68.17\% and 7.39, respectively. While \textsc{DeepTaster}\xspace correctly detected all cases as stolen with a mean accuracy of 69.85\%, training from scratch as an attack strategy is more challenging and lowers the detection rate, especially at a high number of epochs.
However, it also means the attacker would compromise the utility aspect by lowering the model accuracy as well.\\ \begin{table}[h] \caption{DAA results for CIFAR10 meta-classifier. The copy field values below indicate the classification results (Yes indicates ``Stolen'') and (No indicates ``Benign''). {\colorbox[HTML]{BDF3BD}{green}} indicates correct classification.} \begin{tabular}{c|c|c|c|c} \hline \begin{tabular}[c]{@{}c@{}}ResNet\\ Model\end{tabular} & Epochs & \begin{tabular}[c]{@{}c@{}}Detection \\ Acc.\end{tabular} & \begin{tabular}[c]{@{}c@{}}Model \\ Acc.\end{tabular} & Copy? \\ \hline & 20 & 63.19 & 72.19 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} & 60 & 80.56 & 72.74 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} \multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}TPR\% Positive\\ Pretrained\\ CIFAR10\end{tabular}} & 100 & 70.83 & 71.93 & \multirow{-3}{*}{\cellcolor[HTML]{BDF3BD}Yes (3/3)} \\ \hline & 20 & 58.33 & 72.87 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} & 60 & 63.54 & 73.71 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} \multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}TPR\% Positive\\ Scratch\\ CIFAR10\end{tabular}} & 100 & 82.64 & 74.63 & \multirow{-3}{*}{\cellcolor[HTML]{BDF3BD}Yes (3/3)} \\ \hline & 20 & 85.42 & 99.60 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} & 60 & 79.86 & 99.69 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} \multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}TNR\% Negative\\ Pretrained\\ MNIST\end{tabular}} & 100 & 99.65 & 99.48 & \multirow{-3}{*}{\cellcolor[HTML]{BDF3BD}No (3/3)} \\ \hline & 20 & 66.67 & 99.54 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} & 60 & 86.11 & 99.67 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} \multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}TNR\% Negative\\ Scratch\\ MNIST\end{tabular}} & 100 & 100 & 99.67 & \multirow{-3}{*}{\cellcolor[HTML]{BDF3BD}No (3/3)} \\ \hline \end{tabular} \label{tb:daa_results} \end{table} \observ{ Data Augmentation Attacks are more challenging in general, especially when the attacker trains a model from scratch; however, \textsc{DeepTaster}\xspace still correctly identifies that the new model contains stolen dataset intelligence} \subsubsection{Model Retraining Attack (MRA)} \hfill\\ \subheading{Attack Strategies.} In MRA, an attacker trains the ResNet18 model on 10\%, 30\%, 50\%, 70\%, 90\%, and 100\% of the CIFAR10 dataset. We split the dataset uniformly --- including an equal number of samples from every class. The attacker aims to steal the dataset to build the attacker's model while evading data theft attack detection. We evaluate the TPR and trained model accuracy every 50 epochs up to 200. Since the MRA experiment results vary depending on the random seed initialization value, we repeat these experiments three times and report the average results. We also evaluate the TNR with the ResNet18 model on 10\%, 30\%, 50\%, 70\%, 90\%, and 100\% of the MNIST dataset every 5 epochs up to 20. \subheading{Efficacy.} As shown in Table~\ref{tb:MRA_results}, the results demonstrate that when the portion of the stolen dataset that is used for training is $\geq 70\%$, \textsc{DeepTaster}\xspace is capable of detecting that the newly trained suspect model contains stolen dataset intelligence.
Despite fluctuations in detection accuracy as the number of epochs varies, it is clear that the lowest TPR is above $55\%$ and reaches $81.48\%$ when $100\%$ of the dataset is used in the training. On the other hand, when the attacker steals $< 70\%$ of the dataset, we might not be able to detect the suspect model as violating the copyright. This may also mean that the attacker is not interested in the dataset intelligence as a whole, but only aims to steal a certain portion of the samples. Likewise, our \textsc{DeepTaster}\xspace can detect benign models trained on $\geq 70\%$ of the MNIST dataset as benign, with a TNR of 64.24\%. \\ \begin{table}[h] \centering \caption{MRA results for CIFAR10 meta-classifier. The copy field values below indicate the classification results (Yes indicates ``Stolen'') and (No indicates ``Benign''). {\colorbox[HTML]{BDF3BD}{green}} indicates correct classification, and {\colorbox[HTML]{FFCCC9}{red}} indicates misclassification.} \begin{tabular}{c|cccccc} \hline & \multicolumn{6}{c}{Dataset \%} \\ \cline{2-7} & \multicolumn{1}{c|}{10} & \multicolumn{1}{c|}{30} & \multicolumn{1}{c|}{50} & \multicolumn{1}{c|}{70} & \multicolumn{1}{c|}{90} & 100 \\ \cline{2-7} \multirow{-3}{*}{Epochs} & \multicolumn{6}{c}{TPR \%} \\ \hline 50 & \multicolumn{1}{c|}{0.0} & \multicolumn{1}{c|}{0.0} & \multicolumn{1}{c|}{4.17} & \multicolumn{1}{c|}{75.35} & \multicolumn{1}{c|}{56.02} & 81.48 \\ \hline 100 & \multicolumn{1}{c|}{0.0} & \multicolumn{1}{c|}{0.12} & \multicolumn{1}{c|}{34.61} & \multicolumn{1}{c|}{60.19} & \multicolumn{1}{c|}{75.35.94} & 68.40 \\ \hline 150 & \multicolumn{1}{c|}{0.0} & \multicolumn{1}{c|}{0.12} & \multicolumn{1}{c|}{34.84} & \multicolumn{1}{c|}{67.82} & \multicolumn{1}{c|}{77.54} & 80.67 \\ \hline 200 & \multicolumn{1}{c|}{0.0} & \multicolumn{1}{c|}{0.23} & \multicolumn{1}{c|}{36.69} & \multicolumn{1}{c|}{79.74} & \multicolumn{1}{c|}{63.78} & 70.14 \\ \hline Copy? & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}No} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}No} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFCCC9}No} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}Yes} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}Yes} & \cellcolor[HTML]{BDF3BD}Yes \\ \hline \end{tabular} \label{tb:MRA_results} \end{table} \observ{Retraining attacks have never been considered before due to their major manipulation of the model parameters, which presents challenges in detecting copyright infringements. \textsc{DeepTaster}\xspace shows a decent capability in detecting retraining attacks when $\geq 70\%$ of the stolen dataset is used in the training} \subsubsection{Transfer Learning Attack (TLA)} \hfill\\ \subheading{Attack Strategies.} We consider a scenario where an attacker steals the victim's ResNet18 model trained on CIFAR10 and performs transfer learning with the MNIST dataset with a learning rate of $0.1$. We observe the TPR and model accuracy every 10 epochs up to 40 epochs. The transfer-learned model accuracy increases slightly from 99.34\% to 99.49\%. For the negative suspect model, we train ResNet18 directly on MNIST with the same settings. \newline \subheading{Efficacy.} As Table~\ref{tb:tla_results} shows, the average TPR of the four TLA models with different training epochs is 86.11\% (5.22), and the average TNR of the four benign models with different training epochs is 99.57\% (0.38). There is not much difference between training epochs, and overall \textsc{DeepTaster}\xspace shows high accuracy in detecting transfer learning attacks.
\begin{table}[h] \caption{TLA results for CIFAR10 meta-classifier with four positive models and four negative models. The Copy field values below indicate the classification results (Yes indicates ``Stolen'' and No indicates ``Benign''). {\colorbox[HTML]{BDF3BD}{green}} indicates correct classification.} \centering \begin{tabular}{c|c|c|c|c} \hline \begin{tabular}[c]{@{}c@{}}ResNet\\ Model\end{tabular} & Epochs & \begin{tabular}[c]{@{}c@{}}Detection \\ Acc.\end{tabular} & \begin{tabular}[c]{@{}c@{}}Model \\ Acc.\end{tabular} & Copy? \\ \hline & 10 & 79.17 & 99.34 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} & 20 & 90.62 & 99.40 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} & 30 & 91.67 & 99.40 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} \multirow{-4}{*}{\begin{tabular}[c]{@{}c@{}}TPR\%\\ Positive \\ CIFAR10 to\\ MNIST\end{tabular}} & 40 & 82.99 & 99.49 & \multirow{-4}{*}{\cellcolor[HTML]{BDF3BD}Yes (4/4)} \\ \hline & 50 & 99.65 & 99.34 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} & \multicolumn{1}{c|}{100} & \multicolumn{1}{c|}{100} & \multicolumn{1}{c|}{99.40} & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} & \multicolumn{1}{c|}{150} & \multicolumn{1}{c|}{98.96} & \multicolumn{1}{c|}{99.40} & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} \multirow{-4}{*}{\begin{tabular}[c]{@{}c@{}}TNR\%\\ Negative\\ Only \\ MNIST\end{tabular}} & \multicolumn{1}{c|}{200} & \multicolumn{1}{c|}{99.65} & \multicolumn{1}{c|}{99.49} & \multirow{-4}{*}{\cellcolor[HTML]{BDF3BD}No (4/4)} \\ \hline \end{tabular} \label{tb:tla_results} \end{table} \observ{\textsc{DeepTaster}\xspace is fairly robust against transfer learning attacks} \begin{table*}[h] \caption{MFA results for CIFAR10 meta-classifier. The Copy field values below indicate the classification results (Yes indicates ``Stolen'' and No indicates ``Benign''). {\colorbox[HTML]{BDF3BD}{green}} indicates correct classification.} \centering \begin{tabular}{cc|cc|cl|cl|c} \hline \multicolumn{2}{l|}{} & \multicolumn{2}{c|}{500 (1\%)} & \multicolumn{2}{c|}{1000 (2\%)} & \multicolumn{2}{c|}{2500 (5\%)} & \multicolumn{1}{l}{} \\ \hline \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}ResNet18\\ Model\end{tabular}} & Epochs & \multicolumn{1}{c|}{Model Acc.} & Detection Acc. & \multicolumn{1}{l|}{Model Acc.} & Detection Acc. & \multicolumn{1}{l|}{Model Acc.} & Detection Acc. & Copy?
\\ \hline \multicolumn{1}{c|}{} & 1 & \multicolumn{1}{c|}{73.92} & 96.88 & \multicolumn{1}{c|}{73.87} & \multicolumn{1}{c|}{96.53} & \multicolumn{1}{c|}{73.91} & \multicolumn{1}{c|}{97.57} & \cellcolor[HTML]{BDF3BD} \\ \cline{2-8} \multicolumn{1}{c|}{} & 20 & \multicolumn{1}{c|}{74.05} & 95.83 & \multicolumn{1}{c|}{74.12} & \multicolumn{1}{c|}{95.49} & \multicolumn{1}{c|}{74.26} & \multicolumn{1}{c|}{94.79} & \cellcolor[HTML]{BDF3BD} \\ \cline{2-8} \multicolumn{1}{c|}{} & 40 & \multicolumn{1}{c|}{74.03} & 96.18 & \multicolumn{1}{c|}{74.27} & \multicolumn{1}{c|}{97.22} & \multicolumn{1}{c|}{74.50} & \multicolumn{1}{c|}{97.22} & \cellcolor[HTML]{BDF3BD} \\ \cline{2-8} \multicolumn{1}{c|}{\multirow{-4}{*}{\begin{tabular}[c]{@{}c@{}}TPR\%\\ Positive \\ CIFAR10\end{tabular}}} & 60 & \multicolumn{1}{c|}{74.15} & 94.79 & \multicolumn{1}{c|}{74.27} & \multicolumn{1}{c|}{96.88} & \multicolumn{1}{c|}{74.57} & \multicolumn{1}{c|}{98.26} & \multirow{-4}{*}{\cellcolor[HTML]{BDF3BD}Yes (4/4)} \\ \hline \multicolumn{1}{c|}{} & 50 & \multicolumn{1}{c|}{99.44} & \multicolumn{1}{c|}{88.19} & \multicolumn{1}{c|}{99.46} & \multicolumn{1}{c|}{89.24} & \multicolumn{1}{c|}{99.46} & \multicolumn{1}{c|}{84.72} & \cellcolor[HTML]{BDF3BD} \\ \cline{2-8} \multicolumn{1}{c|}{} & 100 & \multicolumn{1}{c|}{99.46} & \multicolumn{1}{c|}{89.24} & \multicolumn{1}{c|}{99.45} & \multicolumn{1}{c|}{90.28} & \multicolumn{1}{c|}{99.48} & \multicolumn{1}{c|}{87.50} & \cellcolor[HTML]{BDF3BD} \\ \cline{2-8} \multicolumn{1}{c|}{} & 150 & \multicolumn{1}{c|}{99.49} & \multicolumn{1}{c|}{89.93} & \multicolumn{1}{c|}{99.48} & \multicolumn{1}{c|}{89.24} & \multicolumn{1}{c|}{99.47} & \multicolumn{1}{c|}{85.07} & \cellcolor[HTML]{BDF3BD} \\ \cline{2-8} \multicolumn{1}{c|}{\multirow{-4}{*}{\begin{tabular}[c]{@{}c@{}}TNR\%\\ Negative\\ MNIST\end{tabular}}} & 200 & \multicolumn{1}{c|}{99.47} & \multicolumn{1}{c|}{92.71} & \multicolumn{1}{c|}{99.47} & \multicolumn{1}{c|}{89.58} & \multicolumn{1}{c|}{99.47} & \multicolumn{1}{c|}{85.76} & \multirow{-4}{*}{\cellcolor[HTML]{BDF3BD}No (4/4)} \\ \hline \end{tabular} \label{tb:MFA_results.} \end{table*} \subsubsection{Model Fine-tuning Attack (MFA)} \hfill\\ \subheading{Attack Strategies.} Here we consider the scenario where an attacker steals a pre-trained ResNet18 model trained on CIFAR10. They then fine-tune the stolen model using a small portion of the CIFAR10 dataset --- consisting of either 100, 500, 1000, or 2500 samples --- using a very small learning rate of 0.00005. The trained model accuracy is fairly constant across the different settings, at around 74.18\% (0.04). For the benign case, we fine-tune a pre-trained ResNet18 model trained on MNIST to measure the TNR of \textsc{DeepTaster}\xspace against MFA. The average model accuracy of the benign fine-tuned model is 99.50\% (0.01). \newline \subheading{Efficacy.} As presented in Table~\ref{tb:MFA_results.}, the average TPR across training epochs is 95.92\%, 96.53\%, and 96.96\% when the number of training samples equals 500, 1000, and 2500, respectively. It is clear that the size of the fine-tuning training set and the number of training epochs have little impact on the TPR value, as the SD of all TPR values is equal to 1.04. For the benign case, the obtained TNR with MNIST MFA models is 88.46\% (2.24). The combined ROC AUC score is 0.9256.
Therefore, despite the variations in the number of samples used in fine-tuning, \textsc{DeepTaster}\xspace is capable of detecting the model as stolen with high confidence.\\ \observ{Our \textsc{DeepTaster}\xspace is robust against MFA regardless of the size of the training dataset or the number of training epochs} \subsubsection{Model Pruning Attack (MPA)} \hfill\\ \subheading{Attack Strategies.} In this scenario an attacker prunes 20\%, 40\%, and 60\% of the parameters of the victim's ResNet18 model that is trained on the target dataset CIFAR10. The attacker then fine-tunes the model for 5 epochs with a small learning rate of 0.00005. To evaluate the TNR, we apply the same pruning and fine-tuning to the benign ResNet18 model trained on MNIST. We ensure that all MPA and benign ResNet models have decent accuracy.\newline \subheading{Efficacy.} As exhibited in Table~\ref{tb:mpa_results}, the TPR of the pruned model is 98.26\%, 84.72\%, and 85.07\% as the percentage of the parameters pruned increases from 20\% to 60\%. As more parameters were pruned, the accuracy of the model decreased slightly, and thus the TPR decreased. Nevertheless, \textsc{DeepTaster}\xspace provides a high detection TPR of 89.35\% (6.30) on average. On the other hand, the TNR is 99.31\%, 99.31\%, and 100\%, which is uniformly high regardless of how many parameters are pruned. In the case of ROC AUC, the obtained results show high performance at 0.9445. \\ \begin{table}[h] \caption{MPA results for CIFAR10 meta-classifier with three positive models and three negative models. The Copy field values below indicate the classification results (Yes indicates ``Stolen'' and No indicates ``Benign''). {\colorbox[HTML]{BDF3BD}{green}} indicates correct classification.} \begin{tabular}{c|c|c|c|c} \hline \begin{tabular}[c]{@{}c@{}}ResNet18\\ Model\end{tabular} & Prune \% & Model Acc. & Detection Acc. & Copy? \\ \hline & 20 & 71.63 & 98.26 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} & 40 & 72.10 & 84.72 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} \multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}TPR\%\\ Positive\\ CIFAR10\end{tabular}} & 60 & 68.86 & 85.07 & \multirow{-3}{*}{\cellcolor[HTML]{BDF3BD}Yes (3/3)} \\ \hline & 20 & 99.41 & 99.31 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} & 40 & 99.35 & 99.31 & \cellcolor[HTML]{BDF3BD} \\ \cline{2-4} \multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}TNR\%\\ Negative\\ MNIST\end{tabular}} & 60 & 99.18 & 100 & \multirow{-3}{*}{\cellcolor[HTML]{BDF3BD}No (3/3)} \\ \hline \end{tabular} \label{tb:mpa_results} \end{table} \observ{While an increase in the percentage of neurons pruned may result in slightly lower detection accuracy, \textsc{DeepTaster}\xspace is nevertheless still robust against MPA} \subsection{Comparison with Existing Fingerprinting Techniques} \label{sec:Comparison with Existing Fingerprinting Techniques} \begin{table*}[h!] \centering \caption{Data theft attack detection results of \textsc{DeepJudge}\xspace, the leading state-of-the-art fingerprinting technique. DeepJudge uses majority voting, where 3 out of 4 metrics have to produce values $<$ threshold to support the final judgement of being stolen.
{\colorbox[HTML]{BDF3BD}{green}} indicates correct classification, and {\colorbox[HTML]{FFCCC9}{red}} indicates misclassification.} \begin{tabular}{clccccccc} \hline \multicolumn{1}{c|}{Victim} & \multicolumn{1}{c|}{Ground Truth} & \multicolumn{1}{c|}{Suspect} & \multicolumn{1}{c|}{Metric1} & \multicolumn{1}{c|}{Metric2} &\multicolumn{1}{c|}{Metric3}& \multicolumn{1}{c|}{Metric4} & \multicolumn{1}{c|}{Copy?} & \multicolumn{1}{l}{TPR / TNR (\%)} \\ \hline\hline \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{\textbf{Threshold}} & \multicolumn{1}{c|}{\textbf{1.79}} & \multicolumn{1}{c|}{\textbf{6.14}} & \multicolumn{1}{c|}{\textbf{6.89}} & \multicolumn{1}{c|}{\textbf{3.01}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{ } \\ \cline{2-9} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{CIFAR10} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}Yes} & \multicolumn{1}{c}{} \\ \cline{3-7} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{CIFAR10 DAA} & \multicolumn{1}{c|}{0.0019} & \multicolumn{1}{c|}{0.1111} & \multicolumn{1}{c|}{0.2370} & \multicolumn{1}{c|}{0.2828} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}Yes} & \multicolumn{1}{c}{} \\ \cline{3-7} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{CIFAR10 MFA} & \multicolumn{1}{c|}{\textcolor{red}{7.7778}} & \multicolumn{1}{c|}{0.0004} & \multicolumn{1}{c|}{0.0} & \multicolumn{1}{c|}{0.0} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}Yes} & \multicolumn{1}{c}{} \\ \cline{3-7} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{CIFAR10 MPA} & \multicolumn{1}{c|}{0.0093} & \multicolumn{1}{c|}{0.0377} & \multicolumn{1}{c|}{0.0593} & \multicolumn{1}{c|}{0.0551} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}Yes} & \multicolumn{1}{c}{} \\ \cline{3-7} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{CIFAR10 TLA} & \multicolumn{1}{c|}{0.0032} & \multicolumn{1}{c|}{0.0135} & \multicolumn{1}{c|}{0.1185} & \multicolumn{1}{c|}{0.0111} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}Yes} & \multicolumn{1}{c}{} \\ \cline{3-7} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\multirow{-6}{*}{Stolen}} & \multicolumn{1}{c|}{CIFAR10 MRA} & \multicolumn{1}{c|}{0.0019} & \multicolumn{1}{c|}{0.0123} & \multicolumn{1}{c|}{0.4988}& \multicolumn{1}{c|}{0.5477} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}Yes} & \multicolumn{1}{c}{\multirow{-6}{*}{100 (TPR)}} \\ \cline{2-9} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{MNIST} & \multicolumn{1}{c|}{\textcolor{red}{0.0034}} & \multicolumn{1}{c|}{\textcolor{red}{0.0183}} & \multicolumn{1}{c|}{\textcolor{red}{0.1541}} & \multicolumn{1}{c|}{\textcolor{red}{0.2558}} & \multicolumn{1}{c|}{\cellcolor[rgb]{1,0.8,0.788}Yes} & \multicolumn{1}{c}{} \\ \cline{3-7} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{MNIST DAA} & \multicolumn{1}{c|}{\textcolor{red}{0.0287}} & \multicolumn{1}{c|}{\textcolor{red}{0.1173}} & \multicolumn{1}{c|}{\textcolor{red}{1.5526}} & \multicolumn{1}{c|}{\textcolor{red}{1.4548}} & \multicolumn{1}{c|}{\cellcolor[rgb]{1,0.8,0.788}Yes} & \multicolumn{1}{c}{} \\ \cline{3-7} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{MNIST MFA} & \multicolumn{1}{c|}{\textcolor{red}{0.0034}} & \multicolumn{1}{c|}{\textcolor{red}{0.0183}} & \multicolumn{1}{c|}{\textcolor{red}{0.1541}} & \multicolumn{1}{c|}{\textcolor{red}{0.2558}} & \multicolumn{1}{c|}{\cellcolor[rgb]{1,0.8,0.788}Yes} 
& \multicolumn{1}{c}{} \\ \cline{3-7} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{MNIST MPA} & \multicolumn{1}{c|}{\textcolor{red}{0.0018}} & \multicolumn{1}{c|}{\textcolor{red}{0.0121}} & \multicolumn{1}{c|}{\textcolor{red}{0.4991}} & \multicolumn{1}{c|}{\textcolor{red}{0.5804}} & \multicolumn{1}{c|}{\cellcolor[rgb]{1,0.8,0.788}Yes} & \multicolumn{1}{c}{} \\ \cline{3-7} \multicolumn{1}{c|}{\multirow{-12}{*}{CIFAR10}} & \multicolumn{1}{c|}{\multirow{-5}{*}{Benign}} & \multicolumn{1}{c|}{Tiny-ImageNet} & \multicolumn{1}{c|}{\textcolor{red}{0.0094}} & \multicolumn{1}{c|}{\textcolor{red}{0.0369}} & \multicolumn{1}{c|}{\textcolor{red}{1.3261}}& \multicolumn{1}{c|}{\textcolor{red}{1.2877}} & \multicolumn{1}{c|}{\cellcolor[rgb]{1,0.8,0.788}Yes} & \multicolumn{1}{c}{\multirow{-5}{*}{0 (TNR)}} \\ \hline\hline \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{\textbf{Threshold}} & \multicolumn{1}{c|}{\textbf{0.45}} & \multicolumn{1}{c|}{\textbf{6.74}} & \multicolumn{1}{c|}{\textbf{1.03}} & \multicolumn{1}{c|}{\textbf{3.65}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \cline{2-9} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Stolen} & \multicolumn{1}{c|}{MNIST} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}Yes} & \multicolumn{1}{c}{100 (TPR)} \\ \cline{2-9} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{CIFAR10} & \multicolumn{1}{c|}{\textcolor{red}{0.0222}} & \multicolumn{1}{c|}{\textcolor{red}{0.1029}} & \multicolumn{1}{c|}{1.7333} & \multicolumn{1}{c|}{\textcolor{red}{2.7778}} & \multicolumn{1}{c|}{\cellcolor[rgb]{1,0.8,0.788}Yes} & \multicolumn{1}{c}{} \\ \cline{3-7} \multicolumn{1}{c|}{\multirow{-4}{*}{MNIST}} & \multicolumn{1}{c|}{\multirow{-2}{*}{Benign}} & \multicolumn{1}{c|}{Tiny-ImageNet} & \multicolumn{1}{c|}{\textcolor{red}{0.0352}} & \multicolumn{1}{c|}{\textcolor{red}{0.1414}} & \multicolumn{1}{c|}{\textcolor{red}{0.4444}} & \multicolumn{1}{c|}{\textcolor{red}{0.7889}} & \multicolumn{1}{c|}{\cellcolor[rgb]{1,0.8,0.788}Yes} & \multicolumn{1}{c}{\multirow{-2}{*}{0 (TNR)}} \\ \hline\hline \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{\textbf{Threshold}} & \multicolumn{1}{c|}{\textbf{0.0012}} & \multicolumn{1}{c|}{\textbf{0.0053}} & \multicolumn{1}{c|}{\textbf{0.8444}} & \multicolumn{1}{c|}{\textbf{0.8444}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \cline{2-9} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Stolen} & \multicolumn{1}{c|}{Tiny-ImageNet} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}Yes} & \multicolumn{1}{c}{100 (TPR)} \\ \cline{2-9} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{CIFAR10} & \multicolumn{1}{c|}{\textcolor{red}{0.0012}} & \multicolumn{1}{c|}{\textcolor{red}{0.0074}} & \multicolumn{1}{c|}{1.4444} & \multicolumn{1}{c|}{1.4444} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}No} & \multicolumn{1}{c}{} \\ \cline{3-7} \multicolumn{1}{c|}{\multirow{-4}{*}{Tiny-ImageNet}} & \multicolumn{1}{c|}{\multirow{-2}{*}{Benign}} & \multicolumn{1}{c|}{MNIST} & \multicolumn{1}{c|}{0.0031} & \multicolumn{1}{c|}{0.0126} & \multicolumn{1}{c|}{\textcolor{red}{0.1556}} & \multicolumn{1}{c|}{\textcolor{red}{0.1556}} & \multicolumn{1}{c|}{\cellcolor[HTML]{BDF3BD}No} & \multicolumn{1}{c}{\multirow{-2}{*}{100 (TNR)}} \\ \hline \end{tabular} \label{tb:comparison} \end{table*} \subheading{Comparison Settings.} We conduct experimental 
comparisons with \textsc{DeepJudge}\xspace~\cite{chen2021copy}, the leading fingerprinting technique. \textsc{DeepJudge}\xspace generates four metrics for white-box evaluation and two metrics for black-box evaluation. It uses majority voting, where 3 out of 4 metrics have to produce values $<$ threshold to support the final judgement of being stolen. \textsc{DeepJudge}\xspace has been designed to provide architecture-dependent protection --- namely, all model parameters need to be the same, including the number of classes. On the other hand, our \textsc{DeepTaster}\xspace is designed to be architecture-agnostic, to enable the dataset intelligence to be tracked even when the model architecture is changed. Hence, \textit{\textsc{DeepJudge}\xspace is not able to detect MAA as stolen models owing to its design limitation}. Therefore, we compare \textsc{DeepJudge}\xspace versus our \textsc{DeepTaster}\xspace on the other five attacks in addition to direct cloning. For the transfer learning attack, we observe that \textsc{DeepJudge}\xspace only considered the case where the original model and the transfer-learned model have the same number of classes, which might not always hold. Thus, we utilize the scripts released with \textsc{DeepJudge}\xspace and apply small modifications to run the data augmentation and transfer learning attacks when the number of classes is different, in order to ensure a fair comparison. We set the target model of \textsc{DeepJudge}\xspace as a ResNet18 model trained on CIFAR10, MNIST, and Tiny-ImageNet. For the CIFAR10 dataset, we test with five attack models, using a ResNet18 model trained on MNIST and Tiny-ImageNet as the ``Benign'' cases. In the other cases, we only check the TNR for two negative cases trained on other datasets. We use the thresholds of \textsc{DeepJudge}\xspace as mentioned in their paper~\cite{chen2021copy} for CIFAR10 and MNIST. In the case of Tiny-ImageNet, we set the threshold with a retrained model on Tiny-ImageNet, following the method used in their paper. We carefully measure the TPR and TNR of \textsc{DeepJudge}\xspace versus \textsc{DeepTaster}\xspace. Note that we only use the four white-box metrics of \textsc{DeepJudge}\xspace for a fair comparison.\newline \subheading{Results.} At first glance, \textsc{DeepJudge}\xspace seems to perform well in detecting data theft attacks (see the ``Stolen'' cases in Table~\ref{tb:comparison}). However, \textsc{DeepJudge}\xspace was ineffective in detecting the ``Benign'' cases. For all ``Benign'' cases with CIFAR10 and MNIST as the victim datasets, \textsc{DeepJudge}\xspace recognized them as ``Stolen.'' Only for Tiny-ImageNet as the victim dataset did \textsc{DeepJudge}\xspace successfully recognize the benign models trained on CIFAR10 and MNIST as ``Benign.'' Still, two metric values out of four were contained in the ranges of ``Stolen.'' \observ{Our \textsc{DeepTaster}\xspace is the only IP protection mechanism that can detect MAA attacks, owing to its design requirement to be architecture-agnostic, and it maintains a high TNR where \textsc{DeepJudge}\xspace suffers from a low one} \subsection{Generalisation Efficacy}\label{sec:generalisation} To examine the generalisability of \textsc{DeepTaster}\xspace, we conduct evaluations using two other datasets: Tiny-ImageNet and MNIST. For Tiny-ImageNet, we experiment with the four most challenging attack scenarios: MAA, DAA, MRA, and MFA. Both MAA and DAA examine the robustness when the model architecture is changed or the number of output classes is altered. For MAA, we use DenseNet, ResNet, and VGG.
For DAA, the attacker starts with a model that is pre-trained on the 100-class Tiny-ImageNet dataset. The attacker then creates a dataset of 110 classes by adding 10 more classes of Tiny-ImageNet that were not used in the initial victim model training, and fine-tunes the model on the new 110-class dataset. The other two attacks, MRA and MFA, investigate the resilience of \textsc{DeepTaster}\xspace against retraining and fine-tuning. Figure~\ref{fig:tiny_ImageNet} shows all the TPR and TNR results for Tiny-ImageNet. It is clear that our \textsc{DeepTaster}\xspace is robust against all attacks and is able to detect them with high TPR and TNR. The only exception is when MRA is executed using a low percentage ($<50\%$) of the dataset, in which case our \textsc{DeepTaster}\xspace could not detect it as ``Stolen'' (which is consistent with Remark 3). Similarly, for the MNIST dataset, we conduct experiments to further examine the generalisability and resilience of our \textsc{DeepTaster}\xspace on a diverse range of datasets. For DAA, MNIST has only the 10 digit classes from zero to nine, so we take a pre-trained MNIST model and augment the MNIST samples within each class using EMNIST, another dataset from the same domain. The results of the four attacks are shown in Figure~\ref{fig:MNIST}. Despite changing the target dataset to be tracked to MNIST, our \textsc{DeepTaster}\xspace is in general robust against all the attacks, with high TPR and TNR. Our retraining attack results are consistent with the earlier finding that when a low percentage of the dataset is used, our \textsc{DeepTaster}\xspace could not identify the retrained model as ``Stolen.'' Still, our TNR is very high, which indicates that identifying the benign cases poses no challenge. \begin{figure}[h!] \centering \begin{tabular}{cc} \includegraphics[width=1\linewidth]{figure/tiny_ImageNet.pdf}& \end{tabular} \caption{Performance of the meta-classifier for Tiny-ImageNet against MAA with DenseNet (DN), ResNet (RN), and VGG, DAA in two versions, from scratch (V1) and with the pre-trained victim model (V2), MRA by dataset size, and MFA by number of data samples.} \label{fig:tiny_ImageNet} \end{figure} \begin{figure}[h!] \centering \begin{tabular}{cc} \includegraphics[width=\linewidth]{figure/MNIST.pdf}& \end{tabular} \caption{Performance of the meta-classifier for MNIST against MAA with DenseNet (DN), ResNet (RN), and VGG, DAA in two versions, from scratch (V1) and with the pre-trained victim model (V2), MRA by dataset size, and MFA by number of data samples.} \label{fig:MNIST} \end{figure} \section{Discussion} \label{sec:Discussion} \subheading{Has \textsc{DeepTaster}\xspace met our target design requirements?} The obtained results demonstrate that \textsc{DeepTaster}\xspace has met the four design requirements defined in Section~\ref{sec:system_design}. Firstly, to meet the robustness criterion, our \textsc{DeepTaster}\xspace demonstrated its ability to capture the dataset ownership IP and to remain resilient even to model architecture changes or changes in the number of model output classes as a form of attack. Secondly, to meet the fidelity criterion, our IP protection mechanism has zero impact on the model accuracy due to its design as a fingerprinting technique rather than an invasive watermarking method.
Thirdly, to meet the efficacy criterion, \textsc{DeepTaster}\xspace has exhibited high detection accuracy across six attacks with reliable TPR and TNR. Lastly, to meet the efficiency criterion, we conducted further experiments to investigate \textit{what is the minimum number of adversarial DFT samples needed at inference to detect an attack?} In this experiment, the CIFAR10 meta-classifier was used against 69 suspect models. For the positive cases, we use a combination of 37 models. This includes three CIFAR10 models with the MPA attack, 12 CIFAR10 models with the MFA attack, six CIFAR10 models with the DAA attack (three with DAA from scratch, three with DAA pretrained), three CIFAR10 models with the TLA attack, and 12 CIFAR10 models with MRA (70\%, 90\%, and 100\% of the dataset was used). For the negative cases, we use a combination of 32 models. This includes three ImageNet benign models with different architectures, four MNIST benign models with various epochs, six MNIST benign models with DAA, 16 MNIST benign models with MFA, and three MNIST benign models with MPA. We found that with \textit{only three adversarial DFT samples}, our \textsc{DeepTaster}\xspace could distinguish the positive (stolen) models from the negative (benign) ones.\newline \subheading{How does the meta-classifier threshold impact the \textsc{DeepTaster}\xspace model's efficacy?} As stated in Section~\ref{sec:meta_classifier}, we define the threshold of the data classifier such that 4\% of the samples in the victim dataset validation set lie above the threshold (or equivalently, 96\% lie below); a minimal sketch of this rule follows below. To determine this figure, we experimented with how the performance of three different meta-classifiers depends on the threshold. In the experiment, nine models spanning three datasets and three model architectures were used, together with three meta-classifiers, one targeting each dataset, and the performance index was set to balanced accuracy. Figure~\ref{fig:threshold_performance} shows the change in performance of each meta-classifier when the threshold value is varied from having the top 1\% of the samples lying above the threshold to the top 10\%. As the threshold value decreases, the performance of the three classifiers tends to increase and then decrease. Across the three meta-classifiers, the highest performance, 94.52\%, is attained when the threshold is set at the top 4\%. Accordingly, we conducted all our experiments by setting the threshold of all meta-classifiers at the top 4\%. As our meta-classifier is a one-class classifier, this threshold is selected independently of the true-negative cases. \newline \begin{figure}[h!] \centering \begin{tabular}{cc} \includegraphics[width=.9\linewidth]{figure/threshold_vs_performance.pdf}& \end{tabular} \caption{Threshold vs. performance.} \label{fig:threshold_performance} \end{figure}
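
A minimal numpy sketch of this threshold rule (the score file name is hypothetical):

\begin{verbatim}
import numpy as np

# Pick the data-classifier threshold so that the top 4% of the victim
# validation samples lie above it (equivalently, 96% lie below).
scores = np.load("victim_validation_scores.npy")  # assumed score file
threshold = np.quantile(scores, 0.96)
suspicious = scores > threshold                   # top 4% flagged
\end{verbatim}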
\subheading{How do the meta-classifier training dataset size and image dimensions impact the \textsc{DeepTaster}\xspace model's efficacy?} The performance of the meta-classifier may depend on the size of the adversarial DFT training dataset. Generally, a larger training dataset might facilitate producing a higher-performing model; in our case, however, generating a large adversarial DFT dataset means a higher time cost. To obtain a model balanced between TPR and TNR, we test the relationship between performance and dataset size. The training dataset is generated at various sizes: 2400, 4800, 7200, and 9600 images. These four datasets are generated from ImageNet VGG, ResNet, and DenseNet models, and we use the same set of adversarial DFT samples for consistency. The BA of the meta-classifier is 97.28\%, 98.50\%, 96.28\%, and 96.82\% when the training dataset size is 2400, 4800, 7200, and 9600, respectively. We choose the training dataset size of 4800, as it produces the best performance. We also observe that the performance of our \textsc{DeepTaster}\xspace can vary depending on the adversarial image dimensions. The smaller the adversarial image, the less of the dataset-specific perturbation it can capture, and the lower the performance of \textsc{DeepTaster}\xspace might be. If the size of the image is $32\times32\times3$, the model exhibits almost indistinguishable performance, but if the size of the image is $224\times224\times3$, as currently used in the experimental setting, the detection performance is high. Therefore, we recommend generating large-dimensional adversarial images when using \textsc{DeepTaster}\xspace; the DFT step itself is sketched below.\newline
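
A minimal numpy sketch of the DFT step on an adversarial image (the log-magnitude choice and file name are assumptions on our part, not necessarily the exact released pipeline):

\begin{verbatim}
import numpy as np

# Transform an adversarial image into the Fourier domain; the
# meta-classifier consumes such DFT images rather than raw pixels.
adv = np.load("adversarial_image.npy")          # assumed (224, 224, 3)
spec = np.fft.fftshift(np.fft.fft2(adv, axes=(0, 1)), axes=(0, 1))
dft_image = np.log1p(np.abs(spec))              # log-magnitude spectrum
\end{verbatim}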
\subheading{Model IP vs. Data IP.} Existing IP verification approaches for DNNs (e.g.,~\cite{lukas2019deep,cao2021ipguard,chen2021copy,Jingjing2020AFA}) typically focus on a model's explicit properties, such as the parameters or weights of a DNN model, namely the \textit{model IP}, which are model dependent. Therefore, those existing DNN fingerprinting techniques would be ineffective in detecting data theft attacks. From the experimental results in the previous work~\cite{chen2021copy} and our work (see Section~\ref{sec:Comparison with Existing Fingerprinting Techniques}), we confirmed that \textsc{DeepJudge}\xspace recognized the cases in which different model architectures are built on the same dataset as separate models. In contrast, we consider a model's implicit properties representing the knowledge learned from the training data, namely the \textit{data IP}, which are training-dataset dependent. Instead of examining individual neuron-level metrics, which are model dependent, we try to find the context features learned from adversarial examples in terms of the decision boundaries of a model, which are more strongly affected by the training dataset than by the model itself. Even though we use different model architectures, we can obtain adversarial examples having statistically similar patterns in the DFT domain from each model architecture if we use the same dataset for training those models. Consequently, \textsc{DeepTaster}\xspace can effectively detect data theft attacks even when attackers use a model architecture different from a victim's.\newline \subheading{Adaptive Attacks.} In our experimental evaluations across the six targeted attacks, we extensively examined various adaptive attack strategies that could be employed by the attacker to evade detection of dataset intelligence stealing. These include changing the model architecture in MAA, altering the number of classes in DAA and TLA, tuning the parameters in both MFA and MPA, and altering the proportion of the dataset used in the MRA. In all these cases, our \textsc{DeepTaster}\xspace exhibits a robust ability to detect the stolen dataset IP, while being able to recognise the benign cases. The only exception is that when the attacker uses a small proportion of the dataset in the retraining attack, our \textsc{DeepTaster}\xspace might not be able to flag the new model as stolen. This is arguably acceptable behavior, as we are tracking the dataset intelligence, and stealing that intelligence would require the attacker to use a large proportion of it ($\geq 70\%$) to be able to reproduce its utility. We next discuss \textit{what adaptive strategies the attacker could employ once our defence mechanism is released}. To mitigate that risk, our \textsc{DeepTaster}\xspace relies on two key pillars: the adversarial perturbation generation/transformation and the meta-classifier. While we make no assumption about the adversarial generation, we assume the meta-classifier pipeline, including its parameters, training configurations, and thresholds, to be confidential and secure. It should only be accessible by the model owner or during verification within a secure environment.\newline \subheading{Complexity Evaluation.} To calculate the complexity of \textsc{DeepTaster}\xspace, we measure the time it takes to distinguish the suspect model using the meta-classifier. As summarised in Table~\ref{tb:complexity}, there are three steps. For step 1, the adversarial perturbation generation and transformation, the time is around 769 seconds (0.35 seconds per image). For step 2, training the meta-classifier, the time taken is around 766.8 seconds. Note that steps 1 and 2 are one-off tasks for model development and are not repeated for every verification; only a few adversarial DFT samples are needed for verification. For step 3, suspect model verification, the time is around 9.78 seconds (0.0339 seconds per image). The total time for generation, training, and verification is about 1,546.44 seconds. This is reasonable considering that \textsc{DeepJudge}\xspace takes a total of 1,937.79 seconds.\newline \begin{table}[h] \centering \caption{Complexity evaluation for \textsc{DeepTaster}\xspace.} \begin{tabular}{c|c|c} \hline Step & Task & Time (Sec) \\ \hline 1 & Adversarial DFT Generation & 0.3538 (per image) \\ \hline 2 & Meta-classifier Training & 766.8 \\ \hline 3 & Suspect Model Verification & 0.0339 (per image) \\ \hline \end{tabular} \label{tb:complexity} \end{table} \subheading{Limitations and Future Work.} Despite the robust efficacy of \textsc{DeepTaster}\xspace against the six attacks and existing works, we acknowledge the following limitations of our current design. \begin{itemize}[leftmargin=*] \item Reference models for the meta-classifier. We found that the adversarial DFT images contain both the dataset and the model architecture information, which are entangled together. The main focus of this work is to track the IP of datasets across architectures. We observe that if the suspect model architecture is previously unseen by the meta-classifier, it may result in a higher false positive rate for detection. Therefore, ideally, the dataset information could be separated from the architecture information, to achieve better detection performance for unseen model architectures. We currently intend to reduce the architecture impact by using a larger variety of architectures as the reference models (victim models) for training the meta-classifier. A future research direction may be to propose disentanglement strategies to separate the architecture information in the DFT images. \item White-box design. To evaluate the suspect model, \textsc{DeepTaster}\xspace needs access to the suspect model to be able to generate adversarial DFT samples against that model before examining them with our meta-classifier.
A possible future direction could be to explore ways to generate adversarial samples in a black-box setting to provide more flexibility and generalization. \item Sensitivity of detecting retrained models. The detection performance of \textsc{DeepTaster}\xspace for models retrained on $\geq 70\%$ of the stolen dataset is robust, over 71.37\% in general. However, when the percentage of the stolen dataset drops to 30\% or 20\%, the detection accuracy is reduced to 1--2\%. In the future, we may propose an adaptive threshold for increasing the sensitivity of the data IP detection performance. \end{itemize} \section{Related Work} \subsection{DNN Watermarking} The first stream of related work uses watermarking to protect the copyright of DNN models~\cite{uchida2017embedding,adi2018turning,zhang2018protecting,darvish2019deepsigns,le2020adversarial,jia2021entangled}. As in classical multimedia watermarking, DNN watermarking includes two stages: \textit{embedding} and \textit{verification}. In the \textit{embedding} stage, the DNN model owner inserts a secret watermark (e.g., a signature or a trigger) into the model during the training phase. Existing watermarking techniques can be categorised as either \textit{white-box} or \textit{black-box}, based on how much knowledge is available during the \textit{verification} stage. White-box techniques assume the model parameters are available~\cite{uchida2017embedding,darvish2019deepsigns,wang2022integrity}. They insert a string of bits (a signature) into the model parameter space via several regularization terms. Ownership of the IP can be claimed when the string of bits retrieved from the suspect model matches the owner's signature. Black-box techniques only have access to model predictions during verification. They leverage backdoor attacks~\cite{gu2019badnets} to embed a watermark (backdoor samples) into the ownership model during the training process, where the class of each backdoor sample is relabelled to a secret class~\cite{le2020adversarial,zhang2018protecting}. Ownership can be verified by querying the suspect model using the pre-defined backdoor samples and receiving the correct secret class for each sample. \subsection{DNN Fingerprinting} DNN fingerprinting mechanisms have recently been introduced as an alternative approach to verifying model ownership via two stages, called fingerprint extraction and verification. Fingerprinting methods~\cite{IPGuard,Jingjing2020AFA,lukas2019deep,chen2021copy} are all \textit{black-box} techniques. They are \textit{non-invasive}, as opposed to watermarking techniques, which are \textit{invasive}. Rather than altering the training process to inject a watermark, fingerprinting directly retrieves a unique property/feature of the owner's model as its fingerprint. Ownership can then be validated if the fingerprint matches the one extracted from the suspect model. In general, there are two streams of work under this category: \textit{single} and \textit{multiple} fingerprinting. Single fingerprinting uses one feature/property as an identifier. For example, IPGuard~\cite{IPGuard} uses data points close to the model's decision boundaries as that identifier. Lukas et al.~\cite{lukas2019deep} propose conferrable adversarial examples that transfer a target label from a source model to its stolen model, and use them as a model identifier. Multiple fingerprinting leverages multiple features/metrics as a fingerprint to handle different types of model stealing and adaptive attacks.
For instance, \textsc{DeepJudge}\xspace~\cite{chen2021copy} recently introduced a multi-level metrics mechanism that can be used as a unique IP identifier between owner and stolen models. Although the above streams protect the model IP with high performance, they suffer from two main limitations. Firstly, they are architecture-dependent by design. For example, training three different DNNs on the same (stolen) dataset cannot be identified as an IP violation, even though all three models absorb the same dataset ownership IP. Secondly, due to being architecture-dependent, they struggle to detect transfer learning attacks. For instance, if a pre-trained DNN is stolen and used for transfer learning to a different domain, this cannot be tracked as stolen IP. In other words, they cannot track the dataset ownership IP obtained from a dataset across various architectures. Therefore, we propose \textsc{DeepTaster}\xspace, a dataset ownership IP tracking technique that is robust against six attacks. \section*{Acknowledgments}
\section{Introduction and background} The closeness of the $5'$ and $3'$ ends of RNA molecules has distinct biological significance, for instance for the replication efficiency of single-stranded RNA viruses or the efficient translation of messenger RNA molecules. It is speculated in \citep{Yoffe} that this effective circularization of large RNA molecules is rather a generic phenomenon of large RNA molecules, independent of sequence length. It is to a large extent attributed to the high number of paired bases. In this paper we study the distribution of $5'$-$3'$ distances in RNA secondary structures. We first compute the distribution of $5'$-$3'$ distances of RNA secondary structures of length $n$ by means of a bivariate generating function. The key idea is to view secondary structures as tableaux sequences and to relate the $5'$-$3'$ distance to the nontrivial returns \citep{Emma:decom} of the corresponding path of shapes. Secondly, we derive the limit distribution of $5'$-$3'$ distances. The idea is to compute the singular expansion of the above generating function via the subcritical paradigm \citep{Flajolet:07a} and to employ a discrete limit theorem. Our results prove that the $5'$-$3'$ distances of random RNA structures are distinctively smaller than those of biological RNA molecules and minimum free energy (mfe) RNA structures. This comes as a surprise, since the fraction of paired bases in random structures is $55.2\%$ \citep{Reidys:book} and therefore smaller than the $60\%$ of mfe structures \citep{Schuster:93}. An RNA structure is the helical configuration of its primary sequence, i.e.~the sequence of nucleotides {\bf A}, {\bf G}, {\bf U} and {\bf C}, together with Watson-Crick ({\bf A-U}, {\bf G-C}) and ({\bf U-G}) base pairs. The combinatorics of RNA secondary structures has been pioneered by Waterman \citep{Penner:93c,Waterman:78a,Waterman:79a,Waterman:80,Waterman:94a}. We interpret an RNA secondary structure as a diagram, i.e.~a labeled graph over the vertex set $[n]=\{1, \dots, n\}$, represented by drawing its vertices $1,\dots,n$ on a horizontal line and connecting them via the set of backbone-edges $\{(i,i+1)'\mid 1\le i\le n-1\}$. Besides its backbone edges, a diagram exhibits arcs, $(i,j)$, that are drawn in the upper half-plane. Note that an arc of the form $(i,i+1)$, a $1$-arc, is distinguished from the backbone edge $(i,i+1)'$. However, no confusion can arise, since an RNA secondary structure is a diagram having no $1$-arcs and only noncrossing arcs in the upper half-plane, see Fig.\ \ref{F:secon}.\\ The $5'$-$3'$ distance of an RNA secondary structure is the minimal length of a path of the diagram. Such a diagram-path consists of arcs and backbone-edges, see Fig.\ \ref{F:distance0}. The paper is organized as follows: In Section~\ref{S:prelim} we discuss some basic facts, in particular the structure-tableaux correspondence and how to express the $5'$-$3'$ distance via such tableaux sequences. In Section~\ref{S:combi} we compute ${\bf W}(z,u)$, the bivariate generating function of RNA secondary structures of length $n$ having distance $d$. Section~\ref{S:singular} contains the computation of the singular expansion of ${\bf W}(z,u)$ and in Section~\ref{S:limit} we combine our results and derive the limit distribution. We finally discuss our results in Section~\ref{S:discuss}. \section{Preliminaries}\label{S:prelim} Let $\mathscr{S}_n$ denote the set of RNA secondary structures of length $n$; a generic such structure is denoted by $\sigma_n$.
All results of this paper easily generalize to the case of diagrams with noncrossing arcs that contain no arcs of length smaller than $\lambda>1$, and to canonical secondary structures \citep{Reidys:book}, i.e.~structures that contain no isolated arcs. The distance of $\sigma_n$, $d_n(\sigma_n)$, is the minimum length of a path consisting of $\sigma_n$-arcs and backbone-edges from vertex $1$ (the $5^{'}$ end) to vertex $n$ (the $3^{'}$ end). That is, we have the mapping $ d_n\colon \mathscr{S}_n\longrightarrow \mathbb{N}$. A sequence of shapes $(\lambda_0, \lambda_1, \ldots, \lambda_n)$ is called a $1$-tableau of length $n$, $T_n$, if all shapes contain only one row of squares and (a) $\lambda_0=\lambda_n =\varnothing$, (b) $\lambda_{i+1}$ is obtained from $\lambda_i$ by adding a square ($+\Box$), removing a square ($-\Box$) or doing nothing ($\varnothing$) and (c) no $+\Box$-step is immediately followed by a $-\Box$-step. Let $\mathscr{T}_n$ denote the set of all $1$-tableaux of length $n$. We come next to the tableaux interpretation of secondary structures. The underlying correspondence is an immediate consequence of \citep{Reidys:vac07,Chen,Reidys:07pseu}. We shall subsequently express the $5'$-$3'$ distance via $1$-tableaux. \begin{proposition}\citep{Reidys:07pseu} There exists a bijection between RNA secondary structures and $1$-tableaux: \begin{equation} \beta_n\colon \mathscr{S}_n\longrightarrow \mathscr{T}_n. \end{equation} \end{proposition} \begin{proof} Given $\sigma_n$, we consider the sequence $(n,n-1,\dots,1)$ and, starting with $\varnothing$, do the following:\\ $\bullet$ if $j$ is the endpoint of an arc $(i,j)$, we add one square,\\ $\bullet$ if $j$ is the start point of an arc $(j, s)$, we remove one square,\\ $\bullet$ if $j$ is an isolated point, we do nothing.\\ This constructs a $1$-tableau of length $n$ and thus defines the map $\beta_n$. Conversely, given a $1$-tableau $T_n$, $(\varnothing,\lambda^1,\dots, \lambda^{n-1},\varnothing)$, reading $\lambda^i\setminus\lambda^{i-1}$ from left to right, at step $i$, we do the following:\\ $\bullet$ for a $+\square$-step at $i$ we insert $i$ into the new square,\\ $\bullet$ for a $\varnothing$-step we do nothing,\\ $\bullet$ for a $-\square$-step at $i$ we extract the entry $j(i)$ of the rightmost square. The latter extractions generate the arc-set $\{(i,j(i))\mid i \;\text{\rm is a $-\square$-step}\}$, which by definition of $T_n$ contains no $1$-arcs. Thus this procedure generates a secondary structure of length $n$ without $1$-arcs, which, by construction, is the inverse of $\beta_n$, and the proposition follows. \end{proof} A secondary structure $\sigma_n$ is irreducible if $\beta(\sigma_n)$ is a sequence of shapes $(\lambda_0,\dots,\lambda_{n})$ such that $\lambda_j\neq \varnothing$ for $1\le j<n$. An irreducible substructure of $\sigma_n$ is a subsequence $(\lambda_i, \dots, \lambda_{i+k})$ such that $\lambda_{i-1}=\varnothing$, $\lambda_{i+k}= \varnothing$ and $\lambda_j\neq \varnothing$ for $i\le j<i+k$. In the following we denote the terminal shapes ($\lambda_{i+k}$) of non-rightmost irreducibles by $\varnothing^*$ and the terminal shape of the rightmost irreducible by $\varnothing^{\#}$. Accordingly, we distinguish three types of shapes, $\varnothing,\varnothing^*$ and $\varnothing^{\#}$. We can now express the distance in terms of the numbers of $\varnothing^*$ and $\varnothing$ shapes as follows: \begin{equation} d_n(\sigma_n) = 2 \, \vert \{\varnothing^*\in \beta(\sigma_n)\}\vert + \vert \{\varnothing\in \beta(\sigma_n)\}\vert. \end{equation}
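
For illustration, the following small Python sketch (an illustration of ours, not part of the formal development) computes $d_n$ directly as a shortest path in the diagram, together with the profile of shape sizes $\vert\lambda_j\vert$ read off as in the proof of the proposition above:

\begin{verbatim}
from collections import deque

# 5'-3' distance as a shortest path over backbone edges and arcs,
# and the profile of shape sizes read from j = n down to 1.
def distance(n, arcs):
    adj = {v: [] for v in range(1, n + 1)}
    for i in range(1, n):
        adj[i].append(i + 1); adj[i + 1].append(i)
    for i, j in arcs:
        adj[i].append(j); adj[j].append(i)
    dist, queue = {1: 0}, deque([1])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist[n]

def profile(n, arcs):
    starts = {i for i, j in arcs}
    ends = {j for i, j in arcs}
    sizes, s = [0], 0
    for j in range(n, 0, -1):      # read n, n-1, ..., 1
        s += (j in ends) - (j in starts)
        sizes.append(s)
    return sizes                   # zeros delimit irreducibles

# Example: arcs (2,6), (3,5) on n = 8; the path 1-2-6-7-8 has length 4.
print(distance(8, [(2, 6), (3, 5)]))   # 4
print(profile(8, [(2, 6), (3, 5)]))    # [0, 0, 0, 1, 2, 2, 1, 0, 0]
\end{verbatim}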
\section{Combinatorial analysis}\label{S:combi} Let ${\mathbf w}(n,d)$ denote the number of RNA secondary structures $\sigma_n$ having distance $d_n$. In the following we shall write $d$ instead of $d_n$ and consider \begin{equation} {\mathbf W}(z,u)=\sum_{n\geq 0}\sum_{d \geq 0}{\bf w}(n,d)\,z^n u^d, \end{equation} the bivariate generating function of the number of RNA secondary structures of length $n$ having distance $d$, and set ${\mathbf w}(n)=\sum_{d \geq 0} {\mathbf w}(n,d)$. Let ${\mathbf S}(z)$ denote the generating function of RNA secondary structures and ${\bf Irr}(z)$ denote the generating function of irreducible secondary structures (irreducibles). Let furthermore ${\mathscr S}_n$ denote the set of secondary structures of length $n$ and ${\mathscr I}_n$ the set of irreducible structures of length $n$. \begin{theorem}\label{T:exact} The bivariate generating function of the number of RNA secondary structures of length $n$ with distance $d$ is given by \begin{equation} \begin{split} &{\mathbf W}(z,u)= \frac{uz^2 ({\mathbf S}(z)-1)}{(1-zu)^2-(1-zu)(zu)^2 ({\mathbf S}(z)-1)} +\frac{z}{1-zu}. \end{split} \end{equation} \end{theorem} \begin{proof} We set ${\mathbf V}(z,u)=z/(1-zu)$ and ${\mathbf U}(z,u)={\mathbf W}(z,u)- {\mathbf V}(z,u)$.\\ {\it Claim 1:} ${\bf Irr}(z)=z^2\left({\mathbf S}(z)-1 \right)$.\\ To prove Claim 1 we consider the mapping $ \gamma: {\mathscr I}_n \longrightarrow {\mathscr S}_{n-2} $, obtained by removing the shapes $\lambda_1$ and $\lambda_{n-1}$ from $\beta(\sigma_n)$ and removing the rightmost box from all other shapes $\lambda_j$, $2\leq j \leq n-2$. Note that in case of $1=n-1$ the tableau $\beta(\sigma_n)$ would correspond to a $1$-arc, which is impossible. Hence for an irreducible structure $\lambda_1=\Box$ and $\lambda_{n-1}=\Box$ are distinct shapes and the induced sequence of shapes $\mu=(\lambda_0,\lambda_2\setminus \Box,\dots,\lambda_{n-2}\setminus\Box, \lambda_n)$ is again a $1$-tableau and thus corresponds to an element of $\mathscr{S}_{n-2}$, where $\lambda_j\setminus \Box$ denotes the shape $\lambda_j$ with the rightmost $\Box$ deleted. Thus $\gamma$ is well-defined. Given a $1$-tableau $\tau=(\lambda_0,\dots,\lambda_{n-2})$ we consider the map \begin{equation} \gamma^*(\tau)=(\lambda_0,\Box,\lambda_1\sqcup\Box,\dots,\lambda_{n-3} \sqcup \Box,\Box, \lambda_{n-2}) \end{equation} where $\lambda_j\sqcup \Box$ denotes the shape $\lambda_j$ with a $\Box$ added, see Fig.~\ref{F:mapIS}. By construction, $\gamma^*\circ \gamma={\rm id}$, whence Claim $1$. Let us first compute the contribution of secondary structures containing at least one irreducible.\\ {\it Claim 2:} Suppose $\sigma_n$ has distance $d$; then $(i+1)$ irreducibles can be arranged in exactly ${d-i \choose i+1}$ ways.\\ Indeed, in view of $d=2\, \vert \{\varnothing^*\in \beta(\sigma_n)\}\vert + \vert \{\varnothing\in \beta(\sigma_n)\}\vert$, the distance-contribution of the rightmost irreducible and of each isolated point is one, while the contribution of each of the remaining $i$ irreducibles equals two. No two such contributions overlap, whence, replacing $d$ by $d-i$, we have ${d-i\choose i+1}$ ways to place the $(i+1)$ irreducibles, and Claim $2$ follows. Accordingly, we obtain for fixed $d$ \begin{equation} \sum_{n>d} {\mathbf u}(n,d)z^n=\sum_{i \geq 0}{d-i \choose i+1}\, {\bf Irr}(z)^{i+1} z^{d-2i-1}, \end{equation} where the indeterminate $z$ corresponds to the isolated points and ${\bf Irr}(z)$ represents the irreducible structures labeled by the $\varnothing^{*}$ and $\varnothing^{\#}$.
Consequently, rearranging terms, we derive \begin{equation} {\mathbf U}(z,u)=\sum_{d\geq 1}\sum_{n>d}\,{\mathbf u}(n,d)z^n u^d =\sum_{i \geq 0}\sum_{d \geq 1} {d-i \choose i+1}\, {\bf Irr}(z)^{i+1} z^{d-2i-1}\, u^d \end{equation} and therefore \begin{equation} \begin{split} {\mathbf U}(z,u) &=\sum_{i \geq 0}\sum_{d \geq 1} {d-i \choose i+1}\, (zu)^{d-i} z^{-i-1}\, u^i\, {\bf Irr}(z)^{i+1} \\ &= \sum_{i \geq 0}\sum_{d \geq 1} {d-i \choose i+1}\, (zu)^{d-i} \, \left(\frac{u \,{\bf Irr}(z)}{z}\right)^i \, \frac{{\bf Irr}(z)}{z}. \end{split} \end{equation} Using $\sum_{r \geq 0} {r \choose k}\,x^r= \frac{x^k}{(1-x)^{k+1}}$ for $k \geq 0$, we compute \begin{equation*} \begin{split} {\mathbf U}(z,u)&=\sum_{i \geq 0}\,\frac{(zu)^{i+1}}{(1-zu)^{i+2}}\, \left(\frac{u {\bf Irr}(z)}{z} \right)^i \, \frac{{\bf Irr}(z)}{z}\\ &=\frac{1}{1-\frac{zu}{1-zu}\frac{u{\bf Irr}(z)}{z}}\, \frac{zu{\bf Irr}(z)}{z(1-zu)^2}\\ &=\frac{uz^2({\bf S}(z)-1)}{(1-zu)^2-(1-zu)z^2u^2({\bf S}(z)-1)}. \end{split} \end{equation*} It remains to consider RNA secondary structures that contain no irreducibles, i.e.~RNA secondary structures consisting exclusively of isolated vertices. Clearly, \begin{equation} {\bf V}(z,u)= \sum_{n \geq 1} z^n u^{n-1}= \frac{z}{1-zu} \end{equation} and the proof of the theorem is complete. \end{proof} Setting ${\bf p}(n,d)={\bf w}(n,d)/{\bf w}(n)$, Theorem~\ref{T:exact} provides the distribution of distances for RNA secondary structures of any fixed length $n$, see Tab.\ \ref{Tab:30}.
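
The exact counts ${\bf w}(n,d)$ can also be obtained by series expansion of ${\bf W}(z,u)$; a short computational sanity check of Theorem~\ref{T:exact}, a sketch assuming the Python library sympy:

\begin{verbatim}
import sympy as sp

# Expand W(z,u) of the theorem above and read off w(n,d) as the
# coefficient of z^n u^d.
z, u = sp.symbols('z u')
S = (1 - z + z**2 - sp.sqrt((z**2 + z + 1)*(z**2 - 3*z + 1)))/(2*z**2)
W = (u*z**2*(S - 1)/((1 - z*u)**2 - (1 - z*u)*(z*u)**2*(S - 1))
     + z/(1 - z*u))
ser = sp.expand(sp.series(W, z, 0, 9).removeO())
for n in range(1, 9):
    cn = sp.expand(ser.coeff(z, n))
    print(n, [cn.coeff(u, d) for d in range(n)])  # w(n,d), d = 0,...,n-1
\end{verbatim}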
\section{The singular expansion}\label{S:singular} In this section we analyze the asymptotics of the $n$th coefficient, $[z^n]{\bf W}(z,u)$. This will play a crucial role for the computation of the limit distribution of distances in Section~\ref{S:limit}. Let us first establish some facts needed for deriving the singular expansion: \begin{lemma} ${\bf W}(z,u)$ is algebraic over the rational function field $\mathbb{C}(z,u)$ and has the unique dominant singularity, $\rho=(3-\sqrt{5})/2$, which coincides with the unique dominant singularity of ${\bf S}(z)$. \end{lemma} \begin{proof} The fact that ${\bf W}(z,u)$ is algebraic over the rational function field $\mathbb{C}(z,u)$ follows immediately from Theorem~\ref{T:exact}, where we proved \begin{equation*} \begin{split} &{\mathbf W}(z,u)= \frac{uz^2 ({\mathbf S}(z)-1)}{(1-zu)^2-(1-zu)(zu)^2 ({\mathbf S}(z)-1)} +\frac{z}{1-zu}, \end{split} \end{equation*} since evidently all numerators and denominators are polynomial expressions in $u$ and $z$ and \begin{equation}\label{E:root} {\bf S}(z)=\frac{1-z+z^2-\sqrt{(z^2 + z + 1) (z^2 - 3 z + 1)}}{2z^2}. \end{equation} Thus the field $\mathbb{C}(z,u)[{\bf S}(z)]$ is algebraic of degree two over $\mathbb{C}(z,u)$. The second assertion follows from $u\in (0,1)$ and a straightforward analysis of the singularities of the two denominators $(1-zu)^2-(1-zu)(zu)^2 ({\mathbf S}(z)-1)$ and $(1-zu)$. \end{proof} Given two numbers $\phi,r$, where $r>|\kappa|$ and $0<\phi<\frac{\pi}{2}$, the open domain $\Delta_\kappa(\phi,r)$ is defined as \begin{equation*} \Delta_\kappa(\phi,r)=\{ z\mid \vert z\vert < r, z\neq \kappa,\, \vert {\rm Arg}(z-\kappa)\vert >\phi\}. \end{equation*} A domain is a $\Delta_\kappa$-domain\index{$\Delta_\kappa$-domain} at $\kappa$ if it is of the form $\Delta_\kappa(\phi,r)$ for some $r$ and $\phi$. A function is $\Delta_\kappa$-analytic\index{$\Delta_\kappa$-analytic} if it is analytic in some $\Delta_\kappa$-domain. Suppose an algebraic function has a unique singularity $\kappa$. According to \citep{Flajolet:07a,Stanley:80} such a function is $\Delta_\kappa(\phi,r)$-analytic. In particular, ${\bf W}(z,u)$ is $\Delta_\rho(\phi,r)$-analytic. We introduce the notation \begin{eqnarray*} \left(f(z)=o\left(g(z)\right) \ \text{\rm as $z\rightarrow \kappa$}\right)\ &\Longleftrightarrow& \ \left(f(z)/g(z)\rightarrow 0\ \text{\rm as $z\rightarrow \kappa$}\right), \end{eqnarray*} and if we write $f(z)=o\left(g(z)\right)$ it is implicitly assumed that $z$ tends to the (unique) singularity. The following transfer theorem allows us to obtain the asymptotics of the coefficients from the generating functions. \begin{theorem}\label{T:transfer1b}{\bf }\citep{Flajolet:07a} {Let $f(z)$ be a $\Delta_{\kappa}$-analytic function at its unique singularity $z=\kappa$. Let $g(z)\in \{(\kappa-z)^{\alpha}\mid \alpha \in \mathbb{R}\}$. Suppose we have in the intersection of a neighborhood of $\kappa$ with the $\Delta_{\kappa}$-domain \begin{equation*} f(z) = o(g(z)) \quad \text{\it for } z\rightarrow \kappa. \end{equation*} Then we have \begin{equation*} [z^n]f(z)= o\left([z^n]g(z)\right). \end{equation*}} \end{theorem} In addition, according to \citep{Flajolet:05} we have for $\alpha\in\mathbb{C}\setminus \mathbb{Z}_{\le 0}$: \begin{eqnarray}\label{E:33} [z^n]\, (1-z)^{-\alpha} & \sim & \frac{n^{\alpha-1}}{\Gamma(\alpha)}\left[ 1+\frac{\alpha(\alpha-1)}{2n}+ O\left(\frac{1}{n^2}\right)\right]. \end{eqnarray} We next observe ${\bf W}(z,u)=h(z,u)\, f(g(z,u))+{\bf V}(z,u)$, where $g(z,u)=(uz^2({\bf S}(z)-1))/(1-uz)$, $f(z)=z/(1-uz)$, $h(z,u)=1/(1-zu)$ and $t(z,u)=uz^2/(1-uz)$. In preparation for the proof of Lemma~\ref{L:erni1} we set \begin{equation*} \begin{split} \alpha &= g(\rho,u)=\frac{2(-2+\sqrt{5})u}{2+(-3+\sqrt{5})u},\\ C_0 &= \frac{2}{2-(3-\sqrt{5})u}\, \left(f(\alpha)+\frac{d f(w)}{d w}|_{w=\alpha}\,t(\rho,u) \,\frac{\sqrt{5}-1}{3-\sqrt{5}}-\alpha\,\frac{d f(w)}{d w}|_{w=\alpha}\right),\\ r(\rho,u) &= - \frac{2}{2-(3-\sqrt{5})u}\,\frac{d f(w)}{d w}|_{w=\alpha}\, t(\rho,u)\,\frac{\sqrt{8(3\sqrt{5}-5)}}{(-3+\sqrt{5})^2}. \end{split} \end{equation*} Furthermore, let $v(z)$ and $w(z)$ be $D$-finite power series such that $w(0)=0$ and let $\rho_v$, $\rho_w$ denote their respective radii of convergence. We set $\tau_w =\lim_{z \rightarrow \rho_w^{-}} w(z)$ and call the composition $F(z) = v(w(z))$ subcritical if and only if $\tau_w < \rho_v$. \begin{lemma}\label{L:erni1} The singular expansion of ${\bf W}(z,u)$ at its unique dominant singularity $\rho$ is given by \begin{equation} {\bf W}(z,u)=C_0+{\bf V}(\rho,u)+ r(\rho,u)(\rho-z)^{1/2}+O(\rho-z). \end{equation} \end{lemma} \begin{proof} Since $g(0,u)=0$, the composition $f(g(z,u))$ is well defined as a formal power series, and ${\bf V}(z,u)=\frac{z}{1-zu}$ as well as $h(z,u)$ are regular at $\rho$. Since $u \in (0,1)$ we have $1/u> 1> \rho$, whence the dominant singularity of $g(z,u)$ equals $\rho$. Next we observe \begin{equation*} g(\rho,u)=\frac{u(1-\rho-\rho^2)}{2(1-u\rho)}< \frac{0.7 u}{2(1-0.4u)}=\frac{0.35u}{1-0.4u}<1, \end{equation*} whence $f(g(z,u))$ is governed by the subcritical paradigm. \\ {\it Claim $1$.} \begin{equation} g(z,u)=t(\rho,u)\,\frac{\sqrt{5}-1}{3-\sqrt{5}}- t(\rho,u)\, \frac{\sqrt{8(3\sqrt{5}-5)(\rho-z)}}{(-3+\sqrt{5})^2} +O(\rho-z). \end{equation} To prove the Claim we consider the singular expansion of ${\bf S}(z)$ at $\rho$ \begin{equation} {\bf S}(z)=\frac{2}{3-\sqrt{5}}-\frac{\sqrt{8(3\sqrt{5}-5) (\rho-z)}}{(-3+\sqrt{5})^2}+O(\rho-z).
\end{equation} The singular expansion of $g(z,u)$ at $\rho$ is obtained by multiplying the regular expansion of $t(z,u)$ with the singular expansion of ${\bf S}(z)-1$. Clearly, \begin{equation} t(z,u)=t(\rho,u)-\frac{d t(z,u)}{dz}|_{z=\rho} \,(\rho-z)+O((\rho-z)^2), \end{equation} where $t(\rho,u)=(7-3\sqrt{5})u/(2-(3-\sqrt{5})u)$. Thus \begin{equation} g(z,u)=t(\rho,u)\,\frac{\sqrt{5}-1}{3-\sqrt{5}}- t(\rho,u)\, \frac{\sqrt{8(3\sqrt{5}-5)(\rho-z)}}{(-3+\sqrt{5})^2}+O(\rho-z). \end{equation} Setting $\alpha=g(\rho,u)=2(-2+\sqrt{5})u/(2+(-3+\sqrt{5})u)$, the regular expansion of $f(w)$ at $\alpha$ is \begin{equation} f(w) = f(\alpha)+\frac{d f(w)}{dw}|_{w=\alpha}\,(w-\alpha)+O((w-\alpha)^2), \end{equation} where $\frac{d f(w)}{dw}|_{w=\alpha}= \left(\frac{2+(-3+\sqrt{5})u}{2+(-3+\sqrt{5})u- 2(-2+\sqrt{5})u^2}\right)^2$, and accordingly \begin{equation} f(g(z,u)) = C_1- \frac{d f(w)}{dw}|_{w=\alpha}\,t(\rho,u)\,\frac{\sqrt{8(3\sqrt{5}-5)(\rho-z)}} {(-3+\sqrt{5})^2}+O(\rho-z), \end{equation} where $C_1=f(\alpha)+\frac{d f(w)}{dw}|_{w=\alpha}\,t(\rho,u)\,\frac{\sqrt{5}-1}{3- \sqrt{5}}-\alpha\,\frac{d f(w)}{dw}|_{w=\alpha}$. Multiplying with the regular expansion of $h(z,u)$ at $\rho$ and adding the regular expansion of ${\bf V}(z,u)$ implies the lemma. \end{proof} \section{The limit distribution}\label{S:limit} In this section we shall prove that, for any finite $d$, \begin{equation} \lim_{n\to\infty}\frac{{\bf w}(n,d)}{{\bf w}(n)}={\bf q}(d). \end{equation} We furthermore determine the limit distribution via computing the power series \begin{equation} {\bf Q}(u)=\sum_{d\ge 1}{\bf q}(d)u^d. \end{equation} Theorem~\ref{T:continuity} below ensures that under certain conditions the point-wise convergence of probability generating functions implies the convergence of their coefficients. \begin{theorem}{\label{T:continuity}} Let $u$ be an indeterminate and $\Omega$ be a set contained in the unit disc, having at least one accumulation point in the interior of the disc. Assume ${\bf P}_n(u)=\sum_{d\ge 0}{\bf p}(n,d)u^d$ and ${\bf Q}(u)=\sum_{d\ge 0}{\bf q}(d) u^d$ are such that $\lim_{n\rightarrow \infty}{\bf P}_n(u)={\bf Q}(u)$ holds for each $u\in\Omega$. Then we have for any finite $d$, \begin{equation} \lim_{n\rightarrow\infty}{\bf p}(n,d)={\bf q}(d) \quad \ \text{\it and }\quad \ \lim_{n\rightarrow \infty}\sum_{j\le d}{\bf p}(n,j)=\sum_{j\le d}{\bf q}(j). \end{equation} \end{theorem} Let ${m}_1(u)=(-7 + 3 \sqrt{5}) u$ and $$ {m}_2(u)=-2 -2 (-3 + \sqrt{5}) u + (-15 + 7 \sqrt{5}) u^2 + (22 - 10 \sqrt{5}) u^3 +2 (-9 + 4 \sqrt{5}) u^4. $$ \begin{theorem}\label{T:limit} For any $d\ge 1$ we have \begin{equation} \lim_{n\to\infty}{\bf p}(n,d) =\lim_{n\to\infty}\frac{{\bf w}(n,d)}{{\bf w}(n)}={\bf q}(d), \end{equation} where ${\bf q}(d)$ is given via the probability generating function ${\bf Q}(u)$, \begin{equation} {\bf Q}(u)=\frac{{m}_1(u)}{{m}_2(u)}. \end{equation} \end{theorem} \begin{proof} According to Lemma~\ref{L:erni1}, the singular expansion of ${\bf W}(z,u)$ is given by \begin{equation} {\bf W}(z,u)=C_0 + {\bf V}(\rho,u)+ r(\rho,u)(\rho-z)^{1/2}+O(\rho-z). \end{equation} Thus \begin{equation} [z^n]{\bf W}(z,u)=r(\rho,u)\, [z^n]\,(\rho-z)^{1/2} + [z^n]\,O(\rho-z). \end{equation} In view of $O(z-\rho)=o((z-\rho)^{1/2})$, Theorem~\ref{T:transfer1b} implies \begin{equation} [z^n]{\bf W}(z,u)\sim r(\rho,u)\,[z^n]\,(\rho-z)^{1/2}.
\end{equation}
Employing eq.~(\ref{E:33}) we obtain
\begin{equation}
[z^n]{\bf W}(z,u)\sim r(\rho,u) \, K \, n^{-3/2}\,\rho^{-n}(1+O(\frac{1}{n})),
\end{equation}
for some constant $K>0$. Substituting for $r(\rho,u)$ we arrive at
\begin{equation*}
[z^n]{\bf W}(z,u)= \frac{{m}_1(u)}{{ m}_2(u) } \cdot \frac{2\sqrt{6\sqrt{5}-10}}{(-3+\sqrt{5})^2} \cdot K \, n^{-3/2}\rho^{-n}(1+O(\frac{1}{n}))
\end{equation*}
and in particular for $u=1$
\begin{equation*}
[z^n]{\bf W}(z,1)=\frac{2\sqrt{6\sqrt{5}-10}}{(-3+\sqrt{5})^2} \cdot K\, n^{-3/2}\, \rho^{-n}(1+O(\frac{1}{n})).
\end{equation*}
We consequently have
\begin{equation}\label{E:converge}
\begin{split}
\lim_{n \rightarrow \infty}\frac{[z^n]{\bf W}(z,u)}{[z^n]{\bf W}(z,1)} =\frac{{m}_1(u)}{{m}_2(u)}.
\end{split}
\end{equation}
Therefore, setting ${\bf P}_n(u)=\sum_d{\bf p}(n,d)u^d$,
\begin{equation}
\lim_{n\to\infty}{\bf P}_n(u)={\bf Q}(u).
\end{equation}
Since $\Omega=(0,1)$ is contained in the unit disc, $0$ is an accumulation point of $\Omega$ lying in the interior of the disc, and eq.~(\ref{E:converge}) holds for each $u \in \Omega$, Theorem~\ref{T:continuity} implies for any finite $d$
\begin{equation}
\lim_{n\to \infty} {\bf p}(n,d)=\lim_{n\to\infty}\frac{{\bf w}(n,d)}{{\bf w}(n)}= {\bf q}(d).
\end{equation}
\end{proof}
We finally compute the asymptotic expression of ${\bf q}(d)$. For this purpose we recall that the density function of a $\Gamma{(\lambda, r)}$-distribution is given by
\begin{equation}
f_{\lambda, r}(x)=
\begin{cases}
\frac{\lambda^r}{\Gamma(r)}\,x^{r-1} e^{-\lambda x}, \quad x>0 \\
0, \quad x \le 0
\end{cases}
\end{equation}
where $\lambda >0$ and $r>0$.
\begin{corollary}\label{C:1}
Let $\rho$ be the real positive dominant singularity of ${\bf S}(z)$ and set $\delta=\frac{1}{4}(-1 - \sqrt{5} + \sqrt{38+18\sqrt{5}})$. Then
\begin{equation*}
{\bf q}(d)\sim \frac{C_3}{\delta}(d+1)(\frac{1}{\delta})^{d+1} =\frac{C_3}{\delta}\,(\ln(\delta))^{-2}\, f_{\ln(\delta),2} (d).
\end{equation*}
That is, in the limit of large distances the coefficient ${\bf q}(d)$ is determined by the density function of a $\Gamma(\ln \delta, 2)$-distribution.
\end{corollary}
\section{Discussion}\label{S:discuss}
The results of this paper suggest that the number of base pairs alone is not sufficient to explain the distribution of $5'$-$3'$ distances. Surprisingly, we find that the $5'$-$3'$ distances of random structures are much smaller than those of mfe-structures, despite the fact that they contain fewer base pairs, see Fig.\ \ref{F:kkk}. By definition, only irreducibles and isolated vertices contribute to the $5'$-$3'$ distance. The particular number of base pairs contained within irreducible substructures is irrelevant. It has been shown in \citet{Emma:09} that there exists a limit distribution for the number of irreducibles in random RNA secondary structures. This limit distribution is determined by a $\Gamma$-distribution similar to that of Corollary~\ref{C:1}. As a result, random RNA secondary structures have only very few irreducibles, typically two or three. This constitutes a feature shared by RNA mfe-structures. Thus, in the case of random and mfe-structures, a few irreducibles ``cover'' almost the entire sequence since the $5'$-$3'$ distance is, even in the limit of large sequence length, finite. The distinctively larger $5'$-$3'$ distance of mfe-structures consequently stems from the fact that their irreducibles cover a distinctively smaller fraction of the sequence. Hence the irreducibles of mfe-structures differ in a subtle way from those of random RNA structures.
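As a remark, Theorem~\ref{T:limit} and the tail behavior stated in Corollary~\ref{C:1} are easy to check numerically. The following short computation is an illustration we include for convenience, not part of the derivation (the tail constant is read off numerically rather than computed in closed form): it expands ${\bf Q}(u)={m}_1(u)/{m}_2(u)$ as a power series, confirms that the coefficients sum to $1$, and verifies that ${\bf q}(d+1)/{\bf q}(d)\rightarrow 1/\delta$ while ${\bf q}(d)\,\delta^{d+1}/(d+1)$ stabilizes, in accordance with the $\Gamma$-type tail.
\begin{verbatim}
from sympy import sqrt, symbols, series

u = symbols('u')
m1 = (-7 + 3*sqrt(5)) * u
m2 = (-2 - 2*(-3 + sqrt(5))*u + (-15 + 7*sqrt(5))*u**2
      + (22 - 10*sqrt(5))*u**3 + 2*(-9 + 4*sqrt(5))*u**4)

N = 40
Q = series(m1 / m2, u, 0, N).removeO()
q = [float(Q.coeff(u, d)) for d in range(N)]   # q[d] = q(d); q(0) = 0

delta = float((-1 - sqrt(5) + sqrt(38 + 18*sqrt(5))) / 4)
print(sum(q))                                   # ~ 1, since Q(1) = 1
print([q[d + 1] / q[d] for d in (30, 31, 32)])  # -> 1/delta ~ 0.713
print([q[d] * delta**(d + 1) / (d + 1)          # -> constant, matching the
       for d in (30, 31, 32)])                  #    tail of Corollary C:1
\end{verbatim}
One also checks numerically that $\delta$ is a double root of ${m}_2(u)$ of smallest modulus, which is the source of the factor $(d+1)$ in Corollary~\ref{C:1}.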
We show in the following that the shift of the $5'$-$3'$ distance is a combinatorial consequence of large stacks observed in mfe-structures, see Fig.~\ref{F:kk1}. Here a stack of length $r$ is a maximal sequence of ``parallel'' arcs, $((i,j),(i+1,j-1),\dots,(i+(r-1),j-(r-1)))$. RNA secondary structures with stack length $\ge r$ are called $r$-canonical RNA secondary structures. Let ${\mathbf w}_r(n,d)$ denote the number of $r$-canonical RNA secondary structures $\sigma_{r,n}$ having distance $d_n$. We shall write $d$ instead of $d_n$ and consider
\begin{equation}
{\mathbf W}_r(z,u)=\sum_{n\geq 0}\sum_{d \geq 0}{\bf w}_r(n,d)\,z^n u^d,
\end{equation}
the bivariate generating function of the number of RNA secondary structures with minimum stack-size $r$ of length $n$ having distance $d$, and set ${\mathbf w}_r(n)=\sum_{d \geq 0} {\mathbf w}_r(n,d)$. Let ${\mathbf S}_{r}(z)$ denote the generating function of $r$-canonical RNA secondary structures. Set
\begin{equation}\label{E:P}
\begin{split}
p_r(z)=&(z^{2r}-(z-1)(z^{2r}-z^2+1))^2-4z^{2r}(z^{2r}-z^2+1).
\end{split}
\end{equation}
Then the generating function of $r$-canonical secondary structures is given by
\begin{equation}\label{E:root1}
{\bf S}_r(z) = \frac{(z^{2r}-(z-1)(z^{2r}-z^2+1)-\sqrt{p_r(z)})}{2z^{2r}},
\end{equation}
which can be derived using symbolic enumeration \citep{Flajolet:07a}.
\begin{theorem}\label{T:exact2}
The bivariate generating function of the number of $r$-canonical RNA secondary structures of length $n$ with distance $d$ is given by
\begin{equation}
\begin{split}
&{\mathbf W}_r(z,u)= \frac{u\, z^{2r} ({\mathbf S}_r(z)-1)}{(1-zu)^2\,(1-z^2+z^{2r}) -(1-zu)u^2\, z^{2r} ({\mathbf S}_r(z)-1)} +\frac{z}{1-zu}.
\end{split}
\end{equation}
\end{theorem}
Along the lines of our analysis subsequent to Theorem~\ref{T:exact} we can then obtain the singular expansion and the limit distributions for the $5'$-$3'$ distances of $r$-canonical RNA secondary structures, see Fig.~\ref{F:kkk}.
\section{Acknowledgments}
We are grateful to Thomas J.~X.~Li for carefully reading the manuscript. Special thanks go to Fenix W.D.~Huang for generating Figure~\ref{F:kk1} and to Emeric Deutsch for pointing out an error in Theorem~\ref{T:exact2} in the discussion.
\section{INTRODUCTION} \label{sec:intro} The ultraviolet (UV) spectral slope, $\beta$, where $f_\lambda\propto \lambda^\beta$, is by far the most commonly used indicator of dust obscuration---usually parameterized as the ratio of the infrared-to-UV luminosity, $L_{\rm IR}/L_{\rm UV}$, or ``IRX'' \citep{calzetti94, meurer99}---in moderately reddened high-redshift ($z\ga 1.5$) star-forming galaxies. The UV slope can be measured easily from the same photometry used to select galaxies based on the Lyman break, and the slope can be used as a proxy for the dust obscuration in galaxies (e.g., \citealt{calzetti94, meurer99, adelberger00, reddy06a, daddi07a, reddy10, overzier11, reddy12a, buat12}) whose dust emission is otherwise too faint to directly detect in the mid- and far-infrared (e.g., \citealt{adelberger00, reddy06a}). Generally, these studies have indicated that UV-selected star-forming galaxies at redshifts $1.5\la z\la 3.0$ follow on average the relationship between UV slope and dust obscuration (i.e., the IRX-$\beta$ relation) found for local UV starburst galaxies (e.g., \citealt{nandra02, reddy04, reddy06a, daddi07a, sklias14}; c.f., \citealt{heinis13, alvarez16}), though with some deviations that depend on galaxy age \citep{reddy06a, siana08, reddy10, buat12}, bolometric luminosity (e.g., \citealt{chapman05, reddy06a, casey14b}), stellar mass \citep{pannella09, reddy10, bouwens16b}, and redshift \citep{pannella15}. Unfortunately, typical star-forming ($L^{\ast}$) galaxies at these redshifts are too faint to directly detect in the far-infrared. As such, with the exception of individual lensed galaxy studies \citep{siana08, siana09, sklias14, watson15, dessauges16}, most investigations that have explored the relation between UV slope and dust obscuration for moderately reddened galaxies have relied on stacking relatively small numbers of objects and/or used shorter wavelength emission---such as that arising from polycyclic aromatic hydrocarbons (PAHs)---to infer infrared luminosities. New avenues of exploring the dustiness of high-redshift galaxies have been made possible with facilities such as the Atacama Large Millimeter Array (ALMA), allowing for direct measurements of either the dust continuum or far-IR spectral features for more typical star-forming galaxies in the distant universe \citep{carilli13, dunlop17}. Additionally, the advent of large-scale rest-optical spectroscopic surveys of intermediate-redshift galaxies at $1.4\la z\la 2.6$---such as the 3D-HST \citep{vandokkum13}, the MOSFIRE Deep Evolution Field (MOSDEF; \citealt{kriek15}), and the Keck Baryonic Structure surveys (KBSS; \citealt{steidel14})---has enabled measurements of obscuration in individual high-redshift star-forming galaxies using Balmer recombination lines (e.g., \citealt{price14, reddy15, nelson16}). While these nebular line measurements will be possible in the near future for $z\ga 3$ galaxies with the {\em James Webb Space Telescope} ({\em JWST}), the limited lifetime of this facility and the targeted nature of both ALMA far-IR and {\em JWST} near- and mid-IR observations mean that the UV slope will remain the only easily accessible proxy for dust obscuration for large numbers of {\em individual} typical galaxies at $z\ga 3$ in the foreseeable future. Despite the widespread use of the UV slope to infer dust attenuation, there are several complications associated with its use.
First, the UV slope is sensitive to metallicity and star-formation history (e.g., \citealt{kong04, seibert05, johnson07b, dale09, munoz09, reddy10, wilkins11, boquien12, reddy12b, schaerer13, wilkins13, grasha13, zeimann15}). Second, there is evidence that the relationship between UV slope and dust obscuration depends on stellar mass and/or age (e.g., \citealt{reddy06a, buat12, zeimann15, bouwens16b}), perhaps reflecting variations in the shape of the attenuation curve. Third, the measurement of the UV slope may be complicated by the presence of the 2175\,\AA\, absorption feature \citep{noll09, buat11, kriek13, buat12, reddy15}. Fourth, as noted above, independent inferences of the dust attenuation in faint galaxies typically involve stacking mid- and far-IR data, but such stacking masks the scatter in the relationship between UV slope and obscuration. Quantifying this scatter can elucidate the degree to which the attenuation curve may vary from galaxy to galaxy, or highlight the sensitivity of the UV slope to factors other than dust obscuration. In general, the effects of age, metallicity, and star-formation history on the UV slope may become important for ultra-faint galaxies at high redshift, which have been suggested to undergo bursty star formation (e.g., \citealt{weisz12, hopkins14, dominguez15, guo16, sparre17, faucher17}). Obtaining direct constraints on the dust obscuration of UV-faint galaxies is an important step in evaluating the viability of the UV slope to trace dustiness, quantifying the bolometric luminosities of ultra-faint galaxies and their contribution to the global SFR and stellar mass densities, assessing possible variations in the dust obscuration curve over a larger dynamic range of galaxy characteristics (e.g., star-formation rate, stellar mass, age, metallicity, etc.), and discerning the degree to which the UV slope may be affected by short timescale variations in star-formation rate. Separately, recent advances in stellar population models that include realistic treatments of stellar mass loss, rotation, and multiplicity \citep{eldridge09, brott11, levesque12, leitherer14} imply additional dust heating from ionizing and/or recombination photons. Moreover, the intrinsic UV spectral slopes of high-redshift galaxies with lower stellar metallicities may be substantially bluer \citep{schaerer13, sklias14, alavi14, cullen17} than what has been typically assumed in studies of the IRX-$\beta$ relation. Thus, it seems timely to re-evaluate the IRX-$\beta$ relation in light of these issues. With this in mind, we use a newly assembled large sample of galaxies with secure spectroscopic or photometric redshifts at $1.5\le z\le 2.5$ in the GOODS-North and GOODS-South fields to investigate the correlation between UV slope and dust obscuration. Our sample takes advantage of newly acquired {\em Hubble} UVIS F275W and F336W imaging from the HDUV survey (Oesch et~al. 2017, submitted), which aids in determining photometric redshifts when combined with existing 3D-HST photometric data. This large sample enables precise measurements of dust obscuration through the stacking of far-infrared images from the {\em Herschel Space Observatory}, and also permits stacking in multiple bins of other galaxy properties (e.g., stellar mass, UV luminosity) to investigate the scatter in the IRX-$\beta$ relation.
We also consider the newest stellar population models---those which may be more appropriate in describing very high-redshift ($z\ga 2$) galaxies---in interpreting the relationship between UV slope and obscuration. \begin{deluxetable*}{lc} \tabletypesize{\footnotesize} \tablewidth{0pc} \tablecaption{Sample Characteristics} \tablehead{ \colhead{Property} & \colhead{Value}} \startdata Fields & GOODS-N, GOODS-S \\ Total area & $\sim 329$\,arcmin$^2$ \\ Area with HDUV imaging & $\sim 100$\,arcmin$^{2}$ \\ UV/Optical photometry & 3D-HST Catalogs\tablenotemark{a} and HDUV\tablenotemark{b} F275W and F336W \\ Mid-IR imaging & {\em Spitzer} GOODS Imaging Program\tablenotemark{c} \\ Far-IR imaging & GOODS-{\em Herschel}\tablenotemark{d} and PEP\tablenotemark{e} Surveys \\ Optical depth of sample & $H \simeq 27$ \\ UV depth of sample & $m_{\rm UV} \simeq 27$ \\ Total number of galaxies & 4,078 \\ Number of galaxies with far-IR coverage & 3,569 \\ Final number (excl. far-IR-detected objects) & 3,545 \\ $\beta$ Range & $-2.55\le \beta \le 1.05$ ($\langle\beta\rangle = -1.71$) \enddata \tablenotetext{a}{\citet{skelton14}.} \tablenotetext{b}{Oesch et~al., submitted.} \tablenotetext{c}{PI: Dickinson.} \tablenotetext{d}{\citet{elbaz11}.} \tablenotetext{e}{PI: Lutz, \citet{magnelli13}.} \label{tab:sample} \end{deluxetable*} The outline of this paper is as follows. In Section~\ref{sec:sample}, we discuss the selection and modeling of stellar populations of galaxies used in this study. The methodology used for stacking the mid- and far-IR {\em Spitzer} and {\em Herschel} data is discussed in Section~\ref{sec:stacking}. In Section~\ref{sec:predirx}, we calculate the predicted relationships between IRX and $\beta$ for different attenuation/extinction curves using energy balance arguments. These predictions are compared to our (as well as literature) stacked measurements of IRX in Section~\ref{sec:discussion}. In this section, we also consider the variation of IRX with stellar masses, UV luminosities, and the ages of galaxies, as well as the implications of our results for modeling the stellar populations and inferring the ionizing efficiencies of high-redshift galaxies. AB magnitudes are assumed throughout \citep{oke83}, and we use a \citet{chabrier03} initial mass function (IMF) unless stated otherwise. We adopt a cosmology with $H_{0}=70$\,km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_{\Lambda}=0.7$, and $\Omega_{\rm m}=0.3$. \section{SAMPLE AND IR IMAGING} \label{sec:sample} \subsection{Parent Sample} A few basic properties of our sample are summarized in Table~\ref{tab:sample}. Our sample of galaxies was constructed by combining the publicly-available ground- and space-based photometry compiled by the 3D-HST survey \citep{skelton14} with newly obtained imaging from the {\em Hubble} Deep UV (HDUV) Legacy Survey (GO-13871; Oesch et~al. 2017, submitted). The HDUV survey imaged the two GOODS fields in the F275W and F336W bands to depths of $\simeq 27.5$ and $27.9$\,mag, respectively ($5\sigma$; $0\farcs4$ diameter aperture), with the UVIS channel of the {\em Hubble Space Telescope} WFC3 instrument. A significant benefit of the HDUV imaging is that it allows for the Lyman break selection of galaxies to fainter UV luminosities and lower redshifts than possible from ground-based surveys (Oesch et~al. 2017, submitted), and builds upon previous efforts to use deep UVIS imaging to select Lyman break galaxies at $z\sim 2$ \citep{hathi10, windhorst11}.
The reduced UVIS images, covering $\approx 100$\,arcmin$^{2}$, include previous imaging obtained by the CANDELS \citep{koekemoer11} and UVUDF surveys \citep{teplitz13, rafelski15}. \subsection{Photometry and Stellar Population Parameters} Source Extractor \citep{bertin96} was used to measure photometry on the UVIS images using the detection maps for the combined F125W$+$F140W$+$F160W images, as was done for the 3D-HST photometric catalogs \citep{skelton14}. The publicly-available 3D-HST photometric catalogs were then updated with the HDUV photometry---i.e., such that the updated catalogs contain updated photometry for objects lying in the HDUV pointings as well as the original photometry for objects lying outside the HDUV pointings. This combined dataset was then used to calculate photometric redshifts using EAZY \citep{brammer08} and determine stellar population parameters (e.g., stellar mass) using FAST \citep{kriek09}. Where available, grism and external spectroscopic redshifts were used in lieu of the photometric redshifts when fitting for the stellar populations. These external spectroscopic redshifts are provided in the 3D-HST catalogs \citep{momcheva16}. We also included 759 spectroscopic redshifts for galaxies observed during the 2012B-2015A semesters of the MOSDEF survey \citep{kriek15}. For the stellar population modeling, we adopted the \citet{conroy10} stellar population models for $Z=0.019$\,Z$_\odot$, a delayed-$\tau$ star-formation history with $8.0\le \log[\tau/{\rm yr}] \le 10.0$, a \citet{chabrier03} initial mass function (IMF), and the \citet{calzetti00} dust attenuation curve with $0.0\le A_{\rm V}\le 4.0$.\footnote{Below, we consider the effect of stellar population age on the IRX-$\beta$ relations. In that context, the ages derived for the vast majority of galaxies in our sample are within $\delta\log[{\rm Age/yr}] \simeq 0.1$\,dex of those derived assuming a constant star-formation history.} We imposed a minimum age of $40$\,Myr based on the typical dynamical timescale for $z\sim 2$ galaxies \citep{reddy12b}. The UV slope for each galaxy was calculated both by (a) fitting a power law through the broadband photometry, including only bands lying redward of the Lyman break and blueward of rest-frame $2600$\,\AA; and (b) fitting a power law through the best-fit SED points that lie in wavelength windows spanning rest-frame $1268\le \lambda\le 2580$\,\AA, as defined in \citet{calzetti94}. Method (a) includes a more conservative estimate for the errors in $\beta$, but generally the two methods yielded values of the UV slope for a given galaxy that were within $\delta\beta \simeq 0.1$ of each other. We adopted the $\beta$ calculated using method (a) for the remainder of our analysis, and note that in Section~\ref{sec:predirx}, we consider the value of $\beta$ using windows lying strictly blueward of $\approx 1800$\,\AA. \begin{figure*} \epsscale{1.15} \plotone{f1.pdf} \caption{Redshift ($z$), absolute magnitude ($M_{1600}$), and UV slope ($\beta$) distributions of the 3,545 galaxies in our sample, color-coded by $\beta$ (left panel) and denoted by cyan symbols (right panel). For comparison, the distributions for the 124 UV-selected galaxies from \citet{reddy12a} are indicated by the red symbols and histograms in the right panel, while the black dashed line denotes the value of $M^{\ast}_{1500} \simeq -20.2$ at $z\sim 1.9$ from \citet{oesch10}.
The present HDUV+3D-HST sample is a factor of $\approx 30\times$ larger than that of \citet{reddy12a}, and includes galaxies over broader ranges of $M_{1600}$ and $\beta$, particularly at fainter magnitudes and bluer UV slopes. The blue squares and solid line in the right panel indicate the median relationship between $\beta$ and $M_{1600}$ for our sample, compared with the mean relationship (blue dashed line) found for 168 $U$-dropouts at $z\sim 2.5$ (note this is the upper bound in redshift for our sample) from \citet{bouwens09}. Removing the $m_{(1+z)\times 1700}\le 27.0$ requirement imposed to construct our sample (Section~\ref{sec:sample}) would allow for a larger number of faint galaxies with redder UV slopes to be selected, increasing the number density of objects in the upper right-hand region of the right panel.} \label{fig:zmag} \end{figure*} \subsection{Criteria for Final Sample} The photometric catalogs, along with those containing the redshifts and stellar population parameters, were used to select galaxies based on the following criteria. First, the object must have a Source Extractor ``class star'' parameter $< 0.95$, or observed-frame $U-J$ and $J-K$ colors that reside in the region occupied by star-forming galaxies as defined by \citet{skelton14}---these criteria ensure the removal of stars. Second, the galaxy must have a spectroscopic or grism redshift, or $95\%$ confidence intervals in the photometric redshift, that lie in the range $1.5\le z\le 2.5$. Note that the high photometric redshift confidence intervals required for inclusion in our sample naturally select those objects with $H\la 27$. Third, the object must not have a match in X-ray AGN catalogs compiled for the GOODS-North and GOODS-South fields (e.g., \citealt{shao10, xue11}). Additionally, we use the \citet{donley12} {\em Spitzer} IRAC selection to isolate any infrared-bright AGN. While the X-ray and IRAC selections may not identify {\em all} AGN at the redshifts of interest, they are likely to isolate those AGN that may significantly influence our stacked far-IR measurements. Fourth, the object must not have rest-frame $U-V$ and $V-J$ colors that classify it as a quiescent galaxy \citep{williams09, skelton14}. The object is further required to have a specific star-formation rate sSFR$\ga 0.1$\,Gyr$^{-1}$. These criteria safeguard against the inclusion of galaxies where $\beta$ may be red due to the contribution of older stars to the near-UV continuum, or where dust heating by older stars may become significant. Fifth, to ensure that the sample is not biased towards objects with red $U-H$ colors at faint $U$ magnitudes (owing to the limit in $H$-band magnitude mentioned previously), the galaxy must have an apparent magnitude at $[1+z]\times 1600$\,\AA\, of $\le 27.0$\,mag. This limit still allows us to include galaxies with absolute magnitudes as faint as $M_{\rm 1600}\simeq -17.4$. These criteria result in a sample of 4,078 galaxies. \subsection{Spitzer and Herschel Imaging} We used the publicly available {\em Spitzer}/MIPS 24\,$\mu$m and {\em Herschel}/PACS 100 and 160\,$\mu$m data in the two GOODS fields for our analysis. The 24\,$\mu$m data come from the {\em Spitzer} GOODS imaging program (PI: Dickinson), and trace the dust-sensitive rest-frame $7.7$\,$\mu$m emission feature for galaxies at $1.5\le z\le 2.5$ (e.g., \citealt{reddy06a}).
The observed $24$\,$\mu$m fluxes of $z\sim 2$ galaxies have been used extensively in the past to derive infrared luminosities ($L_{\rm IR}$) given the superior sensitivity of these data to dust emission when compared with observations taken at longer wavelengths (roughly a factor of three times more sensitive than {\em Herschel}/PACS to galaxies of a given $L_{\rm IR}$ at $z\sim 2$; \citealt{elbaz11}). However, a number of observations have highlighted the strong variation in $L_{\rm 7.7}/L_{\rm IR}$ with star-formation rate \citep{rieke09, shipley16}, star-formation-rate surface density \citep{elbaz11}, and gas-phase metallicity and ionization parameter at high-redshift \citep{shivaei16}. As such, while we stacked the $24$\,$\mu$m data for galaxies in our sample, we did not consider these measurements when calculating $L_{\rm IR}$. In Appendix~\ref{sec:l8lir}, we consider further the variation in $L_{\rm 7.7}/L_{\rm IR}$ with other galaxy characteristics. The {\em Herschel} data come from the GOODS-{\em Herschel} Open Time Key Program \citep{elbaz11} and the PACS Evolutionary Probe (PEP) Survey (PI: Lutz; \citealt{magnelli13}), and probe the rest-frame $\simeq 30-65$\,$\mu$m dust continuum emission for galaxies at $1.5\le z\le 2.5$. We chose not to use the SPIRE data given the much coarser spatial resolution of these data (FWHM$\ga 18\arcsec$) relative to the $24$\,$\mu$m (FWHM$\simeq 5\farcs 4$), $100$\,$\mu$m (FWHM$\simeq 6\farcs 7$), and $160$\,$\mu$m (FWHM$\simeq 11\arcsec$) data. The pixel scales of the 24, 100, and 160\,$\mu$m images are $1\farcs 2$, $1\farcs 2$, and $2\farcs 4$, respectively. As noted above, only the $100$ and $160$\,$\mu$m data are used to calculate $L_{\rm IR}$. Of the 4,078 galaxies in the sample discussed above, 3,569 lie within the portions of the {\em Herschel} imaging that are $80\%$ complete to flux levels of 1.7 and 5.5\,mJy for the 100 and 160\,$\mu$m maps in GOODS-N, respectively, and 1.3 and 3.9\,mJy for the 100 and 160\,$\mu$m maps in GOODS-S, respectively. Of these galaxies, 24 (or $0.67\%$) are directly detected with signal-to-noise $S/N>3$ in either the $100$ or $160$\,$\mu$m images. As we are primarily concerned with constraining the IRX-$\beta$ relation for {\em moderately} reddened galaxies, we removed all directly-detected {\em Herschel} objects from our sample---the latter are very dusty star-forming galaxies at the redshifts of interest with $L_{\rm IR}\ga 10^{12}$\,$L_\odot$. The very low frequency of infrared-luminous objects among UV-faint galaxies in general could have been anticipated from the implied low number density of $L_{\rm IR}\ga 10^{12}$\,$L_\odot$ objects from the IR luminosity function \citep{reddy08, magnelli13} and the high number density of UV-faint galaxies inferred from the UV luminosity function \citep{reddy08, reddy09, alavi16} at $z\sim2$. The inclusion of such dusty galaxies does not significantly affect our stacking analysis owing to the very small number of such objects. Excluding these dusty galaxies, our final sample consists of 3,545 galaxies with the redshift and absolute magnitude distributions shown in Figure~\ref{fig:zmag}. \subsection{Summary of Sample} To summarize, we have combined HDUV UVIS and 3D-HST catalogued photometry to constrain photometric redshifts for galaxies in the GOODS fields and isolate those star-forming galaxies with redshifts $z=1.5-2.5$ down to a limiting near-IR magnitude of $\simeq 27$\,AB (Table~\ref{tab:sample}). 
All galaxies are significantly detected (with $S/N>3$) down to an observed optical (rest-frame UV) magnitude of $27$\,AB. Our sample includes objects with spectroscopic redshifts in the aforementioned range wherever possible. This sample is then used as a basis for stacking deep {\em Herschel} data, as discussed in the next section. One of the most beneficial attributes of our sample is that it contains the largest number of UV-faint galaxies---extending up to $\approx 3$ magnitudes fainter than the characteristic absolute magnitude at $z\sim 2.3$ ($M^{\ast}_{1700}=-20.70$; \citealt{reddy09}) and $z\sim 1.9$ ($M^{\ast}_{1500}=-20.16$; \citealt{oesch10})---with robust redshifts at $1.5\le z\le 2.5$ assembled to date (Figure~\ref{fig:zmag}). The general faintness of galaxies in our sample is underscored by their very low detection rate ($S/N>3$) at $24$\,$\mu$m---85 of 3,545 galaxies, or $\approx 2.4\%$---compared to the $\approx 40\%$ detection rate for rest-frame UV-selected galaxies with ${\cal R}\le 25.5$ \citep{reddy10}. Consequently, unlike most previous efforts using ground-based UV-selected samples of limited depth, the present sample presents a unique opportunity to evaluate the IRX-$\beta$ relation for the analogs of the very faint galaxies that dominate the UV and bolometric luminosity densities at $z\gg3$ (e.g., \citealt{reddy08, smit12}), but for which direct constraints on their infrared luminosities are difficult to obtain. \begin{deluxetable*}{lccccccccc} \tabletypesize{\footnotesize} \tablewidth{0pc} \tablecaption{Stacked Fluxes and Infrared and UV Luminosities} \tablehead{ \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{$\langle f_{24}\rangle$\tablenotemark{c}} & \colhead{$\langle f_{100}\rangle$\tablenotemark{c}} & \colhead{$\langle f_{160}\rangle$\tablenotemark{c}} & \colhead{$\langle L_{7.7}\rangle$\tablenotemark{d}} & \colhead{$\langle L_{\rm IR}\rangle$\tablenotemark{d}} & \colhead{$\langle L_{\rm UV}\rangle$\tablenotemark{d}} \\ \colhead{Sample} & \colhead{$N$\tablenotemark{a}} & \colhead{$\langle z\rangle$\tablenotemark{b}} & \colhead{$\langle \beta\rangle$\tablenotemark{b}} & \colhead{[$\mu$Jy]} & \colhead{[$\mu$Jy]} & \colhead{[$\mu$Jy]} & \colhead{[$10^{10}$\,$L_\odot$]} & \colhead{[$10^{10}$\,$L_\odot$]} & \colhead{[$10^{10}$\,$L_\odot$]}} \startdata {\bf All} & 3545 & 1.94 & -1.71 & $1.54\pm0.14$ & $29\pm6$ & $62\pm17$ & $0.26\pm0.03$ & $2.1\pm0.4$ & 0.80 \\ \\ {\bf $M_{1600}$ bins:} \\ $M_{1600} \le -21$ & 81 & 2.12 & -1.74 & $4.83\pm0.96$ & $177\pm30$ & $377\pm93$ & $1.00\pm0.20$ & $17.1\pm2.4$ & 6.73 \\ $-21<M_{1600}\le -20$ & 575 & 2.07 & -1.68 & $4.37\pm0.28$ & $87\pm13$ & $171\pm43$ & $0.86\pm0.06$ & $7.6\pm1.0$ & 2.92 \\ $-20<M_{1600}\le -19$ & 1390 & 1.99 & -1.67 & $2.33\pm0.20$ & $38\pm8$ & $84\pm25$ & $0.41\pm0.03$ & $3.1\pm0.6$ & 1.26 \\ $M_{1600}>-19$ & 1499 & 1.92 & -1.72 & $1.00\pm0.16$ & $31\pm9$ & $54\pm24$ & $0.16\pm0.03$ & $2.0\pm0.5$ & 0.48 \\ \\ {\bf $\beta$ bins:} \\ $\beta \le -1.70$ & 2084 & 1.96 & -2.04 & $0.52\pm0.16$ & $5\pm7$ & $21\pm18$ & $0.09\pm0.03$ & $<1.4$ & 0.77 \\ $-1.70<\beta\le -1.40$ & 722 & 1.92 & -1.56 & $1.89\pm0.41$ & $43\pm13$ & $86\pm37$ & $0.31\pm0.07$ & $2.9\pm0.7$ & 0.95 \\ $-1.40<\beta\le -1.10$ & 345 & 1.94 & -1.26 & $3.92\pm0.55$ & $52\pm18$ & $103\pm56$ & $0.65\pm0.09$ & $3.7\pm1.1$ & 0.93 \\ $-1.10<\beta\le -0.80$ & 205 & 1.93 & -0.97 & $7.07\pm0.53$ & $80\pm25$ & $173\pm73$ & $1.15\pm0.09$ & $5.7\pm1.4$ & 0.81 \\ $\beta>-0.80$ & 189 & 1.90 & -0.31 & $5.09\pm0.62$ & $167\pm23$ & $340\pm63$ & 
$0.80\pm0.10$ & $11.0\pm1.2$ & 0.59 \\ \\ {\bf $M_{1600}$ \& $\beta$ bins:}\\ $M_{1600}\le -19$ $+$ $\beta \le -1.4$ & 1616 & 2.01 & -1.86 & $1.86\pm0.21$ & $25\pm9$ & $51\pm21$ & $0.33\pm0.04$ & $1.9\pm0.5$ & 1.58 \\ $M_{1600}\le -19$ $+$ $\beta > -1.4$ & 430 & 1.97 & -1.02 & $7.20\pm0.50$ & $117\pm19$ & $288\pm46$ & $1.25\pm0.09$ & $9.5\pm1.0$ & 1.47 \\ $M_{1600}> -19$ $+$ $\beta \le -1.4$ & 1190 & 1.92 & -1.97 & $0.36\pm0.21$ & $13\pm7$ & $26\pm23$ & $0.06\pm0.03$ & $<1.0$ & 0.48 \\ $M_{1600}> -19$ $+$ $\beta > -1.4$ & 309 & 1.90 & -0.79 & $3.11\pm0.38$ & $95\pm19$ & $176\pm75$ & $0.49\pm0.06$ & $6.3\pm1.2$ & 0.48 \\ \\ {\bf Stellar Mass \& $\beta$ bins:} \\ $\log[M_{\ast}/{\rm M}_\odot]\le 9.75$ & 2571 & 1.94 & -1.88 & $0.75\pm0.14$ & $10\pm7$ & $17\pm20$ & $0.13\pm0.03$ & $<1.2$ & 0.71 \\ $\,\,\,\,\,\,+\beta\le -1.4$ & 2385 & 1.94 & -1.95 & $0.63\pm0.19$ & $11\pm6$ & $28\pm15$ & $0.10\pm0.03$ & $<1.0$ & 0.72 \\ $\,\,\,\,\,\,+\beta>-1.4$ & 186 & 1.89 & -1.12 & $2.95\pm0.76$ & $19\pm25$ & $72\pm73$ & $0.47\pm0.12$ & $<4.0$ & 0.57 \\ $\log[M_{\ast}/{\rm M}_\odot]>9.75$ & 974 & 1.96 & -0.92 & $4.93\pm0.40$ & $111\pm 12$ & $229\pm36$ & $0.84\pm0.07$ & $8.3\pm0.7$ & 1.22 \\ $\,\,\,\,\,\,+\beta\le -1.4$ & 421 & 2.04 & -1.61 & $4.22\pm0.42$ & $59\pm14$ & $118\pm46$ & $0.80\pm0.07$ & $5.1\pm1.1$ & 2.26 \\ $\,\,\,\,\,\,+\beta>-1.4$ & 553 & 1.94 & -0.72 & $5.23\pm0.48$ & $132\pm14$ & $263\pm44$ & $0.87\pm0.09$ & $9.4\pm0.8$ & 0.90 \\ \\ {\bf Age bins:} \\ $\log[{\rm Age}/{\rm yr}]\le 8.00$ & 81 & 1.96 & -1.49 & $0.32\pm0.92$ & $62\pm39$ & $135\pm91$ & $<0.51$ & $<6.3$ & 0.55 \\ $\log[{\rm Age}/{\rm yr}]>8.00$ & 3464 & 1.94 & -1.71 & $1.43\pm0.22$ & $25\pm6$ & $52\pm19$ & $0.23\pm0.04$ & $1.8\pm0.4$ & 0.81 \enddata \tablenotetext{a}{Number of objects in the stack.} \tablenotetext{b}{Mean redshift and UV slope of objects in the stack.} \tablenotetext{c}{Stacked 24, 100, and 160\,$\mu$m fluxes.} \tablenotetext{d}{Stacked 8\,$\mu$m and total infrared luminosities, and the mean UV luminosity of objects in the stack.} \label{tab:stackedresults} \end{deluxetable*} \section{STACKING METHODOLOGY} \label{sec:stacking} To mitigate any systematics in the stacked fluxes due to bright objects proximate to the galaxies in our sample, we performed the stacking on residual images that were constructed as follows.\footnote{As discussed in \citet{reddy12a}, stacking on the science images themselves yields results similar to those obtained by stacking on the residual images.} We used the $24$\,$\mu$m catalogs and point spread functions (PSFs) included in the GOODS-{\em Herschel} data release to subtract all objects with $S/N>3$ in the $24$\,$\mu$m images, with the exception of the 85 objects in our sample that are directly detected at $24$\,$\mu$m. Objects with $S/N>3$ in the 24\,$\mu$m images were used as priors to fit and subtract objects with $S/N>3$ in the 100 and 160\,$\mu$m images. The result is a set of residual images at 24, 100, and 160\,$\mu$m for both GOODS fields. For each galaxy contributing to the stack, we extracted from the 24, 100, and 160\,$\mu$m residual images regions of $41\times 41$, $52\times 52$, and $52\times 52$ pixels, respectively, centered on the galaxy. The sub-images were then divided by the UV luminosity, $L_{\rm UV}=\nu L_\nu$ at $1600$\,\AA, of the galaxy, and these normalized sub-images for each band were then averaged together using 3\,$\sigma$ clipping for all the objects in the stack. We performed PSF photometry on the stacked images to measure the fluxes. 
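Schematically, this normalization-and-averaging step can be written as follows. The sketch below is our shorthand, not the survey pipeline (array conventions, integer pixel positions, a fixed cutout size, and a single clipping pass are assumptions made for brevity):
\begin{verbatim}
import numpy as np

def stack_normalized(residual_image, positions, L_UV, half=26):
    """Cut out a region around each galaxy, divide it by the galaxy's
    UV luminosity, and average the cutouts with 3-sigma clipping."""
    cutouts = []
    for (x, y), L in zip(positions, L_UV):
        sub = residual_image[y - half:y + half, x - half:x + half]
        cutouts.append(sub / L)                  # normalize by L_UV
    cube = np.array(cutouts)
    center = np.median(cube, axis=0)             # per-pixel center
    scatter = np.std(cube, axis=0)               # per-pixel dispersion
    clipped = np.where(np.abs(cube - center) < 3.0 * scatter,
                       cube, np.nan)             # 3-sigma clipping
    return np.nanmean(clipped, axis=0)           # stacked, L_UV-normalized
\end{verbatim}
PSF photometry on the returned image then yields the stacked flux per unit $L_{\rm UV}$, which is converted back to a weighted average flux as described next.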
Because the images are normalized by $L_{\rm UV}$, the stacked fluxes are directly proportional to the average IRX. The corresponding weighted average fluxes in each band ($\langle f_{24}\rangle$, $\langle f_{100}\rangle$, and $\langle f_{160}\rangle$), where the weights are $1/L_{\rm UV}$, were computed by multiplying the stacked fluxes by the weighted average UV luminosity of galaxies in the stack. The measurement uncertainties of these fluxes were calculated as the 1\,$\sigma$ dispersion in the fluxes obtained by fitting PSFs at random positions in the stacked images, avoiding the stacked signal itself. While stacking on residual images aids in minimizing the contribution of bright nearby objects to the stacked fluxes, this method will not account for objects that are blended with the galaxies of interest in the {\em Herschel}/PACS imaging. This presents a particular challenge in our case, where the galaxies are selected from {\em HST} photometry, as a single galaxy (e.g., as observed from the ground) may be resolved with {\em HST} into several subcomponents, each of which is of course unresolved in the {\em Herschel} imaging but each of which will contribute to the stacked flux. Galaxies that are resolved into multiple subcomponents will contribute more than once to the stack, resulting in an over-estimate of the stacked far-IR flux. This effect is compounded by that of separate galaxies contributing more than once to the stack if they happen to be blended at the {\em Herschel}/PACS resolution. This bias was quantified as follows. For a given band, we used the PSF to generate $N$ galaxies, where $N$ is the number of galaxies in the stack, each having a flux equal to the weighted average flux of the stacked signal. These simulated galaxies were added to the residual image at locations that were shifted from those of the real galaxies by offsets $\delta x$ and $\delta y$ in the x- and y-directions, respectively, where the offsets were chosen randomly. This ensures that the spatial distribution of the simulated galaxies is identical to that of the real galaxies. We then stacked at the locations of the simulated galaxies and compared the simulated and recovered stacked fluxes. This was done 100 times, each time with a different pair of (randomly chosen) $\delta x$ and $\delta y$. The average ratio of the simulated and recovered fluxes, or the bias factor, from these 100 simulations varied from $\approx 0.60-0.90$, depending on the number of galaxies contributing to the stack and the particular band. These simulations were performed for every band and for every stack in our analysis, and the stacked fluxes of the galaxies in our sample were multiplied by the bias factors calculated from these simulations. To further investigate this bias, we also stacked all galaxies in our sample that had no {\em HST}-detected object within $3\farcs 35$, corresponding to the half-width at half-maximum of the {\em Herschel}/PACS 100\,$\mu$m PSF. While this criterion severely restricts the size of the sample to only 465 objects, it allowed us to verify the bias factors derived from our simulations. 
Stacking these 465 objects yielded weighted average fluxes at $24$, $100$, and $160$\,$\mu$m that are within 1\,$\sigma$ of those values obtained for the entire sample once the bias factors are applied.\footnote{While the 160\,$\mu$m PSF has a half-width at half-maximum that is larger than the exclusion radius of $3\farcs 35$, the agreement in the average $f_{100}/f_{160}$ ratio, or far-infrared color, between the stack of the full sample and that of the 465 galaxies suggests that the bias factors also recover successfully the average 160\,$\mu$m stacked flux.} Infrared luminosities were calculated by fitting the \citet{elbaz11} ``main sequence'' dust template to the stacked $\langle f_{100}\rangle$ and $\langle f_{160}\rangle$ fluxes. We chose this particular template as it provided the best match to the observed infrared colors $f_{100}/f_{160}$ of the stacks, though we note that the adoption of other templates (e.g., \citealt{chary01, dale02, rieke09}) results in $L_{\rm IR}$ that vary by no more than $\approx 50\%$ from the ones calculated here (see \citealt{reddy12a} for a detailed comparison of $L_{\rm IR}$ computed using different dust templates). Upper limits in $L_{\rm IR}$ are quoted in cases where $L_{\rm IR}$ divided by the modeled uncertainty is $<3$. In a few instances, $\sim 2\,\sigma$ detections of both the $100$ and $160$\,$\mu$m stacked fluxes yield a modeled $L_{\rm IR}$ that is significant at the $3\sigma$ level. The mean UV slope of objects contributing to the stack was computed as a weighted average of the UV slopes of individual objects where, again, the weights are $1/L_{\rm UV}$. These same weights were also applied when calculating the weighted average redshift, absolute magnitude, stellar mass, and age of objects contributing to the stack. Table~\ref{tab:stackedresults} lists the average galaxy properties and fluxes for each stack performed in our study. \section{PREDICTED IRX-$\beta$ RELATIONS} \label{sec:predirx} We calculated the relationship between IRX and $\beta$ using the recently developed ``Binary Population and Spectral Synthesis'' (BPASS) models \citep{eldridge12, stanway16} with a stellar metallicity of $Z=0.14Z_\odot$ on the current abundance scale \citep{asplund09} and a two-power-law IMF with $\alpha = 2.35$ for $M_{\ast} > 0.5$\,$M_\odot$ and $\alpha= 1.30$ for $0.1\le M_{\ast}\le 0.5$\,$M_\odot$. We assumed a constant star formation with an age of $100$\,Myr and included nebular continuum emission. This particular BPASS model (what we refer to as our ``fiducial'' model) is found to best reproduce simultaneously the rest-frame far-UV continuum, stellar and nebular lines, and the rest-frame optical nebular emission line strengths measured for galaxies at $z\sim 2$ \citep{steidel16}. Two salient features of this model are the very blue intrinsic UV continuum slope $\beta_0 \simeq -2.62$ relative to that assumed in the \citet{meurer99} calibration of the IRX-$\beta$ relation ($\beta_0 = -2.23$), and the larger number of ionizing photons per unit star-formation-rate (i.e., $\approx 20\%$ larger than those of single star models with no stellar rotation; \citealt{stanway16}) that are potentially available for heating dust. For comparison, the BPASS model for the same metallicity with a constant star-formation history and an age of $300$\,Myr (the median for the sample considered here, and similar to the mean age of $z\sim 2$ UV-selected galaxies; \citealt{reddy12b}) has $\beta_0 = -2.52$.
Below, we also consider the more traditionally used \citet{bruzual03} (BC03) models. We calculated the IRX-$\beta$ relation assuming an energy balance between flux that is absorbed and that which is re-emitted in the infrared \citep{meurer99}. The absorption is determined by the extinction or attenuation curve, and we considered several choices including the SMC extinction curve of \citet{gordon03}, and the \citet{calzetti00} and \citet{reddy15} attenuation curves. The original forms of these extinction/attenuation curves were empirically calibrated at $\lambda \ga 1200$\,\AA. The \citet{calzetti00} and \citet{reddy15} curves were extended down to $\lambda = 950$\,\AA\, using a large sample of Lyman Break galaxy spectra at $z\sim 3$ and a newly-developed iterative method presented in \citet{reddy16a}. The SMC curve of \citet{gordon03} was extended in the same way, and we used these extended versions of the curves in this analysis. For reference, our new constraints on the shape of dust obscuration curves imply a lower attenuation of $\lambda\la 1250$\,\AA\, photons relative to that predicted from polynomial extrapolations below these wavelengths \citep{reddy16a}. In practice, because most of the dust heating arises from photons with $\lambda>1200$\,\AA, the implementation of the new shapes of extinction/attenuation curves does little to alter the predicted IRX-$\beta$ relation. For reference, the following equations give the relationship between $\ebmv$ and $\beta$ for the fiducial (BPASS) model with nebular continuum emission and the shapes of the attenuation/extinction curves derived above: \begin{eqnarray} {\bf \rm Calzetti+00:} & \beta = -2.616 + 4.684\times\ebmv; \nonumber \\ {\bf \rm SMC:} & \beta = -2.616 + 11.259\times\ebmv; \nonumber \\ {\bf \rm Reddy+15:} & \beta = -2.616 + 4.594\times\ebmv. \label{eq:betaebmv} \end{eqnarray} The intercepts in the above equations are equal to $-2.520$ for the $300$\,Myr BPASS model. For each value of $\ebmv$, we applied the aforementioned dust curves to the BPASS model and calculated the flux absorbed at $\lambda > 912$\,\AA. Based on the high covering fraction ($\ga 92\%$) of optically-thick $\hi$ inferred for $z\sim 3$ galaxies \citep{reddy16b}, we assumed a zero escape fraction of ionizing photons and that photoelectric absorption dominates the depletion of such photons, rather than dust attenuation \citep{reddy16b}. We then calculated the resultant Ly$\alpha$ flux assuming Case B recombination and the amount of Ly$\alpha$ flux absorbed given the value of the extinction/attenuation curve at $\lambda=1216$\,\AA, and added this to the absorbed flux at $\lambda>912$\,\AA. This total absorbed flux is equated to $L_{\rm IR}$, where we have assumed that all of the dust emission occurs between $8$ and $1000$\,$\mu$m. Finally, we divided the infrared luminosity by the UV luminosity of the reddened model at $1600$\,\AA\, to arrive at the value of IRX. The UV slope was computed directly from the reddened model using the full set of \citet{calzetti94} wavelength windows. Below, we also consider the value of $\beta$ computed using the subset of the \citet{calzetti94} windows at $\lambda < 1740$\,\AA, as well as a single window spanning the range $1300-1800$\,\AA. 
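The energy-balance procedure described above is straightforward to prototype. The following toy sketch is our simplified stand-in, not the actual calculation: a pure power law with $\beta_0=-2.62$ replaces the BPASS spectrum, the \citet{calzetti00} curve is used in its standard $R_{\rm V}=4.05$ form (held flat below $1200$\,\AA\, rather than using the \citealt{reddy16a} extension), and the Ly$\alpha$/ionizing-photon bookkeeping is omitted:
\begin{verbatim}
import numpy as np

RV = 4.05  # Calzetti et al. (2000) normalization

def k_calz(lam_um):
    # Calzetti+00 k(lambda) for 0.12 <= lambda/micron <= 0.63
    x = 1.0 / lam_um
    return 2.659 * (-2.156 + 1.509*x - 0.198*x**2 + 0.011*x**3) + RV

lam = np.linspace(0.0950, 0.63, 6000)      # microns (950 A - 6300 A)
f0 = (lam / 0.16) ** (-2.62)               # toy intrinsic f_lambda

def irx_and_beta(ebmv):
    k = k_calz(np.clip(lam, 0.12, 0.63))   # hold curve flat below 1200 A
    f = f0 * 10.0 ** (-0.4 * k * ebmv)     # reddened spectrum
    L_IR = np.sum(f0 - f) * (lam[1] - lam[0])  # absorbed energy -> IR
    L_UV = 0.16 * np.interp(0.16, lam, f)      # lambda*f_lambda at 1600 A
    sel = (lam >= 0.1268) & (lam <= 0.2580)    # Calzetti window span
    beta = np.polyfit(np.log10(lam[sel]), np.log10(f[sel]), 1)[0]
    return L_IR / L_UV, beta

for ebmv in (0.05, 0.1, 0.2, 0.3):
    irx, beta = irx_and_beta(ebmv)
    print(f"E(B-V)={ebmv:.2f} beta={beta:+.2f} logIRX={np.log10(irx):+.2f}")
\end{verbatim}
Even this crude version reproduces the qualitative behavior encoded in Equation~\ref{eq:betaebmv}: $\beta$ reddens roughly linearly with $\ebmv$, and IRX rises steeply with $\beta$.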
Formally, we find the following relations between IRX and $\beta$ given Equation~\ref{eq:betaebmv}, where $\beta$ is measured using the full set of \citet{calzetti94} wavelength windows: \begin{eqnarray} {\bf \rm Calzetti+00:} & {\rm IRX} = 1.67 \times [10^{0.4(2.13\beta + 5.57)} - 1]; \nonumber \\ {\bf \rm SMC:} & {\rm IRX} = 1.79 \times [10^{0.4(1.07\beta + 2.79)} - 1]; \nonumber \\ {\bf \rm Reddy+15:} & {\rm IRX} = 1.68 \times [10^{0.4(1.82\beta + 4.77)} - 1]. \end{eqnarray} These relations may be shifted redder by $\delta\beta = 0.096$ to reproduce the IRX-$\beta$ relations for the 300\,Myr BPASS model. For reference, Appendix~\ref{sec:sumrelations} summarizes the relations between $\beta$ and $\ebmv$ and between IRX and $\beta$ for different assumptions of the stellar population model, nebular continuum, Ly$\alpha$ heating, and the normalization of the dust curve. \begin{figure*} \epsscale{1.0} \plotone{f2.pdf} \caption{Predicted IRX-$\beta$ relations for different assumptions of the stellar population. {\em Left:} IRX-$\beta$ relation for the fiducial $0.14Z_\odot$ BPASS model with constant star formation and an age of 100\,Myr assuming the \citet{calzetti00} dust attenuation curve. The solid black line shows the \citet{meurer99} relation, shifted $0.24$\,dex upward to account for the flux difference between the far-infrared ($40-120$\,$\mu$m) passband used in that study and the total infrared ($8-1000$\,$\mu$m) passband assumed here. The dotted line indicates the $0.14Z_\odot$ BPASS model with an age of 300\,Myr. {\em Right:} Comparison of our fiducial BPASS model with the ``solar metallicity'' \citet{bruzual03} model which, given the currently measured solar abundances \citep{asplund09}, equates to $1.4Z_\odot$. The latter also assumes a constant star formation with an age of 100\,Myr. Also shown are the \citet{meurer99} curve and the update to this curve provided in \citet{overzier11}. A $0.28Z_\odot$ \citet{bruzual03} model results in an IRX-$\beta$ relation which is essentially identical to that of the BPASS model for the same age. The shifts in the IRX-$\beta$ relations between the models are attributable primarily to differences in the intrinsic UV slope, with even the commonly-used \citet{bruzual03} model having $\beta_0 = -2.44$ (without including nebular continuum), substantially bluer than the $\beta_0 =-2.23$ value adopted in \citet{meurer99}.} \label{fig:irxpred1} \end{figure*} Figures~\ref{fig:irxpred1} and \ref{fig:irxpred2} convey a sense of how the stellar population and nebular continuum, Ly$\alpha$ heating, UV slope measurements, and the total-to-selective extinction ($R_{\rm V}$) affect the IRX-$\beta$ relation. Models with a bluer intrinsic UV slope require a larger degree of dust obscuration to reproduce a given observed UV slope, thus causing the IRX-$\beta$ relation to shift towards bluer $\beta$. Relative to the \citet{meurer99} relation, the IRX-$\beta$ relations for the fiducial (BPASS) $100$ and $300$\,Myr models predict a factor of $\approx 2$ more dust obscuration at a given $\beta$ for $\beta \ga -1.7$, and an even larger factor for $\beta$ bluer than this limit (left panel of Figure~\ref{fig:irxpred1}). The commonly utilized BC03 model results in a factor of $\approx 30\%$ increase in the IRX at a given $\beta$ relative to the \citet{meurer99} curve, while the $0.28Z_\odot$ BC03 model results in an IRX-$\beta$ relation that is indistinguishable from that of the BPASS model for the same age (right panel of Figure~\ref{fig:irxpred1}).
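The closed-form IRX-$\beta$ relations given at the beginning of this section are trivial to tabulate; the following is a minimal convenience snippet (using the rounded coefficients quoted above):
\begin{verbatim}
# IRX = A * (10**(0.4*(a*beta + b)) - 1) for the fiducial 100 Myr
# BPASS model (beta_0 = -2.62), coefficients as quoted above.
relations = {
    "Calzetti+00": (1.67, 2.13, 5.57),
    "SMC":         (1.79, 1.07, 2.79),
    "Reddy+15":    (1.68, 1.82, 4.77),
}

def irx(beta, A, a, b):
    return A * (10.0 ** (0.4 * (a * beta + b)) - 1.0)

for beta in (-2.4, -2.0, -1.5, -1.0, -0.5):
    row = "  ".join(f"{name}: {irx(beta, *pars):7.2f}"
                    for name, pars in relations.items())
    print(f"beta = {beta:+.1f}  {row}")
\end{verbatim}
As expected, each relation approaches ${\rm IRX}=0$ as $\beta\rightarrow\beta_0$, with the grayest (Calzetti) curve rising most steeply toward redder $\beta$.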
These predictions underscore the importance of the adopted stellar population model when using the IRX-$\beta$ relation to discern between different dust attenuation/extinction curves (e.g., \citealt{meurer99, boquien12, schaerer13}). Note that the inclusion of nebular continuum emission shifts the IRX-$\beta$ relation by $\delta\beta \simeq 0.1$ to the right (i.e., making $\beta$ redder), so that the IRX at a given $\beta$ is $\approx 0.1$\,dex lower (leftmost panel of Figure~\ref{fig:irxpred2}). \begin{figure*} \epsscale{1.00} \plotone{f3.pdf} \caption{Predicted IRX-$\beta$ relations for different assumptions of the contribution of nebular continuum emission, effect of Ly$\alpha$ heating, systematics associated with the measurement of the UV slope, and the normalization of the dust curves. {\em Left:} IRX-$\beta$ relations for the fiducial $0.14Z_\odot$ BPASS model, with (solid line) and without (dashed line) including nebular continuum emission. Relations are shown for both the \citet{calzetti00} and SMC dust curves. Neglecting the contribution to the SED from nebular continuum emission will cause one to measure a slightly bluer UV slope ($\delta\beta \simeq 0.1$). {\em Middle Left:} IRX-$\beta$ relations for the fiducial model, assuming the \citet{calzetti00} attenuation and the SMC extinction curves (solid lines). The dashed lines indicate the result if we assume that none of the Ly$\alpha$ emission from the galaxy is absorbed by dust. {\em Middle Right:} Same as the middle left panel, where the dashed lines now indicate the IRX-$\beta$ relations if $\beta$ is measured using a window spanning the range $1300-1800$\,\AA. {\em Right:} Same as the middle left panel, where the dashed and dotted lines now show the effect of lowering the total-to-selective extinction by $\delta R_{\rm V} = 1.0$ and $1.5$, respectively.} \label{fig:irxpred2} \end{figure*} The specific treatment of dust heating from Ly$\alpha$ photons has a much less pronounced effect on the IRX-$\beta$ relation. If none of the Ly$\alpha$ flux is absorbed by dust---also equivalent to assuming that the escape fraction of ionizing photons is $100\%$---then the resulting IRX is $\approx 10\%$ lower at a given $\beta$ than that predicted by our fiducial model. Similarly, assuming that all of the Ly$\alpha$ is absorbed by dust results in an IRX-$\beta$ relation that is indistinguishable from that of the fiducial model. \begin{figure*} \epsscale{1.00} \plotone{f4.pdf} \caption{Predicted IRX-$\beta$ relation for our fiducial model, shifted to have an intrinsic UV slope of $\beta_0 = -2.23$, assuming the \citet{calzetti00} dust attenuation curve (red line). The blue curve shows the fiducial model when assuming $\beta_0 = -2.62$ and the SMC curve. There is a substantial overlap (within a factor of two in IRX) between the two IRX-$\beta$ relations over the range $-2.1\la \beta\la -1.3$ (light green shaded region), where $L^{\ast}$ LBGs at $z\sim 2-3$ tend to lie (vertical dashed line; \citealt{reddy12a}).} \label{fig:keyplot} \end{figure*} The wavelengths over which $\beta$ is computed will also affect the IRX-$\beta$ relation to varying degrees, depending on the specific wavelength ranges and the stellar population model. For the BPASS model, computing $\beta$ from the reddened model spectrum within a single window spanning the range $1300-1800$\,\AA\, results in an IRX-$\beta$ relation that is shifted by as much as $\delta\beta = 0.4$ to redder slopes.
This effect is due to the fact that the stellar continuum rises less steeply towards shorter wavelengths for $\lambda \la 1500$\,\AA. Consequently, the $\log({\rm IRX})$ is $\simeq 0.18$\,dex lower in this case relative to that computed based on the full set of \citet{calzetti94} windows. Similar offsets are observed when using the subset of the \citet{calzetti94} windows lying at $\lambda < 1800$\,\AA, while the offsets are not as large with the BC03 model. Most previous studies of the IRX-$\beta$ relation adopted a $\beta$ computed over relatively broad wavelength ranges coinciding with the \citet{calzetti94} windows. However, the systematic offsets in the IRX-$\beta$ relation arising from the narrower wavelength range used to compute UV slopes become relevant for very high-redshift (e.g., $z\ga 8$) galaxies where {\em Hubble} photometry is typically used to constrain the UV slope and where such observations only go as red as rest-frame $\la 1800$\,\AA. Finally, the rightmost panel of Figure~\ref{fig:irxpred2} shows the effect of lowering the total-to-selective extinction ($R_{\rm V}$), or normalization, of the attenuation/extinction curves by various amounts. Of the {\em physical} factors discussed above, the IRX-$\beta$ relation is most sensitive to the effects of changing the intrinsic UV slope and/or $R_{\rm V}$. To underscore the importance of the assumed stellar population when interpreting the IRX-$\beta$ relation, we show in Figure~\ref{fig:keyplot} the comparison of our fiducial BPASS model assuming the \citet{calzetti00} curve and an intrinsic $\beta_0 = -2.23$ (accomplished by simply shifting the model to asymptote to this intrinsic value), along with the same model assuming an SMC curve with $\beta_0 = -2.62$. As is evident from this figure, the two IRX-$\beta$ relations that assume different attenuation curves and intrinsic UV slopes have a significant overlap (within a factor of two in IRX) over the range $-2.1\la \beta \la -1.3$. Notably, this range includes the typical $\beta \simeq -1.5$ found for UV-selected galaxies at $z\sim 2$ (see \citealt{reddy12a}). In the next section, we examine these effects further in the context of the stacked constraints on IRX-$\beta$ provided by the combined HDUV and 3D-HST samples. \section{DISCUSSION} \label{sec:discussion} \subsection{IRX-$\beta$ for the Entire Sample} \label{sec:irxbeta} As a first step in constraining the IRX-$\beta$ relation at $z=1.5-2.5$, we stacked galaxies in bins of UV slope. The resulting IRX values for each of these bins, as well as for the whole sample, are shown in Figure~\ref{fig:irxbetarv}. The predicted IRX-$\beta$ relations for different assumptions of the stellar population (BPASS or BC03), intrinsic UV slope, $\beta_0$, and the difference in normalization of the dust curves, $\delta R_{\rm V}$, are also shown. To account for the former, we simply shifted the fiducial relation (computed assuming $\beta_0 = -2.62$) so that it asymptotes to a redder value of $\beta_0 = -2.23$, similar to that assumed in \citet{meurer99}. \begin{figure*} \epsscale{1.1} \plotone{f5.pdf} \caption{Predicted IRX-$\beta$ relations assuming the \citet{calzetti00}, \citet{reddy15}, and SMC dust curves---the \citet{overzier11} IRX-$\beta$ relation is indistinguishable from that obtained from the \citet{reddy15} attenuation curve---for the fiducial stellar population model (BPASS). The intrinsic slope of this model is $\beta_0 = -2.62$.
In the left-hand panels, the predicted IRX-$\beta$ relations have been shifted to show the effect of assuming a redder intrinsic UV slope of $\beta_0 = -2.23$, the same as that in \citet{meurer99}. The two bottom panels show the effect of lowering the normalization of the dust curves by $\delta R_{\rm V} = 1.5$, with the specific values of $R_{\rm V}$ indicated. The gray points in each panel denote our stacked measurements of IRX for galaxies in bins of $\beta$, while the black points show the result of the stack for all galaxies. The dashed line in the upper right-hand panel indicates the IRX-$\beta$ relation implied by the SMC extinction curve for an age of 300\,Myr.} \label{fig:irxbetarv} \end{figure*} \begin{figure*} \epsscale{1.0} \plotone{f6.pdf} \caption{Comparison of our IRX-$\beta$ measurements with several from the literature for (primarily) UV-selected galaxies at $1.5\la z\la 3.0$, including those from \citet{reddy12a}, \citet{bouwens16b}, \citet{heinis13}, and \citet{alvarez16}. Measurements are also shown for small samples of gravitationally-lensed and dust obscured galaxies from \citet{sklias14} and \citet{penner12}, respectively. The predicted IRX-$\beta$ relations (see Section~\ref{sec:predirx}) for our fiducial model and the SMC, \citet{reddy15}, and \citet{calzetti00} dust curves are indicated, along with the original \citet{meurer99} relation, which assumed $\beta_0 = -2.23$. } \label{fig:irxbetalit} \end{figure*} Our stacked results indicate a highly significant ($\ga 20\sigma$) correlation between IRX and $\beta$. However, none of the predicted relations calculated assuming an intrinsic UV slope of $\beta_0 = -2.23$, as in \citet{meurer99}, are able to reproduce our stacked estimates for the full range of $\beta$ considered here. For example, the upper left panel of Figure~\ref{fig:irxbetarv} shows that while both the \citet{calzetti00} and \citet{reddy15} attenuation curves predict IRX that are within $3\sigma$ of our stacked values for $\beta < -1.2$, they over-predict the IRX for galaxies with redder $\beta$. Lowering the normalization of the \citet{reddy15} attenuation curve by $\delta R_{\rm V} = 1.5$ results in a better match to the stacked determinations, but with some disagreement (at the $>3\sigma$ level) with the stack of the entire sample (lower left panel of Figure~\ref{fig:irxbetarv}). \citet{reddy15} estimated the systematic uncertainty in their determination of $R_{\rm V}$ to be $\delta R_{\rm V}\approx 0.4$, which suggests that their curve may not have a normalization as low as $R_{\rm V} = 1.0$ given their favored value of $R_{\rm V} = 2.51$. Regardless, without any modifications to the normalizations and/or shapes of the attenuation curves in the literature \citep{calzetti00, gordon03, reddy15}, the corresponding IRX-$\beta$ relations are unable to reproduce our stacked estimates if we assume an intrinsic UV slope of $\beta_0 = -2.23$. At face value, these results suggest that the attenuation curve describing our sample is steeper than the typically utilized \citet{calzetti00} relation, but grayer than the SMC extinction curve. However, this conclusion depends on the intrinsic UV slope of the stellar population, as we discuss next. Independent evidence favors the low-metallicity BPASS model in describing the underlying stellar populations of $z\sim 2$ galaxies \citep{steidel16}.
The very blue intrinsic UV slope characteristic of this model---as well as those of the BC03 models with comparable stellar metallicities (e.g., the $0.28Z_\odot$ BC03 model with the same high-mass power-law index of the IMF as the BPASS model has $\beta_0 = -2.65$)---is also favored in light of the non-negligible number of galaxies in our sample ($\approx 9\%$) that have $\beta<-2.23$ (the canonical value assumed in \citealt{meurer99}) at the $3\sigma$ level. Figure~\ref{fig:irxpred1} shows that the low-metallicity models with blue $\beta_0$ result in IRX-$\beta$ relations that are significantly shifted relative to those assuming redder $\beta_0$. With such models, we find that our stacked measurements are best reproduced by an SMC-like extinction curve (upper right-hand panel of Figure~\ref{fig:irxbetarv}), in the sense that all of the measurements lie within $3\sigma$ of the associated prediction. On the other hand, with such stellar population models, grayer attenuation curves (e.g., \citealt{calzetti00}) over-predict the IRX at a given $\beta$ by a factor of $\approx 2-7$. More generally, we find that the slope of the IRX-$\beta$ relation implied by our stacked measurements is better matched by that obtained with the SMC extinction curve, while grayer attenuation curves lead to a more rapid rise in IRX with increasing $\beta$. Our stacked measurements and predicted IRX-$\beta$ curves are compared with several results from the literature in Figure~\ref{fig:irxbetalit}. In the context of the IRX-$\beta$ predictions that adopt sub-solar metallicities, we find that most of the stacked measurements for UV-selected galaxies at $z\sim 1.5-3.0$ suggest a curve that is SMC-like, at least for $\beta \la -0.5$. Several of the samples, including those of \citet{heinis13}, \citet{alvarez16}, and \citet{sklias14}, indicate an IRX that is larger than the SMC prediction for $\beta \ga -0.5$. Such behavior is not surprising given that the dust obscuration has been shown to decouple from the UV slope for galaxies with large star-formation rates, as is the case for most star-forming galaxies with very red $\beta$ \citep{goldader02, chapman05, reddy06a, reddy10, penner12, casey14b, salmon16}. As discussed in a number of studies \citep{reddy06a, reddy10, penner12, casey14b, koprowski16}, dusty galaxies in general can exhibit a wide range in $\beta$ (from very blue to very red) depending on the particular spatial configuration of the dust and UV-emitting stars. Figure~\ref{fig:irxbetalit} shows that the degree to which such galaxies diverge from a given attenuation curve depends on $\beta_0$. Many of the dusty galaxies that would appear to have IRX larger than the \citet{meurer99} or \citet{calzetti00} predictions may in fact be adequately described by such curves if the stellar populations of these galaxies are characterized by very blue intrinsic UV spectral slopes. On the other hand, if these dusty galaxies have relatively enriched stellar populations, and redder intrinsic slopes, then their departure from the \citet{calzetti00} prediction would be enhanced. \begin{figure*} \epsscale{1.0} \plotone{f7.pdf} \caption{{\em Left:} IRX-$\beta$ relation as a function of stellar mass. The dark blue arrow indicates the $3\sigma$ upper limit for low-mass galaxies with $M_{\ast}\le 10^{9.75}$\,$M_\odot$, while the light blue arrows are for the low-mass subsamples with $\beta\le -1.4$ and $\beta > -1.4$. 
Similarly, the dark red and orange circles indicate the same for the high-mass ($M_{\ast}> 10^{9.75}$\,$M_\odot$) sample. Our measurements suggest that high-mass galaxies at $z=1.5-2.5$ are consistent with SMC extinction, while low-mass galaxies have upper limits in IRX that suggest a dust curve that is steeper than the SMC curve. {\em Right:} IRX as a function of stellar mass for galaxies in our sample (blue symbols) and that of \citet{bouwens16b} (red symbols), where the arrows indicate $3\sigma$ upper limits. As the \citet{bouwens16b} measurements are based on ALMA 2\,mm continuum data, the derived $L_{\rm IR}$ are sensitive to the assumed dust temperature. The dark and light red symbols indicate their results when assuming an evolving dust temperature and a constant temperature with $T=35$\,K, respectively. The thick gray line denotes the ``consensus relation'' at $z\sim 2$ computed by \citet{bouwens16b} and based on data from \citet{reddy10}, \citet{whitaker14}, and \citet{alvarez16}.} \label{fig:irxmass} \end{figure*} Undoubtedly, large variations in IRX can also be expected with different geometries of dust and stars. Regardless, if sub-solar metallicity models are widely representative of the stellar populations of typical star-forming galaxies at $z\ga 1.5$, then our stacked measurements, along with those in the literature, tend to disfavor gray attenuation curves for these galaxies. The large sample studied here, as well as those of \citet{bouwens16b} and \citet{alvarez16}, suggest an SMC-like curve. At first glance, this conclusion may appear to be at odds with the large number of previous investigations that have found that the \citet{meurer99} and \citet{calzetti00} attenuation curves generally apply to moderately-reddened star-forming galaxies at $z\ga 1.5$ (e.g., \citealt{nandra02, seibert02, reddy04, reddy06a, daddi07a, pannella09, reddy10, magdis10, overzier11, reddy12a, forrest16, debarros16}; c.f., \citealt{heinis13}). In the framework of our present analysis, the reconciliation between these results is simple. Namely, our analysis does not call into question previous {\em measurements} of IRX-$\beta$, but calls for a different {\em interpretation} of these measurements. In the previous interpretation, most of the stacked measurements from the literature were found to generally agree with the \citet{meurer99} relation {\em if we assume a relatively red intrinsic slope of $\beta_0=-2.23$}. In the present interpretation, we argue that sub-solar metallicity models necessitate a steeper attenuation curve in order to reproduce the measurements of IRX-$\beta$ (e.g., see also \citealt{cullen17}). Our conclusion is aided by the larger dynamic range in $\beta$ probed by the HDUV and 3D-HST samples, which allows us to better discriminate between different curves given that their corresponding IRX-$\beta$ relations separate significantly at redder $\beta$ (Figures~\ref{fig:keyplot} and \ref{fig:irxbetalit}). 
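For concreteness, the construction of such predicted IRX-$\beta$ relations can be sketched in a few lines of Python. This is a minimal illustration only: it assumes energy balance, a pure power-law intrinsic spectrum of slope $\beta_0$, a unit bolometric correction (\texttt{bc}), and a two-point estimate of $\beta$ evaluated at the extremes of the \citet{calzetti94} windows, whereas the predictions used in this paper are derived from full BPASS/BC03 model SEDs:
\begin{verbatim}
import numpy as np

def k_calzetti(lam_um):
    # Calzetti et al. (2000) attenuation curve, valid 0.12-0.63 um
    x = 1.0 / lam_um
    return 2.659 * (-2.156 + 1.509*x - 0.198*x**2
                    + 0.011*x**3) + 4.05

def irx_beta(ebmv, beta0=-2.62, k=k_calzetti, bc=1.0,
             lam1=0.1268, lam2=0.2580):
    # Reddened UV slope of an intrinsic power law f_lam ~ lam**beta0,
    # estimated from two wavelengths spanning the Calzetti windows:
    beta = beta0 - 0.4 * ebmv * (k(lam2) - k(lam1)) \
           / np.log10(lam2 / lam1)
    # Energy balance: UV light absorbed by dust emerges in the IR
    irx = bc * (10 ** (0.4 * k(0.16) * ebmv) - 1.0)
    return beta, irx

for ebmv in [0.0, 0.1, 0.2, 0.3]:
    b, irx = irx_beta(ebmv)
    print(f"E(B-V)={ebmv:.1f}  beta={b:+.2f}  IRX={irx:5.2f}")
\end{verbatim}
Swapping \texttt{k\_calzetti} for an SMC $k(\lambda)$ and adopting a different $\beta_0$ reproduces the qualitative shifts between the curves discussed above.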
\subsection{IRX Versus Stellar Mass} \label{sec:irxmass} The well-studied correlations between star-formation rate and stellar mass (e.g., \citealt{noeske07, reddy06a, daddi07a, pannella09, wuyts11, reddy12b, whitaker14, schreiber15, shivaei15}), and between star-formation rate and dust attenuation (e.g., \citealt{wang96, adelberger00, reddy06b, buat07, buat09, burgarella09, reddy10}), have motivated the use of stellar mass as a proxy for attenuation \citep{pannella09, reddy10, whitaker14, pannella15, alvarez16, bouwens16b}, as the stellar mass can be easily inferred from fitting stellar population models to broadband photometry. The connection between reddening and stellar mass can also be deduced from the mass-metallicity relation \citep{tremonti04, kewley08, andrews13, erb06a, maiolino08, henry13, maseda14, steidel14, sanders15}. Motivated by these results, we stacked galaxies in two bins of stellar mass divided at $M_{\ast} = 10^{9.75}$\,$M_\odot$ (and further subdivided into bins of $\beta$; Table~\ref{tab:stackedresults}) to investigate the dependence of the IRX-$\beta$ relation on stellar mass; a schematic version of this binned stacking is sketched at the end of this subsection.\footnote{The stellar masses obtained with \citet{conroy10} models (Section~\ref{sec:sample}) are on average within $0.1$\,dex of those obtained assuming the fiducial (BPASS) model with the same \citet{chabrier03} IMF.} The high-mass subsample ($M_\ast > 10^{9.75}$\,$M_\odot$) exhibits a redder UV slope ($\langle\beta\rangle = -0.92$) and larger IRX ($\langle{\rm IRX}\rangle=6.8\pm 0.6$) than the low-mass subsample with $\langle\beta\rangle = -1.88$ and $\langle{\rm IRX}\rangle<1.7$ ($3\sigma$ upper limit). Moreover, the high-mass subsample exhibits an IRX-$\beta$ relation consistent with that predicted assuming our fiducial stellar population model and the SMC extinction curve (Figure~\ref{fig:irxmass}). Separately, the low-mass subsample as a whole, as well as the subset of the low-mass galaxies with $\beta \le -1.4$, have $3\sigma$ upper limits on IRX that require a dust curve that is at least as steep as the SMC. The constraints on the IRX-$M_{\ast}$ relation from our sample are shown relative to previous determinations in the right panel of Figure~\ref{fig:irxmass}. The ``$z\sim 2$ Consensus Relation'' presented in \citet{bouwens16b} was based on the IRX-$M_{\ast}$ trends published in \citet{reddy10}, \citet{whitaker14}, and \citet{alvarez16}. Formally, our stacked detection for the high-mass ($M_{\ast}>10^{9.75}$\,$M_\odot$) subsample lies $\approx 4\sigma$ below the consensus relation, but is in excellent agreement with the mean IRX found for galaxies of similar masses ($\simeq 2\times 10^{10}$\,$M_\odot$) in \citet{reddy10}. The upper limit in IRX for the low-mass ($M_{\ast}\le 10^{9.75}$\,$M_\odot$) sample is consistent with the predictions from the consensus relation. Based on these comparisons, we conclude that the IRX-$M_{\ast}$ relation from the present work is in general agreement with previous determinations, and lends support to previous suggestions that stellar mass may be used as a rough proxy for dust attenuation in high-redshift star-forming galaxies (e.g., \citealt{pannella09, reddy10, bouwens16b}). Moreover, these comparisons underscore the general agreement between our IRX measurements (e.g., as a function of $\beta$ and $M_{\ast}$) and those in the literature, in spite of our different interpretation of these results in the context of the dust obscuration curve applicable to high-redshift galaxies (Section~\ref{sec:irxbeta}). 
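Schematically, the binned stacking referred to above can be written as follows. This is a simplified sketch in which per-galaxy quantities are averaged directly, whereas the measurements in this paper come from stacking the PACS images themselves, and the array names are hypothetical:
\begin{verbatim}
import numpy as np

def binned_mean(values, key, edges):
    # Mean of `values` in bins of `key`; a schematic stand-in
    # for the image-domain stacking performed in this work.
    idx = np.digitize(key, edges)
    return np.array([np.nanmean(values[idx == i])
                     for i in range(1, len(edges))])

# Hypothetical per-galaxy arrays: logm (log stellar mass),
# beta, irx.  Split at log(M*/Msun) = 9.75 as in the text,
# then bin each subsample by UV slope:
# hi = logm > 9.75
# irx_hi = binned_mean(irx[hi], beta[hi],
#                      np.arange(-2.6, 0.7, 0.4))
# irx_lo = binned_mean(irx[~hi], beta[~hi],
#                      np.arange(-2.6, 0.7, 0.4))
\end{verbatim}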
\subsection{IRX Versus UV Luminosity} \label{sec:irxuvlum} As alluded to in Section~\ref{sec:intro}, quantifying the dust attenuation of UV-faint (sub-$L^{\ast}$) galaxies has been a longstanding focus of the high-redshift community. While the steep faint-end slopes of UV luminosity functions at $z\ga 2$ imply that such galaxies dominate the UV luminosity density at these redshifts, knowledge of their dust obscuration is required to assess their contribution to the cosmic star-formation-rate density (e.g., \citealt{steidel99, adelberger00, bouwens07,reddy08}). Several studies have argued that UV-faint galaxies are on average less dusty than their brighter counterparts \citep{reddy08, bouwens09, bouwens12, kurczynski14}. This inference is based on the fact that the observed UV luminosity is expected to monotonically correlate with star-formation rate for galaxies fainter than $L^{\ast}$ (e.g., see Figure~13 of \citealt{reddy10} and Figure~10 of \citealt{bouwens09}) and that the dustiness is a strong function of star-formation rate \citep{wang96, adelberger00, reddy06b, buat07, buat09, burgarella09, reddy10}. While several investigations have shown evidence for a correlation between IRX and UV luminosity \citep{bouwens09, reddy10, reddy12a}, others point to a roughly constant IRX as a function of UV luminosity \citep{buat09, heinis13}. As discussed in these studies, the different findings may be a result of selection biases, in the sense that UV-selected samples will tend to miss the dustiest galaxies, which also have faint observed UV luminosities. Hence, for purely UV-selected samples, IRX would be expected to decrease towards fainter $L_{\rm UV}$. Alternatively, the rarity of highly dust-obscured galaxies compared to intrinsically faint galaxies (e.g., as inferred from the shapes of the UV and IR luminosity functions; \citealt{caputi07, reddy08, reddy09, magnelli13}) implies that, in a number-weighted sense, the mean bolometric luminosity should decrease towards fainter $L_{\rm UV}$. How this translates to the variation of IRX with $L_{\rm UV}$ will depend on how quickly dust can build up in dynamically-relaxed faint galaxies. From a physical standpoint, dust enrichment on timescales much shorter than the dynamical timescale would suggest a relatively constant IRX as a function of $L_{\rm UV}$. The HDUV/3D-HST sample presents a unique opportunity to evaluate the trend between IRX and $L_{\rm UV}$, as the selection is based on rest-optical criteria. Consequently, our sample is less sensitive to the bias against dusty galaxies that is expected to be significant in UV-selected samples (e.g., \citealt{chapman00, barger00, buat05, burgarella05, reddy05, reddy08, casey14a}). Indeed, Figure~\ref{fig:zmag} shows that our sample includes a large number of UV-faint galaxies that are also quite dusty based on their red $\beta \ga -1.4$---these galaxies, while dusty, still have bolometric luminosities that are sufficiently low to be undetected in the PACS imaging. \begin{figure*} \epsscale{1.00} \plotone{f8.pdf} \caption{Dust attenuation as a function of UV luminosity at $z\sim 1.5-3.7$. Our stacked measurements in bins of $L_{\rm UV}$ are indicated by the large gray circles, and the large blue and red circles indicate the values obtained by further subdividing the stacks in bins of $\beta$ (see Table~\ref{tab:stackedresults}). The right panel also shows other results for UV-selected galaxies at $z\sim 1.5$ \citep{heinis13}, $z\sim 3$ \citep{alvarez16}, and $z\sim 3.7$ \citep{lee12}. 
The samples of \citet{heinis13} and \citet{alvarez16} include galaxies that are primarily redder ($\beta \ga -1.4$) than those in our sample. When restricting our sample to a similar range of $\beta$, we find an IRX-$L_{\rm UV}$ relation that is in excellent agreement with that of \citet{heinis13} and \citet{alvarez16}. Unsurprisingly, the IRX-$L_{\rm UV}$ relation for blue galaxies with $\beta \le -1.4$ is offset by $\approx 1$\,dex to lower IRX.} \label{fig:irxuv} \end{figure*} Figure~\ref{fig:irxuv} shows the relationship between dust attenuation and UV luminosity for our sample (gray points) and those of \citet{heinis13} at $z\sim 1.5$ and \citet{alvarez16} at $z\sim 3$. The latter two indicate an IRX that is roughly constant with $L_{\rm UV}$, but one that is offset towards higher IRX by a factor of $2-3$ relative to our sample. This offset is easily explained by the fact that the IRX-$L_{\rm UV}$ relations for the $z\sim 1.5$ and $z\sim 3.0$ samples were determined from galaxies that are on average significantly redder than in our sample. In particular, most of the constraints on IRX-$L_{\rm UV}$ from these studies come from galaxies with $\beta \ga -1.5$. When limiting our stacks to galaxies with $\beta > -1.4$ (red points in Figure~\ref{fig:irxuv}), we find excellent agreement with the IRX-$L_{\rm UV}$ relations found by \citet{heinis13} and \citet{alvarez16}. On the other hand, stacking those galaxies in our sample with $\beta\le -1.4$ results in an IRX-$L_{\rm UV}$ relation that is, not surprisingly, offset towards lower IRX than for the sample as a whole. Thus, while the IRX-$L_{\rm UV}$ relation appears to be roughly constant for all of the samples considered here, Figure~\ref{fig:irxuv} implies that the $\beta$ distribution as a function of $L_{\rm UV}$ is at least as important as the presence of dusty star-forming galaxies in shaping the observed IRX-$L_{\rm UV}$ relation. Furthermore, the trend of a bluer average $\beta$ with decreasing $L_{\rm UV}$ (e.g., Figure~\ref{fig:zmag}; see also \citealt{reddy08, bouwens09}) suggests that the mean reddening should be correspondingly lower for UV-faint galaxies than for UV-bright ones once the effect of the less numerous dusty galaxies with red $\beta$ is accounted for. The blue ($\beta \le -1.4$) star-forming galaxies in our sample have IRX$\la 1$, such that the infrared and UV luminosities contribute comparably to the bolometric luminosities of these galaxies. The expectation of rapid dust enrichment from core-collapse supernovae \citep{todini01} implies that the dust obscuration is unlikely to be significantly lower than this value for dynamically-relaxed systems. Consequently, the observation that the mean UV slope becomes progressively bluer for fainter galaxies at high redshift (e.g., \citealt{bouwens09, wilkins11, finkelstein12, alavi14}) may simply be a result of systematic changes in metallicity and/or star-formation history where the intrinsic UV slope also becomes bluer but IRX remains relatively constant (e.g., Figure~\ref{fig:irxpred1}; \citealt{wilkins11, wilkins13, alavi14}). {\em Thus, the common observation that UV-faint galaxies are bluer than their brighter counterparts may not directly translate into a lower dust obscuration for the former.} Moreover, $\beta$ is relatively insensitive to IRX for $\beta-\beta_0\la 0.2$ (e.g., Figure~\ref{fig:irxpred1}). 
Our results thus suggest that caution is warranted when using the IRX-$\beta$ relation to infer the dust reddening of blue galaxies at high redshift, as such estimates may depend strongly on the intrinsic UV slope of the stellar population and, even otherwise, remain quite uncertain if the difference between the observed and intrinsic UV slopes is small. \subsection{Young/Low-Mass Galaxies} \label{sec:young} ALMA has opened up new avenues for investigating the ISM and dust content of very high-redshift galaxies, and a few recent efforts have focused in particular on the [\ion{C}{2}]$158$\,$\mu$m line in galaxies at $z\ga 5$ \citep{schaerer15, maiolino15, watson15, capak15, willott15, knudsen16, pentericci16} and on the dust continuum at mm wavelengths. \citet{capak15} report ALMA constraints on the IRX of a small sample of 10 $z\sim 5.5$ LBGs and find that they generally fall below the SMC extinction curve. The disparity between the SMC curve and their data points is increased if one adopts a bluer intrinsic slope than that assumed in \citet{meurer99}, a reasonable expectation for these high-redshift and presumably lower metallicity galaxies. \begin{figure*} \epsscale{1.10} \plotone{f9.pdf} \caption{IRX-$\beta$ diagram for young and/or low-mass galaxies at $z\ga 2$. The upper limits in IRX for low-mass ($M_{\ast}\le 10^{9.75}$\,$M_\odot$) galaxies in our sample are indicated by the dark and light blue arrows, the same as those shown in the left panel of Figure~\ref{fig:irxmass}. Upper limits in IRX derived from ALMA measurements for low-mass ($M_{\ast}\le 10^{9.75}$\,$M_\odot$) galaxies at $z\sim 2-3$ are indicated by the dark gray arrows \citep{bouwens16b}. The upper limit in IRX derived from a {\em Herschel}/PACS stack for UV-selected $z\sim 2$ galaxies with ages $\la 100$\,Myr is indicated by the purple arrow, while the purple filled circle indicates the stacked inference from {\em Spitzer}/MIPS 24\,$\mu$m data \citep{reddy12a}. Similarly, the light purple arrows and open square denote the upper limits for individual $<100$\,Myr galaxies and the stacked result, respectively, derived from 24\,$\mu$m data \citep{reddy10}. Also shown are results from two lensed galaxies (large green diamonds; cB58 and the Cosmic Eye; \citealt{baker01, siana09}), and upper limits for low-mass galaxies at $z\sim 4-10$ (light gray arrows; \citealt{bouwens16b}). For reference, the black dashed line shows the SMC curve as parameterized by \citet{pettini98}, while the red line and shaded region denote the average and dispersion in the IRX-$\beta$ relation for the average extinction curve derived for quasars \citep{zafar15}. For clarity, the results from \citet{capak15} for $z\sim 5.5$ LBGs are not shown, but we note that most of their points lie at least $1\sigma$ below the \citet{pettini98} curve shown in the figure. The detections and upper limits for young and low-mass galaxies at a variety of redshifts suggest a dust curve that is on average steeper than the SMC curve, particularly if we assume a blue intrinsic UV slope of $\beta_0 = -2.62$ (thick black solid line). } \label{fig:irxyoung} \end{figure*} More generally, earlier results suggesting that ``young'' LBGs (ages $\la 100$\,Myr) and/or those with lower stellar masses at $z\ga 2$ are consistent with an SMC curve \citep{baker01, reddy06a, siana08, siana09, reddy10, reddy12a, bouwens16b} would also require a steeper-than-SMC curve if their intrinsic slopes are substantially bluer than the value normally assumed in interpreting the IRX-$\beta$ relation. 
Unfortunately, only a small number of galaxies in our sample have SED-determined ages of $<100$\,Myr (81 galaxies), and stacking them results in an unconstraining upper limit on IRX (Table~\ref{tab:stackedresults}). Note that an ambiguity arises because the ages are derived from SED-fitting, which assumes some form of the attenuation curve. Following \citet{reddy10}, the number of galaxies considered ``young'' would be lower under the assumption of SMC extinction rather than \citet{calzetti00} attenuation, as the former results in lower $\ebmv$ for a given UV slope, translating into older ages. Self-consistently modeling the SEDs based on the location of galaxies in the IRX-$\beta$ plane results in fewer $<100$\,Myr galaxies, but of course their location in the IRX-$\beta$ plane is unaffected \citep{reddy10}, as is the conclusion that such young galaxies would require a dust curve steeper than that of the SMC if they have blue intrinsic UV slopes. In addition, as noted in Section~\ref{sec:irxmass}, galaxies in our low-mass ($M_\ast \le 10^{9.75}$\,$M_\odot$) subsample appear to also require a dust curve steeper than that of the SMC. Figure~\ref{fig:irxyoung} summarizes a few recent measurements of IRX-$\beta$ for young and low-mass galaxies at $z\sim 2$ \citep{baker01, siana09, reddy10, reddy12a}, low-mass galaxies and LBGs in general at $z\sim 4-10$ \citep{bouwens16b}, and our own measurements. The compilation from \citet{reddy10} and \citet{reddy12a} includes 24\,$\mu$m constraints on the IRX of young galaxies. Shifting their IRX by $\approx 0.35$\,dex to higher values to account for the deficit of PAH emission in galaxies with ages $\la 400$\,Myr \citep{shivaei16} results in upper limits or a stacked measurement of IRX that are broadly consistent with either the SMC curve or one that is steeper. Considering the {\em Herschel} measurements here and in \citet{reddy12a}, and ALMA measurements at $z\sim 2-10$ \citep{capak15, bouwens16b}, we find that young/low-mass galaxies at $z\ga 2$ follow a dust curve steeper than that of the SMC, particularly in the context of a blue intrinsic slope, $\beta_0 = -2.62$. Note that unlike cB58, which has a stellar metallicity of $\simeq 0.25$\,$Z_\odot$ \citep{pettini00}, the Cosmic Eye has a metallicity of $\sim 0.5$\,$Z_\odot$, suggesting a relatively red intrinsic UV slope. In this case, the IRX of the Cosmic Eye may be adequately described by the SMC curve. There is additional evidence of suppressed IRX at lower masses from rest-optical emission line studies of $z\sim 2$ galaxies. In particular, \citet{reddy15} found that a significant fraction of $z\sim 2$ galaxies with $M_{\ast}\la 10^{9.75}$\,$M_\odot$ have very red $\beta$, or $\ebmv$, relative to the reddening deduced from the Balmer lines (e.g., see their Figure~17), implying that such galaxies would have lower IRX for a given $\beta$ than that predicted by common attenuation/extinction curves. Evidence for curves steeper than the SMC average has been observed along certain sightlines within the SMC \citep{gordon03}, in the Milky Way and some Local Group galaxies \citep{gordon03, sofia05, gordon09, amanullah14}, and in quasars (e.g., \citealt{hall02, jiang13, zafar15}). Our results, combined with those in the literature, suggest that such steep curves may be typical of low-mass and young galaxies at high redshift. 
While the attenuation curve will undoubtedly vary from galaxy to galaxy depending on the star-formation history, age, metallicity, dust composition, and geometrical configuration of stars and dust, the fact that young/low-mass galaxies lie {\em systematically} below the IRX-$\beta$ relation predicted with an SMC curve suggests that a steep curve may apply uniformly to such galaxies. An unresolved issue is the physical reason why young and low-mass galaxies may follow a steeper attenuation curve than their older and more massive counterparts. \citet{reddy12a} suggest a possible scenario in which galaxies transition from steep to gray attenuation curves as they age, due to star formation occurring over extended regions and/or the cumulative effects of galactic outflows that are able to clear the gas and dust along certain sightlines. On the other hand, if young and low-mass galaxies have higher ionizing escape fractions as a result of lower gas/dust covering fractions (e.g., \citealt{reddy16b}), then one might expect their attenuation curve to exhibit a shallower dependence on wavelength than the SMC extinction curve. In any case, curves steeper than the SMC may arise from a paucity of large dust grains and/or an over-abundance of silicate grains \citep{zafar15}. In particular, large dust grains may be efficiently destroyed by SN shock waves \citep{draine79, mckee89, jones04}, which would have the effect of steepening the dust curve (i.e., such that proportionally more light is absorbed in the UV relative to the optical). If the destruction of large grains is significant in young/low-mass galaxies, then it may explain both their red $\beta$ and their low IRX. Alternatively, the lower gas-phase [Si/O] measured from the composite rest-frame UV spectrum of $z\sim 2$ galaxies relative to the solar value indicates significant depletion of Si onto dust grains, while carbon is under-abundant relative to oxygen \citep{steidel16}. This result may suggest an enhancement of silicate over carbonaceous grains that may result in a steeper attenuation curve. \begin{figure*} \epsscale{1.0} \plotone{f10.pdf} \caption{ Comparison of the derived reddening, $\ebmv$, and SFRs for the cases of (a) a $1.4Z_\odot$ BC03 model with the \citet{calzetti00} attenuation curve (``1.4,C'' subscripts) and (b) a $0.28Z_\odot$ BC03 model with an SMC extinction curve (``0.28,S'' subscripts). The typical random uncertainties are $\delta \ebmv \simeq 0.04$ and $\delta \log[{\rm SFR}/M_\odot\,{\rm yr}^{-1}] \simeq 0.17$\,dex. The sparse sampling in the left-hand panel is due to the fixed increments of $\delta A_{\rm V}=0.1$ used when fitting the SEDs of galaxies in our sample. The reddening values for the two cases become increasingly discrepant for objects with redder colors. A lower $\ebmv$ is required to reproduce an observed UV slope given the steep far-UV rise of the SMC extinction curve compared to that of the \citet{calzetti00} curve. These differences in reddening, combined with the particular values of the SMC and \citet{calzetti00} attenuation curves at $\lambda = 1600$\,\AA, result in SFRs that are generally lower in case (b) than in case (a).} \label{fig:sedcompare} \end{figure*} \subsection{Implications for Stellar Populations and Ionizing Production Efficiencies} Inferring the intrinsic stellar populations of galaxies based on their observed photometry requires one to adopt some form of the dust attenuation curve. 
It is therefore natural to ask whether these inferences change in light of our findings of a steeper (SMC-like) attenuation curve for $z\sim 2$ galaxies with intrinsically blue UV spectral slopes. To address this issue, we re-modeled (using FAST) the SEDs of galaxies in our sample assuming two cases: (a) a $1.4Z_\odot$ BC03 model (canonically referred to as the ``solar'' metallicity model, based on older solar abundance measurements) with the \citet{calzetti00} attenuation curve; and (b) a $0.28Z_\odot$ BC03 model with an SMC extinction curve. The ages derived in case (b) are on average $30\%$ older than those derived in case (a), primarily because less reddening is required to reproduce a given UV slope with the SMC extinction curve (e.g., see discussion in \citealt{reddy12b}). Similarly, the stellar masses derived in case (b) are on average $30\%$ lower than those derived in case (a). Perhaps most relevant in the context of our analysis are the changes in inferred reddening and SFR. As shown in Figure~\ref{fig:sedcompare}, the reddening deduced from the SMC extinction curve is lower than that obtained with the \citet{calzetti00} attenuation curve, owing to the steeper far-UV rise of the former relative to the latter. The largest differences in $\ebmv$ and SFR occur for the reddest objects and those with larger SFRs. We find the following relations between the reddening and SFRs derived for the two cases discussed above: \begin{eqnarray} \ebmv_{0.28Z_\odot,{\rm SMC}} = 0.65\ebmv_{1.4Z_\odot,{\rm Calz}} \end{eqnarray} and \begin{eqnarray} \log[{\rm SFR}_{0.28Z_\odot,{\rm SMC}}/M_\odot\,{\rm yr}^{-1}] = \nonumber \\ 0.79 \log[{\rm SFR}_{1.4Z_\odot,{\rm Calz}}/M_\odot\,{\rm yr}^{-1}] -0.05. \end{eqnarray} The lower SFRs derived in the SMC case result in a factor of $\approx 2$ lower SFR densities at $z\ga 2$, as discussed in some depth in \citet{bouwens16b}. The general applicability of an SMC-like dust curve to high-redshift galaxies is also of particular interest when considering its impact on the ionizing efficiencies of such galaxies, a key input to reionization models \citep{robertson13, bouwens15b, bouwens16a}. Specifically, the ionizing photon production efficiency, $\xi_{\rm ion}$, is simply the ratio of the rate of H-ionizing photons produced to the non-ionizing UV continuum luminosity. This quantity is directly related to another commonly-used ratio, namely that of the Lyman continuum flux density (e.g., at $800$ or $900$\,\AA) to the non-ionizing UV flux density (e.g., at $1600$\,\AA), $f_{\rm 900}/f_{\rm 1600}$. The ionizing photon production efficiency is typically computed by combining Balmer emission lines, such as H$\alpha$, with UV continuum measurements (e.g., \citealt{bouwens15b}), but both the lines and the continuum must be corrected for dust attenuation. In the context of our present analysis, the dust corrections for the UV continuum are lower by factors of $1.12$, $1.82$, and $2.95$ for galaxies with Calzetti-inferred SFRs of 1, 10, and 100\,$M_\odot$\,yr$^{-1}$, respectively. Thus, for given Balmer line luminosities that are corrected for dust using the Galactic extinction curve \citep{calzetti94, steidel14, reddy15}, $\xi_{\rm ion}$ would be correspondingly larger by the same factors by which the dust-corrected UV luminosities are lower. 
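As a consistency check, the UV dust-correction factors quoted above follow directly from the SFR relation given earlier in this section; a minimal sketch (function and variable names are ours):
\begin{verbatim}
import numpy as np

# Relations quoted above between the two SED-fitting cases:
ebmv_smc = lambda ebmv_calz: 0.65 * ebmv_calz
logsfr_smc = lambda logsfr_calz: 0.79 * logsfr_calz - 0.05

# Ratio of Calzetti- to SMC-based SFRs, i.e., the factors by
# which the dust-corrected UV luminosities are lower in the
# SMC case:
for sfr in [1.0, 10.0, 100.0]:
    ratio = sfr / 10 ** logsfr_smc(np.log10(sfr))
    print(f"SFR_Calz = {sfr:6.1f} Msun/yr -> ratio = {ratio:.2f}")
# Prints 1.12, 1.82, and 2.95, matching the factors in the text.
\end{verbatim}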
A secondary effect that will boost $\xi_{\rm ion}$ above the predictions from single-star, solar metallicity stellar population models is the higher ionizing photon output associated with lower metallicity and rotating massive stars \citep{eldridge09, leitherer14}. In particular, the fiducial $0.14Z_\odot$ BPASS model that includes binary evolution and an IMF extending to 300\,$M_\odot$ predicts a factor of $\approx 3$ larger H$\alpha$ luminosity per solar mass of star formation after 100\,Myr of constant star formation relative to that computed using the \citet{kennicutt12} relation, which assumes a solar metallicity Starburst99 model. On the other hand, the UV luminosity is larger by only $\approx 30\%$. Thus, such models predict a $\xi_{\rm ion}$ that is elevated by a factor of $\approx 2$ relative to those assumed in standard calibrations between H$\alpha$/UV luminosity and SFR (e.g., see also \citealt{nakajima16, bouwens16a, stark15, reddy16b}). Consequently, calculations or predictions of $\xi_{\rm ion}$ for high-redshift galaxies should take into account the effects of a steeper attenuation curve and lower metallicity stellar populations that may include stellar rotation/binarity. {\em Our results suggest that an elevated value of $\xi_{\rm ion}$ is not only a feature of very high-redshift ($z\ga 6$) galaxies, but may be quite typical for $z\sim 2$ galaxies as well.} \section{CONCLUSIONS} In this paper, we have presented an analysis of the relationship between dust obscuration (IRX$=L_{\rm IR}/L_{\rm UV}$) and other commonly-derived galaxy properties, including UV slope ($\beta$), stellar mass ($M_{\ast}$), and UV luminosity ($L_{\rm UV}$), for a large sample of 3,545 rest-optically selected star-forming galaxies at $z=1.5-2.5$ drawn from the HDUV UVIS and 3D-HST photometric catalogs of the GOODS fields. Our sample is unique in that it significantly extends the dynamic range in $\beta$ and $L_{\rm UV}$ compared to previous UV-selected samples at these redshifts. In particular, close to $60\%$ of the objects in our sample have UV slopes bluer than $\beta = -1.70$ and $>95\%$ have rest-frame UV absolute magnitudes fainter than the characteristic magnitude at these redshifts, with the faintest galaxies having $L_{\rm UV}\approx 0.05L^{\ast}_{\rm UV}$. We use stacks of the deep {\em Herschel}/PACS imaging in the GOODS fields to measure the average dust obscuration for galaxies in our sample and compare it to predictions of the IRX-$\beta$ relation for different stellar population models and attenuation/extinction curves using energy balance arguments. Specifically, we consider the commonly adopted \citet{bruzual03} stellar population models for different metallicities ($0.28$ and $1.4Z_\odot$), as well as the low-metallicity ($0.14Z_\odot$) BPASS model. Additionally, we compute predictions of the IRX-$\beta$ relation for the \citet{calzetti00} and \citet{reddy15} dust attenuation curves, and the SMC extinction curve. The lower metallicity stellar population models result in significant shifts in the IRX-$\beta$ relation of up to $\delta\beta = 0.4$ towards bluer $\beta$ relative to the canonical relation of \citet{meurer99}. In the context of the lower metallicity stellar population models applicable for high-redshift galaxies, we find that the strong trend between IRX and $\beta$ measured from the HDUV and 3D-HST samples follows most closely that predicted by the SMC extinction curve. 
We find that grayer attenuation curves (e.g., \citealt{calzetti00}) over-predict the IRX at a given $\beta$ by a factor of $\ga 3$ when assuming intrinsically blue UV spectral slopes. {\em Thus, our results suggest that an SMC curve is the one most applicable to lower stellar metallicity populations at high redshift.} Performing a complementary stacking analysis of the {\em Spitzer}/MIPS $24$\,$\mu$m images implies an average mid-IR-to-IR luminosity ratio, $\langle L_{7.7}/L_{\rm IR}\rangle$, that is a factor of $3-4$ lower for the reddest ($\beta>-0.5$), UV-brightest ($M_{1600}\la -21$), and UV-faintest ($M_{1600}\ga -19$) galaxies relative to the average for all galaxies in our sample (Appendix~\ref{sec:l8lir}). These results indicate large variations in the conversion between rest-frame 7.7\,$\mu$m and IR luminosity. At any given UV luminosity, galaxies with redder $\beta$ have larger IRX. The IRX-$L_{\rm UV}$ relations for blue and red star-forming galaxies average together to yield a roughly constant IRX of $\simeq 3-4$ over roughly two decades in UV luminosity ($2\times 10^9 \la L_{\rm UV}\la 2\times 10^{11}$\,$L_\odot$). Consequently, the bluer $\beta$ observed for UV-faint galaxies in this work and previous studies may simply reflect intrinsically bluer UV spectral slopes for such galaxies, rather than signifying changes in the dust obscuration. Galaxies with stellar masses $M_{\ast}>10^{9.75}$\,$M_\odot$ have an IRX-$\beta$ relation that is consistent with the SMC extinction curve, while the lower mass galaxies in our sample with $M_{\ast}\le 10^{9.75}$\,$M_\odot$ have an IRX-$\beta$ relation that is {\em at least} as steep as the SMC. The shifting of the IRX-$\beta$ relations towards bluer $\beta$ for the lower metallicity stellar populations expected for high-redshift galaxies implies that the low-mass galaxies in our sample, as well as the low-mass and young galaxies from previous studies, require a dust curve steeper than that of the SMC. The low metallicity stellar populations favored for high-redshift galaxies imply steeper attenuation curves and higher ionizing photon production rates, which, in turn, facilitate the role that galaxies may have in reionizing the universe at very high redshift or keeping the universe ionized at lower redshifts ($z\sim 2$). There are several future avenues for building upon this work. First, detailed spectral modeling of the rest-UV and/or rest-optical spectra of galaxies (e.g., \citealt{steidel16, reddy16b}) may be used to discern their intrinsic spectral slopes and thus disentangle the effects of the intrinsic $\beta$ and the attenuation curve relevant for high-redshift galaxies. Second, the higher spatial resolution and depth of X-ray observations (compared to the far-IR) make them advantageous for investigating, in reasonable amounts of observing time, the bolometric SFRs and hence dust obscuration of galaxies substantially fainter than those directly detected with either {\em Spitzer} or {\em Herschel}, provided that the scaling between X-ray luminosity and SFR can be properly calibrated (e.g., for metallicity effects; \citealt{basuzych13}). Third, nebular recombination line estimates of dust attenuation (e.g., from the Balmer decrement; \citealt{reddy15}) may be used to assess the relationship between IRX and $\beta$ for individual star-forming galaxies, rather than through the stacks necessitated by the limited sensitivity of far-IR imaging. 
\acknowledgements This work was supported by NASA through grant HST-GO-13871 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555. K. Penner kindly provided data from his published work in electronic format. NAR is supported by an Alfred P. Sloan Research Fellowship.
\section{Introduction} Prior work has demonstrated how online supervised learning with labeled data can be used for tasks such as rapid, event-driven learning from neuromorphic sensor data \cite{stewart2020online}. However, real-world data is unlabeled and, in the case of classification, can contain classes that were not anticipated. Therefore, to leverage real-world data, labels must be generated by a supervisor in real-time without \textit{a priori} knowledge of the number of classes. Additionally, while a classifier is trained to generate class labels, it cannot generalize to new classes. This is compounded by the fact that neural network classifiers trained using gradient descent are usually overconfident in their classifications, making the learning of new classes impractical. An alternative approach is to use the intermediate layers of a trained neural network classifier as pseudo-labels or features for learning classes. In this work, we formalize this idea using an event-driven guided Variational Auto-Encoder (VAE) which is trained to generate an embedding that disentangles according to labels in a labeled dataset and that generalizes to new data. The resulting embedding space can then be used either for (pseudo)labels for supervised learning or for measuring data similarity. We focus our demonstration of the guided VAE on a Dynamic Vision Sensor (DVS) gesture learning problem because of the availability of an event-based gesture dataset and the high relevance of gesture recognition use cases \cite{Amir_etal17_lowpowe}. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{figs/hybrid_guided_vae.png} \end{center} \caption{The Hybrid Guided-VAE architecture. Streams of gesture events recorded using a Dynamic Vision Sensor (DVS) are input into a Spiking Neural Network (SNN) that encodes the spatio-temporal features of the input data into a latent structure $z$. $P$ and $Q$ are pre-synaptic traces and $U$ is the membrane potential of the spiking neuron. For clarity, only a single layer of the SNN is shown here and refractory states $R$ are omitted. To help disentangle the latent space, a portion of $z$ equal in size to the number of target features $y^\ast$ is input into a classifier that trains each latent variable to encode these features (Exc. Loss). The remaining latent variables, denoted $z_{\setminus m}$, are input into a different classifier that adversarially trains them to not encode the target features, so that they encode other features instead (Inh. Loss). The latent state $z$ is decoded back into $x^\ast$ using the conventional deconvolutional decoder layers.} \label{fig:vae} \vspace{-3mm} \end{figure} Neuromorphic Dynamic Vision Sensors (DVS), inspired by the biological retina, capture temporal, pixel-wise intensity changes as a sparse stream of binary events \citep{Gallego_etal19_evenvisi}. This approach has key advantages over traditional RGB cameras, such as faster response times, better temporal resolution, and invariance to static image features like lighting and background. Thus, raw DVS sensor data intrinsically emphasizes the dynamic movements that comprise most natural gestures. However, effectively processing DVS event streams remains an open challenge. Events are asynchronous and spatially sparse, making it challenging to directly apply conventional vision algorithms \citep{guillermo2020survey,Gallego_etal19_evenvisi}. 
Spiking Neural Networks (SNNs) can efficiently process and learn from event-based data while taking advantage of temporal information \citep{Neftci_etal19_surrgrad}. SNN models emulate the properties of biological neurons and can be used for hierarchical feature extraction from the precise timing of events through event-by-event processing \citep{Gerstner_etal14_neurdyna}. Recent work demonstrated how SNNs can be trained end-to-end using gradient backpropagation in time and standard autodifferentiation tools, making it possible to integrate SNNs into modern machine learning and deep learning methods \citep{Zenke_Neftci21_brailear,Bellec_etal19_biolinsp,Shrestha_Orchard18_slayspik}. Here, we take advantage of this capability by incorporating a convolutional SNN into a Variational Autoencoder (VAE) to encode spatio-temporal streams of events recorded by the DVS (Figure \ref{fig:vae}). The goal of the VAE is to embed the streams of DVS events into a latent space which facilitates the evaluation of event data similarity for semi-supervised learning from real-world data. To best use the underlying hardware, we implement a \textit{hybrid} VAE to process the DVS data, with an SNN-based encoder and a conventional (non-spiking) convolutional network decoder. To ensure the latent space represents features which are perceptually salient and useful for recognition, we use a guided VAE to disentangle the features that account for variation in the underlying structure of the data. Our Hybrid Guided-VAE encodes and disentangles the variations of the structure of event data, allowing for the clustering of similar patterns, such as similar-looking gestures, and the assignment of pseudo-labels to novel samples. The key contributions of this work are: \begin{enumerate} \item End-to-end trainable event-based SNNs for processing neuromorphic sensor data event-by-event and embedding it in a latent space. \item A Hybrid Guided-VAE that encodes event-based camera data in a latent space representation of salient features for clustering and pseudo-labeling. \item A proof-of-concept implementation of the Hybrid Guided-VAE on Intel's Loihi Neuromorphic Research Processor. \end{enumerate} The ability to encode event data into a disentangled latent representation is a key feature to enable learning from real-world data for tasks such as mid-air gesture recognition systems that are less rigid and more natural because they can adapt to each user. \section{Related Work} \subsection{Measuring Data Similarity} To automatically and objectively measure the similarity of event data between and across classes for automatic pseudo-labeling, we build a novel Hybrid Guided-VAE model that can take advantage of the DVS' temporal resolution and robustness while being end-to-end trainable using gradient descent on a variational objective. Hybrid VAEs that combine both spiking and ANN layers have been used before on DVS event data for predicting optical flow; the hybrid architecture efficiently processes sparse spatio-temporal event inputs while preserving the spatio-temporal nature of the events \citep{lee2020spikeflownet}. \subsection{Variational Autoencoders} VAEs are a type of generative model which deal with models of distributions $p(x)$, defined over data points $x\in X$ \cite{Kingma_Welling13_autovari}. 
A VAE commonly consists of two networks: 1) an encoder ($Enc$) that encodes the captured dependencies of a data sample $x$ into a latent representation $z$; and 2) a decoder ($Dec$) that decodes the latent representation back to the data space, producing a reconstruction $\tilde{x}$, using Gaussian assumptions for the latent space: \begin{align} & Enc(x) = q(z|x) = N(z|\mu(x),\Sigma(x)), \\ & \tilde{x} \approx Dec(z) = p(x|z), \end{align} where $q$ is the encoding probability model into latent states $z$ that are likely to produce $x$, and $p$ is the decoding probability model conditioned on $z$. The functions $\mu(x)$ and $\Sigma(x)$ are deterministic functions whose parameters can be trained through gradient-based optimization. Using a variational approach, the VAE loss consists of the sum of two terms resulting from the variational lower bound: \begin{equation} \log p({x}) \ge \underbrace{\mathbb{E}_{{z}\sim q} \log p({x}|{z})}_{\mathcal{L}_{ll}} - \underbrace{D_{KL}(q({z}|{x})||p({z}))}_{\mathcal{L}_{prior}}. \end{equation} The first term is the expected log likelihood of the reconstructed data computed using samples of the latent space, and the second term acts as a prior, where $D_{KL}$ is the Kullback-Leibler divergence. The \emph{VAE} loss is thus formulated to maximize the variational lower bound by maximizing $-\mathcal{L}_{prior}$ and $\mathcal{L}_{ll}$. A VAE's latent space captures salient information for representing the data $X$, and thus similarities in the data \citep{larsen2016latentsim}. \subsection{Disentangling Variational Autoencoders} VAEs do not necessarily disentangle all the factors of variation, which can make the latent space difficult to interpret and use. Several approaches have been developed to improve the disentangling of the latent representation, such as Beta VAE \citep{Higgins2017betaVAELB} and Total Correlation VAE \citep{Chen2018TCVAE}. However, the disentangled representation these unsupervised methods learn can have high variance since disentangled labels are not provided during training. Furthermore, these methods do not learn to relate labels to the representations, making them unable to provide pseudo-labels for unlabeled data. For these reasons, we employ a Guided-VAE, which has been developed specifically to disentangle the latent space representation of key features in a supervised fashion \citep{Ding2020guided}. We describe here the supervised Guided-VAE algorithm, which is the basis of our hybrid model described in the next section. To learn a disentangled representation, a supervised Guided-VAE trains latent variables to encode existing ground-truth labels while making the rest of the latent variables uncorrelated with those labels. The supervised Guided-VAE model targets the generic generative modeling task by using an adversarial excitation and inhibition formulation. This is achieved by minimizing the discriminative loss for the desired latent variable while maximizing the minimal classification error for the rest of the latent variables. For $N$ training data samples $X = (x_1, ..., x_N)$ and $M$ features with ground-truth labels, let $z = (z_{1}, ..., z_m, ..., z_{M}) \oplus z_{\setminus m}$, where $z_{m}$ defines the ``guided'' latent variable capturing feature $m$, and $z_{\setminus m}$ represents the rest of the latent variables. Let $y_m(x_n)$ be a one-hot vector representing the ground-truth label for the $m$-th feature of sample $x_n$. 
For each feature $m$, the excitation and inhibition losses are defined as follows: \begin{equation} \begin{aligned} \mathcal{L}_{Exc}(z,m) =& \, \underset{c_m}{\max}\left( \sum_{n=1}^N \mathbb{E}_{q(z_m|x_n)}\log p_{c_m}(y=y_m(x_n)|z_m) \right),\\ \mathcal{L}_{Inh}(z,m) =&\, \underset{k_m}{\max}\left( \sum_{n=1}^N \mathbb{E}_{q(z_{\setminus m}|x_n)}\log p_{k_m}(y=y_m(x_n)|z_{\setminus m}) \right), \end{aligned} \end{equation} where $c_m$ is a classifier making a prediction on the $m$-th feature in the guided space and $k_m$ is a classifier making a prediction over $m$ in the unguided space $z_{\setminus m}$. By training these classifiers adversarially with the VAE's encoder, the encoder learns to disentangle the latent representation, with $z_{m}$ representing the target features and $z_{\setminus m}$ representing any features other than the target features. \section{Methods} \subsection{Dynamic Vision Sensors and Preprocessing} Dynamic Vision Sensors (DVS) are a type of event-based sensor that record event streams at a high temporal resolution and are compatible with SNNs \citep{Liu_Delbruck10_neursens}. DVS sensors detect brightness changes on a logarithmic scale with a user-tunable threshold, instead of recording RGB pixels like typical cameras. An event consists of its location $x$, $y$, timestamp $t$, and polarity $p\in\{\mathrm{OFF},\mathrm{ON}\}$ representing the direction of change. Using a dense representation, the DVS event stream is denoted $S^t_{DVS, x,y,p} \in \mathbb{N}^+$, indicating the number of events that occurred in the time bin ($t$,$t+\Delta t$) with space-time coordinates ($x$,$y$,$p$,$t$). $\Delta t$ is the temporal discretization and equal to $1\mathrm{ms}$ in our experiments. The recorded DVS event stream is provided as input to the Hybrid Guided-VAE network implemented on a GPU (network dynamics described in the following section). Each polarity of the event stream $(x,y,p)$ is fed into one of the two channels of the first convolutional layer. Within a time step, most pixels have value zero, and very few have values larger than two. Note that time is \textit{not} represented as a separate dimension in the convolutional layer, but through the dynamics of the SNN. The VAE targets take a different form because the decoder is not an SNN. Time surfaces (TS) are widely used to preprocess event-based spatio-temporal features \citep{Lagorce2016HOTS}. TS can be constructed by convolving an exponential decay kernel with the event stream in time as follows: \begin{equation} TS^t_{x,y,p} = \epsilon^t \ast S^t_{DVS,x,y,p}\text{ with }\epsilon^t=\mathrm{e}^{-\frac{t}{\tau}} \end{equation} where $\tau$ is a time constant. Here, we convolve over the time length of the input event data stream $S^t_{DVS,x,y,p}$. This results in two 2D images, one for each polarity, that are used as VAE targets (\emph{i.e.}, for the reconstruction loss); a short code sketch of this construction is given below. \subsection{Hybrid Guided Variational Auto-Encoder} DVS cameras produce streams of events containing rich spatio-temporal patterns. To process these streams, event-based computer vision algorithms typically extract hand-encoded statistics and use these in their models. While efficient, this approach discards important spatio-temporal features from the data \citep{Gallego_etal19_evenvisi}. Rather than manually selecting a feature set, we process the raw DVS events while preserving key spatio-temporal features using a spiking neural network (SNN) trained end-to-end in the Hybrid Guided-VAE architecture shown in Figure \ref{fig:vae}. 
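As a concrete illustration of the time-surface targets defined in the preprocessing step above, the exponential kernel can be applied directly to a list of events. This is a minimal sketch: the value of \texttt{tau} and the array layout are illustrative choices, and with $\tau = \tau_{\mathrm{syn}}$ the result matches the pre-synaptic trace $Q^t$ of the encoder up to a constant factor (see Section~\ref{EncoderDyn}):
\begin{verbatim}
import numpy as np

def time_surface(events, t_now, tau=10.0, shape=(2, 32, 32)):
    # Exponentially decayed image of all events up to t_now.
    # `events` holds (x, y, t, p) tuples with t <= t_now; tau is
    # in the same time units as t.
    ts = np.zeros(shape)
    for x, y, t, p in events:
        ts[int(p), int(y), int(x)] += np.exp(-(t_now - t) / tau)
    return ts

# Example: two ON-polarity events at one pixel, 5 ms apart:
ev = [(3, 7, 0.0, 1), (3, 7, 5.0, 1)]
print(time_surface(ev, t_now=5.0)[1, 7, 3])  # exp(-0.5) + 1
\end{verbatim}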
A key advantage of the VAE is that the loss can be optimized using gradient backpropagation. To retain this advantage in our hybrid VAE, we must ensure that the encoder SNN is also trainable through gradient descent. Until recently, several challenges hindered this: the spiking nature of the neurons' nonlinearity makes them non-differentiable, and the continuous-time dynamics raise a challenging temporal credit assignment problem. These challenges are solved by the surrogate gradients approach \citep{Neftci_etal19_surrgrad}, which formulates the SNN as an equivalent binary RNN and employs a smooth surrogate network for the purposes of computing the gradients. Our Hybrid Guided-VAE uses a convolutional SNN to encode the spatio-temporal streams in the latent space, and a non-spiking convolutional decoder to reconstruct the TS of the data. We chose an event-based encoder because the SNN can bridge computational time scales by extracting slow and relevant factors of variation in the gesture \citep{Wiskott_Sejnowski02_slowfeat} from fast event streams recorded by the DVS. We chose a conventional (non-spiking) decoder for three reasons: (1) for similarity estimation, we are mainly interested in the latent structure produced by the encoder, rather than the generative features of the network; (2) as we demonstrate in the results section, a dedicated neuromorphic processor \citep{Indiveri_etal11_neursili,Davies_etal18_loihneur} only requires the encoder to produce this latent structure; and (3) SNN training is compute- and memory-intensive, so a conventional decoder enables us to dedicate more resources to the SNN encoder. Our Hybrid Guided-VAE network architecture is shown in Figure \ref{fig:vae} and the architecture description is provided in Table \ref{tab:arch}. The SNN encoder consists of four discrete leaky integrate-and-fire convolutional layers (see following section) followed by linear layers, and outputs a pair of vectors ($\mu$, $\Sigma$) for sampling the latent state $z$. Following the guided-VAE formulation, part or all of the latent state $z$ is input into one of three connected networks: the excitation classifier, the adversarial inhibition classifier, or the decoder. The target features for the excitation classifier are given as one-hot encoded vectors of length $M$. The excitation classifier is jointly trained with the encoder to train the first $M$ latent variables to only encode information relevant to the corresponding target feature. The inhibition classifier takes as input the remaining latent variables in the latent space, $z_{\setminus m}$, and is adversarially trained on two sets of targets. One set of targets is the same target features that the excitation classifier trains on. The other set of target features is a vector of length $M$ with all values set to $0.5$, indicating that none of the values correspond to any target. The inhibition classifier is jointly trained with the encoder to train the remaining $z_{\setminus m}$ latent variables to not encode any information relevant to the target features, forcing them to encode information for other features instead. The decoder is a transposed convolutional network that takes the full latent state $z$ as input to construct the TS, denoted $\tilde{x}$ in Figure \ref{fig:vae}. 
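One way to realize the adversarial excitation/inhibition objective described above is sketched below in PyTorch. The classifier modules \texttt{exc\_clf} and \texttt{inh\_clf}, the split index \texttt{M}, and the use of a binary cross-entropy against the all-$0.5$ targets follow the description in the text, but the names and the omitted optimizer steps are our simplifications rather than the exact training code:
\begin{verbatim}
import torch
import torch.nn.functional as F

def guided_losses(z, y, exc_clf, inh_clf, M):
    # z: sampled latent state (batch, D); the first M dimensions
    # are the guided part z_m, the rest are z_{\setminus m}.
    # y: target features as (batch, M) one-hot float rows.
    z_m, z_rest = z[:, :M], z[:, M:]
    # Excitation: the guided variables should predict the labels.
    exc = F.binary_cross_entropy_with_logits(exc_clf(z_m), y)
    # Inhibition (encoder side): push the inhibition classifier's
    # output on z_rest toward the all-0.5 "no information" target.
    chance = torch.full_like(y, 0.5)
    inh = F.binary_cross_entropy_with_logits(inh_clf(z_rest),
                                             chance)
    return exc, inh

# The inhibition classifier itself is updated in a separate step
# on the true labels y, with z_rest.detach() as input, making the
# training adversarial with respect to the encoder.
\end{verbatim}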
\begin{table}[!ht] \begin{center} \caption{Hybrid Guided-VAE architecture \label{tab:arch}} \def\x{$\times$} \scalebox{1}{ \begin{tabular}{|c|c|c||c|} \hline Layer & Kernel & Output & Layer Type\\ \hline input & & 32\x32\x2 & DVS128\\ \cline{4-4} 1 & 2a & 16\x16\x2 & SNN LIF Encoder\\ 2 & 32c7p0s1 & 16\x16\x32 &\\ 3 & 1a & 16\x16\x32 &\\ 4 & 64c7p0s1 & 16\x16\x64 &\\ 5 & 2a & 8\x8\x64 &\\ 6 & 64c7p0s1 & 8\x8\x64 &\\ 7 & 1a & 8\x8\x64 &\\ 8 & 128c7p0s1 & 8\x8\x128 &\\ 9 & 1a & 8\x8\x128 &\\ 10 & - & 128 &\\ \cline{4-4} 11 & - & 100 & $\mu(U^t)$ (ANN) \\ 12 & - & 100 & $\Sigma(U^t)$ (ANN)\\ \cline{4-4} 13 & - & 128 & ANN Decoder \\ 14 & 128c4p0s2 & 4\x4\x128 & \\ 15 & 64c4p1s2 & 8\x8\x64 & \\ 16 & 32c4p1s2 & 16\x16\x32 & \\ \cline{4-4} output & 2c4p1s2 & 32\x32\x2 & Time Surface\\ \hline \end{tabular}} \end{center} \scriptsize{Notation: \verb~Ya~ represents \verb~YxY~ sum pooling, \verb~XcYpZsS~ represents \verb~X~ convolution filters (\verb~YxY~) with padding $Z$ and stride $S$. } \normalsize \vspace{-3mm} \end{table} \subsection{Encoder SNN Dynamics} \label{EncoderDyn} To take full advantage of the event-based nature of the DVS input stream and its rich temporal features, the data is encoded using an SNN. SNNs can be formulated as a type of recurrent neural network with binary activation functions (Figure \ref{fig:vae}) \citep{Neftci_etal19_surrgrad}. With this formulation, SNN training can be carried out using standard tools of autodifferentiation. In particular, to best match the dynamics of existing digital neuromorphic hardware implementing SNNs \citep{Davies_etal18_loihneur,Furber_etal14_spinproj}, our neuron model consists of a discretized Leaky Integrate and Fire (LIF) neuron model with time step $\Delta t$ \citep{Kaiser_etal20_synaplas}: \begin{equation}\label{eq:lif_equations} \begin{split} U_i^{t} &= \sum_j W_{ij} P_j^{t} - U_{th} R_i^{t} + b_i, \\ S_i^{t} &= \Theta( U_i^{t}), \\ \end{split} \qquad \begin{split} P_j^{t+\Delta t} &= \alpha P_{j}^{t} + (1-\alpha) Q_{j}^{t}, \\ Q_j^{t+\Delta t} &= \beta Q_{j}^{t} + (1-\beta ) S_{in, j}^{t},\\ \end{split} \end{equation} \noindent where the constants $\alpha=\exp(-\frac{\Delta t}{\tau_{\mathrm{mem}}})$ and $\beta=\exp(-\frac{\Delta t}{\tau_{\mathrm{syn}}})$ reflect the decay dynamics of the membrane potential and the synaptic state during a $\Delta t$ timestep, where $\tau_{\mathrm{mem}}$ and $\tau_{\mathrm{syn}}$ are the membrane and synaptic time constants, respectively. The time step in our experiments was fixed to $\Delta t = 1$\,ms. $R_i$ here implements the reset and refractory period of the neuron (with dynamics similar to $P$), and the states $P_j$, $Q_j$ are pre-synaptic traces that capture the leaky dynamics of the membrane potential and the synaptic currents. $S^t_i = \Theta(U_i^t)$ represents the spiking non-linearity, computed using the unit step function, where $\Theta(U_i) = 0$ if $U_i < U_{th}$, otherwise $1$. We distinguish here the input spike train $S_{in}^t$ from the output spike train $S^t$. Following the surrogate gradient approach \citep{Neftci_etal19_surrgrad}, for the purposes of computing the gradient, the derivative of $\Theta$ is replaced with the derivative of the fast sigmoid function \citep{Zenke_Ganguli17_supesupe}. Note that equation (\ref{eq:lif_equations}) is equivalent to a discrete-time version of the spike response model with linear filters \citep{Gerstner_Kistler02_spikneur}. 
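A minimal dense-layer sketch of one discrete update of equation (\ref{eq:lif_equations}), with the fast-sigmoid surrogate used for the backward pass, is given below. The convolutional layers of the encoder replace the weight matrix with convolutions, the surrogate scale of $10$ is an illustrative choice, and the refractory state $R$ is simplified here to the previous output spikes:
\begin{verbatim}
import torch

class FastSigmoidSpike(torch.autograd.Function):
    # Heaviside spike with a fast-sigmoid surrogate derivative
    # (Zenke & Ganguli 2017).
    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * u.abs()) ** 2

def lif_step(s_in, P, Q, R, W, b, alpha, beta, u_th=1.0):
    # One time step for a dense layer: membrane potential from
    # the presynaptic trace P, spike if U crosses u_th, then
    # update the traces. R holds the previous output spikes.
    U = P @ W.t() + b - u_th * R
    S = FastSigmoidSpike.apply(U - u_th)
    P = alpha * P + (1 - alpha) * Q
    Q = beta * Q + (1 - beta) * s_in
    return S, U, P, Q  # feed S back as R on the next step
\end{verbatim}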
Similar networks were used for classification tasks on the DVSGesture dataset, leading to state-of-the-art accuracy on that task \citep{Kaiser_etal20_synaplas,Shrestha_Orchard18_slayspik}. In the Appendix, we show how these equations can be obtained via discretization of the common LIF neuron. The SNN follows a convolutional architecture, as described in Table \ref{tab:arch}, encoding the input sequence $S^t_{in}$ into a membrane potential variable $U^t$ in the final layer. The network computes $\mu(U^t)$ and $\Sigma(U^t)$ as in a conventional VAE, but uses the final membrane potential state $U^t$. Thanks to the chosen neural dynamics, the TS can be naturally computed by our network. In fact, using an appropriate choice of $\tau = \tau_{\mathrm{syn}}$ for computing the TS, it becomes exactly equivalent to the pre-synaptic trace $Q^t$ of our network (see Appendix). Hence, our choice of input and target corresponds to an autoencoder in the space of pre-synaptic traces $Q^t$. \subsection{Datasets} We trained and evaluated the model using the Neuromorphic MNIST (NMNIST) and IBM DVSGesture datasets, both of which were collected using DVS sensors \citep{Lichtsteiner_etal08_128x120,Posch_etal11_qvga143}. NMNIST consists of $32\times 32$, $300$ms event data streams of MNIST images recorded with a DVS \citep{Orchard_etal15_convstat}. The dataset contains 60,000 training event streams and 10,000 test event streams. \begin{figure*}[!t] \centering \includegraphics[width=.9\linewidth]{figs/orig_recon.png} \caption[] {Original (top) and reconstructed (bottom) time-surfaces for a sample gesture from each class. The reconstructions reflect the location of each gesture but with some smoothing of the detail.} \label{fig:orig_recon} \end{figure*} The IBM DVSGesture dataset \citep{Amir_etal17_lowpowe} consists of recordings of 29 different individuals performing 10 different gestures, such as clapping, and an `other' gesture class containing gestures that do not fit into the first 10 classes \citep{Amir_etal17_lowpowe}. The gestures are recorded under four different lighting conditions, so each gesture is also labeled with the associated lighting condition under which it was performed. Samples from the first 23 subjects were used for training and the last 6 subjects were used for testing. The training set contains 1078 samples and the test set contains 264 samples. Each sample consists of about 6 seconds of the gesture being repeatedly performed. In our work we scale each sample to $32\times 32$ and only use a randomly sampled $200$ms sequence to match real-time learning conditions. For both datasets, to reduce memory requirements, the gradients were truncated to 100 time steps (\textit{i.e.}, $100$ms worth of data). For both datasets, the model learns a latent space encoding that can be used to reconstruct the digits or gestures and to classify instances accurately. \subsection{Neuromorphic Hardware Implementation} As a first step towards an online self-supervised recognition system that uses the Hybrid Guided-VAE as a pseudo-labeler, we developed a proof-of-concept implementation of our SNN encoder, which can be trained and run on the Intel Loihi. Since only the encoder is required for estimating the feature embeddings, it is not necessary to map the decoder onto the Loihi.
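As an aside to the encoder description above, the equivalence between the TS and the pre-synaptic trace $Q^t$ can be made explicit in a few lines of Python: the per-pixel event trains are filtered with the same update rule as $Q$ in equation (\ref{eq:lif_equations}). The value of $\tau_{\mathrm{syn}}$ below is a placeholder, and the event format is an assumption.
\begin{verbatim}
import numpy as np

def time_surface(events, shape=(32, 32, 2), T=200, dt=1e-3, tau_syn=5e-3):
    # events: iterable of (t_step, x, y, polarity) tuples with t_step < T.
    beta = np.exp(-dt / tau_syn)
    S_in = np.zeros((T,) + shape)
    for t, x, y, p in events:
        S_in[t, y, x, p] = 1.0
    Q = np.zeros(shape)
    for t in range(T):
        # same update as the trace Q in Eq. (1): an exponential filter
        Q = beta * Q + (1 - beta) * S_in[t]
    return Q  # TS value of the events at the end of the sequence
\end{verbatim}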
We trained the neuromorphic encoder with our Hybrid Guided-VAE method with two key differences that allow the encoder to run on Loihi: 1) to train with the same neuron model and quantization as the Loihi, we used SLAYER, which has a differentiable functional simulator of the Loihi chip \citep{Shrestha_Orchard18_slayspik,Stewart_etal20_onlifew-}, allowing for one-to-one mapping of trained networks onto the hardware; and 2) the $\mu$ and $\Sigma$ parts of the network were made spiking and used the quantized membrane potential of the neuron for the latent representation instead of ANN-trained full-precision values. \subsection{System Specifications For Measurement} The training of the hybrid Guided-VAE model and the latent space classifier was performed with Arch Linux 5.6.10 and PyTorch 1.6.0. The machine consists of AMD Ryzen Threadripper CPUs with 64GB RAM and Nvidia GeForce GTX 1080 Ti GPUs. For the neuromorphic hardware implementation, an Intel Nahuku 32 board consisting of 32 Loihi chips running on the Intel Neuromorphic Research Community (INRC) cloud was used with Nx SDK 1.0. The machine consists of an Intel Xeon E5-2650 CPU with 4GB RAM. \section{Results} \subsection{NMNIST Accuracy and Latent Space} \begin{figure}[h] \begin{center} \includegraphics[width=0.49\textwidth]{figs/nmnist_tsne.PNG} \caption[]{A T-SNE plot of the $z_{m}$ portion of the latent space of the encoded NMNIST dataset. The color of the data points corresponds to the digit classes. Clear separation between classes indicates that the algorithm learns to encode the spiking data into a latent space that strongly emphasizes class-relevant features over other variation.} \vspace{-1mm} \label{fig:nmnist_tsne} \end{center} \end{figure} The use of the NMNIST spiking digit classification data allows us to test the ability of the algorithm to learn to encode spiking input data in a manner that preserves information needed for accurate classification and captures salient features in the latent space. Trained on the NMNIST dataset, the excitation classifier achieved both training and test accuracy of approximately 99\%, indicating that the SNN encoder learned a latent representation that clearly disentangles digit classes. We used T-SNE to visualize the learned representations of the algorithm. T-SNE embeds both the local and global topology of the latent space into a low-dimensional space suitable for visualization \citep{Mukherjee2019cluster}, allowing us to observe clustering and separation between classes. As shown in Figure \ref{fig:nmnist_tsne}, each digit class in the NMNIST dataset is clearly separable in the latent space, with only a few data points inaccurately clustered. \subsection{DVSGesture Accuracy, Latent Space, and Reconstructions} \begin{figure*}[h!] \centering \includegraphics[width=.9 \linewidth]{figs/right_left.png} \includegraphics[width=.9\linewidth]{figs/right_other_traversal.png} \caption[]{Traversals of the latent space learned from the DVSGesture dataset. (Top) Beginning with the right hand wave latent variable maximized and the left hand wave variable minimized, we traverse the latent space by gradually decreasing the right hand wave latent variable and increasing the left hand wave latent variable. Note the initial TS shows a small, focal area of motion in the top-left corresponding to the participant's right hand waving.
(Bottom) The latent traversal along all of the non-target $z_{\setminus m}$ latent variables illustrates the relative insensitivity of the model to these features. \label{fig:latent_traversal}} \end{figure*} To analyze the learned representation of gestures in the latent space, we examine the accuracy of the excitation classifier in correctly identifying a gesture, the T-SNE projections of the different parts of the latent space, and traversals of the latent space. Finally, we observe the quality of the embeddings based on our own DVS recordings of novel gestures. On the DVSGesture dataset, the excitation classifier achieves a training accuracy of approximately 97\% and a test accuracy of approximately 87\%. Qualitatively, the SNN encoder learns a disentangled latent representation of features unique to each gesture class but has some difficulty distinguishing between gestures that are very similar. In Figure \ref{fig:orig_recon}, a sample gesture from each of the gesture classes is visualized as a TS. Colors in the samples correspond to the TS value of the events at the end of the sequence. Note that the TS leaves a significant amount of fine detail intact. In contrast, encoding and then decoding the samples results in a reconstruction that preserves the general structure of the gesture but smooths out some of the detail. Note that, for the purposes of estimating gesture similarity and producing labels from novel data, the fidelity of the reconstruction is less important than the capacity of the model to disentangle the gesture classes. Furthermore, disentangling autoencoders are known to provide lower-quality reconstructions compared to unguided VAEs \citep{Chen2018TCVAE}. We use T-SNE to examine the capacity of the network to disentangle salient gesture features in the latent space. Figure \ref{fig:main_tsnes} shows a T-SNE plot of the guided portion of the latent space trained on the DVSGesture dataset. This plot indicates clear clustering of gesture classes, with some overlap between similar gestures such as left arm clockwise and counterclockwise. This global structure in the learned representations indicates the Hybrid Guided-VAE is identifying useful, class-relevant features in the encoder and suppressing noise and unhelpful variability from the spiking sensor. \begin{figure}[!t] \includegraphics[width=0.49\textwidth]{figs/new_gestures_tsne.png} \centering \caption[]{A T-SNE plot of the $z_{m}$ portion of the latent space of the encoded DVSGesture dataset. Additionally, projections into the $z_m$ portion of the latent space of encoded new gestures, which we recorded using a different DVS and which are not part of the DVSGesture dataset, are shown. The bottom color plots are the TS of the new gestures.} \label{fig:main_tsnes} \end{figure} \begin{figure}[!t] \includegraphics[width=0.49\textwidth]{figs/comparison_tsne.png} \centering \caption[]{A T-SNE plot of the DVSGesture dataset and real-world gestures using the convolution layer output of the SLAYER and DECOLLE models. } \label{fig:comparison_tsnes} \end{figure} As an additional tool to investigate the structure of the latent space representations, we use latent traversals to interpret the features of the space. Traversals consist of a set of generated TS based on positions in the latent space, computed using off-polarity events. The positions traverse a line in the latent space, revealing how the network encodes salient features.
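Generating such a traversal amounts to decoding a short sequence of interpolated latent vectors. The sketch below is a minimal illustration under stated assumptions: the latent indices and the value range swept are placeholders, and \verb~decoder~ stands in for our trained transposed convolutional decoder.
\begin{verbatim}
import numpy as np
import torch

def latent_traversal(decoder, z_start, dims=(2, 3), steps=8,
                     lo=-3.0, hi=3.0):
    # Sweep one latent variable down while sweeping another up, decoding
    # a TS at each position along the line in latent space.
    frames = []
    for w in np.linspace(0.0, 1.0, steps):
        z = z_start.clone()
        z[dims[0]] = hi - w * (hi - lo)  # e.g. "right hand wave" decreasing
        z[dims[1]] = lo + w * (hi - lo)  # e.g. "left hand wave" increasing
        with torch.no_grad():
            frames.append(decoder(z.unsqueeze(0)))
    return torch.cat(frames)  # (steps, ...) decoded time surfaces
\end{verbatim}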
Figure \ref{fig:latent_traversal} contains two traversals illustrating the shift in position and intensity of motion in the TS encoded by the latent variables $z_{2}$ and $z_{3}$. Because those features correspond to distinguishing, salient characteristics of the gesture classes ``Right Hand Wave'' and ``Left Hand Wave'', this encoding allows the model to disentangle the gestures. \subsection{Labeling Unlabeled Gestures} To test the generalization of the learned encoder, we evaluated how the VAE model performs when provided with new gesture data captured in a new environment intended to replicate ecologically valid conditions for a real-world gesture recognition system. We recorded gestures belonging to two new classes, right and left swipe down, which are not present in the DVSGesture dataset. We used a different DVS sensor (the DAVIS 240C sensor \citep{Brandli_etal14_240180}) and processed the data with the trained Hybrid Guided-VAE. Each gesture was repeated three times for approximately $3$s by the same subject under the same lighting conditions. Figure \ref{fig:main_tsnes} shows the new gesture TS and the associated T-SNE embeddings in the $z_m$ portion of the latent space. The right swipe down gestures were represented by the model similarly to right hand wave gestures, as indicated by their proximity in the T-SNE plot of the latent space. Similarly, the left swipe down gestures were represented most similarly to left hand wave gestures. Interestingly, both new classes cluster near the edges of the existing classes, possibly indicating the presence of a feature gradient. We also compare the clustering and the ability of the VAE model to pseudo-label novel real-world data against two methods that give state-of-the-art classification accuracy on the DVSGesture dataset, SLAYER \citep{Shrestha_Orchard18_slayspik} and DECOLLE \citep{Kaiser_etal20_synaplas}. To compare the models, we computed T-SNE visualizations of the features learned by the convolutional layers of the SLAYER and DECOLLE models, which are shown in Figure \ref{fig:comparison_tsnes}. The features learned by these models do not clearly disentangle the classes. Additionally, when the new gesture classes outside of the DVSGesture dataset are given to these models, they are not clustered near the space of a particular class and are instead identified as being closest to classes such as ``Air Guitar'' and ``Air Drums''. These results demonstrate that the Hybrid Guided-VAE is capable of appropriately representing novel gestures in a manner that supports pseudo-labeling. With additional data points of new gestures, the Hybrid Guided-VAE can eventually learn new classes of gestures on its own. \subsection{Ablation Study} \begin{table}[!ht] \small \begin{center} \caption{\small Classification from latent space \label{tab:results}} \begin{tabular}{|l|r|l|l|} \hline Algorithm & \multicolumn{1}{l|}{Dataset} & Train & Test\\ \hline Hybrid Guided VAE & DVSGesture & \textbf{97.6\%} & \textbf{86.8\%} \\ \cline{2-4} & NMNIST & \textbf{99.6\%} & \textbf{98.2\%} \\ \hline CNN Guided VAE & DVSGesture & 86.7\% & 82.3\% \\ \cline{2-4} & NMNIST & 97.2\% & 96.8\% \\ \hline Hybrid VAE & DVSGesture & 99.7\% & 38.3\% \\ \cline{2-4} & NMNIST & 96.8\% & 92.4\% \\ \hline CNN VAE & DVSGesture & 97.5\% & 28.4\% \\ \cline{2-4} & NMNIST & 96.8\% & 91.5\% \\ \hline \end{tabular}% \vspace{-3mm} \end{center} \end{table} We present the results of an ablation study of our Hybrid Guided-VAE method to demonstrate why we used this method for latent space disentanglement.
We compare the Hybrid Guided-VAE in our work to a hybrid VAE with the guided part ablated, as well as to a Guided-VAE that does not use an SNN encoder and instead uses a CNN encoder, and to an ordinary CNN VAE. Comparing the clustering of the hybrid VAEs and the CNN VAEs in Figure \ref{fig:tsnes}, both the CNN and hybrid VAEs are able to disentangle and cluster the latent space when using the guided method, with our hybrid method in Figure \ref{fig:main_tsnes} showing less overlap between clusters and therefore better disentanglement. For classification from the latent space representation of the data, Table \ref{tab:results} shows that the guided CNN and hybrid VAEs both achieve high accuracy on both datasets, with the hybrid variants achieving higher test performance. Therefore, our hybrid guided VAE approach is more suitable for latent space disentanglement with data taken from event-based sensors. The disentangled latent space shows that the hybrid VAEs learn a measure of similarity in event-based data, such as gesture similarity, which demonstrates the effectiveness of the Hybrid Guided-VAE's semi-supervised learning from the event data. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{figs/CNN_VAE_New_gestures.png} \centering \caption[] {{\small CNN Guided VAE}} \label{fig:cnn_new} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{figs/cnn_unguided.png} \centering \caption[] {{\small CNN Unguided VAE}} \label{fig:cnn_unguided} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{figs/hybrid_unguided.png} \centering \caption[] {{\small Hybrid Unguided VAE}} \label{fig:snn_unguided} \end{subfigure} \setlength{\belowcaptionskip}{-10pt} \caption[] {T-SNE plots of the DVSGesture latent space. The images below each figure show novel gestures and where they are placed in the latent space by the different models.} \label{fig:tsnes} \end{figure} \subsection{Guiding on Other Factors of Variation: Lighting} \begin{figure}[!t] \begin{center} \includegraphics[width=.4\textwidth]{figs/light_tsne.png} \caption[] {A T-SNE plot of the guided $z_m$ latent space using lighting condition labels.} \label{fig:lights} \end{center} \end{figure} A key feature of the guided VAE is to incorporate alternative features not directly related to the gesture class to disentangle the factors of variation in the data. To demonstrate this, in a separate experiment, we trained on the lighting condition provided in the DVSGesture dataset instead of the gesture class. The T-SNE projection of the $z_m$ latent space is shown in Figure \ref{fig:lights}. The model clusters the lighting conditions with some overlap between LED lighting and the other lighting conditions. This is likely due to the fact that the lighting conditions under which the gestures are performed are combinations of the labeled lighting conditions, with the label given to the most prominent lighting condition \citep{Amir_etal17_lowpowe}. \subsection{Neuromorphic Implementation Results} \begin{figure}[!t] \includegraphics[width=.49\textwidth]{figs/loihi_3_class.png} \centering \caption[]{T-SNE plot of the $z_{m}$ portion of the latent space disentanglement of three gesture classes implemented on the Intel Loihi.} \label{fig:loihi_tsne} \end{figure} Figure \ref{fig:loihi_tsne} shows the latent space representations of three gesture classes mapped using the neuromorphic encoder.
With just three classes, the model running on Loihi demonstrates separation between gestures and global structure, but these features are not as well defined as in the conventional model. With all ten classes of gestures used to train the neuromorphic encoder, there was no clear disentanglement or obvious structure to the latent space. It is possible that the lack of separation for the more challenging 10-way task is due to the low-precision integers used for synaptic weights and membrane potentials of spiking neurons on the neuromorphic chip, thus adding variability to the embedded positions. In future work we will test our algorithm on more precise hardware and adopt powerful on-chip learning algorithms such as SOEL \citep{stewart2020online}. \section{Conclusions} We presented a novel algorithm to process DVS sensor data and showed that it is capable of learning highly separable, salient features of real-world gestures to automatically generate labels from unlabeled event-based gesture data. The Hybrid Guided-VAE contains an encoder model that learns to represent extremely sparse, high-dimensional visual data captured at the neuromorphic sensor in a small number of latent dimensions. The encoder is jointly trained by two classifiers such that the latent space disentangles and accurately represents target features. The algorithm represents a significant step towards self-supervised gesture recognition by enabling the generation of labels from unlabeled data. In addition, we demonstrated a first-of-its-kind implementation of the algorithm on neuromorphic hardware. Due to the sparse nature of event-based data and processing, the SNN encoding implementation offers significant benefits for applications at the edge, including extremely low power usage and the ability to learn on-device to avoid intrusive remote data aggregation. This could enable flexible recognition capabilities to be embedded into home electronics or mobile devices, where computing power is limited and privacy is paramount. While the model performance currently degrades when running on the neuromorphic device compared to the conventional GPU-based implementation, improvements to the neuromorphic hardware or the adoption of more powerful learning algorithms, e.g. \citep{stewart2020online}, could alleviate these limitations. These techniques may owe some measure of their success to the computational principles they share with human perceptual systems, and we expect that this approach will open new possibilities for interaction between humans and intelligent machines. \begin{acks} We would like to thank Lazar Supic for his contributions to the preliminary research experiments and Intel Labs for their support and access to the Intel Loihi neuromorphic processor. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} The human population coexists with many different microorganisms. Some of them cause transmissible diseases that result in epidemics, or even pandemics such as the one caused by SARS-CoV-2. It is very important to define reliable measures to characterize the spread of those pathogens, both at the beginning and in the course of epidemics. The reproduction number $\mathcal{R}(t)$ indicates the number of new infections that result from a single infected individual at any time $t$. When it is greater than 1, each infected individual tends to generate more than one new infected individual, resulting in the occurrence of an epidemic outbreak. The reproduction number is usually known through its initial value, when all of the population is susceptible, known as the basic reproduction number $\mathcal{R}_0$. Due to the recovery of infected individuals, a portion of the population becomes immune to the disease; thus, herd immunity begins to emerge in the population. This acts as a barrier to the transmission of the disease, so $\mathcal{R}(t)$ takes into account the evolution of the number of susceptible individuals in the population and is usually referred to as the effective reproduction number. It provides us with information on how disease transmission occurs during the outbreak, and it is an important metric for making healthcare managers aware of the current level of the disease propagation. Considering heterogeneities in otherwise homogeneous compartmental models affects the reproduction number. Indeed, different classes of infected individuals can make different contributions to the generation of new infections. Many studies have taken heterogeneity into account when dealing with the basic reproduction number $\mathcal{R}_0$ \cite{Hyman,Brauer}; however, the same is not true for the effective reproduction number, $\mathcal{R}(t)$. To study the reproduction number, the renewal equation, a Lotka-Euler type equation \cite{Wallinga}, is usually used. This equation emerges from the dynamics of mathematical models and can be derived using infection-age models. This type of modelling comes from the study of age-structured population growth and was first used within the scope of mathematical demography with the aim of modeling the birth dynamics due to the offspring produced by an individual over its lifespan. Analogously, the same methodology can be applied to the infectious disease scenario by modelling the generation of new infected individuals during the infection period of those previously infected \cite{Martcheva}. Infection-age models are the origins of the widely known SIR and SEIR models developed by Kermack-Mckendrick \cite{Kermack}. Most known compartmental ODE models for infectious diseases have identical infection-age counterparts. The advantage of infection-age models is that they give us access to the distribution of the infected individuals during the infectious period. The estimation of the reproduction number in the course of an outbreak usually relies on the daily count of new cases along with the renewal equation. This approach uses the generation interval distribution, also called the serial interval or generation time distribution, and was popularized by \cite{Wallinga}. Several works indicate how to obtain this distribution through empirical data \cite{Park,Huber,Lessler,Nishiura 2020} or from other known distributions and models \cite{Nishiura,Wallinga,Champredon,Akkouchi,Champredon2}.
However, these distributions rely on epidemiological and empirical studies, and several systems are very difficult to analyse or simply lack the required data for the envisaged evaluation. Moreover, highly heterogeneous systems do not have a general methodology for estimating the reproduction number and its generation interval distribution \cite{Nishiura 2009,Park 2019, Wesley, Fraser}. In this study, we use mathematical models to bridge the gap between the reported cases and the reproduction number. Therefore, we present a methodology that derives reproduction numbers and the generation interval distribution for an arbitrary compartmental mathematical model. With such a methodology, we can investigate highly heterogeneous systems using appropriate models and evaluate their reproduction numbers. With that in mind, we apply the method to a meta-population model to analyse the role of spatial heterogeneity in the spreading of COVID-19 in the Brazilian territory. Our work is organized as follows: in section 2, we introduce heterogeneity by allowing the existence of different groups of individuals in a population that can be sorted into compartments \cite{Hethcote}, considering then a very general heterogeneous model that can be reduced to most models in the literature. We use this general formulation to develop our methodology to obtain the reproduction numbers and expressions that correlate them with actual data. Then, in section 3, we apply it to a specific scenario related to the role of the inter-municipal commuter flow of people and extract its reproduction number and the generation interval distribution. In sections 4 and 5 we use actual data of the emerging SARS-CoV-2 coronavirus pandemic in the municipalities of the state of Rio de Janeiro, Brazil, to estimate the reproduction numbers that emerge from the system. Additionally, we reconstruct the time series of cases from the contributions of each municipality in the propagation of the disease. \section{Reproduction number in a heterogeneous population} \subsection{A general infection-age model} In a heterogeneous population, individuals can be distinguished by age group, spatial location, behaviour, different susceptibility to diseases or any other factor that may distinguish them from one another. In this section, we consider a general heterogeneous model that separates the individuals into $m$ homogeneous compartments. For the purpose of evaluating the reproduction number, we first consider a sub-set of variables encompassing the infected compartments, i.e., those compartments containing individuals that carry the etiologic agent in their organisms. These compartments are denoted by $\bm{x}(t,\tau)=(x_1(t,\tau) , ...,x_n(t,\tau))$, where $t$ and $\tau$ indicate, respectively, the calendar time and the infection-age, the time elapsed since they got the infection. Of course the condition $\bm{x}(t,\tau)=0$ for $\tau >t$ must be satisfied. If $m=m_i+m_0$, where $m_i$ and $m_0$ indicate the number of infected and non-infected compartments in the homogeneous model, then $n=m_i$. Therefore, $\bm{x}$ represents the number of individuals in each compartment at a specific calendar time ``$t$'' and at infection age ``$\tau$''. The entries of $\bm{x}$ can be called the infection-age distributions. The total number of individuals in each compartment, denoted as $X_i(t)$, can be obtained by integrating $x_i(t,\tau)$ with respect to $\tau$ from zero to infinity, where $\bm{X}(t)=(X_1(t) , ...,X_n(t))$. For simplicity, we will be using matrix notation in some equations.
The matrices and vectors will be identified by using bold font, whereby $\bm{M}(y,z)= \Big [M_{ij}(y,z) \Big]$. Also, as usual, integrals and derivatives with respect to scalar variables operate componentwise. Drawing a parallel with Van den Driessche's next generation method \cite{van den Driessche}, we distinguish the new infections from the other compartment flows, defining $\bm{\mathcal{F}}(t)=(\mathcal{F}_1(t) , ...,\mathcal{F}_n(t))$ as the rate of appearance of new infections and $\bm{\mathcal{V}}(t,\tau)=(\mathcal{V}_1(t,\tau) , ...,\mathcal{V}_n(t,\tau))$ as the rate of transfer into compartments. Therefore, $\bm{\mathcal{F}}(t)$ describes the flow from non-infected compartments into infected ones and depends on $\bm{X}(t)$. On the other hand, $\bm{\mathcal{V}}(t,\tau)$ is related to the flow between infected compartments or from infected to non-infected ones and must depend on $\bm{x}(t,\tau)$. Thus, a usual infection-age model can be written as: \begin{align} \Big( \frac{\partial}{\partial t} +\frac{\partial}{\partial \tau}\Big)\bm{x}(t,\tau)=& -\bm{\mathcal{V}}(t,\tau),\label{Infec_V}\\ \bm{x}(t,\tau=0)=& \bm{\mathcal{F}}(t). \label{Infec_F} \end{align} Infection-age models are partial differential equations (PDE) that take into account the time elapsed since the infection, $\tau$. The equations \eqref{Infec_V} and \eqref{Infec_F} can usually be linked to their respective ODE models; in fact, the Kermack-Mckendrick SIR and SEIR models \cite{Kermack} are special cases of their infection-age counterparts. This modeling gives us access to the distribution of the infected during the infectious period, which enables us to estimate the number of active infected individuals and the reproduction number along the outbreak using the reported data. Integrating \eqref{Infec_V} from zero to infinity with respect to $\tau$ we obtain: \begin{equation}\label{correspondece} \frac{d}{dt} \bm{X}(t) = \bm{\mathcal{F}}(t) - \int_0 ^{\infty} \bm{\mathcal{V}}(t,\tau) d\tau, \end{equation} \noindent which corresponds to the sub-set of equations for the infected variables of the ODE model. Therefore, even though we are dealing with an infection-age model, one can simply build $\bm{\mathcal{F}}$ and $\bm{\mathcal{V}}$ from an ODE system and apply the methodology in this work. We demonstrate this for usual ODE models in Supplementary Material 1. Since $\mathcal{F}_i(t)$ and $\mathcal{V}_i(t,\tau)$ are well defined, we proceed to solve equation \eqref{Infec_V} using the method of integration along the characteristic line. The left side of equation \eqref{Infec_V} indicates that the characteristics of those PDE's are lines of slope 1, which implies that $t=\tau+c$ with $c$ being an arbitrary constant. We fix a point ($t_0$,$\tau_0$) and introduce a variable $\omega$ such that $u_i(\omega)=x_i(t_0+\omega,\tau_0+\omega)$ are functions that provide the values of the compartment densities along the characteristic. After straightforward calculations \cite{Martcheva}, we obtain \begin{equation}\label{solve_V} \frac{d}{d\omega} \bm{u}(\omega) = -\bm{\overline{\mathcal{V}}}(\omega), \end{equation} \noindent where $\overline{\mathcal{V}}_i(\omega)= \mathcal{V}_i(t_0 +\omega, \tau_0 +\omega)$. In epidemic models, the $\overline{\mathcal{V}}_i(\omega)$ are usually linear functions, making this system easy to solve.
In that case, the system can be represented as: \begin{equation}\label{solve_V_x} \frac{d}{d\omega} \bm{u} = -\frac{\partial \; \bm{\overline{\mathcal{V}}}}{\partial \bm{u}} \; \bm{u}, \end{equation} \noindent where $-\frac{\partial\bm{\overline{\mathcal{V}}}}{\partial \bm{u}}= \Big[ -\frac{\partial}{\partial u_j} \overline{\mathcal{V}}_i(\omega) \Big]$ is the matrix of the linear system. Assuming that there are no infected individuals prior to $t=0$, we only need to take into account the solution where $t>\tau$, leading to $\omega=\tau$, $t=\tau +t_0$ and $\tau_0 = 0$. The solution for a linear system can be written as \begin{equation}\label{Solution_u} \bm{u}(\omega) = \bm{\overline{\Gamma}}(\omega) \; \bm{u}(0) , \end{equation} \noindent where $\bm{\overline{\Gamma}}(\omega)=\bm{\Gamma}(t,\tau)$ is the fundamental matrix obtained by solving \eqref{solve_V}. Therefore, from \eqref{Infec_F} we identify $\bm{\mathcal{F}}(t_0)= \bm{u}(0)$, so \eqref{Solution_u} becomes \begin{equation}\label{Solution_V} \bm{x}(t,\tau) = \bm{\Gamma}(t,\tau) \; \bm{\mathcal{F}}(t-\tau). \end{equation} For a linear $\bm{\mathcal{V}}(t,\tau)$, the components of $\bm{\Gamma}(t,\tau)$ are exponential functions. Equation \eqref{Solution_V} can also express the solution for some non-linear $\bm{\mathcal{V}}(t,\tau)$; systems for which this is not the case will not be considered in this work. On the other hand, we can also assume that $\mathcal{F}_i(t)$ can be written as \begin{equation}\label{F_het} \mathcal{F}_i(t)= \sum_{j}^{n} \int_{0}^{\infty} \Omega_{ij}(t,\tau) \; x_j(t,\tau) d\tau, \end{equation} \noindent in which $\bm{\Omega}(t,\tau)$ is related to the generation of infected individuals in ``$i$'' due to ``$j$''. If $\bm{\Omega}$ does not depend on $\tau$, as in ODE models, it assumes the form of \begin{equation}\label{Omega_deriv} \bm{\Omega}(t)= \Big[ \frac{\partial}{\partial x_j} \mathcal{F}_i(t) \Big] \end{equation} Most systems in the literature satisfy \eqref{Solution_V} and \eqref{F_het}; thus the method in this work is very general and can be applied to a wide range of models. \subsection{Obtaining the reproduction numbers} To estimate the reproduction number using the reported data of new infections, we need to link them with the equations of the model. This can be done by substituting \eqref{Solution_V} in \eqref{F_het}. Arranging the equation we get \begin{equation}\label{renewal_het} \bm{\mathcal{F}}(t)=\int_{0}^{\infty} \bm{A}(t,\tau) \bm{\mathcal{F}}(t-\tau) d\tau, \end{equation} \noindent for \begin{equation}\label{A_het} \bm{A}(t,\tau)= \bm{\Omega}(t,\tau) \bm{\Gamma}(t,\tau). \end{equation} The functions $A_{ij}(t,\tau)$, analogously to \cite{Nishiura}, are referred to as the rates of new infections in $X_i$ due to $X_j$ previous infections at a calendar time $t$ and infection-age $\tau$, whereby $A_{ij}(t,\tau>t)\equiv 0$. So, we can describe the new cases in ``$X_i$'' from the cases that occurred previously in all the groups. In fact, \eqref{renewal_het} is a general form for the widely known renewal equation \cite{Nishiura, Wallinga}, actually a sum of renewal equations. Thus, by defining the number of new infections in the ``$i$'' compartment due to ``$j$'' previous infections as \begin{equation}\label{renewal_het_ij} \mathcal{J}_{ij}(t)= \int_{0}^{\infty} A_{ij}(t,\tau) \mathcal{F}_j(t-\tau) d\tau, \end{equation} \noindent it becomes clear that $\mathcal{F}_i(t)=\sum_j^{n} \mathcal{J}_{ij}(t)$.
Thus, it is possible to quantify the influence of the infections that occurred in one compartment on new ones in another compartment. To obtain the expected number of new infections in $X_i$ that a newly infected individual from $X_j$ generates, thus the reproduction number, we integrate $A_{ij}(t,\tau)$ from zero to infinity with respect to $\tau$, as in \cite{Nishiura}: \begin{equation}\label{Rt_ij} \mathcal{R}_{ij}(t)= \int_0^\infty A_{ij}(t,\tau) d\tau. \end{equation} $\bm{\mathcal{R}}$ is usually called the next-generation matrix, whose entries correspond to the reproduction numbers of the system \cite{Nishiura 2009}. It is remarkable that, for an ODE model where $\bm{\mathcal{V}}$ is linear, the next-generation matrix evaluated at the disease-free fixed point, $\bm{\mathcal{R}|}_{x_0}$, is equivalent to the one developed in \cite{van den Driessche}. Naturally from \eqref{Rt_ij}, we define \begin{equation}\label{g_ij} g_{ij}(t,\tau) = \frac{A_{ij}(t,\tau)}{\int_0^\infty A_{ij}(t,\tau) d\tau}, \end{equation} \noindent where $g_{ij}(t,\tau)$ is normalized and denoted as the generation interval distribution \cite{Nishiura,Wallinga}. Thus, it is related to the flow of individuals between infected compartments and their recovery process. We assume a generation interval distribution depending on $t$ and $\tau$, which is general enough to be applied to models whose dynamics can change over time. Therefore, using \eqref{Rt_ij} and \eqref{g_ij} in \eqref{renewal_het_ij} we obtain: \begin{equation}\label{J_ij_R} \mathcal{J}_{ij}(t)= \mathcal{R}_{ij}(t) \int_0^\infty g_{ij}(t,\tau) \;\mathcal{F}_j(t-\tau) d\tau. \end{equation} It is important to highlight that $\mathcal{R}_{ij}(t)$ is not necessarily the number of newly infected in $X_i$ generated by $X_j$. Instead, the meaning of $\mathcal{R}_{ij}(t)$ is linked to the generation of new infected ones in ``$i$'', $\mathcal{F}_i(t)$, due to infected individuals previously generated in ``$j$'', $\mathcal{F}_j(t-\tau)$, regardless of the stage of the disease that these infected individuals, who entered ``$j$'', are in at instant $t$. That is the case because a newly infected individual may generate new infections throughout many stages of the disease. In the SEIR model, for example, all of the new infections are in the $E$ compartment, but the individual only becomes infective when it moves to $I$. To obtain the total reproduction number that a newly infected individual in $X_j$ is expected to generate across all compartments, we simply sum the $\mathcal{R}_{ij}$ over $i$, defining \begin{equation}\label{R_bar_j} \overline{\mathcal{R}}_j(t) = \sum_i ^n\mathcal{R}_{ij}(t). \end{equation} \noindent Throughout this work, the over-line represents the merging of the first index, in this case in the form of a sum. This leads us to analogously define $\overline{A}_j= \sum_i ^n A_{ij}$, whereby its integral from zero to infinity with respect to $\tau$ is $\overline{\mathcal{R}}_j$.
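For a linear, time-independent $\bm{\mathcal{V}}$, the objects defined so far can be computed numerically: $\bm{\Gamma}(\tau)$ is a matrix exponential, $\bm{A}=\bm{\Omega}\bm{\Gamma}$, and the next-generation matrix follows by quadrature, as in the following Python sketch (the truncation of the integral is a numerical convenience).
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def next_generation_matrix(Omega, V, tau_max=100.0, dtau=0.01):
    # Omega: (n, n) infection matrix dF/dx; V: (n, n) transfer matrix dV/dx.
    # Gamma(tau) = expm(-V tau), A(tau) = Omega @ Gamma(tau), and
    # R = int_0^inf A(tau) dtau, approximated by a Riemann sum.
    R = np.zeros_like(Omega, dtype=float)
    for tau in np.arange(0.0, tau_max, dtau):
        R += Omega @ expm(-V * tau) * dtau
    return R  # for invertible V this approaches Omega @ inv(V)

# SEIR example: x = (E, I), with E -> I rate kappa and recovery rate gamma:
# V = [[kappa, 0], [-kappa, gamma]], Omega = [[0, beta * S], [0, 0]],
# giving R[0, 0] = R[0, 1] = beta * S / gamma, as expected.
\end{verbatim}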
Thus, we are able to write \begin{equation}\label{J_bar} \overline{\mathcal{J}}_{j}(t)= \overline{\mathcal{R}}_j(t) \int_0^\infty \overline{g}_{j}(t,\tau) \;\mathcal{F}_j(t-\tau) d\tau, \end{equation} \noindent where $\overline{\mathcal{J}}_{j}(t) = \sum_i ^{n} \mathcal{J}_{ij}(t)$ and \begin{equation}\label{g_bar} \overline{g}_{j}(t,\tau) = \frac{\overline{A}_{j}(t,\tau)}{\int_0^\infty \overline{A}_{j}(t,\tau) d\tau}= \frac{\sum_i ^n \mathcal{R}_{ij} (t) g_{ij}(t,\tau)}{\sum_i ^n \mathcal{R}_{ij} (t) } \end{equation} Therefore, $\overline{\mathcal{J}}_j(t)$ is the total number of new infections generated by previous infections in $X_j$. It is interesting that the generation interval distribution $\overline{g}_j(t,\tau)$ takes the form of a weighted average of the $g_{ij}(t,\tau)$, in which the $\mathcal{R}_{ij}(t)$ are the weights. Thus, the implementation of the proposed method in this work amounts to: identifying the terms $\bm{\mathcal{F}}$ and $\bm{\mathcal{V}}$ from a model; using them to find $\bm{\Omega}$ and $\bm{\Gamma}$; obtaining $\bm{A}$ and integrating to get $\bm{\mathcal{R}}$ and $\bm{g}$. Further in this work we present applications of the method and estimations using actual data. Examples of the method in different types of models can be found in Supplementary Material 1. \subsection{The total reproduction number} After obtaining the reproduction numbers of the constituents of the system, we now intend to define a reproduction number for the whole system. The number of new infections in a group, $\mathcal{F}_i(t)$, can be described as a fraction of the total number of cases from all groups, $\mathcal{F}^T(t)=\sum_i ^n \mathcal{F}_i(t)$, such that \begin{equation}\label{F_prop} \mathcal{F}_i(t)= \alpha_i(t) \mathcal{F}^T(t). \end{equation} \noindent Since $\alpha_i(t)$ is the proportion of the total number of cases that $\mathcal{F}_i(t)$ represents, the condition $1=\sum_i \alpha_i(t)$ must be satisfied. Thus, using \eqref{renewal_het} with \eqref{F_prop}, we obtain: \begin{equation}\label{renewal_het_prop} \mathcal{F}^T(t)= \mathcal{R}^T(t) \int_{0}^{\infty} g^T(t,\tau)\mathcal{F}^T(t-\tau) d\tau, \end{equation} \noindent where $\mathcal{R}^T(t)= \bm{\alpha} \cdot \overline{\bm{\mathcal{R}}}$ is the reproduction number of the system and \begin{equation}\label{g_prop} g^T(t,\tau) = \frac{\sum_i \alpha_i(t) \overline{\mathcal{R}}_i(t) \; \overline{g}_i(t,\tau)}{\sum_i \alpha_i(t) \overline{\mathcal{R}}_i(t)} . \end{equation} Since $\mathcal{R}^T(t)$ is the scalar product between $\bm{\alpha}$ and $\overline{\bm{\mathcal{R}}}$, we can interpret it as the projection of $\overline{\bm{\mathcal{R}}}$ over the fractions $\bm{\alpha}$, forming a weighted average. Thus, it is interesting to note that we can describe the system as an average of its heterogeneities. Since the definition of the total reproduction number $\mathcal{R}^T$ is very general, it is not always meaningful. For example, if we form a system of two independent dynamics, that is, $\mathcal{R}_{ij} = 0$ for $i \neq j$, it is still possible to obtain an $\mathcal{R}^T$, even though it has no meaning. The $\alpha_i(t)$ functions are the key for analysing the feasibility of a total reproduction number that has a dynamical meaning. We focus our attention on a case where the $\alpha_i(t)$ appear naturally in the equations.
If $\Omega_{ij}(t,\tau)$ can be separated into a function of $t$ depending on the ``$i$'' index and another function of $t$ and $\tau$ depending on the ``$j$'' index, then it can be written as \begin{equation}\label{Omega_prop} \bm{\Omega} = \bm{\alpha} \otimes \overline{\bm{\Omega}}= \Big [\alpha_i(t) \overline{\Omega}_j(t,\tau) \Big] \end{equation} \noindent where $\otimes$ represents a tensor product. We define $\overline{\Omega}_j = \sum_i \Omega_{ij}$ and notice that $\overline{A}_j = \sum_{k} \overline{\Omega}_k \Gamma_{kj}$ and $\bm{A} = \bm{\alpha} \otimes \overline{\bm{A}}$. Thus, $\bm{\Omega}$ and $\bm{A}$ can be separated into the proportions $\bm{\alpha}$ and the profiles $\overline{\bm{\Omega}}$ and $\overline{\bm{A}}$. In fact, the above equation also impacts \eqref{R_bar_j} and \eqref{J_bar}, which can be factorized in terms of $\bm{\mathcal{R}} = \bm{\alpha} \otimes \overline{\bm{\mathcal{R}}}$ and $\bm{\mathcal{J}} = \bm{\alpha} \otimes \overline{\bm{\mathcal{J}}}$. Using \eqref{g_bar} we can also get $g_{ij}(t,\tau) = \overline{g}_j(t,\tau)$. Furthermore, because the next-generation matrix is obtained from a tensor product of vectors, the largest eigenvalue of $\bm{\mathcal{R}}$ corresponds to the scalar product of $\overline{\bm{\mathcal{R}}}$ and $\bm{\alpha}$, that is, $\mathcal{R}^T$. Thus, in these systems the total reproduction number assessed at the disease-free equilibrium point, $t=0$, corresponds to the basic reproduction number, $\mathcal{R}^T(0)= \mathcal{R}_0$. Also, in that case, $\mathcal{R}^T$ is the spectral radius of $\bm{\mathcal{R}}$, as in \cite{van den Driessche}. This is very common in disease transmission models; in fact, the SIR and SEIR models are examples of this case. \subsection{Estimations with real data } So far, we have developed a general framework to estimate the reproduction numbers from the rate of new infections $\bm{\mathcal{F}}$. However, when we deal with real data, what we have is the collection of all of the new infections in a period of time $\Delta t$, which leads to the definition of $\bm{\mathcal{B}}$ and $\bm{\mathcal{T}}$: \begin{equation}\label{reported_cases} \mathcal{B}_i(t) = \rho_i \int_{t} ^{t+\Delta t} \mathcal{F}_i(t') dt', \qquad \mathcal{T}_{ij}(t) = \rho_i \int_{t} ^{t+\Delta t} \mathcal{J}_{ij}(t') dt'. \end{equation} Therefore, $\mathcal{B}_i(t)$ are the reported cases in $X_i$, where $\rho_i$ is a constant related to a higher or lower notification rate, due to under- or over-reporting. Analogously, we define $\mathcal{T}_{ij}(t)$ as the number of reported cases in $X_i$ due to previous cases in $X_j$. By assuming that $\mathcal{R}_{ij}$, $g_{ij}(t,\tau)$ and $\mathcal{F}_i $ are approximately constant during a $\Delta t$ interval, we use \eqref{J_ij_R} and \eqref{F_het} to derive \begin{equation}\label{T} \mathcal{T}_{ij}(t) = \mathcal{R}_{ij} (t) \sum_{\tau=0}^{t} \frac{\rho_i}{\rho_j} \; g_{ij} (t,\tau) \mathcal{B}_j(t-\tau) \Delta t \end{equation} \noindent where $\mathcal{B}_i(t)=\sum_j^n\mathcal{T}_{ij}(t)$. Equation \eqref{T} is a general form of the discrete version of the renewal equation. In fact, by using \eqref{renewal_het_prop} we are able to recover a well-known result in the literature \cite{Fraser}: \begin{equation} \mathcal{R}^T(t)= \frac{\mathcal{B}^T(t)}{\sum_{\tau=0}^{t} g^T (t,\tau) \mathcal{B}^T(t-\tau)\Delta t }, \end{equation} \noindent where $\mathcal{B}^T(t) = \sum_i ^n \mathcal{B}_i(t)$.
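The estimator in the last equation can be implemented in a few lines. The sketch below is a minimal Python illustration, assuming a common reporting fraction across groups (so that the $\rho$ factors cancel) and a generation interval distribution already discretized on the daily grid ($\Delta t = 1$).
\begin{verbatim}
import numpy as np

def estimate_R_total(B, g):
    # B: (T,) daily reported cases; g: (T,) discretized generation
    # interval distribution with g[0] = 0 and sum(g) = 1.
    # Implements R^T(t) = B(t) / sum_{tau=1..t} g(tau) B(t - tau).
    T = len(B)
    R = np.full(T, np.nan)
    for t in range(1, T):
        denom = np.sum(g[1:t + 1] * B[t - 1::-1])  # g(tau) * B(t - tau)
        if denom > 0:
            R[t] = B[t] / denom
    return R
\end{verbatim}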
\section{Explicit expression of $\mathcal{R}(t)$ for two meta-population models} In this section we analyse two meta-population models detailed in Supplementary Material 2. The methodology developed in this study is applied to both models to obtain the corresponding reproduction numbers and generation interval distributions; see Supplementary Material 1 for detailed calculations. \subsection{SIR-type meta-population model} In this section we consider a meta-population model that takes into account groups of spatially separated ``island'' populations with some level of interaction. Such models are widely used for systems in which the movement of individuals between meta-populations is considered \cite{van den Driessche_spatial,Bichara,Lloyd,Lajmanovich,Arino,Keeling,Miranda}. Since each population is connected with the others, the system can be interpreted as a network in which the nodes represent the meta-populations and the weights of the edges represent the intensity of the movement between them. This type of model does not describe the daily movement of individuals explicitly, but rather as an interaction between the meta-populations. It is suitable when the population sizes are not permanently affected by the flow of individuals, as in the case of commuter movement between locations of residence, work and study. This type of movement is necessarily cyclical, predictable and regularly recurring, most of the time on a daily basis. In this model, we assume that each meta-population $i$, with $N_i$ individuals, has its own transmission rate $\beta_i(t)$. Also, the movement of individuals between meta-populations is taken into account, where we introduce $\Phi_{ij}(t)$ as the density of flow between the populations $i$ and $j$, that is, the number of residents of $i$ commuting from $i$ to $j$ divided by the total population of $i$, $N_i$. The parameters $\beta_i(t)$ and $\Phi_{ij}(t)$ are time dependent, so we can incorporate changes in the behavior of the populations into those variables. In Supplementary Material 2 we derive this model, inspired by \cite{Miranda}, and in Supplementary Material 1 we obtain the corresponding reproduction numbers and generation interval distribution for it, which read: \begin{equation} \mathcal{R}_{ij}(t) = S_i(t) \frac{\lambda_{ij}(t)}{\gamma}, \qquad g_{ij}(\tau)= g(\tau)= \gamma \; e^{-\gamma \, \tau}. \end{equation} \noindent Here $\lambda_{ij}$ is related to the transmission of the disease from a meta-population ``$i$'' to another meta-population ``$j$'' and is derived based on simple assumptions about the commuter movement of individuals in the network; see Supplementary Material 2. Notably, if we isolate the meta-populations from the network, $\Phi_{ij}(t)=0$ for all $i$ and $j$, then the reproduction numbers and the generation interval distributions become identical to those of the classical SIR model \cite{Nishiura}. \subsection{A meta-population model for Covid-19 (SEIIR)} The classical SIR model is a very simple and qualitative approach to disease transmission dynamics, but it does not provide the best description for most diseases. From now on, we will focus on a model for a specific disease, COVID-19, caused by the SARS-CoV-2 coronavirus. In this case, the transmission can be facilitated by the existence of individuals whose symptoms are very weak or even nonexistent \cite{Li_R}; this heterogeneity can therefore change the dynamics.
In order to have a consistent description of this aspect, we assume the existence of two classes of infected individuals, the symptomatic and the asymptomatic/undetected ones, as considered in a general model for the SARS-CoV-2 coronavirus \cite{Oliveira}. Therefore, we now take into account infected individuals who do not need to be hospitalized and are not recorded in official registered data, thus becoming undetectable. For the sake of simplicity, we will refer to those individuals only as asymptomatic ones. In Supplementary Material 2 we derive this model, based on the meta-population SIR-type approach described in the previous section. In Supplementary Material 1 the expressions for the reproduction number and generation interval distribution are derived in detail. There, we show that we only need $\bm{\mathcal{R}}$ for $i,j\leq n$ to describe the dynamics. Thus, in this main framework, whenever we refer to $\bm{\mathcal{R}}$ or $\bm{g}$ we will be alluding to their $i,j\leq n$ elements. We obtain: \begin{equation} \mathcal{R}_{ij}(t) = S_i(t)\lambda_{ij}(t) \Big[ \frac{p}{\gamma_s} + \frac{\delta (1-p)}{\gamma_a}\Big], \qquad g_{ij}(\tau)= g(\tau)= \frac{\frac{p}{\gamma_s} g^s(\tau) + \frac{\delta (1-p)}{\gamma_a}g^a(\tau)}{\frac{p}{\gamma_s} + \frac{\delta (1-p)}{\gamma_a}}. \end{equation} \noindent The expressions for $\lambda_{ij}$ are the same as those presented for the SIR-type model in Supplementary Material 2. $\delta$ is a factor that reduces or enhances the asymptomatic infectivity, $p$ is the proportion of individuals that become symptomatic when infected, and $\gamma_a$ and $\gamma_s$ are the recovery rates of the asymptomatic and symptomatic individuals, respectively. $g^a(\tau)$ and $g^s(\tau)$ are expressed in terms of exponential functions according to \eqref{eqgas}: \begin{equation}\label{eqgas} g^a(\tau) = \frac{\kappa \gamma_a}{\gamma_a - \kappa} (e^{-\kappa \tau} - e^{-\gamma_a \tau} ), \qquad \qquad \qquad g^s(\tau) = \frac{\kappa \gamma_s}{\gamma_s - \kappa} (e^{-\kappa \tau} - e^{-\gamma_s \tau} ) . \end{equation} \noindent Notably, if we isolate the meta-populations from the network, $\Phi_{ij}(t)=0$ for all $i$ and $j$, the reproduction numbers and generation interval distributions return to the expressions obtained in \cite{Oliveira}. \section{Applications for the meta-population models using actual data} In this section, we present results for the methodology applied to the meta-population models developed in the previous section. We use actual data from the first six months of the COVID-19 pandemic in Brazilian cities, such as: reported cases in each municipality, daily commuter movement due to work between municipalities, and daily mobility trends towards workplaces. In Supplementary Material 3 we derive and present the expressions and parameters needed to estimate the reproduction numbers for both meta-population models. Thus, we obtain a daily time series of the reproduction numbers for each model. \subsection{Database} In this work, we use daily notifications of new cases due to COVID-19 in Brazil, obtained from public websites: \url{https://covid.saude.gov.br/} and \url{https://brasil.io/datasets/}, which provide data from the Health Ministry. We obtained data on the intermunicipal commuter movement of workers and students from a study on population arrangements and urban concentrations in Brazil conducted by IBGE (Brazilian Institute of Geography and Statistics) in 2015, which can be found in \cite{ibge}.
In addition, we obtain daily mobility data for each Brazilian state from a public report by Google, accessed at: \url{https://google.com/covid19/mobility/}. We used the data observed until September 14th, 2020, and performed a 10-day moving average in order to attenuate noise and better express the data trend. To take the social distancing restrictions into account, we considered only the commuter movement data related to work, since, with the mitigation measures of COVID-19, the flow due to education was significantly reduced. In addition, because the movement towards work also dropped due to social isolation policies, we used the mobility data obtained from the community mobility report provided by Google to estimate this reduction. This database compares, for each state, the daily mobility to workplaces with past trends; therefore, we can access the percentage of reduction in commuting to work. A moving average is performed on the mobility time series and we reduce the intermunicipal work flow according to the percentage indicated by the data. Thus, we obtain the flow of individuals due to commuter work that leads to $\Phi_{ij}(t)$. The parameters used to feed the model were obtained in \cite{Jorge} and are displayed in Supplementary Material 3. The data from the state of Rio de Janeiro was selected for this analysis. The ten cities with the highest commuter flow with the capital, Rio de Janeiro (RJ), were chosen, namely: Duque de Caxias (DdC), Nova Iguaçu (NI), São João de Meriti (SJdM), Niterói (Nt), São Gonçalo (SG), Belford Roxo (BR), Nilópolis (Ns), Mesquita (Mq), Queimados (Q), Magé (Ma). All of the cities chosen for this work are part of the Rio de Janeiro metropolitan region. Additional information about the municipalities is presented in Supplementary Material 4. \subsection{Analyses of the results} In our first results, shown in Figure \ref{fig:1}, we present a comparison between the SIR and SEIIR outputs. Using the daily time series of the reproduction numbers (see Supplementary Material 3), we obtain the series of $\overline{\bm{\mathcal{R}}}(t)$ and $\bm{\mathcal{T}}(t)$ from \eqref{R_bar_j} and \eqref{T}, respectively. We observe that the $\overline{\bm{\mathcal{R}}}$ of the SEIIR model is, on average, $33\%$ higher than the corresponding results for the SIR model. On the other hand, the estimations of the total number of exported cases that are reported, $ \sum _{t}\sum_{i} ^n \mathcal{T}_{ij}(t)$ for $i \neq j$, are very similar for both the SIR and SEIIR models. Also, it seems that the total commuter movement, which is the sum of all the inflow and outflow happening in a municipality, is not the only factor that determines the number of exported cases of a municipality. This non-linearity can be observed when comparing São João de Meriti (SJdM) and Niterói (Nt) or Duque de Caxias (DdC) and Nova Iguaçu (NI) (see Figure \ref{fig:1}b). Those municipalities have a similar amount of total flow but very different results for the exported cases. Interestingly, even without having the highest $\overline{\mathcal{R}}_j$, the capital, Rio de Janeiro (RJ), presents the largest amount of exported cases, which also showcases the non-linear dynamics of the phenomenon. \begin{figure} \centering \includegraphics[width={1.\linewidth}]{Figures/Compare.pdf} \caption{\textbf{Comparison between SIR and SEIIR outputs.} SIR results in blue and SEIIR ones in orange. The bar graph (a) compares the SIR and SEIIR $\overline{\mathcal{R}}_i$ averages in time for all municipalities.
In (b), for each municipality, estimations of the total number of exported cases that are reported for both models are displayed along with the total commuter movement in that city. The names of the municipalities are abbreviated using acronyms: Rio de Janeiro (RJ), Duque de Caxias (DdC), Nova Iguaçu (NI), São João de Meriti (SJdM), Niterói (Nt), São Gonçalo (SG), Belford Roxo (BR), Nilópolis (Ns), Mesquita (Mq), Queimados (Q), Magé (Ma).} \label{fig:1} \end{figure} From now on, we will only look at the results of the SEIIR model. With $\mathcal{T}_{ij}(t)$ we are able to access the contribution of each municipality to the outbreaks happening in the state. Thus, by dividing $\mathcal{T}_{ij}(t)$ by $\mathcal{B}_i(t)$ at every time step, we obtain a time series for the proportion of the total cases in ``$i$'' generated by ``$j$''. We then evaluate the mean value over time of $\mathcal{T}_{ij}/\mathcal{B}_i$, which is displayed in Figure \ref{fig:2}. A highly autochthonous behavior of the disease transmission is observed, whereby the highest influence of a city is on itself. Therefore, most of the cases generated in a municipality are caused by its own individuals. However, we also identify cities where the cases generated by other municipalities are very important. The $\bm{\mathcal{R}}(t)$ matrix also corroborates the presence of an important autochthonous behavior, as its diagonal elements correspond to the highest values of the reproduction numbers. We also observed many very small off-diagonal elements throughout the matrix. \begin{figure} \centering \includegraphics[width={0.75\linewidth}]{Figures/Matrix_SEIIR.pdf} \caption{\textbf{Influence of one municipality on the number of cases in another.} The heatmap captures the time-averaged influence of one municipality on the number of cases in another, as a proportion of the total number of daily cases, $\mathcal{T}_{ij}/\mathcal{B}_i$. $i$ corresponds to the rows and $j$ to the columns. } \label{fig:2} \end{figure} In Figure \ref{fig:3} we illustrate the results of Figure \ref{fig:2}, whereby only non-autochthonous influences above 5\% are considered. Thus, we must highlight the capital, Rio de Janeiro (RJ), as the most important agent in the disease transmission to the municipalities in the network. However, cities like Nova Iguaçu (NI) also present themselves as relevant disseminators of the pathogen. As shown in Figure \ref{fig:1}, Nova Iguaçu (NI) is the largest exporter of cases besides the capital. In Figure \ref{fig:3} we identify cities like São João de Meriti (SJdM), Belford Roxo (BR), Nilópolis (Ns), Mesquita (Mq) and Queimados (Q) as the main receptors of those cases. Niterói (Nt), despite having a number of exported cases similar to that of NI, did not present a high influence on many cities. On the other hand, Niterói (Nt) generates a significant amount of cases in São Gonçalo (SG), highlighting the importance of the connection between those two municipalities. \begin{figure} \centering \includegraphics[width={0.63\linewidth}]{Figures/Rplot.pdf} \caption{\textbf{Visualization of the influence between municipalities.} Here, we present a visualization of the results from Figure \ref{fig:2}. Only non-autochthonous influences above 5\% were considered. The thickness of the lines connecting municipalities is proportional to the number of cases that one generates in the other.
The color of each line represents the municipality that is generating the cases.} \label{fig:3} \end{figure} \begin{figure} \centering \includegraphics[width={1.\linewidth}]{Figures/Time_Series.pdf} \caption{\textbf{Reconstruction of the time series.} The black dots represent the daily reported cases in (a) São Gonçalo (SG), (b) São João de Meriti (SJdM). The blue dots are the number of cases generated in (a) São Gonçalo (SG), (b) São João de Meriti (SJdM) due to the capital, Rio de Janeiro (RJ). In red we have the number of cases reported in (a) São Gonçalo (SG) due to Niterói (Nt); (b) São João de Meriti (SJdM) due to Nova Iguaçu (NI).} \label{fig:4} \end{figure} We illustrate the time series reconstruction in Figure \ref{fig:4}, where two scenarios are displayed. In the first one, we focus on the cases reported in São Gonçalo (SG) and compare the total amount, $\mathcal{B}_i(t)$, with the number of daily cases in SG generated by Rio de Janeiro (RJ) and Niterói (Nt). We observe that Nt has a predominance over the capital, presenting a higher number of generated cases at all times. The second scenario is related to the total number of cases in São João de Meriti (SJdM) and the contributions due to Rio de Janeiro (RJ) and Nova Iguaçu (NI). In this case, the capital presents the largest number of cases generated in that city, besides the city itself. It is also interesting to observe in both scenarios how the contributions of the cities merge into a part of the total cases notified in a municipality. \section{Discussion} Understanding and dealing with infectious diseases is an ongoing challenge for humankind. The methodology presented in this work is very general and can be applied to multiple disease transmission systems. By merging the model dynamics with the available epidemic data, this method is able to estimate key epidemiological factors, like the reproduction numbers and the generation interval distributions. These theoretical results are the basis for many possible data analyses, especially for highly heterogeneous systems, which have been lacking a suitable methodology for estimating the reproduction number from actual data. The method is robust and reproduces known results in the literature, as shown in Supplementary Material 1 for the SIR, SEIR and SEIIR models. This methodology opens room for the analysis of more sophisticated models, leading to a better understanding and control of infectious diseases. With the emergence of the COVID-19 pandemic, caused by the SARS-CoV-2 coronavirus, in December 2019, scientists all over the world have joined forces to formulate control strategies \cite{Manica}. Factors like the presence of asymptomatic individuals, risk groups and the spatial distribution of the disease bring out a system with multiple layers of heterogeneity \cite{Li_R, Eikenberry, Miranda, Gomes}. Although the application of some vaccines has recently begun, mitigating COVID-19 with non-pharmacological strategies remains essential. Therefore, in the second half of this work we focused on the application of the developed methodology to two models with spatial heterogeneity, one focusing on the COVID-19 specific dynamics. The demography of each municipality, captured by the $\beta$ parameters, combined by the model with the commuter movement, leads to some interesting results. The reproduction numbers obtained for the two models indicate that they differ only by multiplicative factors, namely $1/\gamma$ and $ \frac{p}{\gamma_s} + \frac{\delta (1-p)}{\gamma_a}$, as in \cite{Oliveira, Jorge}. 
The measured results show that the presence of asymptomatic individuals is related to an increase in the reproduction number, because the model predicts more infected individuals than what is reported. Another substantial result is obtained when comparing the exported cases with the total commuter movement in a municipality. In this case, the relationship is not direct, since the intricate topology of the network fosters a non-linear dynamic in the phenomenon. This result points out the importance of a model to provide epidemiological meaning to the available data. For this specific case, it became evident that analysing the system only through the commuter movement data would not be enough to point out the key cities in the disease transmission dynamic, as we did using the models and methodology. The results of the model display the role of each municipality in the epidemic on the network. Cities like Nova Iguaçu, Niterói and São Gonçalo stand out as important agents in the spread of the pathogen throughout the metropolitan region of Rio de Janeiro, Brazil. The capital is highlighted as the main hub of spreading, which is related to its high incidence of the disease combined with its central role in the movement behavior of individuals on the network, as also observed in \cite{Jorge, Miranda, Liu}. However, cases like Niterói and São Gonçalo, where the interaction between the two cities is stronger than the capital's influence, cannot be neglected. This calls attention to the relevance of analyses, like the one in this work, that provide an epidemiological interpretation of the data. In this work, we chose to present an analysis with actual data in which the reproduction number is not the main result. We proceeded with this approach to portray the reproduction number not as a dead-end result, but as a tool for obtaining deeper analyses, such as the reconstruction of the time series and the number of exported cases. Our results for the Rio de Janeiro metropolitan area, however, have limitations. The model developed in this work provides a very simple approximation of intercity commuter flow, while more sophisticated models available in the literature are required to provide a more precise description of the movement behavior. Another limitation of this work is that it does not consider the interstate flow, since the state of Rio de Janeiro was portrayed as a closed system. Also, we did not take into account further heterogeneous features besides space and asymptomatic presence, which could be accounted for as well by the general theoretical formalism, leaving a gap for other key characteristics of the COVID-19 dynamics, like age groups. In addition, the examples presented in this work considered actual reported data of confirmed cases and mobility, which may present problems of underestimation and reporting delay. Taking those limitations into account, the results displayed here are still able to give a substantial understanding of the system studied, which is a common feature in mathematical modelling. Finally, we reinforce the importance of the method proposed in this work and highlight its broad applicability to infectious disease models. 
\enlargethispage{20pt} \ethics{Since all data handled in this study is publicly available, an approval by an ethics committee is not required, according to Resolutions 466/2012 and 510/2016 (article 1, sections III and V) from the National Health Council (CNS), Brazil.} \dataccess{All data and codes are gathered and presented in our public GitHub repository at \url{https://github.com/danielcpj/Rt-heterogeneous-models}} \aucontribute{DCPJ, STRP and RFSA conceived of and designed the methodology presented in this study. DCPJ, STRP, JGVM and JFO formulated and interpreted the meta-population models and applications. DCPJ performed the data analysis and drafted the manuscript. All authors read, reviewed and approved the manuscript.} \competing{The authors declare that they have no competing interests.} \funding{DCPJ was funded by a Scientific Initiation scholarship from CNPq (process number 117568/2019-8). JFO was supported by the Fiocruz Program of Promotion of Innovation - innovative ideas and products - COVID-19, orders and strategies - Inova Fiocruz (Processo VPPIS-005-FIO-20-2-40), and the Center of Data and Knowledge Integration for Health (CIDACS) through the Zika Platform - a long-term surveillance platform for Zika virus and microcephaly (Unified Health System (SUS), Brazilian Ministry of Health). STRP was supported by an International Cooperation grant (process number INT0002/2016) from Bahia Research Foundation (FAPESB). RFSA was supported by Brazilian agency CNPq through Grants No. 422561/2018-5 and 304257/2019-2. STRP and RFSA were supported by the National Institute of Science and Technology - Complex Systems from CNPq, Brazil. JGVM acknowledges the support of the National Council of Technological and Scientific Development, CNPq, Brazil (Grant number: 307828/2018-2).} \ack{The authors acknowledge the discussions and suggestions from members of the CoVida Network (\url{http://www.redecovida.org}).} \beginsupplement \section*{Supplementary Materials} \paragraph*{Supplementary Material 1-}{Methodology applied to epidemiological compartment models.} \vspace{-10pt} \paragraph*{Supplementary Material 2-}{Meta-population models formulation.} \vspace{-10pt} \paragraph*{Supplementary Material 3-}{Expressions and parameter values to estimate the reproduction numbers of the meta-population models.} \vspace{-10pt} \paragraph*{Supplementary Material 4-}{Additional information about the municipalities.} \section*{Applying the method to epidemiological models} We proceed to illustrate the application of the method developed in this work to compartmental epidemic models. We chose two simple models that are known in the literature and two variations of a meta-population model that is used in the main framework and developed in Supplementary Material 2. We seek to show the method step by step, presenting its detailed calculations. All models presented are composed of ordinary differential equations. In the transitions between infected compartments described in these models, $\mathbfcal{V}(t,\tau)$, the infected compartments $\bm{x}(t,\tau)$ appear with linear dependence. Therefore, we can simplify: \begin{equation}\label{cte} \dfrac{\partial\overline{\mathcal{V}_i}}{\partial u_j} = \dfrac{\partial \mathcal{V}_i}{\partial x_j}=\text{const} \qquad \text{and} \qquad \frac{d}{d\omega} \bm{u}(\omega) = -\frac{\partial {\mathbfcal{V}}}{\partial \bm{x}} \bm{u}(\omega). 
\end{equation} \noindent In addition, all the parameters of the models in this Supplementary Material are constant, which leads to \begin{equation} \bm{\Omega}(t)= \frac{\partial}{\partial \bm{X}} {\mathbfcal{F}}(t) \qquad \text{and} \qquad \overline{\bm{\Gamma}}(\omega) = \bm{\Gamma}(\tau). \end{equation} \subsection*{SEIR model} The SEIR model is obtained by introducing a new exposed stage of the disease, $E$, into the SIR model \cite{SIRfirst}. We can assume that when an individual becomes infected, it must go through a latency period before showing its first symptoms and starting to infect other individuals. This is accomplished by introducing the exposed compartment $E$ and its removal rate $\kappa$. Thus, all individuals who are infected start in the exposed state and, on average, after a $1/\kappa$ latency time are introduced into the $I$ compartment. It is also possible to introduce a factor, $\epsilon$, related to pre-symptomatic infection. This way, individuals can start to infect while in the exposed compartment, before presenting symptoms. Thus, this model, considering pre-symptomatic infection, can be written as \begin{align} \frac{dS}{dt}& =- \frac{\beta S}{N}\Big [ I + \epsilon E \Big ],\\ \frac{dE}{dt}& = \frac{\beta S}{N}\Big [ I + \epsilon E \Big ] - \kappa E,\\ \frac{dI}{dt}& = \kappa E - \gamma I,\\ \frac{dR}{dt}&= \gamma I. \end{align} Thus, we can sort the infected compartments as $\bm{X}(t)=[E(t), \;I(t)]$. In this way the distributions of the infectious phase are defined as $\bm{x}(t,\tau)=[i_e(t,\tau) ,i_i(t,\tau)]$. We get ${\mathbfcal{F}}(t)$ and ${\mathbfcal{V}}(t,\tau)$: \begin{equation} {\mathbfcal{F}}(t) = \begin{pmatrix} \frac{\beta S}{N}\big [ I + \epsilon E \big ]\\[1ex] 0 \end{pmatrix}, \qquad {\mathbfcal{V}}(t,\tau) = \begin{pmatrix} \kappa i_e(t,\tau)\\[1ex] \gamma i_i(t,\tau) - \kappa i_e(t,\tau) \end{pmatrix}, \end{equation} \noindent and we recover the subset of equations for the infected compartments with: \begin{equation} \frac{d}{dt} \bm{X}(t) = \mathbfcal{F}(t) - \int_0 ^{\infty} \mathbfcal{V}(t,\tau) d\tau. \end{equation} \noindent The change from $E$ to $I$ is not considered a new infection, but the progression of the disease stage. Therefore, new infections only occur in the exposed compartment $E$, causing $\mathcal{F}_2(t)=0$. Thus, from ${\mathbfcal{F}}(t)$, we obtain $\bm{\Omega}(t)$: \begin{equation} \bm{\Omega}(t) = \Big[ \frac{\partial}{\partial \bm{X}} {\mathbfcal{F}}(t) \Big] = \begin{pmatrix} \epsilon \frac{\beta S}{N} & \frac{\beta S}{N} \\ 0 & 0 \end{pmatrix}. \end{equation} \noindent We proceed to obtain $\bm{\Gamma}(\tau)$ by solving \begin{equation} \frac{d}{d\omega} \bm{u}(\omega) = - \frac{\partial {\mathbfcal{V}}}{\partial \bm{x}} \bm{u}(\omega) = \begin{bmatrix} -\kappa & 0 \\ \kappa & -\gamma \end{bmatrix} \bm{u}(\omega), \end{equation} \noindent which yields \begin{equation} \bm{u}(\omega) = \begin{bmatrix} e^{-\kappa\omega} & 0 \\ \frac{\kappa}{\gamma -\kappa} [e^{-\kappa\omega} - e^{-\gamma\omega} ]& e^{-\gamma\omega} \end{bmatrix} \bm{u}(0). 
\end{equation} \noindent Therefore: \begin{equation} \bm{\Gamma}(\tau) = \begin{bmatrix} e^{-\kappa\tau} & 0 \\ \frac{\kappa}{\gamma -\kappa} [e^{-\kappa\tau} - e^{-\gamma\tau} ]& e^{-\gamma\tau} \end{bmatrix} \end{equation} With $\bm{\Omega}(t)$ and $\bm{\Gamma}(\tau)$ we perform the multiplication between the matrices, in order to obtain \begin{equation} \bm{A}(t,\tau) = \begin{bmatrix} \epsilon \frac{\beta S}{N} & \frac{\beta S}{N} \\ 0 & 0 \end{bmatrix} \begin{bmatrix} e^{-\kappa\tau} & 0 \\ \frac{\kappa}{\gamma -\kappa} [e^{-\kappa\tau} - e^{-\gamma\tau} ]& e^{-\gamma\tau} \end{bmatrix}. \end{equation} \noindent All terms in the second line are null, leaving only \begin{align} A_{11}(t,\tau)&= \epsilon \frac{\beta S}{N}e^{-\kappa\tau} + \frac{\beta S}{N}\frac{\kappa}{\gamma -\kappa} [e^{-\kappa\tau} - e^{-\gamma\tau} ],\\ A_{12}(t,\tau)&= \frac{\beta S}{N} e^{-\gamma\tau}. \end{align} \noindent Performing the integral from zero to infinity with respect to $\tau$ we obtain the reproduction numbers of the system ${\mathbfcal{R}}(t)$: \begin{equation} {\mathbfcal{R}}(t)=\beta\frac{S}{N} \begin{pmatrix} \dfrac{\epsilon}{\kappa}+ \dfrac{1}{\gamma} & \dfrac{1}{\gamma} \\[2ex] 0 & 0 \end{pmatrix}, \end{equation} \noindent where it is clear that \begin{equation} \bm{\overline{\mathbfcal{R}}}= \beta\frac{S}{N} \begin{pmatrix} \dfrac{\epsilon}{\kappa}+ \dfrac{1}{\gamma} \\[2ex] \dfrac{1}{\gamma} \end{pmatrix}. \end{equation} Since there is only generation of new infected individuals in the exposed compartment, we have that $\mathcal{F}_1(t) = \mathcal{F}^T(t)$. Thus, the vector $\bm{\alpha}(t)$ can be written as $\bm{\alpha} = \big(1, 0\big)$, thus \begin{equation} {\mathbfcal{R}} = \bm{\alpha} \otimes \bm{\overline{\mathbfcal{R}}}= \begin{pmatrix} 1 \\ 0 \end{pmatrix} \bigotimes \beta\frac{S}{N} \begin{pmatrix} \dfrac{\epsilon}{\kappa}+ \dfrac{1}{\gamma} \\[2ex] \dfrac{1}{\gamma} \end{pmatrix}. \end{equation} \noindent Thus, to obtain the total reproduction number of the system, it is enough to take the scalar product between $\bm{\overline{\mathbfcal{R}}}$ and $\bm{\alpha}$: \begin{equation} \mathcal{R}^T(t)= \bm{\alpha} \bm{\cdot} \bm{\overline{\mathbfcal{R}}} = \frac{\beta S}{N} \Big [ \frac{\epsilon}{\kappa}+ \frac{1}{\gamma} \Big ], \end{equation} \noindent which leads to \begin{equation} g^T(\tau)= \frac{\epsilon \; e^{-\kappa\tau} + \frac{\kappa}{\gamma -\kappa} [e^{-\kappa\tau} - e^{-\gamma\tau} ]}{\epsilon/\kappa+ 1/\gamma} . \end{equation} When we take $\epsilon \to 0$, we retrieve the reproduction number of the classic SEIR model (without pre-symptomatic infection), $\mathcal{R}_0 = \beta/\gamma$. Similarly, the generation interval distribution of the SEIR model is also reduced to the well-known form in the literature \cite{champredon2018equivalence}, demonstrating the robustness of the method. It is trivial to apply the method to the SIR model, whose analysis corresponds to taking the limit $\kappa \to \infty$. Therefore, both the reproduction number and the generation interval distribution, $g(\tau)= \gamma e^{-\gamma \tau}$, return to the known results from the literature \cite{nishiura2009effective}. \subsection*{SIIR model} This is an adaptation of the SIR model, where we include two different manifestations of the disease, $I_1$ and $I_2$. In this model, there is only one susceptible population, whose individuals can evolve into two compartments that carry the infectious agent. 
Both types of the disease carry the same pathogen, so that individuals infected by either type can progress to $I_1$ or $I_2$. When infected, an individual has the probability $p$ of manifesting the type $I_1$ of the disease and $q = (1-p)$ of manifesting the type $I_2$. We assume two independent infection rates, $\beta_1$ and $\beta_2$, for $I_1$ and $I_2$. Likewise, each compartment has its own removal rate, $\gamma_1$ or $\gamma_2$. So, we write the model equations: \begin{align} \frac{dS}{dt}& =- \frac{(\beta_1 I_1 +\beta_2 I_2)S}{N},\\ \frac{dI_1}{dt}& = p\frac{(\beta_1 I_1 +\beta_2 I_2)S}{N} - \gamma_1 I_1,\\ \frac{dI_2}{dt}& = q\frac{(\beta_1 I_1 +\beta_2 I_2)S}{N} - \gamma_2 I_2,\\ \frac{dR}{dt}&= \gamma_1 I_1 + \gamma_2 I_2. \end{align} We sort the infected compartments as $\bm{X}(t)=[I_1(t), \;I_2(t)]$ and $\bm{x}(t,\tau)=[i_1(t,\tau) ,i_2(t,\tau)]$, where it is clear that $\bm{X}(t)= \int_{0}^{\infty}\bm{x}(t,\tau) d\tau$. We obtain ${\mathbfcal{F}}(t)$ and ${\mathbfcal{V}}(t,\tau)$ as: \begin{equation} {\mathbfcal{F}}(t) =\dfrac{S}{N} \begin{pmatrix} p (\beta_1 I_1 +\beta_2 I_2)\\[1ex] q (\beta_1 I_1 +\beta_2 I_2) \end{pmatrix}, \qquad {\mathbfcal{V}}(t,\tau) = \begin{pmatrix} \gamma_1 i_1(t,\tau)\\ \gamma_2 i_2(t,\tau) \end{pmatrix}. \end{equation} \noindent Thus \begin{equation} \bm{\Omega}(t) = \dfrac{S}{N} \begin{pmatrix} p\beta_1 & p\beta_2 \\[1ex] q\beta_1 & q\beta_2 \end{pmatrix}, \qquad -\frac{\partial {\mathbfcal{V}}}{\partial \bm{x}} = \begin{pmatrix} -\gamma_1 & 0 \\ 0 & -\gamma_2 \end{pmatrix}. \end{equation} The linear system of O.D.E.s described by the matrix $-\partial {\mathbfcal{V}}/\partial \bm{x}$ is simple to solve, since this is a diagonal matrix. So, we get \begin{equation} \bm{\Gamma}(\tau) = \begin{pmatrix} e^{-\gamma_1 \tau} & 0 \\ 0 & e^{-\gamma_2 \tau} \end{pmatrix}, \end{equation} \noindent which, when multiplied by the matrix $\bm{\Omega}(t)$, results in \begin{equation} \bm{A}(t,\tau) = \dfrac{S}{N} \begin{pmatrix} p\beta_1 \; e^{-\gamma_1 \tau} & p\beta_2\; e^{-\gamma_2\tau} \\[2ex] q\beta_1 \; e^{-\gamma_1 \tau} & q\beta_2 \; e^{-\gamma_2 \tau} \end{pmatrix}. \end{equation} \noindent It is interesting to realize that this is one of the cases where $A_{ij}(t,\tau)= \alpha_{i} \overline{A}_j(t,\tau)$, recalling that $\overline{A}_j(t,\tau)= \sum_{i} A_{ij}(t,\tau)$. Therefore $\bm{\alpha} =\big(p,q\big)$ and we can factorize the matrix into $\overline{\bm{A}}$ and $\bm{\alpha}$ as: \begin{equation} \bm{A} = \bm{\alpha} \otimes \overline{\bm{A}} = \begin{pmatrix} p \\ q \end{pmatrix} \bigotimes \begin{pmatrix} \dfrac{\beta_1 S}{N} \; e^{-\gamma_1 \tau} \\[2ex] \dfrac{\beta_2 S}{N}\; e^{-\gamma_2\tau} \\ \end{pmatrix}, \end{equation} \noindent where $\otimes$ represents the tensor product, $\bm{\alpha} \otimes \bm{\overline{\bm A}}= \Big [ \alpha_i \overline{A}_j \Big ]$. Integrating $\overline{\bm{A}}(t,\tau)$ with respect to $\tau$ we get: \begin{equation} \bm{\overline{\mathbfcal{R}}}(t) = \int_0 ^{\infty}\overline{\bm{A}}(t,\tau) d\tau =\dfrac{ S}{N} \begin{pmatrix} \beta_1/\gamma_1\\[1ex] \beta_2/\gamma_2 \\ \end{pmatrix}. \end{equation} \noindent Therefore, we proceed to the scalar product between $\bm \alpha$ and $\overline{\mathcal{\bm{R}}}$: \begin{equation} \mathcal{R}^T (t) = \bm{ \alpha \cdot \overline{\mathbfcal{R}}}=\frac{S(t)}{N} \bigg [ p \frac{\beta_1}{\gamma_1} + q\frac{\beta_2}{\gamma_2} \bigg ]. \end{equation} \noindent When $t \to 0$, $\mathcal{R}_0 = p \beta_1/\gamma_1 + q\beta_2/\gamma_2$. 
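As a quick numerical consistency check (a minimal sketch in Python, not part of the original derivation; all parameter values are arbitrary illustrations), one can integrate the columns of $\bm{A}(t,\tau)$ over $\tau$ at $t\to 0$ and verify both the value of $\mathcal{R}_0$ above and the normalization of the generation interval distribution derived next:
\begin{verbatim}
# Numerical check of the SIIR reproduction number and of the
# generation-interval normalization; parameters are illustrative.
import numpy as np

beta1, beta2 = 0.4, 0.3      # assumed infection rates
gamma1, gamma2 = 0.2, 0.25   # assumed removal rates
p = 0.6
q = 1.0 - p

tau = np.linspace(0.0, 400.0, 400001)   # integrand decays exponentially
dtau = tau[1] - tau[0]
integrate = lambda f: np.sum(f) * dtau  # simple Riemann sum

# Column sums of A(t, tau) at t -> 0 (S/N = 1).
A1 = beta1 * np.exp(-gamma1 * tau)
A2 = beta2 * np.exp(-gamma2 * tau)

R0 = integrate(p * A1 + q * A2)
print(R0, p * beta1 / gamma1 + q * beta2 / gamma2)  # the two agree

# Generation-interval distribution g^T(tau) and its normalization.
gT = (p * A1 + q * A2) / (p * beta1 / gamma1 + q * beta2 / gamma2)
print(integrate(gT))  # close to 1
\end{verbatim}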
We realize that the total reproduction number is the sum of the reproduction numbers of the two types of infection, weighted by the percentage of occurrence of each. Thus, we proceed to obtain the distribution of the generation interval, which takes the form: \begin{equation} g^T(\tau) = \dfrac{ p \beta_1 \; e^{-\gamma_1 \tau} + q \beta_2 \; e^{-\gamma_2 \tau} }{ p\beta_1/\gamma_1 + q\beta_2/\gamma_2 }. \end{equation} \subsection*{SIR-type meta-population model} In this section we consider a meta-population model that will be used in the main framework and is detailed in Supplementary Material 2. Here we summarize the model. We consider the existence of ``$n$'' meta-populations with coupled SIR-type dynamics, where, due to the movement of individuals between meta-populations, an infected compartment in one meta-population can influence the disease transmission process of all the others. The coupling of the equations happens through the transmission rates $\lambda_{ij}$, related to the contamination process that emerges from the flow of individuals. The SIR-type model for $n$ meta-populations can be written as \begin{align} \frac{dS_i}{dt} =& - \sum_{j}^{n} \lambda_{ij}(t)\;I_j(t) \; S_i(t),\label{eqS}\\ \frac{dI_i}{dt}=& \sum_{j}^{n} \lambda_{ij}(t)\;I_j(t)\; S_i(t) -\gamma \; I_i(t),\label{eqI-sup1}\\ \frac{dR_i}{dt}=& \gamma I_i(t),\label{eqR-sup1} \end{align} \noindent in which $S_i(t)$, $I_i(t)$ and $R_i(t)$ correspond to the susceptible, infected and removed individuals that belong to the meta-population ``$i$''. The $\lambda_{ij}$ parameter is related to the transmission between the meta-populations ``$i$'' and ``$j$'', see Supplementary Material 2. We assume that the recovery rate $\gamma$ is uniform for all meta-populations. The infected compartments are sorted as $\bm{X}(t)=[I_1(t), \;I_2(t), \hdots , I_n(t)]$ and $\bm{x}(t,\tau)=[i_1(t,\tau), \;i_2(t,\tau), \hdots , i_n(t,\tau)]$, where it is clear that $\bm{X}(t)= \int_{0}^{\infty}\bm{x}(t,\tau) d\tau$. We proceed to identify $\mathcal{F}_i(t)$ and $\mathcal{V}_i(t,\tau)$ as \begin{equation} \bm{\mathbfcal{F}}(t) = \Bigg [ \sum_j ^n \lambda_{ij}(t) I_j(t) S_i(t) \Bigg ], \qquad \bm{\mathbfcal{V}}(t,\tau) = \Bigg [ \gamma \; i_i(t,\tau) \Bigg ]. \end{equation} Therefore we get: \begin{equation} \bm{\Omega}(t) = \begin{pmatrix} \lambda_{11} S_1 & \lambda_{12} S_1 & \ldots & \lambda_{1n} S_1 \\ \lambda_{21} S_2 & \lambda_{22} S_2 & \ldots & \lambda_{2n} S_2 \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_{n1} S_n & \lambda_{n2} S_n & \ldots & \lambda_{nn} S_n \end{pmatrix},\qquad -\frac{\partial \bm{\mathbfcal{V}}}{\partial \bm{x}} = \begin{pmatrix} -\gamma & 0 & \ldots & 0 \\ 0 & -\gamma & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & -\gamma \end{pmatrix}. \end{equation} It is clear that $-\partial \bm{\mathbfcal{V}}/\partial \bm{x}$ is a diagonal matrix, such that $\dfrac{d}{d\omega}\bm{u}(\omega) = -\dfrac{\partial \bm{\mathbfcal{V}}}{\partial \bm{x}} \bm{u}(\omega)$ has the trivial solution \begin{equation} \bm{\Gamma}(\tau) = e^{-\gamma \tau} \mathds{I}, \end{equation} \noindent where $\mathds{I}$ represents the identity matrix of dimension ``$n$''. 
Thus, when multiplying the matrices $\bm{\Omega}$ and $\bm{\Gamma}$ we arrive at: \begin{equation} \bm{A}(t,\tau) = \Big [ \lambda_{ik} S_i \; e^{-\gamma \tau}\Big ]. \end{equation} \noindent We integrate $\bm{A}$ in order to obtain the next-generation matrix of the system, $\bm{\mathbfcal{R}}$, given by: \begin{equation} \bm{\mathbfcal{R}}(t) = \Bigg [ \dfrac{\lambda_{ik} S_i}{\gamma} \Bigg ]. \end{equation} \noindent That leads to: \begin{equation} g_{ij}(\tau)= g(\tau)= \gamma \; e^{-\gamma \, \tau}, \end{equation} \noindent whereby, if only one meta-population is considered, the SIR model results are recovered \cite{nishiura2009effective}. \subsection*{SEIIR-type meta-population model} Finally, we present a meta-population model related to the transmission dynamics of the SARS-CoV-2 coronavirus. As in the previous section, the meta-populations are connected by the commuter movement of individuals. In this model, individuals that are infected have to pass through a latency period before becoming infectious; during that time, we consider them to be in the exposed compartment $E$. After a mean latency period of $1/\kappa$, those infected become symptomatic or asymptomatic, $I^s$ and $I^a$ respectively. While in these compartments, the individuals of ``$j$'' are able to generate new infected ones in ``$i$'' based on the transmission rate $\lambda_{ij}$. However, asymptomatic individuals are considered to have a lower transmissibility; thus, the transmission rate must be multiplied by a factor $\delta$ that lowers their infectivity. The SEIIR model was developed by Oliveira in \cite{Oliveira2020mathematical}, and in Supplementary Material 2 we derive an SEIIR-type meta-population model that can be written as: \begin{align} \frac{dS_i}{dt} =& -\sum_{j}^{n} \lambda_{ij}(t)\; S_i(t) \Big [I^s_j(t)+\delta I^a_j(t)\Big ],\label{eqS-seiir}\\ \frac{dE_i}{dt}=& \sum_{j}^{n} \lambda_{ij}(t)\; S_i(t) \Big [I^s_j(t)+\delta I^a_j(t)\Big ] -\kappa \; E_i(t),\\ \frac{dI^s_i}{dt}=& p\kappa \; E_i(t) -\gamma_s I^s _i(t),\\ \frac{dI^a_i}{dt}=& (1-p)\kappa \; E_i(t) -\gamma_a I^a _i(t),\\ \frac{dR_i}{dt}=& \gamma_s I^s_i(t) + \gamma_a I^a_i(t).\label{eqR-seiir} \end{align} To proceed with the methodology, we sort the compartments as $X=[ E_1, ..., E_n, {I_1}^{s}, ..., {I_n}^{s}, {I_1}^{a}, ..., {I_n}^{a}]$ and $x=[ {i^e}_{1}, ..., {i^e}_{n}, {i^s}_{1}, ..., {i^s}_{n}, {i^a}_{1}, ..., {i^a}_{n}]$, in a way that if there are $n$ meta-populations we have $3n$ infected compartments. Therefore, we obtain $\mathbfcal{F}(t)$ and $\mathbfcal{V}(t,\tau)$ as: \begin{align}\label{F_SEIIR} \mathcal{F}_i(t)= \begin{cases} \sum_{j}^{n} \lambda_{ij}(t)\; S_i(t) \Big [I^s_j(t)+\delta I^a_j(t)\Big ] , & \text{for $i\leq n$} \\ 0, & \text{for $i > n$} \end{cases} \end{align} \begin{align}\label{V_SEIIR} \mathcal{V}_i(t,\tau)= \begin{cases} \kappa\; x_i(t,\tau) , & \text{for $i\leq n$} \\ \gamma_s \; x_i(t,\tau) - p\kappa\; x_{i-n}(t,\tau), & \text{for $ n < i \leq 2n$} \\ \gamma_a \; x_i(t,\tau) - (1-p)\kappa\; x_{i-2n}(t,\tau), & \text{for $ 2n < i$} \end{cases} \end{align} \noindent where $I_j^s=X_{j+n}$ and $I_j^a=X_{j+2n}$. 
Thus, from \eqref{F_SEIIR}, we obtain: \begin{align}\label{Omega_SEIIR} \bm{\Omega}(t)= \begin{pmatrix} \bm{0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} &\bm{\Omega^s} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \bm{\Omega^a} \\ \hline \bm{0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} &\bm{0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \bm{0} \\ \hline \bm{0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} &\bm{0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \bm{0} \end{pmatrix}, \qquad \qquad -\frac{\partial \bm{\mathbfcal{V}}}{\partial \bm{x}}= \begin{pmatrix} -\kappa & 0 & 0 \\ p \kappa & -\gamma_s & 0 \\ (1-p) \kappa & 0 & -\gamma_a \end{pmatrix} \mbox{\Large{$\otimes$}} \; \mbox{\Large{$\mathds{I}_n$}}, \end{align} \noindent where ${\mathds{I}_n}$ is an $n \times n$ identity matrix and $\otimes$ is a tensor product. Also, $\bm{\Omega}$ is divided into nine $n \times n$ submatrices, where $\bm{0}$ is an all-zeroes $n \times n$ matrix, $\bm{\Omega}^s\equiv \Big [\lambda_{ij} S_i \Big ]$ and $\bm{\Omega}^a\equiv \delta \bm{\Omega}^s$. Solving the system of differential equations on the characteristic line, we obtain: \begin{align}\label{Gamma_SEIIR} \bm{\Gamma}(\tau)= \begin{pmatrix} e^{-\kappa \tau} & 0 & 0 \\[2ex] p\frac{\kappa}{\gamma_s -\kappa} \Big(e^{-\kappa\tau} - e^{-\gamma_s\tau} \Big ) & e^{-\gamma_s \tau} & 0 \\[2ex] (1-p)\frac{\kappa}{\gamma_a -\kappa} \Big(e^{-\kappa\tau} - e^{-\gamma_a\tau} \Big ) & 0 & e^{-\gamma_a \tau} \end{pmatrix} \mbox{\Large{$\otimes$}} \; \mbox{\Large{${\mathds{I}_n}$}}. \end{align} We proceed with the multiplication of the matrices $\bm{\Omega}$ and $\bm{\Gamma}$ in order to obtain: \begin{align}\label{A_SEIIR} \bm{A}(t,\tau)= \begin{pmatrix} \bm{A^e} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} &\bm{A^s} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \bm{A^a} \\ \hline \bm{0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} &\bm{0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \bm{0} \\ \hline \bm{0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} &\bm{0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \bm{0} \end{pmatrix}, \end{align} \noindent whereby the submatrices are defined as: \begin{equation} \bm{A^e}(t,\tau) \equiv \Bigg [ \lambda_{ij}(t) S_i(t) \bigg( p\frac{\kappa}{\gamma_s -\kappa} \Big(e^{-\kappa\tau} - e^{-\gamma_s\tau} \Big ) + \delta (1-p)\frac{\kappa}{\gamma_a -\kappa} \Big(e^{-\kappa\tau} - e^{-\gamma_a\tau} \Big ) \bigg) \Bigg ], \end{equation} \noindent $\bm{A^s}\equiv e^{-\gamma_s \tau}\bm{\Omega^s}$ and $\bm{A^a}\equiv e^{-\gamma_a \tau} \bm{\Omega^a}$. 
Therefore, the next-generation matrix is \begin{align}\label{R_SEIIR} \bm{\mathbfcal{R}}(t)= \begin{pmatrix} \bm{\mathbfcal{R}^e} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} &\bm{\mathbfcal{R}^s} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \bm{\mathbfcal{R}^a} \\ \hline \bm{0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} &\bm{0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \bm{0} \\ \hline \bm{0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} &\bm{0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \bm{0} \end{pmatrix}, \end{align} \noindent where: \begin{equation} \bm{\mathbfcal{R}^e}(t) \equiv \Bigg [ \lambda_{ij} (t) S_i(t) \bigg( \frac{p}{\gamma_s} + \delta \frac{(1-p)}{\gamma_a} \bigg) \Bigg ]. \end{equation} \noindent Because there is no generation of infected individuals in the symptomatic and asymptomatic compartments, the expressions for $\bm{\mathbfcal{R}^s}$ and $\bm{\mathbfcal{R}^a}$ are not needed. This is so because $\mathcal{F}_i(t)$ for $i>n$ is null at any time. Therefore, the renewal equations of $\mathcal{F}_i(t)$ for $i>n$ have the tautological result that zero equals zero, regardless of the values of $\bm{\mathbfcal{R}^s}$ or $\bm{\mathbfcal{R}^a}$. Thus, all we need to describe the dynamics is $\bm{\mathbfcal{R}}$ for $i,j\leq n$, that is, $\bm{\mathbfcal{R}^e}$. Lastly, we obtain the generation interval distribution matrix for $i,j\leq n$: \begin{equation} g_{ij}(\tau) \equiv g(\tau)= \frac{\frac{p}{\gamma_s} g^s(\tau) + \frac{\delta (1-p)}{\gamma_a}g^a(\tau)}{\frac{p}{\gamma_s} + \frac{\delta (1-p)}{\gamma_a}}, \end{equation} \noindent with \begin{equation} g^a(\tau) = \frac{\kappa \gamma_a}{\gamma_a - \kappa} (e^{-\kappa \tau} - e^{-\gamma_a \tau} ), \qquad \qquad \qquad g^s(\tau) = \frac{\kappa \gamma_s}{\gamma_s - \kappa} (e^{-\kappa \tau} - e^{-\gamma_s \tau} ), \end{equation} \noindent where, if only one meta-population is considered, we recover the results in the literature \cite{Oliveira2020mathematical}. \newpage \topskip0pt \vspace*{\fill} \begin{center} \begin{minipage}{.6\textwidth} \Large{\textbf{Supplementary Material 2}}\\ \large{\textcolor{gray}{\textbf{Meta-population models formulation}}} \end{minipage} \end{center} \vspace*{\fill} \newpage \section*{Meta-population models formulation} Inspired by \cite{miranda2021scaling}, in this Supplementary Material we develop a meta-population model that takes into account the movement of individuals between meta-populations to describe the propagation of a disease throughout multiple municipalities. We interpret the inter-municipal flow as a complex network whose nodes represent municipalities and whose edge weights represent the intensity of the flow between the connected municipalities. We also consider that each municipality can be interpreted as a meta-population, with its own compartments and parameters, which can be represented by a vector $\bm{y}_i(t)$, where the index “$i$” indicates the meta-population portrayed. Each entry of $\bm{y}_i(t)$ represents a compartment of this meta-population, such as $\bm{y}_i(t)=\big (S_i(t), I_i(t), R_i(t) \big)$ in the case of a SIR model, in which $S_i(t)$, $I_i(t)$ and $R_i(t)$ correspond to the susceptible, infected and removed individuals that are residents of the meta-population "$i$", respectively. The sum of the elements of $\bm{y}_i$ corresponds to the number of individuals of the meta-population "$i$"; in the SIR model we have $S_i(t) + I_i(t) + R_i(t) =N_i(t)$. 
We can represent the number of individuals that go from the meta-population "$i$" to another meta-population "$j$" each day as $\varphi_{ij}$, so that it represents the flow of individuals between those meta-populations. Since each meta-population is described by its compartments, we can describe the flow of those using $\bm{y}_i(t)$ in \begin{equation} \text{Flow from $i$ to $j$} = \varphi_{ij}(t) \frac{\bm{y}_{i}}{N_i}. \end{equation} We define $\Phi_{ij}(t) \equiv \frac{\varphi_{ij}(t)}{N_i}$ as the density of flow. In that way, $\Phi_{ij}\; \bm{y_i}$ is the number of individuals from each compartment class that are flowing from "$i$" to "$j$". It is natural to see that the sum of $\Phi_{ij}\; \bm{y}_i$ over the compartment classes of $\bm{y}_i$ is equal to $\varphi_{ij}$; in the SIR model, for example: $\Phi_{ij}\; S_i +\Phi_{ij}\;I_i + \Phi_{ij}\; R_i = \varphi_{ij}$. Given that the meta-populations are connected through the flow, it is necessary to identify how they are connected. For this purpose, graphs are used, which, in general, represent complex networks, given the large number of connections. Networks are represented by matrices, as in the case of adjacency matrices, whose elements here represent the density of flow ($\Phi_{ij}$). Thus, the values of each $\Phi_{ij}(t)$ are the weights of the edges, with a null value representing the absence of an edge. Since there are no self-interactions in this network, we do not consider the flow of a meta-population to itself, thus $\varphi_{ii}\equiv 0$. Due to the circulation of individuals through the network, the characteristics of the meta-populations will be changed. Thus, we define the effective population of ``$i$'', described by the vector $\bm{y}_{e,\;i} = \big(S_{e\; ,i}(t),I_{e\; ,i}(t),R_{e\; ,i}(t) \big)$, which corresponds to the individuals located, at time $t$, in the “$i$” meta-population regardless of their origin meta-population. The effective population represents the new characteristics of a population due to the flow; for example, a meta-population that does not have infected resident individuals may have carriers of the pathogen in their effective population due to the flow from some other meta-population that presents infected individuals. We can write $\bm{y}_{e,\; i}$ as: \begin{equation} \bm{y}_{e,\; i}(t) = \overbrace{\bm{y}_i(t)}^\text{ Resident Pop. } -\quad\overbrace{ \sum_{j \neq i}^{n} \, \Phi_{ij} \; \bm{y}_{i}(t) }^\text{Outflow} \quad+ \quad \overbrace{\sum_{j \neq i}^{n} \Phi_{ji} \;\bm{y}_{j}(t)}^\text{Inflow}, \end{equation} \noindent with $n$ being the number of meta-populations. Rearranging the equation: \begin{equation}\label{Effective_Pop} \bm{y}_{e,\;i}(t) = \bm{y}_{i}(t)\Big( 1 - \sum_{j}^{n} \, \Phi_{ij} \Big) + \sum_{j}^{n} \Phi_{ji}\; \bm{y}_{j}(t) \end{equation} It is clear then that the effective population of "$i$" is the sum of the individuals from "$i$" that are in "$i$", $\bm{y}_{i}\Big( 1 - \sum_{j \neq i}^{n} \, \Phi_{ij} \Big)$, with the individuals from other meta-populations that are in "$i$", $\sum_{j \neq i}^{n} \Phi_{ji}\; \bm{y}_{j}$. If we sum the elements of $\bm{y}_{e,\; i}$ we have the effective population number: $N_{e,\; i}= N_i - \sum_{j \neq i}^{n}\varphi_{ij} + \sum_{j \neq i}^{n}\varphi_{ji}$. It is important to highlight that the resident population is not changed; therefore, we do not consider migration but only commuter periodic movement, whereby the individuals of one meta-population go to another to work or study. 
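To make the bookkeeping above concrete, the following minimal sketch (in Python; the populations and flows are made-up illustrative numbers, not data from this study) evaluates Eq. \eqref{Effective_Pop} for a SIR-type state and checks that commuting conserves the total number of individuals:
\begin{verbatim}
# Sketch: effective populations y_{e,i} from Eq. (Effective_Pop).
# All numbers are illustrative, not taken from the study's data.
import numpy as np

N = np.array([1.0e6, 2.0e5, 5.0e4])     # resident population sizes
# y[i] = (S_i, I_i, R_i) for a SIR-type description
y = np.column_stack([0.99 * N, 0.01 * N, 0.0 * N])

phi = np.array([[0.0,   3.0e4, 1.0e4],  # phi[i, j]: commuters from i to j
                [2.0e4, 0.0,   5.0e3],
                [8.0e3, 2.0e3, 0.0  ]])
Phi = phi / N[:, None]                  # density of flow, Phi_ij = phi_ij/N_i

outflow = Phi.sum(axis=1)               # sum_j Phi_ij (Phi_ii = 0)
# y_e[i] = y[i]*(1 - sum_j Phi_ij) + sum_j Phi_ji * y[j]
y_e = y * (1.0 - outflow)[:, None] + Phi.T @ y

# Commuting only relocates individuals, so the total is conserved.
assert np.isclose(y_e.sum(), y.sum())
print(y_e)
\end{verbatim}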
We now proceed to describe how the transmission of a disease occurs in a meta-population, taking into account the flow of individuals in the network. We must then describe the occurrence of new infections in the meta-population $\bm{y}_i(t)$, which can occur within the “$i$” meta-population, \begin{equation}\label{Infection_In_city} \frac{\beta _i}{N_i} \, \times \overbrace{S_{i} \Big( 1 - \sum_{j}^{n} \, \Phi_{ij} \Big)}^\text{Susceptible Indv. from ``$i$'' in ``$i$''} \times \overbrace{I_{e,\;i}}^\text{Infected Indv. in ``$i$''}, \end{equation} \noindent or on another meta-population "$j$", \begin{equation}\label{Infection_Out_city} \frac{\beta _j}{N_j} \times \overbrace{\Phi_{ij} \, S_{i}}^\text{Susceptible Indv. from ``$i$'' in ``$j$''} \times \overbrace{I_{e,\; j}}^\text{Infected Indv. in ``$j$''}, \end{equation} \noindent with $\beta_i(t)$ and $\beta_j(t)$ being the transmission rates within the “$i$” and “$j$” meta-populations, respectively. The parameters $\beta_i(t)$ and $\Phi_{ij}(t)$ are time dependent, so we can incorporate the changes in the behavior of the populations in those variables. Thus, the number of new infections of individuals that belong to “$i$”, $\mathcal{F}_i(t)$, will be equal to the sum of \eqref{Infection_In_city} and \eqref{Infection_Out_city}. We can substitute equation \eqref{Effective_Pop} in expressions \eqref{Infection_In_city} and \eqref{Infection_Out_city}, in order to expand the terms of the effective population of infected individuals ($I_{e\;,i}(t)$, $I_{e\;,j}(t)$) and be able to express them according to the infected residents in each meta-population. Carrying out these operations, we arrive at: \begin{equation}\label{Lambda} \mathcal{F}_i(t) = \sum_{j}^{n} \lambda_{ij}(t)\; S_i(t)I_j(t), \end{equation} with: \begin{align} \lambda_{ii}&=\frac{\beta _i}{N_i} \; \Big( 1 - \sum_{j}^{n} \, \Phi_{ij} \Big)^2 + \sum_{j}^{n}\; \frac{\beta _j}{N_j} \; \Phi_{ij}^2 \label{lambdaI} \\ \lambda_{ij}&= \frac{\beta _i}{N_i} \; \Phi_{ji} \Big( 1 - \sum_{k}^{n} \, \Phi_{ik} \Big) + \frac{\beta _j}{N_j} \; \Phi_{ij} \; \Big( 1 - \sum_{k}^{n} \, \Phi_{jk} \Big) + \sum_{k}^{n} \frac{\beta _k}{N_k} \Phi_{ik} \; \Phi_{jk} \label{lambdaJ} \end{align} The diagonal terms $\lambda_{ii}(t)$ are related to the transmission between individuals of the same meta-population, while the off-diagonal terms $\lambda_{ij}(t)$ describe the transmission from individuals of the “$j$” meta-population to individuals of the “$i$” meta-population. \subsection*{SIR-Type model} In this section, we construct the set of equations for the SIR-type model based on the contamination process modeled in this Supplementary Material. We model the resident populations of each municipality and do not consider migration; therefore, resident individuals of one meta-population always remain residents of that same meta-population. It is assumed that the susceptible population can only decrease due to the infection process; therefore, there are no birth and death dynamics. In the same way, the removed individuals are only generated by a removal rate $\gamma$, uniform for all meta-populations. Gathering those considerations, we write the system of equations: \begin{align} \frac{dS_i}{dt} =& - \sum_{j}^{n} \lambda_{ij}(t)\;I_j(t) \; S_i(t),\label{eqS-met}\\ \frac{dI_i}{dt}=& \sum_{j}^{n} \lambda_{ij}(t)\;I_j(t)\; S_i(t) -\gamma \; I_i(t),\label{eqI}\\ \frac{dR_i}{dt}=& \gamma I_i(t).\label{eqR} \end{align} \noindent It is clear that when $\varphi_{ij}=0$ for all $i$ and $j$, each meta-population is described by the classical homogeneous SIR model. 
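The coupled transmission rates used above can be assembled directly from the $\beta_i$, $N_i$ and $\Phi_{ij}$. The sketch below (in Python; the numerical values are again purely illustrative, not fitted parameters) implements Eqs. \eqref{lambdaI} and \eqref{lambdaJ}:
\begin{verbatim}
# Sketch: assembling lambda_ij from Eqs. (lambdaI) and (lambdaJ).
# beta, N and Phi are illustrative values, not fitted parameters.
import numpy as np

n = 3
beta = np.array([0.30, 0.25, 0.20])   # within-city transmission rates
N = np.array([1.0e6, 2.0e5, 5.0e4])
Phi = np.array([[0.0,  0.03, 0.01 ],
                [0.10, 0.0,  0.025],
                [0.16, 0.04, 0.0  ]])  # Phi[i, j] = phi_ij / N_i

stay = 1.0 - Phi.sum(axis=1)           # fraction of residents staying home
b = beta / N                           # beta_k / N_k

lam = np.empty((n, n))
for i in range(n):
    for j in range(n):
        if i == j:
            # lambda_ii = b_i (1 - sum_k Phi_ik)^2 + sum_j b_j Phi_ij^2
            lam[i, i] = b[i] * stay[i] ** 2 + np.sum(b * Phi[i] ** 2)
        else:
            # lambda_ij = b_i Phi_ji stay_i + b_j Phi_ij stay_j
            #             + sum_k b_k Phi_ik Phi_jk
            lam[i, j] = (b[i] * Phi[j, i] * stay[i]
                         + b[j] * Phi[i, j] * stay[j]
                         + np.sum(b * Phi[i] * Phi[j]))
print(lam)
\end{verbatim}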
Of course, this SIR approach does not take into account the other heterogeneities of a disease besides space. In the following section we present a more sophisticated approach focused on the COVID-19 transmission dynamics. \newpage \subsection*{A meta-population model for COVID-19 (SEIIR)} In this section, we establish a more precise description of the dynamics of the SARS-CoV-2 coronavirus on the network of municipalities. To do so, we consider, as in \cite{Oliveira2020mathematical}, that the infected individuals can be separated into three classes: the exposed $E$, infected individuals who are in the latency period and do not transmit the disease; the symptomatic individuals $I^s$, who are infectious, present a substantial amount of symptoms and are registered in the official data; and the asymptomatic/undetected ones $I^a$, who are infectious but present mild/non-existing symptoms and are not registered in the official data. The infected individuals always start in the exposed compartment; a portion $p$ of them eventually becomes symptomatic and the other $(1-p)$ portion becomes asymptomatic. Since we have two types of infectious individuals, both of them must be taken into consideration in the generation of new infected individuals. Therefore, assuming that the asymptomatic transmission rate is a fraction $\delta$ of the symptomatic transmission rate, it is simple to derive, similarly to \eqref{Infection_In_city} and \eqref{Infection_Out_city}, the number of infected individuals that are generated in the exposed compartment of each meta-population: \begin{equation}\label{Lambda2} \mathcal{F}^e_i(t) = \sum_{j}^{n} \lambda_{ij}(t)\; S_i(t) \Big [I^s_j(t)+\delta I^a_j(t)\Big ]. \end{equation} \noindent Therefore, under the same assumptions presented in the previous section, the following model is obtained: \begin{align} \frac{dS_i}{dt} =& -\sum_{j}^{n} \lambda_{ij}(t)\; S_i(t) \Big [I^s_j(t)+\delta I^a_j(t)\Big ],\label{eqS-met2}\\ \frac{dE_i}{dt}=& \sum_{j}^{n} \lambda_{ij}(t)\; S_i(t) \Big [I^s_j(t)+\delta I^a_j(t)\Big ] -\kappa \; E_i(t),\\ \frac{dI^s_i}{dt}=& p\kappa \; E_i(t) -\gamma_s I^s _i(t),\\ \frac{dI^a_i}{dt}=& (1-p)\kappa \; E_i(t) -\gamma_a I^a _i(t),\\ \frac{dR_i}{dt}=& \gamma_s I^s_i(t) + \gamma_a I^a_i(t),\label{eqR2} \end{align} \noindent where $\kappa$, $\gamma_s$ and $\gamma_a$ are the removal rates of the exposed, symptomatic and asymptomatic compartments, respectively, and are uniform across all meta-populations. \newpage \topskip0pt \vspace*{\fill} \begin{center} \begin{minipage}{.6\textwidth} \Large{\textbf{Supplementary Material 3}}\\ \large{\textcolor{gray}{\textbf{Expressions and parameter values for evaluating $\mathbfcal{R}(t)$ for the meta-population models}}} \end{minipage} \end{center} \vspace*{\fill} \newpage \section*{Expressions for evaluating $\mathbfcal{R}(t)$ using incidence data} Here, we derive the expressions and parameter values needed to estimate the reproduction numbers for the meta-population models. We start by substituting the explicit expressions of the reproduction numbers of each model in equation 2.24 of the main framework. After some straightforward calculations, we obtain: \begin{equation}\label{Est} \mathbfcal{Q}_i (t) = \sum_j ^n \lambda_{ij}(t) a_j(t), 
\end{equation} \noindent where $\mathbfcal{Q}_i (t) = \mathcal{B}_i(t)/S_i(t)$ and \begin{equation} a_j(t)= \sum_{\tau=0} ^t \frac{1}{\gamma}\, g(\tau)\, \mathcal{B}_j(t-\tau)\, \Delta t \end{equation} \noindent for the SIR model and \begin{equation} a_j(t)= \sum_{\tau=0} ^t \Big[ \frac{p}{\gamma_s} + \frac{\delta (1-p)}{\gamma_a}\Big] g(\tau)\, \mathcal{B}_j(t-\tau)\, \Delta t \end{equation} \noindent for the SEIIR model. We consider that $\mathcal{B}_i(t)$ is the collection of all the new infections during a $\Delta t$ time interval of 1 day. With appropriate units of measure, we have $\Delta t=1$. In the SIR model we consider that every infection is reported, $\rho_i=1$, and in the SEIIR case only the symptomatic individuals report their infections, $\rho_i=p$. We proceed to estimate the susceptible population. If we integrate \eqref{eqS-met} and substitute \eqref{Lambda} and equation 2.23 of the main framework, we get: \begin{equation} S_i(t)= N_i - \sum_{t'=0} ^t\frac{\mathcal{B}_i(t')}{\rho_i}. \end{equation} Finally, substituting \eqref{lambdaI} and \eqref{lambdaJ} in \eqref{Est} results in: \begin{equation}\label{Est2} \mathbfcal{Q}_i (t) = \sum_j ^n \Theta_{ij}(t) \beta_j(t), \end{equation} \noindent for: \begin{align} \Theta_{ii}&= \frac{1}{N_i} \Bigg[ a_i \Big( 1- \sum_k ^n \Phi_{ik} \Big )^2 + \sum_j ^n a_j \Phi_{ji} \Big(1-\sum_k ^n \Phi_{ik} \Big ) \Bigg ],\label{ThetaI} \\ \Theta_{ij}&= \frac{1}{N_j} \Bigg[ a_j \Phi_{ij} \Big(1-\sum_k ^n \Phi_{jk} \Big ) + \Phi_{ij} \sum_{k} ^{n} a_k \Phi_{kj}\Bigg ]. \label{ThetaJ} \end{align} \noindent Therefore, the values of $\mathcal{Q}_i(t)$ and $\Theta_{ij}$ for each day can be obtained using the reported data and parameters. Thus, for each day, this leaves us with an algebraic system of ``$n$'' variables, $\beta_j(t)$, and ``$n$'' equations. We can solve this linear system, with the help of a computer algorithm, and obtain the daily values of $\beta_j(t)$ for every meta-population. Substituting the values of every $\beta_j(t)$ in \eqref{lambdaI} and \eqref{lambdaJ}, we can compute the values of the $\lambda_{ij}(t)$, which leads us to the values of the reproduction numbers, equations 3.1 and 3.2 of the main framework. Since the available data consist of the daily number of cases, $\bm{\mathcal{R}}(t)$ is evaluated for each calendar day. \subsection*{Parameters} To estimate the $\beta_i(t)$ parameters, and consequently the reproduction numbers, the parameters of both models must be obtained. $\delta$, $\gamma_a$, $\gamma_s$, $p$, and $\kappa$ are estimated for the state of Rio de Janeiro in \cite{jorge2020assessing} and can be found in Table \ref{tabPara}. Additionally, in the SIR-type model we assume $\gamma=\gamma_s$. The intermunicipal commuter movement of workers and students for the cities of Brazil, $\varphi_{ij}$, can be found in a study conducted by IBGE (Brazilian Institute of Geography and Statistics) \cite{ibge2016arranjos}. \begin{table}[H] \caption{Key epidemiological parameters of the SEIIR model obtained in \cite{jorge2020assessing}. 
} \label{tabPara} \centering \begin{tabular}{lll} \hline \textbf{Parameter} & \textbf{Description} & \textbf{Value} \\ \hline $\delta$ & Asymptomatic/non-detected infectivity factor & $0.258$ \\ $p$ & Proportion of latent (E) that proceed to symptomatic infective & $0.273$ \\ $\kappa^{-1}$ & Mean exposed period (days) & $1/0.25$ \\ $\gamma_{a}^{-1}$ & Mean asymptomatic period (days) & $1/0.288$ \\ $\gamma_{s}^{-1}$ & Mean symptomatic period (days) & $1/0.25$ \\ \hline \end{tabular} \end{table} \newpage \topskip0pt \vspace*{\fill} \begin{center} \begin{minipage}{.6\textwidth} \Large{\textbf{Supplementary Material 4}}\\ \large{\textcolor{gray}{\textbf{Additional information about the municipalities}}} \end{minipage} \end{center} \vspace*{\fill} \newpage \section*{Additional information about the municipalities} \begin{table}[H] \centering \resizebox{0.9\textwidth}{!}{% \begin{tabular}{llll} \hline \textbf{Municipality} & \textbf{Acronym} & \textbf{Population size} & \textbf{Total reported cases} \\ \hline Belford Roxo & BR & 485,687 & 8,578 \\ Duque de Caxias & DdC & 905,129 & 8,736 \\ Magé & Ma & 242,113 & 3,539 \\ Mesquita & Mq & 800,835 & 1,351 \\ Nilópolis & Ns & 154,749 & 1,245 \\ Niterói & Nt & 497,883 & 12,165 \\ Nova Iguaçu & NI & 167,287 & 5,908 \\ Queimados & Q & 150,333 & 2,376 \\ \textbf{Rio de Janeiro} & RJ & 6,592,227 & 95,444 \\ São Gonçalo & SG & 1,075,372 & 11,601 \\ São João de Meriti & SJdM & 448,340 & 3,167 \\ \hline \end{tabular}% } \caption{The selected cities of the state of Rio de Janeiro in the Southeast of Brazil with their acronyms, number of inhabitants, and the reported COVID-19 cases until September 14th (2020). In bold, the capital (Rio de Janeiro).} \label{tab:my-table} \end{table} \begin{figure}[h] \centering \includegraphics[width={\linewidth}]{Figures/Mapa_Rio.png} \caption{Distribution of cases among the chosen cities of this study.} \label{map} \end{figure} \begin{figure}[H] \centering \includegraphics[width={\linewidth}]{Figures/Rplot12.pdf} \caption{Average daily commuter movement between municipalities, due to workplaces. The thickness of each line is proportional to the amount of movement from one municipality to another. Colors are used only for better visualization and have no specific meaning.} \label{flowmap} \end{figure} \newpage
\section{Introduction} \label{sec:intro} Quenched disorder has a profound effect on the low-energy, low-temperature and long wavelength properties of quantum systems. The interplay between quantum fluctuations, correlations and disorder fluctuations generally results in strong singularities in the thermodynamical quantities and in the (dynamical) correlation functions\cite{qsg,im}. This type of effect takes place even outside the quantum critical region, e.g. in the quantum paramagnetic phase at zero temperature, $T=0$, where spatial correlations are short ranged\cite{im,vojta}. The origin of this phenomenon, as pointed out by Griffiths\cite{griffiths}, is due to rare regions, in which strong bonds are accumulated by extreme fluctuations, so that the system in these regions is locally in the thermodynamically unstable ferromagnetic phase. As a consequence, the excitation energy, $E$, in the rare regions is very small, the relaxation process is very slow and the associated relaxation time, $\tau \sim E^{-1}$, is divergent in the thermodynamic limit. If we consider a finite part of a sample with linear size, $\ell$, the characteristic time-scale of the slowest relaxation process also stays finite and is asymptotically given by: \begin{equation} \tau \sim \ell^{z}\;. \label{tau} \end{equation} Here $z=z(\delta)$ is the dynamical exponent, which is generally a continuous function of the quantum control parameter, $\delta$, which measures the distance from the quantum critical point. According to scaling theory\cite{scaling,ijr} the distribution of the low-energy excitations, $n(E,\ell)$, depends on the scaling combination, $E\ell^z$, and for a small but fixed $E$ it is proportional to the volume, $\ell^d$, since the probability of finding a rare region grows linearly with the volume. From this, the asymptotic behavior of the distribution function in the thermodynamic limit reads as: \begin{equation} n(E) \sim E^{d/z-1}\;. \label{n_E} \end{equation} Thermodynamical quantities which are obtained through integration of the density of states are also singular. For example, the low-temperature behavior of the average linear susceptibility, $\chi(T)$, and that of the specific heat, $c_v(T)$, is expected to scale as\cite{im,vojta} \begin{equation} \chi(T)\sim T^{-1+d/z},\quad c_v(T)\sim T^{d/z}\;, \label{chi_cv} \end{equation} whereas the small-field, $H$, dependence of the zero-temperature magnetization is given by: \begin{equation} m(H) \sim H^{d/z}\;. \label{m_H} \end{equation} One can see from Eq.(\ref{chi_cv}) that the susceptibility is divergent at zero temperature for $z(\delta)>d$, which was first noticed by McCoy\cite{mccoy} in an exact calculation of the random transverse-field Ising chain (RTFIC). Detailed results about Griffiths-McCoy singularities are obtained for one-dimensional (1d) systems partly by numerical investigations (free-fermionic techniques\cite{free_fermion,bigpaper,ijr}, density matrix renormalization method\cite{DMRG,ijl}, quantum Monte Carlo (MC) simulations\cite{QMC}) and partly by analytical calculations\cite{fisher,s1,ijl,i02} based on the use of a strong disorder renormalization group (SDRG) method\cite{im}. For higher dimensional systems Griffiths-McCoy singularities are studied numerically, either by quantum MC simulations\cite{QMC1} or by numerical implementation of the SDRG method\cite{SDRG1}. Analytical and conjecturally exact results about Griffiths-McCoy singularities are scarce and these are practically restricted to the RTFIC. 
Analytical solution of the SDRG equations was first obtained in the vicinity of the quantum critical point\cite{fisher}, i.e. in the weakly disordered and in the weakly ordered Griffiths phases, where the dynamical exponent was shown to diverge as $z(\delta) \sim 1/|\delta|$. The solution was then extended to the complete Griffiths phase\cite{ijl,i02} and the calculated value of $z(\delta)$ was shown to agree with that obtained through a mapping to a random walk problem in a random environment\cite{ir98}. In this paper we use a direct and simple method to calculate exact values of the Griffiths-McCoy singularities in a class of random quantum spin chains. These models include the random tight-binding chain, the random antiferromagnetic XX-chain as well as the RTFIC. The low-energy excitations for each model have the same form: they are obtained from the eigenvalue problem of a symmetric tridiagonal matrix, $\cal M$, see Eq.(\ref{M_tb}), with random (positive) entries. In the off-critical region of the spin chains there is an even-odd asymmetry: the matrix-elements of $\cal M$ are taken from different distributions at even and odd bonds. We calculate the density of states, $n(E)$, in the center of the band by the Dyson-Schmidt technique\cite{dyson} using the random walk idea by Eggarter and Riedinger\cite{eggarter_riedinger}. In Ref.[\onlinecite{eggarter_riedinger}] $n(E)$ is calculated in the continuum approximation for an even-odd symmetric $\cal M$, which corresponds to the critical point of the random quantum spin chains. In the present paper $\cal M$ has a general even-odd asymmetric form, which corresponds to a strongly disordered quantum Griffiths phase and for which the continuum approximation is no longer valid. Having the exact behavior of $n(E)$ at hand we then calculate the singularities of the thermodynamic quantities (specific heat, susceptibility, magnetization). The structure of the paper is the following. The random quantum chain models studied in this paper are presented in Sec.\ref{sec:random}. In Sec.\ref{sec:density} the density of states of the low energy excitations is calculated by the Dyson-Schmidt technique and the relation of this technique to the SDRG method is discussed. Thermodynamic singularities are calculated in Sec.\ref{sec:therm} and the results are discussed in Sec.\ref{sec:disc}. \section{Random quantum chains} \label{sec:random} \subsection{Random tight-binding model} The first model we consider is a one-dimensional tight-binding model with off-diagonal disorder\cite{cohen}, defined by the Hamiltonian: \begin{equation} {\cal H}=\sum_{i} t_i (|i\rangle\langle i+1|+|i+1\rangle\langle i|)\;, \label{H_tb} \end{equation} with random hopping matrix-elements, $t_i$. The hopping matrix-elements are generally taken from different distributions at even ($t_e$) and odd ($t_o$) bonds, so that a quantum control-parameter is defined as: \begin{equation} \delta=\frac{[\ln t_o]_{\rm av}-[\ln t_e]_{\rm av}}{{\rm var}(\ln t_e)+{\rm var}(\ln t_o)}\;, \label{delta} \end{equation} where $[\dots]_{\rm av}$ stands for averaging over quenched disorder and ${\rm var}(x)$ stands for the variance of $x$. For $\delta > 0$ ($\delta < 0$) the model is asymmetric and the particles are preferentially at odd (even) bonds. The symmetric model with $\delta = 0$ corresponds to a quantum critical point. 
In the basis $|i\rangle$, the Hamiltonian is represented by a tridiagonal matrix \begin{equation} {\cal M}= \begin{pmatrix} 0 & t_1 & & & & \cr t_1 & 0 & t_2 & & & \cr & t_2 & 0 & t_3& & \cr & & \ddots &\ddots &\ddots & \cr \end{pmatrix} \label{M_tb} \end{equation} and we are interested in its eigenvalue problem: \begin{equation} {\cal M}\vec{\alpha}=E\vec{\alpha} \label{M} \end{equation} and the corresponding density of states, $n(E)$, at the center of the band. \subsection{Random antiferromagnetic XX-chain} The second model is the random antiferromagnetic XX-chain defined by the Hamiltonian: \begin{equation} {\cal H}_{XX}=\sum_{i} J_i (S_i^x S_{i+1}^x+S_i^y S_{i+1}^y) \label{H_XX} \end{equation} in terms of the spin-$1/2$ operators, $S_i^{x,y}$, at site $i$. Here the $J_i$ exchange couplings are random variables which have different distributions at even ($J_e$) and odd ($J_o$) bonds. Using the Jordan-Wigner transformation: $a_j^{\pm}=S_j^x \pm iS_j^y$ and $c^+_i=a_i^+\exp\left[\pi i \sum_{j}^{i-1}a_j^+a_j^-\right]$ and $c_i=\exp\left[\pi i \sum_{j}^{i-1}a_j^+a_j^-\right]a_i^-$, this Hamiltonian is expressed in terms of fermion creation ($c^+_i$) and annihilation ($c_i$) operators as\cite{lsm}: \begin{equation} {\cal H}_{XX}=\sum_{i} \frac{1}{2}(J_i c^+_i c_{i+1} + {\rm h.c.})\;. \label{ferm_XX} \end{equation} The low-energy states of the model contain one fermion, which can be written in the form $|\psi\rangle=\sum_i \alpha_i c^+_i |0 \rangle$, where $|0 \rangle$ denotes the fermionic vacuum. Energies in this one-fermion subspace are obtained by the solution of the eigenvalue problem of ${\cal M}$ in Eq.(\ref{M_tb}) with the correspondence: \begin{equation} t_i=J_i/2\;. \label{corresp_XX} \end{equation} Then the quantum control parameter of the model is just given by $\delta$ in Eq.(\ref{delta}). In the asymmetric model with $\delta>0$ ($\delta<0$) there is enforced dimerization and the system is in the random dimer phase\cite{HYMAN} with preference of odd (even) bonds. On the other hand, at the quantum critical point with $\delta=0$ the system is in the so-called random singlet phase\cite{fisherxx}. \subsection{Random transverse-field Ising chain} Our third and final model is the RTFIC, which is a prototypical model of random quantum systems having an order-disorder transition\cite{fisher}. This system is defined by the Hamiltonian: \begin{equation} {\cal H}_{I} = -\frac{1}{2}\sum_{i} \lambda_{i}\sigma_i^x \sigma_{i+1}^x-\frac{1}{2}\sum_{i} h_i \sigma_i^z \label{eq:H} \end{equation} in terms of the Pauli matrices, $\sigma_i^{x,z}$, at site $i$, where the $\lambda_i$ couplings and the $h_i$ transverse fields are random numbers. As for the XX-chain, ${\cal H}_{I}$ is expressed in terms of fermion operators\cite{pfeuty}: \begin{eqnarray} {\cal H}_{I}&=& -\sum_{i}h_i\left( c^+_i c_i-\frac{1}{2} \right)\cr &-& \frac{1}{2}\sum_{i}\lambda_i(c^+_i-c_i)(c^+_{i+1}+c_{i+1}) \label{ferm_I} \end{eqnarray} which is then diagonalized through a canonical transformation. 
Now the low-energy excitations contain one free fermion, the possible energies of which are given by the positive eigenvalues of the following symmetric matrix\cite{it,bigpaper}: \begin{equation} {\cal T}= \begin{pmatrix} 0 & h_1 & & & & \cr h_1 & 0 &\lambda_1 & & & \cr & \lambda_1 & 0 & h_2& & \cr & & \ddots &\ddots &\ddots & \cr \end{pmatrix} \label{T} \end{equation} This is equivalent to ${\cal M}$ in Eq.(\ref{M_tb}) with the correspondences: \begin{equation} t_{2i-1}=h_i,\quad t_{2i}=\lambda_i\;. \label{corresp_I} \end{equation} Using this relation together with Eq.(\ref{delta}), the control parameter of the RTFIC is given by the difference between the average log-fields and the average log-couplings. For $\delta>0$ ($\delta<0$) the system is in the paramagnetic (ferromagnetic) phase, and $\delta=0$ represents the quantum critical point. We can thus conclude that the low-energy properties of all three models are related to the eigenvalue problem of ${\cal M}$ in Eq.(\ref{M_tb}). In the next section we calculate the density of states of the matrix ${\cal M}$ around $E=0$ by the Dyson-Schmidt method. \section{Density of states at the center of the spectrum} \label{sec:density} Here in the first two subsections we recapitulate the basic ingredients of the Dyson-Schmidt method and present the solution in the continuum approximation. Our findings, which are obtained in the strongly disordered regimes, are presented in the last two subsections. \subsection{The random walk method} In order to calculate the density of states of ${\cal M}$ we introduce a new vector, ${\vec \Delta}$, with the components $\Delta_i=\alpha_{i-1}t_{i-1}/\alpha_i$, which satisfy the equations: $\Delta_{i+1}=t_i^2/(E-\Delta_i)$. The basic ingredient of the Dyson-Schmidt method\cite{dyson} is the {\it node counting theorem} of one-dimensional Hamiltonians, which states that the integrated density of states, $N(E)=\int_{-\infty}^E n(E') {\rm d}E'$, is given by the fraction of positive terms in the sequence of $\Delta_i$. At the center of the band, $E=0$, the components of ${\vec \Delta}$ have alternating signs, thus here the ``sign variables'' $s_i \equiv {\rm sign}[\Delta_i (-1)^i]$ have a fully ordered state, $\dots \uparrow \uparrow \uparrow \uparrow \uparrow \uparrow \uparrow \uparrow \uparrow \uparrow \dots$, and $N(0)=1/2$.
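The node counting theorem translates directly into a numerical procedure. The sketch below (a minimal illustration; the bond distributions are the same assumed boxes as above) iterates the recursion $\Delta_{i+1}=t_i^2/(E-\Delta_i)$ along a long random chain and counts the fraction of positive terms, giving an estimate of $N(E)$.
\begin{verbatim}
import numpy as np

def integrated_dos(E, t, delta0=1.0):
    """Node-counting estimate of N(E): the fraction of positive terms
    in the sequence Delta_{i+1} = t_i^2 / (E - Delta_i)."""
    d, positive = delta0, 0
    for ti in t:
        d = ti * ti / (E - d)
        positive += d > 0.0
    return positive / len(t)

rng = np.random.default_rng(0)
L = 10**6
# assumed asymmetric bond distributions (same illustrative choice as above)
t = np.where(np.arange(L) % 2 == 0,
             rng.uniform(0.50, 1.00, L),   # even bonds, t_e
             rng.uniform(0.52, 1.04, L))   # odd bonds,  t_o

for E in (1e-1, 3e-2, 1e-2):
    print(f"E={E:.0e}   N(E)-N(0) ~ {integrated_dos(E, t) - 0.5:.2e}")
\end{verbatim}
For this weakly asymmetric choice the excess $N(E)-1/2$ decreases roughly as a power of $E$, in line with the analysis that follows.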
For nonzero $E$ the iterated equations for ${\Delta}_i$ are the following\cite{eggarter_riedinger}: \begin{eqnarray} \Delta_{2i}&=& f_{2i-2} \left(\frac{t_{2i-1}}{t_{2i-2}}\right)^2\Delta_{2i-2} \cr f_{2i-2}&=&\frac{1-E/\Delta_{2i-2}}{1+(E \Delta_{2i-2}-E^2)/t_{2i-2}^2}\;, \label{iterated} \end{eqnarray} which lead to different iteration behaviors for small positive $E$ for various limiting values of ${\Delta}_{2i}$. These are summarized as: \begin{subequations}\label{limits} \begin{align} &\Delta_{2i+1}/\Delta_{2i}<0,& {\rm if}& &\Delta&_{2i} < E \label{a} \\ &f_{2i}=1,& {\rm if}& &{\tilde t}^2/E& \gg \Delta_{2i} \gg E \label{b} \\ &\Delta_{2i+2}/\Delta_{2i} < 1,& {\rm if}& &\Delta&_{2i} \approx {\tilde t}^2/E \label{c} \end{align} \end{subequations} where $\tilde{t}$ denotes the typical (average) value of the matrix-element. According to (\ref{b}) we can identify an interval, $[ E,{\tilde t}^2/E]$, in which the ``signs'' stay ordered, say $s_i=\uparrow$. There is a finite upper boundary value at $\Delta_{max}={\tilde t}^2/E$, where the iterated sequence is reflected but $s_i$ stays $\uparrow$ (see (\ref{c})), whereas when the sequence arrives at the lower boundary value, $\Delta_{min}=E$, the ``spins'' change sign (see (\ref{a})) and the iteration process starts again, but in a new domain with $s_i=\downarrow$. Consequently, for a small $E>0$ the sign variables have a fragmented domain structure $\dots \downarrow \downarrow \downarrow \downarrow \uparrow \uparrow \uparrow \uparrow \uparrow \downarrow \downarrow \downarrow \downarrow \dots$ and therefore the fraction of positive terms in the sequence of $\Delta_i$ is somewhat larger than $1/2$, due to the extra positive terms appearing at the domain walls. If the typical (average) size of a domain is denoted by $\tilde{\ell}$, then the density of states is asymptotically given by: \begin{equation} N(E)-N(0)=\frac{1}{2\tilde{\ell}}\;. \end{equation} We can thus summarize that to obtain the density of states at the center of the spectrum it is enough to follow the evolution of the sequence, $\Delta_i$, within one typical domain and calculate its size, $\tilde{\ell}$. Within this domain we formally put $f_{2i-2}=1$ in Eq.(\ref{iterated}) and set i) a reflecting boundary at $\Delta_{max}$ and ii) an absorbing boundary at $\Delta_{min}$. If we introduce the logarithmic variable, $\ln \Delta_{2i}=u_{2i}$, we obtain a random walk (directed polymer) problem: \begin{equation} u_{2i}=2(\ln t_{2i-1}-\ln t_{2i-2})+u_{2i-2}\;, \label{walk} \end{equation} with reflecting $(u=u_{max})$ and absorbing $(u=u_{min})$ boundary conditions. In this language the walker (polymer) starts at $u_0=u_{max}$ and its mean first-passage time (length) at the position $u_{min}$ is just $\tilde{\ell}$, thus $u_{min}=u_{\tilde{\ell}}$. \subsection{Analysis in terms of the diffusion equation} \label{sec:diff} In order to set the length-scales in the random walk problem we use a continuum approximation in which Eq.(\ref{walk}) is transformed into a diffusion equation: \begin{equation} \frac{\partial P(u,\ell)}{\partial \ell}=D\frac{\partial^2 P(u,\ell)}{\partial u^2} -v\frac{\partial P(u,\ell)}{\partial u}\;. \label{diff} \end{equation} Here $P(u,\ell)$ is the probability distribution of the walk, $D=2[{\rm var}(\ln t_e)+{\rm var}(\ln t_o)]$ is the diffusion coefficient and $v=2([\ln t_o]_{\rm av}-[\ln t_e]_{\rm av})$ is the drift velocity. The typical size of the transverse fluctuations of the walk is given by $\tilde{u}=D/v=\delta^{-1}$, whereas the average distance between two reflections, $\xi$, follows from the relation $\tilde{u} \sim \sqrt{D\xi}$; thus we obtain for the correlation length: \begin{equation} \xi \sim D^{-1} \delta^{-2}\;, \end{equation} which agrees with the result of SDRG calculations\cite{fisher}. The continuum approximation, and thus the use of the diffusion equation, is justified if the correlation length is much larger than the lattice spacing. This condition is satisfied if we are either at the critical point, $\delta=0$, or in the weakly disordered Griffiths phase with $|\delta| \ll 1$. \subsubsection{Critical point} At the critical point both the correlation length, $\xi$, and the typical size of transverse fluctuations, $\tilde{u}$, are divergent and they are related to the length scale, $\tilde{\ell}$, as $\xi \sim \tilde{\ell}$ and ${\tilde u} \sim \sqrt{D \tilde{\ell}}$.
Absorption of the walker in this case is due to {\it typical fluctuations}, when ${\tilde u}$ grows to the order of the width of the strip: ${\tilde u} \sim \Delta u=u_{max}-u_{min}=\ln(\tilde{t}^2/E^2)$. From this it follows that \begin{equation} \tilde{\ell} \sim \frac{1}{D} \left[\ln(\tilde{t}^2/E^2)\right]^2\;, \end{equation} so that \begin{equation} N(E)-N(0)\sim D\left[\ln(\tilde{t}^2/E^2)\right]^{-2}\;. \end{equation} This is the classical result derived by Eggarter and Riedinger\cite{eggarter_riedinger}. \subsubsection{Weakly disordered Griffiths phase} \label{sec:weak_G} In the weakly disordered Griffiths phase with $0 < \delta \ll 1$ the walker drifts towards the reflecting boundary and both the correlation length and the typical size of the transverse fluctuations are finite, but much larger than the lattice spacing; thus the continuum approximation is valid. In this case $\tilde{u}$ is much smaller than the width of the strip and absorption takes place with a very small probability, $p(\Delta u) \propto \exp(-\frac{\Delta u}{\tilde{ u}})$; it is thus a rare-region effect, due to {\it extreme fluctuations}. Before having such a large fluctuation the walker is reflected several times, and the typical number of independent excursions, $ \tilde{\ell}/\xi$, follows from extreme-value statistics\cite{galambos}: $p(\Delta u) \tilde{\ell}/\xi=O(1)$. From this we have \begin{equation} \tilde{\ell} \sim \xi \exp\left(\frac{v}{D}\ln(\tilde{t}^2/E^2)\right) \sim \left(\frac{\tilde{t}}{E}\right)^{1/z}\;, \label{n_z} \end{equation} with \begin{equation} \frac{1}{z}=\frac{2v}{D}=2\frac{[\ln t_o]_{\rm av}-[\ln t_e]_{\rm av}}{{\rm var}(\ln t_e)+{\rm var}(\ln t_o)}=2\delta\;. \label{1/z} \end{equation} Here $z$ is just the dynamical exponent defined in Eq.(\ref{tau}). In the weakly disordered Griffiths phase there is thus a power-law singularity of the density of states at the center of the band: \begin{equation} N(E)-N(0)\sim \left(\frac{\tilde{t}}{E}\right)^{-1/z}\;, \label{N_E} \end{equation} which is equivalent to the form in Eq.(\ref{n_E}). This result for the random antiferromagnetic XX-chain has been presented in Ref.[\onlinecite{lamas}]. \subsection{Analysis in the strongly disordered Griffiths phase} \label{sec:asymptotic} In the strongly disordered Griffiths phase the correlation length is of the order of the lattice spacing and the continuum approximation is not valid. In this case we use discrete variables and denote the (nonlogarithmic) position of the walker at the $j$-th step of the $k$-th independent excursion, which starts at $r(k)$, as $\Delta_{2j}^{(k)}$. Thus $r(k)/k = \xi$ for large $k$ and the normalized position is given by: \begin{equation} \rho_{2j}^{(k)} \equiv \frac{\Delta_{2j}^{(k)}}{\Delta_{max}}= \prod_{j'=1+r(k)}^{j+r(k)} \left(\frac{t_{2j'-1}}{t_{2j'-2}}\right)^2\;. \end{equation} The condition of absorption is formulated as: \begin{equation} \min_{k} \min_{1<j<\Delta r(k)} \rho_{2j}^{(k)}=\frac{\Delta_{min}}{\Delta_{max}}=\frac{E^2}{\tilde{t}^2}\;, \end{equation} where $\Delta r(k)=r(k+1)-r(k)$, which can be replaced by $\Delta r=\infty$. Keeping in mind that $\rho^{(k)}_{2j}$ is typically much larger than its minimum value, we can estimate the order of magnitude of the minimum as \begin{equation} \min_{k} \min_{1<j<\infty} \rho_{2j}^{(k)} \propto \min_{k} \left[ y^{(k)} \equiv \sum_j (\rho_{2j}^{(k)})^{-1} \right]^{-1}\;.
\label{min2} \end{equation} Here $y^{(k)}$ is a Kesten variable\cite{kesten} for any $k$, the distribution function of which displays a power-law singularity for large arguments: \begin{equation} p(y) \sim y^{-(1+\mu)}\;,\quad y \to \infty\;, \label{p_asymp} \end{equation} where the exponent, $\mu$, is given by the positive root of the equation: \begin{equation} \left[\left(\frac{t_e^2}{t_o^2}\right)^{\mu}\right]_{\rm av}=1\;. \label{mu} \end{equation} (For a pedagogical introduction to the theory of Kesten variables see Appendix C of Ref.[\onlinecite{im}].) In this way the typical number of excursions, $ \tilde{\ell}/\xi$, follows from extreme-value statistics\cite{galambos}: $\tilde{\ell}/\xi \int_{y_{max}}^{\infty} p(y) {\rm d} y=1$, and we obtain: \begin{equation} \tilde{\ell} \sim \xi \left(\frac{\tilde{t}}{E}\right)^{2\mu}\;. \end{equation} Comparing with Eq.(\ref{n_z}) we see that the dynamical exponent in the strongly disordered Griffiths phase is given by: \begin{equation} \frac{1}{z}=2\mu\;, \label{z_mu} \end{equation} which in the limit $\delta\ll 1$ reproduces the result obtained in the weakly disordered Griffiths phase\cite{ijl,i02} in Eq.(\ref{1/z}). Then, with the correspondence in Eq.(\ref{z_mu}), the density of states at the center of the band is given by Eq.(\ref{N_E}). \subsection{Relation with the strong disorder renormalization group method} The density of states in the center of the spectrum of $\cal M$ can also be analyzed by the SDRG method\cite{im}, and here we outline this procedure. The first step in this study is to arrange the matrix-elements, $t_i$, in descending order and use the largest one, $\Omega={\rm max}_i \{t_{i}\}$, to set the energy scale of the system. Let us denote the largest term by $t_j$, which connects sites $j$ and $j+1$, and eliminate the two equations in the eigenvalue problem which contain $t_j$. In a second-order perturbational treatment, which is correct up to $O\left((t_{j-1}/t_j)^2\right)$ and $O\left((t_{j+1}/t_j)^2\right)$, we have for the effective matrix-element, $t'$, between the remaining sites, $j-1$ and $j+2$: \begin{equation} t' \approx \frac{t_{j-1} t_{j+1}}{t_j}\;. \end{equation} This new term has a length, $m'=m_{j-1}+m_{j}+m_{j+1}=3$, where the original matrix-elements have unit lengths. In the following steps we repeat the decimation transformation, during which the energy scale is reduced, the lengths are increased and the distribution functions of the matrix-elements, $R_e(t_e,\Omega)$ and $R_o(t_o,\Omega)$, approach their fixed-point forms. These RG equations have been solved analytically both at the critical point\cite{fisher} and in the Griffiths phase\cite{ijl,i02}. Here we summarize the known results for the Griffiths phase with $\delta>0$. In the starting steps of the RG both $t_e$ and $t_o$ terms are decimated, but in later steps the transformation becomes asymmetric. As the typical lengths grow beyond $m' \sim \xi$, almost exclusively the $t'_o$ terms are decimated and the $t'_e$ terms become very small, such that at the fixed point, $\Omega \to \Omega^*=0$, we have $t'_e/t'_o \to 0$. As a consequence the energy of the low-energy excitations is simply $E \simeq t'_o$. At the fixed point the distribution of $t_o$ is given by\cite{ijl,i02}: \begin{equation} R_o(t_o,\Omega)=\frac{2 \mu}{\Omega} \left(\frac{\Omega}{t_o}\right)^{1-2\mu}\;, \end{equation} where $\mu$ is defined in Eq.(\ref{mu}). This is just equivalent to the distribution of the excitation energies in Eq.(\ref{n_E}), with the dynamical exponent defined in Eq.(\ref{z_mu}).
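The positive root of Eq.(\ref{mu}) is easily found numerically. The following sketch, using the same illustrative box distributions as above, determines $\mu$ by bracketing and compares $1/z=2\mu$ with the weak-disorder expression $2\delta$ of Eq.(\ref{1/z}).
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(4)
# same illustrative distributions as above (an assumption), delta > 0
t_e = rng.uniform(0.50, 1.00, 10**6)
t_o = rng.uniform(0.52, 1.04, 10**6)
r = (t_e / t_o) ** 2

def f(mu):                        # [ (t_e^2/t_o^2)^mu ]_av - 1, Eq. (mu)
    return np.mean(r ** mu) - 1.0

mu = brentq(f, 0.05, 20.0)        # positive root (mu = 0 is the trivial root)

ln_e, ln_o = np.log(t_e), np.log(t_o)
delta = (ln_o.mean() - ln_e.mean()) / (ln_e.var() + ln_o.var())
print(f"1/z = 2 mu = {2*mu:.3f}")
print(f"2 delta    = {2*delta:.3f}   (weak-disorder limit, Eq. (1/z))")
\end{verbatim}
For this mildly asymmetric choice the two estimates nearly coincide, as expected from the expansion of Eq.(\ref{mu}) for small $\delta$.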
To make the correspondence with the random walk method explicit: the starting RG steps, which lead to an effective $t'_e(k)$ $(\gg t'_o(k))$ of length $m'(k) \sim \xi$, are equivalent to an excursion (between two reflections) of the walk of size $\Delta r(k) \sim \xi$, and the minimal value of $\rho^{(k)}_{2j}$ for this excursion is just the renormalized value of $t'_o(k)$. The analogous quantities of the two approaches are collected in Table \ref{table:1}. \begin{table} \caption{Analogous quantities in the random walk (RW) and in the SDRG methods \label{table:1}} \begin{tabular}{|c||c|c|c|} \hline method & independent unit & length scale & energy scale \\ \hline RW & excursion & size of the excursion & ${\rm min}_j \rho^{(k)}_{2j}$ \\ \hline SDRG & cluster & size of the cluster & $t'_o(k)$ \\ \hline \end{tabular} \end{table} \section{Thermodynamic singularities} \label{sec:therm} Here we consider the random tight-binding model at half filling, as well as the random antiferromagnetic XX-chain and the RTFIC, and note that all these models are expressed in terms of free fermions. The common form of the Hamiltonians is given by: \begin{equation} {\cal H}_F=\sum_q E_q (\eta_q^+ \eta_q -1/2)\;, \end{equation} where $E_q$ denotes the $q$-th eigenvalue of ${\cal M}$ and $\eta_q^+$ ($\eta_q$) are fermion creation (annihilation) operators. The ground-state energy per site of this system is given by: \begin{equation} {\cal E}=-\frac{1}{2L} \sum_q E_q=-\frac{1}{2}\int_{E_{min}}^{E_{max}} n(E) E {\rm d} E\; \label{energy_0} \end{equation} and the free energy per site by: \begin{eqnarray} &{\cal F}&=-\frac{T}{L} \sum_q \ln \left[2\cosh \left(\frac{E_q}{2T} \right) \right]=\cr &-&T\left\{\ln2 + \int_{E_{min}}^{E_{max}} n(E) \ln\left[\cosh \left(\frac{E}{2T} \right) \right] {\rm d} E \right\}\;, \label{free_energy} \end{eqnarray} where $L$ is the length of the chain. From the free energy we obtain the internal energy: \begin{equation} {\cal E}(T)=-\frac{1}{2}\int_{E_{min}}^{E_{max}} n(E) E \tanh\left(\frac{E}{2T} \right){\rm d} E \end{equation} and the specific heat: \begin{equation} c_v(T)=\int_{E_{min}}^{E_{max}} n(E)\left(\frac{E}{2T} \right)^2 \cosh^{-2}\left(\frac{E}{2T} \right){\rm d} E\;. \end{equation} Now, using the form of the density of states at the center of the band, we obtain for the low-temperature behavior: \begin{equation} c_v(T) \propto {\cal A}\, T^{1/z} \int_{0}^{\infty} \varepsilon^{1/z+1} \cosh^{-2} \varepsilon\, {\rm d} \varepsilon\;, \label{cv1} \end{equation} in agreement with the scaling result in Eq.(\ref{chi_cv}). Note that the prefactor in Eq.(\ref{cv1}), ${\cal A}$, is proportional to $\xi^{-1} z^{-1}$, which means that in the weakly disordered Griffiths phase we have ${\cal A} \sim \delta^3,~\delta \ll 1$, in agreement with the SDRG result\cite{fisher}.
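The scaling in Eq.(\ref{cv1}) can be checked by direct quadrature. The sketch below assumes the model density of states $n(E)=(1/z)\,E^{1/z-1}$ with an arbitrary choice $z=2$ (both are assumptions of this illustration) and confirms that $c_v(T)/T^{1/z}$ approaches a constant at low temperature.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

z = 2.0                                  # assumed value of the dynamical exponent

def cv(T):
    # c_v(T) = int n(E) (E/2T)^2 cosh^{-2}(E/2T) dE with n(E) = (1/z) E^{1/z-1},
    # rewritten in the scaling variable eps = E/(2T)
    integrand = lambda eps: (1/z) * (2*T)**(1/z) * eps**(1/z + 1) / np.cosh(eps)**2
    return quad(integrand, 0.0, 50.0)[0]

for T in (1e-2, 1e-3, 1e-4):
    print(f"T={T:.0e}  c_v={cv(T):.3e}  c_v/T^(1/z)={cv(T)/T**(1/z):.4f}")
\end{verbatim}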
Next we consider the random antiferromagnetic XX-chain, to whose Hamiltonian in Eq.(\ref{H_XX}) we add a homogeneous ordering field, $H\sum_i S_i^z$. In terms of the fermionic variables this term assumes the form $H/2 \sum_i (c_i^+c_i-1/2)$; thus the eigenvalue matrix $\cal M$ also contains diagonal elements, ${\cal M}_{i,i}=H/4,~\forall i$, and the eigenvalues are shifted by $E \to E+H/4$. The magnetization is obtained through differentiation: \begin{equation} m(H,T)=-\frac{\partial {\cal F}}{\partial H} \sim \int_{-H/4}^{H/4} n(E) \tanh\left(\frac{E}{2T} \right){\rm d} E\;, \label{m_H_T} \end{equation} where we have used the fact that the spectrum of ${\cal M}$ in Eq.(\ref{M_tb}) is symmetric about $E=0$. At zero temperature $m(H,0)$ is singular for small $H$: \begin{equation} m(H,0)\sim N(H/4)-N(-H/4) \sim H^{1/z}\;, \end{equation} as in Eq.(\ref{m_H}). Evaluating the integral in Eq.(\ref{m_H_T}) for small $H$ and $T$, with $H/T=O(1)$, we obtain for the low-temperature susceptibility: \begin{equation} \chi(T) \sim T^{1/z-1}\;, \end{equation} which corresponds to the scaling result in Eq.(\ref{chi_cv}). \section{Discussion} \label{sec:disc} In this paper we have studied Griffiths-McCoy singularities in random quantum (tight-binding, XX and Ising) spin chains, which can be represented in terms of free fermions. The main step of our investigation is the calculation of the density of states of the low-energy excitations, which are given by the eigenvalues of a symmetric tridiagonal matrix with random entries exhibiting an even-odd asymmetry. This latter problem is solved exactly by the Dyson-Schmidt technique\cite{dyson,eggarter_riedinger} for any value of the quantum control parameter, $\delta$. Previous studies of this problem were restricted to the quantum critical point\cite{eggarter_riedinger}, $\delta=0$, and to the weakly disordered Griffiths phase\cite{lamas}, $\delta \ll 1$. As described in Sec.\ref{sec:diff}, in this problem there are three length scales: the mean first-passage length, $\tilde{\ell}$, the correlation length, $\xi$, and the lattice spacing, $a$. In the different regimes of the quantum control parameter their relative magnitudes are summarized in Table~\ref{table:2}. \begin{table} \caption{Relation between the length-scales in different regimes of the quantum control parameter. \label{table:2}} \begin{tabular}{|c||c|c|} \hline critical point & $\delta=0$ & $\tilde{\ell} \sim \xi \gg a$ \\ \hline weakly dis. Griffiths & $|\delta| \ll 1$ & $\tilde{\ell} \gg \xi \gg a$ \\ \hline strongly dis. Griffiths & $|\delta|=O(1)$ & $\tilde{\ell} \gg \xi \sim a$ \\ \hline \end{tabular} \end{table} In a finite system there is yet another length scale, given by the size of the system, $L$, and the mean first-passage length cannot exceed this value: $\tilde{\ell}\sim L$. Consequently the lowest excitation energy is limited to $E_1 \sim L^{-z}$. In this case one is interested in the distribution of the scaling combination, $E_1 L^{z}$, which in the random walk method of Sec.\ref{sec:asymptotic} is obtained from the statistics of extremes. Here we recall that $E_1$ is just the minimum value of a set of $L/\xi$ independent random numbers, each having the same parent distribution in a power-law form, see Eq.(\ref{p_asymp}). Consequently the distribution of $\epsilon_1=aE_1 L^{z}$ in the large-$L$ limit follows the Fr\'echet distribution\cite{galambos}: \begin{equation} \tilde{P}_1(\epsilon_1)=\frac{1}{z} \epsilon_1^{1/z-1} \exp(-\epsilon_1^{1/z})\;, \label{frechet} \end{equation} where $a$ is a nonuniversal constant which depends on the amplitude of the tail in Eq.(\ref{p_asymp}). One can go on and consider the second eigenvalue, $E_2$, or more generally the $q$-th smallest eigenvalue, $E_q$. These are all obtained from the theory of extreme value statistics of independent and identically distributed ({\it i.i.d.}) random numbers and their distributions are given by the generalized Fr\'echet distribution, see \cite{galambos}. In this way we have shown that the distribution of the lowest energy levels of these strongly correlated physical systems is described by a form which holds for {\it i.i.d.} random numbers.
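The extreme-value statement can be illustrated by direct sampling: drawing $N\sim L/\xi$ independent energies whose integrated density of states behaves as $E^{1/z}$, the scaled minimum follows the limiting distribution of Eq.(\ref{frechet}). A minimal sketch (the sample sizes and the value of $z$ are arbitrary assumptions):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
z, N, trials = 2.0, 500, 20_000       # N ~ L/xi independent regions (assumed)

# energies with P(E <= x) = x^{1/z} on [0,1], via inverse-CDF sampling
E = rng.random((trials, N)) ** z
eps1 = N**z * E.min(axis=1)           # scaling combination eps_1 ~ a E_1 L^z

# empirical CDF of eps_1 versus 1 - exp(-eps^{1/z}) from Eq. (frechet)
for x in (0.25, 1.0, 4.0):
    emp = (eps1 <= x).mean()
    print(f"x={x:4.2f}  empirical={emp:.3f}  limit={1 - np.exp(-x**(1/z)):.3f}")
\end{verbatim}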
This scenario, which is shown here exactly for the specific models, is expected to hold generally for all random quantum systems, even in higher dimensions, for which the low-energy behavior is controlled by a so-called strong disorder fixed point in the SDRG framework\cite{extr}. The dynamical exponent, $z$, which is calculated exactly in this paper, is found to be a continuous function of the control parameter, $\delta$. Using the SDRG approach the same result is obtained\cite{ijl,i02}; thus our present study lends further support to the conjecture that the SDRG method provides asymptotically exact results even far outside the critical point, as long as dynamical quantities are considered. This latter statement is expected to hold for all systems with a strong disorder fixed point. \begin{acknowledgments} This work has been supported by the National Office of Research and Technology under Grant No. ASEP1111 and by the Hungarian National Research Fund under grant No OTKA TO48721, K62588, MO45596. \end{acknowledgments}
\section{The Free Particle} In classical statistical mechanics, a particle is described in terms of a nonnegative phase space distribution \begin{equation} P(x,p;t) \ge 0. \label{eq:posdist} \end{equation} This is the probability of the particle having position $x$ and momentum $p$ at the time $t$. The time evolution of the phase space distribution for a free particle is \begin{equation} {\partial P \over \partial t} = - {p \over m} \, {\partial P \over \partial x}. \label{eq:classtraj} \end{equation} That quantum mechanics can be formulated in a similar manner was first discovered by Wigner \cite{Wigner32}. However, the Wigner distribution $W(x,p;t)$ has the nonclassical feature of being negative for certain quantum states. In fact, among pure states only the Gaussian states have nonnegative Wigner distributions \cite{Hudson74}. However, even if a free particle is in a state with negative Wigner distribution, it will not necessarily exhibit different \emph{dynamics} than an ensemble of classical free particles \cite{Lee82}. For instance, the Wigner distribution obeys the same equation of motion (\ref{eq:classtraj}) as a classical phase space distribution. Bracken and Melloy \cite{Bracken94} recently found that the probability of observing a free particle in a certain region of space may increase even though there is zero probability for the particle to have momenta pointing towards this region. They explained this effect in terms of negative probability. Here we shall study another dynamical effect which defies explanation in terms of the classical model (\ref{eq:posdist}) and (\ref{eq:classtraj}). To this end, we consider the modulus of $x$, \begin{equation} \langle \mid x \mid \rangle = \int dp \int dx \, | \, x \, | \, W(x,p;t). \end{equation} We now find that \begin{equation} {d \langle \mid x \mid \rangle \over dt} = {1 \over m} \int dp \, p \int dx \, \, \textrm{sign} \, x \, W(x,p;t), \end{equation} and \begin{equation} {d^2 \langle \mid x \mid \rangle \over dt^2} = {2 \over m^2} \,\int dp \, p^2 \, W(0,p;t). \label{eq:secder} \end{equation} For an ensemble of classical free particles we have the condition \begin{equation} {d^2 \langle \mid x \mid \rangle \over dt^2} \ge 0, \label{eq:nonneg} \end{equation} since the phase space distribution must be nonnegative. $\langle | x | \rangle$ is a measure of the \emph{uncertainty} in position provided that $\langle x \rangle=0$. It is often called the absolute deviation. According to Eq. (\ref{eq:nonneg}), the curvature with respect to time of the absolute deviation must be nonnegative in classical theory. This inequality therefore sets a constraint on the dynamics of the spreading of a wave packet in classical mechanics. What is the physical significance of the r.h.s. of Eq. (\ref{eq:secder})? We introduce the moments $\pi_n$ defined by \begin{equation} \pi_n(x;t) = \int dp \, p^n \, W(x,p;t). \label{eq:kindens} \end{equation} $\pi_n(x;t)/\pi_0(x;t)$ can be interpreted classically as the average of $p^n$ given $x$ \cite{Moyal49}. Violation of (\ref{eq:nonneg}) can therefore be interpreted in classical terms as being due to negative kinetic energy at given $x$. It may seem surprising that a negative kinetic energy can be observed. Indeed, the operator $\hat{p}^2$ has only nonnegative eigenvalues, and the expectation of this operator is therefore always nonnegative. But this does not imply that the \emph{conditional} kinetic energy must also be nonnegative.
Indeed, in tunneling it seems as if the particle traversing the tunneling region has negative kinetic energy, since the total energy is lower than the energy of the potential barrier. But also here one always finds a nonnegative kinetic energy. However, Aharonov \emph{et al.} \cite{Aharonov93} have shown that if the kinetic energy is first measured, followed by a position measurement, the subensemble of particles found in the tunneling region may display negative kinetic energy. Finally, let us study a system with negative conditional kinetic energy. To this end, consider the (unnormalized) state \begin{equation} | \psi \rangle = | \alpha \rangle + | - \alpha \rangle. \end{equation} This is a superposition of two coherent states $180^{\circ}$ out of phase with respect to each other. With a choice of units so that $\hbar=1$ (and setting $m=1$), it has the Wigner distribution \begin{eqnarray} W(x,p;0) = {1 \over \pi} \left [ e^{-(p-p_0)^2 - (x-x_0)^2} + e^{-(p+p_0)^2 - (x+x_0)^2} + 2 e^{-x^2 - p^2} \cos 2(p_0 x - p x_0) \right ]. \label{eq:wigner} \end{eqnarray} This distribution has negative regions. For the choice \mbox{$x_0=0$}, these regions are centered along the line \mbox{$p=0$}, and for \mbox{$p_0=0$} they are centered along \mbox{$x=0$}. As seen from Eq. (\ref{eq:secder}), violation of inequality (\ref{eq:nonneg}) for \mbox{$t=0$} requires that the Wigner distribution has negative regions along \mbox{$x=0$}. We therefore use the parameter choice \mbox{$p_0=0$}. \begin{centering} \centerline{\psfig{figure=fig1.eps,height=6cm}} \end{centering} \begin{quotation} FIG. 1. Contour plot of the Wigner distribution for an even coherent state where $x_0=\sqrt{2}$ and $p_0=0$. Note the negative regions along $x=0$. \end{quotation} The solution of Eq. (\ref{eq:classtraj}) is $W(x,p;t)=W(x-pt/m,p;0)$. Using this and Eq. (\ref{eq:wigner}), we get from Eq. (\ref{eq:kindens}) by integration \begin{equation} \pi_2(0;t) = {1 \over \sqrt{\pi}} \: \exp \left (- {x_0^2 \over 1+t^2} \right ) \: {1 - x_0^2 + t^2 + x_0^2 t^2 \over (1+t^2)^{5/2}}, \end{equation} where we have assumed that $p_0=0$. We see that $\pi_2$ is negative if \begin{equation} t < \sqrt{x_0^2 - 1 \over x_0^2 + 1}. \label{eq:timelimit} \end{equation} Thus $\pi_2$ may become negative provided that $x_0>1$, and it has a relative minimum for $t=0$ and $x_0=\sqrt{2}$. \begin{centering} \centerline{\psfig{figure=fig2.eps,height=6cm}} \end{centering} \begin{quotation} FIG. 2. Plot of $\pi_2(0;t)$ for a free particle in an even coherent state, where $x_0=\sqrt{2}$ and $p_0=0$. It is negative for $t < 1/\sqrt{3}$. This is impossible for an ensemble of classical free particles, since it implies negative kinetic energy. According to Eq. (\ref{eq:secder}), $\pi_2(0;t)$ is also proportional to the curvature of the expected absolute value of position. \end{quotation}
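Both the closed form and the sign change are easy to verify numerically. In the sketch below the direct integral of $p^2\,W(-pt,p;0)$ is compared with the quoted expression; since the state is left unnormalized, the direct integral comes out larger by an overall constant (a factor of $2$ with the conventions used here), while the zero crossing at $t=\sqrt{(x_0^2-1)/(x_0^2+1)}$ is common to both.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

x0 = np.sqrt(2.0)          # p0 = 0, units with hbar = m = 1

def W(x, p):               # Wigner function of |alpha> + |-alpha>, Eq. (wigner)
    return (np.exp(-p**2 - (x - x0)**2) + np.exp(-p**2 - (x + x0)**2)
            + 2.0*np.exp(-x**2 - p**2)*np.cos(2.0*p*x0)) / np.pi

def pi2_numeric(t):        # pi_2(0;t) = int p^2 W(-p t, p; 0) dp, Eq. (kindens)
    return quad(lambda p: p**2 * W(-p*t, p), -12.0, 12.0)[0]

def pi2_closed(t):         # closed form quoted in the text
    return (np.exp(-x0**2/(1.0 + t**2)) * (1.0 - x0**2 + t**2 + x0**2*t**2)
            / (np.sqrt(np.pi) * (1.0 + t**2)**2.5))

t_star = np.sqrt((x0**2 - 1.0)/(x0**2 + 1.0))   # Eq. (timelimit): 1/sqrt(3) here
for t in (0.0, 0.3, t_star, 1.0):
    print(f"t={t:.3f}  integral/2={pi2_numeric(t)/2.0:+.5f}"
          f"  closed={pi2_closed(t):+.5f}")
\end{verbatim}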
We may ``simulate" free particle evolution by introducing the variable \cite{Leonhardt95b} \begin{equation} \chi_{\tau} = {x_{\theta} \over \cos \theta} = x + p \tau, \end{equation} where \begin{equation} \tau = \tan \theta. \end{equation} We thus have \begin{equation} \langle \mid \chi_{\tau} \mid \rangle = \int dp \int dx \mid x + p \tau \mid W(x,p). \label{eq:ordprob} \end{equation} We substitute $x' = x + p \tau$, so that \begin{equation} \langle \mid \chi_{\tau} \mid \rangle = \int dp \int dx' \mid x' \mid W(x'-p \tau,p). \end{equation} This clarifies that there is no ordering problem associated with Eq. (\ref{eq:ordprob}) \cite{Moyal49}. We may now proceed to demonstrate that \begin{equation} {d^2 \langle \mid \chi_{\tau} \mid \rangle \over dt^2} = 2 \int dp \; p^2 \; W(0,p). \end{equation} In analogy with the free particle case, we therefore see that \begin{equation} {d^2 \langle \mid \chi_{\tau} \mid \rangle \over d \tau^2} \ge 0 \label{eq:quadrature} \end{equation} for nonnegative Wigner distributions. Violation of this inequality indicates that the Wigner distribution has negative regions along $x=0$. It can be tested in homodyne detection. \section*{Conclusion} We have seen that it is possible to observe negative kinetic energy for free particles. It was shown that this leads to nonclassical evolution of the position absolute deviation. The scheme was also applied to quantum optics, where a simple experiment was proposed to detect negative Wigner distributions. Depending on the operator ordering we assign to a scalar product of $x$ and $p$, we get a different quasi phase space distribution \cite{Cahill69}. Thus, since there is no unique phase space distribution in quantum mechanics, there is no unique conditional kinetic energy either. It turns out, e.g., that the state examined by Aharonov {\em et al.} \cite{Aharonov93} does not give a negative Wigner kinetic energy. An essential part of the analysis of Aharonov {\em et al.} was an inherent uncertainty in the pointer position of their measurement apparatus. When the classical model expressed by Eqs. (\ref{eq:posdist}) and (\ref{eq:classtraj}) breaks down, one may abandon either assumption (\ref{eq:posdist}) or (\ref{eq:classtraj}). By using the Wigner distribution, we abandon (\ref{eq:posdist}) and the concept of nonnegative probability. Using other distributions, one might instead abandon (\ref{eq:classtraj}) \cite{Lee82}. This amounts to abandoning the idea that a point in phase space moves with constant velocity. The analysis done here can be generalized to particles in arbitrary potentials. Also in this case it can be shown that negative kinetic energy leads to nonclassical evolution of the position absolute value. \section*{Acknowledgments} I would like to thank Howard M. Wiseman and Stefan Weigert for drawing my attention to the paper by Bracken and Melloy \cite{Bracken94}. I would also like to thank Ulf Leonhardt for useful comments.
\section{Introduction} \label{sec:1} It is well known that $B$ meson rare decays provide an abundant source of information on QCD, $CP$ violation and new physics (NP) beyond the Standard Model (SM). In recent years, the anomalies such as $R(D^{(*)})$ and $R_{K^{(*)}}$ observed in semileptonic $B$ meson rare decays at the Large Hadron Collider (LHC) and the $B$ factories imply that lepton flavour universality may be violated, which in particular is viewed as a signal of the effects of NP (for recent reviews, see, e.g., Refs.~\cite{Li:2018lxi, Bifani:2018zmi, London:2021lfn, Altmannshofer:2021qrr}). Unlike the semileptonic decays, the hadronic $B$ decays suffer from larger uncertainties and are therefore more difficult to calculate with high accuracy, because the hadronic matrix elements cannot be calculated directly from first principles. In the past twenty years, several QCD-based approaches built on the factorization hypothesis \cite{Bauer:1986bm} have been developed to handle such problems; they are usually formulated in the heavy quark limit and implemented through the heavy quark expansion, such as the light-cone sum rule (LCSR) \cite{Khodjamirian:2000mi}, the QCD factorization (QCDF) \cite{Beneke:1999br, Beneke:2003zv}, the soft-collinear effective theory (SCET) \cite{Bauer:2000yr, Bauer:2001cu} and the perturbative QCD (PQCD) factorization approach \cite{Keum:2000ph, Lu:2000em, Ali:2007ff}. However, observables such as the branching fractions, $CP$ asymmetries, polarization fractions and angular distributions might suffer from large uncertainties from higher-order and higher-power contributions. In this sense, in hadronic $B$ decays a deviation with respect to the SM prediction requires one to be much more conservative regarding these uncertainties than in the case of semileptonic $B$ decays. For this reason, in order to search for the signals of NP in hadronic heavy flavour particle decays, on the one hand we should reduce the theoretical uncertainties as much as possible by performing higher-order and higher-power corrections as QCD techniques develop, and on the other we are encouraged to search for new observables that are insensitive to the theoretical uncertainties. Among the two-body $B$ meson hadronic decays, it is of great interest to us that the decays $B_{d} \to K^{*0} {\overline K^{*0}}$ and $B_{s} \to K^{*0} {\overline K^{*0}}$ have the same final states and are related by $U$-spin. Both decays are induced by flavor-changing neutral-current (FCNC) transitions, in which new particles of NP could affect the observables by entering the loops. In addition, the $B_{s} \to K^{*0} {\overline K^{*0}}$ decay is also regarded as a golden channel for a precision measurement of the CKM phase $\beta_s$ \cite{Ciuchini:2007hx}. On the experimental side, both the branching fractions and the longitudinal polarization fractions have been measured at the two $B$ factories \cite{Aubert:2007xc, BaBar:2007wwj, Belle:2010uya} and by the LHCb experiment \cite{LHCb:2011btn, LHCb:2015exo, Aaij:2017wgt, Aaij:2019loz}. For the decay $B_{d} \to K^{*0} {\overline K^{*0}}$, the theoretical predictions of the branching fraction and polarization fractions based on QCDF \cite{Beneke:2006hg} and PQCD \cite{Zou:2015iwa,Chai:2022kmq} are all in agreement with the averaged experimental results \cite{Workman:2022ynf}, $B(B_{d} \to K^{*0} {\overline K^{*0}})=(8.3 \pm 2.4) \times 10^{-7}$ and $f_L(B_{d} \to K^{*0} {\overline K^{*0}}) = 0.74 \pm 0.05 $, within the large uncertainties.
Furthermore, the measurement of $f_L(B_{d} \to K^{*0} {\overline K^{*0}})$ agrees with the na\"ive hypothesis based on quark helicity conservation and the $(V-A)$ nature of the weak interaction. For the decay $B_{s} \to K^{*0} {\overline K^{*0}}$, the latest averaged experimental results \cite{Workman:2022ynf} are $B(B_{s} \to K^{*0} {\overline K^{*0}}) = (11.1\pm 2.7) \times 10^{-6}$, $f_L(B_{s} \to K^{*0} {\overline K^{*0}}) = 0.240 \pm 0.031 \pm0.025 $ and $f_\perp(B_{s} \to K^{*0} {\overline K^{*0}}) = 0.38 \pm 0.11 \pm 0.04$. It is found that the predicted branching fraction $B(B_{s} \to K^{*0} {\overline K^{*0}}) = (9.1^{+0.5+11.3}_{-0.4-6.8}) \times 10^{-6}$ in QCDF \cite{Beneke:2006hg} agrees well with the data, but the longitudinal polarization fraction $f_L(B_{s} \to K^{*0} {\overline K^{*0}}) =0.63^{+0.42}_{-0.29}$ is much larger than the data. On the other side, based on the PQCD approach \cite{Zou:2015iwa}, the predicted branching fraction and longitudinal polarization fraction are $B(B_{s} \to K^{*0} {\overline K^{*0}})=(5.4^{+3.0}_{-2.4}) \times 10^{-6}$ and $f_L(B_{s} \to K^{*0}{\overline K^{*0}}) =0.38^{+0.12}_{-0.10}$, respectively. It is seen that although the longitudinal polarization fraction $f_L$ is consistent with the data, the central value of the branching fraction is smaller than the experimental measurement. Altogether, the theoretical predictions from the two approaches, with their large uncertainties, cannot explain all the available data convincingly. In order to explain the current data simultaneously, high-precision theoretical predictions in both approaches are called for, and we are also encouraged to explore the contributions of NP. Following \cite{Descotes-Genon:2011rgs}, the authors of ref.\cite{Alguero:2020xca} defined an observable that is sensitive to the $U$-spin asymmetry but has a cleaner theoretical prediction, \begin{equation}\label{eq:LK} L_{K^{*0}\overline{K}^{*0}}=\frac{{B}(B_{s}\to K^{*0}\overline{K}^{*0})\, f_L(B_{s}\to K^{*0}\overline{K}^{*0})/g(B_{s}\to K^{*0}\overline{K}^{*0})}{{B}(B_{d}\to K^{*0}\overline{K}^{*0})\, f_L(B_{d}\to K^{*0}\overline{K}^{*0})/g(B_{d}\to K^{*0}\overline{K}^{*0})}, \end{equation} where the phase-space factors $g(B_{Q}\to K^{*0}\overline{K}^{*0})$ involved in the corresponding branching fractions are given as \begin{equation} g(B_{Q}\to K^{*0}\overline{K}^{*0})= \frac{\tau_{B_Q}}{16\pi M_{B_Q}^2} \sqrt{M_{B_Q}^2-4M_{K^{*0}}^2}\,. \end{equation} In such a ratio the experimental uncertainties are reduced, as common uncertainties in the numerator and denominator largely cancel. In \cite{Aaij:2019loz}, the LHCb collaboration released the measurements of the ratio between the two branching fractions and the longitudinal polarization fraction of $B_{s} \to K^{*0} {\overline K^{*0}}$. With these latest results and the longitudinal polarization fraction of $B_{d} \to K^{*0} {\overline K^{*0}}$ from PDG \cite{Workman:2022ynf}, we obtain for this new observable \begin{equation}\label{eq:expL} L_{K^*\overline{K}^*}^{\rm Exp}=4.43\pm 0.92, \end{equation} where the effect of $B_s$ meson mixing in the measurement of the branching fraction is included. In QCDF, the prediction based on the results from \cite{Beneke:2006hg} is given as \cite{Alguero:2020xca} \begin{equation}\label{eq:tension1} L_{K^*\overline{K}^*}^{\rm QCDF}= 19.5^{+9.3}_{-6.8}, \end{equation} which implies a $2.6\sigma$ tension with respect to the experimental data.
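For orientation, the experimental value in Eq.(\ref{eq:expL}) can be reproduced from the quoted branching fractions and polarization fractions. In the sketch below the masses and lifetimes are representative central values and should be regarded as assumptions of this illustration.
\begin{verbatim}
import numpy as np

# representative inputs (central values only; treat as assumptions)
M_Bd, M_Bs, M_Kst = 5.280, 5.367, 0.896      # GeV
tau_Bd, tau_Bs = 1.519, 1.516                # ps
BR_d, fL_d = 0.83e-6, 0.74                   # B_d -> K*0 K*0-bar
BR_s, fL_s = 11.1e-6, 0.240                  # B_s -> K*0 K*0-bar

def g(tau, M):                               # phase-space factor defined above
    return tau / (16.0*np.pi*M**2) * np.sqrt(M**2 - 4.0*M_Kst**2)

L = (BR_s*fL_s/g(tau_Bs, M_Bs)) / (BR_d*fL_d/g(tau_Bd, M_Bd))
print(f"L = {L:.2f}")                        # ~ 4.4, cf. Eq. (expL)
\end{verbatim}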
This new ``anomaly" discrepancy is viewed as a new signal of NP \cite{Alguero:2020xca}. However, $L_{K^*\overline{K}^*}$ of PQCD is not available yet till now. Motivated by this, we shall exploit this observable in PQCD in this work and try to check whether the $L_{K^*\bar{K}^*}$ is still lager than the experimental data. Moreover, the branching fractions and polarizations of both two decays will also be recalculated with the new fitted distribution amplitudes of $K^*$ \cite{Hua:2020usv}. As aforementioned, in order to interpret the called $R_K$ and $R_{K^*}$ anomalies, large number of NP models have been proposed. One of the most popular NP explanations are models with an extra heavy vector $Z^\prime$ boson \cite{Albrecht:2021tul, Geng:2021nhg}, where the new introduced $Z^\prime$ boson has couplings to quarks, as well as to either electrons or muons with non-universal parameters. In order to test these models, besides searching $Z^\prime$ at the higher energy colliders directly, the signals in other observables involving the similar transitions are also expected. A straightforward place to explore the possible existence of these signals are hadronic $B$ decays induced by the FCNC transitions $b\to (d,s) q\bar q$. In SM, such kind of decays are forbidden at tree level and only occur by loops. The comparable contributions from $Z^\prime$ at tree level may change the observables remarkably. Hence, another purpose of this work is to explore whether the contributions of an extra $Z^\prime$ boson can explain all measured observables in some certain spaces of parameters. This paper is organized as follows. We will first present the calculations of $B_{d} \to K^{*0} {\overline K^{*0}}$ and $B_{s} \to K^{*0} {\overline K^{*0}}$ decays in SM within the PQCD approach, and more attentions are mainly paid on not only branching fractions and the longitudinal polarization fractions but the new observable $ L_{K^*\overline{K}^*}$. In Sec.\ref{sec:3}, we will study contributions from the non-universal $Z^\prime$ boson, which could change the observables in the suitable parameters space. Lastly, we shall summarize this work in Sec. \ref{sec:4}. \section{Calculation in SM} \label{sec:2} In SM, the decay amplitudes of of $B_{d,s}\to K^{*0} {\overline K^{*0}}$ decays follow from the matrix elements $\langle V_{2}V_{3}|H_\text{eff}| B\rangle$ of the effective Hamiltonian \begin{equation} \label{Hamiltonian} H_\text{eff} = \frac{G_F}{\sqrt{2}} \sum_{p=u,c} \lambda^{(D)}_p \left\{ C_{1} Q_{1}^p + C_{2} Q_{2}^p +\!\! \sum_{i=3,\ldots 10}\!\! C_i Q^p_i \right\} + \mathrm{h.\,c}, \end{equation} with $D \in \{d,s\}$ and $\lambda^{(D)}_p = V_{pb}^*V_{pD}$. $C_{i}(\mu)$ are Wilson coefficients, and $O_{i}(\mu)(i=1,2,3 \cdots, 10)$ are the four-quark effective operators, whose specific forms refer to \cite{Buchalla:1995vs}. In PQCD, the $B$ meson amplitude can be expressed as \cite{Keum:2000ph} \begin{eqnarray} \label{PQCD} \langle V_{2}V_{3}\left|{H}_\text{eff}\right|B\rangle &\sim& \int dx_{1}dx_{2}dx_{3}b_{1}db_{1}b_{2}db_{2}b_{3}db_{3}\nonumber\\ &&\times{\rm Tr}\left[C(t)\Phi_{B}(x_{1},b_{1})\Phi_{V_{2}}(x_{2},b_{2})\Phi_{V_{3}}(x_{3},b_{3})H(x_{i},b_{i},t)S_{t}(x_{i})e^{-S(t)}\right]. \end{eqnarray} The meson wave functions $\Phi_i$ ($i=B,V_2,V_3$) include the dynamical information that how the quarks are combined into a hadron. They are nonperturbative but universal. $\rm Tr$ is the sum of degrees of freedom in the spin and color space. 
$b_{i}$ is the conjugate variable of the quark transverse momentum $k_{iT}$, and $x_{i}$ is the longitudinal momentum fraction carried by the light quark in each meson. $H(x_{i},b_{i},t)$ describes the four-quark operators and the spectator quark connected by a hard gluon, and can be calculated perturbatively. The jet function $S_{t}(x_{i})$, coming from the threshold resummation of the double logarithms $\ln^2 x_i$, smears the end-point singularities in $x_{i}$ \cite{Li:2001ay}. The Sudakov form factor $e^{-S(t)}$, arising from the resummation of the double logarithms, effectively suppresses the soft dynamics, i.e., the long-distance contributions in the large-$b$ region \cite{Li:1994iu, Keum:2000wi}. The main advantage of this approach is that it preserves the transverse momenta of the quarks and avoids the problem of end-point divergence. Because there are three kinds of polarizations for a vector meson, namely longitudinal ($L$), normal ($N$) and transverse ($T$), the amplitudes for a $B$ meson decay to two vector mesons are generally characterized by the polarization states of the two vector mesons. Thus, the amplitude $ A^{(\sigma)}$ for the decay $B(P_B) \to V_2(P_2,\epsilon_{2\mu}^{*}) V_3(P_3,\epsilon_{3\mu}^{*})$ can be decomposed as follows: \begin{eqnarray} A^{(\sigma)}& =&\epsilon_{2 \mu}^{*}(\sigma)\epsilon_{3\nu}^{*}(\sigma)\left[a g^{\mu \nu}+\frac{b}{M_{2} M_{3}} P_{B}^{\mu} P_{B}^{\nu}+i \frac{c}{M_{2} M_{3}} \epsilon^{\mu \nu \alpha \beta} P_{2 \alpha} P_{3 \beta}\right]\nonumber\\ &=&A_{L}+A_{N} \epsilon_{2}^{*}(\sigma=T) \cdot \epsilon_{3}^{*}(\sigma=T)+i\frac{A_{T}}{M_{B}^{2}} \epsilon^{\mu \nu \gamma \rho} \epsilon_{2 \mu}^{*}(\sigma) \epsilon_{3 \nu}^{*}(\sigma) P_{2 \gamma} P_{3 \rho}, \end{eqnarray} where $M_2$ and $M_3$ are the masses of the vector mesons $V_2$ and $V_3$, respectively. The definitions of the amplitudes $A_{i}$ $(i=L,N,T)$ in terms of the Lorentz-invariant amplitudes $a$, $b$ and $c$ can be written as \begin{align} A_{L}&=a \epsilon_{2}^{*}(L) \cdot \epsilon_{3}^{*}(L)+\frac{b}{M_{2}M_{3}} \epsilon_{2}^{*}(L) \cdot P_{3}\, \epsilon_{3}^{*}(L) \cdot P_{2},\\ A_{N}&=a,\\ A_{T}&=\frac{c}{r_{2} r_{3}}, \end{align} with $r_{2,3}=M_{V_{2,3}}/M_B$. The amplitudes $ A_{i}$ $(i=L,N,T)$ can be calculated directly in the PQCD approach. Alternatively, we can also define the polarization amplitudes in the three directions, whose relationships with $A_{L}$, $A_{N}$ and $A_{T}$ are given as follows: \begin{equation} A_{0}=-A_{L}, \,\,\,A_{\|}=\sqrt{2}A_{N},\,\,\, A_{\perp}=r_{2}r_{3} \sqrt{2\left(\kappa^{2}-1\right)} A_{T}, \end{equation} with the ratio $\kappa=\frac{P_{2} \cdot P_{3}}{M_{K^{*0}}^2}$. Then the branching fraction of $B\to V_2V_3$ is expressed as \begin{align} \label{br} {B}\left(B\to V V\right)=\tau_{B} \frac{\left|p_{c}\right|}{8 \pi M_{B}^{2}}\left[\left|A_{0}\right|^{2}+\left|A_{\|}\right|^{2}+\left|A_{\perp}\right|^{2}\right], \end{align} where $\tau_B$ is the lifetime of the $B$ meson and $p_c$ is the three-momentum of either vector meson in the $B$ rest frame. The three polarization fractions $f_i$ $(i=L, \parallel, \perp)$ are defined as \begin{eqnarray}\label{pvf} f_i=\frac{|A_i|^2}{|A_0|^2+|A_\parallel|^2+|A_\perp|^2}\;. \end{eqnarray}
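Equations (\ref{br}) and (\ref{pvf}) translate directly into code. The toy amplitudes below are purely illustrative (they are not the PQCD results), so only the polarization fractions are meaningful here; a physical branching fraction would require amplitudes and lifetime in consistent natural units.
\begin{verbatim}
import numpy as np

def observables(A0, Apar, Aperp, tau, M, m2, m3):
    """Branching fraction and polarization fractions from the helicity
    amplitudes, Eqs. (br) and (pvf); all inputs in natural units."""
    pc = np.sqrt((M**2 - (m2 + m3)**2) * (M**2 - (m2 - m3)**2)) / (2.0*M)
    amps2 = np.array([abs(A0)**2, abs(Apar)**2, abs(Aperp)**2])
    BF = tau * pc / (8.0*np.pi*M**2) * amps2.sum()
    return BF, amps2 / amps2.sum()

# toy amplitudes; tau_Bs ~ 1.516 ps ~ 2.30e12 GeV^-1 (assumed conversion)
BF, (fL, fpar, fperp) = observables(A0=1.0+0.2j, Apar=0.60, Aperp=0.55,
                                    tau=2.30e12, M=5.367, m2=0.896, m3=0.896)
print(f"f_L={fL:.3f}  f_par={fpar:.3f}  f_perp={fperp:.3f}")
\end{verbatim}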
In the PQCD approach, the most important inputs are the wave functions of the hadrons. For the initial-state $B$ meson, the wave function is of the form \cite{Zou:2015iwa,Ali:2007ff,Xiao:2006hd,Li:2004ep} \begin{equation} \Phi_{B}(x,b) = \frac{i}{\sqrt{2N_c}} \left[ \not \! P_{B} \gamma_5 + M_{B} \gamma_5 \right] \phi_{B}(x,b), \end{equation} where $b$ is the conjugate space coordinate of the transverse momentum $k_\perp$, and $N_c=3$ is the number of colors. The distribution amplitude $\phi_{B}$ is of the form \begin{align} \phi_{B}(x, b)=N_{B}x^{2}(1-x)^{2} \exp \left[-\frac{1}{2}\left(\frac{x m_{B}}{\omega_{B}}\right)^{2}-\frac{\omega_{B}^{2} b^{2}}{2}\right], \end{align} where $N_{B}$ is the normalization factor and satisfies \begin{align} \int_{0}^{1} d x \phi_{B}(x, b=0)=\frac{f_{B}}{2 \sqrt{2 N_{c}}}, \end{align} $f_{B}$ being the decay constant of the $B$ meson. The shape parameters $\omega_{B}=0.30$ GeV and $\omega_{B_s}=0.50$ GeV are determined from experimental data or calculated from first principles \cite{Wang:2019msf}. Unlike a pseudoscalar particle, a vector meson has a longitudinal polarization vector $\epsilon_L$ and a transverse one $\epsilon_T$. For a final state $K^{*0}$ moving in the plus direction ($n_{+}$) with momentum $P$, the two wave functions of the $K^{*0}$ up to twist-3 are given as \cite{Ball:2007rt} \begin{eqnarray} \Phi_{K^*}^\parallel &=&\frac{1}{\sqrt{2N_c}}\left[M_{K^*}\not\!\epsilon_{L}\phi_{K^*}(x)+\not\!\epsilon_{L}\not\!P\phi_{K^*}^t(x)+M_{K^*}\phi_{K^*}^s(x)\right],\\ \Phi_{K^*}^\perp &=&\frac{1}{\sqrt{2N_c}}\left[ M_{K^*}\not\! \epsilon^*_T\phi_{K^*}^v(x)+ \not\!\epsilon^*_T\not\!P\phi_{K^*}^T(x)+iM_{K^*}\epsilon_{\mu\nu\rho\sigma}\gamma_5\gamma^\mu\epsilon_T^{*\nu}n_+^\rho n_-^\sigma \phi_{K^*}^a(x)\right ], \end{eqnarray} where $n_{+}=\left(1,0, \mathbf{0}_{T}\right)$ and $n_{-}=\left(0,1, \mathbf{0}_{T}\right)$. The two polarization vectors are defined as \begin{align} \epsilon(L)=\frac{P}{M_{K^{*}}}-\frac{M_{K^{*}}}{P \cdot n_{+}} n_{+},\,\,\,\,\,\, \epsilon(T)=\left(0,0,\mathbf{1}_{T}\right)\;. \end{align} The light-cone distribution amplitudes in the wave functions have been calculated within QCD sum rules \cite{Ball:2005vx, Ball:2007zt}: \begin{eqnarray} \phi_{K^{*}}(x)&=&\frac{3f_{K^{*}}}{\sqrt{2N_c}} x(1-x)\left[1+a_{1K^{*}}^{\|} C_{1}^{3/2}(t)+a_{2K^{*}}^{\|} C_{2}^{3 / 2}(t)\right], \\ \phi_{K^{*}}^{T}(x)&=&\frac{3f_{K^{*}}^{T}}{\sqrt{2N_c}} x(1-x)\left[1+a_{1K^{*}}^{\perp} C_{1}^{3/2}(t)+a_{2 K^{*}}^{\perp} C_{2}^{3/2}(t)\right], \\ \phi_{K^{*}}^{t}(x)&=&\frac{3f_{K^{*}}^{T}}{2\sqrt{2N_c}}t^{2}, \\ \phi_{K^{*}}^{s}(x)&=&\frac{3f_{K^{*}}^{T}}{2\sqrt{2N_c}}(-t), \\ \phi_{K^{*}}^{v}(x)&=&\frac{3f_{K^{*}}}{8\sqrt{2N_c}}\left(1+t^{2}\right),\\ \phi_{K^{*}}^{a}(x)&=&\frac{3 f_{K^{*}}}{4\sqrt{2N_c}}(-t). \end{eqnarray} The Gegenbauer polynomials in the distribution amplitudes are given as \begin{align} C_{1}^{3/2}(t)=3 t,\quad C_{2}^{3/2}(t)=\frac{3}{2}\left(5 t^{2}-1\right), \end{align} where $t=2x-1$ and $x$ is the momentum fraction of the light quark. According to the effective Hamiltonian eq.(\ref{Hamiltonian}), we can draw the lowest-order diagrams contributing to $B_{d,s}\to K^{*0} \overline{K}^{*0}$. For example, the Feynman diagrams of $B_s\to K^{*0} \overline{K}^{*0}$ are shown in Fig.~\ref{Feynman Diagram}, where the symbols ``$\otimes$'' denote the insertions of the effective operators. Figures (a) and (b) are factorizable emission diagrams, while (c) and (d) are nonfactorizable emission ones. Similarly, figures (e) and (f) are factorizable annihilation diagrams, and (g) and (h) are nonfactorizable annihilation ones.
We also note that in the $B_s\to K^{*0} \overline{K}^{*0}$ decay the final vector meson $\overline{K}^{*0}$ picks up the spectator strange quark, while in the $B_d\to K^{*0} \overline{K}^{*0}$ decay the spectator down quark enters the ${K}^{*0}$ meson. \begin{figure} \begin{center} \includegraphics[scale=0.75]{feynman.eps} \caption{The leading-order Feynman diagrams for $B_{s} \to K^{*0} {\overline K^{*0}}$.}\label{Feynman Diagram} \end{center} \end{figure} After calculating the amplitudes of each diagram with the different operators, we obtain the amplitudes of $B^0 \to K^{*0} {\overline K^{*0}}$ and $B^0_{s} \to K^{*0} {\overline K^{*0}}$, which are given as \begin{align} A^{i}\left(B^{0} \rightarrow K^{*0} \overline{K}^{*0}\right)=&-\frac{G_{F}}{\sqrt{2}} V_{tb}^{*} V_{t d}\left\{ M_{fh}^{LL, i}\left[a_{4}-\frac{1}{2} a_{10}\right]+M_{nfh}^{LL,i}\left[C_{3}-\frac{1}{2} C_{9}\right]\right.+M_{nfh}^{LR,i}\left[C_{5}-\frac{1}{2}C_{7}\right] \nonumber\\ & +M_{fa}^{LL,i}\left[\frac{4}{3}a_{3}+\frac{4}{3}a_{4}-\frac{2}{3}a_{9}-\frac{2}{3} a_{10}\right] +M_{fa}^{LR,i}\left[a_{5}-\frac{1}{2} a_{7}\right]+M_{fa}^{SP,i}\left[a_{6}-\frac{1}{2} a_{8}\right] \nonumber\\ &+M_{nfa}^{LL,i}\left[C_{3}-\frac{1}{2}C_{9}+C_{4} -\frac{1}{2}C_{10}\right] +M_{nfa}^{LR,i}\left[C_{5}-\frac{1}{2}C_{7}\right] +M_{nfa}^{SP,i}\left[C_{6}-\frac{1}{2} C_{8}\right] \nonumber\\ &+\left(M_{fa}^{LL,i}\left[a_{3}-\frac{1}{2} a_{9}\right]+M_{fa}^{LR, i}\left[a_{5}-\frac{1}{2} a_{7}\right]\right)_{K^{*0} \leftrightarrow \overline{K}^{*0}} \nonumber\\ &\left.+\left(M_{nfa}^{LL,i}\left[C_{4}-\frac{1}{2} C_{10}\right]+M_{nfa}^{SP,i}\left[C_{6}-\frac{1}{2}C_{8}\right]\right)_{K^{*0} \leftrightarrow \overline{K}^{*0}}\right\},\label{BDKK} \end{align} \begin{align} A^{i}\left(B_{s}^{0}\rightarrow K^{*0}\overline{K}^{*0}\right)=&-\frac{G_{F}}{\sqrt{2}}V_{tb}^{*} V_{ts}\left\{ M_{fh}^{LL, i}\left[a_{4}-\frac{1}{2} a_{10}\right]+M_{nfh}^{LL,i}\left[C_{3}-\frac{1}{2} C_{9}\right]+M_{nfh}^{LR,i}\left[C_{5}-\frac{1}{2}C_{7}\right]\right. \nonumber\\ & +M_{fa}^{LL,i}\left[\frac{4}{3}a_{3}+\frac{4}{3}a_{4}-\frac{2}{3}a_{9}-\frac{2}{3} a_{10}\right]+M_{fa}^{LR,i}\left[a_{5}-\frac{1}{2} a_{7}\right]+M_{fa}^{SP,i}\left[a_{6}-\frac{1}{2} a_{8}\right] \nonumber\\ &+M_{nfa}^{LL,i}\left[C_{3}-\frac{1}{2}C_{9}+C_{4} -\frac{1}{2}C_{10}\right] +M_{nfa}^{LR,i}\left[C_{5}-\frac{1}{2}C_{7}\right] +M_{nfa}^{SP,i}\left[C_{6}-\frac{1}{2} C_{8}\right] \nonumber\\ &+\left(M_{fa}^{LL,i}\left[a_{3}-\frac{1}{2} a_{9}\right]+M_{fa}^{LR, i}\left[a_{5}-\frac{1}{2} a_{7}\right]\right)_{K^{*0} \leftrightarrow \overline{K}^{*0}} \nonumber\\ &\left.+\left(M_{nfa}^{LL,i}\left[C_{4}-\frac{1}{2} C_{10}\right]+M_{nfa}^{SP,i}\left[C_{6}-\frac{1}{2}C_{8}\right]\right)_{K^{*0} \leftrightarrow \overline{K}^{*0}}\right\},\label{BSKK} \end{align} with \begin{align} a_{1}=C_{2}+C_{1}/3,\quad a_{2}&=C_{1}+C_{2}/3,\quad a_{3}=C_{3}+C_{4}/3,\quad a_{4}=C_{4}+C_{3}/3, \nonumber\\ a_{5}=C_{5}+C_{6}/3,\quad a_{6}&=C_{6}+C_{5}/3,\quad a_{7}=C_{7}+C_{8}/3,\quad a_{8}=C_{8}+C_{7}/3, \nonumber\\ a_{9}&=C_{9}+C_{10}/3,\quad a_{10}=C_{10}+C_{9}/3, \end{align} where $i=L,N,T$ denotes the longitudinal polarization and the two transverse polarizations. In the above two formulae, the superscripts $LL$, $LR$ and $SP$ indicate the operator structures $(V-A)(V-A)$, $(V-A)(V+A)$ and $(S-P)(S+P)$, respectively. The subscript ``$fh$'' in $M_{fh}$ means the factorizable emission diagrams $(a)$ and $(b)$, while ``$nfh$'' means the nonfactorizable ones $(c)$ and $(d)$.
Similarly, ``$fa$" and ``$nfa$" are the the factorizable and nonfactorizable annihilation diagrams, respectively. Due to the limit of space, we will not list the above amplitudes for each $M$, and the explicit expressions can be found in refs.~\cite{Ali:2007ff, Zou:2015iwa}. It should be stressed that all amplitudes ``$M$" are mode dependent, as the spectator quarks are different in these two decays, though the eqs.(\ref{BDKK}) and (\ref{BSKK}) are very similar. With above formulae, we then calculate the observables in SM. The branching fractions and longitudinal polarization fractions of both decays are given in Table.~\ref{tab:1}, together with predictions of QCDF and the available experimental data. In our numerical calculations, the updated distribution amplitudes \cite{Hua:2020usv} of $K^*$ are adopted. We acknowledge that there are still some uncertainties in our calculations, and we here only discuss two main uncertainties. In the table, the first errors arise from the wave functions of heavy $B$ mesons, in which the shape parameters $ \omega_{B_d}$ and $\omega_{B_s}$ are the only inputs, and we make them change $30\%$. The second ones are from the next-leading power (order) corrections characterized by the hard scale $t$, which changes from $0.8 t$ to $1.2 t$. It can be seen that the branching fractions are affected by both parameters, while the polarization fractions are only sensitive to the shape parameter $\omega_{B_d}$ or $\omega_{B_s}$. In PQCD, both $B_d^0 \to K^{*0}\overline{K}^{*0}$ and $B_{s}^0 \to K^{*0} \overline{K}^{*0}$ are induced only by the penguin operators, so that the direct $CP$ asymmetries of two decays are zero in PQCD. However, including the contributions from charm penguins, the direct $CP$ asymmetries from QCDF are nonzero. Thus, the measurements of direct $CP$ asymmetries in future could discriminate two approaches. \begin{table}[!htp] \begin{center} \caption{Numerical results for observables in $B_{d,s} \to K^{*0} {\overline K^{*0}}$ decays in SM, tegather with results of QCDF and experimental results.}\label{tab:1} \begin{tabular}{ccccc} \hline\hline Decay Mode & BF $(10^{-6})$ &$f_{L}(\%)$ &$f_{\|}(\%)$ &$f_{\perp}(\%$)\\ \hline $B^{0} \rightarrow K^{*0}\overline{K}^{*0}$ &$0.5_{-0.1-0.1}^{+0.2+0.2}$ &$67.1_{-5.7-0.4}^{+5.1+0.3} $ &$17.4_{-3.4-0.0}^{+3.6+0.1}$ &$15.5_{-2.5-0.2}^{+2.7+0.1}$ \\ \hline QCDF \cite{Beneke:2006hg} &$0.6_{-0.1-0.3}^{+0.1+0.5}$ &$69_{-1-27}^{+1+34}$ \\ \hline Exp. \cite{Workman:2022ynf} &$0.8 \pm 0.09 \pm 0.04$ &$72.4 \pm 5.1\pm 1.6 $ &$11.6 \pm 3.3\pm 1.2$ &$16\pm 4.4\pm 1.2 $ \\ \hline \hline $B_{s}^{0} \rightarrow K^{* 0} \overline{K}^{* 0}$ &$7.8_{-1.4-1.5}^{+1.9+2.3}$ &$51.1_{-6.8-0.3}^{+7.3+0.6} $ &$25.6_{-4.2-0.3}^{+3.7+0.1} $ &$23.3_{-3.5-0.2}^{+3.3+0.3} $ \\ \hline QCDF \cite{Beneke:2006hg} &$9.1^{+0.5+11.3}_{-0.4-6.8}$ &$63_{-0-29}^{+0+42}$ \\ \hline Exp. \cite{Workman:2022ynf} &$11.1\pm 2.2\pm1.2$ &$24\pm 3.1\pm 2.5$ &$ $ &$38_{-11-4}^{+11+4}$ \\ \hline\hline \end{tabular} \end{center} \end{table} From Table.~\ref{tab:1}, we find that for the decay $B^{0} \rightarrow K^{*0}\overline{K}^{*0}$, the predictions of branching fractions and polarization fractions from PQCD and QCDF are in agreement with the experimental results, though the theoretical center values of branching fraction are smaller than the experimental data. In fact, the longitudinal contribution is dominant, which is roughly proportional to the form factor $A_0^{B\to K^*}$. 
In QCDF, $A_0^{B\to K^*}(0)=0.39\pm 0.06$, calculated from light-cone sum rules \cite{Ball:2004rg}, was adopted, while $A_0^{B\to K^*}(0)=0.36\pm 0.05$ is obtained in PQCD. In addition, the form factors $A_1^{B\to K^*}(0)$ and $V^{B\to K^*}(0)$, which are related to the transverse amplitudes, are almost the same in PQCD and QCDF. For the decay $B_s^{0} \to K^{*0}\overline{K}^{*0}$, the theoretical predictions are in agreement with each other within uncertainties, with $A_0^{B_s\to K^*}(0)=0.33\pm 0.05$ in QCDF and $A_0^{B_s\to K^*}(0)=0.30\pm 0.05$ in PQCD. However, in comparison to the experimental results, both branching fractions are smaller than the data, and both theoretical longitudinal polarization fractions are much larger than the data, even though the QCDF predictions have large uncertainties arising from the annihilation diagrams. In our previous study \cite{Zou:2015iwa}, with the large suppression from threshold resummation, the predicted longitudinal polarization fraction $f_L=(38.3^{+12.1}_{-10.5})\%$ could be comparable to the data, but the corresponding branching fraction $(5.4^{+3.0}_{-2.4})\times 10^{-6}$ is smaller than the current data. Although there are many uncertainties in the theoretical calculations, this discrepancy could be a hint of NP beyond the SM. Now we calculate the $L_{K^*\overline{K}^*}$ parameter and obtain \begin{eqnarray} L_{K^*\overline{K}^*}^{\rm PQCD}= 12.7^{+5.6}_{-3.2},\label{LKKPQCD} \end{eqnarray} where the uncertainty is mainly from the shape parameters in the distribution amplitudes of the $B^0_d$ and $B^0_s$ mesons. The uncertainties from higher-order corrections almost cancel. In this sense, more precise and reliable determinations of the shape parameters of the heavy mesons from nonperturbative approaches are needed. By comparison, we find that our result is also larger than the one extracted from the current data, eq.(\ref{eq:expL}), though it is smaller than that of QCDF. \section{Calculation in Family Nonuniversal $Z^{\prime}$ Model} \label{sec:3} Now we turn to study the contributions of an extra gauge boson $Z^\prime$ to the decay $B^0_{s} \to K^{*0} {\overline K^{*0}}$, which is induced by the FCNC $b\to s \bar d d$ transition. Supposing there is no mixing between $Z$ and $Z^\prime$, the $Z^\prime$ term of the neutral-current Lagrangian in the gauge basis can be written as \cite{Langacker:2000ju,Langacker:2008yv} \begin{eqnarray} L^{Z^\prime} =-g^\prime Z^{\prime {\mu}}\sum_{i} {\overline \psi_i^I} \gamma_{\mu} \left[ (\epsilon_{\psi_L})_{i} P_L + (\epsilon_{\psi_R})_{i} P_R \right] \psi^I_i, \end{eqnarray} where $\psi^I_i$ denotes the $i$-th family fermion, and the superscript $I$ refers to the gauge-interaction eigenstate. $g^\prime$ is the gauge coupling constant at the electroweak scale $M_W$, and $P_{L,R}=(1\mp\gamma_5)/2$. The parameter $\epsilon_{\psi_L}$ ($\epsilon_{\psi_R}$) denotes the left-handed (right-handed) chiral coupling. According to certain string constructions \cite{Chaudhuri:1994cd} or GUT models \cite{Barger:1987hh}, the couplings can be family non-universal. When we change from the weak basis to the physical one, FCNCs generally appear at tree level in both the left-handed and right-handed sectors, explicitly as \begin{eqnarray} B^{L}=V_{\psi_L}\epsilon_{\psi_L}V_{\psi_L}^{\dagger},\;\;\;\;\; B^{R}=V_{\psi_R}\epsilon_{\psi_R}V_{\psi_R}^{\dagger}, \end{eqnarray} where $V_{\psi_{L,R}}$ are unitary matrices. For simplicity, the right-handed couplings are supposed to be flavor-diagonal.
Therefore, the FCNC $b\to s\bar{q}q$ (with $q=u,d$) transition can also be mediated by the $Z^{\prime}$ at tree level, and the corresponding effective Hamiltonian has the form: \begin{equation}\label{heffz1} {H}_{eff}^{\rm Z^{\prime}}=\frac{2G_F}{\sqrt{2}}\big(\frac{g^{\prime}M_Z} {g_1M_{Z^{\prime}}}\big)^2\,B_{sb}^L(\bar{s}b)_{V-A}\sum_{q}\big(B_{qq}^L (\bar{q}q)_{V-A} +B_{qq}^R(\bar{q}q)_{V+A}\big)+h.c.\,, \end{equation} where $g_1=e/(\sin{\theta_W}\cos{\theta_W})$ and $M_{Z^{\prime}}$ is the mass of the new $Z^\prime$ boson. The current structures $(V-A)(V-A)$ and $(V-A)(V+A)$ are the same as those in eq.~(\ref{Hamiltonian}) of the SM, which allows us to rewrite eq.~(\ref{heffz1}) as \begin{equation} {H}_{eff}^{\rm Z^{\prime}}=-\frac{G_F}{\sqrt{2}}V_{tb}V_{ts}^{\ast}\sum_{q} (\Delta C_3 O_3^q +\Delta C_5 O_5^q+\Delta C_7 O_7^q+\Delta C_9 O_9^q)+h.c.\,. \end{equation} In the above Hamiltonian, the $\Delta C_i$ denote the $Z^{\prime}$ corrections to the Wilson coefficients of the SM operators, which can be written as \begin{eqnarray} \Delta C_{3}&=&-\frac{2}{3V_{tb}V_{ts}^{\ast}}\,\big(\frac{g^{\prime}M_Z} {g_1M_{Z^{\prime}}}\big)^2\,B_{sb}^L\,(B_{uu}^{L}+2B_{dd}^{L})\,,\nonumber\\ \Delta C_{5}&=&-\frac{2}{3V_{tb}V_{ts}^{\ast}}\,\big(\frac{g^{\prime}M_Z} {g_1M_{Z^{\prime}}}\big)^2\,B_{sb}^L\,(B_{uu}^{R}+2B_{dd}^{R})\,,\nonumber\\ \Delta C_{7}&=&-\frac{4}{3V_{tb}V_{ts}^{\ast}}\,\big(\frac{g^{\prime}M_Z} {g_1M_{Z^{\prime}}}\big)^2\,B_{sb}^L\,(B_{uu}^{R}-B_{dd}^{R})\,,\nonumber\\ \Delta C_{9}&=&-\frac{4}{3V_{tb}V_{ts}^{\ast}}\,\big(\frac{g^{\prime}M_Z} {g_1M_{Z^{\prime}}}\big)^2\,B_{sb}^L\,(B_{uu}^{L}-B_{dd}^{L})\,. \label{NPWilson} \end{eqnarray} It is obvious that the $Z^\prime$ contributes to the QCD penguins as well as to the EW penguins. For simplicity, we follow the assumptions in refs.~\cite{Buras:2003dj, Barger:2009eq, Hua:2010wf, Chang:2013hba, Li:2015xna, Chang:2009wt, Celis:2015ara} and set $B_{uu}^{L,R}=-2 B_{dd}^{L,R}$, so that the new physics is manifest in the EW penguins, namely $O_7$ and $O_9$. Furthermore, without loss of generality, the diagonal elements of the effective coupling matrices $B_{qq}^{L,R}$ are taken to be real due to the hermiticity of the effective Hamiltonian. However, there is no constraint requiring the off-diagonal element $B_{sb}^{L}$ to be real, so a new weak phase $\phi_{bs}$ can exist. Taking all this information together, we then have the new Wilson coefficients \begin{eqnarray} & &\Delta C_{3,5}\simeq 0, \nonumber\\ & &\Delta C_{9,7}=4\frac{|V_{tb}V_{ts}^{\ast}|}{V_{tb}V_{ts}^{\ast}}\xi^{L,R}e^{i\phi_{bs}}, \end{eqnarray} with \begin{eqnarray}\label{xi} \xi^{L,R}=\left(\frac{g^{\prime}M_Z} {g_1M_{Z^{\prime}}}\right)^2\left|\frac{B_{sb}^LB_{dd}^{L,R}}{V_{tb}V_{ts}^{\ast}} \right|. \end{eqnarray} With the assumption that both the ${U(1)_{Y}}$ of the SM and the ${U(1)}$ introduced in new models originate from a Grand Unified Theory, the gauge coupling constants of the ${Z}$ and ${Z^{\prime}}$ bosons are the same, implying that $g^\prime/g_1=1$. So far, no obvious signal of a new ${Z^{\prime}}$ boson has been observed in current experiments such as CMS and ATLAS, which indicates that the mass of the $Z^\prime$ should be larger than the TeV scale. Conservatively, we set $M_{Z} / M_{Z^{\prime}} \approx 0.1$. In order to accommodate the mass difference between $B_{s}^0$ and $\overline{B}_{s}^0$, which provides one of the strictest constraints on models with a $Z^\prime$ boson, $\left|B_{sb}^{L}\right| \sim\left|V_{tb} V_{ts}^{*}\right|$ is theoretically required.
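As a rough numerical illustration of eqs.~(\ref{NPWilson}) and (\ref{xi}), the short Python sketch below evaluates $\xi$ and $\Delta C_{9,7}$ under the assumptions just stated. The coupling magnitudes and the value of $|V_{tb}V_{ts}^{\ast}|$ are illustrative choices of ours rather than fitted inputs, and the small CKM phase of $V_{tb}V_{ts}^{\ast}$ is ignored, so the prefactor $|V_{tb}V_{ts}^{\ast}|/(V_{tb}V_{ts}^{\ast})$ is set to unity.
\begin{verbatim}
import cmath, math

# Illustrative inputs: g'/g_1 = 1 (GUT assumption), M_Z/M_Z' ~ 0.1,
# |Vtb Vts*| ~ 0.04, |B_sb^L| ~ |Vtb Vts*| and |B_dd^{L,R}| ~ 1.
g_ratio, mass_ratio = 1.0, 0.1
VtbVts = 0.04                  # assumed magnitude of Vtb Vts*
B_sb, B_dd = 0.04, 1.0         # assumed effective Z' couplings
phi_bs = math.radians(60.0)    # sample value of the new weak phase

# xi^{L,R} of eq. (xi); these inputs give xi = 0.01, close to the
# benchmark value used in the scans below.
xi = (g_ratio * mass_ratio)**2 * abs(B_sb * B_dd / VtbVts)

# Z' shift of the EW-penguin Wilson coefficients, Delta C_{9,7};
# the CKM prefactor |VtbVts*|/(VtbVts*) is taken as unity.
dC97 = 4.0 * xi * cmath.exp(1j * phi_bs)

print(f"xi = {xi:.3f}, Delta C_9,7 = {dC97:.4f}")
\end{verbatim}
Varying $\phi_{bs}$ in this sketch simply rotates $\Delta C_{9,7}$ in the complex plane, which is the origin of the $\phi_{bs}$ dependence of the observables discussed below.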
Meanwhile, in order to explain the $CP$ asymmetries of $B \to K \pi$ and the branching fractions of $B \to K \phi$ and $B \to K^* \phi$, the diagonal elements should satisfy $\left|B_{qq}^{L,R}\right|\sim 1$. The newly introduced weak phase $\phi_{bs}$ is assumed to be a free parameter in the range $[-\pi, \pi]$ without any restriction. In order to reduce the number of new parameters, we further assume $\xi=\xi^{L L}=\xi^{L R}$, which means that the left-handed couplings are the same as the right-handed ones. Of course, $\xi^{L L}=0$ or $\xi^{L R}=0$ could also be assumed, but we shall not discuss these two cases further. Therefore, in our following discussion, we have only two parameters, $\xi \in[0.001,0.02]$ and $\phi_{bs} \in [-180^\circ,180^\circ]$. In Figures~\ref{fig:2} and \ref{fig:3}, we present the branching fraction and longitudinal polarization fraction of $B_s\to K^{*0}\overline{K}^{*0}$ as functions of the new weak phase $\phi_{bs}$, for a fixed value $\xi=0.01$ with $\omega_{B_s}=0.45, 0.50$ and $0.55$ in the left panels, and for a fixed $\omega_{B_s}=0.50$ with $\xi=0.02, 0.01$ and $0.005$ in the right panels. The experimental data and the SM predictions are also shown in the figures for comparison. As aforementioned, the experimental result and the SM prediction of the branching fraction have some overlap, but there is no overlap for the longitudinal polarization fraction. From Table~\ref{tab:1}, we can see that in the SM the uncertainty of the branching fraction arising from $\omega_{B_s}$ is about $20\%$. With the fixed parameter $\xi=0.01$, for each $\omega_{B_s}$, the uncertainties coming from the unknown phase $\phi_{bs}$ are also around $20\%$, as shown in the left panel of Figure~\ref{fig:2}. Comparing the theoretical results with the measurements, a small $\omega_{B_s}$ is preferred by the data. Given $\omega_{B_s}=0.50$, it is found from the right panel of Figure~\ref{fig:2} that for $\xi<0.01$ the contributions of the new particle would be obscured by the large theoretical uncertainties. However, when we set $\xi=0.02$, the effect of the $Z^\prime$ boson becomes more remarkable, and the branching fraction could be as large as $11.2\times 10^{-6}$ when $\phi_{bs}=0^{\circ}$. Specifically, for $\xi=0.02$ and $\omega_{B_s}=0.50$, the new weak phase $\phi_{bs}$ is constrained to the range $[-100^{\circ},100^{\circ}]$ by the current data, and the allowed range decreases as $\xi$ becomes smaller. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.8]{fig2-1} \includegraphics[scale=0.8]{fig2-2} \caption{The dependence of the branching fraction of $B_s\to K^{*0}\overline{K}^{*0}$ on the weak phase $\phi_{bs}$, for a fixed value $\xi=0.01$ with $\omega_{B_s}=0.45$ (dotted blue line), $0.50$ (solid black line) and $0.55$ (dashed red line) in the left panel; and for a fixed $\omega_{B_s}=0.50$ with $\xi=0.005$ (dot-dashed blue line), $0.01$ (dashed purple line) and $0.02$ (dotted red line) in the right panel.
The blue and yellow regions represent the experimental data and the SM prediction, respectively.}\label{fig:2} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[scale=0.8]{fig3-1.eps} \includegraphics[scale=0.8]{fig3-2.eps} \caption{The dependence of the longitudinal polarization fraction ($f_L$) of $B_s\to K^{*0}\overline{K}^{*0}$ on the weak phase $\phi_{bs}$, for a fixed value $\xi=0.01$ with $\omega_{B_s}=0.45$ (dotted blue line), $0.50$ (solid black line) and $0.55$ (dashed red line) in the left panel; and for a fixed $\omega_{B_s}=0.50$ with $\xi=0.005$ (dot-dashed blue line), $0.01$ (dashed purple line) and $0.02$ (dotted red line) in the right panel. The blue and yellow regions represent the experimental data and the SM prediction, respectively.}\label{fig:3} \end{center} \end{figure} In contrast to the branching fraction, the measured longitudinal polarization fraction is smaller than the theoretical prediction, which motivates us to look for mechanisms that suppress the longitudinal contribution or enhance the transverse ones. It can be seen from the left panel of Fig.~\ref{fig:3} that for the fixed value $\xi=0.01$ most results are larger than the data, and only a few approach the upper limit of the experimental data, when $\omega_{B_s}=0.55$ and $\phi_{bs}\approx50^\circ$. Therefore, a larger $\omega_{B_s}$ is favored, which differs from the preference of the well-measured branching fraction. The right panel shows that, for the fixed $\omega_{B_s}=0.50$, the theoretical predictions of the longitudinal polarization fraction $f_L$ are larger than the data for both $\xi=0.01$ and $\xi=0.005$. When $\xi=0.02$, $f_L$ varies over a wide range with $\phi_{bs}$, and falls into the experimental range for $\phi_{bs}\in [8^\circ,93^\circ]$. When $\phi_{bs}\approx50^\circ$, $f_L$ could be as small as $22\%$. \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.8]{fig4-1.eps}\hspace{0.4cm} \includegraphics[scale=0.8]{fig4-2.eps} \caption{The dependence of the branching fraction (left panel) and longitudinal polarization fraction ($f_L$) (right panel) of $B_s\to K^{*0}\overline{K}^{*0}$ on the weak phase $\phi_{bs}$, for a fixed value $\xi=0.02$ with $\omega_{B_s}=0.50\pm0.05$. The blue bands represent the experimental data.}\label{fig:4} \end{center} \end{figure} From the above analysis, the branching fraction prefers a smaller $\omega_{B_s}$, while the longitudinal polarization fraction prefers a larger one. We also found that once $\xi=0.02$ is adopted, both the branching fraction and the longitudinal polarization fraction vary over a large range as $\phi_{bs}$ changes. Thus, with $\xi=0.02$ we plot all possible regions for $\omega_{B_s}=0.50\pm0.05$ in Fig.~\ref{fig:4}. These two figures illustrate that for the fixed $\xi=0.02$ both observables can be well consistent with the experimental data, even when $\omega_{B_s}=0.45$ is adopted. In addition, a positive weak phase $\phi_{bs}$ is preferred, as implied in Fig.~\ref{fig:4}.
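Operationally, the scans shown in Figs.~\ref{fig:2}--\ref{fig:4} amount to recomputing the two observables on a grid of $(\phi_{bs},\xi)$ and keeping the points compatible with the measured bands. A minimal Python sketch of this procedure is given below; the function predict_observables is a hypothetical placeholder for the full PQCD amplitude calculation (it raises an error until filled in), and the experimental bands are the Table~\ref{tab:1} values with errors combined in quadrature.
\begin{verbatim}
import math
import numpy as np

def predict_observables(phi_bs_deg, xi, omega_Bs):
    # Hypothetical placeholder: should return (BF/1e-6, f_L in %)
    # for B_s -> K*0 K*0-bar including the shifts Delta C_{9,7}.
    raise NotImplementedError("insert the PQCD calculation here")

# 1-sigma experimental bands from Table 1 (errors in quadrature).
bf_c, bf_e = 11.1, math.hypot(2.2, 1.2)
fl_c, fl_e = 24.0, math.hypot(3.1, 2.5)

allowed = []
for phi in np.linspace(-180.0, 180.0, 181):     # weak phase [deg]
    for xi in np.linspace(0.001, 0.02, 39):     # effective coupling
        bf, fl = predict_observables(phi, xi, omega_Bs=0.50)
        if abs(bf - bf_c) <= bf_e and abs(fl - fl_c) <= fl_e:
            allowed.append((phi, xi))           # passes both cuts
\end{verbatim}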
\begin{figure}[htb] \begin{center} \includegraphics[scale=0.8]{fig5-1.eps}\hspace{0.4cm} \includegraphics[scale=0.8]{fig5-2.eps} \caption{The dependence of the $L_{K^*\overline{K}^{*0}}$-parameter on the weak phase $\phi_{bs}$, for a fixed value $\xi=0.01$ with $\omega_{B_s}=0.45$ (dotted blue line), $0.50$ (solid black line) and $0.55$ (dashed red line) in the left panel, and for a fixed $\omega_{B_s}=0.55$ with $\xi=0.005$ (dot-dashed blue line), $0.01$ (dashed purple line) and $0.02$ (dotted red line) in the right panel. The blue and yellow regions represent the experimental data and the SM prediction, respectively.}\label{fig:5} \end{center} \end{figure} Now, we shall discuss the effect of the newly introduced $Z^\prime$ boson on the newly defined parameter $L_{K^*\overline{K}^{*0}}$. As aforementioned, we suppose that the $Z^\prime$ only participates in the $b\to s$ transitions, and its contribution to the FCNC $b\to d$ transitions is suppressed by the small $|B_{db}|$ and is negligible. In this respect, $L_{K^*\overline{K}^{*0}}$ does in fact reflect the contribution of the longitudinal amplitude of the decay $B_{s}^{0} \to K^{*0} {\overline{K}^{*0}}$. In the left panel of Figure~\ref{fig:5}, we adopt $\xi=0.01$ again and show the variation of $L_{K^*\overline{K}^{*0}}$ with $\phi_{bs}$ for $\omega_{B_s}=0.45, 0.50$ and $0.55$. The SM prediction and the latest measurement are also shown. By comparison, we find that if $\xi=0.01$ the theoretical predictions cannot agree with the experimental data, even if $\omega_{B_s}=0.55$ is adopted. We also calculated $L_{K^*\overline{K}^{*0}}$ for $\omega_{B_s}=0.45, 0.50$ and $0.55$; the numerical results show that if $\xi<0.02$ the values $\omega_{B_s}=0.45$ and $0.50$ are not preferred by the experimental data. Thus, we adopt $\omega_{B_s}=0.55$ and plot the dependence of $L_{K^*\overline{K}^{*0}}$ on the phase for $\xi=0.005, 0.01$ and $0.02$ in the right panel. It can be clearly seen that $L_{K^*\overline{K}^{*0}}$ changes over a wide range for $\xi=0.02$, and it can reach 4.61 at $\phi_{bs} \approx 75^\circ$. Combining Figures~\ref{fig:4} and \ref{fig:5}, we find that in such a family non-universal $Z^\prime$ model there might exist a certain parameter space where all observables can be accommodated. In order to obtain this parameter space, we show the combined result in the $(\phi_{bs},\xi)$ two-dimensional plane for the fixed value $\omega_{B_s}=0.55$, as shown in Figure~\ref{fig:6}. The green and yellow bands represent the regions fitting the branching fraction and the longitudinal polarization fraction, respectively, while the region of the parameter space corresponding to a viable fit of $L_{K^*\overline{K}^{*0}}$ is marked in blue. Evidently, the experimental data of $L_{K^*\overline{K}^{*0}}$ gives the most stringent constraint. As expected, these three bands overlap in a very small region, $\xi \in [0.017,0.018]$ and $\phi_{bs}\in [50^\circ,65^\circ]$. Within this small parameter space, we then have \begin{eqnarray} B(B_{s}^{0} \to K^{* 0} \overline{K}^{* 0})&=&(8.6\pm0.4)\times 10^{-6}, \\ f_{L}(B_{s}^{0} \to K^{* 0} \overline{K}^{* 0})&=&(19.5\pm0.7)\%,\\ L_{K^*\overline{K}^{*0}}^{\rm PQCD}&=&5.3\pm 0.3. \end{eqnarray} These results, with small uncertainties, could be further tested with high precision at the LHCb and Belle-II experiments. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.8]{fig6.eps} \caption{Combined constraints in the $(\phi_{bs},\xi)$ two-dimensional plane for the fixed value $\omega_{B_s}=0.55$.
The green, yellow and blue regions represent the constraints from the branching fraction and the longitudinal polarization fraction of the $B_s\to K^{*0}\overline{K}^{*0}$ decay, and from the $L_{K^*\overline{K}^{*0}}$-parameter, respectively.}\label{fig:6} \end{center} \end{figure} Finally, we present some comments on direct searches for the $Z^\prime$ boson. At the LHC, the main way to search directly for a $Z^\prime$ is via a resonance peak in the invariant-mass distribution of its decay products. This experimental analysis is usually performed by the ATLAS and CMS collaborations for $Z^\prime$ production in the $s$-channel in a rather model-independent way, but assuming that the observed new resonance is narrow, such that any interference of SM and NP contributions can be neglected. Under these assumptions, the $Z^\prime$ Drell-Yan cross section at a hadron machine can be approximated as \cite{Accomando:2010fz,Paz:2017tkr, Workman:2022ynf} \begin{eqnarray} \sigma(pp\to Z^\prime X \to f\bar{f} X) \simeq \frac{\pi}{6 s} \sum_q c_q^f w_q(s,{M_{Z^\prime}}^2), \end{eqnarray} where $q=u,d,s,c,b$. Here, the hadronic structure functions $w_q(s,{M_{Z^\prime}}^2)$ are independent of the $Z^\prime$ model and contain all information on the parton distribution functions and QCD corrections. On the other hand, the coefficients $c_q^f$ contain all the model-dependent information. Recently, the ATLAS and CMS collaborations published limits on $M_{Z^\prime}$ as a function of $c_{u,d}^\ell$, where $\ell =e,\mu$ \cite{ATLAS:2019erb, CMS:2021ctt}. Lower mass limits of $5.15\,(4.56)$ TeV are set based on the sequential standard model (superstring-inspired model) \cite{CMS:2021ctt}, and the lower limit reaches $4.5$ TeV for the $E_6$-motivated $Z^\prime$ boson \cite{ATLAS:2019erb}. However, our results are challenged by the above measurements, because the combined parameter $\xi \in [0.017,0.018]$ implies that a large $g^\prime$ or a small $M_{Z^\prime}$ is needed, as shown in eq.~(\ref{xi}). We also note that for high values of $g^\prime$ the ratio $g^\prime/M_{Z^\prime}$ can be quite large, which could spoil the narrow-width approximation. Besides, the current limits are all model-dependent, and model-independent analyses are not yet available. Therefore, models with $M_{Z^\prime}\leq 3-4$ TeV required by flavour physics cannot be totally excluded by the current data. We look forward to further searches for the $Z^\prime$ at the LHC or at future high-energy colliders. \section{Summary}\label{sec:4} In this work, we studied the nonleptonic decays $B_{d} \to K^{*0} {\overline K^{*0}}$ and $B_{s} \to K^{*0} {\overline K^{*0}}$ within the perturbative QCD approach, which is based on $k_{\rm T}$ factorization. With the newly fitted distribution amplitudes of the $K^{*}$, both the branching fractions and the polarization fractions were recalculated. The numerical results show that the theoretical results for $B_{d} \to K^{*0} {\overline K^{*0}}$ are in agreement with the experimental measurements, while for the decay $B_{s} \to K^{*0} {\overline K^{*0}}$ the branching fraction and the longitudinal polarization fraction cannot simultaneously agree with the data. We also explored the $L_{K^*\overline{K}^{*0}}$-parameter, a combination of polarization fractions and branching fractions introduced in order to reduce the theoretical uncertainties. In the SM, $L_{K^*\overline{K}^{*0}}^{\rm PQCD}= 12.7^{+5.6}_{-3.2}$ is obtained in PQCD, which is still larger than the experimental data.
In order to identify whether the deviations come from contributions of new physics, the accuracy of the theoretical calculations should be further improved in the future, for example by improving our knowledge of the wave function of the heavy $B$ meson. On the other hand, we are also encouraged to search for effects of NP beyond the SM. We therefore interpreted these deviations by introducing a family nonuniversal $Z^{\prime}$ boson in the $b\to s q\bar q$ transition. In order to reduce the number of new parameters, we simplified the model as much as possible. With the large shape parameter $\omega_{B_s}=0.55$ in the distribution amplitude of the $B_s$ meson, it is in the small parameter space $\xi \in [0.017,0.018]$ and $\phi_{bs}\in [50^\circ,65^\circ]$ that the three measurements (branching fraction, longitudinal polarization fraction and the $L_{K^*\overline{K}^{*0}}$-parameter) can be accommodated simultaneously. In such a small parameter space, the theoretical uncertainties are remarkably reduced. We hope that all our results can be tested at the LHCb experiment, Belle-II, and future high-energy colliders. \section*{Acknowledgment} This work is supported in part by the National Science Foundation of China under the Grant Nos.~11975195, 11365018 and 11375240, and the Natural Science Foundation of Shandong province under the Grant No.~ZR2019JQ04. This work is also supported by the Project of Shandong Province Higher Educational Science and Technology Program under Grant No.~2019KJJ007. {\small \bibliographystyle{bibstyle}
\section{Introduction} Clusters of galaxies are large bound systems that evolve from large-scale fluctuations, making their existence at large redshifts an important constraint on cosmological models. They are also ideal sites to study galaxy evolution, once systems at different redshifts are available. Combined, these characteristics have stimulated systematic searches for large, statistical samples of galaxy clusters at large redshifts ($z\gtrsim0.5$). With few exceptions, most of the systems identified, especially those at very large redshifts, have been serendipitous discoveries in deep X-ray exposures. While this work has provided confirmation for the existence of these systems, only a handful of clusters have been identified. A more promising alternative is to use moderately deep optical or near-infrared surveys to search for concentrations in the projected galaxy distribution, as originally carried out by \cite{postman96} and later used by \cite{olsen99a,olsen99b} and \cite{scodeggio99}, among others. While finding cluster candidates using these single-passband imaging surveys is much easier and yields much larger samples than using the available X-ray data, the task of confirming that the candidates correspond to true density enhancements in redshift space, let alone to bound systems, is much harder. Over the past few years our group has been engaged in an effort to study galaxy systems in different redshift ranges, with the aim of confirming the EIS cluster candidates \citep{olsen99a,olsen99b,scodeggio99} and, if possible, determining their nature. Previous work has included \cite{olsen03,olsen05} for systems at low redshift ($z\lesssim0.4$), and \cite{ramella00} for systems at intermediate redshifts ($0.5\lesssim z\lesssim0.7$). For candidates with estimated redshifts larger than $z \gtrsim0.6$ we have carried out observations with FORS1 and FORS2 mounted at the VLT. Preliminary results were presented by \cite{benoist02}, who showed strong evidence for a system at $z=1.3$. The present paper extends these earlier results by presenting the results of VLT spectroscopic observations of 5 additional fields. In Sect.~\ref{sec:sample} we discuss how the candidate clusters were selected from the original catalog and how the available photometric data were used to select individual galaxy targets, aiming at improving the efficiency of the observations by eliminating foreground and background objects. In Sect.~\ref{sec:obs_data} the observations and data reduction are described. In Sect.~\ref{sec:results} the results of the spectroscopic observations are presented for each of the fields considered. In Sect.~\ref{sec:discussion} these results are combined with those of \cite{benoist02} to draw conclusions regarding the efficiency of the matched-filter technique, applied to moderately deep $I$-band survey data, in building a statistical sample of high-redshift galaxy clusters. Finally, in Sect.~\ref{sec:summary}, the main results of the present paper are summarized. \section{Sample Selection} \label{sec:sample} Cluster candidates were drawn from the sample of EIS cluster candidates compiled by \cite{olsen99a,olsen99b} and \cite{scodeggio99} by applying the matched-filter technique to the EIS-WIDE $I$-band imaging survey covering 17~square degrees. The eight target clusters, of which three were discussed by \cite{benoist02}, were selected based on their identification as likely clusters in a color-slice analysis \citep{olsen00}.
This analysis was based on the optical survey data combined with infrared follow-up imaging. Based on the fact that most clusters exhibit a red sequence of early-type galaxies \cite[e.g.][]{gladders98, stanford98}, we searched for concentrations of galaxies with similar color by separating the galaxies into slices of color and identifying peaks in the density distribution for each color. The analysis was carried out separately for the $I-K_s$ and $J-K_s$ colors. The systems selected for follow-up spectroscopy all appeared to have significant overdensities in both $I-K_s$ and $J-K_s$. In Table~\ref{tab:cl_targets} we present the detection information, both for the matched filter and the color slicing, for the five candidates discussed in this work. The table gives: in Col.~1 the field name; in Cols.~2 and 3 the nominal position of the cluster candidate in J2000; in Col.~4 the redshift estimated by the matched-filter algorithm; in Col.~5 the $\Lambda_{cl}$-richness, which measures the equivalent number of $L^*$ galaxies; in Col.~6 the Abell-like richness, giving the number of galaxies in the magnitude interval $[m_3;m_3+2]$, where $m_3$ is the magnitude of the third brightest galaxy (for both richnesses more details can be found in \cite{olsen99a}); and in Cols.~7 and 8 the $I-K_s$ and $J-K_s$ colors obtained from the color-slicing analysis. When comparing the computed colors of the cluster galaxies to those expected for a passively evolving elliptical galaxy, we find that, in general, the $I-K_s$ colors are bluer, while the $J-K_s$ colors are roughly consistent with these expectations. This may be caused by a poor calibration of the IR data used for the preliminary analysis. \begin{table*} \caption{Basic properties for the targeted candidate clusters. The parameters are described in the text and more details can be found in \cite{olsen99a}.} \label{tab:cl_targets} \center \begin{tabular}{lrrrrrrr} \hline\hline EIS Cluster & $\alpha$ (J2000) & $\delta$ (J2000) & $z_{MF}$ & $\Lambda_{cl}$ & $N_R$ & $I-K_s$ & $J-K_s$\\ \hline EISJ0046-2951 & $00:46:07.4$ & $-29:51:44.5$ & $0.9$ & $157.0$ & $2$ & 2.75 & 1.75\\ EISJ0048-2942 & $00:48:31.6$ & $-29:42:52.1$ & $0.6$ & $55.6$ & $13$ & 2.75 & 1.75\\ EISJ0050-2941 & $00:50:04.4$ & $-29:41:35.6$ & $1.0$ & $175.3$ & $62$ & 3.50 & 1.60\\ EISJ2236-4017 & $22:36:18.0$ & $-40:17:54.9$ & $0.6$ & $107.8$ & $47$& 2.90 & 1.45\\ EISJ2249-3958 & $22:49:33.0$ & $-39:58:10.1$ & $0.9$ & $123.6$ & $29$ & 2.75&1.90\\ \hline\hline \end{tabular} \end{table*} The selection of target galaxies in each field was based on a combination of data from as many bands as available. For the candidates EISJ0046-2951, EISJ0048-2942 and EISJ0050-2941 we derived photometric redshifts based on $BVIJK_s$ imaging. The limiting magnitude used for this work was $I=22.5$, to avoid large errors in the derived photometric redshifts. The primary targets were selected among galaxies with $z_{phot}\geq0.5$ ($\sim50\%$ of the target galaxies). The remaining slits were filled with arbitrarily chosen objects. Of the targets selected to have $z_{phot}\geq0.5$, $\sim70\%$ proved to have a spectroscopic redshift $z_{spec}\geq0.5$. A more thorough discussion of the photometric redshifts is the topic of a forthcoming paper. For the clusters EISJ2236-4017 and EIS2249-3958 only $IJK_s$ imaging data were available at the time of the spectroscopic observations. For these clusters we selected galaxies based on their optical-infrared colors, to match those of elliptical galaxies at $z\geq0.5$.
The remaining slits were filled with arbitrarily chosen galaxies. \section{Observations and data reduction} \label{sec:obs_data} The observations were carried out during the nights of September 21-25, 2000, using FORS1 mounted at the VLT-ANTU telescope. We used the multi-object spectroscopy (MOS) mode, in which FORS1 provides 21 slits with lengths of 20 and 22~arcsec (see the FORS Manual for details). The slits are much longer than necessary for a single galaxy, thus we tried to fit two galaxies per slit as often as possible. In practice, however, this was rarely possible. The MOS masks were positioned using the FIMS software developed for this purpose. We used the $I$-band images from the ESO Imaging Survey \citep{nonino99, benoist99} to determine the positions of the slits. We used grism 150I+17 with the order separation filter OG590, covering the wavelength range 6000-11000{\AA}. The dispersion of 230{\AA}/mm, corresponding to 5.52{\AA}/pixel, gave a spectral resolution of 280, or about 29{\AA}, for a slit width of 1.4~arcsec. The exposure times for each mask were either 60~min or 120~min, depending on the $I$-band magnitudes of the target galaxies. We split the exposures into four or eight 15~min exposures. Calibration frames (flatfields and calibration arcs) were obtained during daytime. We reduced the spectra using IRAF tasks written for this purpose, based on the APALL task. Details on the reduction procedure and the measurement of redshifts are available in J{\o}rgensen et al. (2005, in preparation). Here, it suffices to say that the individual science exposures were combined and a flatfield correction applied before the wavelength calibration. Redshifts were computed by cross-correlating the extracted one-dimensional spectra against template spectra, taken from \cite{kinney96}, properly shifted to a redshift close to that expected for the galaxy being measured, as estimated from features in the galaxy spectrum. We applied this procedure iteratively, and also for different template spectra. Before a redshift was accepted, it was checked against the presence of the corresponding spectral features. The redshifts derived in this way are listed in Tables~\ref{tab:EIS0046-2951} - \ref{tab:EIS2249-3958}. We estimate an accuracy of the individual redshifts of $\delta z=0.0004$, corresponding to $\sim130~\mathrm{km/s}$. In these tables a value of $8.8888$ indicates that the spectrum revealed a stellar object. \section{Results} \label{sec:results} We have secured a total of 266 new galaxy redshifts. The distribution of the redshifts in each field is shown in the upper panels of Fig.~\ref{fig:zdistributions}. These panels give in their upper parts a bar diagram of all the measured redshifts, and in their lower parts the distribution of redshifts (dashed line), with the solid line indicating the identified groups, as discussed below. The figure also shows for each field the redshifts versus right ascension and declination (lower panels, left and right, respectively). As in previous papers of this series, we use the ``gap''-technique originally proposed by \cite{katgert96} for identifying groups in redshift space. We adopt a gap size of $0.005(1+z)$ to separate individual groups; this separation corresponds to a restframe velocity of 1500~km/s. In addition, only systems with a group redshift $z>0.4$ and with at least 3 galaxies are considered for further analysis. This lower limit on the redshift corresponds to an offset from the original matched-filter estimate of $\Delta z=0.2$, which should be sufficient to include all confirmations.
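For concreteness, a minimal Python sketch of this gap-based grouping is given below; the function and variable names are ours, and the input is simply the list of measured redshifts in a field.
\begin{verbatim}
def gap_groups(redshifts, rel_gap=0.005, z_min=0.4, n_min=3):
    """Split a redshift sample into groups wherever the gap between
    consecutive (sorted) redshifts exceeds rel_gap*(1+z); keep groups
    with at least n_min members and mean redshift above z_min.
    rel_gap=0.005 corresponds to ~1500 km/s in the restframe."""
    if not redshifts:
        return []
    zs = sorted(redshifts)
    groups, current = [], [zs[0]]
    for z in zs[1:]:
        if z - current[-1] > rel_gap * (1.0 + current[-1]):
            groups.append(current)   # gap found: close current group
            current = [z]
        else:
            current.append(z)
    groups.append(current)
    return [g for g in groups
            if len(g) >= n_min and sum(g) / len(g) > z_min]
\end{verbatim}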
\begin{figure*} \center \resizebox{0.23\textwidth}{!}{\includegraphics{3433_1a.ps}} \resizebox{0.23\textwidth}{!}{\includegraphics{3433_1b.ps}} \resizebox{0.23\textwidth}{!}{\includegraphics{3433_1c.ps}} \resizebox{0.23\textwidth}{!}{\includegraphics{3433_1d.ps}} \resizebox{0.23\textwidth}{!}{\includegraphics{3433_1e.ps}} \resizebox{0.23\textwidth}{!}{\includegraphics{3433_1f.ps}} \resizebox{0.23\textwidth}{!}{\includegraphics{3433_1g.ps}} \resizebox{0.23\textwidth}{!}{\includegraphics{3433_1h.ps}} \resizebox{0.23\textwidth}{!}{\includegraphics{3433_1i.ps}} \resizebox{0.23\textwidth}{!}{\includegraphics{3433_1k.ps}} \caption{For each cluster the obtained redshifts (upper panels) and the redshifts as a function of right ascension and declination (lower panels, left and right) are shown. In the upper panel the upper part gives a bar diagram of the redshifts, while the lower part gives the redshift distribution (dashed line) with the identified groups (solid line). In the lower panels the left cone shows the redshifts as a function of right ascension and the right one the redshifts as a function of declination. The diagrams correspond to the complete coverage of each field. The bar at the top of each cone gives the scale of $1h_{75}^{-1}\mathrm{Mpc}$. The shape of the cones reflects the evolution of scale with redshift.} \label{fig:zdistributions} \end{figure*} To assess the significance of each group we have used simulated data sets based on the expected redshift distribution for a uniform distribution of galaxies with a given luminosity function (LF). The redshift distribution is built for the same limiting magnitude as was used for the target selection ($I=22.5$). It was confirmed that this approach leads to a redshift distribution consistent with that measured by the Canada-France Redshift Survey \citep{lilly95} when the same limiting magnitude is adopted \cite[for further details see ][]{benoist02}. We determine the significance of the detected groups from the probability of finding a group as rich as or richer than the detected one at the same redshift. To do this we draw 1000 sets of galaxies from the redshift distribution constructed above, with sizes given by the number of redshifts measured in each of the cluster fields. We select only galaxies with $z\geq0.4$ to mimic the color pre-selection of our targeted galaxies. For each set we run our group-finding method to obtain the frequency of groups as rich as or richer than, and at the same redshift (within $\Delta z \leq 0.05$) as, the group detected in the spectroscopic data. This constraint in redshift is necessary because, due to the shape of the selection function, the frequency of groups with a certain number of members varies with redshift. The significance of a group is defined to be $1-f$, where $f$ is the redshift-dependent frequency. Applying the gap-technique to the redshift distributions shown in Fig.~\ref{fig:zdistributions}, and adopting the same criteria used in previous papers to consider only density enhancements with a significance $\geq 99\%$, we identify 8 groups. Their properties are summarized in Table~\ref{tab:groups}, which gives: in Col.~1 the name of the EIS cluster field; in Cols.~2 and 3 the J2000 right ascension and declination; in Col.~4 the number of member galaxies; in Col.~5 the redshift of the group; in Col.~6 its biweight-estimated restframe velocity dispersion, corrected for our measurement accuracy, with 68\% bootstrap errors; and finally, in Col.~7 the significance, as defined above.
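This Monte Carlo significance estimate can be sketched in the same style, reusing the gap_groups helper above; the sampler draw_field stands in for the magnitude-limited field redshift distribution described in the text and is an assumption of this illustration.
\begin{verbatim}
def significance(z_group, n_members, n_field, draw_field,
                 n_trials=1000, dz=0.05):
    """Return 1 - f, where f is the fraction of mock fields that
    contain a group at least as rich as the observed one within
    |z - z_group| <= dz. draw_field(n) must return n redshifts drawn
    from the field selection function; galaxies are restricted to
    z >= 0.4 to mimic the color pre-selection."""
    hits = 0
    for _ in range(n_trials):
        mock = [z for z in draw_field(n_field) if z >= 0.4]
        rich = [g for g in gap_groups(mock)
                if len(g) >= n_members
                and abs(sum(g) / len(g) - z_group) <= dz]
        if rich:
            hits += 1          # mock field matches or beats the group
    return 1.0 - hits / n_trials
\end{verbatim}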
The positions given in the table are mean values computed using the spectroscopically confirmed member galaxies. Below, the detections in each individual cluster field are briefly discussed, using the available color information to provide additional constraints. \begin{table*} \caption{The identified groups with significance $\sigma \geq 99\%$.} \label{tab:groups} \center \begin{tabular}{lrrrrrr} \hline\hline Cluster & $\alpha$ (J2000) & $\delta$ (J2000) & \# galaxies & $z$ & $\sigma_v$ (km/s) & Significance\\ \hline EIS0046-2951 & 00:46:04.2 & -29:49:27.6 &17& 0.614 & $1400^{+210}_{-610}$& 99.9\smallskip\\ EIS0046-2951 & 00:46:07.7 & -29:51:04.9 &10& 0.671 & $865_{-270}^{+120}$ & 99.7\smallskip\\ \hline EIS0048-2942 & 00:48:35.4 & -29:41:52.4 & 7& 0.402 & $1000_{-480}^{+100}$ & 99.4\smallskip\\ EIS0048-2942 & 00:48:33.4 & -29:42:28.9 &33& 0.637 & $1080_{-210}^{+150}$ & $>$99.9\smallskip\\ \hline EIS0050-2941 & 00:50:06.2 & -29:40:35.3 &12& 0.558 & $1375_{-270}^{+190}$ & 99.9\smallskip\\ EIS0050-2941 & 00:50:03.0 & -29:40:18.1 & 8& 0.616 & $970_{-620}^{+210}$ & 99.1\smallskip\\ \hline EIS2236-4017 & 22:36:22.0 & -40:17:55.0 &12& 0.509 & $900_{-260}^{+160}$ & 99.9\smallskip\\ \hline EIS2249-3958 & 22:49:32.1 & -39:58:02.9 & 8& 0.710 & $380_{-140}^{+ 50}$ & 99.4\smallskip\\ \hline \end{tabular} \end{table*} \subsection{EISJ0046-2951} \begin{figure*} \begin{center} \resizebox{0.6\textwidth}{!}{\includegraphics{3433_2.ps}} \caption{A 10$\times$10 arcmin cutout centered on the matched-filter position of EISJ0046-2951. The circles mark galaxies with redshifts outside significant groups. The diamonds mark galaxies in the foreground group and the squares those in the background group. North is up, east to the left.} \label{fig:img0046-2951} \end{center} \end{figure*} In this field 71 galaxy redshifts have been measured; they are shown in Fig.~\ref{fig:zdistributions}. We identify 6 groups with at least 3 members at $z\geq0.4$, out of which only two, one at $z=0.614$ and the other at $z=0.671$, have significance $\geq99\%$ and are, therefore, included in Table~\ref{tab:groups}. In order to decide which of these groups is the one most likely associated with the matched-filter detection, we examined both the cone diagrams (Fig.~\ref{fig:zdistributions}) and the projected distribution of the galaxies with redshifts, as shown in Fig.~\ref{fig:img0046-2951}. In the cone diagrams the foreground system is more prominent than the background one, due to its larger extent. From the projected distribution of the galaxies, it is clear that the center of the foreground group deviates significantly from that of the matched-filter algorithm, located at the center of the field. The background group, on the other hand, consists of 10 members, out of which five form a compact system located very close to the center derived by the matched filter, while the other five are more uniformly distributed. Furthermore, from the examination of the image one finds that the five galaxies situated near the center are also among the brightest galaxies in the central region. It should also be kept in mind that the radial part of the matched filter gives a significantly higher weight to galaxies close to the estimated position; thus the galaxies closest to the originally estimated position are the ones contributing the most, even when other systems are found at almost the same redshift and in the same field.
Combined, these arguments suggest that the background group at $z=0.671$ ($\Delta z=z_{spec}-z_{MF}\sim - 0.23$), with an estimated velocity dispersion of $\sigma_v=865~\mathrm{km/s}$, is the one most likely to correspond to the EIS cluster candidate identified by the matched-filter algorithm, with the difference between the estimated and measured redshifts being consistent with the expected errors. Another possible explanation could be that the matched-filter detection at a larger redshift is correct, but that the system galaxies were either not observed or have no measured redshifts. However, careful inspection of the image shows no evidence for any further clustering of faint galaxies in the field. \begin{figure*} \begin{center} \resizebox{0.6\textwidth}{!}{\includegraphics{3433_3.ps}} \caption{A 10$\times$10 arcmin cutout centered on the matched-filter position of EISJ0048-2942. Symbols follow those in Fig.~\protect\ref{fig:img0046-2951}. North is up, east is to the left.} \label{fig:img0048-2942} \end{center} \end{figure*} \subsection{EISJ0048-2942} The distribution of the 77 redshifts measured in this field is presented in Fig.~\ref{fig:zdistributions}. As can be seen, the distribution shows a distinct spike around $z\sim0.6$, in excellent agreement with the value estimated by the matched filter. The gap-technique identifies 3 groups with at least 3 members in this field, out of which two have significance $\geq99\%$ and are therefore included in Table~\ref{tab:groups} - one with seven members at $z=0.402$ and a more prominent background group with 33 members at $z=0.637$. The examination of the redshift distribution and the cone diagrams for this cluster (shown in Fig.~\ref{fig:zdistributions}) leaves little doubt that the background concentration at $z=0.638$ ($\Delta z \sim 0.04$) corresponds to the matched-filter detection, with cluster galaxies being distributed over nearly the entire field. This conclusion is also supported by the image of the field shown in Fig.~\ref{fig:img0048-2942}. The velocity dispersion of this system is estimated to be $\sigma_v=1080~\mathrm{km/s}$, indicating a massive system. Combined, the photometric (color and projected distribution) and spectroscopic results provide strong evidence that we have detected a real galaxy cluster at a redshift in excellent agreement with that estimated by the matched-filter algorithm. \subsection{EISJ0050-2941} \begin{figure*} \begin{center} \resizebox{0.6\textwidth}{!}{\includegraphics{3433_4.ps}} \caption{A 10$\times$10 arcmin cutout centered on the matched-filter position of EISJ0050-2941. Symbols follow those in Fig.~\protect\ref{fig:img0046-2951}. North is up, east to the left.} \label{fig:img0050-2941} \end{center} \end{figure*} In this field 55 redshifts have been measured; they are shown in the last panel in the first row of Fig.~\ref{fig:zdistributions}. The distribution is considerably more complex than in the previous case, with no single dominant peak discernible. The gap-technique identifies 6 groups with at least 3 members in the field, out of which two, with comparable numbers of members, satisfy our significance criterion. Information about these two systems is given in Table~\ref{tab:groups} - a foreground system with 12 members at $z=0.558$ and a more distant background system with 8 members at $z=0.616$. Note that both systems are at considerably smaller redshifts than that estimated by the matched filter, and their projected spatial distribution (see Fig.~\ref{fig:img0050-2941}) is scattered over the entire field.
Taken together, this casts some doubt on the association of these density enhancements in redshift space with the matched-filter detection at $z_{MF}=0.9$. In addition, the colors listed in Table~\ref{tab:cl_targets} point to a more distant concentration. These colors correspond to the redder of the two peaks detected by the color-slicing analysis of \cite{olsen00}. The identification of two color peaks indicates the presence of two superposed systems, of which one is possibly more distant than indicated by the spectroscopic redshifts. In order to investigate this point further, we visually examined the available imaging data (Fig.~\ref{fig:img0050-2941}), finding an apparent clustering of faint galaxies very close to the position of the matched-filter detection; however, only two of these galaxies have a measured redshift (with $z\sim0.39$ and $z\sim0.58$). Still, there are many more faint galaxies in the concentration, and it is conceivable that a more distant system exists behind the two widely scattered foreground systems. It is unclear at the present time whether the matched-filter detection corresponds to the combination of the two spectroscopically identified systems, with the faint galaxies leading to an overestimate of the matched-filter redshift, or whether the original detection is caused by a background concentration without measured redshifts, with the two identified systems lying in the foreground. Clearly, with the available data alone it is not possible to resolve this ambiguity, which must await additional spectroscopic observations in the field. Regardless of the interpretation, the present results strongly suggest the presence of two systems with estimated velocity dispersions of 1375 and 970~km/s for the foreground and background systems, respectively, but with redshift offsets about twice as large as the estimated accuracy. \subsection{EISJ2236-4017} \begin{figure*} \begin{center} \resizebox{0.6\textwidth}{!}{\includegraphics{3433_5.ps}} \caption{A 10$\times$10 arcmin cutout centered on the matched-filter position of EISJ2236-4017. Symbols follow those in Fig.~\protect\ref{fig:img0046-2951}, except that in this case only one significant group is found. North is up, east to the left.} \label{fig:img2236-4017} \end{center} \end{figure*} In this field 28 redshifts were measured; they are shown in the left panel in the second row of Fig.~\ref{fig:zdistributions}. As in the case of EISJ0048-2942, the redshift distribution shows a distinct peak at $z\sim0.5$ as well as suggestions of other enhancements at somewhat larger redshifts ({\it e.g.}, $z\sim 0.65$). In fact, the gap-technique identifies 3 groups with at least 3 members in the field. However, only one satisfies our significance criterion. The properties of this system are given in Table~\ref{tab:groups}. It has 12 members and a redshift of 0.509, which, even though slightly smaller, is consistent with the estimates of both the matched-filter and the color-slicing techniques. A cutout of the cluster region with field and member galaxies marked is shown in Fig.~\ref{fig:img2236-4017}. The system does not appear to be very concentrated but has its member galaxies uniformly distributed, even though the color slicing showed a strong peak at the position of the matched-filter detection \citep{olsen00}. However, for this field not all the masks were observed; in fact, the missing mask was the one most likely to include the cluster's brightest members in the central region.
This not only explains the significantly smaller number of galaxies with measured redshifts, but also suggests that the lack of visible clustering may be due to the poor sampling achieved in this field. As can be seen from the image cutout, the region around the matched-filter position at the center of the image is almost devoid of measured redshifts. It is thus likely that the concentration of galaxies in the center of the field corresponds to the matched-filter detection but does not have any measured redshifts. Regardless of the match with the matched-filter detection, there is evidence for the presence of a galaxy system at $z=0.509$ with a velocity dispersion of 900~km/s. \subsection{EISJ2249-3958} \begin{figure*} \begin{center} \resizebox{0.6\textwidth}{!}{\includegraphics{3433_6.ps}} \caption{A 10$\times$10 arcmin cutout centered on the matched-filter position of EISJ2249-3958. Symbols follow those in Fig.~\protect\ref{fig:img2236-4017}. North is up, east to the left.} \label{fig:img2249-3958} \end{center} \end{figure*} In this field 35 redshifts were measured; they are shown in the last row of Fig.~\ref{fig:zdistributions}. The distribution shows a distinct peak at $z\sim0.7$ as well as other smaller peaks both in the foreground and in the background. Using the gap-technique, we indeed identify five groups with at least 3 members, but only one is significant according to the criteria adopted in this paper ($\sigma \geq99\%$). As listed in Table~\ref{tab:groups}, this cluster has a redshift of $z=0.710$ ($\Delta z \sim -0.19$), somewhat smaller than that estimated by the matched filter, and a velocity dispersion of about 380~km/s, typical of groups. Inspection of the cone diagrams and the $I$-band image (Fig.~\ref{fig:img2249-3958}) shows that seven out of the eight confirmed member galaxies lie along an elongated structure extending only 2~arcmin. The remaining galaxy is positioned along the same axis, but 2~arcmin away from the rest. The concentrated galaxies are the brightest ones found at the matched-filter position, and thus likely to correspond to the original detection. The small value of the velocity dispersion may be due to poor sampling or may, alternatively, indicate that this density enhancement is associated with a filament or a non-virialized cluster rather than with a relaxed system. However, deciding among these various possibilities must await further spectroscopic observations. For the time being, we consider the detected system in redshift space to correspond to the matched-filter detection of the projected distribution, which led to an overestimate of the redshift. \section{Discussion} \label{sec:discussion} The main objective of the present paper has been to extend the earlier work of \cite{benoist02} and present the results of a spectroscopic survey, conducted at the VLT, of the fields of 8 EIS candidate clusters with estimated redshifts $z\geq0.6$. From the above analysis, we identify two statistically significant density enhancements in three of the five fields considered here, and one in each of the two remaining fields. More importantly, the measured redshifts of these systems span the range $0.4~\lesssim~z~\lesssim~0.7$, and nearly all have velocity dispersions typical of rich systems. A less obvious question is whether these detections are associated with the original matched-filter detection.
In general, one would say yes, but in at least one (and probably two) cases it appears that we have detected a foreground system, and the one detected by the matched-filter technique still needs to be confirmed by additional observations of fainter galaxies. Another point that must await further observations is the nature of the systems - namely, whether they form relaxed clusters, or are parts of proto-clusters before infall, or density enhancements associated with filaments and walls. A preliminary effort in answering these questions is presented by J{\o}rgensen et al. (2005, in preparation). Combining the present results with those compiled by \cite{benoist02}, our group has now studied the fields of 8 high-z candidate clusters, with all leading to at least one confirmed system. The properties of all the detected systems are listed in Table~\ref{tab:conf_EIS_cl}. The table gives: in Col.~1 the cluster field name; in Col.~2 the matched-filter estimated redshift, $z_{MF}$, whenever we believe that there is a match between the detection in redshift and projected space; in Col.~3 the spectroscopic redshift, $z_{spec}$, of the systems detected in redshift space; and in Col.~4 the estimated velocity dispersion of the system, $\sigma_v$. For the six systems for which we believe we have identified the counterpart of the matched-filter detection, we find that the differences between the spectroscopic and estimated matched-filter redshifts range from $\Delta z=z_{spec}-z_{MF}=-0.229$ to $\Delta z = 0.208$, with a mean offset of $\Delta z=-0.022$ and a standard deviation of $\sim0.15$, in excellent agreement with what would be expected from the estimated errors of the algorithm. This result is valid all the way to the highest redshifts found in the catalog and thus makes the EIS cluster candidate catalog a good source for drawing high-z clusters for more detailed studies. \begin{table} \caption{Summary of confirmed EIS clusters from this work and from \cite{benoist02}.} \label{tab:conf_EIS_cl} \begin{minipage}{\columnwidth} \begin{center} \begin{tabular}{lrrr} \hline\hline Cluster & $z_{MF}$ & $z_{spec}$ & $\sigma_v \mathrm{(km/s)}$\\ \hline EISJ0046-2930$^*$ & 0.6 & 0.808 & 1170\\ EISJ0046-2951 & & 0.614 & 1400 \\ & 0.9 & 0.671 & 865 \\ EISJ0048-2942 & & 0.402 & 1000 \\ & 0.6 & 0.638 & 1080 \\ EISJ0050-2941 & & 0.559 & 1375 \\ & & 0.617 & 970 \\ EISJ0533-2412$^*$ & 1.3 & 1.301 & $-$\\ EISJ0954-2023$^*$ & & 0.948 & 200\\ & 1.1 & 1.141 & 290\\ EISJ2236-4017 & & 0.509 & 900\\ EISJ2249-3958 & 0.9 & 0.710 & 380 \\ \hline \end{tabular} \end{center} * The spectroscopic confirmations of these systems were reported in \cite{benoist02}. \end{minipage} \end{table} \section{Summary} \label{sec:summary} In this paper we have presented the results of spectroscopic observations conducted with FORS1 at the VLT in the fields of 5 high-z ($z\geq0.6$) cluster candidates identified by applying the matched-filter algorithm to the images from the EIS-WIDE $I$-band survey. The presence of galaxy clusters was supported by a color-slicing analysis targeting the individual detections. We find at least one significant system in all fields, with redshifts in the range $0.40 < z < 0.71$ and with from 8 to 33 confirmed cluster members. All systems, except one, have velocity dispersions $\gtrsim 800~\mathrm{km/s}$, typical of rich clusters.
Despite the intrinsic ambiguity of uniquely associating a significant density enhancement in redshift space with a detection in the projected distribution, the agreement of the matched-filter and spectroscopic redshifts is, on average, excellent, even though the matched filter has a tendency to overestimate them at higher redshifts. The results of this paper, together with others of this series, strongly suggest that nearly all of the EIS candidate clusters identified by applying the matched-filter algorithm to the $I$-band galaxy catalogs are associated with real density enhancements in redshift space, regardless of the redshift domain. We conclude that the EIS Cluster Candidate Catalog is an excellent starting point to build a statistical sample of galaxy clusters at different redshifts for further investigation, complementing in many ways samples based on X-ray selection. \begin{acknowledgements} We would like to thank the referee for many useful comments, which helped improve the manuscript. LFO acknowledges financial support from the Carlsberg Foundation, the Danish Natural Sciences Research Council and the Poincar\'e fellowship program at Observatoire de la C\^ote d'Azur. \end{acknowledgements} \bibliographystyle{../../../aa}
\section*{Introduction} \label{intro} It has long been known that particles flowing at a finite Reynolds number ($Re$) can passively migrate laterally across streamlines and focus at stable equilibrium locations \cite{Segre1961} within confined systems, as a result of nonlinear fluid stresses on the particle \cite{Ho1974}. Recently, this phenomenon has received newfound interest due to its use in the precise manipulation of micron-sized particles in a continuous microflow, coining the term inertial microfluidics. On the basis of inertial microfluidics, researchers have designed many unique devices to isolate \cite{Nathamgari2015}, sort \cite{Sarkar2016}, focus \cite{Gossett2009,Wang2017} and concentrate \cite{Martel2015} particles. By far the most common device designs leverage curvilinear channels to produce a transverse Dean flow, not only affording a compact design but also allowing for exquisite control of particle streams by simply tuning the Dean forces. Dean forces arise from the curvilinear geometry, which introduces a centrifugal acceleration component directed radially outward as the flow navigates through the curved channel. The resulting Dean flow is orthogonal to the streamwise flow direction and is composed of two symmetric counter-rotating vortices known as Dean vortices (Fig. \ref{fig:5-1}a). The effect of these vortices, in combination with inertial forces, serves to perturb the inertial equilibrium locations of a particle into a size-dependent stream, thus allowing for sorting, concentrating and/or isolating certain kinds of particles. The magnitude of this perturbation is set by the strength of these vortices, which is dictated by the Dean number ($De$) \cite{Nivedita2017,Norouzi2013,Dean1927}. Inertial Dean flow focusing has been used with both alternating curves and spirals for various bio-analytic purposes \cite{Wang2017,Dicarlo2007,Martel2013,Ozbey2016,Lee2013,Bhagat2008}. However, modeling the flow in these devices for a specific application is quite challenging, as the full Navier-Stokes equations are needed to solve for the particle dynamics in these complex channels. Often, complete models are too computationally burdensome to be of any practical use in designing these devices \cite{Pedrol2018}. Given the complexity of simulating particle migration, some authors have proposed the use of lattice Boltzmann methods (LBM), as the technique is very computationally efficient \cite{Chun2006,Yuan2018}. However, LBM is prone to instability issues because of the coarse-grained representation of the fluid-boundary interface \cite{Yuan2018}. By far the most common approach has been a point-particle model, where the inertial forces are solved for in a straight channel and the Dean flow effects are added independently by assuming that they amount to a Stokes drag associated with an underlying Dean velocity. This model has been used widely in recent studies, but has generally been limited to small particles and slow flows \cite{Martel2013,Ozbey2016,Zhang2014,Rasooli2018,Martel2013b}. While this approach is quick and has shown some success, the assumption that these two forces can be superposed may not hold under extreme flow regimes. In particular, this model becomes questionable at high $De$, where the Reynolds number based upon the average Dean flow velocity ($Re_D$) approaches unity (Fig. \ref{fig:5-1}b) and inertial corrections to the Stokes drag become necessary.
Furthermore, at higher $De$, there is also a redistribution of the axial flow profile (Fig.~\ref{fig:5-1}c) that can alter the shear-gradient lift forces. Recently, Dinler and coworkers \cite{Dinler2018} have proposed the use of a direct numerical simulation (DNS) model, where the flow problem is solved in a reference frame fixed to a moving sphere, similar to \cite{Dicarlo2009,Dinler2018,Martel2013b,Kim2016}. This method is robust and provides the inertial force distribution over the particle in a section of the channel. It is well suited for fundamental studies \cite{Dicarlo2009}, but not for practical design, because it is computationally inefficient. It is no surprise, then, that Dinler \textit{et al.} applied this model in a curvilinear geometry using coarse parameters and an incomplete description of the momentum equations \cite{Dinler2018}. \\ \indent There is a need for a simple and precise model that can reliably predict the behavior of confined inertial particles across a wide range of flow parameters in a curvilinear geometry. To address this need we first use a numerical model similar to that of Dinler \textit{et al.}~\cite{Dinler2018}, but here we include the Coriolis and centripetal terms in our momentum equations. Based on our numerical observations, we then develop a perturbation-based model to predict the lateral forces acting on a spherical particle migrating in a curved channel. We then validate this model against previously published experiments and compare it to the Stokes drag model proposed in the past, where for the first time we explicitly demonstrate the breakdown of the Stokes model. Finally, we use the perturbation-based model to design a spiral channel and speculate on how this model can be used to design devices in the future. \section*{Numerical Model} In order to understand how particles focus in a curved channel at moderate $Re$, we first define a model system. Our model focuses on the flow of a neutrally buoyant particle of diameter $a$ in a channel of rectangular cross-section $W \times H$ ($W/H = 2$), arc-length $5W$ and average radius $R$ (Fig.~\ref{fig:5-1}a). The particle is translating at a velocity $\textbf{U}_P= -U_p \textbf{e}_\theta = U_p [-\cos\theta \mathbf{e}_x, 0 \mathbf{e}_y, \sin \theta \mathbf{e}_z] $ and rotating at an angular velocity $\boldsymbol{\Omega}$ in a flow of average velocity $U$. We define the channel Reynolds number as $Re = \rho U D_h / \mu$, the relative curvature of the channel as $\delta = D_h/2R$, and the Dean number as $De=Re\sqrt{\delta}$, where $\rho$ and $\mu$ are the fluid density and viscosity, respectively, and $D_h = 2WH/(W+H)$ is the hydraulic diameter of the channel. \begin{figure}[b] \centerline{\includegraphics[width=8.5cm]{Fig5-1.eps}} \caption{\label{fig:5-1}(a) Schematic illustration of the channel considered in this report. The channel is rectangular with cross-section ($W \times H$) and average radius $R$. The spherical particle of diameter $a$ flows within the confines of the bounding walls at a location $\textbf{r}_p$ relative to the origin. A cross-sectional slice of the channel reveals the recirculating flow pattern shown in the red dashed window. (b) A plot of the Reynolds number ($Re_D$) of this recirculating flow versus the Dean number ($De$). For high $De$ the flow has appreciable inertia, as $Re_D$ is $\mathcal{O}(1)$. (c) A plot of the axial flow profile for various $De$.
For low $De$ we observe a symmetric profile similar to flow in a straight channel, but for high $De$ the symmetry vanishes due to increased flow redistribution associated with the Dean flow.} \end{figure} To solve for the flow field and pressure around the particle, it is convenient to consider a rotating frame of reference such that the particle appears stationary. The rotating reference frame is a non-inertial frame of reference and thus the Navier-Stokes equations must adopt a form that takes into account the effects of both centripetal and Coriolis forces. Note that we assume a quasi-steady model to eliminate time dependence from the equations: \begin{equation} \label{eq5-1} \rho\bigg( \mathbf{u} \cdot \nabla \mathbf{u} + 2 \dot{\boldsymbol{\theta}} \times \mathbf{u} + \dot{\boldsymbol{\theta}} \times \dot{\boldsymbol{\theta}} \times \mathbf{r}\bigg) = \mu \nabla^2 \mathbf{u} - \nabla p \end{equation} \begin{equation} \label{eq5-2} \nabla \cdot \textbf{u} = 0 \end{equation} where $p$ is the fluid pressure field, $\textbf{u}$ is the fluid velocity field in the rotating reference frame, $\dot{\boldsymbol{\theta}}$ is the angular velocity of the frame, and $\textbf{r}$ is the position vector of a fluid element about the point of rotation of the frame. The frame velocity $\dot{\boldsymbol{\theta}}$ is related to the particle velocity by $\textbf{U}_p = \textbf{r}_p \times \dot{\boldsymbol{\theta}}$, where $\textbf{r}_p$ is the position vector of the particle center relative to the point of rotation (\textit{i.e.} the origin). The translational and rotational velocities of the suspended particle ($U_P$ and $\boldsymbol{\Omega}$, respectively) can be self-consistently determined by setting conditions such that the axial motion satisfies a drag constraint $F_{\theta} = 0$ and the rotational motion satisfies a torque constraint $\tau_{r} = \tau_{z} = \tau_{\theta} = 0$. The boundary conditions of this problem are specified in the rotating reference frame; therefore, the no-slip condition on the walls is $\textbf{u}_{wall} = -\dot{\boldsymbol{\theta}} \times \textbf{r}$. The no-slip condition on the particle is enforced by assigning a velocity to the surface of the sphere corresponding to that of a rigid-body rotation at angular velocity $\boldsymbol{\Omega}$: $\textbf{u}_{surface} = \boldsymbol{\Omega} \times (\textbf{r}-\textbf{r}_{p})$. Far from the particle the flow is undisturbed and regains the behavior of flow in the absence of a particle. To solve for the unknowns (\textit{i.e.}, $\mathbf{u}$, $p$, $U_{p}$ and $\boldsymbol{\Omega}$) we couple the Navier-Stokes equations to the equations constraining the particle motion (\textit{i.e.} the torque- and force-free equations of motion) and solve numerically using the COMSOL Multiphysics software. This procedure is performed for a lattice of discrete positions of the particle within the symmetric top half of the cross-section of the channel. To calculate the lift force on the particle, we integrate the surface stresses on the particle in the appropriate direction ($y$ or $z$). Note that because $a/R \ll 1$ and the particle is simulated at $\theta = 0$, we can say that $\textbf{e}_r \approx \textbf{e}_{z}$ and $\textbf{e}_{\theta} \approx \textbf{e}_{x}$ for the purposes of integrating the hydrodynamic stresses on the surface of the particle. The numerical model presented in this report investigates the steady-state forces $\textbf{F}_{DNS}$ on a finite-sized particle through direct numerical simulation of the flow field.
\begin{equation} \label{eq5-12} \textbf{F}_{DNS} = \int_s \textbf{n} \cdot \textbf{T} \,ds - m_p \dot{\boldsymbol{\theta}} \times (\dot{\boldsymbol{\theta}} \times \textbf{r}_p) \end{equation} Here the first term on the right-hand side of the equation represents the hydrodynamic forces, where $\textbf{T} = \mu \left( \nabla \textbf{u} + (\nabla \textbf{u})^{T}\right) - p\textbf{I}$ is the total stress tensor of the flow around a particle that is restricted from moving laterally. The second term represents the contribution of the centripetal acceleration on the particle. This numerical model includes finite-size effects of the particle, the redistribution of the axial velocity profile, and the Coriolis and centripetal acceleration terms in the momentum equation. Computational modeling was performed in the COMSOL Multiphysics software (version 5.2a) using a 3D CFD model with $6 \times 10^5$ degrees of freedom. To calculate inertial lift forces we coupled the equations of fluid motion to a set of global differential equations to solve for the translational and rotational velocities of the particle. The Coriolis and centripetal terms in the Navier-Stokes equations were modeled as a body force. The drag on the particle ($F_{\theta} \approx F_x$) was calculated in COMSOL by integrating the total stress over the surface of the particle in the axial direction ($\textbf{e}_\theta \approx \textbf{e}_x$). Similarly the torque on the particle ($\boldsymbol{\tau}$) was calculated by integrating the differential torque ($d\boldsymbol{\tau} = (\textbf{r}-\textbf{r}_p) \times \textbf{n} \cdot \textbf{T} \,ds$) on the surface of the particle. A mesh sensitivity analysis was conducted to show that the calculated lift forces were independent of mesh density to within 1\% error. \section*{Numerical Results} \begin{figure} \centerline{\includegraphics[width=8cm]{Fig5-2.eps}} \caption{\label{fig:5-2}(a) Schematic illustration of a curved channel depicting the region of interest (red dashed box). (b) The cross-section plots show the simulated resultant force $\textbf{F}_{DNS}$ on the particle for multiple channel geometries $(\delta = D_h/2R)$ at $Re = 100$. The gray lines are streamlines of the force field and are shown for visualization purposes. The red square denotes the location of the long-face equilibrium (LFE); the blue circle and green diamond denote the short-face equilibria (SFE). Note that only the top half of the channel is shown due to symmetry. (c) Stable equilibrium location for both LFE and SFE as a function of the relative channel curvature $\delta$ for the results of this numerical model and experiments done by Martel \textit{et al.}, 2013 \cite{Martel2013}; only the inner SFE is shown for clarity.} \end{figure} Fig. \ref{fig:5-2}a shows a schematic illustration of the top half of the channel cross-section over which we simulate a particle spanning the parameters $Re = 10$ to $100$ and $\delta = 0$ to $0.05$. Fig. \ref{fig:5-2}b shows the force field $\textbf{F}_{DNS}$ for a subset of the simulation space, specifically an intermediate-sized particle ($a/D_h = 0.150$) at $Re = 100$. Under these conditions, and without loss of generality, we observe that the force fields are progressively perturbed for increasing channel curvature (\textit{i.e.} $\delta$) at a constant flow rate ($Re=100$) (Fig. \ref{fig:5-2}b).
Further, in a straight channel (\textit{i.e.} $\delta = 0$) we see four stable equilibrium locations, where the equilibrium along the long faces (LFE) attracts more streamlines than the equilibrium along the short faces (SFE). The phenomenon of a relatively more stable LFE has been observed experimentally and numerically for a rectangular channel under similar conditions \cite{Liu2015}. As the channel curvature increases, the location of the LFE (red square) shifts towards the inner wall. The LFE eventually merges with the inner SFE (blue circle) at sufficiently high channel curvature ($\delta = 0.005$). After this point the SFE/LFE begins a retrograde motion towards the outer wall (Fig. \ref{fig:5-2}c). Interestingly, after the SFE/LFE switches direction, the equilibrium destabilizes. At this point the particle is not focused at a single point, but rather orbits in-plane (Fig. \ref{fig:5-2}b, $\delta = 0.01$). These results are compared to the experimentally obtained values of the LFE and show excellent agreement \cite{Martel2013}. Note that there is also an SFE that corresponds to the outer wall; however, it is not a stable equilibrium location after $\delta = 0.005$ and has been neglected for the clarity of this discussion. The non-monotonic shift in LFE at a fixed $Re$ for varying $\delta$ is caused by the presence of the Dean flow within the channel \cite{Martel2013}. Initially, for low $\delta$ the LFE is at a vertical location where locally the Dean flow is directed towards the inner wall. The strength of this Dean flow increases with the curvature of the channel (Fig. \ref{fig:5-1}b) and thus the LFE shifts towards the inner wall with increasing $\delta$. As the LFE shifts towards the inner wall, the Dean flow in that region begins to impart a vertical force that is directed in the negative $y$-direction (Fig. \ref{fig:5-1}a). This causes the LFE to move towards the inner SFE and eventually merge. Finally, the merged LFE/SFE migrates towards the outer wall (locally the direction of the Dean flow) at sufficiently high $De$. This transition occurs because the shear gradient across the width of the channel on the inner half of the channel is insufficient to counter the increasing Dean flow forces, thereby shifting the location of the LFE/SFE towards the center-line \cite{Martel2013}. \section*{Second Order Model (SOM)} \begin{figure*}[h] \centerline{\includegraphics[width=15cm]{Fig5-3.eps}} \caption{\label{fig:5-3} Stable equilibrium location as a function of the relative channel curvature $\delta$ for three distinct $Re$ and $a/D_h = 0.150$. The square markers represent results from direct numerical simulations (DNS) and the solid lines represent the results from the second-order perturbation model (SOM). The shaded region at $Re=100$ represents the orbit focusing limits.} \end{figure*} The process of solving for the inertial forces with the numerical model proposed in the previous section is computationally intensive and thus difficult to apply as an optimization and/or design tool. Therefore, we developed a second-order model that can produce quantitative results with significantly less computational power, such that we can use the model to design systems for particular applications. This model follows the work of Dean \cite{Dean1927}, and is based on the observation that the inertial forces are increasingly perturbed for increasing channel curvature (Fig. \ref{fig:5-2}).
Dean's seminal study laid the framework for describing flow in a curved pipe with perturbation-based analytic solutions, using the curvature ratio as the perturbation parameter. Following this work, we propose a similar model, which assumes instead that the forces on a particle (and not the flow) in a curved geometry can be thought of as a perturbation series. Like Dean's model, the leading term in this power series is the solution of the straight-channel problem, while further terms describe the deviation in the solution due to increased curvature $\delta$. We first consider a perturbation of the lateral lift forces $\textbf{F}_{DNS}$ about $\delta = 0$, \textit{i.e.} the straight-channel case. \begin{equation} \label{eq5-14} \textbf{F}_{DNS} = \textbf{F}_0 + \delta \textbf{F}_1 + \delta^2 \textbf{F}_2 + {\mathcal{O}}(\delta^3) \end{equation} Here $\textbf{F}_{0} \equiv \textbf{F}_{DNS} \big|_{\delta = 0}$ is the full-physics lift force calculated for a particle at a given $Re$ in a straight channel (\textit{i.e.}, $\delta = 0$), while $\textbf{F}_{1}$ and $\textbf{F}_{2}$ represent the effects of channel curvature on the lateral forces experienced by a particle. We speculate that for sufficiently small $\delta$, $\textbf{F}_{1}$ and $\textbf{F}_{2}$ in Eq. \ref{eq5-14} are the only terms required to model the lateral forces and thus we can neglect any higher-order terms. Here the objective is to obtain quantitatively precise force values with minimal computational requirements, and thus we truncate the infinite series after only three terms. In this work, we do not try to analytically identify the form of the functions $\textbf{F}_{1}$ and $\textbf{F}_{2}$, but explore how they can be constructed using a minimal set of full-physics simulations. We show below that $\textbf{F}_{1}$ and $\textbf{F}_{2}$ (and hence, $\textbf{F}_{DNS}$) can be reliably constructed using just three full-physics simulations; to do so we solve for these perturbation functions by rewriting Eq.~\ref{eq5-14} for a fixed $Re$ and $a/D_h$. This approach was demonstrated in our previous work and showed excellent agreement with both experimental and numerical results \cite{Garcia2018}. \begin{equation} \label{eq5-17} \textbf{F}_{1} = \frac{\big(\delta_1^2\textbf{F}_0 - \delta_2^2\textbf{F}_0 - \delta_1^2\textbf{F}_{DNS}\big|_{\delta = \delta_2}+\delta_2^2\textbf{F}_{DNS}\big|_{\delta = \delta_1}\big)}{\delta_1(\delta_2^2-\delta_1\delta_2)} \end{equation} \begin{equation} \label{eq5-18} \textbf{F}_{2} = -\frac{\big(\delta_1\textbf{F}_0 - \delta_2\textbf{F}_0 -\delta_1\textbf{F}_{DNS}\big|_{\delta = \delta_2} + \delta_2\textbf{F}_{DNS}\big|_{\delta = \delta_1}\big)}{\delta_1(\delta_2^2-\delta_1\delta_2)} \end{equation} Here $\textbf{F}_{DNS}\big|_{\delta = \delta_1}$ and $\textbf{F}_{DNS}\big|_{\delta = \delta_2}$ are the full-physics simulation results for flow at the same $Re$ in two distinct channels of curvature ratio $\delta = \delta_1$ and $\delta = \delta_2$, respectively. To demonstrate the utility of such a model, we calculate $\textbf{F}_1$ and $\textbf{F}_2$ using Eq. \ref{eq5-17} and Eq. \ref{eq5-18} with only three DNS ($\delta = 0, 0.02, 0.05$) at a fixed $Re$ and $a/D_h$. Fig. \ref{fig:5-3} shows the results of this model, where we show the predicted equilibrium location as a function of $\delta$ and compare to the discrete DNS results for three distinct flow regimes ($Re$). Here we use the equilibrium location as a concise representation of the more complex force maps.
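To make the construction of Eqs.~\ref{eq5-17} and \ref{eq5-18} concrete, the following sketch shows how the perturbation fields and the reconstructed force of Eq.~\ref{eq5-14} might be assembled from three stored DNS force maps. The array shapes, the random stand-in data and the particular values of $\delta_1$ and $\delta_2$ are illustrative assumptions; in practice the inputs would be the lateral-force lattices exported from the full-physics simulations.

\begin{verbatim}
import numpy as np

def som_coefficients(F0, F_d1, F_d2, d1, d2):
    """Solve Eqs. (5-17)/(5-18): fit F = F0 + d*F1 + d^2*F2 through
    DNS force maps computed at delta = 0, d1 and d2."""
    denom = d1 * (d2**2 - d1 * d2)            # = d1*d2*(d2 - d1)
    F1 = (d1**2 * F0 - d2**2 * F0
          - d1**2 * F_d2 + d2**2 * F_d1) / denom
    F2 = -(d1 * F0 - d2 * F0 - d1 * F_d2 + d2 * F_d1) / denom
    return F1, F2

def som_force(F0, F1, F2, delta):
    """Evaluate the truncated series of Eq. (5-14) at any curvature."""
    return F0 + delta * F1 + delta**2 * F2

# Illustrative usage with random stand-ins for the (y, z) force maps;
# the real inputs come from the three full-physics simulations.
rng = np.random.default_rng(0)
F0, F_d1, F_d2 = rng.normal(size=(3, 20, 10))
F1, F2 = som_coefficients(F0, F_d1, F_d2, d1=0.02, d2=0.05)
F_new = som_force(F0, F1, F2, delta=0.035)   # map at an unsimulated delta
\end{verbatim}

Because the fit is performed independently at every lattice point, evaluating the model at a new $\delta$ costs essentially nothing compared to a single DNS.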
From this figure, it is apparent that the SOM reconstructs the lateral lift forces well, with little discernible error; the advantage of the SOM is that it only requires knowledge of three full simulations as opposed to the nine DNS shown in the figure. Moreover, the model is not limited to discrete values of $\delta$, as it can predict the particle behavior at any combination of $\delta$ and $Re$ provided that the basis simulations are known. The second-order model is so precise, in fact, that it even predicts the orbit focusing for $Re =100$, $\delta =0.01$ that was observed previously in Fig. \ref{fig:5-2}b; it does so with no knowledge of the flow at that curvature, as $\delta = 0.02$ and $\delta = 0.05$ were used to solve for the model parameters (Fig. \ref{fig:5-3}, $Re=100$). \section*{Discussion} \begin{figure*} \centerline{\includegraphics[width=15cm]{Fig5-4.eps}} \caption{\label{fig:5-4} Stable equilibrium location as a function of the relative channel curvature $\delta$ for three distinct particle sizes ($a/D_h$) at $Re = 100$. The square markers indicate the predictions from a simple Stokes drag model (Stokes). The solid lines are the predictions from the second-order model (SOM). The stars represent the experimental results from Martel \textit{et al}. \cite{Martel2013}.} \end{figure*} As mentioned in the introduction, the Stokes model has been proposed in previous studies as a quick and reliable method for modeling the lateral forces on a particle in a curvilinear channel. However, the effectiveness of such a model has yet to be demonstrated, particularly across a wide range of particle sizes. Here we compare the results of our SOM with the simple Stokes model and the experimental results of Martel \textit{et al.} \cite{Martel2013} to determine under what conditions either model is valid. As a reminder, the Stokes model adds the inertial lift forces ($\textbf{F}_0$), derived for a straight channel, to a force caused by the local Dean flow velocity ($\textbf{U}_{Dean}$) in the channel. Here $\textbf{U}_{Dean}$ is the lateral flow field in a curved channel with no particle at discrete values of $\delta$ and $Re$. It is important to note that this approach is also computationally inefficient, as it requires knowledge of the underlying flow field, which in general can be spatially varying. \begin{equation} \label{eq5-19} \textbf{F}_{Stokes}= \textbf{F}_0 + 3\pi \mu a \textbf{U}_{Dean} \end{equation} Here the centripetal force term has not been included in this Stokes model, nor in previous work \cite{Martel2013,Ozbey2016,Zhang2014,Rasooli2018,Martel2013b}. Serendipitously, it can be shown that for a small, neutrally buoyant particle the pressure-gradient term associated with the undisturbed flow imparts a force that exactly cancels the centripetal force \cite{Maxey1998,Lim2003}. While this Stokes model has been proposed as a simple tool and used heavily in the literature, it is not obvious that it should provide meaningful results for flows with large particles and at high $Re$. Fig. \ref{fig:5-4} shows a comparison of the predicted focusing location for the two models discussed in this article with the experimental results of Martel \textit{et al.} \cite{Martel2013}. The SOM agrees well with all experimental results. Using the SOM we can precisely replicate the experimental results to see that, in general, a small addition of curvature causes the focusing location of a particle to shift towards the inner wall.
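For concreteness, the Stokes-model prediction compared in Fig.~\ref{fig:5-4} amounts to evaluating something like the following sketch, in which the straight-channel lift map and the particle-free Dean velocity field are assumed to be precomputed on the same $(y,z)$ lattice; the argument names are illustrative.

\begin{verbatim}
import numpy as np

def stokes_model_force(F0, U_dean, mu, a):
    """Eq. (5-19): superpose the straight-channel inertial lift F0
    with a Stokes drag from the particle-free Dean velocity field."""
    return F0 + 3.0 * np.pi * mu * a * U_dean
\end{verbatim}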
Interestingly, for the smallest particles ($a/D_h = 0.066$ and $a/D_h = 0.150$), at a sufficiently high curvature, we observe that the particles can be entrained in an orbit rather than having a single focusing location, a result that is confirmed by the experiments of Martel \textit{et al.} \cite{Martel2013}. As expected, for small particles the Stokes model and SOM agree well, but for larger particles and at higher $\delta$, the predicted focusing locations begin to diverge. This discrepancy can be attributed to two factors: 1) the redistribution of the axial flow profile at high Dean number (Fig. \ref{fig:5-1}c, $De= Re\sqrt{\delta}$) and 2) finite-size effects, which are not considered by the point-particle assumption inherent in the Stokes model. Our findings resolve confusion about the size dependence of inertial lift forces combined with Dean flow experienced by particles traveling through curved microchannels. Many studies have assumed that this behavior can be represented by a simple Stokes model. However, by numerically dissecting the equations of fluid flow around the particle, we find that this assumption does not hold for larger particles. This result is of particular significance in many biological applications where the particles of interest are cells, which are often large compared to the size of the confining channel. Finally, to demonstrate one potential application of the SOM, we consider the focusing of particles in a ``spiral channel''. The spiral channel is a geometry that is ubiquitous in inertial microfluidics. This geometry has been utilized in numerous studies to manipulate particles \cite{Nivedita2017,Lee2013,Bhagat2008,Martel2013,Martel2012}. However, modeling the focusing behavior of particles in this type of channel is typically quite challenging. The challenge is due to the fact that the channel does not have a single radius of curvature, but rather a radius of curvature that evolves with the streamwise direction. Modeling the trajectories of particles in this type of channel using the techniques outlined in the introduction of this article would not be practical. The full 3D geometry has a very large aspect ratio and thus the computational time and memory requirements would be extensive. However, the SOM is well suited for this problem because it predicts the local force values using only $\delta$ as the input parameter (for a given flow), thus providing precise force predictions without knowledge of the flow field everywhere in the channel or lengthy computations. Here we consider an Archimedean spiral (Fig. \ref{fig:5-5}a) with a similar cross-section as the previous section (\textit{i.e.} $W/H = 2$) that has a radius of curvature parameterized by $R = R_0 + b\theta$, where $R$ is the local channel radius, $R_0$ is the channel radius at the inlet ($\theta = 0$), and $b$ is a parameter that controls the spacing between successive turns of the spiral. To determine the lateral forces on a given particle, we first use the expression for $R$ to derive an expression for the relative channel curvature everywhere in the channel: \begin{equation} \label{eq5-21} \delta = \frac{D_h}{2(R_0 + b\theta)} \end{equation} From Eq. \ref{eq5-21} it is apparent that the curvature can vary significantly over the length of the channel. In Fig. \ref{fig:5-5}b we show this variation in a polar coordinate representation from the inlet to the outlet of this spiral channel. Using this knowledge, we can then compute the lateral forces anywhere in the channel using Eq.~\ref{eq5-21} together with the SOM (Eq.~\ref{eq5-14}).
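Anticipating the first-order time stepping presented below, the sketch that follows marches a particle down the spiral using Eq.~\ref{eq5-21} for the local curvature. The geometric parameters and the interpolants \texttt{u\_theta}, \texttt{F\_y} and \texttt{F\_z} are placeholders; in our workflow the lateral forces come from the SOM.

\begin{verbatim}
import numpy as np

def delta_of_theta(theta, D_h, R0, b):
    """Relative curvature along the spiral, Eq. (5-21)."""
    return D_h / (2.0 * (R0 + b * theta))

def track_particle(theta_end, dt, y0, z0, D_h, R0, b, mu, a,
                   u_theta, F_y, F_z):
    """Explicit-Euler update of (theta, y, z); u_theta(y, z) is the
    streamwise velocity and F_y, F_z(y, z, delta) are the SOM lateral
    forces balanced against a Stokes drag 3*pi*mu*a."""
    theta, y, z, path = 0.0, y0, z0, []
    drag = 3.0 * np.pi * mu * a
    while theta < theta_end:
        R = R0 + b * theta
        d = delta_of_theta(theta, D_h, R0, b)
        theta += u_theta(y, z) / R * dt
        y += F_y(y, z, d) / drag * dt
        z += F_z(y, z, d) / drag * dt
        path.append((theta, y, z))
    return np.array(path)
\end{verbatim}

Since only $\delta(\theta)$ enters the force evaluation, sweeping over candidate spiral designs reduces to re-running this inexpensive loop.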
The trajectories of a given particle are then calculated using a first-order time-stepping approximation: \begin{equation} \theta_{n+1} = \theta_{n} + \frac{u_{\theta}(y_{n}, z_{n})}{R}\Delta t \end{equation} \begin{equation} y_{n+1} = y_{n} + \frac{F_{y}(y_{n},z_n)}{3\pi \mu a} \Delta t \end{equation} \begin{equation} z_{n+1} = z_{n} + \frac{F_{z}(y_{n},z_n)}{3\pi \mu a} \Delta t \end{equation} \begin{figure} \centerline{\includegraphics[width=8cm]{Fig5-5.eps}} \caption{\label{fig:5-5} (a) Schematic illustration of the spiral channel considered in this report. (b) The relative curvature of this spiral channel decreases in the stream-wise direction as $\delta \sim 1/\theta$; at the inlet $\delta = 0.392$ and at the outlet $\delta = 0.136$. (c) The cross-sectional trajectories of the three particles in this spiral channel at $Re=100$. The particles are seeded at a common reference and their outlet locations are indicated by the square markers. (d) A projection of the particle trajectories in (c) onto the stream-wise plane from inlet ($\theta = 0$) to outlet ($\theta = 7 \pi$).} \end{figure} Here $u_\theta$ is the streamwise flow field, and $F_y$ and $F_z$ are the predicted forces in the lateral directions calculated using the SOM. Fig. \ref{fig:5-5}c and \ref{fig:5-5}d show the trajectories calculated for three distinct particle sizes ($a/D_h = 0.066, 0.150, 0.225$) under the same flow conditions ($Re=100$). In Fig.~\ref{fig:5-5}c and \ref{fig:5-5}d we seed the three particles at a common location as a basis for comparison ($z/W = 0, y/H = 0.1$). Interestingly, we see that the particles never reach an equilibrium, but rather are constantly migrating (Fig. \ref{fig:5-5}d). This result is rationalized by considering that the curvature is never constant and thus the forces on the particles are perpetually evolving. These results agree well with experimental findings, where the focused particle streaks in a similar spiral channel were seen to continuously migrate \cite{Martel2012}. Furthermore, we note that the trajectory is highly oscillatory for smaller particles ($a/D_h = 0.066$), but the oscillations dampen towards the outlet, suggesting that smaller particles in this particular geometry may take a considerable channel length to actually focus. Another intriguing observation is that under this configuration we observe quite significant separation of the focused particle streams, suggesting that this may be a viable channel design for separation purposes. It is clear that there is tremendous value in predicting the lateral forces in arbitrary geometries such as the spiral channel presented here. One could imagine easily iterating over thousands of channels to obtain the optimal design for separating particles in minutes. The SOM presented here is not limited to spiral channels, but can easily be adapted to any channel where the local channel curvature can be parameterized, such as a serpentine channel. Furthermore, our SOM can be used to better understand the complex focusing dynamics observed in many previous studies \cite{Martel2013,Martel2012}. \section*{Conclusion} There is a clear need for a simple yet precise model of the forces behind the motion of particles in moderate-Reynolds-number flows within curved channels. This work is a first attempt to precisely model the equations of fluid motion to determine the effect of channel curvature on the behavior of inertial particles.
Using the full numerical model we observed that the particle equilibrium locations are highly dependent on the magnitude of the underlying Dean flow. Based on this full model we have developed a second-order model that provides a simple yet precise representation of these forces with minimal computational burden. We have demonstrated that this second-order model is both more precise and more versatile than the commonly referenced Stokes model. Future work will address the ill-posed inverse problem for which there is no tractable solution, \textit{i.e.} can a channel be designed given a desired particle focusing location? Continued development and investigation of this model can help answer this question and make these results more accessible to researchers with no knowledge of inertial microfluidics. \section*{Acknowledgements} The authors would like to thank Professor Paolo Luzzato-Fegiz for insightful discussions regarding rotating reference frames. M.G. was funded by XXX.
\section{Introduction} {\bf (a) Relevance of quantum criticality in the cuprates} The unusually high superconducting transition temperature of the hole-doped cuprates~\citep{bednorz_muller} remains an unsolved puzzle, despite more than two decades of intense theoretical and experimental research. Pairing, with its $d$-wave symmetry, short coherence length, and a $T_c$ too high to be accounted for by BCS theory~\citep{bcs}, is not the only unconventional property of these materials. Their phase diagram, shown in Fig.~\ref{fig:crossover-phase-diagram-QCP}, is a landscape of exotic states of matter. Undoped cuprates are Mott insulators with antiferromagnetic long-range order~\citep{neel_order}. Antiferromagnetism collapses upon small doping and is replaced by a pseudogap state characterized by a suppression of spectral weight along the antinodal direction. Further doping turns the system into a conventional Fermi-liquid metal. Between the Fermi-liquid and the pseudogap region lies a strange-metal phase with linear-$T$ resistivity. The superconducting dome emerges in the crossover between the pseudogap and the Fermi-liquid regions at lower temperatures. Strong electronic correlations are the cause of the rich phase diagram of cuprate superconductors~\citep{phillips_rmp}. The same strong correlations render traditional theoretical approaches, such as perturbation theory and Fermi-liquid theory, inapplicable. Some recent conceptual progress has been achieved by associating the optimal $T_c$ with a quantum critical point (QCP), lying underneath the superconducting dome and connecting the pseudogap and the Fermi-liquid regions~\citep{broun_criticality,sachdev_qcp}. Unlike a classical critical point, a QCP affects the behavior of the system in a wide range of temperatures and might explain the emergence of a linear-$T$ resistivity up to room temperature. Experimental evidence for a QCP comes from transport~\citep{dirk,r_daou_09,f_Balakirev_09} and thermodynamic measurements \citep{tallon}. Angle-resolved photoemission spectroscopy (ARPES) \citep{shen_nodal_quasiparticles,plate_overdoped_fermi_surface} and quantum oscillation measurements \citep{DoironLeyraud_quantum_oscillations} show that in the pseudogap region, the Fermi surface consists of small pockets which have a different topology than the large Fermi surface present in the Fermi liquid. It is reasonable to assume that those two states are orthogonal to one another and are connected through a transition. Additional evidence in support of quantum criticality comes from measurements of the Kerr signal in YBCO by Jing Xia {\em et al.}~\citep{Xia_KerrYBCO}. They find that at the pseudogap crossover temperature $T^*$, a non-zero Kerr signal develops sharply and persists even inside the superconducting dome. This is consistent with earlier neutron scattering measurements by Fauqu\'e {\em et al.}~\citep{Fauque_neutrons}, which show the development of magnetic order in the pseudogap phase. In this manuscript we review numerical evidence of quantum criticality in the Hubbard model, the de-facto model for the cuprates, that appeared in earlier publications. In those cited works, the Hubbard model is solved using the dynamical cluster approximation (DCA) in conjunction with several quantum Monte Carlo (QMC) cluster solvers. In all calculations relevant for the phase diagram we neglect the superconducting transition.
The interplay between the QCP and superconductivity will be discussed in a future publication~\citep{Yang10}. In this review we focus on thermodynamic quantities, such as the entropy and the chemical potential, and also on single-particle quantities, such as the spectral weight and the quasiparticle weight. The thermodynamic properties give unbiased evidence of quantum criticality, whereas single-particle properties may be used to gain more detailed insight into the ground state. Both sets of quantities rely on the evaluation of the self-energy, which can be calculated using quantum cluster methods. At a critical interaction-dependent filling, we find that the entropy exhibits a maximum, the quasiparticle weight displays a crossover from Fermi-liquid to pseudogap behavior, and the spectral function shows a wide saddle-point region crossing the chemical potential. This is consistent with the presence of a QCP, since the lack of an energy scale results in an enhanced entropy at low temperatures. We also find that by tuning an appropriate control parameter, the next-nearest-neighbor hopping $t^\prime$, the QCP becomes a classical critical point associated with a phase separation transition. We present our findings in two sections. In section \ref{section:fermiliquid_pseudogap}, we discuss the single-particle spectra and the thermodynamic properties of the $t^\prime=0$ Hubbard model. In section \ref{section:phase_separation}, we discuss the phase separation in the $t^\prime>0$ Hubbard model. \begin{figure}[t] \parbox[h]{6.5cm}{ \includegraphics[width=0.48\textwidth]{phase_diagram_cuprates_withQCP}} \parbox[h]{0.48\textwidth}{ \caption{The phase diagram of the cuprates. As a function of temperature and doping, the cuprates display antiferromagnetic order at low doping, a non-Fermi-liquid pseudogap region at intermediate doping and a metallic region at higher doping. Around optimal doping, superconductivity develops, and above the superconducting dome, a strange metal with non-Fermi-liquid properties appears. $T^*$ separates the pseudogap from the marginal Fermi-liquid phase. $T_X$ is the crossover temperature between the Fermi-liquid and the marginal Fermi-liquid regions. A quantum critical point hides underneath the superconducting dome near optimal hole doping.} \label{fig:crossover-phase-diagram-QCP}} \end{figure} {\bf (b) Hubbard Model} Shortly after the discovery of high-$T_c$ superconductors, Anderson~\citep{anderson} suggested that the Hubbard model captures the basic properties of the high-temperature superconductors, and Zhang and Rice~\citep{ZhangRice} demonstrated that only a single band is needed. The single-band Hubbard model is represented by the Hamiltonian: \begin{equation} H=-t \sum_{\left<i,j\right>,\sigma} \left[ c_{i\sigma}^\dagger c_{j\sigma}+\text{H.c.}\right]+ U \sum_i {n_{i\downarrow}n_{i\uparrow}}, \label{eq:hubbard_model} \end{equation} where $c_{i\sigma}^\dagger$ ($c_{i\sigma}$) is the creation (annihilation) operator of an electron at site $i$ and spin $\sigma$, $n_{i\sigma}$ is the corresponding number operator, $t$ is the hopping parameter between nearest-neighbor sites, and $U$ the on-site Coulomb repulsion. Despite its apparent simplicity, the Hubbard model is notoriously difficult to solve. No analytical solutions exist except in one dimension~\citep{lieb_oned,frahm_oned,kawakami_oned}. However, tremendous theoretical and computational efforts have resulted in approximation schemes that provide access to the physics of this model in higher dimensions.
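As a minimal illustration of the operator content of Eq.~\ref{eq:hubbard_model}, the sketch below builds and exactly diagonalizes a two-site Hubbard dimer in the half-filled, $S_z=0$ sector, using a Jordan-Wigner sign convention and the interaction strength $U=6t$ employed in the calculations reviewed here. This toy problem is of course far from the thermodynamic limit that the cluster methods below target, but its ground-state energy can be checked against the well-known closed form.

\begin{verbatim}
import numpy as np
from itertools import product

# Modes ordered (site0_up, site1_up, site0_dn, site1_dn);
# Jordan-Wigner signs count occupied modes to the left.
def apply(op, state, mode):
    """Apply c (op='a') or c^dagger (op='c') to an occupation tuple."""
    occ = state[mode]
    if (op == 'c' and occ == 1) or (op == 'a' and occ == 0):
        return None                       # state is annihilated
    sign = (-1) ** sum(state[:mode])
    new = list(state)
    new[mode] = 1 - occ
    return sign, tuple(new)

t, U = 1.0, 6.0                           # U = 6t
# Half filling, Sz = 0: one up and one down electron on two sites.
basis = [s for s in product((0, 1), repeat=4)
         if s[0] + s[1] == 1 and s[2] + s[3] == 1]
index = {s: i for i, s in enumerate(basis)}
H = np.zeros((len(basis), len(basis)))

for s in basis:
    i = index[s]
    H[i, i] += U * (s[0] * s[2] + s[1] * s[3])        # on-site repulsion
    for src, dst in [(0, 1), (1, 0), (2, 3), (3, 2)]:  # -t c^dag_dst c_src
        r1 = apply('a', s, src)
        if r1 is None:
            continue
        sgn1, s1 = r1
        r2 = apply('c', s1, dst)
        if r2 is None:
            continue
        sgn2, s2 = r2
        H[index[s2], i] += -t * sgn1 * sgn2

E0 = np.linalg.eigvalsh(H)[0]
print(E0, (U - np.sqrt(U**2 + 16 * t**2)) / 2)        # both ~ -0.6056
\end{verbatim}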
In this manuscript we also discuss results for the generalized Hubbard model, which includes hopping between next-nearest neighbors with amplitude $t'$: \begin{equation} H=-t \sum_{\left<i,j\right>,\sigma}\left[ c_{i\sigma}^\dagger c_{j\sigma}+\text{H.c.}\right] -t' \sum_{\left<\left< i,l \right> \right>,\sigma} \left[ c_{i\sigma}^\dagger c_{l\sigma}+\text{H.c.}\right] +U \sum_i {n_{i\downarrow}n_{i\uparrow}}. \label{eq:gen_hubbard} \end{equation} Important progress in our understanding of strongly correlated models has been achieved by the development of finite-size methods, including exact diagonalization and QMC. The latter works well in the simulation of bosonic systems, where creation and annihilation operators commute. However, due to the minus sign problem associated with the anticommutation relations of fermionic operators, QMC is limited to small lattice sizes and consequently gives questionable predictions for correlated electronic systems in the thermodynamic limit. Another successful approach is the dynamical mean-field approximation (DMFA), which treats the local dynamical correlations explicitly and non-local (inter-site) correlations in a mean-field approximation~\citep{georges_dmft,metzner_vollhardt,e_mullerhartmann_89a,e_mullerhartmann_89b}. This technique becomes exact in the limit of infinite dimensions~\citep{georges_infinite_D,jarrell_infinite_D}. However, when applied to finite dimensions, the DMFA fails to describe the renormalization effects due to momentum-dependent modes and the transitions to phases with non-local order parameters. Thus, the DMFA misses physical phenomena that are abundant in strongly correlated systems, such as the development of spin or charge density wave phases, localization in the presence of disorder, spin-liquid physics, unconventional superconductivity, etc. The limitations of the DMFA are addressed by cluster mean-field theories. Those fall into two categories~\citep{QCT}: the cluster dynamical mean-field theory (CDMFT)~\citep{kotliar_cdmft}, which is formulated in real space, and the DCA~\citep{hettler98}, which is formulated in momentum space. In both cases the system is viewed as a cluster embedded in an effective medium. The formal difference between DCA and CDMFT is that, in real space, the DCA cluster satisfies periodic boundary conditions whereas the CDMFT cluster is open. The two methods should give the same results for large enough clusters. In this work we present DCA~\citep{hettler98,hettler00} results. The DCA treats short-ranged correlations explicitly, while longer-ranged ones are approximated by the mean field. By increasing the cluster size, the length scale of the explicitly treated correlations can be gradually increased while the calculation remains in the thermodynamic limit. In momentum space, the DCA can easily be conceptualized as the approximation in which the self-energy is calculated from the coarse-grained Green function. Quantum Monte Carlo based solvers such as Hirsch-Fye (HFQMC)~\citep{hirsch_fye}, continuous-time (CTQMC)~\citep{rubtsov_ctqmc} and determinantal quantum Monte Carlo (DQMC)~\citep{bss} are used to solve the cluster problem. QMC methods are often formulated in imaginary time, and an analytic continuation to real frequencies is necessary to evaluate physical quantities. Fortunately, powerful techniques such as the maximum entropy method (MEM)~\citep{MEM,mem2} are able to successfully select the most likely solution.
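To illustrate the coarse-graining step mentioned above, the following sketch evaluates the coarse-grained Green function $\bar{G}({\bf K},i\omega_0)$ of the two-dimensional square lattice on an $N_c=4$ (\textit{i.e.} $2\times2$) cluster. For simplicity the cluster self-energy is set to zero; in an actual DCA iteration $\Sigma({\bf K},i\omega_n)$ would be supplied by the cluster solver, and the coarse-grained Green function would in turn define the effective medium.

\begin{verbatim}
import numpy as np

t_hop, mu, T = 1.0, 0.0, 0.1
Nc_lin, N_lin = 2, 64                 # cluster / lattice linear sizes
w0 = 1j * np.pi * T                   # lowest fermionic Matsubara frequency

def eps(kx, ky):
    """Square-lattice tight-binding dispersion."""
    return -2.0 * t_hop * (np.cos(kx) + np.cos(ky))

# Cluster momenta K and the k-tilde grid spanning each DCA patch.
K_vals = 2.0 * np.pi * np.arange(Nc_lin) / Nc_lin
kt = -np.pi / Nc_lin + 2.0 * np.pi * np.arange(N_lin // Nc_lin) / N_lin

G_bar = {}
for Kx in K_vals:
    for Ky in K_vals:
        kx = Kx + kt[:, None]         # all lattice momenta in the patch
        ky = Ky + kt[None, :]
        Sigma_K = 0.0                 # cluster self-energy would enter here
        G_bar[(Kx, Ky)] = np.mean(1.0 / (w0 + mu - eps(kx, ky) - Sigma_K))

for K, G in G_bar.items():
    print(np.round(K, 3), np.round(G, 4))
\end{verbatim}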
Even though quantum cluster schemes have provided a tremendous breakthrough in our understanding of the Hubbard model, they are also subject to limitations. Quantum Monte Carlo solvers suffer from the sign problem, which scales exponentially with inverse temperature, interaction strength and cluster size. This limits the application of the method to relatively small cluster sizes, higher temperatures and intermediate interactions. The limitation in the cluster size is particularly problematic close to a phase transition, where the correlation length diverges. The coarse-graining also limits the momentum resolution, which for typical cluster sizes is too small to capture detailed features of the spectra, such as van Hove singularities. For a Fermi liquid, this is not a limitation, since the physics is dominated by the low frequencies at which the self-energy is momentum independent. However, intrinsically anisotropic states, such as the pseudogap, or possibly the quantum critical region, can be captured only approximately. Finally, MEM uses Bayesian statistics to find the most likely spectra for the QMC data, subject to sum rules, such as conservation of the spectral weight. Because of the statistical errors in the QMC data, the frequency spectrum resolved using MEM has a limited resolution. Despite those limitations, progress can be achieved in accessing the quantum critical region by algorithmic optimizations. A truly universal way of dealing with the severity of the sign problem is to vastly increase the statistics, using massively parallel QMC algorithms with highly optimized codes. \section{From Fermi Liquid to Pseudogap}\label{section:fermiliquid_pseudogap} A great advantage of the DCA is its ability to evaluate the self-energy as a function of momentum {\bf k} and Matsubara frequency $i\omega_n$, $\Sigma({\bf k},i\omega_n)$. From the self-energy, various single-particle quantities, such as the spectral function, $A({\bf k},\omega)$, the quasiparticle weight, $Z_{\bf k}$, and the energy can be derived. All those quantities provide insight into the ground state of the system. In this section we show how the transition from the Fermi-liquid to the pseudogap state is reflected in such single-particle quantities. {\bf (a) Spectral Function} The single-particle spectral function shows a clear evolution from a Fermi-liquid to a pseudogap state as the filling increases towards half filling. Fig. \ref{fig:single_spectrum} displays a density plot of the spectral function, $\displaystyle A({\bf k},\omega)=-\frac{1}{\pi}\Im G({\bf k},\omega)$, which is extracted by analytically continuing the imaginary-time Green function. At low filling, $n<0.85$, the spectral function exhibits a typical Fermi-liquid form. A notable characteristic is the presence of a wide saddle-point region, reminiscent of a van Hove singularity~\citep{Radtke94}, along the antinodal direction. Around the critical filling of $n=0.85$ this saddle-point feature crosses the chemical potential. This crossing results in a sharp peak in the density of states~\citep{raja_qcp}, which displays low-energy particle-hole symmetry~\citep{s_chakraborty_08}. We are currently exploring the influence of the van Hove singularity on the superconducting transition~\citep{Sandeep}. At higher filling, $n>0.85$, the spectral weight collapses along the antinodal direction and a pseudogap opens. The Fermi surface obtained by extremizing $\left|\nabla n_{\bf k}\right|$ shows a similar evolution (see lower panels in Fig. \ref{fig:single_spectrum}).
The Fermi-liquid region consists of a large hole pocket, which extends and touches the edges of the Brillouin zone $(0,\pm \pi)$, $(\pm \pi,0)$ at $n=0.85$. In the pseudogap region the Fermi surface consists of four Fermi arcs centered around the nodal points, similar to the ones obtained from ARPES. These results clearly demonstrate that the DCA can capture qualitatively the evolution of the ground state from a Fermi-liquid to a pseudogap phase. \begin{figure} \includegraphics[width=\textwidth]{Akw_spectra} \includegraphics[width=\textwidth]{gradnk} \caption{Upper panels: Density plots of the spectral function $A({\bf k},\omega)$ for the Fermi liquid (left), marginal Fermi liquid (middle) and pseudogap region (right) for filling $n=0.75, 0.85$ and $0.95$, respectively. (The dashed feature seen in the regions of steepest dispersion, especially for $n=0.75$, is a plotting artifact.) The momentum is along the path $G(0,0)\rightarrow M(\pi,\pi)\rightarrow X(\pi,0)\rightarrow G(0,0)$. A wide saddle-point region between X and G sits above the chemical potential in the Fermi-liquid region and crosses it around the critical filling ($n=0.85$). In the pseudogap region this feature sits below the chemical potential, leaving a gap along the antinodal direction behind it. Note that the apparent discontinuity of the dispersion along $G(0,0)\rightarrow M(\pi,\pi)$ in the left and middle panels is an artifact of our interpolation algorithm. Lower panels: Fermi surface as extracted from $|\nabla n_{\bf k}|$ in the Fermi liquid (left), marginal Fermi liquid (middle) and pseudogap (right) region, showing the development of the pseudogap in the antinodal direction. The Coulomb repulsion is $U=6t$, the temperature $T=0.069t$, and the cluster size $N_c=16$. The energy unit is $4t$. \label{fig:single_spectrum}} \end{figure} {\bf (b) Quasiparticle Weight} Whereas the spectral function gives a qualitative understanding of the ground state, it relies on the analytic continuation of numerical data. Since extracting quantitative information from analytically continued data is difficult, a more robust way is to rely on imaginary-time quantities, such as the quasiparticle weight $Z({\bf k})$. Since the quasiparticle weight is finite across a Fermi surface, but vanishes if the spectrum is incoherent, it allows us to clearly distinguish between a Fermi liquid and a pseudogap state. The quasiparticle weight can be directly obtained from the Matsubara-frequency self-energy as $\displaystyle Z_{0}\left({\bf k}\right)= \left(1-\frac{\Im\Sigma\left({\bf k},i \omega_0 \right)}{\omega_0}\right)^{-1}$, where $\omega_0=\pi T$ is the lowest fermionic Matsubara frequency. In the limit $T\rightarrow0$ and for a well-behaved self-energy, $Z_{0}\left({\bf k}\right)$ converges to the quasiparticle weight, $Z\left({\bf k}\right)$. Fig.~\ref{fig:Quasiparticle-weight-raja} (a) displays $Z_{AN}=Z_0(\omega_0=\pi T, k \parallel (0,0) \rightarrow (0,\pi))$, the Matsubara quasiparticle weight along the antinodal momentum direction for $U=6t$ and a cluster of size $N_c=16$~\citep{raja_qcp}. The momentum ${\bf k}$ at the Fermi surface is determined by maximizing $\left| \nabla n({\bf k}) \right|$. $Z_{AN}$ exhibits two distinguishable behaviors: for $n>n_{c}=0.85$ the quasiparticle weight vanishes, whereas it approaches a finite value for $n<n_{c}$. The $n>n_{c}$ region corresponds to the pseudogap state, in which the spectral weight collapses along the antinodal direction, while the $n<n_{c}$ region behaves as a Fermi liquid.
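The quantity just defined is straightforward to evaluate; the sketch below computes $Z_0({\bf k})$ from the imaginary part of a Matsubara self-energy, with a few made-up values standing in for QMC output.

\begin{verbatim}
import numpy as np

def z0(im_sigma_w0, T):
    """Matsubara quasiparticle weight
    Z_0(k) = 1 / (1 - Im Sigma(k, i*w0) / w0), with w0 = pi*T."""
    w0 = np.pi * T
    return 1.0 / (1.0 - im_sigma_w0 / w0)

# Hypothetical Im Sigma(k_AN, i*w0) values at a few temperatures:
T = np.array([0.4, 0.2, 0.1, 0.05])
im_sigma = np.array([-0.9, -0.5, -0.3, -0.18])
print(z0(im_sigma, T))
\end{verbatim}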
The temperature dependence of $Z_{AN}$ (Fig. \ref{fig:Quasiparticle-weight-raja} (a)) not only provides information about the ground state but also allows the extraction of relevant energy scales. By comparing the numerical results with analytical expressions derived from particular phenomenological forms of the self-energy, we obtain $T_{X}$ and $T^*$. At low filling, $n<n_{c}$, the high-$T$ dependence of $Z_{AN}$ is best fit by a marginal Fermi-liquid form, whereas at low $T$ the data are best fit by a Fermi-liquid form. The crossover occurs at a temperature $T_X$, which is extracted by fitting with a crossover function, and is accompanied by a change in the sign of the curvature of $Z_{AN}$. At higher filling ($n>0.85$), the high-temperature $Z_{AN}$ can also be fit by a marginal Fermi liquid, whereas at low temperatures it cannot. The crossover temperature $T^*$ can be extracted as the lowest temperature where the marginal Fermi-liquid fit lies within the statistical error. However, a more accurate value can be obtained from the bulk spin susceptibility, which exhibits a peak at $T^*$; the two values are found to be consistent~\citep{raja_qcp}. The crossover temperatures $T_{X}$ and $T^{*}$ are shown in Fig. \ref{fig:Quasiparticle-weight-raja} (b). Both of them converge to zero as the filling approaches $n_c=0.85$, which is the same value for which the peak in the density of states~\citep{raja_qcp} crosses the chemical potential. \begin{figure} \begin{center} \includegraphics[height=0.42\textwidth]{raja_quasiparticle_weight} \includegraphics[height=0.42\textwidth]{temperatures} \caption{{\bf a)} The antinodal quasiparticle fraction $Z_{AN}$ as a function of temperature for different values of filling, $U=6t$ and cluster size $N_c=16$ (the unit of energy is $4t$). The onset of the pseudogap region is determined by the vanishing of the antinodal spectral weight at zero temperature. The solid and dashed lines represent fits of the low-temperature ($T<0.3$) data to marginal Fermi-liquid (red solid curves), Fermi-liquid (black solid curves) and crossover forms (dashed black curves), respectively. The arrows show the corresponding crossover temperatures $T_X$ and $T^*$. The value of $T^*$ presented here is obtained from the spin susceptibility as explained in~\citep{raja_qcp}, but is consistent with the one extracted from the fitting forms. The ratio $Z_{N}/Z_{AN}$ of the quasiparticle weight in the nodal ($(\pi,\pi)$) and antinodal ($(0,\pi)$) directions (inset) diverges as the pseudogap develops, in accordance with Fig.~\ref{fig:single_spectrum}. {\bf b)} The crossover temperatures $T_X$ and $T^*$ as a function of filling, as extracted from the temperature dependence of $Z_{AN}$~\citep{raja_qcp} for the same parameters. \label{fig:Quasiparticle-weight-raja}} \end{center} \end{figure} {\bf (c) Thermodynamics} A different perspective on the transition from a Fermi liquid to the pseudogap state comes from the evaluation of the entropy. We obtain the entropy by integrating the energy using the formula: \begin{equation} S(\beta,n)=S(0,n)+\beta E(\beta,n)-\int_{0}^{\beta}E(\beta^{\prime},n)d\beta^{\prime}, \label{eq:entropy_integral_energy} \end{equation} where $\beta$ is the inverse temperature and $S(0,n)$ is the infinite-temperature entropy. Equation \ref{eq:entropy_integral_energy} is appropriate for QMC calculations, because the integration reduces the statistical error. The challenge is to have good enough statistics to control the error of the surface term, $\beta E(\beta,n)$.
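As a concrete sketch of Eq.~\ref{eq:entropy_integral_energy}, the function below integrates tabulated energies on a $\beta$ grid with the trapezoidal rule. The grid and the stand-in energy curve are placeholders for the actual QMC data; the infinite-temperature value $S(0,n)=\ln 4$ per site holds at half filling, where all four local states are equally probable.

\begin{verbatim}
import numpy as np

def entropy(beta, E, S_inf):
    """S(beta) = S(0) + beta*E(beta) - int_0^beta E db'
    evaluated on a grid with beta[0] = 0."""
    dI = 0.5 * (E[1:] + E[:-1]) * np.diff(beta)        # trapezoid panels
    integral = np.concatenate(([0.0], np.cumsum(dI)))  # running integral
    return S_inf + beta * E - integral

# Hypothetical usage with a smooth stand-in for E(beta) per site:
beta = np.linspace(0.0, 14.0, 57)
E = -1.0 + 2.0 * np.exp(-0.5 * beta)
S = entropy(beta, E, S_inf=np.log(4.0))   # ln 4 per site at n = 1
\end{verbatim}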
In Mikelsons {\em et al.}~\citep{mikelsons_thermodynamics}, sufficient statistics were obtained simply by using large computational resources. The entropy divided by the temperature, shown in Fig. \ref{fig:Entropy-versus-filling} (a), exhibits a maximum at exactly the same critical filling that was identified before from the spectral function and the quasiparticle weight. In Fig.~\ref{fig:Entropy-versus-filling} (b), we show the chemical potential, $\mu$, as a function of temperature. We note that at the critical filling $d\mu/dT=0$, since the entropy and the chemical potential are related by the Maxwell relation: \begin{equation} \left(\frac{\partial S}{\partial n}\right)_{T,U}= -\left(\frac{\partial\mu}{\partial T}\right)_{U,n}.\label{maxwell_S_mu} \end{equation} \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{SoverT_and_chempot} \end{center} \caption{{\bf a)} The filling dependence of the entropy divided by temperature, $S/T$, for various temperatures at $U=6t$ and $N_c=16$. With decreasing temperature a peak develops around the critical filling of $n_c=0.85$. {\bf b)} The temperature dependence of the chemical potential $\mu$ for different fillings. At the critical filling, $n_c$, $\mu$ becomes temperature independent at low temperatures. \label{fig:Entropy-versus-filling}} \end{figure} \begin{figure} \includegraphics[height=0.4\textwidth]{mu_vs_T_Us} \includegraphics[height=0.4\textwidth]{nc_vs_U} \caption{{\bf a)} The chemical potential as a function of temperature for fillings of $n=0.85$ and $0.90$ and for a variety of interaction strengths $U$ for $N_c=12$. {\bf b)} The critical filling, defined as the filling at which $\partial\mu /\partial T=0$, versus $U$. The critical filling decreases monotonically with $U$ and is projected to reach the atomic-limit value of $n_c=2/3$ at $U_c=30t$.\label{fig:nc_vs_U}} \end{figure} The temperature dependence of the chemical potential can also be used as a practical criterion to identify the location of the critical filling, because evaluating the chemical potential is much less computationally intensive than evaluating the entropy. Using this criterion we investigate the important question of the dependence of $n_c$ on the Coulomb repulsion $U$. As shown in Fig.~\ref{fig:nc_vs_U}, we find that increasing $U$ reduces the critical filling and thus enlarges the pseudogap region in the phase diagram. Our results follow the trend proposed in earlier arguments~\citep{s_chakraborty_08}, according to which the critical filling decreases in order to reach the atomic-limit value of $n_c=2/3$. In this section we have shown that several single-particle quantities are consistent with the presence of a QCP. The qualitative form of the single-particle spectrum shown in Fig.~\ref{fig:single_spectrum} is fundamentally different in the Fermi-liquid and the pseudogap regions, which points to orthogonal ground states. The temperature dependence of the quasiparticle weight reveals the presence of two crossover temperatures $T^*$ and $T_X$, which converge to zero at $n_c$, as shown in Fig.~\ref{fig:Quasiparticle-weight-raja} (b). If the crossover temperatures $T_X$ and $T^*$ constitute energy scales that suppress degrees of freedom, their vanishing at $n_c$ means that there are no relevant energy scales to quench the entropy; the entropy therefore collapses at a slower rate, which is consistent with the peak of the entropy observed at $n_c$. The natural next step in investigating quantum criticality is to access the QCP.
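Returning briefly to the chemical-potential criterion used above: locating the filling whose low-temperature $\mu(T)$ is flattest is easy to automate, as in the sketch below, where the $\mu(T,n)$ curves are hypothetical stand-ins for DCA output.

\begin{verbatim}
import numpy as np

def critical_filling(T, mu, fillings):
    """Pick the filling whose mu(T) is flattest, i.e. minimizes
    |d mu / d T|; by the Maxwell relation this is where S(n) peaks."""
    slopes = [np.abs(np.gradient(mu_n, T)).mean() for mu_n in mu]
    return fillings[int(np.argmin(slopes))]

# Hypothetical mu(T) curves; the n = 0.85 curve is made flat on purpose.
T = np.linspace(0.05, 0.5, 10)
fillings = np.array([0.80, 0.85, 0.90])
mu = np.array([1.8 - 0.4 * T, 2.2 + 0.0 * T, 2.6 + 0.4 * T])
print(critical_filling(T, mu, fillings))   # -> 0.85
\end{verbatim}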
Accessing the QCP directly is, however, hindered by the fermion sign problem, which severely limits the applicability of quantum Monte Carlo techniques close to a QCP. As we discuss in the next section, it is nevertheless possible that by tuning an appropriate control parameter the critical point may be lifted to finite temperature and thus studied with QMC. \section{Phase Separation and Quantum Criticality}\label{section:phase_separation} Experiments suggest that cuprate superconductors are susceptible to charge inhomogeneities, such as stripes or checkerboard modulations~\citep{hinkov04}. These inhomogeneous charge patterns have stimulated intense theoretical and experimental research. Here we will consider the possibility that those charge instabilities are evidence that the cuprates are close to a phase separation transition, and that this proximity may be related to the nature of the QCP. \begin{figure} \begin{center} \includegraphics[height=0.40\textwidth]{Critical_point} \includegraphics[height=0.40\textwidth]{Isotherms_tp03_nvsmu} \caption{{\bf a)} The schematic phase diagram in the presence of charge separation. This phase diagram describes the transition between two states labeled Mott liquid (ML) and Mott gas (MG) as a function of temperature, $T$, chemical potential, $\mu$, and filling, $n$. The red surface represents the coexistence region, which terminates in a critical point (CP). As we go around the critical point the state changes smoothly from ML to MG. Along the first-order transition line and for a fixed $T$ and $\mu$, the filling has two values. {\bf b)} Filling as a function of chemical potential for several temperatures in the vicinity of the charge separation critical point. The number next to each curve represents the temperature. The coexisting phases are an incompressible Mott liquid at $n\approx1$ and a compressible Mott gas at $n\approx0.93$. The critical temperature is $T_c=0.1t$. The blue dashed line represents the surface of metastability, which is not accessible within the DCA. The green dotted line represents the isotherm of the metastable state inside the phase coexistence region (gray zone). At the critical point the isotherms for $T>T_c$ cross. The inset shows the scaling curve $(n-n_c)(T-T_c)^{-\beta}$ vs $(\mu-\mu_c)(T-T_c)^{-\beta\delta}$ in arbitrary units for $\mu_c=3t$, $n_c=0.96$, $T_c=0.1t$. The scaling exponents, $\beta=0.10 \pm 0.05$ and $\beta \delta \sim 1$, are roughly consistent with the Ising universality class. \label{fig:Phase-separation}} \end{center} \end{figure} Our findings suggest that the Hubbard model displays a phase diagram similar to the one for the gas-liquid transition, with Mott liquid (ML) and Mott gas (MG) regions. Fig.~\ref{fig:Phase-separation}(a) shows a possible phase diagram for the Hubbard model as a function of $T$, $|\mu|$, and $n$. The red-colored surface is a schematic of the region where the Mott liquid and Mott gas states, characterized by different densities, coexist for $T<T_c$. The critical point is located at temperature $T_c$, filling $n_c$, and chemical potential $\mu_c$. One can go from one state to the other either smoothly, by avoiding the phase separation region, or through a first-order transition by crossing it. Right on the phase separation region, the density has two values for a given value of $\mu$ and $T$. \begin{figure} \begin{center} \includegraphics[width=0.95\textwidth]{DOSMLMG} \caption{The density of states of the {\bf a)} Mott liquid and {\bf b)} Mott gas states at $T=0.077t$ (dotted line) and $T=0.057t$ (solid line).
The Mott liquid is an incompressible insulator with a pseudogap, while the Mott gas is weakly compressible with a Fermi-liquid peak in the DOS. \label{fig:dos_mg_ml} } \end{center} \end{figure} Macridin {\em et al.} \citep{macridin_phases} provided compelling evidence of phase separation in the case of the generalized Hubbard model (Eq.~(\ref{eq:gen_hubbard})) with positive next-nearest-neighbor hopping $t^{\prime}=0.3\,t$ and $U=8t$. Using the DCA on an $N_c=8$ cluster with HFQMC as the cluster solver, they showed that below a critical temperature $T_{c}\sim 0.1t$ a first-order transition occurs, identified by a hysteresis in the $n$ versus $\mu$ curve for $T<T_c$. As shown in Fig.~\ref{fig:Phase-separation}(b) with more precise data obtained using DQMC as the cluster solver, the hysteresis is between two states of different filling, the Mott liquid at half filling and the Mott gas at a filling of about 0.93 for $T=0.071t$. The Mott liquid is incompressible and insulating. Its compressibility, which is the slope of the filling versus $\mu$ curve on the high-filling side of the hysteresis curve, is small and decreases with temperature. Also, the density of states of the ML phase, shown in Fig.~\ref{fig:dos_mg_ml}(a), exhibits a gap, as expected for an insulator. On the other hand, the Mott gas is compressible and metallic; the density of states is finite at the chemical potential ($\mu=\omega=0$), as displayed in Fig.~\ref{fig:dos_mg_ml}(b). The analogy to the well-known phase diagram of a liquid-gas mixture, such as water and steam, is useful for understanding this phase transition. At low temperatures, there is a region in the pressure-volume phase diagram in which water and steam coexist for a range of pressures. As the temperature is increased, the region of coexistence contracts and finally terminates at a critical point where the compressibility diverges. In the pressure-temperature phase diagram, this region of coexistence becomes a line of first-order transitions, which terminates at a second-order point where the water and steam become indistinguishable and the compressibility diverges. Since the line terminates, it is possible for the system to evolve adiabatically from steam to water without crossing a phase transition line; therefore, the steam and water must have the same symmetry. In the Mott liquid and Mott gas system the chemical potential $\mu$ replaces the pressure and the density $n$ replaces the volume of the water-gas mixture. Because the order parameter separating the ML from the MG, the density $n$, does not have a continuous symmetry, order may occur at finite temperatures, and the ML-MG transition will most likely be in the Ising or lattice-gas universality class. Within this context, one may then understand the hysteresis of Fig.~\ref{fig:Phase-separation}(b). The solid lines are isotherms which show how the system evolves with increasing density. At the temperature $T=T_c$, the compressibility diverges at the critical filling. As the temperature is lowered further, there is a region where the ML and MG coexist. Inside this region the isotherms contain unphysical regions of negative compressibility (dashed green line in Fig.~\ref{fig:Phase-separation}(b)) along with metastable regions of positive compressibility. The metastable branch of the isotherm in the vicinity of the ML is a ``supercooled'' ML, whereas the one in the vicinity of the MG is a ``superheated'' MG.
The translational invariance of the DCA, along with the stabilizing effect of the mean-field host, enables access to those metastable states. However, the real physical system will phase separate and the two phases will coexist in equilibrium (dotted blue line in Fig.~\ref{fig:Phase-separation}(b)). We can sketch the phase diagram in the $T-\mu$ plane using the analogy to the water-steam mixture. The most generally applicable rule governing the shape of phase diagrams was established by Gibbs. For a system of $c$ conserved components and $f$ phases, the Gibbs constraint is given by the relation $\Phi=c-f+2$, where $\Phi$ is the number of independent variables needed to specify the state of every phase. In this case, as in the water-steam system, the number of components $c=1$, since the particle number is conserved. At a location in the phase diagram where only one phase exists, $\Phi=1-1+2=2$, so there are two independent variables, and the phase diagram is a surface in the three-dimensional $\mu$, $T$ and $n$ space. There will be places in the phase diagram where two phases exist simultaneously; then $\Phi=1-2+2=1$, implying that two phases coexist only along lines in the phase diagram. Along the lines in the $T-\mu$ plane where two phases coexist, $n$ is also determined for each phase, but its value can be different. That is a line of first-order transitions. \begin{figure} \begin{center} \includegraphics[width=0.95\textwidth]{T-mu_classical-quantum} \caption{(a) The chemical potential-temperature phase diagram of the ML and MG mixture for $t'>0$. The ML and MG coexist on a line of first-order transitions with positive slope. Since the ML and MG have the same symmetry, this line can terminate in a second-order critical point. The blue dashed lines define the boundaries of the supercritical region, where the ML and MG cannot be distinguished. Outside this region either the ML or MG character dominates. (b) The chemical potential-temperature phase diagram for $t'\rightarrow0$. The first-order line is absent but the supercritical region remains as a quantum critical region. In the Hubbard model the lines $T^*$ and $T_X$ (Fig.~\ref{fig:Quasiparticle-weight-raja}(b)) define the boundaries of this region. \label{fig:T-mu_classical}} \end{center} \end{figure} Additional information about the lines of first-order transitions is obtained from the Clapeyron equation. The Gibbs free energy $G= E -TS - \mu N$, and $dG = -SdT -Nd\mu$, must be the same for the coexisting phases on a line. If we label the two phases $1$ and $2$, then \begin{equation} (S_1-S_2)dT = -(N_1-N_2)d\mu \,. \end{equation} If we identify the latent heat $L = (S_1-S_2)T$, then $d\mu/dT = -L/(T \Delta n)$, with $\Delta n$ the difference in filling between the two phases, represents the slope of the first-order transition line. Since the latent heat $L$ of going from ML to MG is positive, but $\Delta n$ is negative, the slope $d\mu/dT$ of the line of first-order transitions is positive. Above the critical point terminating the ML-MG transition, the system displays supercritical behavior in a region where the gas and the liquid cannot be distinguished thermodynamically (cf.\ Fig.~\ref{fig:T-mu_classical}). It is possible for the system to evolve adiabatically through a counterclockwise path from deep in the MG region, through the supercritical region, into the ML region. At the lower edge of the supercritical region, the system loses the Fermi-liquid character of the MG, and at the upper edge, it begins to acquire the pseudogap character of the ML.
Let us now discuss how this phase separation, which occurs at finite temperature, is related to quantum criticality. The key parameter is the next-nearest-neighbor hopping, $t^{\prime}$. For $t^{\prime}=0$ there is no evidence for phase separation at finite $T$, but such a phase separation occurs for positive $t^{\prime}$. Khatami {\em et al.}~\citep{khatami_criticality} performed a systematic analysis of the phase diagram of the extended Hubbard model as a function of $t^{\prime}$. As shown in Fig.~\ref{fig:mu-vs-n-ehsan}(a), the compressibility, $\chi_{c}=dn/d\mu$, exhibits a peak for all positive $t^{\prime}$ at a critical filling that depends on $t^{\prime}$. The width of the peak measures the distance from the critical temperature: the sharper the peak, the closer the employed temperature is to $T_{c}$. We see that the critical temperature increases with $t^{\prime}$, starting from $T_{c}=0$ at $t^{\prime}=0$. These results point to the phase diagram of Fig.~\ref{fig:mu-tp-T-phase-diagram}(b). At positive $t^{\prime}$ a phase separation occurs at temperatures $T<T_{c}(t^{\prime})$ and at a critical filling $n_{c}(t^{\prime})$ between an incompressible, insulating Mott liquid and a compressible, metallic Mott gas. Right at $T_{c}$, there is a terminating second-order critical point. By decreasing $t^{\prime}$ this second-order critical point is pushed down to lower temperatures. Presumably the line of second-order critical points terminates at the QCP. \begin{figure} \includegraphics[width=0.5\textwidth]{ehsan_nvsmu_Nc16B_U1\lyxdot 5_tp0\lyxdot 0-0\lyxdot 3} \includegraphics[width=0.5\textwidth]{second_order_terminus_pd} \caption{(a) Filling, $n$, vs.\ chemical potential, $\mu$, for $T=0.077t$, $N_c=16$, $U=6t$ and various $t^{\prime}$ is shown in solid lines and the compressibility $\displaystyle \frac{dn}{d\mu}$ in dashed lines. A critical filling, identified by the peak in the compressibility, appears at higher temperatures and fillings as $t^\prime$ is increased. The inset shows the $t^\prime$ dependence of the critical filling, $n_c$. (b) Schematic phase diagram of the Hubbard model in the $\mu$, $t^\prime$ and $T$ space (neglecting superconductivity). The classical critical point turns asymptotically into a quantum critical point as $t^\prime \rightarrow 0$. \label{fig:mu-vs-n-ehsan}\label{fig:mu-tp-T-phase-diagram}} \end{figure} Such a scenario constitutes a new path to quantum criticality, as it is closely tied to charge fluctuations rather than spin fluctuations. However, numerous simulations suggest that a finite positive $t'$ enhances antiferromagnetic correlations, and since phase separation is only present for $t'>0$, this suggests that the phase separation is driven by strong spin correlations. In addition, previous simulations incorporating Holstein phonons into the Hubbard model found that phonons also enhance the phase separation instability~\citep{macridin_isotope}. As $t'/t \to 0$ (and the electron-phonon coupling vanishes), the phase separation critical point approaches zero temperature, becoming a QCP. Here, the first-order behavior is absent from the phase diagram (Fig.~\ref{fig:Quasiparticle-weight-raja}(b)), leaving only the adiabatic path from the ML to the MG, which passes through the supercritical region, now the quantum critical (QC) region. The crossover scale $T_X$ and the pseudogap scale $T^*$ are understood as the boundaries of the QC region. As we cross the line of $T^*$ from the QC region into the ML region, the characteristics of the ML become apparent, including the pseudogap in the DOS and the insulating behavior. As we cross the line of $T_X$ from the QC region into the MG, the characteristics of the MG become apparent, including Fermi liquid formation.
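As a purely illustrative aside (this is not the analysis code of~\citep{khatami_criticality}), the way such a compressibility peak is extracted from measured $n(\mu)$ curves can be sketched in a few lines of Python, with a synthetic $n(\mu)$ standing in for the DCA/QMC data:
\begin{verbatim}
import numpy as np

# Synthetic stand-in for n(mu) measured at fixed T, U and t'.
mu = np.linspace(-0.4, 0.4, 81)        # chemical potential (units of t)
mu_c, w = 0.05, 0.06                   # toy critical point and width
n = 0.85 + 0.15 / (1.0 + np.exp(-(mu - mu_c) / w))

# Charge compressibility chi_c = dn/dmu via centered differences.
chi_c = np.gradient(n, mu)

# The critical filling n_c is read off at the compressibility peak.
i = np.argmax(chi_c)
print(f"peak chi_c = {chi_c[i]:.3f} at mu = {mu[i]:.3f}, n_c = {n[i]:.3f}")
\end{verbatim}
In practice the input arrays would come from simulations at fixed $T$, $U$ and $t'$, and the sharpening of the peak with decreasing temperature signals the approach to $T_c$.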
Those calculations do not, of course, elucidate the nature of the Mott liquid and Mott gas states in real materials. The long-ranged nature of the Coulomb interaction prevents true charge-separated states, but the phase separation we observe may also correspond to other charge instabilities, such as stripes or checkerboard patterns. To distinguish between different charge instabilities, systematic calculations in much larger clusters are necessary, which are not practical at the moment. However, whatever the type of order, those calculations provide convincing evidence for the existence of a first-order transition at low temperatures. Such a transition is similar to the liquid-gas or the ferromagnetic transition, and its phase diagram would look like Fig.~\ref{fig:Phase-separation}(a): a first order line of coexistence which terminates at a critical point at a critical temperature $T_{c}$ and a critical filling $n_{c}$. \section{Conclusions} The presence of a QCP at finite filling in the cuprate phase diagram is a topic of active theoretical and experimental research. Quantum cluster methods are able to shed some light on this phase diagram. By studying single-particle quantities for $t^{\prime}=0$, such as the spectral function and the entropy, it can be shown that the Fermi-liquid region at low filling and the pseudogap region at higher filling have different spectral signatures, and are connected through an intermediate ``marginal Fermi-liquid'' region of maximal entropy. Due to limitations of quantum Monte Carlo, the ground state and quantum criticality are not directly accessible. We also neglect the superconducting phase transition. The connection with quantum criticality is established by switching on $t^{\prime}$. For positive $t^{\prime}$ a classical critical point emerges at a finite temperature $T_c$, which increases with $t^\prime$. We note that $t^\prime$ is not the only control parameter that may be able to tune the critical point to finite temperatures; other parameters, such as the phonon coupling, may have the same effect. The phase diagram around the critical point is similar to that of the gas-liquid transition, where the incompressible Mott liquid and the compressible Mott gas are the coexisting phases. The strange metal region in this context may be viewed as the supercritical region lying in the vicinity of the critical point. Within the scenario we presented, the pseudogap region is not characterized by an order parameter; rather, it must have the same symmetry as the Fermi-liquid and the marginal Fermi-liquid, since these regions are connected by an adiabatic path in the $T-\mu$ phase diagram. Further investigation is necessary to fully characterize the pseudogap region, and also to investigate the connection of those results with other scenarios of quantum criticality. \section{Acknowledgements} We would like to thank R. Gass, S. Kivelson, D. J. Scalapino, A. M. Tremblay, C. Varma, M. Vojt, S. R. White and J. Zaanen for useful discussions that helped during the development of the presented work. This research was supported by NSF DMR-0706379, DOE CMSN DE-FG02-08ER46540, and by the DOE SciDAC grant DE-FC02-06ER25792.
This research used resources of the National Center for Computational Sciences at Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. \bibliographystyle{rspublicnat}
\section{Introduction} \vspace{-.1in} Human motion analysis is an important topic in the computer vision community. Recent advances in human-related application systems bring new human-centered challenges such as driver behavior recognition~\cite{drive_and_act_2019_iccv}, crowd pose estimation for crowd motion analysis~\cite{golda2019crowdposeestimation} or human action recognition in the dark~\cite{xu2020arid}. However, the CNNs trained for such higher-level tasks require large amounts of annotated data. This data is often challenging to collect and properly annotate. Therefore, synthetic photo-realistic data is often considered a cost-effective way to augment existing datasets~\cite{3dsemseg_ICCVW17,fabbri2018learning,kviatkovsky2020real}. In this work, we introduce a method for synthesizing real-looking videos of multiple persons dancing side by side and switching places, based on real input and target videos. Building on the Everybody Dance Now work~\cite{chan2019dance}, and similar to recent video-to-video translation works~\cite{zhou2019dance,gomes2020,ren2020}, we first extract pose skeletons using a state-of-the-art method~\cite{cao2017realtime,openpose1,openpose2,openpose3}. Since those works only address the video-to-video translation problem for a single person, we extend the simple yet efficient method from~\cite{chan2019dance} to the multi-person transfer problem. We collect online videos from dance workouts with different numbers of persons and perform an ablation study depending on the number of persons. Furthermore, we improve the face generation network by using more accurate face landmarks. Finally, this scenario brings new challenges regarding the pose transfer of each individual in the group. The normalization step is adapted in order to accurately map each subject from the input video to its counterpart in the target. Furthermore, we address the problem of persons switching places by adapting the keypoint correspondence network from~\cite{umer2020self} for tracking each individual in the video. \section{Related Work} \label{sec:relatedwork} Breakthroughs in the field of image-to-image translation were recently offered by the introduction of conditional GANs for paired and unpaired images~\cite{Isola_2017,zhu2017unpaired}. Those works were rapidly followed by numerous methods for image and video manipulation. In this section, we review related work on image-to-image translation and appearance transfer. Wang et al.~\cite{Wang_2018} presented a method to generate high-resolution 2048$\times$1024~pixel results from semantic label maps using a perceptual loss, a coarse-to-fine generator and a multi-scale discriminator architecture. An approach was proposed in~\cite{lassner2017generative} to generate high-resolution images using semantic segmentation and texture prediction. Various generative adversarial networks were proposed to increase the visual quality of generated images using labels and texts~\cite{zhu2017your, zhang2017stackgan, yan2016attribute2image, odena2017conditional}. Liu et al.~\cite{liu2020pose} proposed an encoder-decoder for pose-guided high-resolution appearance transfer to a target pose. They use local descriptors by means of a progressive local perceptual loss and local discriminators at the highest resolution, followed by training of the autoencoder architecture.
Zanfir et al.~\cite{zanfir2018human} successfully transferred the appearance of a person in source images to a person in target images while preserving the body outline of the target person, using 3D pose as an intermediate representation. Kundu et al.~\cite{kundu2020} propose a recurrent network targeting the synthesis of 3D person interactions over long periods of time. Attribute-Decomposed GAN~\cite{men2020controllable} introduces a generative model for controllable person image synthesis, which generates desired human attributes such as pose, head, upper clothes and pants. Efros et al.~\cite{Efros03} transfer motion between videos based on predicted skeletons, introducing the concepts of ``Do as I do'', where the images of a target person are generated according to a driver's movement, and ``Do as I say'', where images of target persons are produced based on imposed commands. More recently, Zhou et al.~\cite{zhou2019dance} trained a model on a relatively long video of a target person, which resulted in the ability to transfer any movements of choice from a reference video to the target person while preserving the appearance of the target person. This model receives a frame of a target person and a pose from the reference as input and generates the images of the target person in that pose as output. Wang et al.~\cite{wang2019few} proposed a few-shot vid2vid framework for generating images of targets, including humans or scenes, that have never been seen previously. This model generalizes the poses of the reference video to a few example images of the target simultaneously. Liu et al.~\cite{liu2019video} proposed a generative adversarial learning-based approach to upper body video synthesis. They transfer body and facial landmarks of the source person onto the target person, followed by a normalization of the upper body landmarks, to generate facial features in the target video with spatio-temporal smoothing. Chan et al.~\cite{chan2019dance} proposed a similar approach for motion transfer from a source video to a target video. Their approach consists of two steps of pose encoding and normalization followed by a pose-to-video translation. Their poses are normalized by evaluating the ankle positions and heights of the subjects in order to adapt the size of the source to the target. Their pose-to-video translation uses a three-step coarse-to-fine approach, with one step explicitly addressing the quality of the generated face, and makes use of temporal smoothing. Videos for unseen in-the-wild poses are generated in~\cite{ren2020} using data augmentation and unpaired learning to improve the generalization of the system by minimizing the domain gaps between testing and training pose sequences. Gomes et al.~\cite{gomes2020} account for pose, shape, appearance, and motion features of the moving target. Finally, a graph convolutional network is proposed in~\cite{ferreira2020cag} for generating dance videos from audio information to create natural motions preserving the key movements of different music styles. Nevertheless, these methods focus only on the transfer of a single subject at a time. \section{Method} Starting from a video with a fixed number of source persons and a video with the same number of targets, we aim to generate a new synthetic version of the target video in which the persons now perform the movement seen in the source.
Following~\cite{chan2019dance}, the pipeline is divided into three stages: pose detection, global pose normalization, and mapping from normalized skeletons to the target subjects. \noindent\textbf{Pose Encoding}~~ Since the focus of our work lies on generalizing pose transfer, we use a pre-trained state-of-the-art model for pose estimation~\cite{cao2017realtime, openpose1, openpose2, openpose3} to produce accurate pose estimates for the input frames. We then generate a colored pose stick figure for each person. In order to reliably learn the appearance of each individual using the poses as intermediate representation, we notice empirically that the stick figures need a clearly distinct color for each person. If the colors of the body parts of two persons are too similar, the model tends to average both appearances and produces less realistic features. \noindent \textbf{Pose Normalization}~~ Since target and source videos have different environment and camera settings, as well as people with different physical appearances such as height and shape, a normalization step between the input and target subjects is needed in order to produce realistic frames; a sketch of this step is given at the end of this section. For instance, in Figure~\ref{fig:norm1}, the horizon of the source video is higher than the horizon of the target video, which causes the people in the generated video to appear above the horizon: their feet are not located on the floor. Moreover, if the person in the source video is taller than the corresponding person in the target video, the subject in the generated frame is abnormally tall. Similarly, if the distance between the people and the camera in the source video is shorter than that in the target video, the people in the generated video are disproportionately large. \noindent \textbf{Changes of Place between Source Subjects}~~ If the source subjects change places, an identity switch occurs in the target video, meaning that two input persons swap appearances after changing places. To address this, each subject needs to be tracked over time. In this case, poses are tracked before encoding using keypoint correspondences as proposed in~\cite{umer2020self}, with scenario-specific adjustments. First, we extend the model to track all 25 available body landmarks instead of only 17. Although we could also use face and hand landmarks, those are not predicted as reliably; therefore, we use body landmarks only. Since the number of subjects in each video remains the same, we drop frames in which more poses are detected than expected. Typically, when more poses are predicted than there are people in a frame, two poses are assigned to one person, which can result in identity switches. For better keypoint heatmap accuracy, we train the keypoint correspondence network with input images of size $512\times512$~pixels instead of $256\times256$~pixels. As the persons form a closed set, no similarity threshold is used and poses are always assigned in a greedy fashion to the identities of the closed set. \noindent \textbf{Pose to Video Translation}~~ After preprocessing, the tracked skeletons are used as input to our model. An overview of our method is given in Figure~\ref{fig:norm2}. Our model is based on an adversarial conditional GAN setup, trained in three stages: global, local and faces. For more details, we refer to~\cite{chan2019dance}.
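To make the normalization step concrete, the following Python sketch translates and scales each source skeleton so that its ankle position and height match those of its matched target subject. The function and the assumed $(25,2)$ keypoint layout are illustrative choices of ours, not the exact procedure of~\cite{chan2019dance}:
\begin{verbatim}
import numpy as np

def normalize_pose(src_kpts, src_ankle, src_height,
                   tgt_ankle, tgt_height):
    """Map one (25, 2) source skeleton into the target frame.

    src_ankle / tgt_ankle: mean ankle position of the matched subjects.
    src_height / tgt_height: ankle-to-head distances used for scaling.
    """
    scale = tgt_height / src_height
    # Scale about the source ankle, then translate onto the target
    # ankle, so the subject's feet stay anchored to the target floor.
    return (src_kpts - src_ankle) * scale + tgt_ankle
\end{verbatim}
Applied per subject, such a mapping keeps each generated person at a plausible scale and floor level regardless of differences in camera height or distance between the two videos.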
\begin{figure} \centering \includegraphics[width=0.23\textwidth]{./images/figure2/source.png} \includegraphics[width=0.23\textwidth]{./images/figure2/source.png} \\ \includegraphics[width=0.23\textwidth]{./images/figure2/result_before.png} \includegraphics[width=0.23\textwidth]{./images/figure2/result_after.png} \caption{Top: input images. Bottom left: a generated image before normalizing the source keypoints. Bottom right: the same frame after normalizing the source keypoints. In this case the normalization step allows our model to generate the feet of the subjects on the floor.} \label{fig:norm1} \end{figure} \section{Experiments} \subsection{Setup} A separate model is trained on the collected frames of each training video of 2, 3, 4 and 5 people, at a resolution of $1024\times512$. This is followed by an evaluation on unseen test frames in order to evaluate the efficiency of our approach. Each model is trained separately in three stages. In the first stage, a global generator is used for training the model. In the second stage, the model is refined with a local enhancer generator, and finally FaceGAN is used in the last stage. For our experiments we use the Adam optimizer with a learning rate of $0.0002$ and $\beta = 0.999$. For all experiments the batch size is set to $1$. We also set $\lambda_{VGG}=10$. We trained our models for about $168,000$ iterations, which required in total about $35$ hours on a RTX 2080Ti. As a baseline, we reproduce the results from~\cite{chan2019dance} for single-to-single person motion transfer. \noindent \textbf{Evaluation Metrics} We measure the quality of the synthesized frames using four metrics: 1) Peak Signal-to-Noise Ratio (PSNR), which measures pixel-level similarity between generated images and ground truth, 2) Structural Similarity (SSIM)~\cite{ssim}, which compares two images with respect to the three factors of luminance, contrast, and structure, 3) Learned Perceptual Image Patch Similarity (LPIPS)~\cite{lpips}, which measures the perceptual similarity between synthesized images and ground truth, and 4) Frechet Inception Distance (FID)~\cite{fid}, which measures the quality of the frames of generated videos. Higher values are better for 1) and 2); lower values are better for 3) and 4). \subsection{Quantitative Results} We compare our approach quantitatively for different numbers of people. We perform the evaluation on held-out test data and report our results in Table~\ref{tab:qualitative_eval_no-face}. We first report results for single-to-single transfer as a baseline. For this model almost four times more data is available for training than for multiple person transfer, which may partially explain the performance gap between the baseline and our models. Overall our models provide satisfying results given the few constraints we chose to apply when collecting the video pairs. Surprisingly, our 5-to-5 model performs especially well on the FID and LPIPS metrics, which means the results should be more convincing to the human eye. In order to refine our results, and since acceptable face keypoints for the 3-to-3 and 5-to-5 videos were available, we added 60 supplemental face landmarks to our model and report our findings in Table~\ref{tab:qualitative_eval_with-face}. This addition brought a strong improvement in terms of FID to the 3-to-3 model and a smaller improvement to the 5-to-5 model. However, we emphasize that these improvements are largely reliant on the quality of the pose estimator's predictions. Therefore, such additional facial landmarks may not always be available or of sufficient quality.
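As a side note, the pixel-level metric above is straightforward to compute; a minimal sketch of PSNR (assuming 8-bit images as NumPy arrays, and not the exact evaluation code used in our experiments) is:
\begin{verbatim}
import numpy as np

def psnr(img_a, img_b, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equally sized images."""
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
\end{verbatim}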
\begin{table} \centering \begin{tabular}{l c c c c} \toprule \textbf{Model} & \textbf{FID$\downarrow$}& \textbf{LPIPS$\downarrow$}& \textbf{PSNR$\uparrow$} & \textbf{SSIM$\uparrow$} \\ \midrule 1-to-1 & $20.843$ & $0.060$ & $37.837$ & $0.950$ \\\midrule 2-to-2 & $25.280$ & $0.085$ & $36.576$ & $0.947$ \\\midrule 3-to-3 & $24.364$ & $0.178$ & $34.135$ & $0.865$\\\midrule 4-to-4 & $26.797$ & $0.211$ & $33.319$ & $0.843$ \\\midrule 5-to-5 & $8.510$ & $0.088$ & $35.666$ & $0.925$ \\\bottomrule \end{tabular} \caption{Quantitative results. The 1-to-1 model is trained with 23,000 frames as in~\cite{chan2019dance}; the other models are trained with around 5,600 frames each due to the unavailability of more frames.} \label{tab:qualitative_eval_no-face} \end{table} \begin{table} \centering \begin{tabular}{l c c c c c} \toprule \textbf{Model} & \textbf{F} & \textbf{FID$\downarrow$} & \textbf{LPIPS$\downarrow$} & \textbf{PSNR$\uparrow$} & \textbf{SSIM$\uparrow$} \\ \midrule 3-to-3 & & $24.364$ & $0.178$ & $34.135$ & $0.865$\\\midrule 5-to-5 & & $8.510$ & $0.088$ & $35.666$ & $0.925$ \\ \hline\hline 3-to-3 & \checkmark & $20.210$ & $0.173$ & $34.214$ &$0.865$ \\\midrule 5-to-5 & \checkmark & $8.086$ & $0.088$ & $33.110$ & $0.830$ \\\bottomrule \end{tabular} \caption{Our best models are further optimized using 68 face keypoints (F) instead of only eight.} \label{tab:qualitative_eval_with-face} \end{table} \subsection{Qualitative Results} Transfer results for multiple source and target subjects can be seen in Figure~\ref{fig:transfer_1} and Figure~\ref{fig:transfer_2}. The advantage of target normalization can be clearly seen in Figure~\ref{fig:transfer_1}, where the input subjects are shifted to the left as in the learned target video. This property is important since the target scene could contain physical objects onto which a person would otherwise mistakenly be projected. We show results for more difficult face poses in Figure~\ref{fig:transfer_2}, with one example for which a turning face is handled properly and another for which our model struggles. In this case the turning head of the dancer in the input video has never been seen in a similar fashion during training for the target subject. Therefore, our model cannot handle the projection of the back of the head and produces a strong artifact instead of generating hair. Furthermore, failure cases for previously unseen extreme poses are illustrated in Figure~\ref{fig:results:fails_extrem_poses}. As shown in Table~\ref{tab:qualitative_eval_no-face}, as the number of subjects for transfer grows, the performance of the model generally declines, which is to be expected when increasing the difficulty of the task without altering the parameters of the model. However, the 5-to-5 model delivers contradictory results. We show qualitative results for this model in Figure~\ref{fig:results:5p_good}. Those results, and particularly the faces, are convincingly smooth. We notice the similarities in clothing between the target subjects. This setup not only boosts the performance metrics for this video due to the clothing, but also allows better performance for the whole scene. We argue that this is related to the limited number of parameters available to our model: fewer parameters are required to learn the appearances of the lower bodies, and therefore more parameters are available to accurately represent faces and upper bodies.
Further work could progressively increase the number of subjects and investigate the model size required to reach optimal transfer performance. Finally, identity switches are handled as shown in Figure~\ref{fig:results:no-switch}. However, such tracking is highly dependent on the quality of the pose estimator. Therefore, for input scenes in which a subject disappears for a long time behind another subject, the track may be lost, requiring a new mapping from source to target. While our results suggest that it is clearly feasible to convincingly transfer multiple persons at the same time, we find that the quality of the synthesized faces still needs improvement. Furthermore, extreme arm or face poses remain challenging. \begin{figure*} \centering \subfloat{ \includegraphics[width=0.32\textwidth]{./images/results/transfer2/frame000010} \includegraphics[width=0.32\textwidth]{./images/results/transfer2/frame000070} \includegraphics[width=0.32\textwidth]{./images/results/transfer2/frame000120} } \qquad \subfloat{ \includegraphics[width=0.32\textwidth]{./images/results/transfer2/frame000010_synthesized_image} \includegraphics[width=0.32\textwidth]{./images/results/transfer2/frame000070_synthesized_image} \includegraphics[width=0.32\textwidth]{./images/results/transfer2/frame000120_synthesized_image} } \qquad \subfloat{ \includegraphics[width=0.32\textwidth]{./images/results/transfer2_3p_18k/frame000010_synthesized_image} \includegraphics[width=0.32\textwidth]{./images/results/transfer2_3p_18k/frame000070_synthesized_image} \includegraphics[width=0.32\textwidth]{./images/results/transfer2_3p_18k/frame000120_synthesized_image} } \caption{Given a source video~\cite{3person1} (top) and two different target videos from~\cite{3person_sunny1} (middle) and~\cite{3person_sunny2} (bottom), our approach transfers the movements of the people in the top row to the people in the middle and bottom rows. While the middle row handles the turning face (right), the bottom row cannot manage the face pose and produces a dark artifact in the center of the face.} \label{fig:transfer_2} \end{figure*} \begin{figure*} \centering \subfloat{ \includegraphics[width=0.32\textwidth]{./images/results/transfer3/source/frame000002} \includegraphics[width=0.32\textwidth]{./images/results/transfer3/source/frame000113} \includegraphics[width=0.32\textwidth]{./images/results/transfer3/source/frame000321} } \qquad \subfloat{ \includegraphics[width=0.32\textwidth]{./images/results/transfer3/frame000002_synthesized_image} \includegraphics[width=0.32\textwidth]{./images/results/transfer3/frame000111_synthesized_image} \includegraphics[width=0.32\textwidth]{./images/results/transfer3/frame000319_synthesized_image} } \caption{Results for our 5-to-5 person model. Input appearance from~\cite{5person1} (top) followed by our results (bottom) on~\cite{5person2}.
Due to the relatively uniform target clothes, the results are particularly smooth.} \label{fig:results:5p_good} \end{figure*} \begin{figure*} \centering \subfloat{ \includegraphics[width=0.32\textwidth]{./images/results/transfer_2person/source/000526} \includegraphics[width=0.32\textwidth]{./images/results/transfer_2person/source/001176} \includegraphics[width=0.32\textwidth]{./images/results/transfer_2person/source/001197} } \qquad \subfloat{ \includegraphics[width=0.32\textwidth]{./images/results/transfer_2person/results/frame000526_synthesized_image} \includegraphics[width=0.32\textwidth]{./images/results/transfer_2person/results/frame001176_synthesized_image} \includegraphics[width=0.32\textwidth]{./images/results/transfer_2person/results/frame001197_synthesized_image} } \caption{Transfer results on~\cite{2person2} (bottom) with input targets from~\cite{2person1} (top) switching places without an identity switch.} \label{fig:results:no-switch} \end{figure*} \begin{figure*} \centering \subfloat{ \includegraphics[width=0.48\textwidth]{images/failures/frame000097.png} \includegraphics[width=0.48\textwidth]{images/failures/frame000374.png} }\qquad \subfloat{ \includegraphics[width=0.48\textwidth]{images/failures/frame000097_synthesized_image.png} \includegraphics[width=0.48\textwidth]{images/failures/frame000374_synthesized_image.png} } \caption{Failure cases. Input appearance from~\cite{5person1} (top) followed by our results (bottom) on~\cite{5person2}. If our model has seen only relatively small motions during training, it cannot handle extreme poses.} \label{fig:results:fails_extrem_poses} \end{figure*} \section{Conclusion} We extended and generalized the concept of video-based human motion transfer to multiple persons using a relatively simple yet efficient model. We address the pose normalization of multiple subjects and potential identity switches when different actors change places. Our method, while using only a few thousand frames, delivers high-quality videos of a target group of persons following the visual instructions of another group, even generating convincing shadows. However, our results are strongly limited by the data available for the target group, which is difficult to collect. Furthermore, input and target videos are required to share a similar perspective. Future work could focus on the training data and on extracting even more information such as semantic masks, dense poses or clothing information. A potential application is to create photo-realistic avatars from synthesized poses in order to efficiently render individuals anonymous and thereby facilitate the generation of new realistic data in the target domain.
\section*{Keywords} Blockchain; Distributed Ledger Technology; Smart Contracts; Defence Support Network; Supply Chain \end{abstract} \chapter{Acknowledgements} My first thanks must go to my dissertation supervisor: Dr Duncan Hodges. His contribution to this research can only be adequately described as `Nelsonian' - in that he provided me with battle-winning guidance whilst allowing me the research freedom to close and engage the enemy. I appreciate his wisdom and advice, and have enjoyed working with him. His advice on \LaTeX ~also saved a number of frustrating hours. Next I thank my interviewees, who were kind enough to give me their time and knowledge; there was not a single interview where I did not learn something new. I also thank colleagues who engaged with my research, especially those within Navy Command Information Warfare Division, the Defence Logistics Directorate and DSTL; of particular note is Gary Glennon-Alty. The patience and support of my line management during my entire MSc were also appreciated: Graham Cheshire and Captains P Waterhouse, M Rance, K Nicholson and A Parry Royal Navy.\\ \tableofcontents \sslistoffigures \sslistoftables \begin{listofabbreviations} \abbrev{ACTO}{Attractive to Criminal or Terrorist Organisations} \abbrev{ALIS}{Autonomic Logistics Information System} \abbrev{AM}{Additive Manufacturing} \abbrev{BOSE}{Blockchain Oriented Software Engineering} \abbrev{CfA}{Contracting for Availability} \abbrev{CSIS}{Codification Support Information System} \abbrev{DAO}{Decentralised Autonomous Organisation} \abbrev{DApp}{Decentralised Application} \abbrev{DAR}{Defence Application Register} \abbrev{DE\&S}{Defence Equipment and Support} \abbrev{DL}{Distributed Ledger} \abbrev{DLT}{Distributed Ledger Technology} \abbrev{DSTL}{Defence Science and Technology Laboratory} \abbrev{DSN}{Defence Support Network} \abbrev{DT}{DE\&S Delivery Team} \abbrev{E\&AM}{Engineering and Asset Management} \abbrev{EEL}{Electronic Equipment Logbook} \abbrev{FOCJ}{Functional, Overlapping and Competing Jurisdictions} \abbrev{GDPR}{General Data Protection Regulation} \abbrev{HMRC}{Her Majesty's Revenue and Customs} \abbrev{IS}{Information System} \abbrev{ITAR}{International Traffic in Arms Regulations} \abbrev{JAMES}{Joint Asset Management and Engineering Solutions} \abbrev{JSF}{Joint Strike Fighter} \abbrev{KSI}{Keyless Signature Infrastructure} \abbrev{Log IS}{Logistics Information System} \abbrev{MDL}{Mutual Distributed Ledger} \abbrev{MoD}{Ministry of Defence} \abbrev{NAO}{National Audit Office} \abbrev{NATO}{North Atlantic Treaty Organisation} \abbrev{NMCRL}{NATO Master Catalogue of References for Logistics} \abbrev{NSN}{NATO Stock Number} \abbrev{OEM}{Original Equipment Manufacturer} \abbrev{PAC}{Public Accounts Committee} \abbrev{PoA}{Proof-of-Authority} \abbrev{PoS}{Proof-of-Stake} \abbrev{PoW}{Proof-of-Work} \abbrev{SC}{Smart Contract} \abbrev{SCIS}{Support Chain Information Services} \abbrev{SCISRA}{Support Chain Information Services Architectural Repository} \abbrev{UKNCB}{United Kingdom National Codification Bureau} \abbrev{UoA}{Unit of Analysis} \end{listofabbreviations} \mainmatter \chapter{Introduction} \section{Overview} \label{sec:Overview} Bitcoin, the best-known example of Distributed Ledger Technology (DLT), has been heralded as bigger than the internet by Silicon Valley entrepreneurs \parencite{Carlson2015}.
The UK Government's Chief Scientific Advisor believes the technology behind Bitcoin could underpin ``potential explosions of creative potential that catalyse exceptional levels of innovation'' \parencite{Walport2016}. Not surprisingly, a host of enterprises are now attempting to exploit this much-trumpeted innovation, hoping that, as SAP declare, DLT is ``ideally suited for... complex industry processes involving many untrusted parties taking part in reading and writing a multitude of transactions, decisions, and documents'' \parencite{Galer2017}. At its most basic, a Distributed Ledger (DL) is a database distributed across more than one organisational entity that makes use of cryptography \parencite{Kello2017}; typically it is used to store transactions of anything that might be considered an asset (e.g. cash, physical items, personal data, etc). A DL's unique value lies in sharing or validating the database across many different people or organisations, who do not necessarily trust each other. This is achieved through a cryptographic consensus between all organisations, so that the data held is verified and represents one immutable version of the truth; once recorded, no party can amend the data. If this represents DLT's uniqueness, its promise is in reducing the friction between organisations. When organisations, each holding its own siloed version of events, must share or agree data across organisational boundaries, a transactional cost is entailed. Such cost is reflected by the existence of roles and processes whose function is to verify transactions, e.g.\ solicitors, notaries, audits and performance measurement. Given that once data is in a DL it is accepted as an accurate reflection of events, DLT eliminates the agreeing stage and the transactional cost, thereby increasing the efficiency of inter-party transactions \parencite{DAVIDSON2018}. This dissertation analyses how DLT might be applied to Defence, specifically to examine how it might benefit the Defence Support Network (DSN). \section{Definitive articles} Given that DLT is an emergent technology, there is currently little agreement on the standard terms to be used, although international standards are being developed \parencite{isoblockchain2017}. In view of this, DLT should not be referred to monolithically, as one might refer to the concept of relational databases. For the purpose of this dissertation the definition used by the UK's Chief Scientific Advisor \parencite{Walport2016} will be adopted, as at Figure \ref{fig:WalportTypesDLT}. \begin{figure} \includegraphics[width=\textwidth]{WalportTypesDLT} \caption[Variants of DLT]{Variants of DLT \parencite{Walport2016}} \label{fig:WalportTypesDLT} \centering \end{figure} This shows DLT is a wide term covering many applications, used to contrast this technology with its predecessors - namely organisationally centred ledgers, whether paper ledgers, relational databases or NoSQL. Walport \parencite*{Walport2016} further defines a DL as ``a type of database that is spread across multiple sites, countries or institutions, and is typically public'' - although, as Figure \ref{fig:WalportTypesDLT} shows, private versions are equally valid. This definition is not universally accepted - Swanson \parencite*{Swanson2015} asserts a DL has to involve a legal entity, while Mainelli \parencite*{Mainelli2017b} prefers the term Mutual Distributed Ledger, usefully emphasising that the data is ``held in common or owned by no one.'' In the wider media the term `blockchain' is better known than DLT \parencite{Deshpande2017}.
Blockchain, the technology underlying the cryptocurrency Bitcoin, is a list of transactions recorded in a block that is linked or `chained' through cryptography to previous blocks of transactions. As Section \ref{sec:GenesisBlock} examines, Bitcoin is the first example of a blockchain and DL. Bitcoin is not, though, the only blockchain; many others have utilised this format - for instance, IBM markets a ``blockchain for business'' \parencite{Hartman2017}. However, not all DLs are blockchains. For instance, IOTA, designed for the internet of things, uses ``the tangle'', where instead of transactions being recorded in blocks they are recorded in a directed acyclic graph \parencite{Popov2016}. Due to the prevalence of the term, even DLTs that do not rely on blockchains use the term liberally. An example is Guardtime's Keyless Signature Infrastructure, which is referred to as blockchain technology even though it does not make use of blocks of transactions, but rather a Merkle tree culminating in an internet-published hash calendar \parencite{Buldas2013}. Such is the market hype that even vendors selling an implementation of Git version-control (a common tool) claim to have a DLT product \parencite[p.~112]{Gerard2017}. This semantic flexibility is partly because Bitcoin's blockchain, although unique in itself, is based on the synergy of already existing components \parencite{Narayanan2017}, further discussed in Chapter \ref{ch:LitReview}. The UK's Chief Scientific Advisor at Figure \ref{fig:WalportTypesDLT} makes a distinction between public and private DLTs, the former being subdivided further into permissionless versus permissioned. At the far left of the diagram is the blockchain (i.e. Bitcoin's ledger) as an example of both public and permissionless DLT - anyone can view the transactions contained therein (public) and anyone can act as a node on the network (permissionless) \parencite{Huberman2017}. Ripple is an example of a public, permissioned DL \parencite{Schwartz2014} - that is, transactions are available for all to see but nodes (or `gateways' in Ripple terminology) are run by trusted parties (i.e. financial institutions). Corda, a DLT intended for use by banks, sits on the right of Figure \ref{fig:WalportTypesDLT}'s DLT spectrum, being private and permissioned - this network is used only by authorised organisations, with transaction details only accessible to the parties involved and regulators \parencite{Brown2016a}. Indeed Corda is attempting to implement hardware-level privacy via chip-manufacturer Intel \parencite{Hearn2017}. Lundb\ae k and Huth \parencite*{Lundbaek2017}, however, argue that these terms, although used extensively in strategy papers, have little technical implementation consensus - and furthermore cannot be verified due to proprietary source code and documentation. \section{Code is law} \label{sec:codeislaw} Smart contracts (SC), a concept that predates Bitcoin \parencite{Szabo1997}, are frequently associated with DLT. Contracts constitute a written agreement between two parties where services or items are transacted based on conditions being met. SCs are the evolution of this agreement into a logic-based form implemented by software. Szabo \parencite*{Szabo1997} provides the example of a vending machine - a user provides cash and the machine returns an item with no further human interaction. An SC could be layered over a DL to conduct activity when certain conditions are met.
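To make this conditional logic concrete, consider the following sketch. It is written in Python for readability rather than in an actual contract language, and every name in it is hypothetical; real SC platforms differ considerably in detail:
\begin{verbatim}
# Hypothetical sketch of a smart contract as a condition-action rule.

def transfer(payer, payee, amount):
    print(f"{payer} pays {payee} {amount}")  # stand-in for a ledger payment

class EscrowContract:
    """Releases payment to the seller once delivery is recorded."""

    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.settled = False

    def on_ledger_event(self, event):
        # The contract watches the distributed ledger; once the agreed
        # condition appears as an immutable record, payment executes
        # automatically, with no further human interaction.
        if event == ("delivered", self.seller, self.buyer) and not self.settled:
            transfer(self.buyer, self.seller, self.price)
            self.settled = True

contract = EscrowContract("Alice", "Bob", 100)
contract.on_ledger_event(("delivered", "Bob", "Alice"))
\end{verbatim}
In essence this is Szabo's vending machine: the condition and the action are fixed in code, and execution requires no trusted intermediary.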
If a DL was used for land registry (as Sweden and India are evaluating \parencite{Bal2017}), then an SC could be instigated so that when a change of ownership was recorded, the contract instantly executes a monetary transaction from the buyer. If cryptocurrency was used as the medium of exchange, a phenomenon already observed \parencite{Prynn2017}, this becomes even more plausible. SCs are also relevant for more complex transactions, e.g. a farmer seeking insurance cover for temperature fluctuations outside an agreed range over an agreed time period \parencite{Mainelli2017}. By automating transactions, SCs promote efficiency and could lead to structural changes in sectors that manage transactions. Building on SCs are Decentralised Autonomous Organisations (DAO) \parencite{Johnston2013}. DAOs are established by entrepreneur-programmers founding businesses composed entirely of software; these entities perform transactions dependent on algorithmic business plans. They would receive payments from clients, execute trades based on a DL and then reimburse owners with profits, all without human agency. Ethereum is a DL created to act as a platform for DAOs \parencite{Morris2015}. A prototype DAO is Bitbarista \parencite{Pschetz2017}, a coffee machine paid in cryptocurrency (although not unique in this aspect \parencite{Beck2016}). The Bitbarista uses the freedoms offered by cryptocurrencies to automate business processes such as ordering refills online and rewarding people for maintenance tasks (i.e. refilling coffee beans). Using DLT removes barriers associated with traditional financial structures, such as accounts being accessible to legal entities only, as opposed to machines. Although unlikely to threaten Starbucks, it is an illustration of how removing frictions might shift how enterprises operate. \section{Defence Support Network} \label{sec:DefenceSupportNetwork} This dissertation examines how DLT might be applied in the environment of the DSN. UK military doctrine \parencite[p.~9]{MoDJDP2015} defines the DSN as: \begin{quotation}``a flexible set of supply chains connecting points of production and use, ensuring the most appropriate and efficient use of resources across the Whole Force, maximising information and technology to assure logistic support to operational commanders. The DSN consists of a series of linked nodes through which support is delivered in an agile manner, giving end-to-end visibility and control.''\end{quotation} Although this statement opens by referring to tangibles - i.e. supply chains, production and use - the latter sentence talks more generally of `support'. Support, in a Defence context, is defined by the same doctrine as encompassing not only physical items such as logistics and equipment, but also more abstract areas such as legal, medical and infrastructure support. This dissertation focuses on the concrete embodiment of this support, such as equipment and items of supply. \begin{figure} \includegraphics[width=\textwidth]{DSN4} \caption[The Defence Support Network]{The Defence Support Network \parencite[p.~10]{MoDJDP2015}} \label{fig:DSNRichPicture} \centering \end{figure} The definition also refers to the `Whole Force,' an acknowledgement of Defence's dependence on a wider pool than uniformed personnel - including civil servants, other government departments, contractors and external parties. This is illustrated by the rich picture at Figure \ref{fig:DSNRichPicture}, which visualises the many organisations comprising the DSN.
The recent Afghanistan and Iraq conflicts brought into sharp focus the MoD's increased reliance on contractor support \parencite[p.~34]{MinistryofDefence2015}. This ties strongly into the DLT concept - if DLs are concerned with sharing information across organisational boundaries, then the more boundaries, the more impact DLT stands to make. Information and technology are given a central role in the DSN according to the above definition. The MoD has been extensively criticised \parencite{NationalAuditOffice2011} for weaknesses in exploiting logistics information, with multiple failures in understanding the complex interplay of assets and supply chains. This makes a strong case for investigating how DLT, a new paradigm for managing information, might be beneficially applied to the DSN. \section{Research question, aims and objectives} \label{sec:researchQAO} \subsection{Research question} Given the background that has led to this research, the question that this dissertation sets out to answer is: \begin{quote} How could DLT and SC be beneficially applied to the DSN? \end{quote} \subsection{Research aim} A logical conclusion of this Research Question is the following Research Aim: \begin{quote} The aim of this research is to understand how DLT and SC might be applied to the DSN and the potential benefits. \end{quote} \subsection{Research objectives} The Research Aim breaks down into the following objectives: \begin{enumerate} \item To define DLT (Section \ref{sec:Overview}) and smart contracts (Section \ref{sec:codeislaw}). \item To define the DSN (Section \ref{sec:DefenceSupportNetwork}) and how DLT might address the challenges it faces (Section \ref{sec:challengesaddressed}). \item To create a framework for evaluating the utility of DLT against use cases (Section \ref{sec:DSNevaluationframework}), drawing from academic or business models (Section \ref{sec:extantframeworks}). \item To assess generic DSN use cases against a lightweight version of the evaluation framework identified in Objective 3 (Section \ref{sec:Resultsofquestionnaire}), by gathering quantitative and qualitative evidence from subject matter experts (Section \ref{sec:genericusecaseexploration}). \item To explore further use cases of how DLT might apply to the DSN beyond the generic use cases of Objective 4 (Section \ref{sec:Widerusecaseexploration}). \end{enumerate} \chapter{Literature review} \label{ch:LitReview} A literature review of the concept of DLT is presented first, followed by a thematically structured analysis of its potential wider impact. \section{Genesis block} \label{sec:GenesisBlock} No discussion of DLT can be complete without referring to its genesis: the Bitcoin white paper by the pseudonymous Satoshi Nakamoto \parencite*{Nakamoto2008}. This paper was distributed outside of either academic or commercial circles, having its roots in the cryptoanarchist community \parencite[p.~36]{Frisby2014}. Bitcoin introduced four key concepts, as proposed by Antonopoulos \parencite*[p.~40]{Antonopoulos2014}: \begin{enumerate} \item The Bitcoin protocol itself - a decentralised peer-to-peer network. \item The blockchain - a public transaction ledger. \item Rules for establishing consensus for validating independent transactions and issuing currency. \item A proof-of-work algorithm - the mechanism by which global consensus is reached on which ledger is valid.
\end{enumerate} This combination for the first time allowed the creation of electronic cash without relying on financial institutions to serve as trusted third parties. Satoshi argues that removing these institutions is beneficial as it will reduce transaction costs, which arise from forming and enforcing agreements \parencite[p.~605]{Cooper2011}. When trusted third parties are involved, they have the power to reverse payments or alter balances, which leads to an increase in the requirement for trust. As greater trust is needed, greater amounts of information must be accumulated by participants in the network, thus increasing costs. Szabo states the point more forcefully: ``trusted third parties are security holes'' \parencite{Szabo2001}, arguing that the most expensive and vulnerable part of any security system that relies on trusted third parties will be that third party itself. North \parencite*{North1987} shows transaction costs prevent economic development, i.e. the cost used to fulfil this `trust' function (e.g. lawyers, auditors) could be utilised more productively in creating goods or services. Bitcoin was not the first attempt to introduce electronic cash (excluding fiat money accounted for electronically); previous failed examples include e-gold and beenz \parencite{Eiland2017}. These endeavours were unsuccessful because they had not solved three fundamental questions \parencite[p.~40]{Antonopoulos2014}: \begin{enumerate} \item Is the money authentic, i.e. not counterfeit? \item Is the money unspent - also known as double-spend, where somebody spends the same money twice? \item Is this money claimed by me (as opposed to someone else)? \end{enumerate} An explanation of the technology is necessary to understand how Bitcoin solves these problems. Imagine a situation where Alice and Bob wish to exchange bitcoin. Each will have a wallet - software that is able to conduct Bitcoin transactions. Each wallet will contain a number of addresses, an address simply being a container to hold any amount of bitcoin. Each address will have an associated private and public key. Assuming Alice wished to pay Bob one bitcoin, she would transfer this by using her private key to digitally sign a hash of the previous transaction and Bob's public key, as at Figure \ref{fig:BitcoinTransaction}. A hash is an algorithm that takes an arbitrary amount of digital data - which could be anything from a Microsoft Word document, JPEG photo or text file - and returns a fixed-length value (known as a digest). As an example, if the text of the abstract of this dissertation is run through the SHA1 function, the resulting value is: \begin{quote} f4f3c70d905f007f6c069def9b66082aaab22bee \end{quote} Every time the abstract of this dissertation is run through the SHA1 algorithm it will produce the same value. However, it is computationally infeasible to retrieve the original text (the abstract) if one is provided only with the hash value. Should any change be made in the original text, the resulting hash will be different. For instance, if the abstract is run through the algorithm again, but with the final full-stop omitted, the result will be: \begin{quote} d21a01538b8c87394d44567269f6ac5aa22335e0 \end{quote} Despite the input text differing by only one character, the resulting hash is completely different. Hashing therefore allows a piece of data to be reduced to a shorter fixed-length value.
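This avalanche behaviour is easy to reproduce; a minimal Python sketch (using an arbitrary placeholder string rather than the dissertation abstract itself) is:
\begin{verbatim}
import hashlib

text = "An example sentence standing in for the abstract."
print(hashlib.sha1(text.encode("utf-8")).hexdigest())

# Omitting the final full-stop changes every character of the digest.
print(hashlib.sha1(text[:-1].encode("utf-8")).hexdigest())
\end{verbatim}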
This can be used to quickly determine whether the original data has been altered - by hashing the suspect data and checking it matches the hash value recorded earlier. \begin{figure} \includegraphics[width=\textwidth]{BitcoinTransaction} \caption[Bitcoin transactions]{Bitcoin transactions \parencite{Nakamoto2008}} \label{fig:BitcoinTransaction} \centering \end{figure} To return to Alice and Bob's bitcoin transfer - this transaction is now broadcast to the network. At the same time other transactions are also being broadcast to the network; these are bundled up together into blocks. Other users, known as miners, will now create cryptographic hashes of the data contained within those blocks. The first miner to create a hash that conforms to a certain format (specifically, that it begins with a prescribed number of zeros) is rewarded with newly minted bitcoins. The creation of this winning hash, however, is computationally difficult - it is accomplished by repeatedly varying a value (known as a nonce) added to the block of transactions until a hash matching the required format is eventually found. In this way a new block is created on the ledger. This new block will contain the winning hash of the last block, the next block will contain the winning hash of this one, and so forth. Because the hash of the last block is in the current block, changing any transactions in the last block will cause a hash mismatch and alert all to the attack. \label{sec:Genesis} In this way the transaction of one bitcoin between Alice and Bob cannot be altered in any way. Alice cannot change her mind and take it back, so defeating the double-spend problem where Alice tries to give the same one bitcoin to both Bob and Charlie - as the first transaction of this bitcoin (to Bob) is recorded immutably in the ledger and considered the `valid' transaction. It also means a malicious actor, such as Mallory, cannot change the transaction to divert Alice's bitcoin to her. As more blocks are created, each containing the hash of the last, the previous transactions are less vulnerable to attack, due to the increasing number of blocks that would need to be changed. To attack the network successfully, Mallory would have to control 51\% of the network's computing power, allowing her version of the chain to be accepted. However, to do this the amount of computing power (and associated costs such as electricity and hardware) required would be so great that it would make more financial sense to use those resources to support the network and reap the benefits of mining. Indeed any successful attack would likely cause the value of Bitcoin to drop, again removing the motivation of financial gain. Although Bitcoin is a considerable technical achievement, this alone does not explain the excitement around it. Reasons behind this are varied; many early libertarian advocates welcomed it as a transfer of government financial power to individuals, with the prospect of states no longer controlling the money supply \parencite[p.~152]{Frisby2014}. Others foresee its role in transforming the economy and questioning the assumptions of the industrial age - Antonopoulos, for instance, asks whether in a truly digital economy salaries should be streamed by the minute rather than arriving in monthly instalments \parencite{Dale2017}. Alternatively, cynics suggest that interest is primarily fuelled by speculation \parencite{Baur2017}, comparing it to the seventeenth-century tulip craze, which bankrupted many investors \parencite{Jones2017}.
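Before leaving the mechanics of mining, the nonce search described above can be caricatured in a few lines of Python. This is a deliberately simplified sketch - real Bitcoin mining double-hashes a binary block header with SHA-256 against an adjustable difficulty target - but it captures the brute-force nature of proof-of-work:
\begin{verbatim}
import hashlib

def mine(block_data, difficulty=4):
    """Find a nonce such that SHA-256(block_data + nonce) starts with
    `difficulty` zero hex digits - a toy version of Bitcoin's PoW."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Alice pays Bob 1 BTC; prev_hash=...")
print("winning nonce:", nonce, "hash:", digest)
\end{verbatim}
Finding the nonce is expensive; verifying it, by contrast, requires a single hash, which is what allows the whole network to check a miner's work cheaply.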
\section{The root of all evil} The applications of DLT go beyond cryptocurrency. Ledgers have been a fundamental feature of trade since ancient times \parencite{Gray1996}, so it is unsurprising that this technological shift could have wider impact. Although it is clear why Bitcoin is a novel approach to digital cash, Wenger \parencite*{Wenger2014} provides a good explanation of why blockchain represents a discontinuity with previous technology as a means of organising information, as shown at Table \ref{Wenger2014Table}. \begin{table} \centering \begin{tabular}{ m{6em} m{7em} m{7em} } \toprule & \begin{flushleft}Organisationally centralised\end{flushleft} & \begin{flushleft}Organisationally decentralised\end{flushleft} \\ \midrule \begin{flushleft}Logically centralised\end{flushleft} & e.g. Paypal & \textbf{***new***} blockchain \\ \begin{flushleft}Logically decentralised\end{flushleft} & e.g. Excel & e.g. email \\ \bottomrule \end{tabular} \caption[Foundational innovation of the blockchain]{Foundational innovation of the blockchain \parencite{Wenger2014}} \label{Wenger2014Table} \end{table} Wenger posits that DLT represents a new category of information management - logically centralised, but organisationally decentralised. Using the Wenger classifications, there are many examples of organisationally centralised information technologies in the world - Paypal and Excel are two examples, both owned by organisations who choose which version to release and at what retail price. However, they differ in their `logical' centralisation - it is possible for Alice to send Bob an Excel spreadsheet and for Bob to edit that spreadsheet independently of Alice. Excel is therefore logically decentralised. Paypal, however, is logically centralised - if Alice sends Bob \textsterling1 via Paypal, the accounts of Alice, Bob and the system as a whole have to correlate. Email, by contrast, is both organisationally and logically decentralised: no single organisation owns email, and Alice sending Bob an email is unrelated to other people's emails. Blockchain is a new category in that there is no central organisation - no permission is needed to write a new software wallet or run a network mining node - yet there is a logical centralisation: when Alice sends Bob bitcoin, the entire system is aware of that. Ludwin \parencite*{Ludwin2016} takes this point further by arguing that it is unhelpful to think of Bitcoin as a currency; rather, it should be seen as a ``new asset class that enables decentralised applications.'' The utility of a decentralised application (DApp) is that it is organisationally decentralised - no one person owns it. Ludwin, however, proposes that DApps, although useful, have disadvantages due to inefficiency. Not all DLT has to use coins or tokens. Bitcoin uses a Proof-of-Work (PoW) consensus mechanism, meaning miners are rewarded with the ability to add blocks to the chain (so earning bitcoin) dependent on the amount of compute they have undertaken \parencite{Bonneau2015}. However, there are other consensus mechanisms, such as Proof-of-Stake (PoS), where the more tokens you have (i.e. stake), the greater your probability of creating the next block \parencite{Bentov2016}. PoS is less energy intensive and more environmentally sustainable than PoW \parencite{Dwyer2014}, although PoS has been criticised for unfairly rewarding those who have already amassed the most \parencite{Mamoria2017}.
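The contrast between these two mechanisms can be caricatured in a line or two of Python; in this purely illustrative sketch (all stake values invented), the chance of being selected to forge the next block is proportional to the tokens held:
\begin{verbatim}
import random

stakes = {"Alice": 50, "Bob": 30, "Charlie": 20}  # hypothetical holdings

# PoS: selection probability proportional to stake
# (Alice 50%, Bob 30%, Charlie 20%).
validator = random.choices(list(stakes), weights=stakes.values(), k=1)[0]
print("next block forged by:", validator)
\end{verbatim}
Under PoW, by contrast, that probability is proportional to hashing power expended, as in the mining sketch earlier.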
Meanwhile Proof-of-Authority (PoA) is a consensus mechanism where those involved in establishing the network have decided which nodes are deemed reliable \parencite{Cachin2016}, and is synonymous with permissioned blockchains. PoA has been criticised as removing one of the central aspects of blockchain - achieving trust without a central authority - and as such has been dismissed as ``probably not [a] real blockchain'' \parencite[p.~28]{Bashir2017}. There are yet other consensus mechanisms, e.g. Proof-of-Burn and Proof-of-Capacity \parencite{Tasca2017}, which are outside this dissertation's scope. \section{Two sides to the coin} \label{sec:twosidestothecoin} DL is a new category of technology that could lead to a swathe of different business models \parencite{Tapscott2017a}. Mainelli and Gupta \parencite*{Mainelli2017b} point to an earlier technological shift that occurred with the rise of digital mapping, combined with GPS, which allowed the real world to be visualised on computers, leading to challenger upstarts such as Uber. DLT could allow a similar shift with the digitalisation of business fundamentals; if the transactions and contracts that are the lifeblood of business - previously locked within company silos but now shared between enterprises - are able to be manipulated digitally, new worlds of possibility could emerge \parencite{Iansiti2017}. However as has already been covered there is considerable variety in what can be thought of as DLT, and correspondingly great variety in what DLT might be used for. Ultimately however all the technologies in this area are concerned with one or other business problem - sharing or proving - and in some circumstances both. This is illustrated at Figure \ref{fig:ShareProveVenn}. \begin{figure} \includegraphics[width=\textwidth]{ShareProveVenn} \caption[Sharing vs Proving DLT]{Sharing vs Proving DLT (Author's own work)} \label{fig:ShareProveVenn} \centering \end{figure} Bitcoin sits in the middle of this Venn diagram, being a DL that both proves and shares data. When Alice sends Bob one bitcoin the technology is used both to \textbf{prove} that Alice has one bitcoin to send and to \textbf{share} with all other participants on the network the data that Alice has transferred one bitcoin to Bob. However it is not necessary to have both aspects present within DLT. The \textbf{share} use case is illustrated by an example provided by Hyperledger Fabric \parencite{IBM2016a}, where a blockchain is established for a consortium of companies involved in car leasing. The participants - such as the vehicle manufacturer, dealerships and scrap merchant - all use the ledger to access information such as the Vehicle Identification Number and maintenance logs. All participants are now sharing one view of the vehicle history - IBM gives the example that if a recall had to be organised this would be far more efficient and effective if all participants were using the same blockchain. However although we can see data \textbf{share} occurring, there is no \textbf{prove}. For instance if the manufacturer were to erroneously ascribe the wrong Vehicle Identification Number to the car record, there would be no way of verifying that fact by looking at the information contained on-chain. Rather, physical verification would need to take place that off-chain reality (i.e. the number written on the car) matched the data on-chain. The situation is further complicated if there are malicious actors within the consortium.
For instance if the scrap merchant is involved in an illegal `cut and shut' scheme, where two halves of old cars are welded together to form a `new' vehicle \parencite{BBC2000}, this blockchain would not guarantee that cars marked as destroyed had been so. This can be considered the digital-physical gap: the difficulty of achieving a link between the immutable digital object on-chain and its twin mutable physical object in the real world. The right side of the Venn diagram is entirely \textbf{prove} and no \textbf{share}. Guardtime's Keyless Signature Infrastructure (KSI) Blockchain creates chain-of-custody information for digital assets - any time a protected file (e.g. MS Word document, JPEG image) is modified, created, deleted or transmitted there is forensic evidence of that activity, admissible in a court of law \parencite{Johnston2014}. This technology works by taking a hash of any protected asset, combining those hashes with other hashes using a Merkle tree, and then publishing that data in a hash calendar (whereby the current hash is combined with the previous hash), as demonstrated at Figure \ref{fig:KSIFederatedHashTree}. Although this is far removed from Bitcoin, it is described as blockchain technology by its creators \parencite{Guardtime2016} and Walport \parencite*{Walport2016} counts it as DLT. The Estonian government, who employ this proprietary technology, elaborate by stating that KSI Blockchain was being tested in Estonia in 2008, prior to the Bitcoin white-paper, at which point the technique was known as ``hash-linked time stamping'' \parencite{eestoniacom2017}. In this use case however the utility comes entirely from the \textbf{prove} aspect; there is no \textbf{share} as with the car-leasing demonstration. Rather, here a distributed ledger acts purely to provide provenance, not to distribute data. A similar initiative is Archangel, which seeks to verify the contents of the National Archive to prove items have not changed over time \parencite{Thereaux2017}. This share-versus-prove dichotomy illustrates how radically different approaches to solving different business problems are still covered under the umbrella of DLT. In some ways DLT can be compared to a clawhammer: both Guardtime KSI Blockchain and IBM Hyperledger Fabric share a common stem (the clawhammer shaft) but are used for purposes as radically different as driving nails versus removing them. \begin{figure} \includegraphics[width=\textwidth]{KSIFederatedHashTree} \caption[KSI Federated Hash Tree]{KSI Federated Hash Tree \parencite{Zatyko2015}} \label{fig:KSIFederatedHashTree} \centering \end{figure} Praise for use cases which move away from Bitcoin's original conception of both share and prove is not universal. Antonopoulos \parencite*{Antonopoulos2017b} for instance mocks how the conversation has turned increasingly anodyne as development has moved from bitcoin to blockchain to DLT, a move he believes is led by vested corporate interests attempting to head off disruption. Here he draws a parallel between blockchain's maturity and the internet circa 1997: although there were attempts to use the internet for ambitious plans (e.g. grocery deliveries), these failed until a sufficiently dense level of adoption had been established through the relatively simple application of email.
Similarly he argues DLT will not be used for ambitious plans, such as real estate title, until cryptocurrencies are used for everyday transactions; at which point a tipping point will have been reached which challenges the establishment (e.g. banks are out-competed in their core business of banking). Similarly Song \parencite*{Song2018} asserts that blockchain without bitcoin is equivalent to selling ``snake oil.'' These viewpoints contrast strongly with that of Walport \parencite*{Walport2016}, which is about working within current systems; this conflict between revolutionary and evolutionary paths is a recurring theme and will be examined more closely in the literature search. \section{Search strategy} \label{sec:searchstrategy} DLT can therefore be seen as a unique concept that has emerged from counter-cultural roots and has generated considerable debate on how it can be best utilised. In understanding how it might serve the needs of the DSN it would be helpful to survey the academic literature, as well as this being a required Individual Learning Outcome of an ICM Research Project. A structured search using metadata only was conducted on IEEE Xplore's Digital Library for the search terms shown in Table \ref{table:SearchTerms}. Because the term `blockchain' (or its derivatives) is used in other academic disciplines, the search was limited to a computer-science-relevant database. No time period was specified, and no restriction on material type (e.g. journals, books) was set. The search was limited to English material only due to the resources available. The Alan Turing Institute notes the prodigious rate of output on the subject of blockchains (averaging 250 papers per year since 2014), but that most of these have appeared in the form of white papers outside traditional academic peer-reviewed literature \parencite{Bano2017}. Thus grey literature has been included separately in the results of the literature search. Interest in DLT is not limited to the English-speaking world. For instance China is involved in cryptocurrency mining and speculation - in 2015 88\% of total Bitcoin trades took place there \parencite{Pel2015} - while political support for DLT research comes from President Xi Jinping himself \parencite{Cheng2018}. Japan and South Korea share similar enthusiasm for the technology \parencite{Price2017}. Searching the China Academic Journals Database using the sinographs for bitcoin, \begin{CJK*}{UTF8}{gbsn}比特币\end{CJK*}, yielded 669 articles, a higher count than that of IEEE Xplore. This finding is borne out by the China Intellectual Property Office filing nearly 140 DLT-related patents in the last three years \parencite{Zhao2018}. Thus future literature reviews should endeavour to cover non-English-language publications. \begin{table} \centering \begin{tabular}{ l l } \toprule Search Term & Frequency \\ \midrule blockchain & 236 \\ bitcoin & 213 \\ distributed ledger & 22 \\ \bottomrule \end{tabular} \caption{Search term frequency using IEEE Xplore dated 14 Nov 17} \label{table:SearchTerms} \end{table} Table \ref{table:SearchTerms} shows 55\% of the search results included the term `blockchain' but not `bitcoin' (with the remaining 45\% having both terms `blockchain' and `bitcoin'). This is in contrast to an earlier literature review \parencite{Yli-huumo2016} which found that 80.5\% of academic output focused on Bitcoin rather than wider applications. This difference may be due to the stricter search criteria of that review (which selected 41 papers).
Alternatively, the greater number of blockchain-without-bitcoin results may reflect the increasingly popular view that DLT, not Bitcoin, is where the potential lies \parencite{Knight2017}. It is worth noting the UK government's preferred term, `distributed ledger technology' \parencite{Walport2016}, is less frequently used in the literature. This term having so little traction shows stakeholders have not coalesced around agreed definitions, although there have been attempts in the literature to define a DLT ontology \parencite{Tasca2017}. The British Standards Institution \parencite{Deshpande2017} has highlighted that the lack of industry standards for this emergent technology makes comparison or categorisation challenging. They also believe that standards would drive other benefits such as addressing the concerns of security, privacy and data governance. Insurance industry research \parencite{Mainelli2016b} tallies with this, proposing that the introduction of voluntary standards markets would allow organisations to manage risk and reduce regulatory uncertainty via establishing compliance and verification regimes. Figure \ref{fig:MainelliStandards} illustrates what standards might apply. \begin{figure} \includegraphics[width=\textwidth]{MainelliStandards} \caption[Representation of the standards environment for DLT]{Representation of the standards environment for DLT \parencite{Mainelli2016b}} \label{fig:MainelliStandards} \centering \end{figure} Although how these standards might apply to DL goes beyond this dissertation's scope, one area from Figure \ref{fig:MainelliStandards} that is worth exploring is standards for `interoperability.' There are real-world examples of attempts to integrate DLs - e.g. IBM's cross-border trade payments project \parencite{DelCastillo2017} uses both private (Hyperledger) and public (Stellar) blockchains in conjunction, the former for transaction clearing, the latter for settlement payment. Not that this represents a standard, simply that agreement on interoperability can be reached. Common interoperability standards underpin today's networked world: Hypertext Transfer Protocol and Simple Mail Transfer Protocol standards allow users to communicate across networks and applications, but no similar standard exists for DLT \parencite{Strajnar2015}. Of course not all technologies have common standards. For instance DLT's fundamentals are often compared to a database, yet the recent big-data NoSQL databases are a competing set of technologies without a common standard \parencite[p.~9]{Sadalage2012}. This though could be a false comparison - DLT's power is sharing data across organisational boundaries. Scenarios can be envisaged where an organisational grouping using a DL, e.g. a fishing co-operative verifying their environmentally sustainable processes \parencite{Provenance2016}, have a requirement to pass information to another organisational grouping using a different DL, e.g. grocery retailers selling the seafood and using a DL for food-safety \parencite{Kharif2016}. Hardjono, Lipton and Pentland \parencite*{Hardjono2018} illustrate how, in the same way the internet is neutral as to which network your packet travels on, applications might be agnostic as to which DL your transaction is recorded on. Common standards would allow interoperability between these types of DL to become routine, rather than being an expensive ad-hoc process.
Indeed, Mougayar \parencite*{Mougayar2017} argues that before DLT can become as ubiquitous as the web, common standards are required to enable interoperability. The opposing case is that standards might damage this emerging technology by freeze-framing it prematurely, which Mougayar \parencite*{Mougayar2016} himself previously posited. Either way momentum is building for standards, with the International Organization for Standardization drafting proposals \parencite*{isoblockchain2017}. This lack of standards represents a challenge for literature reviews, as Webster and Watson \parencite*{Webster2002} argue the first step of academic enquiry is classification. As yet there have been few comprehensive reviews, and much of what exists focuses on the technical \parencite{Tasca2017} rather than on enterprise applications. Additionally defence, by its very nature, is guarded about revealing how it might employ new technologies. Although there are exceptions to this - for instance US Air Force cyber-security orientated studies \parencite{Barnas2016}, Washington think-tank papers \parencite{Hsieh2017} and Canadian defence research \parencite{Willink2018} - the sub-set of the literature specifically considering how defence might use DLT is small. \section{Thematic analysis} This lack of precedent, as well as being a challenge, can be seen as an opportunity to make a small contribution to the field. A thematic analysis was chosen because Saunders, Lewis and Thornhill \parencite*[p.~80]{Saunders2016} conclude that those papers that contribute most to an area of study follow this approach. Three themes in the literature were identified: that of supporting the `revolutionary', `evolutionary' or `reactionary' paradigm. \subsection{Revolutionary} As previously discussed DLT's roots lie within Bitcoin, emerging from the cryptoanarchist and cypherpunk movements. The former movement is a political philosophy recognising no laws apart from those enforced by code, the latter a technological vision of socio-political change limiting the power of authorities \parencite{Narayanan2013}. These aligned movements took early inspiration from Chaum \parencite*{Chaum1985}, who proposed a decentralised digital cash that empowers individuals, rather than governments or corporations. Later seminal texts `A declaration of the independence of cyberspace' and `A Cypherpunk's Manifesto' \parencite[pp.~27-30, 81-83]{Ludlow2001} saw an opportunity in technological progress to break with the industrial nations' established order. Early victories came in defeating US Government bans on the export of the encryption software PGP (Pretty Good Privacy) \parencite[p.~152]{Gellman2011}. Despite these successes, by the 2010s Narayanan \parencite*{Narayanan2013a} declared that ``cypherpunk crypto [has] failed to materialise,'' while Zittrain \parencite*{Zittrain2012} foresaw ``the end of crypto.'' They argued that the difficulties of implementing cryptography, plus the resistance of corporates and democratic governments, meant that the cypherpunk movement would not have the impact imagined. However Bitcoin's arrival marks a resurgence of the cypherpunk movement, both from the original proponents who always had digital cash central to their vision \parencite{Torpey2016}, and from a new wave of adherents \parencite{Bartlett2016}. Links between these communities are often explicit - for instance the Bitcoin subreddit directs new joiners to `A Cypherpunk's Manifesto' \parencite{Reddit.com2018}.
Following cypherpunk's near-death, why then did Bitcoin, followed by the DLT paradigm, emerge when it did? The secrecy of its creator, the pseudonymous Satoshi Nakamoto, thwarts a definitive answer, although there appear to be two drivers: technical and political. From a technical standpoint, as acknowledged by Nakamoto \parencite*{Nakamoto2008}, Bitcoin built upon much previous work \parencite{Bonneau2015}, as illustrated at Figure \ref{fig:BitcoinChronology}, such as the Hashcash project which used Proof-of-Work to counter email spam \parencite{Back2002}. \begin{figure} \includegraphics[width=\textwidth]{BitcoinChronology} \caption[Chronology of Bitcoin's technological precursors]{Chronology of Bitcoin's technological precursors \parencite{Narayanan2017}} \label{fig:BitcoinChronology} \centering \end{figure} Therefore Bitcoin partly emerged when it did because the technical foundations had caught up with cypherpunk's aim of digital gold. The second driver, political, is something Nakamoto explains less: ``it's very attractive to the libertarian viewpoint if we can explain [Bitcoin] properly. I'm better with code than with words though'' \parencite{Jansen2012}. Nakamoto was familiar with Austrian economics (libertarianism's fountainhead) \parencite{Davis2011}, which proposes that government interference rarely benefits economic systems. Bitcoin explicitly references 2008's Global Financial Crisis, inserting this phrase in the genesis block: ``The Times 03/Jan/2009 Chancellor on brink of second bailout for banks'' \parencite{Maurer2013}. Libertarians would object to governments creating money, as in The Times headline, to save financial institutions. Bitcoin, with its limit of 21 million coins and resistance to any central authority, is a strong reaction to these events by those who question the state's role in economic affairs. The MoD's Development, Concepts and Doctrine Centre even forecasts longer-term scenarios where cryptocurrencies might challenge the state's dominance \parencite[p.~78]{MoDDCDC2014}. The second reason for Bitcoin's emergence was therefore the political environment which Nakamoto, and allies, were reacting to. Given this it is unsurprising that much debate around DLT (especially in cryptocurrencies) has been couched in the language of opposition to the established order, bluntly put by Antonopoulos \parencite{Bundrick2015}: \begin{quotation}``You put an open, decentralized ecosystem: open source, open standards, open networking and the intelligence and innovation pushed all the way to the edge — put that against a closed system, controlled by a central provider, whose permission you need in order to innovate and who will only innovate at the exclusion and competition of all of the other companies — and we will crush them.''\end{quotation} Even the less colourful Bank of Finland uses the term ``revolutionary'' to describe this ``marvellous structure'' \parencite{Huberman2017}, thereby sharing common ground with firebrands such as Antonopoulos. The adjective `disruptive' is often applied to emergent technology; however, given the regularity of disruptors being bought out by the very firms they are trying to disrupt \parencite{Faktor2016}, this hardly captures DLT's transformative vision. Rather `revolutionary' - seeking to upend the established order - is used as one thematic grouping in this dissertation. \subsection{Evolutionary} Despite its roots in radical thought, DLT is being increasingly co-opted by established firms.
Figure \ref{fig:ieeeSpectrum2017} shows the heavy involvement of multi-national finance and technology firms within DLT, often with one firm backing several initiatives. Companies such as J.P. Morgan and Goldman Sachs are hardly edgy upstarts wanting to turn the system upside down. \begin{figure} \includegraphics[width=\textwidth]{Mjk1OTIxNw} \caption[Financial industry involvement with DLT]{Financial industry involvement with DLT \parencite{Nordrum2017}} \label{fig:ieeeSpectrum2017} \centering \end{figure} In a similar vein `evolutionary' also covers the phenomenon of IT departments promoting DLT simply to make ``boring back-office coordination work sexy'', so prompting funding \parencite{Levine2016}. Indeed the Centre for Evidence Based Management claims the most successful use of DLT is where it is a ``catalyst'' to implement benefits such as common data standards and automation, even though much of this work could have been done without DLT; or where DLT is the clarion call for change but the eventual final product shares more similarities with conventional systems \parencite{Parliament.HouseofCommons.2018}. One might argue an example of this is R3's Corda product, which began as a ``blockchain for finance'' but whose engineering choices moved it away from Bitcoin's architecture \parencite{GendalBrown2016}. Although critics such as Gerard \parencite*[p.~123]{Gerard2017} might reach the conclusion that this is a ``blockchain product'' that does not contain a blockchain, it is also possible to see this as the evolving nature of DLT which, as Section \ref{sec:searchstrategy} discussed, lacks a common definition. For these reasons `evolutionary' is the second thematic grouping. \subsection{Reactionary} Both the revolutionary and evolutionary paradigms agree that DLT has potential benefits. Others are less optimistic: one argument is that mass take-up of DLT is unlikely given the existing good alternatives, with comparisons being made between DLT and open-source Linux as a Windows replacement for consumers \parencite{Evans2017}. Conte de Leon et al \parencite*{ContedeLeon2017} go further, suggesting DLT is inferior to existing systems: the interactions of independent agents are by definition complex, so it is difficult to verify that a DLT system will work correctly. Even proponents acknowledge DLT is typically slow \parencite{Greenspan2016}. The Centre for Evidence Based Management concludes that much of the hype is due to the disconnect between senior management and IT departments, with DLT simply being the latest fad that offers ``magic wand pixie dust'' to solve enterprise problems \parencite{Parliament.HouseofCommons.2018}. Gerard \parencite*[p.~17]{Gerard2017} expresses the maximally negative view that the technology as a whole is as unstable as its originating anarchist sub-culture, with the primary drivers being greed and naivety. `Reactionary' therefore defines the final thematic grouping. \subsection{Units of Analysis} These three broad themes cover wide ground. Therefore to make the critical literature review more meaningful a further dimension was added - units of analysis (UoA) \parencite{Webster2002}. This was needed as DLs that are revolutionary in one sense, for instance a radical method of allocating domain names so challenging internet governance \parencite[pp.~30-33]{Swan2015}, might have minimal impact elsewhere (e.g. socio-cultural). An analysis framework could therefore usefully sub-categorise the impact DLT might have.
A number of frameworks were considered, including commercial ones, for instance Porter's Five Forces model, which analyses market competition \parencite{Porter2008}. Academic DLT literature reviews which used conceptual groupings were also studied, specifically: keyword analysis \parencite{Notheisen2017a, Holub2018}, social media context \parencite{Risius2017}, engineering layers \parencite{Hawlitschek2018} and technical challenges \parencite{Yli-huumo2016}. The UoA eventually selected is recommended by the British Computer Society for business analysis \parencite{Cadle2010}, and is established in information systems research \parencite{Peng2007, Bakri2012} - PESTLE (Political, Economic, Social, Technological, Legal, Ecological) analysis. One criticism of a PESTLE approach is that it lacks a Defence research focus; however, understanding impacts wider than Defence enables discovery of cross-cutting or unanticipated applications. It is therefore using PESTLE, coupled with the revolutionary, evolutionary and reactionary themes, that the literature will be reviewed. \section{Matrix} Items relevant to the research question of how DLT might apply to the DSN were selected from the IEEE Xplore database (Section \ref{sec:searchstrategy}). The citations of these papers were then reviewed and, using the Web of Science, forward citations were also traced. Additionally, where it was felt the IEEE Xplore database had not provided enough coverage in any UoA, databases from Cranfield University, Scopus and Google Scholar were also utilised, using the same search terms. Papers selected on this basis were then mapped against the thematic analysis and UoA, so creating the matrix at Table \ref{table:LitReviewThematicAnalysis}. The papers are listed in chronological order of publication; where two papers were published in the same year, date of submission is used as a secondary sort. Where non-academic papers (e.g. government reports) contributed to the topic, they were included at the end of the matrix under the heading `grey literature.'
\begin{landscape} \centering \begin{longtable}{ l l l l l l l l l l l l l c } \toprule Theme & \multicolumn{6}{l}{Revolutionary} & \multicolumn{6}{l}{Evolutionary} & Reactionary\\ Unit of Analysis &P &E &S &T &L &E &P &E &S &T &L &E &- \\ \midrule \endhead \cite{Maurer2013} & & &X & & & & & & & & & & \\ \cite{Dwyer2014} & & & & & & & & & & & &X &X \\ \cite{Jenssen2014} &X &X & & & & & & & & & & & \\ \cite{Benet2014} & & & &X & & & & & & & & & \\ \cite{Swan2015} &X &X &X &X &X &X & & & & & & & \\ \cite{Wright2015} & & & & &X & & & & & & & & \\ \cite{Golumbia2015} &X & & & & & &X & & & & & &X \\ \cite{Zyskind2015} & & &X & & & &X & & & & & & \\ \cite{Dupont2015} & & & & &X & & & & & & & & \\ \cite{Dennis2015} & & &X & & & & & & & & & & \\ \cite{Haubo2016} & &X & & & & & & & & & & & \\ \cite{Abramowicz2016} & & & & &X & & & & & & & & \\ \cite{Yli-huumo2016} & & & &X & & & & & &X & & & \\ \cite{Yasin2016} & & &X & & & & & & &X & & & \\ \cite{Ammous2016} & &X & & & & & & & & & & &X \\ \cite{Fu2016} & & &X & & & & & & &X & & & \\ \cite{Natoli2016} & & & &X & & & & & & & & &X \\ \cite{Lajoie-mazenc2016} & & & &X & & & & & & & & & \\ \cite{Atzori2017} & & & & & & &X & & & & & & \\ \cite{Allen2017} &X &X & & &X & & & & & & & &X \\ \cite{Pazaitis2017} & &X &X & & & & & & & & & & \\ \cite{Yermack2017} & & & & & & & &X & & &X & & \\ \cite{Beck2017} & &X & & & & & &X & & & & & \\ \cite{Risius2017} & &X &X &X & & & & & &X & & & \\ \cite{Vranken2017} & & & & & & & & & & & &X & \\ \cite{Nowinski2017} & &X & & & & & &X & & & & & \\ \cite{Chapron2017} & & & & & & & & & & & &X & \\ \cite{Porru2017} & & & &X & & & & & & & & & \\ \cite{Sullivan2017} & & & & & & &X & & & &X & & \\ \cite{Natoli2017} & & & &X & & & & & & & & &X \\ \cite{Hsueh2017} & & & &X & & & & & & & & &X \\ \cite{Hwang2017} & & & & & &X & & & & & & & \\ \cite{Wu2017} & & & & & & & &X & & & & & \\ \cite{Velasco2017} &X &X & & & & & & & & & & & \\ \cite{Mathews2017} & &X & & & & & & & & & & & \\ \cite{Fink2017} & & & & &X & & & & & &X & & \\ \cite{Werbach2017a} & & & & & & & & & & &X & &X \\ \cite{Mengelkamp2018} & & & & &X & & & & & & & & \\ \cite{Kshetri2018} & & & & & & & &X & & & & & \\ \cite{Aniello2017} & & & & & & & & & &X & & & \\ \cite{Hawlitschek2018} & & & & & & & &X & & & & & \\ \cite{Holub2018} & &X & &X & & & &X & &X & & & \\ \cite{Saberi2018} & & & & & & & & & & & &X & \\ \cite{Reyna2018} & &X & & & & & & & &X & & & \\ \cite{Matzutt2018} & & & & & & & & & & &X & &X \\ \cite{Peterson2018} & &X &X & & & & & & & & & & \\ \cite{Destefanis2018} & & & &X & & & & & & & & &X \\ \cite{Banerjee2018} & & & & & & & &X & & & & & \\ \cite{Garcia2018} & & & & & & & & &X & & & & \\ \cite{Angrish2018} & & & & & & & & & & & &X & \\ \cite{Wang2018} & & & & & & &X & & & & & & \\ \cite{Chakraborty2018} & & & & & & & & & &X & & & \\ \cite{Juskalian2018} & &X & & & & &X &X & & & & & \\ \multicolumn{1}{ c }{\textbf{Grey literature}} \\ \cite{Rosenfeld2012} & &X & &X & & & & & & & & & \\ \cite{McCook2014} & & & & & &X & & & & & & & \\ \cite{Schwartz2014} & & & & & & & &X & & & & & \\ \cite{Ross2015} & & & &X & & & & & & & & & \\ \cite{Swanson2015} & & & & & & & &X & & & & & \\ \cite{Swanson2015a} & & & & & & & &X & & & & &X \\ \cite{Walport2016} & & & & & &X &X &X & & &X &X & \\ \cite{Weber2016a} & &X & & & & & & & & & & &X \\ \cite{Brown2016a} & & & & & & & &X & & & & & \\ \cite{Strobele2017} & & & & & & &X & & & &X & & \\ \cite{Gupta2017} & & & & & & &X &X & & &X & & \\ \cite{MercyCorps2017} & & &X & &
& & & &X & & & & \\ \cite{Grigg2017} & & & &X & & & & & & & & & \\ \cite{Maupin2017} & & & & & & & & & & &X & &X \\ \cite{Holmes2017} & & & & & & &X &X & & &X & & \\ \cite{Mazet2017} & & & & & & & & &X & & & & \\ \cite{Ernst2017} & & & & & &X & & & & & & & \\ \cite{Catalini2018} & &X & & & & & & & & & & & \\ \cite{Greene2018} & & & & & & & &X & & & & & \\ \cite{Baird2018} & & & &X & & & & & & & & & \\ \cite{Gammar2018} & & & & & & &X & & & & & & \\ \cite{Murphy2018} & & & &X & & & & & &X & & & \\ \bottomrule \caption{Literature review thematic analysis matrix} \label{table:LitReviewThematicAnalysis} \end{longtable} \end{landscape} \section{Matrix analysis} Examination of the matrix at Table \ref{table:LitReviewThematicAnalysis} illustrates two phenomena. Firstly the academic literature is of a more revolutionary bent than the grey. This is unsurprising; grey literature is produced by industry, government or related institutions (e.g. think-tanks) - industry profit-maximises and is unlikely to innovate in such a way that its company, or industry sector, disappears. Similar logic could apply to government - public choice theory suggests that governments are motivated as much by self-preservation as the common good \parencite[p.~80]{Buchanan1984}. Academics meanwhile, although subject to funding and institutional pressures, have leeway to radically re-imagine all the PESTLE factors. The second phenomenon is chronological: within academia the papers published appear to become less revolutionary over time. This could be confirmation of a previously expressed idea (Figure \ref{fig:ieeeSpectrum2017}): revolutionary concepts are co-opted by established players. However the time-span of this review is short (due to DLT's recent emergence), so this conclusion is rudimentary. If these observations are valid then this could herald DLT breaking with its founder's vision. Bitcoin was fundamentally based on disintermediation; the trends above suggest that the ideas of the central parties are gaining ground. This should not be overstated however; partly because this review is not exhaustive \parencite[p.~21]{Booth2016}, and partly because of the history of DLT - its most well-known proponent, Bitcoin, emerged almost entirely outside of government and academic scrutiny, and its development may well continue in this vein. Even given the limitations of this review, the matrix clearly demonstrates that DLT could impact across all areas of PESTLE; each area will now be examined in turn. \subsection{Political} Political analysis within the literature has largely focussed on the politics of the blockchain and cryptocurrencies themselves, likely due to interest in the counter-cultural cypherpunk movement which gave birth to DLT. Golumbia \parencite*{Golumbia2015} posits that Bitcoin is merely a form of political expression because it is neither a money (as it lacks the support of the state) nor a currency (as it is not useful as a means of exchange). Specifically Golumbia characterises that expression as ``right wing extremism'' with elements of anti-semitism. This analysis, and similar \parencite{Maurer2013}, is based on online articles or web forums, a source that is considered compromised for rational debate \parencite{Mitchelstein2011}. The problem though is deeper: in a distributed system where anyone can partake and whose enigmatic creator has disappeared, finding a concrete political philosophy that is more than sweeping cultural analysis will likely frustrate any academic.
Politics is not just internal to DLT, however; DLT could affect how politics is conducted. State research, such as that of the UK Government Chief Scientific Adviser \parencite{Walport2016} and the House of Lords \parencite{Holmes2017}, proposes `evolutionary' uses for DLT: suggesting how the government might improve its offering to citizens. Although such reports veer towards hype (Artificial Intelligence and Internet of Things being ever present), they also emphasise the unglamorous ``minimising... costs and redundant work... in administration and back office operations'' \parencite{Holmes2017} where DLT is best suited. Lord Holmes \parencite*{Holmes2017} refers to `algorithmic government', which rests on authentication (proof you are \textit{x}), authorisation (proof you have permission to do \textit{x}) and accountability (proof that \textit{x} has completed an action). Estonia, Switzerland \parencite{Strobele2017} and the UAE \parencite{Gupta2017} are investigating DLT in e-government initiatives such as identity authentication and online voting. The impact of DLT on government efficiency might be even greater in developing countries. A Swiss firm, Agora, has trialled voting on the blockchain in Sierra Leone, recording votes immutably at voting offices to combat corruption \parencite{Gammar2018}, while others examine DLT for voter privacy \parencite{Zyskind2015, Wang2018}. DLT has also been used by supranational organisations, such as the United Nations Office for Project Services \parencite{UN2018}, and by the World Food Programme to provide identification and payment to refugees \parencite{Juskalian2018}. Outside of government the views on DLT can be more `revolutionary' - as Atzori \parencite*{Atzori2017} asks: with the advent of blockchain is the state still required? Embryonic virtual states are now emerging; here Estonia leads the way with its e-residency scheme and partnership with `Bitnation' \parencite{Sullivan2017}, which styles itself as: \begin{quote} ``a decentralized, open-source movement, powered by the Bitcoin blockchain 2.0 technology, in an attempt to foster a peer-to-peer voluntary governance system, rather than the current `top-down', `one-size-fits-all' model, restrained by the current nation-state-engineered geographical apartheid, where your quality of life is defined by where you were arbitrarily born.'' \parencite{Sullivan2017} \end{quote} This truly revolutionary idea is synonymous with the cryptoanarchists' rejection of ``governments of the industrial world... weary giants of flesh and steel'' \parencite[p.~28]{Ludlow2001}, with free agents instead voluntarily deciding what governance regime they wish to adopt. This concept pre-dates DLT with Eichenberger and Frey's \parencite*{Eichenberger2006} Functional, Overlapping and Competing Jurisdictions (FOCJ), but one might argue technology is only now catching up. Sullivan and Burger \parencite*{Sullivan2017} provide a practical example of a Spanish couple using Estonian e-Residency to register a marriage on the Bitnation blockchain, thereby stepping outside traditional regulatory routes. They however highlight the ultimate issue - if no territory recognises your Bitnation marriage then what good is it? Velasco \parencite*{Velasco2017} continues this `revolutionary' stance, arguing the fractional-reserve banking system is a technical device used politically, but that the passing of trust to DLT brings new forms of politics which the current traditional actors of state and market cannot define.
Central to his thesis however is DLT's economic role, which will be examined next. \subsection{Economic} Economic research on the earliest DLT, cryptocurrencies, is considerable, covering positions from how these markets work \parencite{Haubo2016}, to how they do not \parencite{Greene2018}, to how they challenge current financial systems \parencite{Jenssen2014}. Research naturally progressed to DLT's role in non-cryptocurrency financial practices, such as clearing \parencite{Schwartz2014, Swanson2015, Brown2016a}. The view that DLT can assist in financial markets is not homogeneous: Ammous \parencite*{Ammous2016} argues Bitcoin is the sole use case for DLT - and that foisting this slow, over-engineered mechanism elsewhere will prove futile. Much analysis is macroeconomic, the ultimate expression of this being Bitcoin as a new gold standard against which all other currencies are backed, although Weber \parencite*{Weber2016a} appraises that if this were achieved it would fail for the same reasons the original did. At the microeconomic level Beck and Muller-Bloch \parencite*{Beck2017} present a case study of a large financial organisation adopting DLT, focusing on the pressure faced by incumbents whose business model is to act as trusted third parties. Lessons drawn were that motivation was sparked by curiosity, that DLT-specific knowledge had to be bought in (with start-ups being more competitive) and that adoption lowered intra- and inter-organisational boundaries. Nowinski and Kozma \parencite*{Nowinski2017} add to the microeconomic understanding by categorising the various business models that DLT could foster, from micropayments to eliminating forgery. DLT could help all firms, not just those who adopt it, through improved corporate governance. Yermack \parencite*{Yermack2017} examines how practices such as recording share ownership using DLT would improve transparency, reduce cost and minimise bad practices such as ``empty voting'', where participants acquire shares purely to vote maliciously against corporations. Ironically though, in the short term DLT is allowing governance to be disregarded as firms accumulate capital through Initial Coin Offerings \parencite{Catalini2018}. DLT research focuses more on `Wall Street' than `Main Street', but in reviewing supply chain DLT projects Kshetri \parencite*{Kshetri2018} assesses these are a better fit, as DLT's strength is in solving problems of messaging rather than acting as a database. Wu et al \parencite*{Wu2017} and Banerjee \parencite*{Banerjee2018} also propose DLT's use in supply chains, although both papers may have some commercial bias: the former features an author from Dow Chemicals, who have experimented with DLT and may therefore wish to portray it positively, and the latter from Infosys - a software vendor seeking to market the technology. A more concrete `Main Street' example is DLT's use in manufacturing and construction: Mathews, Robles and Bowe \parencite*{Mathews2017} argue that 3D design files are well suited to the immutable nature of DLT, while Wang et al \parencite*{WANG2017} propose that DLT's ability to cross boundaries would help within construction's complex leasing arrangements. Angrish et al \parencite*{Angrish2018} best demonstrate the use of DLT within this field.
Here automated machine tools broadcast their availability, capabilities and performance (as certified by a neutral third party) to a blockchain, whilst clients requiring manufacturing broadcast digital work packages they need completing and the prices they are prepared to pay. Smart contracts match supply and demand, meaning that where previously a company would send work only to a few trusted partners, the reduced cost of trust now allows it to source from a greater pool. This would be a radical decentralisation, with firms of all sizes being able to compete fairly. Pazaitis, De Filippi and Kostakis \parencite*{Pazaitis2017} see DLT as changing the fundamentals of the economy. Their study into `Backfeed' examines an organisation which uses DLT to form a governance structure allowing individual members to reward each other based on the size of their contribution to the group. Although the fact that one of the authors is an instigator of Backfeed is likely to have introduced bias, the study highlights the potential impact of DLT on hierarchical structures and associated social interaction. Hawlitschek, Notheisen and Teubner \parencite*{Hawlitschek2018} counter that although DLT can replace trust on a narrow technical level, it cannot in complex social environments such as the sharing economy, where trust is a multilayered concept. Judging then that DLT has the potential to impact the economy in multitudinous ways, what evidence exists that it can do so given these social elements? The next section of this review will examine this. \subsection{Social} If DLT can be used to track diamonds \parencite{Everledger2017}, how about for what Socrates describes as one's `richest jewel' - reputation? Dennis and Owens \parencite*{Dennis2015} suggest DLT could be used for recording a universal reputation score, a key element of much e-commerce, e.g. Ebay, Uber, AirBnB. Decentralising this would mean an end to a company's algorithm change eradicating an individual's reputation overnight - with its financially negative implications in today's sharing economy - and could potentially lower barriers for start-ups. Zyskind, Nathan and Pentland \parencite*{Zyskind2015} go further in empowering users and inverting the power relationship, by proposing that the keys to data shared with companies are held in a DL, meaning access can be revoked at any time. The authors also propose social feedback could be added to a DL consensus model with nodes up- and down-voted; Fu and Fang \parencite*{Fu2016} suggest improvements to this with a `Proof-of-Credit' hybrid-consensus model, although neither paper provides clear criteria for what distinguishes a good node from a bad one. Another suggested DLT reputational system involves a comprehensive assessment of a user (including professional, social, academic, etc) \parencite{Yasin2016}; this stops short of the Orwellian only because the user grants the access rights. It is noteworthy this research is from a Chinese university, where attitudes to social-media surveillance are markedly different, as evidenced by the proposed `social credit system' \parencite{Chin2016}. It is not clear how these systems would avoid being gamed however - if followers lead to upvotes, what is to stop reputation inflation by bot? Nevertheless this illustrates that the benefit of DLT does not exist in a vacuum where only efficiency matters; rather, cultural values such as privacy concerns also determine its use. Underlying any reputational system is identification - proving to society you are who you say you are.
Garcia \parencite*{Garcia2018} addresses this, suggesting that biometrics coupled with distributed identifiers could allow government to write identification information to a blockchain, but then be highly selective as to what data is shared. Caution must be taken when interpreting this academically-published research however, given Garcia's employment by a DLT vendor. The literature therefore coalesces around DLT's impact on the individual's relationship with society; there is less on how DLT might change society itself, and what exists mostly relates to increasing transparency in charitable giving to social enterprises \parencite{Mazet2017, MercyCorps2017}. This limitation may be due to the difficulty of forecasting how technology changes society, given the complex two-way relationship between technology and society \parencite{Pinch1993}. Although the literature might be reticent about predictions, DLT is not, with platforms hosting prediction markets where ``reputation tokens'' are staked on outcomes \parencite{Peterson2018}. If the literature does not tell us a great deal about technology's impact on society, the next section will look at what it tells us about the technology itself. \subsection{Technological} As this is a technology under discussion, it is unsurprising the majority of the literature has focused on technical aspects of DLT \parencite{Yli-huumo2016, Risius2017, Holub2018}. A common theme is DLT design weaknesses. This is to be expected: security and cryptography move forward by the research community revealing flaws which subsequently get rectified. An example of this is Natoli and Gramoli's \parencite*{Natoli2016} examination of how delays in messaging between nodes in Proof-of-Work blockchains could lead to circumstances where double-spend occurs. Research such as this leads both to proposed improvements in the systems \parencite{Lajoie-mazenc2016, Hsueh2017} and to analysis of how such flaws might impact real-world enterprise uses, such as the R3 Consortium \parencite{Natoli2017}. Although highlighting individual flaws is useful, Porru et al \parencite*{Porru2017} argue that DLT is not going to progress without an academic examination of the art and science of developing these systems, which they coin BOSE (Blockchain Oriented Software Engineering). Similarly, Destefanis et al \parencite*{Destefanis2018} evaluate that innovations such as smart contracts and decentralised autonomous organisations (DAOs) can only reach full potential with a specific discipline considering best practices, testing and design patterns. The literature also examines how DLT might impact other areas. Within the field of computer science, DLT applications are proposed in cloud storage \parencite{Benet2014}, operating systems \parencite{Grigg2017}, artificial intelligence \parencite{Murphy2018} and the internet-of-things \parencite{Chakraborty2018, Reyna2018}, so offering users the ability to own their identity whilst interacting with the promised ``new economy'' DLT heralds \parencite{Swan2015}. DLT's impact on technology and science may go beyond computer science - e.g. within medicine, FoldingCoin examines how cryptocurrency can incentivise distributed networks to contribute compute to projects such as protein folding \parencite{Ross2015}. The technological aspect of the literature reminds us DLT is still emergent. The body of work is primarily concerned with narrow design issues, although this is changing with the rise of BOSE as a discipline and consideration of how DLT might cross-fertilise wider technology and science.
\subsection{Legal} How a technology relates to the world is much dependent on the legal context. It is thus important to review the literature that examines where the law and DLT intersect. DLT's `immutability' represents a challenge for legal systems - a court order instructing deletion is unenforceable against a censorship-resistant, decentralised system offering no method to erase. Fink \parencite*{Fink2017} argues the European Union's General Data Protection Regulation (GDPR), based on an old world order of centralised data silos, fundamentally conflicts with DLT; whilst also believing DLT might contribute to data protection regulations by providing citizens cryptographic control. GDPR aside, another concern is illegal content, such as images of child sexual abuse, as Matzutt et al \parencite*{Matzutt2018} identify on Bitcoin's blockchain. Depending on legal interpretation this could make all full nodes illegal, so allowing unscrupulous authorities to clandestinely add illegal data and then ban Bitcoin use \parencite{Colyer2018}. Moore \parencite*{Moore2018} however states this is over-emphasised, highlighting that illegal content on the blockchain requires considerable processing to make human-readable, and comparing it to how many dollar bills contain some trace of illegal drugs. In certain jurisdictions the issue is not how DLT interacts with existing law, but that specific DLs have been declared illegal. Venezuela for instance has banned Bitcoin, which can be interpreted as a defence against threats of ``crypto-secession'' \parencite{Allen2017}. Maupin \parencite*{Maupin2017} criticises these blanket bans, arguing instead that jurisdictions should consider the merits of individual use cases and co-ordinate across borders. Much of this law though applies to permissionless DLT, particularly cryptocurrency; permissioned DLT, often corporate-established, is less constrained by these rulings. The relationship between DLT and the law however is not one-way; Wright and De Filippi \parencite*{Wright2015} suggest DLT will lead to a reduction in authorities' control through legal mechanisms. A subset of law, ``\textit{lex cryptographia}'' \parencite{Wright2015}, will emerge where smart contracts and decentralised autonomous organisations administer rules agreed by willing participants. Consequently the roles of centralised institutions as arbiter and notary will diminish \parencite{Dupont2015}. Werbach and Cornell \parencite*{Werbach2017a} dispute that law is replaceable, as it arbitrates when parties disagree - a situation where inherently inflexible smart contracts fail. This criticism is answered by Abramowicz \parencite*{Abramowicz2016}, who demonstrates ``tacit coordination games'' can decentralise legal judgements. Abramowicz proposes a thought-experiment in which a person is offered \$10 for correctly predicting whether the next person asked the same question, under the same conditions, will answer `hot' or `cold'; this could be expanded to more nuanced judgements (a toy simulation of this game is sketched at the end of this subsection). The previous barrier to using this decision-making mechanism - the requirement for a central party to remunerate winners - could be overcome by smart contracts, leading to ``peer-to-peer governance'' structures \parencite{Abramowicz2016}. In summary the literature shows that DLT could face legal challenges as the old world of centralised authority collides with the new decentralised one, but beyond that the impact might be the disruption of the entire legal system.
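The incentive behind the tacit coordination game described above can be made concrete with a short Python simulation. The player model, payout rule and 90\% salience probability below are illustrative assumptions rather than Abramowicz's formal construction.

\begin{verbatim}
import random

def run_game(num_players, salient="hot", salience_prob=0.9):
    """Toy tacit coordination game: each player earns $10 if their
    answer matches the NEXT player's answer, so the rational move
    is to report the salient (Schelling-point) answer."""
    answers = [salient if random.random() < salience_prob
               else random.choice(["hot", "cold"])
               for _ in range(num_players)]
    return [10 if answers[i] == answers[i + 1] else 0
            for i in range(num_players - 1)]

payouts = run_game(100)
print(sum(payouts), "dollars paid out across 99 judgements")
\end{verbatim}

Because each player is paid only for matching the next respondent, honestly reporting the answer everyone expects maximises the expected payout - which is what allows the mechanism to decentralise judgement without a central adjudicator.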
\subsection{Ecological} Given the permissionless nature of the network, estimating Bitcoin's energy demand is not a simple task. One estimate suggests it draws the same amount of power as Ireland: 3-6 gigawatts \parencite{Dwyer2014}. More recent studies \parencite{Vranken2017} estimate less: 0.1-0.5 gigawatts. Bitcoin, which uses a Proof-of-Work consensus, relies on there being a mining cost to prevent a `51\% attack', whereby a malicious party hijacks the network (Section \ref{sec:Genesis}). As all currencies entail production and distribution costs, some counter that when this is accounted for, physical cash or gold is actually less sustainable \parencite{McCook2014}. Although permissionless blockchains typically use PoW, most permissioned DLs use other consensus mechanisms, such as Proof-of-Authority, with energy costs similar to typical software. DLT's impact on energy is far from solely negative - Saberi, Kouhizadeh and Sarkis \parencite*{Saberi2018} propose its use in product hazard and disposal management through recording information pertaining to the European Community Directive on Waste Electrical and Electronic Equipment. One could envisage a DL, with the Department for the Environment acting as regulator, which companies are mandated to complete for new products, and to which recyclers have access at the end of a product's life; so retaining product information even when the item is no longer manufactured. This scheme would allow stakeholders to understand both where hazards lie and how scarce resources could best be reclaimed. This life-cycle provenance could also apply to environmentally sensitive commodities. For example, using DLT to track diamonds, fish and pork allows consumers and retailers access to the items' ecological footprint \parencite{Chapron2017}. Coupled with Internet-of-Things connected devices, information such as storage temperatures could be added for food safety management. If this section started with DLT's electrical use, it is fitting to end with how it might directly reduce it. Hwang et al \parencite*{Hwang2017} identify the use of blockchain in `prosumer' electrical generation, where individuals produce renewable energy. DLT can be used to sell prosumers' surplus energy, recording who used what without relying on a central authority - smart contracts and cryptocurrency can be layered on top to enable autonomous payments. Reporting on a microgrid utilising DLT in New York, Mengelkamp et al \parencite*{Mengelkamp2018} view this positively but suggest that real-world application will be constrained by this being a highly regulated market. RightMesh \parencite{Ernst2017} intend to use DLT and tokens to incentivise mesh networks, allowing subscribers to share their internet access and be recompensed - especially useful in deprived areas. The use of DLT to enable consumers to be producers is another example of the revolutionary mindset, upending the traditional role of central authority, this time in the guise of big energy. \section{Conclusions and implications} \label{sec:conclusionsAndImplications} The literature shows that DLT has the potential to impact across the entirety of PESTLE; in many ways radically. Additionally it suggests that with the increasing adoption of DLT in established institutions, focus is switching from revolutionary to evolutionary.
There is a parallel here too within the technology - whereas the revolutionary can be seen to equate to Figure \ref{fig:WalportTypesDLT}'s early permissionless and public systems, the evolutionary counterpart is the later permissioned and private. There are implications in this for the DSN - it is likely that permissioned systems will be the preferred route. Lord Holmes \parencite*{Holmes2017} has already reached this judgement by advocating permissioned DLT in government. Permissionless systems require a mechanism to ensure the honesty of nodes - Bitcoin's genius is in ensuring this through energy-intensive Proof-of-Work and on-chain financial incentives. It is difficult to imagine what the equivalent of this would be in a system designed around the business of the DSN. If the DSN were to use a DLT which required no permission to join or to append information, how would nodes be motivated to act in a trustworthy fashion? Likewise a public DSN would put information, which you might not wish to share with military adversaries, in a public forum. That is not to rule out using permissionless DLT. Recording hashes of a permissioned DLT on a permissionless DLT strengthens immutability, and is referred to as a two-layer blockchain architecture \parencite{Aniello2017}. Alternatively Rosenfeld \parencite*{Rosenfeld2012} proposes using meta-data in a permissionless blockchain to store information standing in for asset ownership (so-called coloured coins), although this method has critics \parencite{Swanson2015a}. Additionally newer forms of DLT, such as Hashgraph, a directed acyclic graph using ``permissionless consensus'' with ``permissioned governance'' \parencite{Baird2018}, might necessitate a reassessment of the basis for judgement. This dissertation will also primarily examine DLT which enables the \textbf{share} functionality of Figure \ref{fig:ShareProveVenn}. Although DLT that enables \textbf{prove} alone is useful in cyber-security, the efficiencies in the DSN will come through sharing data across organisational boundaries. However there will be no attempt to select a particular DL; this is a fast-moving space and different use cases will draw on different technological models. In the same way however that a permissioned DLT following the evolutionary path is more likely to be applicable to the DSN, the methodology of this dissertation will follow the sociology of regulation approach. This paradigm considers iterative change rather than root and branch reform, and will be covered in the next chapter. \chapter{Methodology} \section{Research paradigm} This dissertation takes a functionalist research paradigm \parencite[p.~80]{Johnson2000}. That is, an objectivist approach (the author is external to the study and is able to observe the facts in a way that is generalisable) combined with a `sociology of regulation' perspective (that is, the study will look to improve the organisation studied, rather than suggest radical change). The study's philosophical position is that of pragmatism: the research question of applying a technology to Defence takes centre stage, and concepts are only relevant when they support this practical aim. The deductive approach is used - the literature will be studied, followed by suggestions of how DLT might be used within a defence context. The time horizon will be cross-sectional, that is the study of DLT in the DSN at this moment, rather than change and development over time \parencite[pp.~122-152]{Saunders2016}.
This approach was selected as it matched the Research Question and Aim (Section \ref{sec:researchQAO}), which was not to establish a radical re-understanding of the DSN or the perception in which it is held, but rather a practical analysis of how a technology might be applied. It is acknowledged that the functionalist approach has limitations, namely its avoidance of both the role of conflict and the agency of individuals \parencite[pp.~432-442]{King2011}; however, this was considered less germane to the research in hand. \section{Research strategy} A simple form of Action Research is used - the context will be understood, problems will be identified and solutions will be proposed using the nominated technology. It will differ from typical Action Research in that proposals will not be implemented and iterated on; this is due to both time and financial constraints. Given that this study focuses on understanding problems and providing solutions, potentially having practical relevance to further MoD work-streams, it falls closer to applied than basic on the research spectrum \parencite[p.~9]{Saunders2016}. \section{Data collection} Data was collected using sequential mixed method research \parencite[p.~6]{Ivankova2016}. Initial exploratory discussions with individuals were followed by semi-structured interviews combined with a questionnaire. These were held with two groups of interviewees: those with experience of DLT, primarily from the technology sector; and those with knowledge of the DSN, primarily MoD employees. Exploratory discussions were open interviews, enabling contextual data to be gathered and facilitating further lines of enquiry. Follow-on semi-structured interviews were then held with experts on DLT and DSN processes. Semi-structured interviews allowed exploration of interviewees' insight, a key requirement for an emerging (and therefore not fully understood) technology. The questionnaire also acted as a prompt for discussion, yielding qualitative data. Data collection also occurred through analysis of documentary evidence \parencite[p.~107]{judithbell2014}, for instance policy instruction that governs the DSN (e.g. the Defence Logistics Framework) and minutes of various committees (e.g. the Defence Logistics Directorate IS Working Group). Reliability and validity were key concerns throughout data collection; the former being whether a process produces the same results when duplicated and the latter being whether the process collects the correct evidence to support the research aim \parencite[pp.~103-104]{judithbell2014}. Interviews in particular can suffer from problems with reliability, validity and additionally bias - where either the personal views of the interviewer are introduced into the process or the interviewee does not provide a true representation of their views \parencite[p.~50]{White2014}. Interviewee bias was minimised by explaining that the research was not commercial in nature, offering anonymity and conducting interviews where participants could not be overheard, so minimising career harm and sales pitches - see Section \ref{EthicalConsiderations}. Interviewer bias was minimised by using videos to explain the technology, although as discussed in Section \ref{sec:defenceSectorInterviewees}, it was difficult to find a video short enough to serve as an introduction, yet long enough to provide a balanced view of pros and cons. The use of videos also increased reliability by ensuring all defence-sector interviewees were provided with the same base-line DLT information.
Despite this, bias was considered a particular problem for technology-sector interviewees. Firstly, company employees are unlikely to disclose criticism of their products due to potential commercial implications, regardless of whether they are actively trying to sell. Secondly, being actively involved with a technology and agreeing to an interview focused on such technology suggests a personal belief in DLT which might lead to over-optimism. Similarly, although effort was made to adopt a neutral stance, the present researcher's interest in this dissertation topic could lead to bias in interpreting the data collected \parencite{Nickerson1998}. Although this risk was difficult to mitigate, an attempt was made by encouraging interviewees to discuss negative aspects, as well as by seeking out DLT-critical research. Sampling-frame bias, where the selected interviewees constitute a non-representative sample, so leading to an invalid understanding of either DLT or the DSN \parencite[p.~82]{White2014}, was also considered. Defence interviewees were selected for their expertise in the generic use cases presented in the questionnaire, so enabling assumptions to be thoroughly questioned. Technology-sector interviewees, by contrast, were selected to gain a broader understanding of DLT; therefore a variety of organisational types were sought out. More technology-sector interviewees were interviewed than defence, as the research was being conducted from a position of experience with the DSN. Table \ref{table:interviewees} illustrates these points. The research did suffer from a lack of interviewee gender and ethnicity diversity \parencite[p.~126]{Mergaert2015}; efforts were made to counter this but were frustrated by the dates available for interview. This may have been a consequence of neither the defence \parencite{Diversity2018} nor the technology sector \parencite{BritishComputerSociety2017} being exemplars of inclusivity. Combining qualitative and quantitative data allowed triangulation, so increasing validity and reliability \parencite[pp.~71-72]{Collis2013}. Reliability was increased by using a Likert-scale questionnaire, meaning that another interviewer would have received the same answers regardless of the particular relationship between interviewer and interviewee. Validity was increased by using semi-structured interviews, which allowed a deeper appreciation of interviewees' thought-processes, so allowing an understanding of why DLT would be applicable to the questionnaire use cases; this also provided generalisability \parencite[pp.~365-366]{Symon2012} as this understanding could then be considered against processes not present in the questionnaire. Feedback on the questionnaire was also provided by a number of advisors with experimental experience. Ultimately an awareness of reliability and validity concerns ensured some mitigation; however, with a research team of one, some limitation was inevitable. Surveys were considered but not used because DLT is an emergent technology and therefore, by definition, many will be unfamiliar with it. Surveys are typically used with a ``sizeable population'' \parencite[p.~728]{Saunders2016}, yet the number of people who have an understanding of how the technology might be applied to the DSN amounts to a small sample. As this research involves exploring a novel topic, the open-ended nature of interviews afforded this latitude.
\section{Ethical considerations} \label{EthicalConsiderations} The primary ethical concern was causing interviewees `career harm' by publicising opinions which might damage their professional standing. For instance, a consultant revealing that they thought DLT was too immature for business application might affect their future employment. Therefore interviewees were offered the options of being identified personally (e.g. Joe Bloggs of IBM), by their employer (e.g. IBM) or anonymously. In this latter case interviewees would be referred to by the sector they worked for (e.g. `a technology company employee...'). Care was also taken to ensure that interviewees were interviewed where they were not overheard by co-workers. All interviewees were given a sequential number for the purpose of data collection and analysis. Another ethical concern was conflict of interest. Many interviewees from the technology sector work for DLT vendors. Data collected would be biased if interviewees misinterpreted the interview as a sales opportunity. Therefore the purpose and nature of the research was clarified when recruiting candidates. Not only was it clearly stated in the consent form that all discussions were non-commercial in nature, without commitment or prejudice on the part of the MoD; this was also reiterated at the start of each interview. This research was awarded a Level 2b risk assessment level and authority to proceed on 20 Sep 17 by the Cranfield University Research Ethics System. MoD Research Ethics Committee approval was not required.
\chapter{Method} \label{ch:method} Participants were selected from two groups: the technology and defence sectors. The former were chosen for their knowledge of DLT, the latter for their knowledge of the DSN. An overlap was foreseen: some technology interviewees might be familiar with the DSN (e.g. reservists) and some defence interviewees with DLT (e.g. through bitcoin media coverage). To capture this, all interviewees reported their familiarity with both DSN and DLT on a five-point Likert scale, as shown in Figure \ref{fig:techFamiliarity}. \begin{figure} \includegraphics[width=\textwidth]{defenceFamiliarity} \includegraphics[width=\textwidth]{techFamiliarity} \caption[Participants' familiarity with DLT and DSN]{Defence vs technology-sector interviewees self-reported familiarity with DLT and DSN (Author's own work)} \label{fig:techFamiliarity} \centering \end{figure} Ten semi-structured face-to-face interviews were conducted with technology interviewees and six with defence (with an in-depth exploratory interview conducted with one interviewee from each sector prior to this to establish a research direction). Interviews took place at the interviewees' workplace or at a mutually convenient location (e.g. a cafe); locations were chosen to ensure interviewees could hear the videos clearly and were not overheard by co-workers. Interviews typically lasted 45 minutes to one hour. Technology and defence interviews followed different formats. \section{Technology-sector interviewees} Participants were selected in two ways. Firstly, social networking within the MoD enterprise, trade events (e.g. Team Defence Information) and the MoD intranet were used to identify companies with an ongoing relationship with the MoD and an interest in DLT. Companies were then approached and asked if any of their employees would be willing to be interviewed. Secondly, interviewees were recruited by asking at the end of an interview for additional contacts who would be willing to participate.
Typically, but not always, the former method led to interviewees from established technology companies, and the latter to start-ups. The format used for all interviewees from the technology-sector was: \begin{enumerate} \item Introduction to research and consent approval \item Overview of DSN \item Discussion of company technology or organisational interest \item Interviewee completes questionnaire - discussing with the interviewer throughout \item Exploration of further defence use cases \item Discussion of further avenues of research (either people, organisations or literature) \end{enumerate} Item 2, the overview of the DSN, was brief and featured an oral introduction similar to that provided in Section \ref{sec:DefenceSupportNetwork}. Typically all interviewees had some knowledge (first-hand or otherwise) of supply chains, so this concept was easily relatable. Figure \ref{fig:techFamiliarity} shows that technology-sector interviewees reported greater knowledge of the DSN than defence-sector interviewees reported of DLT. \section{Defence-sector interviewees} \label{sec:defenceSectorInterviewees} Interviewees from the defence-sector (i.e. chosen for their knowledge of defence processes) were selected through networking within the enterprise and the MoD intranet. While technology interviewees were selected from a wide pool, with any organisation that had a DLT undertaking considered admissible, defence employees were selected on the basis of expertise in one of the generic use cases at Appendix \ref{ch:questionnaire}. The defence interview format differed from the technology-sector format in replacing the introduction to the DSN with one to DLT: \begin{enumerate} \item Introduction to research and consent approval \item Discussion of role of interviewee \item Introduction to DLT \item Interviewee completes questionnaire - discussing with the interviewer throughout \item Exploration of further defence use cases \item Discussion of further avenues of research (either people, organisations or literature) \end{enumerate} Interviewees were introduced to DLT through two short videos. Showing videos rather than reading a script helped prevent the researcher unconsciously showing positive bias (e.g. via tone of voice), whilst at the same time ensuring consistency. Illustrating the relevant underlying concepts visually (for example, animation of the chaining of data blocks) helped explain complicated concepts to those without an Information Systems (IS) background. Videos might also make what was potentially a dry subject more interesting compared to reading or listening to a monotone script. The videos were selected from YouTube. The search term ``blockchain introduction'' was used and the resulting videos were sorted by YouTube-ascribed `relevance.' Videos longer than 10 minutes were excluded. The resulting list was reviewed in order of relevance. Videos with poor audio, poor graphics or non-professional delivery were discounted. Remaining videos were checked for user-comments related to usefulness in explaining the technology to those previously unfamiliar with DLT before being given a researcher-ascribed rating. This process was repeated using the search term ``distributed ledger technology introduction.'' In total nineteen videos were given ratings. Following review it was decided that no single video sufficiently explained both the technology itself and its applications, necessitating the selection of two.
The video selected to explain the technology was produced by the UK Government Office for Science \parencite{GOScience2016} and accompanied the Walport \parencite*{Walport2016} report. Although this provided a good grounding in the technology, it concentrated more on finance - potentially leading defence-sector interviewees to think DLT had little relevance to the DSN. The second video selected was therefore an IBM \parencite*{IBM2016a} demonstration of how the blockchain might be used in the car industry. Choosing this video did risk interviewees being influenced by their attitude towards IBM; however it was felt that the IBM video provided a far superior explanation of how the technology might be applied in a DSN-like environment. Thus effort was made to reiterate that many vendors besides IBM offer this technology, and the Government video was shown first. The videos lasted 5 minutes 15 seconds and 3 minutes respectively. Interviewees had an opportunity to ask questions after watching - however interviewees typically reported that the videos had provided a satisfactory overview of DLT. One risk that was difficult to mitigate was giving an overly positive impression of DLT. Introductory videos by definition do not cover critical detail; this was especially the case in the IBM clip (IBM being a DLT vendor). An attempt at balance was made by explaining that this was an emergent technology with few real-world use cases. Ultimately this trade-off was accepted to ensure defence-sector interviewees had a basic understanding of DLT. \section{Questionnaire} Both industry and defence interviewees completed the same questionnaire, featuring the use cases at Appendix \ref{ch:questionnaire}. These four generic DSN use cases had been selected following an initial review of the literature, and were: Codification, Revenue \& Customs, Engineering \& Asset Management and Contracting for Availability. Questionnaires were structured as follows: a description of the four potential use cases, questions on familiarity with DSN and DLT, questions on utility and ease of implementation of each use case, and questions on confidence in the answers provided. All operative words were defined at the start of the questionnaire. All questions were answered using a Likert-scale rating of one to five, where one represented `not at all useful,' `very difficult' or `not at all confident,' and five `very useful,' `very easy' or `very confident.' Although interviewees were asked to rate how confident they felt in rating each use case (e.g. I am very confident that Use Case A is very easy to implement), in the final analysis this confidence data was not used as it added little value, partly due to its self-reported nature. Interviewees were encouraged to read all use cases prior to answering the questions - this was to ensure they were able to compare use cases as they responded. A discussion on the strengths and weaknesses of the use cases would normally take place as the questionnaire was completed; notes were taken by the interviewer and the discussion recorded via dictaphone with interviewees' consent. An open discussion followed the questionnaire, bringing out further issues with the use cases and exploring other use cases not covered in the questionnaire. Table \ref{table:interviewees} lists the semi-structured interviewees. Sector was defined by expertise, not organisation. For instance, on 1 Feb an MoD employee who contracted DLT pilot projects was interviewed; they were defined as tech-sector as the knowledge they brought to the research was of DLT.
One interviewee listed in Table \ref{table:interviewees}, although providing qualitative data, declined to answer the questionnaire - this was due to the ambiguity inherent in the term DLT and the variety of solutions that might be applied to each use case. Although further clarification was provided by email after the interview, the interviewee did not follow up. \begin{landscape} \begin{table} \centering \begin{tabular}{ l l l l l} \toprule Date & Sector & Location & Organisational Type & Reason for interview \\ \midrule 26 Sep 17 & Tech & London & Consultancy & DLT engineering use cases\\ 26 Sep 17 & Tech & London & Consultancy & DLT engineering use cases\\ 5 Oct 17 & Tech & London & Think tank & DLT finance use cases\\ 28 Nov 17 & Tech & London & Technology start-up & DLT supply chain use cases\\ 11 Dec 17 & Tech & London & Technology start-up & DLT supply chain use cases\\ 11 Dec 17 & Tech & London & Consultancy & DLT generic use cases\\ 1 Feb 18 & Tech & London & MoD Information Systems \& Services & DLT defence use cases\\ 2 Feb 18 & Tech & Hampshire & Technology corporate (large cap) & DLT public sector use cases\\ 9 Feb 18 & Defence & Portsmouth & Navy Command Headquarters & CfA \& E\&AM expertise\\ 19 Mar 18 & Defence & Bristol & DE\&S Delivery Team & CfA \& E\&AM expertise\\ 11 Apr 18 & Tech & Bristol & Engineering corporate (large cap) & DLT defence use cases\\ 24 Apr 18 & Defence & Bristol & DE\&S Delivery Team & Customs expertise\\ 24 Apr 18 & Defence & Bristol & DE\&S Delivery Team & Customs expertise\\ 10 May 18 & Defence & Bristol & UK National Codification Bureau & Codification expertise\\ 16 May 18 & Tech & Hampshire & Technology private company & DLT private sector use cases\\ 5 Jun 18 & Defence & Portsmouth & Defence corporate (large cap) & CfA \& E\&AM expertise\\ \bottomrule \end{tabular} \caption{Semi-structured interviews} \label{table:interviewees} \end{table} \end{landscape}
\chapter{Discussion} \section{Extant evaluation frameworks} \label{sec:extantframeworks} Lord Holmes \parencite*{Holmes2017} identified that although it is easy to find inefficient processes within government to serve as use cases, constructing viable DLT business cases would prove much harder. Indeed, an initial exploratory discussion with a technologist in a global IT company identified that a standard process to determine the validity and utility of use cases should be a pre-condition of any research into DLT. This led to Research Objective 3 (Section \ref{sec:researchQAO}): \begin{quote} To create a framework for evaluating the utility of DLT against use cases, drawing from relevant academic or business models. \end{quote} A framework groups conceptual elements, making general assertions and identifying key features on a relevant subject - in this case the utility of the application of DLT to DSN business processes; it differs from a theory in that it is not sufficient to perform hypothesis-testing research \parencite[p.~2]{Anderson2014}. By designing such a framework, this research adds value by enabling others to assess DLT DSN use cases against a standard process, or at least by providing the basis for such a process. Extant frameworks were reviewed to determine relevant elements for the proposed framework. Frameworks considered were generic technology frameworks that look at the application of technology to an enterprise, and DLT-specific frameworks (divided into those originating from academia and those from commerce).
\subsection{Generic technology evaluation frameworks} A number of academic models explore a technology's utility within a business process. Unfortunately, from the perspective of this dissertation, many of these models do not examine a technology directly, but rather users' perception of the technology. For instance the Technology Acceptance Model \parencite{Davis1989} claims to be a ``robust, powerful, and parsimonious model for predicting user acceptance'' but acts through subjective questions, such as ``people who are important to me think that I should use the system'' \parencite{VenkateshDavis2000}, without assessing the technology itself. Likewise the Expectation-Confirmation Model uses consumer research to elicit similar findings, namely that users will accept a system if satisfied, and will be satisfied if it conforms to their expectations of utility \parencite{Bhattacherjee2001}. As a result, this `soft' social-science approach is difficult to use practically when analysing an emerging technology prior to adoption. At the heart of this difficulty is that IS is an amalgam of technology, people and processes \parencite[p.~12]{stair2012information}, so attempting to analyse one of these factors in isolation is difficult, particularly when software itself is abstract \parencite{dijkstra1989cruelty}. The literature also looks at adoption within organisations, covering the concepts of technology diffusion (how widely technology is used in an organisation) and infusion (how deeply technology permeates an organisation). Studies have been conducted using the Technology-Organization-Environment (TOE) framework to measure this \parencite{Zhu2005}. Again, however, this is of limited benefit in the scenario where existing business processes are being mapped against an emergent technology. \begin{figure} \includegraphics[width=\textwidth]{ttfGoodhue} \caption[Task Technology Fit]{Task Technology Fit \parencite{GoodhueThompson1995}} \label{fig:ttfGoodhue} \centering \end{figure} One academic model that does have relevance is Task Technology Fit (TTF) theory. TTF predicts that the impact on a team's performance will only be positive when the technology introduced meets the requirements of the team's task \parencite{Goodhue1995}. Studies of TTF \parencite{Fuller2009} confirm that the better the fit of the technology, the higher the initial performance of the team, expressed in both effectiveness (e.g. quality) and efficiency (e.g. time/cost). There are three primary antecedents influencing this fit, as illustrated at Figure \ref{fig:ttfGoodhue} and expanded below: \begin{enumerate} \item \textbf{Task}. The actions conducted by individuals to transform inputs into outputs for the purposes of achieving a goal. \item \textbf{Technology}. The tools used by individuals to accomplish the task. In the TTF context this refers to IS, which cover policies, training, etc., as well as the IT itself. \item \textbf{Individual}. Those using the tools to complete the actions required, influenced by their own specific characteristics (e.g. motivation). \end{enumerate} Therefore TTF measures how a technology aids individuals in completing their tasks. As Figure \ref{fig:ttfGoodhue} shows, TTF is not the only determinant of performance: `utilisation' also matters, and is influenced by attitudes, which are beyond the scope of this research.
TTF is a static model: as teams develop familiarity with IS they use them differently from the designer's intent - the Fit Appropriation Model \parencite{Fuller2009} considers this, but again it is outside the research scope. Although not all the TTF factors are relevant to an assessment of DLT (for instance, how reliable the systems are), Goodhue and Thompson \parencite*{GoodhueThompson1995} list some that are: \begin{enumerate} \item Task Equivocality \begin{enumerate} \item ADHC1 — I frequently deal with ill-defined business problems. \item ADHC2 — I frequently deal with ad-hoc, non-routine business problems. \item ADHC3 — Frequently the business problems I work on involve answering questions that have never been asked in quite that form before. \end{enumerate} \item Task Interdependence \begin{enumerate} \item INTR1 — The business problems I deal with frequently involve more than one business function. \item INTR2 — The problems I deal with frequently involve more than one business function. \end{enumerate} \end{enumerate} Task equivocality could be seen to correlate negatively with the utility of DLT. Ultimately a distributed ledger is a form of database, and architectural decisions have to be made as to what information a database captures. The more ill-defined the problem, the less certain we are that the right data is being captured or processed correctly, and the less useful the ledger will be in providing answers. On the other hand, task interdependence may correlate positively with the utility of DLT - after all, a database able to cover more than one business unit is exactly what is meant by ``distributed.'' The literature also refers to more specific use case analysis tools such as Critical Success Factors (CSF) \parencite{Sebora2009, Chow2009} and Feasibility Analysis \parencite[p.~518]{stair2012information}. CSF are those key elements that an enterprise must do well when implementing a technology to ensure success; although useful, these do not assist in actually selecting use cases, and typically require a number of implemented use cases to study before CSF can be elicited. Feasibility Analysis meanwhile tends to be a process where factors like Return on Investment are considered by an enterprise - but only once a use case has been selected. Technology Readiness Levels \parencite{Mankins2009}, although initially promising due to their emphasis on emerging technology, were discounted due to the difficulty of applying them to such a broadly defined technology. Correspondingly these tools were not used in constructing the framework. \subsection{DLT-specific evaluation frameworks} Turning from the more overarching literature of generic technology adoption, DLT-specific papers were next reviewed for assistance in designing an evaluation framework. Examples of these can be found both within the academic literature and, as might be expected, from commercial vendors seeking to sell DLT-based products. \subsubsection{Commercial DLT evaluation frameworks} Clearly the profit-seeking imperative should be considered when weighing commercial DLT evaluation frameworks; vendors are ultimately looking to sell products. But they should not be discounted because of this: during one interview an industry representative ventured that they were taking great pains to ensure that appropriate projects were chosen for DLT - as the technology is so new, failures at this stage might poison adoption for a considerable period.
\begin{figure} \includegraphics[width=\textwidth]{WEFUseCases} \caption[Characteristics of high-potential use cases]{Characteristics of high-potential use cases \parencite{Mulligan2018}} \label{fig:WEFUseCases} \centering \end{figure} Multichain, a DLT vendor, is specific about this point in their blog post entitled ``Avoiding the pointless blockchain project'' \parencite{Greenspan2015}. Here they readily admit that DLT is, from a readiness standpoint, still in ``diapers'' and lay out a number of conditions that should hold before DLT is used in preference to a standard database: \begin{itemize} \item \textbf{The database}. Ultimately a DLT is a structured repository for information, therefore an enterprise needs to know why it is using a database - in the same way it would for any other store of information. \item \textbf{Multiple writers}. The above database needs to be modified by more than one entity. In most cases, but not all, the writers will also keep a copy of the database themselves (i.e. run their own node). \item \textbf{Absence of trust}. An absence of trust must exist between the multiple entities specified above. Greenspan usefully expands this by explaining that mistrust can exist within an organisation (i.e. between business units) and can be expressed as reluctance to let one entity modify database entries which another one ``owns.'' \item \textbf{Disintermediation}. Amongst the multiple entities, who mistrust each other, there should be no ``trusted intermediary'' - that is, a central gatekeeper whom all parties trust and who can verify and authenticate transactions. \item \textbf{Transaction interaction}. Transactions need to cross several organisational boundaries. In other words, there need to be situations of the sort where Alice's bank passes £1 to Bob's bank, which then loans that £1 to Charlie's bank. If the ledger is simply being used to record Alice's balance, and no one else's, then DLT makes little sense. \end{itemize} Although written with the aim of selling a proprietary DLT product, this is a useful summary of where enterprises should seek to adopt DLT. Greenspan's approach is validated by the adoption of these principles by Rodrigues, Bockek and Stiller \parencite*{Rodrigues2018} and the World Economic Forum \parencite{Mulligan2018} (although the latter without attribution, so it is possible they were formulated independently), as at Figure \ref{fig:WEFUseCases}. Multichain is not alone in providing advice as to where firms should look to adopt DLT; established firms are providing similar advice in a consultancy role. In contrast to the above, which is technically orientated, much of this consultancy advice is more focussed on business use cases; Table \ref{table:DLTAdoptionComparison} is an example of this form of advice, comparing the criteria that IBM, SAP and Oracle suggest customers consider when adopting DLT. \begin{landscape} \begin{longtable}{ F L L L } \toprule \textbf{Factor} & \textbf{IBM} & \textbf{SAP} & \textbf{Oracle}\\ \midrule \endhead Multiparty & Do we need to track transactions that involve more than two Parties? & Multiparty collaboration: Are many different parties, and not just one, involved in the process or scenario, but one party dominates everything? For example, a company with many parties in the ecosystem that are all connected to it but not in a network or more decentralized structure. & Is my business process pre-dominantly cross-departmental / cross-organisational?
\\[3.5cm] Auditability & Can the network benefit from increased trust, transparency, and accountability in recordkeeping? & Transparency and auditability: Is it important to offer each party transparency (e.g., on the origin, delivery, geolocation, and hand-overs) and auditable steps? (e.g., How can I be sure that the wine in my bottle really is from Bordeaux?) & Is there a need to improve traceability or audit trail? \\[4cm] Manual process & Is the current system prone to errors due to manual processes or duplication of effort? & Process optimization: Will blockchain massively improve a process that today is performed manually, involves multiple parties, needs to be digitized, and is very cumbersome to manage or be part of? & Can I improve business process by automating certain steps in it? \\ Intermediary & Is the current system overly complex or costly, possibly due to the need for intermediaries or a central point of control? & - & Does it involve intermediaries, possibly corruptible? \\[2cm] Trust & Does my business network need to manage contractual relationships? & - & Is there a trust issue among transacting parties? \\[1cm] Fraud & Is the current transaction system vulnerable to fraud, cyber-attack, and human error? & Risk and fraud minimization: Does it help (or is there a need) to minimize risk and fraud for each party, or at least for most of them in the chain? (e.g., A company might want to know if its goods have suffered any shocks in transit or whether the predefined route was not followed.) & - \\[4.5cm] Periodicity & - & - & Does it require periodic reconciliations? \\[0.5cm] Visibility & - & - & Do we need real time visibility of the current state of the transaction?\\ \bottomrule \caption[Comparison of IBM, SAP and Oracle DLT use case adoption criteria]{Comparison of IBM \parencite[p.~37]{ManavGupta2017}, SAP \parencite{Roehricht2017} and Oracle \parencite{Goel2017} DLT use case adoption criteria} \label{table:DLTAdoptionComparison} \end{longtable} \end{landscape} Table \ref{table:DLTAdoptionComparison} shows that the vendors' advice has common themes. It is possible that this is simple plagiarism; no company claiming to be at the forefront of technology would want to miss out on the next wave of innovation. However this seems unlikely as a sole explanation considering the investment these firms have made; for instance, IBM has donated 44,000 lines of code to an open-source DLT \parencite{Borek2016}. Assuming that this commonality exists because these are genuine areas where DLT might apply, how does this compare against the DSN? More specifically, considering that Research Objective 2 (Section \ref{sec:researchQAO}) covers the challenges the DSN faces, the next section will consider these adoption criteria in light of those challenges. \subsubsection{DSN challenges addressed by commercial DLT adoption criteria} \label{sec:challengesaddressed} \paragraph{Multiparty} The DSN can be viewed as multiparty both externally and internally. As illustrated in Figure \ref{fig:DSNRichPicture}, the DSN relies upon a network of external parties - equipment and items are purchased from, maintained and calibrated by, and subsequently disposed of by industry. These are not merely external touch points - the use of contractors embedded on operations, for repair and resupply, has increased steadily since the 1990s as part of the `Whole Force' concept \parencite[p.~9]{uttley2005}.
An example of such extensive arrangements is the UK Military Flying Training System (MFTS); here DE\&S uses both industry equipment and personnel in a Private Finance Initiative to train armed forces aircrew \parencite[p.~37]{MoD2016}. Despite the depth of industry links, the MoD has been criticised for failing to manage its contracts proficiently. The Public Accounts Committee noted that the MFTS contract was late, over-budget and underperforming; a key reason being that data to manage the contract was unavailable, as it was ``held in pockets within the Department, and is not routinely analysed'' \parencite[pp.~14-16]{Parliament2015}. The other element of multiparty is internal; as Oracle notes, not all parties to a DL have to be external. This is again an area with which the DSN struggles; Figure \ref{fig:naoBowman} illustrates what the National Audit Office \parencite*{NationalAuditOffice2006} described as ``complex inter-relationships'' in Bowman procurement, which contributed to the failure of the project. Beck and Muller-Bloch's \parencite*{Beck2017} case study of a firm that did adopt DLT found that both intra- and inter-organisational boundaries faded due to the decentralised nature of data sharing - this could therefore benefit the DSN, whose very definition is a ``flexible set of [...] connecting points [and] linked nodes'' \parencite[p.~9]{MoDJDP2015}. \begin{figure} \includegraphics[width=\textwidth]{naoBowman} \caption[Bowman radio procurement stakeholders]{Bowman radio procurement stakeholders \parencite{NationalAuditOffice2006}} \label{fig:naoBowman} \centering \end{figure} The DSN can therefore be seen as a multiparty enterprise - internally due to its federated nature and externally due to its relationship with suppliers. SAP in its blockchain adoption advice recommends that one party dominates this multiparty arrangement - possibly because the dominant player might be able to influence others to adopt a new technology. This would fit well with the MoD: it is often a major client of suppliers and consequently is in a position to influence them. Thus the multiparty factor of DLT adoption could apply to the DSN, so helping the MoD overcome its challenges with sharing data across organisational boundaries. \paragraph{Auditability} The next factor that all three vendors agree on is auditability. When one considers the criticality of equipment (e.g. armaments, medical and radiological items) that makes up the DSN, this is a key requirement. An example can be taken from the Haddon-Cave report into the mid-air explosion of a Nimrod surveillance aircraft over Afghanistan with the loss of 14 lives. This report tracks the procurement of a rubber seal that was the likely cause of the fuel leak responsible. This had been sold by the manufacturer for £15 and, after having passed through two subsequent sub-contractors, was eventually procured by Defence Equipment \& Support (DE\&S) for £123.50 \parencite[pp.~101-102]{HaddonCave2006}. DE\&S assumed that at this cost they were procuring the item from an ``accredited aviation supplier''; however, neither the manufacturer nor anyone else in the procurement chain was performing any quality control. This issue might be ameliorated by a DLT, as interviews with technologists involved with the automotive industry revealed.
If DE\&S implemented a system whereby any party supplying airworthiness items had to log dates of manufacture and identifying batch or serial numbers on a blockchain, then any quality checks performed, whether by the original equipment manufacturer or a third party, could also be logged against that item on the same blockchain. Signatures using public-private key cryptography would ensure that quality control was only recorded by authorised parties; a minimal sketch of such a signed record follows below. Although this would still leave the digital-physical gap (i.e. how can one be sure the item on the blockchain is the same as the item in hand), it would help in ensuring the provenance of items where many layers of sub-contractors are involved. Auditability could therefore be a strong driver for DSN adoption of DLT.
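The sketch below illustrates the signing step just described, using the open-source Python \texttt{cryptography} library. The record fields, stock number and key handling are the author's illustrative assumptions; in practice keys would be issued and certified under an agreed public key infrastructure.
\begin{verbatim}
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey)

# Hypothetical supplier key pair; real keys would be certified under
# a DE&S-governed public key infrastructure.
supplier_key = Ed25519PrivateKey.generate()
supplier_pub = supplier_key.public_key()

# An illustrative quality-control record logged against a batch of
# airworthiness items.
record = json.dumps({
    "nsn": "1560-99-123-4567",   # illustrative stock number
    "batch": "B-2018-0042",
    "event": "quality_check_passed",
    "checked_by": "OEM Inspection Dept",
    "date": "2018-05-10",
}, sort_keys=True).encode()

signature = supplier_key.sign(record)

# Any node holding the supplier's public key can confirm the record
# was written by an authorised party; verification raises
# InvalidSignature if the record or signature has been altered.
supplier_pub.verify(signature, record)
\end{verbatim}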
\paragraph{Manual Processes} All three vendors agree that adopting companies should look to DLT to replace manual processes, specifically where those manual processes introduce errors. This factor is also identified by Lord Holmes \parencite*{Holmes2017}, who argues that both the health sector and real estate markets suffer from excessive manual reconciliation which could be eradicated by DLT. This problem also exists in the DSN, however, as illustrated by the NAO while investigating the MoD's Logistics Information Systems \parencite{NationalAuditOffice2011}. Process mapping the delivery of a single item to Afghanistan, the NAO found data on its movement recorded across four different systems. To allow full visibility of the consignment the data had to be transferred manually, involving portable hard drives and re-keying, and was thus error-prone. Although the situation since 2011 has improved, with the limited introduction of some new Log IS, many examples of manual reconciliation still exist. However the existence of a manual process is not in itself a strong driver towards DLT specifically - the basic premise of almost all IS is that they take an unstructured process and automate it, in the process reaping productivity gains \parencite{Davenport1990}. This particular criterion might therefore be best considered not as a single driver, but as one to be taken in combination with others: when looking for potential DLT use cases, a logical starting point is those that currently involve a large proportion of manual effort. \paragraph{Intermediary} The next criterion is not universal: SAP and Oracle highlight the role of intermediaries, while IBM does not. The role of intermediaries is also highlighted in Greenspan's \parencite*{Greenspan2015} analysis; this is not surprising, as one of DLT's founding principles was to remove intermediaries - in the case of Bitcoin, the banks. This criterion does not necessarily map across to the DSN because the MoD itself often plays the role of Greenspan's ``trusted intermediary,'' holding an outsized position in arrangements between it and its suppliers. That does not mean there are no aspects to explore here - one avenue might be a potential `disintermediation' of roles the MoD plays. For instance, the role of the United Kingdom National Codification Bureau (UKNCB) is to assign NATO Stock Numbers (NSN) on behalf of the UK to items procured by the MoD. But as the majority of the item information is supplied and updated by the manufacturer, it might be more efficient for the MoD and manufacturer simply to share a DL; this use case will be further discussed in Section \ref{sec:codification}. Disintermediation could therefore be an internal effect.
Although the MoD may play an outsized role with its own suppliers, this does not apply in coalition, where the DSN works alongside other nations - a key element of DSN doctrine, as illustrated at Figure \ref{fig:DSNRichPicture}. An example here is the F-35 Joint Strike Fighter (JSF) project, where equipment is shared between a multinational alliance of twelve countries. Each time repairable JSF equipment is transferred between nations it is accompanied by an Electronic Equipment Logbook (EEL) which shows all maintenance performed on that item. There has been criticism of how this is handled within the JSF's Autonomic Logistics Information System (ALIS) \parencite[p.~53]{Behler2018}; using a distributed system to record and share this information might prove more effective. As ALIS has also been criticised for inappropriately sharing sensitive data \parencite{Seidel2017}, a DL could also have the advantage of giving nations more granular control over the data to be shared (e.g. a nation would share what maintenance was conducted, but might redact where that maintenance was conducted). Consideration should be given, though, to whether a standard relational database might be more efficient - possible where there is an agreed central authority to trust; trust being the subject of the next criterion. \paragraph{Trust} Trust is not mentioned by all vendors: Oracle mentions it explicitly, while IBM's reference is implicit in its question on contractual relationships. As discussed, the DSN relies heavily on contractual relationships, and this research discovered trust issues here, particularly involving Contracting for Availability (CfA) - where the MoD pays a contractor for the performance of a platform (e.g. days a ship is available for tasking) rather than for the equipment outright \parencite{Caldwell2014}. One MoD interviewee reported frequent disagreements with industry when reviewing CfA performance indicators, because both parties believed their own data to be the more reliable. Echoes of this can be found in official publications; for instance, the NAO's assessment of the MoD's current equipment plan raises ``significant risk around ... the quality of contractor data'' \parencite{Parliament.HouseofCommons2017}. Trust in contractor data therefore has room for improvement, meaning that the DSN meets this criterion for DLT adoption. \paragraph{Fraud} IBM and SAP both list fraud minimisation as a reason to adopt DLT, although this is a wide category, with IBM including cyber-attack and human error, and SAP a more general `risk.' Oracle, although not having a specific fraud criterion, refers to `corruptible' intermediaries. Given the sums of money involved in the DSN it is perhaps not surprising that contractor misconduct does occur - forty-four allegations of bribery and corruption involving the equipment budget were made by the MoD between 2011 and 2016 \parencite{PressAssociation2016}. Similarly the Single Source Regulations Office \parencite*{SSRO2016} stated that £61 million charged by DE\&S contractors may be for ``costs that are not appropriate, attributable and reasonable.'' This is a criterion that again fits the DSN. Fraud within the DSN is not limited to financial matters such as inflated costs; it can also have a physical impact, such as through the supply of counterfeits. Indeed, 15\% of spare and replacement semiconductors procured by the US military are estimated to be counterfeit \parencite{SemiconductorIndustryAssociation2013}.
Within a defence context counterfeit products raise not only operational issues, due to reliability and quality concerns where unscrupulous sub-contractors use fake items to reduce costs, but also cyber-security risks - counterfeit electrical items can be an attack vector for hostile nation states. As defence equipment becomes ever more complicated and reliant on contractors, this is a growing problem \parencite{Barnas2016}. Hsieh and Ravich \parencite*{Hsieh2017} directly address this problem in their analysis of how DLT might be used to protect the Industrial Base from supply chain attacks. They propose that contractors and sub-contractors involved in the manufacture of complex systems are provided with accounts on a blockchain, and payment is only rendered when a prime or sub-contractor records value-adding activity on this chain. Data analytics performed on this chain would then highlight anomalies where bad actors (i.e. front companies) might have the opportunity to supply counterfeits. Although this is a bold vision, there are considerable challenges - for instance, influencing all the sub-contractors to join the chain. Either way, fraud is certainly a criterion where the DSN and DLT might interact. \paragraph{Periodicity and Visibility} Only one vendor, Oracle, provides periodicity and visibility as criteria within Table \ref{table:DLTAdoptionComparison}. Both are tangential to DLT implementation - many systems provide real-time visibility and are periodically reconciled. Thus these criteria are excluded from the proposed evaluation framework. \subsubsection{Academic DLT evaluation frameworks} \label{sec:Academic} Having examined a sub-set of commercial advice as to when DLT should be adopted and how this might address DSN challenges, the academic literature will now be reviewed. This is far less focussed than the commercial publications on solving particular business problems. Rather, academia focuses on the technical aspects of implementation, as borne out by previous DLT literature reviews \parencite{Yli-huumo2016, Risius2017}. Consequently these frameworks, unlike the vendor-supplied ones, will not be compared to the challenges the DSN faces (Research Objective 2, Section \ref{sec:researchQAO}). \begin{figure} \includegraphics[width=\textwidth]{PeckDoYouNeed} \caption[Peck's blockchain adoption decision flowchart]{Peck's \parencite*{Peck2017} blockchain adoption decision flowchart} \label{fig:PeckDoYouNeed} \centering \end{figure} Peck \parencite*{Peck2017} provides a decision tree to assist enterprises in deciding whether they require a blockchain and, if so, whether public or permissioned, as at Figure \ref{fig:PeckDoYouNeed} (NB - Peck is a journalist, but as she is writing in a technical journal this will be assessed as academic literature). Of the seven possible paths a user might take through the tree, four result in not needing a blockchain, two in a permissioned blockchain and only one in a public blockchain. The pathways that lead to DLT adoption are based around censorship resistance and universal access, while if speed, cost, predictability or privacy are important, Peck believes you should avoid blockchain. The logic behind this is unclear, if not actually circular. If the user answers the starting question, ``Can a traditional database technology meet your needs?'', negatively, the user will still be redirected to the default of ``You don't need a blockchain.'' But in this case what solution is Peck proposing for the enterprise - a non-computer-based IS (e.g.
a filing cabinet)? Alternatively, maybe Peck is assuming that a ``no'' to the traditional database question might be the wrong answer - in which case the user should be redirected to the traditional solution; however, if answers cannot be relied upon, the decision tree concept is flawed. \begin{figure} \includegraphics[width=\textwidth]{doyouneedWust} \caption[Wust \& Gervais' blockchain adoption decision flowchart]{Wust \& Gervais' \parencite*{Wust2017} blockchain adoption decision flowchart} \label{fig:doyouneedWust} \centering \end{figure} Wust \& Gervais \parencite*{Wust2017} provide a more precise decision tree than Peck's at Figure \ref{fig:doyouneedWust}, with the initial question, regarding the storing of state, usefully focussing on concepts rather than specific technologies. With this version four out of seven paths lead to ``don't use blockchain,'' and singular paths lead to private permissioned, public permissioned and permissionless respectively. One of their key decision points, as highlighted by the industry criteria, revolves around trust. Here trust is defined as assuming ``no participant is malicious''; this is a far stricter definition than that expressed by industry - Greenspan \parencite*{Greenspan2015}, for instance, believes mistrust can exist within the same organisation where one unit is unwilling to let another modify its data. Of particular relevance to this research is the authors' scepticism over supply-chain management DLT because of the digital-physical gap, or the difficulty of confirming that the object that exists on-chain matches that in reality; potential solutions to their criticisms are examined in Section \ref{sec:SupplyChainProvenance}. \begin{landscape} \begin{figure} \includegraphics[width=\paperwidth]{WEFFlow} \caption[World Economic Forum's blockchain adoption decision flowchart]{World Economic Forum's \parencite{Mulligan2018} blockchain adoption decision flowchart} \label{fig:WEFFlow} \centering \end{figure} \end{landscape} One of the more authoritative decision trees is published by the World Economic Forum at Figure \ref{fig:WEFFlow}, which is adapted from an earlier academic model \parencite{Maull2017}. This decision tree is the most sophisticated reviewed, with extensive descriptions of decision points and worked examples. However there are contradictions: the first step asks ``are you trying to remove intermediaries or brokers?'' yet the worked use case features an imaging company where ``the intermediaries are actually the boundaries of the firm that currently hold the GPU computational capacity for rendering images.'' It is difficult to imagine a firm without boundaries - which effectively means that all firms could answer `yes' to the first question. This does not detract from the use case (which is similar to Angrish et al.'s \parencite*{Angrish2018} cybermanufacturing example), but does illustrate that an inflexible process for deciding where DLT may fit is unsuited to so fluidly defined a technology.
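The comparative precision of the Wust \& Gervais tree is easiest to appreciate when its logic is written out directly. The following is a minimal sketch, with the question wording paraphrased from Figure \ref{fig:doyouneedWust}; the function is purely illustrative.
\begin{verbatim}
def wust_gervais(store_state, multiple_writers, online_ttp_available,
                 writers_known, writers_trusted, public_verifiability):
    """Route a use case through the decision tree (paraphrased)."""
    if not store_state or not multiple_writers or online_ttp_available:
        return "don't use blockchain"
    if not writers_known:
        return "permissionless blockchain"
    if writers_trusted:
        return "don't use blockchain"
    return ("public permissioned blockchain" if public_verifiability
            else "private permissioned blockchain")
\end{verbatim}
Expressed this way, the brittleness common to all such trees is also apparent: a single misjudged boolean routes the enterprise to an entirely different architecture, which reinforces the case made below for an analogue rather than a binary approach.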
The academic world is therefore more reticent to propose DLT as a solution for business problems than the commercial one. There are a number of possible reasons - the first and most obvious being the lack of commercial incentive. However, established companies such as IBM, SAP and Oracle are likely to prefer long-standing business relationships over short-term gain; they also potentially have more practical insight into business problems than academics. A more subtle problem might be the ``not invented here'' syndrome \parencite{Antons2015}. Narayanan and Clark \parencite*{Narayanan2017} comment that although Nakamoto's white paper was more novel than typical academic research, it was initially ignored by academia because of Nakamoto's rejection of academic norms such as peer review. Furthermore, they observe that many academics continued to argue that Bitcoin could not theoretically work, even when reality contradicted this. Though this can only be taken so far - as the literature review shows, many academics are engaging with DLT. Lastly, the divergence of views might be explained by the difficulty of defining DLT: as discussed, there is not yet a standard definition. As Peck \parencite*{Peck2017a} admits elsewhere, DLT is like the parable of three blind people describing an elephant: ``one person feels the leg and it's a tree trunk, another person feels the side of the elephant and it's a wall, and a third person feels the trunk and it's a snake.'' Regardless of the reason, considerable difference of opinion exists over when DLT should be applied to processes and when not; in truth, only practical application will tell. \section{An evaluation framework for DSN} \label{sec:DSNevaluationframework} Synthesising this elephant into one framework for evaluating DSN use cases is therefore difficult. However it is worthwhile; given the DSN's scope this research cannot cover all possible use cases, but it can contribute by designing a framework which can be used to shortlist use cases for development (Research Objective 3, Section \ref{sec:researchQAO}). A decision tree, following the examples in Section \ref{sec:Academic}, was considered but rejected. As Mulligan et al. \parencite*{Mulligan2018} inadvertently demonstrated, `yes' or `no' questions can lead to contortions when valid use cases do not conform to a pre-designed question set - not helped by DLT being so widely defined. Rather, an analogue approach is required which rates use cases on a spectrum of DLT adaptability. Inspired by the Gartner Magic Quadrant series of data representation \parencite{Whitehorn2007}, the proposed framework takes the form of a plotted chart and is a visual aid. Utility and ease of implementation form the two axes. Use cases are plotted against these axes and are represented by a point whose size corresponds to the impact of the use case on the IS landscape of the organisation. The definitions of the terms utility, ease of implementation and impact are: \begin{description} \item[Utility.] The benefit to the enterprise of applying DLT to a use case - this metric recognises that some processes are more suited to DLT adoption than others. \item[Ease of implementation.] Although there might be utility in applying DLT to a process, this recognises that the business change required may be difficult or require substantial resources. \item[Impact.] The size of the change this will have on the IS landscape of the enterprise as a whole; for example, will it affect everyone within the enterprise, or a small team? \end{description} Figure \ref{fig:graphdemo} is an indicative example of how this graph might look when populated. This graph shows the assessment of use cases A, B and C. Use Case A is a very good fit for DLT adoption (utility) and will have a large effect on the enterprise (impact), but will be difficult to implement (ease of implementation). Use Case B alternatively will have a medium impact and will be very easy to implement - however, imposing DLT on the business process will produce few benefits as opposed to another method. Use Case C is both a good fit for DLT and easy to implement, but will have a small impact on the enterprise (e.g. minimal users, small revenue stream). \begin{figure} \includegraphics[width=\textwidth]{graph_demo_4} \caption[Indicative example of evaluation framework]{Indicative example of evaluation framework} \label{fig:graphdemo} \centering \end{figure}
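A chart of this kind is straightforward to reproduce; the sketch below plots the three indicative use cases using matplotlib. The scores are invented for demonstration only and carry no analytical weight.
\begin{verbatim}
import matplotlib.pyplot as plt

# Illustrative tuples: (ease of implementation, utility, impact).
# Axis scores are out of 15; impact is mapped to marker area.
use_cases = {"A": (3, 13, 900), "B": (13, 4, 400), "C": (12, 12, 100)}

fig, ax = plt.subplots()
for name, (ease, utility, impact) in use_cases.items():
    ax.scatter(ease, utility, s=impact, alpha=0.5)
    ax.annotate(name, (ease, utility))
ax.set_xlim(0, 15)
ax.set_ylim(0, 15)
ax.set_xlabel("Ease of implementation")
ax.set_ylabel("Utility")
plt.show()
\end{verbatim}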
Using the above exploration of the commercial and academic literature, criteria have been selected to assess where use cases fall along each axis and how impact can be assessed. Wherever possible these criteria utilise some measurable aspect of the enterprise, so allowing objectivity. It is recognised, however, that when dealing with complex social systems (such as an enterprise), and change to those systems, a completely metric-driven approach is difficult to achieve \parencite{VonHayek1989}. \subsection{Utility Factors} This axis measures the enterprise benefit gained by adopting DLT for a given use case. It draws on the criteria identified above by Goodhue and Thompson \parencite*{GoodhueThompson1995}, Greenspan \parencite*{Greenspan2015}, IBM \parencite[p.~37]{ManavGupta2017}, SAP \parencite{Roehricht2017} and Oracle \parencite{Goel2017}. Three factors are selected for measuring DSN utility: multiparty, trust and auditability. For each a score of between zero and five will be accrued, giving a maximum of 15 (with zero representing least utility). Before this assessment can be made, there are two obligatory factors. Firstly, Greenspan's \parencite*{Greenspan2015} ``database'' criterion: a structured repository for information must be required. Use cases without such a repository are awarded a maximum utility score of zero. The second factor concerns Goodhue and Thompson's \parencite*{GoodhueThompson1995} `Task Equivocality', relating to ``ill-defined'' or ``ad-hoc, non-routine business problems.'' Although the data contained within a DLT might contribute to solving ill-defined business problems, ultimately it is stored in a structured fashion - so there must be an understanding of what problems exist for a structure to be imposed. Therefore if the use case involves business problems which are poorly defined, DLT is unsuited and a maximum utility score of zero should again be awarded. Once this is completed the factors can be scored as follows (the rules are drawn together in a short sketch after this list): \begin{itemize} \item \textbf{Multiparty}. The number of organisational boundaries that data is shared across in the course of the business process. These relate to both internal boundaries (business units) and external boundaries (different enterprises). \begin{itemize} \item[0] If no organisational boundaries are crossed then a multiparty score of zero is ascribed. \item[1] If one internal organisational boundary (i.e. two business units) is crossed then a score of one is ascribed. \item[2] If two internal organisational boundaries (i.e. three business units) are crossed then a score of two is ascribed. \item[3] If three or more internal organisational boundaries or one external boundary (i.e. two enterprises) are crossed then a score of three is ascribed. \item[4] If two external boundaries (i.e. three enterprises) are crossed then a score of four is ascribed. \item[5] If three or more external boundaries are crossed then a score of five is ascribed. \end{itemize} \item \textbf{Trust}. Whereas organisational boundaries can be precisely measured, trust is an abstract concept.
The literature on trust in organisations was reviewed \parencite{Dietz2006, McEvily2011} and the subject discussed in interview, which led to the selection of three trust-influencing factors: \begin{itemize} \item \textbf{Contractual relationships}. Commercial relationships will affect trust. If one organisation has a contractual relationship which includes performance penalties or bonuses, this will likely have an impact on trust. The same applies when organisations share information with competitors. Although this typically concerns external relationships, it could also apply to internal DSN business units - especially where written service agreements are used to hold parties to account. \item \textbf{Organisational functions}. If organisations are engaged in different areas of the enterprise (e.g. accounting, personnel) there may be more reluctance to let others alter their data, and therefore less trust. This builds on the work of Williams \parencite*{Williams2001} examining outgroups in organisations and finding that ``different functional areas ... view members of contrasting groups with distrust, suspicion, and animosity.'' \item \textbf{Culture}. Some organisations are by their very nature less likely to trust others outside of their sphere - within the DSN this applies to those business units which routinely deal with classified information (e.g. Special Forces, Submarines, etc) as opposed to more generic areas (e.g. commodity supplies). Another expression of this culture might be inter-service rivalry \parencite{Barlow1994}. \end{itemize} It is appreciated that the majority of these factors are subjective. However they can be used to form the basis of an assessment of trust for a use case, which can then be followed up by more rigorous investigation. Once the trust level has been ascertained it should be scored from zero (denoting full trust) to five (least trust). \item \textbf{Auditability}. This factor is based on the importance of preserving the information stored in a DL. The constituents here are: \begin{itemize} \item \textbf{Controlled Items}. The DSN manages items that are subject to regulatory control by either domestic or international regulation. Items that fall into this category are, for instance, those subject to the Polaris Sales Agreement, International Traffic in Arms Regulations (ITAR), Airworthiness items, items Attractive to Criminal or Terrorist Organisations (ACTO), etc. MoD will wish to assure itself of transactions involving these items, so adding weight to a DLT solution. Use cases involving these should be awarded up to two points, depending on the proliferation of such items (e.g. zero for nil items, one for limited amounts of controlled items, two for a use case focussed on controlled items). \item \textbf{Classification}. Use cases that concern a classification higher than Official will also benefit from DLT's immutability, typically (depending on the DLT implemented) by preventing adversaries from reading/writing data without leaving an audit trail. Use cases featuring classified items should be awarded up to two points, depending on the extent of classification (e.g. zero for nil items, one for limited amounts of classified items, two for a use case focussed on classified items). \item \textbf{Theft or Fraud}. Use cases which are particularly susceptible to theft or fraud (e.g. high-value/attractive items, or items susceptible to corruption) gain an additional one point.
This is relatively judgement-based owing to criminality's pervasive nature; however, as policy designates certain items valuable and attractive, there is some basis on which decisions can be made. \end{itemize} \end{itemize} \subsection{Ease of implementation} While the y-axis of Figure \ref{fig:graphdemo} represents how suited a particular use case is to DLT adoption, the x-axis represents ease of DLT implementation. This measure will allow the enterprise to consider factors such as time and cost prior to deciding on which project to implement. Studies were therefore analysed which defined taxonomies of constraints to the successful implementation of IS \parencite{Yeo2002, Al-ahmad2009} - although what constitutes success \parencite{Agarwal2006, Shaul2013} was considered out of scope. One common theme was the importance of soft factors, e.g. the project manager \parencite{Mohd2011} and team communication \parencite{Mohan2011}. Despite this, soft factors are not measured on this axis as they are typically reflective of management within the whole enterprise rather than specific to DLT use cases. It is recognised that further studies may wish to consider this - for instance by measuring resistance to change or project manager skills. Instead, to determine ease of implementation constraints, two approaches were used. Firstly, adaptive change cases proposed to the Defence Logistics Directorate's Information Systems Working Group were analysed for common themes which had expedited or stalled implementation. Secondly, implementation factors were elicited from interviewees. These methods, in combination with an analysis of the literature above, resulted in four factors, again with a maximum score of 15 points (with zero points representing most difficulty): \begin{itemize} \item \textbf{Contractorisation}. There are typically four hierarchies of software development within the DSN: end-user computing (where a non-technical user creates scripts such as Visual Basic in Excel), software developers within the employ of MoD, independent developers contracted by a business unit for a project, and lastly underpinning contracts with a third party for a suite of IS. Each of these methods typically involves more bureaucracy than the previous one and acts as a drag on implementation. Although it is recognised there is currently a dearth of DLT development expertise \parencite{Stein2018}, this is likely to change over time as the technology becomes more familiar, meaning all hierarchies, with the possible exception of end-users, will be able to develop DLT solutions. Where a business unit has in-house developers three points should be scored; independent developers: two; and underpinning contract: one or zero points depending on the contractual relationship. \item \textbf{Manual process}. The existence of a manual process was an adoption factor highlighted by all three blockchain vendors in Table \ref{table:DLTAdoptionComparison}. However within this DSN evaluation framework it is used as an implementation rather than a utility measure. It does not feature on the utility axis as computer IS typically aims to improve a paper-based process by digitising it \parencite[p.~12]{stair2012information}; this is therefore far from limited to DLT. It does feature on the implementation axis because within a manual system it is easier to understand the flow of information and processes than when that process is embedded in code which requires specialised skill sets to interpret.
To grade against this metric the researcher should note the number of manual transactions used (e.g. paper forms, faxes, phone calls) as opposed to digital, and use that ratio to score from zero to four - i.e. all manual: four; half manual: two; all digital: zero. \item \textbf{System age}. The National Audit Office criticises the DSN for the antiquated nature of its logistic systems, noting that two of its main inventory IS began service in the 1980s \parencite{NationalAuditOffice2011}. The issue of legacy IS interfacing with new projects is considered a critical success factor by Fui-Hoon Nah, Lee-Shang Lau \& Kuang \parencite*{FuiHoonNah2001} and was revealed as a point of contention with interviewees. Therefore if the oldest system in the use case is less than five years old, four points are added, and with every further five years one implementation point is removed - so a 12-year-old system would have two points, and a 20-plus-year-old system zero points. \item \textbf{Complexity}. Beese et al. \parencite*{Beese2016} propose that IS complexity makes adaptive change more difficult; although as Xia and Lee \parencite*{XIA2005} discuss, there is not necessarily one agreed way to measure this. Complexity has many definitions \parencite{Geraldi2011}, but for the purposes of this dissertation Beese et al.'s structural complexity (as opposed to dynamic) is most useful, and can be sub-divided into number (size), variety and interdependence of systems \parencite*{Beese2016}. This measurement will be a count of the number of systems required to integrate with the DLT implemented - four points for a simple web front-end (as comes as standard with blockchains such as Hyperledger Fabric), three points for one to five systems, two points for six to ten systems, one point for eleven to twenty systems and zero points for over twenty. Both DSN service catalogues (see Section \ref{sec:Impact}) can be used to ascertain this figure. \end{itemize} Although the factors for both utility and ease of implementation are rudimentary, they are nevertheless adequate to provide a scale for comparing use cases. In practice the scale is intended to be adapted as circumstances dictate. \subsection{Impact} \label{sec:Impact} This last metric recognises that high utility and ease of implementation do not necessarily entail impact on the overall enterprise. An example of this is a system which has low business criticality - for instance a trivial application for recording office chores, or one which has only a few users. Naturally business impact will be specific to the enterprise in question. If organisations are following IT service management best practice, as specified by the Information Technology Infrastructure Library (ITIL), they will be maintaining a service catalogue, which should capture application business value \parencite[p.~104]{Axelos2012}. The DSN has two applicable service catalogues: the Defence Application Register (DAR) listing all applications used within MoD, and the Support Chain Information Services Architectural Repository (SCISRA) which covers applications provided by SCIS. Between them these contain the following pertinent information: \begin{itemize} \item \textbf{Users}. An approximate number of users for an application and associated data such as number of licences and installations. \item \textbf{Business Priority}. A range of metrics for deciding business priority including ``criticality,'' ``impact,'' ``business importance,'' ``importance rating'' and ``deployment priority''.
Although there is considerable overlap, these attempt to rate whether an application supports a business unit's primary function, and the inconvenience of a workaround were it not available; the last two also factor in the number of users an application has. \item \textbf{Contractual Support}. The level of support an application receives when it has been contracted out to third-party support, particularly Boeing Defence UK \parencite*{Boeing2018}. \end{itemize} It is unlikely that either the DAR or SCISRA is entirely current; however, they would be a good first source of information for measuring business impact. It is proposed that the ``importance rating'' above (which is scored 0 - 130) should be used to ascertain impact; by measuring across business priority and users it captures in a granular way the whole-enterprise footprint of an application. For the purpose of visually representing this data, as in Figure \ref{fig:graphdemo}, the impact score should be used as the circle's area, from which the radius (and hence circumference) can be derived. For instance a DAR Importance Rating of 55 would equate to a radius of 4.18 (i.e. $4.18 = \sqrt{\frac{55}{\pi}}$). The circle could then be drawn with a radius of 4.18mm or 4.18cm, or at any other scale. Where a use case covers more than one system, the highest Importance Rating should be used. In cases where there is no application currently in use (e.g. a manual solution) the DAR scoring matrix should be used to estimate an Importance Rating. This mechanism should be adjusted as required - especially should the service catalogues be found out of date. However, striving for objective criteria when measuring business impact should improve the rigour of a DLT use case selection process. \section{Results of questionnaire} \label{sec:Resultsofquestionnaire} The proposed evaluation framework is designed to identify where DLT might be best adopted in the DSN. However as generic use cases were used to prompt discussion within interviews, a lightweight version of the framework has been used to assess these: instead of the full criteria, industry and MoD interviewees rated the utility and ease of implementation of the use cases (see Appendix \ref{ch:questionnaire}) based on their experience. As the use cases are hypothetical, no measure of impact (Section \ref{sec:Impact}), which relies on an importance rating provided by the business, is given. Data collected from these interviews was averaged within the two sectors and is presented in Figure \ref{fig:useCaseGraph}. \begin{figure} \includegraphics[width=\textwidth]{useCaseGraph_track} \caption{DSN evaluation framework applied to interview use cases} \label{fig:useCaseGraph} \centering \end{figure} The ease of implementation axis ranges from one (very difficult) to five (very easy), while utility ranges from one (not at all useful) to five (very useful). All use cases were rated above moderately useful (three) by all interviewees. Three use cases (Customs, E\&AM, CfA) shared a similar pattern - compared to technology-sector interviewees, defence interviewees considered them easier to implement and of greater (or equal) utility. One possibility is that defence interviewees, being more aware of the problems the DSN faces, are more open to solutions. Alternatively this may suggest a weakness in the research: the pro-DLT nature of the introduction videos might have led to a bias in defence interviewees, while technology interviewees may have a more balanced understanding of DLT.
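As an aside, readers wishing to reproduce charts such as Figure \ref{fig:useCaseGraph} (or the full framework chart of Figure \ref{fig:graphdemo}) could script them along the following lines. This is a minimal Python sketch: the scores and importance ratings below are hypothetical placeholders rather than data from this research, and the area-to-radius conversion simply mirrors the formula given in Section \ref{sec:Impact}.

\begin{verbatim}
import math
import matplotlib.pyplot as plt

# name: (ease_of_implementation, utility, importance_rating) -- placeholders
use_cases = {"A": (2, 14, 110), "B": (13, 4, 55), "C": (12, 12, 20)}

fig, ax = plt.subplots()
for name, (ease, utility, rating) in use_cases.items():
    # the impact score is treated as the circle's area;
    # a rating of 55 gives a radius of about 4.18 units
    radius = math.sqrt(rating / math.pi)
    # matplotlib's marker size is an area in points^2, so scale the radius
    ax.scatter(ease, utility, s=math.pi * (radius * 3) ** 2, alpha=0.5)
    ax.annotate(name, (ease, utility), ha="center", va="center")

ax.set_xlim(0, 15)
ax.set_ylim(0, 15)
ax.set_xlabel("Ease of implementation")
ax.set_ylabel("Utility")
plt.show()
\end{verbatim}

The choice of three points per framework unit for the radius is arbitrary; any consistent scale preserves the relative impact of the circles.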
Codification (Use Case 1) was the exception, with the defence sector considering it more difficult to implement than the technology sector did (although continuing to believe it had greater utility). Reasons for this will be considered below in Section \ref{sec:codification}. The easiest use case to implement as rated by the technology sector was Codification (Use Case 1); for the defence sector it was joint first with Customs (Use Case 2). Contracting for Availability (Use Case 4) was considered the hardest to implement by the technology sector, and E\&AM by the defence sector. Both defence and technology considered Engineering and Asset Management (Use Case 3) the most useful and Customs (Use Case 2) the least. \section{Generic use cases exploration} \label{sec:genericusecaseexploration} This section will discuss the qualitative data collected - the interviewees' feedback on the use cases. \subsection{Codification - Use Case 1} \label{sec:codification} Codification was considered by technology-sector interviewees the easiest use case to which to apply DLT. A major factor in this was the lack of systems complexity - the Codification Support Information System (CSIS) is one relational database exclusively owned by MoD. This was coupled with existing business processes - MoD already contracts in DEFCON 117 \parencite{MinistryofDefence2013} for this information to be provided - as one interviewee stated: ``we are masters of our own destiny.'' This would act as a solid foundation for a DLT pilot - it could be accomplished internally and then expanded. Another factor in favour of implementation was that the database contains NATO Stock Numbers (NSNs), which represent classes of real-world objects, avoiding the complications of trying to track specific, unique items. The defence sector gave higher ease of implementation ratings than technology-sector interviewees in all but one use case: the Codification use case (Figure \ref{fig:useCaseGraph}). Partly this was because the UKNCB respondent, being intricately aware of the complexities of codification (e.g. other processes relying on the CSIS database), gave this a lower ease of implementation score. However other defence respondents were also sceptical of data sharing between MoD, prime contractors and sub-contractors when competition existed in the network over this data (especially pricing), and therefore believed this would make implementation difficult. All, however, rated the use case highly on usefulness - this was due to the multitude of third parties (e.g. Original Equipment Manufacturers (OEMs)) involved in the network. Interviews revealed there could be surprisingly little trust between DE\&S Delivery Teams (DTs) as regards item data; for instance one commodity-based team might have no requirement for an item to be certified (e.g. airworthiness), a position not shared by a DT responsible for supporting complex equipment. This led to a situation where although CSIS might hold one value, a DT could amend this data to another value on a Base Inventory System. Participants believed DLT could achieve data integrity across systems by ensuring all changes are immutably recorded and that consensus ensures only those with the relevant permissions are allowed to update - although it might be argued a well-designed relational database would also achieve this. These findings agree with academic research \parencite{Banerjee2018} suggesting DLT would be a good fit for master data management - although that research was authored by a DLT vendor.
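The permissioned, append-only record-keeping participants described can be illustrated with a short sketch. This is illustrative only: the class name, fields and parties below are hypothetical, and a real DL would replicate the history across organisations and enforce write permissions through consensus rather than within a single object.

\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class NSNRecord:
    nsn: str
    writers: set                                 # parties permitted to update
    history: list = field(default_factory=list)  # append-only audit trail

    def amend(self, party: str, key: str, value: str) -> None:
        if party not in self.writers:
            raise PermissionError(party + " may not amend " + self.nsn)
        # amendments are appended rather than overwritten, so every
        # change - and who made it - remains visible to all participants
        self.history.append((party, key, value))

record = NSNRecord("99-123-4567", writers={"UKNCB", "OEM"})
record.amend("UKNCB", "certification", "airworthiness required")
# record.amend("DT", "certification", "none")  # -> PermissionError
\end{verbatim}

Even in this toy form, the contrast with the situation interviewees described is clear: a Delivery Team without write permission could not silently diverge from the CSIS value, and any permitted amendment would carry its author with it.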
One factor that made for easy implementation - that MoD controls the system and can demand that suppliers provide details - also attracted criticism regarding the absence of disintermediation. Greenspan \parencite*{Greenspan2015} argues DLT is redundant if a ``trusted intermediary'' exists - yet this in theory is the role of UKNCB. Responses to this were inconsistent: some interviewees stated industry does not always trust the MoD; for instance items are not codified because industry is unwilling to provide data containing intellectual property (e.g. technical drawings) or commercially sensitive information (e.g. price). Others believed that although the MoD might be a trusted intermediary, there was an issue when contractors were collecting codification data on behalf of other contractors, as in industry coalitions. Potentially DLT could partly alleviate both problems by giving OEMs verifiable control over who sees what data - for instance price could be seen by the MoD, but not by other commercial companies. It is unrealistic however to expect a technological panacea - industry is likely to wish to keep secrets regardless. The NAO has reported that codification has in some cases not occurred because of the desire to reduce costs \parencite[p.~31]{Parliament.HouseofCommons2017a}. Interviewees considered that a UKNCB-run DLT might allow seamless integration of OEMs' databases of items manufactured or procured, so reducing cost. The emergent nature of DLT means that it is hard to predict whether DLT would be more cost-effective than a relational database, though it is possible. Where the disintermediation argument works in DLT's favour is codification at a supranational level. All NSNs issued by the UK have the NATO Country Code `99' applied, indicating the country of original codification; this is then shared with other participating countries via the NATO Master Catalogue of References for Logistics (NMCRL). This is a loosely coupled system as participating countries need not be NATO members, simply approved `partner' countries - for instance non-NATO Singapore has the country code `32' \parencite{NATO2018a}. Using DLT the intermediary role of NATO could be considerably reduced, with associated cost savings. In this scenario manufacturers in each country would submit transactions (e.g. the creation of a new item of supply) onto a DL; these transactions would then be checked against the business logic imposed by the relevant national codification bureau (e.g. all fields contain valid data, the item is unique, etc). Passing the business logic would result in the transaction being cryptographically signed with that country's identifying code and submitted to a ledger visible to all NATO partners. Information could be encrypted for release to specified parties - for instance detailed technical drawings might be submitted to the chain, but only viewable by the originating country and the OEM. This use case deserves further investigation - at either the national or supranational level. \subsection{Revenue \& Customs - Use Case 2} This use case was rated the least useful by both sectors. It was considered the second easiest to implement by the technology sector, and joint second by defence. Much of this lack of utility was due to this use case being the `odd one out' of the four - Revenue \& Customs is something that happens to the MoD, as a transactional cost of doing business, rather than something MoD sets out to achieve.
Correspondingly it was felt there was little point in MoD forging ahead with something outside its control. A world-class DL could be established by DE\&S to capture this data, but if no one else could be persuaded to use it (e.g. HMRC, shipping lines, etc) it would be of no use. Despite this lack of utility, the use case rated highly for implementation - coming second in line (although far behind codification). The reasoning behind this was that interviewees strongly felt that much of the current manual process (e.g. bills of lading) was ripe for digital conversion. As a result there was a strong feeling amongst interviewees that if another organisation were to lead on this, the MoD should be keen to participate. Several interviewees were already aware of organisations pursuing similar endeavours (apart from the IBM-Maersk trial mentioned in the questionnaire) and felt that MoD would gain advantage by joining these schemes when they had reached scale, rather than piloting its own. \subsection{Engineering \& Asset Management - Use Case 3} This use case was rated the most useful by both sectors. Part of this might have been cultural - the MoD has been criticised for being more focused on equipment and engineering than on the logistics tail supporting that equipment \parencite[p.~8]{Parliament.HouseofCommons2017a}, and this may have been reflected amongst interviewees. This seems unlikely as a reason alone, however, given that interviewee selection aimed to maximise those with broad experience of each use case. Several interviewees referred to current issues affecting Engineering \& Asset Management (E\&AM) systems across DE\&S. JAMES (Joint Asset Management and Engineering Solutions), which records maintenance on land-managed equipment, is used by the Armed Forces but is not implemented by contractors who manage third-line warehousing on behalf of the MoD - which means MoD does not have visibility of its assets throughout the reverse supply chain. One defence-sector interviewee highlighted that errors occurred where MoD had lost sight of DE\&S assets managed by industry, and pointed to recent changes in logistic policy to prevent this. Interviewees felt DLT, by integrating contractor systems with those used by the MoD, might allow this visibility. IBM made this point most strongly during discussions. Their view was that DLT is unlikely to replace current E\&AM systems, but rather would act as back-end glue allowing assets to transfer seamlessly from the MoD to partners and back, so ensuring the integrity of records. This solution may be more palatable than trying to assert one system across a plethora of suppliers, and allows parties to employ systems suited to their own needs. There is evidence of this approach being investigated by industry, such as Rolls-Royce with aero-engines \parencite{Bryan2018}. This factor also explained why this use case was rated second most difficult by the technology sector, and most difficult by defence. Attempting to connect a DL back-end with established E\&AM systems would be challenging, likely requiring cross-industry co-operation. This could be difficult in a marketplace where vendors use competitive differentiation rather than making their products interchangeable with those of rivals. Were this to be achieved, however, all felt that the gains across the DSN would be significant.
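The ``back-end glue'' idea can be sketched briefly. In the toy example below, custody transfers are recorded as shared ledger events that each party's own E\&AM system can replay; the asset identifier, party names and event fields are hypothetical, and a real deployment would replace the plain list with a replicated, consensus-governed ledger.

\begin{verbatim}
ledger = []  # stands in for a ledger replicated between MoD and contractors

def transfer(asset_id, holder_from, holder_to, when):
    """Record a custody transfer that both parties can see and verify."""
    ledger.append({"asset": asset_id, "from": holder_from,
                   "to": holder_to, "date": when})

def current_holder(asset_id):
    """Replay the event history to establish who holds the asset now."""
    events = [e for e in ledger if e["asset"] == asset_id]
    return events[-1]["to"] if events else "unknown"

transfer("VEH-0042", "Field unit", "Third-line contractor", "2018-06-01")
transfer("VEH-0042", "Third-line contractor", "Field unit", "2018-07-15")
assert current_holder("VEH-0042") == "Field unit"
\end{verbatim}

Because every system derives its view from the same event history, the MoD and its contractors cannot drift apart on who holds an asset - the reconciliation problem interviewees described simply does not arise.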
\subsection{Contracting for Availability - Use Case 4} The Contracting for Availability (CfA) use case was considered the most complicated of all - and therefore scored lowest on ease of implementation by the technology sector, and second most difficult by defence. One defence interviewee argued that even routine commercial activities were difficult within MoD, let alone more ambitious plans. Examples of CfA were elicited during interview to confirm this use case. One current contract with BAE pays a daily rate when a ship is available for tasking. However when Operational Defects occur the MoD gains `credits', with more serious defects gaining more credits, which are subtracted from the daily rate. Credits normally commence on the date a ship signals a defect and end when the ship signals the defect is rectified - however in some circumstances credits also cease when the contractor can prove dispatch of the part rectifying the defect. Interviewees confirmed that a source of frustration was having to audit large amounts of hard-copy paper to compile credits from a variety of sources at the end of the reporting period. The advantage of a DL to which all parties, including the contractor, could append was instantly seen and considered a strong case for the technology. This use case is not without flaws. A system which results in winners and losers will create incentives for people to try to game the system - for instance by claiming that stores were dispatched earlier than they were. This might especially be the case where a penalty is imposed by a smart contract; if one can fabricate the data with no audit process other than the smart contract, that incentivises malfeasance. The Defence Science and Technology Laboratory \parencite{Colley2016} raises similar concerns over incentives encouraging data to be generated to meet targets, rather than to record the truth. A further complication, if this use case is taken to its ultimate conclusion and payment is conducted on the blockchain using a cryptocurrency, is what might be referred to as the `locked-in problem'. Take a scenario where the MoD were to agree a \pounds12 million availability contract and to disburse that money using a smart contract alone. That would require the MoD providing \pounds12 million to the DL monitoring the contract up-front; if it were not prepared to do this then there would be little assurance it would abide by the terms of the smart contract - the smart contract might ask it to pay, but it could decide not to. However putting these funds in at the start of the contract is problematic because it means they cannot be spent elsewhere. CfA was therefore seen as very useful, but highly complex to implement in a reliable way. \section{Wider use case exploration} \label{sec:Widerusecaseexploration} A number of interviewees commented that the generic use cases presented were similar, with one technologist going as far as to state they were ``obvious.'' This is not necessarily negative; it is logical for discussion to begin with areas that are better understood. The generic use cases acted as a jumping-off point for qualitative evidence gathering on other areas for consideration - the following section expands on this. \subsection{Supply chain provenance} \label{sec:SupplyChainProvenance} An oft-cited use case for DLT is supply chain provenance - i.e. verifying the authenticity of items and equipment.
The use of DLT within defence for provenance has been proposed by a number of parties including Barnas \parencite*{Barnas2016}, Colley \parencite*{Colley2016} and Hsieh \& Ravich \parencite*{Hsieh2017}. It was not however selected as one of the four generic use cases within this research because of the perceived `digital-physical gap' problem. This problem, discussed in Section \ref{sec:twosidestothecoin} as part of the share versus prove debate and raised by academia in Section \ref{sec:Academic}, is that although a digital object (e.g. a circuit board's identifying serial number) might exist in immutable form in a DL, the circuit board in hand is not necessarily the genuine article. Although checking the serial number of a circuit board against a serial number in a DL, as proposed by Barnas \parencite*{Barnas2016}, is better than no form of verification, any adversary (e.g. hostile nation states, criminals) capable of creating a fake or malicious circuit board will likely be capable of creating fake serial numbers. Several methods could be used to mitigate the digital-physical gap: \begin{itemize} \item \textbf{Bubble-tag}\texttrademark ~ \parencite{Prooftag2017}. An adhesive sticker (Figure \ref{fig:bubbletag}) incorporating a polymer layer that, when attached to an item, chaotically creates a pattern of bubbles which can be read electronically \parencite{Patraucean2010}. \begin{figure} \centering \includegraphics[width=2.5cm]{bubbletag2} \caption[Sample bubble tag]{Sample bubble tag\texttrademark ~ \parencite{Prooftag2017}} \label{fig:bubbletag} \centering \end{figure} \item \textbf{Q-ID}\texttrademark ~ \parencite{QuantumBase2017}. A graphene layer (Figure \ref{fig:qid}) with atomic-scale imperfections that reflect light in such a way that an identifying reference can be created \parencite{Roberts2015}. \begin{figure} \centering \includegraphics[width=\textwidth]{qid2} \caption[Q-ID Infographic]{Q-ID Infographic \parencite{QuantumBase2017}} \label{fig:qid} \centering \end{figure} \item \textbf{CryptoSeal} \parencite{Chronicled2018}. A tamper-proof Radio Frequency Identification tag (Figure \ref{fig:cryptoseal}) read by Near Field Communication-enabled devices, as used by Thales in a DLT proof of concept to track sensor equipment. \begin{figure} \centering \includegraphics[height=2.5cm]{cryptoseal} \caption[CryptoSeal]{CryptoSeal \parencite{Chronicled2018}} \label{fig:cryptoseal} \centering \end{figure} \end{itemize} Even more promising solutions exist for assemblies containing integrated circuits, as incorporated into the aforementioned Thales DLT pilot. Here software can be used to prove the existence of a physical unclonable function (PUF). Due to variations in manufacturing, all integrated circuits have different physical manifestations - for instance nanosecond differences in logic gate operations. As these characteristics are embedded in the circuit they form a signature that is difficult to replicate or change, since they are caused by a process outwith the manufacturer's control. Although one attack vector might be simply to copy the signature once it has left the circuit, this is made more complicated by the fact that more than one test can be run: using different combinations of gates, different temperatures, etc. Again this facility is available commercially \parencite{Intrinsic2018} and has been academically assessed \parencite{Gu2017}. The above examples are far from a comprehensive overview of the market; for instance they only link physical items to DLT, rather than processes.
Solutions for processes exist too, however - the Fishface case study captured video feed and GPS data onto a DL to record fishermen's catches \parencite{ZYenGroup2018} for fisheries protection purposes. This summary does however indicate that the digital-physical gap is bridgeable. Ultimately it could be argued that at a philosophical level the digital-physical gap can never be truly overcome: for instance, how can one be certain that the original equipment manufacturer is producing what it claims to be producing? Realistically, however, few security solutions are flawless, and the techniques explored above considerably mitigate the risk. In view of this, supply chain provenance is a use case that merits further exploration. \subsection{Certification} \label{sec:certification} Closely tied to provenance, DLT could also play a role in certification, as noted by Defence Research and Development Canada \parencite{Willink2018}. The defence enterprise is involved in many regulated activities: e.g. nuclear engineering, medicine, aviation and of course the application of lethality. To ensure these activities are conducted in accordance with external laws and internal regulations, there is a plethora of certificates that apply to items of supply: from airworthiness conformity to hazard data sheets for cleaning products. These are typically issued in paper form by an authority and accompany the item through its life-cycle. This process is error-prone: paper certificates can easily become separated from the relevant item, especially as it crosses organisational boundaries (e.g. storage, repair, calibration). This causes a range of issues: in the best case items are quarantined until a certificate can be sourced; in the worst case they are used in contravention of their design intent. Recording certificates for items on a DL can assist with this, due to DLT's strengths in public-private key cryptography and the sharing of information across boundaries. Imagine a scenario where a submarine depth gauge is due for periodic calibration: first a transaction is made in the ledger as the item is passed from the depot to a third-party calibrator; the calibrator then records on the ledger that the item has been calibrated, the name of the individual who calibrated it and when that calibration expires, signing the entry with a private key so making it irrefutable. When the item passes back to the unit, again recorded in the ledger, the calibration certificate is visible to all. Should a problem be revealed later, it is obvious where blame lies. Furthermore, as the certificate is digitised, information can easily be queried from the chain - it is a simple matter for the Commanding Officer to be presented with a list of all items requiring calibration prior to patrol. The UKNCB interviewee raised a related problem. When a new item of supply is codified, for instance a washer supplied by BAE, if certification is required a flag is raised on the relevant record on the codification system. However when this record is passed to a base inventory system, a project manager might append to that record an item from a different manufacturer performing the same function (for instance a washer bought off the shelf from B\&Q) which does not have the same certification or quality assurance guarantees. This can result in units demanding what they believe is a certified part from one manufacturer and receiving a non-certified part from another.
This problem can be prevented by encoding in the DL the business logic of which parties are authorised to add new part numbers to an NSN record. Although this problem can also be addressed in a relational database, DLT's difference is that the certification flag could be established when the OEM creates the item record and promulgated seamlessly to all DL-connected systems, rather than relying on post-event capture by the MoD. Although not included in the generic use cases, following the course of this research it is believed that the management of certification in the DSN is a strong use case for DLT. \subsection{Additive Manufacturing} Additive manufacturing (AM), or 3D printing, is where computer technology is used to solidify material to create an object in three dimensions. It has the potential to revolutionise the Defence Support Network, as spare parts could be manufactured at the front line to satisfy immediate demands, rather than being stored and then shipped from a rear echelon area \parencite{Campbell2011a}. AM has inherent risks however - including maliciously altering the design files or copying them without authorisation (so stealing intellectual property). Both the US Department of Defense \parencite{Dobesh2017} and the US Marine Corps \parencite{Daugherty2017} believe these risks might be mitigated by DLT, which they are piloting. Dobesh \parencite*{Dobesh2017} proposes a DL as a ``ubiquitous data bus,'' meaning design files can be encrypted and made viewable only by those with the right privileges, with the immutable nature of the DL preventing alteration prior to manufacture. This references back to Figure \ref{fig:ShareProveVenn}'s share vs prove Venn diagram. An AM design file is a digital artefact, in much the same way as cryptocurrency, and therefore sits at the intersection where a DL can perform both functions: sharing the artefact whilst at the very same time validating it. AM within the DSN is at an embryonic stage, so this use case is not an immediate one, although it has considerable future potential. \subsection{MoD Coin} Possibly the most unconventional use case is one inspired by discussion with Agility Sciences: MoD Coin. Here, when the MoD contracts a good or service it issues to the contractor a token, MoD Coin, stored on a DL. That token is redeemable when the contractor produces the good or service to an agreed specification, at which point the token holder can present it to Defence Business Services for exchange into fiat currency. This then allows the contractor who has been issued the token to raise funds to procure sub-assembly parts or services - they might for instance use it as collateral on a loan from a bank. Alternatively the token could be paid directly to the manufacturer of the sub-assembly, who would then be able to redeem it when the final product was delivered. The latter might also be used to gain business intelligence into the MoD network of sub-contractors below the prime-contractor level, although it is questionable how deep into the chain this would reach - it seems unlikely every sheet of bubble-wrap is going to be accounted for via this method. More sophistication could be added by other services utilising the MoD Coin DL. For instance suppliers could be reviewed: positively for early delivery, negatively if late or not to specification. This would help DE\&S Delivery Teams evaluate suppliers used previously by others.
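The token life-cycle just described - issue on contract, transfer down the supply chain, redeem on delivery - can be made concrete with a minimal sketch. Everything here is hypothetical (the names, values and delivery check are invented for illustration); a real implementation would live in a smart contract on the DL rather than in a single Python class.

\begin{verbatim}
class MoDCoin:
    """Toy model of a contract-backed token; not a real payment system."""

    def __init__(self, value_gbp, holder):
        self.value_gbp = value_gbp
        self.holder = holder
        self.redeemed = False

    def transfer(self, new_holder):
        # e.g. a prime contractor paying a sub-assembly manufacturer
        self.holder = new_holder

    def redeem(self, delivery_confirmed):
        # Defence Business Services exchanges the token for fiat currency,
        # but only once, and only when delivery has been confirmed
        if self.redeemed or not delivery_confirmed:
            raise ValueError("token is not redeemable")
        self.redeemed = True
        return self.value_gbp

token = MoDCoin(value_gbp=1_000_000, holder="Prime contractor")
token.transfer("Sub-contractor")
payout = token.redeem(delivery_confirmed=True)  # 1000000, paid once only
\end{verbatim}

Even in this toy form the design choice is visible: the token carries its own redemption rules, so whether a supplier has been paid, and by whom, is answered by the ledger rather than by reconciling separate accounting systems.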
The objects procured using MoD Coin (assuming the object itself was recorded on chain) could, as previously discussed, have certifications (e.g. tolerances, etc) recorded against that unique item or batch. This could even be applied by a third party who cryptographically signs to prove quality control; a condition of the MoD Coin being redeemed might be that this certification is present, with both verification and payment then accomplished via smart contract. Indeed the ultimate conclusion of this is that contractors might even use MoD Coin as a guarantee to issue their own coin (e.g. BAE or Babcock Coin), on the same basis as fractional reserve banking, knowing that it is unlikely all their suppliers would redeem those coins at the same time. This vision is an exciting possibility, but also the most ambitious use case laid out in this research. There would be many barriers to adoption - not least government accounting rules, but also whether the tokens would be accepted by suppliers. Despite these barriers this could be worthy of further research when the technology is considerably more mature. \subsection{Experimentation} If the last use case was unconventional, this final one is counter-intuitive: it does not start with a use case. Instead this approach proposes that individual business units experiment by adopting DLT to capture information (alongside more traditional methods), learn from its application and observe where use cases emerge. This is contrary to traditional MoD processes, which begin with capability analysis and requirements; not surprisingly this strand was inspired by interviews outside the Defence sphere (namely Z/Yen Group, who principally serve the financial industry). Although this use case has risks - specifically that taxpayers' money will be expended without deliverables - it is feasible: MoD has the organic capability to experiment in this way. After considering DLT's grand visions, this concept is refreshing in proposing emergent change arising out of needs identified from the user base. This also tallies with Holmes \parencite*{Holmes2017}, who suggests that digital disruption is only learnt in organisations by doing, and that small-scale pilots are essential to de-risk, optimise and develop. Whether this theme is taken forward will likely be the result of individual decisions in various business units, although buy-in by management will still be required; this research hopes that will be forthcoming. \chapter{Conclusion} DLT emerged from anarchic beginnings, but has the potential to impact all sectors of society, the economy and government. Its value comes from allowing different organisations to reach consensus on shared data, so removing much of the transactional friction common in business processes. This dissertation thematically reviewed the literature, reported results from interviews with technology and defence-sector employees on the use of DLT in the DSN, and proposed a framework for evaluating the utility, ease of implementation and impact of DL use cases. DLT's utility is not yet proven however, with few existing real-life applications in industry. Indeed even its definition remains open (Section \ref{sec:searchstrategy}); Figure \ref{fig:ShareProveVenn} illustrated this, showing DLs can look and act radically differently from each other and solve divergent problems. The fountainhead of this activity - Bitcoin - although potentially revolutionary, is unlikely to have direct applications within the DSN.
The evolutionary branch of Bitcoin's domesticated offspring, the permissioned DL, is however worthy of closer examination as regards the DSN. This is therefore a conditional endorsement. Applications of permissioned DLT outside of test environments are rare, with the exception of purely `prove' deployments such as the Guardtime KSI Blockchain. If efficiency gains were more obvious there would be larger-scale deployment; much current interest is doubtless driven by hype and greed. However, given that this is an emerging technology, this is to be expected. Even if large-scale DLT adoption occurs there is no guarantee that the DSN will prove a fertile ground for it. One of the key tenets of DLT is disintermediation. While this attribute is strongly suited to scenarios involving transactions without central authority and trust - perfect for digital cash, for example - it is less easy to apply to government departments. After all, in this instance the trusted central authority to disintermediate, MoD, is the very entity seeking to use DLT - a serpent eating its own tail. On the other hand there are pros for adoption in the DSN too. DLT is likely to be particularly useful when applied to situations where assets or services extend outside the boundaries of the MoD into industry or allied nations; this is one of the hallmarks of the DSN. Likewise the immutable nature of DLT is particularly useful considering the regulatory environment in which much of the DSN operates. Therefore the MoD should approach DLT cautiously, running pilot projects before making large-scale investment. Given the pros and cons of adopting DLT in the DSN, an evaluation framework is required to objectively assess pilot use cases. This research has proposed such a framework, which measures utility, ease of implementation and impact; it is hoped this will help identify DSN processes for DLT adoption. It may also form the basis of assessment in other areas of MoD, or in related industrial sectors. This research did not aim to identify specific DLT use cases; rather, generic use cases were selected for data gathering. Of the use cases studied, codification (Section \ref{sec:codification}) stood out, on both utility and ease of implementation, as a worthy contender for further study. Although not included in the questionnaire, it became apparent in the course of research that a use case involving certification (Section \ref{sec:certification}), possibly coupled with supply chain provenance (Section \ref{sec:SupplyChainProvenance}), also deserves further investigation - as it could greatly ease the challenges the enterprise faces in this area. It should be emphasised however that the use cases covered in this research are the genesis for further work, such as feasibility analysis, and not a definitive statement. \section{Strengths and limitations} Exploring the potential of DLT is timely given high levels of government, business and academic interest. As a result there is a plethora of research being published, and interviewees have been willing to contribute. Conducting this research from within the MoD has also been a benefit - previous experience meant that all use cases proposed were identified by interviewees as being useful (Figure \ref{fig:useCaseGraph}). Selecting interviewees from both the defence and technology sectors allowed a variety of insights to be gained, which was useful for considering utility and implementation factors and meant that enough data was collected to provide concrete recommendations with confidence.
The unique contribution of this research is proposing a framework to consider how DLT might be adopted; variations of this framework may have wider application than the current vogue for binary-choice flowcharts. As DLT is an emergent technology, research is being generated at a rapid rate: Bano et al. \parencite*{Bano2017} calculate that one paper is produced every day and a half; thus one inevitable limitation is that this research will have a short shelf life. Another limitation concerns the sample. Participants might have a positive bias towards DLT: technology interviewees were naturally bullish regarding its long-term success; defence interviewees might have been biased by the pro-DLT introduction videos. It must be noted however that feedback from interviewees indicates they were unlikely to be blindly positive, as critical discussion was held on the relative merits and drawbacks of DLT in different areas. Future research could involve a larger sample and a more `neutral' introduction to DLT to reduce bias. Partly this bias is a function of the technology's novelty - as time passes and there are more deployments of DLs, a stronger evidence base will be produced as to where the technology fails or succeeds. Another lesson for future research is that much evidence may not be in English, considering the global activity in this field - particularly in Asia (see Section \ref{sec:searchstrategy}); any subsequent analysis of current work in this field would have to account for this. \section{Recommendations} \begin{enumerate} \item The MoD should pilot use cases of DLT within the DSN to establish whether efficiency gains can be made. A cautious approach is recommended because of the emergent nature of DLT. Wide-scale adoption or investment would be unwise at this stage. \item Pilot projects should be selected carefully because DLT will not be suitable for all use cases. An evaluation framework would assist in objectively assessing use cases for DLT adoption. This research has proposed such a framework (Section \ref{sec:DSNevaluationframework}), which measures utility, ease of implementation and impact; either this framework or a suitably adapted one could assist in selecting pilot use cases. \item Although the aim of this research was not the selection of pilot projects, it is noted that use cases involving codification (Section \ref{sec:codification}), certification (Section \ref{sec:certification}) and supply chain provenance (Section \ref{sec:SupplyChainProvenance}) appear particularly worthy of further investigation. \item Once a use case has been selected for piloting, a private and permissioned DL is more likely to prove successful for enterprise use than a public and permissionless one. This is due to both classification issues and the requirement within permissionless DLs for there to be an incentive to prevent malicious actors (Section \ref{sec:conclusionsAndImplications}). As DLT covers a wide variety of mechanisms for storing data, further consideration will be required to choose the appropriate DL for the selected use case. \item Further academic research into organisations that have trialled DLT would benefit the literature greatly. Much real-world enterprise use of this technology is currently disseminated only via press releases or corporate communications, which lack objective assessments of benefits and challenges.
Gaining access to conduct this research is likely to be difficult, given that participants may be reluctant to discuss failure, which could occur frequently in this emergent field. \end{enumerate}
\section{Introduction} A symplectic structure on a Lie algebra $\g$ is a nondegenerate 2-cocycle $\omega\in\wedge^2\g^*$. The underlying structure of a symplectic Lie algebra is a quadratic pre-Lie algebra \cite{Chu}. An almost product structure on a Lie algebra $\g$ is a linear map $E$ satisfying $E^2=\Id$. If in addition $E$ satisfies the following integrability condition $$ [Ex,Ey]=E([Ex,y]+[x,Ey]-E[x,y]),\quad \forall x,y\in\g, $$ then $E$ is called a product structure. The above integrability condition is called the Nijenhuis condition. An equivalent characterization of a product structure is that $\g$ is the direct sum (as vector spaces) of two subalgebras. An almost complex structure on a Lie algebra $\g$ is a linear map $J$ satisfying $J^2=-\Id$. A complex structure on a Lie algebra is an almost complex structure that satisfies the Nijenhuis condition. Adding compatibility conditions between a complex structure and a product structure, between a symplectic structure and a paracomplex structure, and between a symplectic structure and a complex structure, one obtains a complex product structure, a paraK\"{a}hler structure and a pseudo-K\"{a}hler structure respectively. These structures play important roles in algebra, geometry and mathematical physics, and are widely studied. See \cite{Alek,Andrada0,ABD,ABDO,AS,Baibialgebra,Bai-2,Banayadi0,Benayadi,Calvaruso0,Calvaruso,Poon1,Poon2,Li,Salamon} for more details. Generalizations of Lie algebras to higher arities, including 3-Lie algebras and, more generally, $n$-Lie algebras~\cite{Filippov,Kasymov,Tcohomology}, have attracted attention from several fields of mathematics and physics. They are the algebraic structures corresponding to Nambu mechanics \cite{Gautheron,N,T}. In particular, the study of 3-Lie algebras plays an important role in string theory. In \cite{Basu}, Basu and Harvey suggested replacing the Lie algebra appearing in the Nahm equation by a 3-Lie algebra for the lifted Nahm equations. Furthermore, in the context of the Bagger-Lambert-Gustavsson model of multiple M2-branes, Bagger and Lambert managed to construct, using a ternary bracket, an $N=2$ supersymmetric version of the worldvolume theory of the M-theory membrane; see \cite{BL0}. An extensive literature is related to this pioneering work; see \cite{BL3,BL2,HHM,P}, and see the review article \cite{review} for more details. In particular, metric 3-algebras were deeply studied in the seminal works \cite{DFM,DFMR,DFMR2}. In \cite{Liu-Sheng-Bai-Chen}, the authors introduced the notion of a Nijenhuis operator on an $n$-Lie algebra, which generates a trivial deformation. The purpose of this paper is to study symplectic structures, product structures and complex structures on 3-Lie algebras, as well as their combinations. In the case of Lie algebras, pre-Lie algebras play important roles in these studies. It is plausible that 3-pre-Lie algebras will play important roles in the corresponding studies. Thus, first we introduce the notion of a representation of a 3-pre-Lie algebra and construct the associated semidirect product 3-pre-Lie algebra. Several important properties of representations of 3-pre-Lie algebras are studied. Note that the notion of a symplectic structure on a 3-Lie algebra was introduced in \cite{BGS}, where it is shown that the underlying structure of a symplectic 3-Lie algebra is a quadratic 3-pre-Lie algebra.
We introduce the notion of a phase space of a 3-Lie algebra $\g$, which is a symplectic 3-Lie algebra $\g\oplus \g^*$ satisfying some conditions, and show that a 3-Lie algebra has a phase space if and only if it is sub-adjacent to a 3-pre-Lie algebra. We also introduce the notion of a Manin triple of 3-pre-Lie algebras and show that there is a one-to-one correspondence between Manin triples of 3-pre-Lie algebras and phase spaces of 3-Lie algebras. An almost product structure on a 3-Lie algebra $\g$ is defined to be a linear map $E:\g\longrightarrow\g$ satisfying $E^2=\Id$. It is challenging to find the correct integrability condition to impose on an almost product structure so as to obtain a product structure on a 3-Lie algebra. We note that the Nijenhuis condition (see \eqref{eq:Nejenhuiscon}) given in \cite{Liu-Sheng-Bai-Chen} is the correct integrability condition. Let us explain this issue. Denote by $\g_{\pm}$ the eigenspaces corresponding to the eigenvalues $\pm1$ of an almost product structure $E$. Then it is obvious that $\g=\g_+\oplus \g_-$ as vector spaces. The Nijenhuis condition ensures that both $\g_+$ and $\g_-$ are subalgebras. This is what ``integrability'' means. Moreover, we find that there are four types of special integrability conditions, called strict product structures, abelian product structures, strong abelian product structures and perfect product structures respectively; each of them gives rise to a special decomposition of the original 3-Lie algebra. See the following table for a precise description: \begin{tabular}{|c|c|c|} \hline product & $E[x,y,z]_\g=[Ex,y,z]_\g+[x,Ey,z]_\g+[x,y,Ez]_\g$ & $\g=\g_+\oplus\g_-$ \\ structure&$-E([Ex,Ey,z]_\g+[Ex,y,Ez]_\g+[x,Ey,Ez]_\g)$&$[\g_+,\g_+,\g_+]_\g\subset\g_+$\\ &$+[Ex,Ey,Ez]_\g$&$[\g_-,\g_-,\g_-]_\g\subset\g_-$\\\hline strict product& $E[x,y,z]_\g=[Ex,y,z]_\g$ & $[\g_+,\g_+,\g_-]_\g=0$\\ structure & {} & $[\g_-,\g_-,\g_+]_\g=0$\\\hline abelian product & $[x,y,z]_\g=-[x,Ey,Ez]_\g-[Ex,y,Ez]_\g-[Ex,Ey,z]_\g$ & $[\g_+,\g_+,\g_+]_\g=0$\\ structure & {} & $[\g_-,\g_-,\g_-]_\g=0$\\\hline & $[x,y,z]_\g=E[Ex,y,z]_\g+E[x,Ey,z]_\g+E[x,y,Ez]_\g$ & $[\g_+,\g_+,\g_+]_\g=0$\\ strong abelian & {} & $[\g_-,\g_-,\g_-]_\g=0$\\ product structure & {$\huaO$-operators} & $[\g_+,\g_+,\g_-]_\g\subset\g_+$\\ {} & {Rota-Baxter operators} & $[\g_-,\g_-,\g_+]_\g\subset\g_-$\\\hline perfect product & $E[x,y,z]_\g=[Ex,Ey,Ez]_\g$ & $[\g_+,\g_+,\g_-]_\g\subset\g_-$\\ {structure} & {involutive automorphisms} & $[\g_-,\g_-,\g_+]_\g\subset\g_+$\\\hline \end{tabular} It is surprising that a strong abelian product structure is also an $\huaO$-operator on a 3-Lie algebra associated to the adjoint representation. Since an $\huaO$-operator on a 3-Lie algebra associated to the adjoint representation is also a Rota-Baxter operator \cite{RB3Lie,PBG}, it turns out that an involutive Rota-Baxter operator can also serve as an integrability condition. This is totally different from the case of Lie algebras. Furthermore, by the definition of a perfect product structure, an involutive automorphism of a 3-Lie algebra can also serve as an integrability condition. This is also a new phenomenon. Note that the decomposition that a perfect product structure gives is exactly the condition required in the definition of a matched pair of 3-Lie algebras \cite{BGS}. Thus, this kind of product structure will be used frequently in our studies. An almost complex structure on a 3-Lie algebra $\g$ is defined to be a linear map $J:\g\longrightarrow\g$ satisfying $J^2=-\Id$.
With the above motivation, we define a complex structure on a 3-Lie algebra $\g$ to be an almost complex structure satisfying the Nijenhuis condition. Then $\g_i$ and $\g_{-i}$, which are the eigenspaces corresponding to the eigenvalues $\pm i$ of the complex linear map $J_{\mathbb C}$ (the complexification of $J$), are subalgebras of the 3-Lie algebra $\g_{\mathbb C}$, the complexification of $\g$. Parallel to the case of product structures, there are also four types of special integrability conditions, and each of them gives rise to a special decomposition of $\g_{\mathbb C}$: \begin{tabular}{|c|c|c|} \hline complex & $J[x,y,z]_\g=[Jx,y,z]_\g+[x,Jy,z]_\g+[x,y,Jz]_\g$ & $\g_{\mathbb C}=\g_i\oplus\g_{-i}$ \\ structure&$+J([Jx,Jy,z]_\g+[Jx,y,Jz]_\g+[x,Jy,Jz]_\g)$&$[\g_i,\g_i,\g_i]_{\g_{\mathbb C}}\subset\g_i$\\ &$-[Jx,Jy,Jz]_\g$&$[\g_{-i},\g_{-i},\g_{-i}]_{\g_{\mathbb C}}\subset\g_{-i}$\\\hline strict complex& $J[x,y,z]_\g=[Jx,y,z]_\g$ & $[\g_i,\g_i,\g_{-i}]_{\g_{\mathbb C}}=0$\\ structure & {} & $[\g_{-i},\g_{-i},\g_i]_{\g_{\mathbb C}}=0$\\\hline abelian complex & $[x,y,z]_\g=[x,Jy,Jz]_\g+[Jx,y,Jz]_\g+[Jx,Jy,z]_\g$ & $[\g_i,\g_i,\g_i]_{\g_{\mathbb C}}=0$\\ structure & {} & $[\g_{-i},\g_{-i},\g_{-i}]_{\g_{\mathbb C}}=0$\\\hline & $[x,y,z]_\g=-J([Jx,y,z]_\g+[x,Jy,z]_\g+[x,y,Jz]_\g)$ & $[\g_i,\g_i,\g_i]_{\g_{\mathbb C}}=0$\\ strong abelian & {} & $[\g_{-i},\g_{-i},\g_{-i}]_{\g_{\mathbb C}}=0$\\ complex structure & {$\huaO$-operators} & $[\g_i,\g_i,\g_{-i}]_{\g_{\mathbb C}}\subset\g_i$\\ {} & {Rota-Baxter operators} & $[\g_{-i},\g_{-i},\g_i]_{\g_{\mathbb C}}\subset\g_{-i}$\\\hline perfect complex & $J[x,y,z]_\g=-[Jx,Jy,Jz]_\g$ & $[\g_i,\g_i,\g_{-i}]_{\g_{\mathbb C}}\subset\g_{-i}$\\ {structure} & {anti-involutive automorphisms} & $[\g_{-i},\g_{-i},\g_i]_{\g_{\mathbb C}}\subset\g_i$\\\hline \end{tabular} Then we add a compatibility condition between a complex structure and a product structure on a 3-Lie algebra to define a complex product structure on a 3-Lie algebra. We give an equivalent characterization of a complex product structure on a 3-Lie algebra $\g$ using the decomposition of $\g$. We add a compatibility condition between a symplectic structure and a paracomplex structure on a 3-Lie algebra to define a paraK\"{a}hler structure on a 3-Lie algebra. An equivalent characterization of a paraK\"{a}hler structure on a 3-Lie algebra $\g$ is also given using the decomposition of $\g$. Associated to a paraK\"{a}hler structure on a 3-Lie algebra, there is also a pseudo-Riemannian structure. We introduce the notion of a Levi-Civita product associated to a pseudo-Riemannian 3-Lie algebra, and give its precise formulas. Finally, we add a compatibility condition between a symplectic structure and a complex structure on a 3-Lie algebra to define a pseudo-K\"{a}hler structure on a 3-Lie algebra. The relation between a paraK\"{a}hler structure and a pseudo-K\"{a}hler structure on a 3-Lie algebra is investigated. We construct complex product structures, paraK\"{a}hler structures and pseudo-K\"{a}hler structures in terms of 3-pre-Lie algebras. We also give examples of symplectic structures, product structures, complex structures, complex product structures, paraK\"{a}hler structures and pseudo-K\"{a}hler structures on the $4$-dimensional Euclidean $3$-Lie algebra $A_{4}$ given in \cite{BL0}. The paper is organized as follows. In Section 2, we recall Nijenhuis operators on 3-Lie algebras and 3-pre-Lie algebras. In Section 3, we study representations of 3-pre-Lie algebras.
In Section 4, we introduce the notion of a phase space of a 3-Lie algebra and show that a 3-Lie algebra has a phase space if and only if it is sub-adjacent to a 3-pre-Lie algebra. We also introduce the notion of a Manin triple of 3-pre-Lie algebras and study its relation with phase spaces of 3-Lie algebras. In Section 5, we introduce the notion of a product structure on a 3-Lie algebra and give four special integrability conditions. In Section 6, we introduce the notion of a complex structure on a 3-Lie algebra and give four special integrability conditions. In Section 7, we introduce the notion of a complex product structure on a 3-Lie algebra and give its equivalent characterization. In Section 8, we introduce the notion of a paraK\"{a}hler structure on a 3-Lie algebra and give its equivalent characterization. Moreover, we give a detailed study on the associated Levi-Civita product. In Section 9, we introduce the notion of a pseudo-K\"{a}hler structure on a 3-Lie algebra and study the relation with a paraK\"{a}hler structure. In this paper, we work over the real field $\mathbb R$ and the complex field $\mathbb C$ and all the vector spaces are finite-dimensional. \vspace{2mm} \noindent {\bf Acknowledgement:} We give our warmest thanks to Chengming Bai for very useful comments and discussions. This research is supported by NSFC (11471139) and NSF of Jilin Province (20170101050JC). \section{Preliminaries} In this section, first we recall the notion of a Nijenhuis operator on a 3-Lie algebra, which will be frequently used as the integrability condition in our later studies. Then we recall the notion of a 3-pre-Lie algebra, which is the main tool to construct examples of symplectic, product and complex structures on 3-Lie algebras. \begin{defi}\label{defi of n-LA} A {\bf $3$-Lie algebra} is a vector space $\g$ together with a trilinear skew-symmetric bracket $[\cdot,\cdot,\cdot]_\g:\wedge^3\g\longrightarrow\g$ such that the following {\bf fundamental identity} holds: \begin{eqnarray}\label{FI} [x,y,[z,w,v]_\g]_\g=[[x,y,z]_\g,w,v]_\g+[z,[x,y,w]_\g,v]_\g+[z,w,[x,y,v]_\g]_\g,\quad \forall x,y,z,w,v\in\g. \end{eqnarray} \end{defi} For $x,y\in\g$, define $\ad:\wedge^{2}\g\longrightarrow\gl(\g)$ by $$\ad_{x,y}z=[x,y,z]_\g, \quad\forall z\in \g.$$ Then \eqref{FI} is equivalent to the condition that each $\ad_{x,y}$ is a derivation, i.e. $$ \ad_{x,y}[z,w,v]_\g=[\ad_{x,y}z,w,v]_\g+[z,\ad_{x,y}w,v]_\g+[z,w,\ad_{x,y}v]_\g,\quad \forall x,y,z,w,v\in\g. $$ Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a $3$-Lie algebra, and $N:\g\longrightarrow\g$ a linear map. Define a $3$-ary bracket $[\cdot,\cdot,\cdot]_N^1:\wedge^3\g\longrightarrow\g$ by \begin{equation}\label{eq:bracket(1)} [x,y,z]_N^{1}=[Nx,y,z]_\g+[x,Ny,z]_\g+[x,y,Nz]_\g-N[x,y,z]_\g. \end{equation} Then we define a $3$-ary bracket $[\cdot,\cdot,\cdot]_N^2:\wedge^3\g\longrightarrow\g$ by \begin{equation}\label{eq:bracket (j)} [x,y,z]_N^{2}=[Nx,Ny,z]_\g+[x,Ny,Nz]_\g+[Nx,y,Nz]_\g-N[x,y,z]_N^{1}. \end{equation} \begin{defi}\label{defi:Nijenhuis}{\rm (\cite{Liu-Sheng-Bai-Chen})} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a $3$-Lie algebra.
A linear map $N:\g\longrightarrow\g$ is called a {\bf Nijenhuis operator} if the following {\bf Nijenhuis condition} is satisfied: \begin{equation}\label{eq:Nijenhuis(n)} [Nx,Ny,Nz]_\g=N[x,y,z]_N^{2},\quad\forall x,y,z\in \g. \end{equation} \end{defi} More precisely, a linear map $N:\g\longrightarrow\g$ of a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ is a Nijenhuis operator if and only if \begin{eqnarray} \nonumber[Nx,Ny,Nz]_\g&=&N[Nx,Ny,z]_\g+N[x,Ny,Nz]_\g+N[Nx,y,Nz]_\g\\\nonumber &&-N^2[Nx,y,z]_\g-N^2[x,Ny,z]_\g-N^2[x,y,Nz]_\g\\ \label{eq:Nejenhuiscon}&&+N^3[x,y,z]_\g. \end{eqnarray} A simple example of a Nijenhuis operator is given at the end of this section. \begin{defi}{\rm (\cite{Kasymov})}\label{defi:usualrep} A {\bf representation} of a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ on a vector space $V$ is a linear map $\rho:\wedge^2\frkg\longrightarrow \gl( V),$ such that for all $x_1,x_2,x_3,x_4\in\g,$ there holds: \begin{eqnarray*} &\rho([x_1,x_2,x_3]_\g, x_4) +\rho(x_3,[x_1,x_2, x_4]_\g) =[\rho(x_1,x_2),\rho(x_3,x_4)];\\ &\rho([x_1,x_2,x_3]_\g,x_4)=\rho(x_1,x_2)\circ\rho(x_3,x_4)+\rho(x_2,x_3)\circ\rho(x_1,x_4)+\rho(x_3,x_1)\circ\rho(x_2,x_4). \end{eqnarray*} \end{defi} \begin{ex}{\rm Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a $3$-Lie algebra. The linear map $\ad:\wedge^{2}\g\longrightarrow\gl(\g)$ defines a representation of the $3$-Lie algebra $\g$ on itself, which we call the {\bf adjoint representation} of $\g$. } \end{ex} Let $A$ be a vector space. For a linear map $\phi:A\otimes A\lon\gl(V)$, we define a linear map $\phi^*: A\otimes A\lon\gl(V^*)$ by \begin{eqnarray*} \langle \phi^*(x,y)\alpha,v\rangle=-\langle\alpha, \phi(x,y)v\rangle,\,\,\,\,\forall \alpha\in V^*,x,y\in A,v\in V. \end{eqnarray*} \begin{lem}{\rm (\cite{BGS})}\label{dual-rep-3-Lie} Let $(V,\rho)$ be a representation of a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. Then $(V^*,\rho^*)$ is a representation of the $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$, which is called the {\bf dual representation}. \end{lem} \begin{lem}\label{lem:semidirectp} Let $\g$ be a $3$-Lie algebra, $V$ a vector space and $\rho: \wedge^2\g\rightarrow \gl(V)$ a skew-symmetric linear map. Then $(V,\rho)$ is a representation of $\g$ if and only if there is a $3$-Lie algebra structure (called the {\bf semidirect product}) on the direct sum of vector spaces $\g\oplus V$, defined by \begin{equation}\label{eq:sum} [x_1+v_1,x_2+v_2,x_3+v_3]_{\rho}=[x_1,x_2,x_3]_\g+\rho(x_1,x_2)v_3+\rho(x_2,x_3)v_1+\rho(x_3,x_1)v_2, \end{equation} for all $x_i\in \g, v_i\in V, 1\leq i\leq 3$. We denote this semidirect product $3$-Lie algebra by $\g\ltimes_\rho V.$ \end{lem} \begin{defi} Let $A$ be a vector space with a linear map $\{\cdot,\cdot,\cdot\}:\otimes^3 A\lon A$.
The pair $(A,\{\cdot,\cdot,\cdot\})$ is called a $3$-{\bf pre-Lie algebra} if the following identities hold: \begin{eqnarray} \{x,y,z\} &=&-\{y,x,z\}\\ \nonumber\{x_1,x_2,\{x_3,x_4,x_5\}\} &=&\{[x_1,x_2,x_3]_C,x_4,x_5\}+\{x_3,[x_1,x_2,x_4]_C,x_5\}\\ &&+\{x_3,x_4,\{x_1,x_2,x_5\}\}\\ \nonumber \{[x_1,x_2,x_3]_C,x_4,x_5\} &=&\{x_1,x_2,\{x_3,x_4,x_5\}\}+\{x_2,x_3,\{x_1,x_4,x_5\}\}\\ &&+\{x_3,x_1,\{x_2,x_4,x_5\}\}, \end{eqnarray} where $x,y,z,x_i\in A,1\le i\le 5$ and $[\cdot,\cdot,\cdot]_C$ is defined by \begin{eqnarray} [x,y,z]_C\triangleq \{x,y,z\}+\{y,z,x\}+\{z,x,y\},\,\,\,\,\forall x,y,z\in A. \end{eqnarray} \end{defi} \begin{pro}{\rm (\cite[Proposition 3.21]{BGS})}\label{3-pre-Lie} Let $(A,\{\cdot,\cdot,\cdot\})$ be a $3$-pre-Lie algebra. Then $(A,[\cdot,\cdot,\cdot]_C)$ is a $3$-Lie algebra, which is called the sub-adjacent $3$-Lie algebra of $A$, and denoted by $A^c$. $(A,\{\cdot,\cdot,\cdot\})$ is called the compatible $3$-pre-Lie algebra structure on the $3$-Lie algebra $A^c$. \end{pro} Define the left multiplication $L:\wedge^2 A\longrightarrow\gl(A)$ by $L(x,y)z=\{x,y,z\}$ for all $x,y,z\in A$. Then $(A,L)$ is a representation of the $3$-Lie algebra $A^c$. Moreover, we define the right multiplication $R:\otimes^2 A\lon\gl(A)$ by $R(x,y)z=\{z,x,y\}$. If there is a $3$-pre-Lie algebra structure on its dual space $A^*$, we denote the left multiplication and right multiplication by $\huaL$ and $\huaR$ respectively. \begin{defi}{\rm (\cite[Definition 3.16]{BGS})}\label{3-Lie-O-operator} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a $3$-Lie algebra and $(V,\rho)$ a representation. A linear operator $T:V\lon\g$ is called an {\bf$\huaO$-operator} associated to $(V,\rho)$ if $T$ satisfies: \begin{eqnarray} [Tu,Tv,Tw]_\g=T(\rho(Tu,Tv)w+\rho(Tv,Tw)u+\rho(Tw,Tu)v),\,\,\,\,\forall u,v,w\in V. \end{eqnarray} \end{defi} \begin{pro}{\rm (\cite[Proposition 3.27]{BGS})}\label{3-Lie-compatible-3-pre-Lie} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a $3$-Lie algebra. Then there is a compatible $3$-pre-Lie algebra structure on $\g$ if and only if there exists an invertible $\huaO$-operator $T:V\lon\g$ associated to a representation $(V,\rho)$. Furthermore, the compatible $3$-pre-Lie structure on $\g$ is given by \begin{eqnarray} \{x,y,z\}=T\rho(x,y)T^{-1}(z),\,\,\,\,\forall x,y,z\in \g. \end{eqnarray} \end{pro}
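To illustrate the Nijenhuis condition \eqref{eq:Nijenhuis(n)}, we record a simple example, which is verified by a direct computation. \begin{ex}{\rm Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a $3$-Lie algebra and $\lambda\in\mathbb R$. Then $N=\lambda\Id$ is a Nijenhuis operator on $\g$. Indeed, by \eqref{eq:bracket(1)} we have $[x,y,z]_N^{1}=(3\lambda-\lambda)[x,y,z]_\g=2\lambda[x,y,z]_\g$, and then by \eqref{eq:bracket (j)} we have $[x,y,z]_N^{2}=(3\lambda^2-2\lambda^2)[x,y,z]_\g=\lambda^2[x,y,z]_\g$. Therefore, $N[x,y,z]_N^{2}=\lambda^3[x,y,z]_\g=[Nx,Ny,Nz]_\g$, which is exactly \eqref{eq:Nijenhuis(n)}. } \end{ex}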
\section{Representations of 3-pre-Lie algebras} In this section, we introduce the notion of a representation of a 3-pre-Lie algebra, construct the corresponding semidirect product 3-pre-Lie algebra and give the dual representation. \begin{defi}\label{defi:rep3-pre-Lie} A {\bf representation} of a $3$-pre-Lie algebra $(A,\{\cdot,\cdot,\cdot\})$ on a vector space $V$ consists of a pair $(\rho,\mu)$, where $\rho:\wedge^2 A\rightarrow \gl(V)$ is a representation of the $3$-Lie algebra $A^c$ on $V$ and $\mu:\otimes^2 A\rightarrow \gl(V)$ is a linear map such that for all $x_1,x_2,x_3,x_4\in A$, the following equalities hold: \begin{eqnarray} \nonumber \rho(x_1,x_2)\mu(x_3,x_4) &=&\mu(x_3,x_4)\rho(x_1,x_2)-\mu(x_3,x_4)\mu(x_2,x_1)\\ \label{rep1}&&+\mu(x_3,x_4)\mu(x_1,x_2)+\mu([x_1,x_2,x_3]_C,x_4)+\mu(x_3,\{x_1,x_2,x_4\}),\\ \label{rep2} \mu([x_1,x_2,x_3]_C,x_4)&=&\rho(x_1,x_2)\mu(x_3,x_4)+\rho(x_2,x_3)\mu(x_1,x_4)+\rho(x_3,x_1)\mu(x_2,x_4),\\ \nonumber \mu(x_1,\{x_2,x_3,x_4\}) &=&\mu(x_3,x_4)\mu(x_1,x_2)+\mu(x_3,x_4)\rho(x_1,x_2)\\ \nonumber &&-\mu(x_3,x_4)\mu(x_2,x_1)-\mu(x_2,x_4)\mu(x_1,x_3)\\ \label{rep3} &&-\mu(x_2,x_4)\rho(x_1,x_3)+\mu(x_2,x_4)\mu(x_3,x_1)+\rho(x_2,x_3)\mu(x_1,x_4),\\ \nonumber \mu(x_3,x_4)\rho(x_1,x_2) &=&\mu(x_3,x_4)\mu(x_2,x_1)-\mu(x_3,x_4)\mu(x_1,x_2)\\ \label{rep4} &&+\rho(x_1,x_2)\mu(x_3,x_4)-\mu(x_2,\{x_1,x_3,x_4\})+\mu(x_1,\{x_2,x_3,x_4\}). \end{eqnarray} \end{defi} Let $(A,\{\cdot,\cdot,\cdot\})$ be a $3$-pre-Lie algebra and $\rho$ a representation of the sub-adjacent $3$-Lie algebra $(A^c,[\cdot,\cdot,\cdot]_C)$ on the vector space $V$. Then $(\rho,0)$ is a representation of the $3$-pre-Lie algebra $(A,\{\cdot,\cdot,\cdot\})$ on the vector space $V$. It is obvious that $(L,R)$ is a representation of a $3$-pre-Lie algebra on itself, which is called the {\bf regular representation}. Let $(V,\rho,\mu)$ be a representation of a $3$-pre-Lie algebra $(A,\{\cdot,\cdot,\cdot\})$.
Define a trilinear bracket operation $\{\cdot,\cdot,\cdot\}_{\rho,\mu}:\otimes^3(A\oplus V)\lon A\oplus V$ by \begin{eqnarray}\label{semidirect-3-pre-Lie-bracket} \{x_1+v_1,x_2+v_2,x_3+v_3\}_{\rho,\mu}\triangleq\{x_1,x_2,x_3\}+\rho(x_1,x_2)v_3+\mu(x_2,x_3)v_1-\mu(x_1,x_3)v_2. \end{eqnarray} By straightforward computations, we have \begin{thm}\label{semidirect-3-pre-Lie} With the above notation, $(A\oplus V,\{\cdot,\cdot,\cdot\}_{\rho,\mu})$ is a $3$-pre-Lie algebra. \end{thm} This 3-pre-Lie algebra is called the {\bf semidirect product} of the $3$-pre-Lie algebra $(A,\{\cdot,\cdot,\cdot\})$ and $(V,\rho,\mu)$, and denoted by $A\ltimes_{\rho,\mu}V$.
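As an illustration of Theorem \ref{semidirect-3-pre-Lie}, we spell out the semidirect product bracket for the regular representation; this is an immediate instance of \eqref{semidirect-3-pre-Lie-bracket}. \begin{ex}{\rm Let $(A,\{\cdot,\cdot,\cdot\})$ be a $3$-pre-Lie algebra and $(\rho,\mu)=(L,R)$ its regular representation on $V=A$. Then \eqref{semidirect-3-pre-Lie-bracket} reads $$\{x_1+v_1,x_2+v_2,x_3+v_3\}_{L,R}=\{x_1,x_2,x_3\}+\{x_1,x_2,v_3\}+\{v_1,x_2,x_3\}+\{x_1,v_2,x_3\},$$ where we used $-\{v_2,x_1,x_3\}=\{x_1,v_2,x_3\}$. That is, one expands $\{x_1+v_1,x_2+v_2,x_3+v_3\}$ trilinearly and keeps precisely the terms containing at most one argument from the second copy of $A$. } \end{ex}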
Let $V$ be a vector space. Define the switching operator $\tau:\otimes^2 V\longrightarrow \otimes^2 V$ by \begin{eqnarray*} \tau(T)=x_2\otimes x_1,\quad \forall T=x_1\otimes x_2\in\otimes^2 V. \end{eqnarray*} \begin{pro}\label{pro:representa} Let $(\rho,\mu)$ be a representation of a $3$-pre-Lie algebra $(A,\{\cdot,\cdot,\cdot\})$ on a vector space $V$. Then $\rho-\mu\tau+\mu$ is a representation of the sub-adjacent $3$-Lie algebra $(A^c,[\cdot,\cdot,\cdot]_C)$ on the vector space $V$. \end{pro} \pf By Theorem \ref{semidirect-3-pre-Lie}, we have the semidirect product 3-pre-Lie algebra $A\ltimes_{\rho,\mu}V$. Considering its sub-adjacent 3-Lie algebra structure $[\cdot,\cdot,\cdot]_C$, we have \begin{eqnarray} \nonumber[x_1+v_1,x_2+v_2,x_3+v_3]_C&=&\{x_1+v_1,x_2+v_2,x_3+v_3\}_{\rho,\mu}+\{x_2+v_2,x_3+v_3,x_1+v_1\}_{\rho,\mu}\\ \nonumber &&+\{x_3+v_3,x_1+v_1,x_2+v_2\}_{\rho,\mu}\\ \nonumber &=&\{x_1,x_2,x_3\}+\rho(x_1,x_2)v_3+\mu(x_2,x_3)v_1-\mu(x_1,x_3)v_2\\ \nonumber&&+\{x_2,x_3,x_1\}+\rho(x_2,x_3)v_1+\mu(x_3,x_1)v_2-\mu(x_2,x_1)v_3\\ \nonumber&&+\{x_3,x_1,x_2\}+\rho(x_3,x_1)v_2+\mu(x_1,x_2)v_3-\mu(x_3,x_2)v_1\\ \nonumber &=&[x_1,x_2,x_3]_C+((\rho-\mu\tau+\mu)(x_1,x_2))v_3\\ \label{eq:samesubadj} && +((\rho-\mu\tau+\mu)(x_2,x_3))v_1+((\rho-\mu\tau+\mu)(x_3,x_1))v_2. \end{eqnarray} By Lemma \ref{lem:semidirectp}, $\rho-\mu\tau+\mu$ is a representation of the sub-adjacent $3$-Lie algebra $(A^c,[\cdot,\cdot,\cdot]_C)$ on the vector space $V$. The proof is finished. \qed\vspace{3mm} If $(\rho,\mu)=(L,R)$ is the regular representation of a 3-pre-Lie algebra $(A,\{\cdot,\cdot,\cdot\})$, then $\rho-\mu\tau+\mu=\ad$ is the adjoint representation of the sub-adjacent 3-Lie algebra $(A^c,[\cdot,\cdot,\cdot]_C)$. \begin{cor}\label{sub-adjacent-3-Lie} Let $(\rho,\mu)$ be a representation of a $3$-pre-Lie algebra $(A,\{\cdot,\cdot,\cdot\})$ on a vector space $V$. Then the semidirect product $3$-pre-Lie algebras $A\ltimes_{\rho,\mu}V$ and $A\ltimes_{\rho-\mu\tau+\mu,0}V$ given by the representations $(\rho,\mu)$ and $(\rho-\mu\tau+\mu,0)$ respectively have the same sub-adjacent $3$-Lie algebra $A^c\ltimes_{\rho-\mu\tau+\mu}V$ given by \eqref{eq:samesubadj}, which is the semidirect product of the $3$-Lie algebra $(A^c,[\cdot,\cdot,\cdot]_C)$ and its representation $(V,\rho-\mu\tau+\mu)$. \end{cor} \begin{pro} Let $(\rho,\mu)$ be a representation of a $3$-pre-Lie algebra $(A,\{\cdot,\cdot,\cdot\})$ on a vector space $V$. Then $(\rho^*-\mu^*\tau+\mu^*,-\mu^*)$ is a representation of the $3$-pre-Lie algebra $(A,\{\cdot,\cdot,\cdot\})$ on the vector space $V^*$, which is called the {\bf dual representation} of the representation $(V,\rho,\mu)$. \end{pro} \pf By Proposition \ref{pro:representa}, $\rho-\mu\tau+\mu$ is a representation of the sub-adjacent $3$-Lie algebra $(A^c,[\cdot,\cdot,\cdot]_C)$ on the vector space $V$. By Lemma \ref{dual-rep-3-Lie}, $\rho^*-\mu^*\tau+\mu^*$ is a representation of the sub-adjacent $3$-Lie algebra $(A^c,[\cdot,\cdot,\cdot]_C)$ on the dual vector space $V^*$. It is straightforward to deduce that the other conditions in Definition \ref{defi:rep3-pre-Lie} also hold. We leave the details to the reader. \qed \begin{cor} Let $(V,\rho,\mu)$ be a representation of a $3$-pre-Lie algebra $(A,\{\cdot,\cdot,\cdot\})$. Then the semidirect product $3$-pre-Lie algebras $A\ltimes_{\rho^*,0}V^*$ and $A\ltimes_{\rho^*-\mu^*\tau+\mu^*,-\mu^*}V^*$ given by the representations $(\rho^*,0)$ and $(\rho^*-\mu^*\tau+\mu^*,-\mu^*)$ respectively have the same sub-adjacent $3$-Lie algebra $A^c\ltimes_{\rho^*}V^*$, which is the semidirect product of the $3$-Lie algebra $(A^c,[\cdot,\cdot,\cdot]_C)$ and its representation $(V^*,\rho^*)$. \end{cor} If $(\rho,\mu)=(L,R)$ is the regular representation of a 3-pre-Lie algebra $(A,\{\cdot,\cdot,\cdot\})$, then $(\rho^*-\mu^*\tau+\mu^*,-\mu^*)=(\ad^*,-R^*)$ and the corresponding semidirect product 3-Lie algebra is $A^c\ltimes_{L^*}A^*$, which is the key object when we construct phase spaces of 3-Lie algebras in the next section. \section{Symplectic structures and phase spaces of $3$-Lie algebras} In this section, we introduce the notion of a phase space of a 3-Lie algebra and show that a 3-Lie algebra has a phase space if and only if it is sub-adjacent to a 3-pre-Lie algebra. Moreover, we introduce the notion of a Manin triple of 3-pre-Lie algebras and show that there is a one-to-one correspondence between Manin triples of 3-pre-Lie algebras and perfect phase spaces of 3-Lie algebras. \begin{defi}{\rm (\cite{BGS})} A {\bf symplectic structure} on a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ is a nondegenerate skew-symmetric bilinear form $\omega\in\wedge^2\g^*$ satisfying the following equality: \begin{eqnarray}\label{symplectic-structure} \omega([x,y,z]_\g,w)-\omega([y,z,w]_\g,x)+\omega([z,w,x]_\g,y)-\omega([w,x,y]_\g,z)=0,\quad\forall x,y,z,w\in\g. \end{eqnarray} \end{defi} \begin{ex}\label{ex:A4symplectic}{\rm Consider the $4$-dimensional Euclidean $3$-Lie algebra $A_{4}$ given in \cite{BL0}. The underlying vector space is $\mathbb R^4$. Relative to an orthogonal basis $\{e_1,e_2,e_3,e_4\}$, the $3$-Lie bracket is given by $$[e_1,e_2,e_3]=e_4, \quad [e_2,e_3,e_4]=e_1,\quad [e_1,e_3,e_4]=e_2,\quad[e_1,e_2,e_4]=e_3.$$ Then it is straightforward to see that any nondegenerate skew-symmetric bilinear form is a symplectic structure on $A_4$ (a short verification is given after this example). In particular, \begin{eqnarray*} \omega_1=e_3^*\wedge e^*_1+e_4^*\wedge e_2^*,\quad \omega_2=e_2^*\wedge e^*_1+e_4^*\wedge e_3^*,\quad \omega_3=e_2^*\wedge e^*_1+e_3^*\wedge e_4^*,\\ \omega_4=e_1^*\wedge e^*_2+e_4^*\wedge e_3^*,\quad\omega_5=e_1^*\wedge e^*_2+e_3^*\wedge e_4^*,\quad\omega_6=e_1^*\wedge e_3^*+e_2^*\wedge e_4^*\end{eqnarray*} are symplectic structures on $A_4$, where $\{e_1^*,e_2^*,e_3^*,e_4^*\}$ is the dual basis. } \end{ex}
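For completeness, we sketch the verification of the assertion in Example \ref{ex:A4symplectic}. Since \eqref{symplectic-structure} is multilinear, it suffices to check it on basis vectors $x=e_i,y=e_j,z=e_k,w=e_l$. The multiplication table of $A_4$ gives $[e_i,e_j,e_k]=\pm e_m$ whenever $i,j,k$ are pairwise distinct, where $\{i,j,k,m\}=\{1,2,3,4\}$, and the bracket vanishes whenever two of its arguments coincide. If $i,j,k,l$ are pairwise distinct, then every term of \eqref{symplectic-structure} is of the form $\pm\omega(e_m,e_m)=0$. If some indices coincide, then each term either vanishes because its bracket has a repeated argument, or cancels against another term by the cyclic symmetry of the $3$-Lie bracket; for instance, for $(x,y,z,w)=(e_1,e_2,e_3,e_1)$ we obtain $$\omega([e_1,e_2,e_3],e_1)-\omega([e_2,e_3,e_1],e_1)=\omega(e_4,e_1)-\omega(e_4,e_1)=0.$$ Hence \eqref{symplectic-structure} holds for every skew-symmetric bilinear form on $A_4$.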
\begin{pro}{\rm (\cite{BGS})}\label{3-pre-Lie-under-3-Lie} Let $(\g,[\cdot,\cdot,\cdot]_\g,\omega)$ be a symplectic $3$-Lie algebra. Then there exists a compatible $3$-pre-Lie algebra structure $\{\cdot,\cdot,\cdot\}$ on $\g$ given by \begin{equation}\label{3-pre-Lie-omega} \omega(\{x,y,z\},w)=-\omega(z,[x,y,w]_\g),\quad \forall x,y,z,w\in \g. \end{equation} \end{pro} A {\bf quadratic 3-pre-Lie algebra} is a 3-pre-Lie algebra $(A,\{\cdot,\cdot,\cdot\})$ equipped with a nondegenerate skew-symmetric bilinear form $\omega\in\wedge^2A^*$ such that the following invariant condition holds: \begin{equation}\label{eq:quadratic} \omega(\{x,y,z\},w)=-\omega(z,[x,y,w]_C),\quad \forall x,y,z,w\in A. \end{equation} Proposition \ref{3-pre-Lie-under-3-Lie} tells us that quadratic 3-pre-Lie algebras are the underlying structures of symplectic 3-Lie algebras. Let $V$ be a vector space and $V^*=\Hom(V,\mathbb R)$ its dual space. Then there is a natural nondegenerate skew-symmetric bilinear form $\omega$ on $T^*V=V\oplus V^*$ given by: \begin{eqnarray}\label{phase-space} \omega(x+\alpha,y+\beta)=\langle \alpha,y\rangle-\langle \beta,x\rangle,\,\,\,\,\forall x,y\in V,\alpha,\beta\in V^*. \end{eqnarray} \begin{defi} Let $(\h,[\cdot,\cdot,\cdot]_\h)$ be a $3$-Lie algebra and $\h^*$ its dual space. \begin{itemize} \item If there is a $3$-Lie algebra structure $[\cdot,\cdot,\cdot]$ on the direct sum vector space $T^*\h=\h\oplus\h^*$ such that $(\h\oplus\h^*,[\cdot,\cdot,\cdot],\omega)$ is a symplectic $3$-Lie algebra, where $\omega$ is given by \eqref{phase-space}, and $(\h,[\cdot,\cdot,\cdot]_\h)$ and $(\h^*,[\cdot,\cdot,\cdot]|_{\h^*})$ are $3$-Lie subalgebras of $ (\h\oplus\h^*,[\cdot,\cdot,\cdot])$, then the symplectic $3$-Lie algebra $(\h\oplus\h^*,[\cdot,\cdot,\cdot],\omega)$ is called a {\bf phase space} of the $3$-Lie algebra $(\h,[\cdot,\cdot,\cdot]_\h)$. \item A phase space $(\h\oplus\h^*,[\cdot,\cdot,\cdot],\omega)$ is called {\bf perfect} if the following conditions are satisfied: \begin{equation}\label{eq:conperfectPS} [x,y,\alpha]\in\h^*,\quad [\alpha,\beta,x]\in\h,\quad \forall x,y\in\h, \alpha,\beta\in\h^*. \end{equation} \end{itemize} \end{defi} 3-pre-Lie algebras play an important role in the study of phase spaces of 3-Lie algebras. \begin{thm}\label{3-pre-Lie-phase-space} A $3$-Lie algebra has a phase space if and only if it is sub-adjacent to a $3$-pre-Lie algebra. \end{thm} \pf Let $(A,\{\cdot,\cdot,\cdot\})$ be a $3$-pre-Lie algebra. By Proposition \ref{3-pre-Lie}, the left multiplication $L$ is a representation of the sub-adjacent $3$-Lie algebra $A^c$ on $A$. By Lemma \ref{dual-rep-3-Lie}, $L^*$ is a representation of the sub-adjacent $3$-Lie algebra $A^c$ on $A^*$. Thus, we have the semidirect product 3-Lie algebra $A^c\ltimes_{L^*}A^*=(A^c\oplus A^*,[\cdot,\cdot,\cdot]_{L^*})$. Then $(A^c\ltimes_{L^*}A^*,\omega)$ is a symplectic $3$-Lie algebra, which is a phase space of the sub-adjacent $3$-Lie algebra $(A^c,[\cdot,\cdot,\cdot]_C)$. In fact, for all $x_1,x_2,x_3,x_4\in A$ and $\alpha_1,\alpha_2,\alpha_3,\alpha_4\in A^*$, we have \begin{eqnarray*} &&\omega([x_1+\alpha_1,x_2+\alpha_2,x_3+\alpha_3]_{L^*},x_4+\alpha_4)\\&=&\omega([x_1,x_2,x_3]_C+L^*(x_1,x_2)\alpha_3+L^*(x_2,x_3)\alpha_1+L^*(x_3,x_1)\alpha_2,x_4+\alpha_4)\\ &=&\langle L^*(x_1,x_2)\alpha_3+L^*(x_2,x_3)\alpha_1+L^*(x_3,x_1)\alpha_2,x_4\rangle-\langle \alpha_4,[x_1,x_2,x_3]_C\rangle\\ &=&-\langle \alpha_3,\{x_1,x_2,x_4\}\rangle-\langle \alpha_1,\{x_2,x_3,x_4\}\rangle-\langle \alpha_2,\{x_3,x_1,x_4\}\rangle\\ &&-\langle \alpha_4,\{x_1,x_2,x_3\}\rangle-\langle \alpha_4,\{x_2,x_3,x_1\}\rangle-\langle \alpha_4,\{x_3,x_1,x_2\}\rangle.
\end{eqnarray*} Similarly, we have \begin{eqnarray*} &&\omega([x_2+\alpha_2,x_3+\alpha_3,x_4+\alpha_4]_{L^*},x_1+\alpha_1)\\&=&-\langle \alpha_4,\{x_2,x_3,x_1\}\rangle-\langle \alpha_2,\{x_3,x_4,x_1\}\rangle -\langle \alpha_3,\{x_4,x_2,x_1\}\rangle\\ &&-\langle \alpha_1,\{x_2,x_3,x_4\}\rangle-\langle \alpha_1,\{x_3,x_4,x_2\}\rangle-\langle \alpha_1,\{x_4,x_2,x_3\}\rangle,\\ &&\omega([x_3+\alpha_3,x_4+\alpha_4,x_1+\alpha_1]_{L^*},x_2+\alpha_2)\\&=&-\langle \alpha_1,\{x_3,x_4,x_2\}\rangle-\langle \alpha_3,\{x_4,x_1,x_2\}\rangle -\langle \alpha_4,\{x_1,x_3,x_2\}\rangle\\ &&-\langle \alpha_2,\{x_3,x_4,x_1\}\rangle-\langle \alpha_2,\{x_4,x_1,x_3\}\rangle-\langle \alpha_2,\{x_1,x_3,x_4\}\rangle,\\ &&\omega([x_4+\alpha_4,x_1+\alpha_1,x_2+\alpha_2]_{L^*},x_3+\alpha_3)\\&=&-\langle \alpha_2,\{x_4,x_1,x_3\}\rangle-\langle \alpha_4,\{x_1,x_2,x_3\}\rangle -\langle \alpha_1,\{x_2,x_4,x_3\}\rangle\\ &&-\langle \alpha_3,\{x_4,x_1,x_2\}\rangle-\langle \alpha_3,\{x_1,x_2,x_4\}\rangle-\langle \alpha_3,\{x_2,x_4,x_1\}\rangle. \end{eqnarray*} Since $\{x_1,x_2,x_3\}=-\{x_2,x_1,x_3\}$, we deduce that $\omega$ is a symplectic structure on the semidirect product 3-Lie algebra $A^c\ltimes_{L^*}A^*$. Moreover, $(A^c,[\cdot,\cdot,\cdot]_C)$ is a subalgebra of $A^c\ltimes_{L^*}A^*$ and $A^*$ is an abelian subalgebra of $A^c\ltimes_{L^*}A^*$. Thus, the symplectic 3-Lie algebra $(A^c\ltimes_{L^*}A^*,\omega)$ is a phase space of the sub-adjacent $3$-Lie algebra $(A^c,[\cdot,\cdot,\cdot]_C)$. Conversely, let $(T^*\h=\h\oplus\h^*,[\cdot,\cdot,\cdot],\omega)$ be a phase space of a $3$-Lie algebra $(\h,[\cdot,\cdot,\cdot]_\h)$. By Proposition \ref{3-pre-Lie-under-3-Lie}, there exists a compatible $3$-pre-Lie algebra structure $\{\cdot,\cdot,\cdot\}$ on $T^*\h$ given by \eqref{3-pre-Lie-omega}. Since $(\h,[\cdot,\cdot,\cdot]_\h)$ is a subalgebra of $(\h\oplus\h^*,[\cdot,\cdot,\cdot])$, we have \begin{eqnarray*} \omega(\{x,y,z\},w)=-\omega(z,[x,y,w])=-\omega(z,[x,y,w]_{\h})=0,\quad\forall x,y,z,w\in\h. \end{eqnarray*} Thus, $\{x,y,z\}\in\h$, which implies that $(\h,\{\cdot,\cdot,\cdot\}|_\h)$ is a subalgebra of the $3$-pre-Lie algebra $(T^*\h,\{\cdot,\cdot,\cdot\})$. Its sub-adjacent 3-Lie algebra $(\h^c,[\cdot,\cdot,\cdot]_C)$ is exactly the original $3$-Lie algebra $(\h,[\cdot,\cdot,\cdot]_\h)$. \qed \begin{cor}\label{3-pre-Lie-sub} Let $(T^*\h=\h\oplus\h^*,[\cdot,\cdot,\cdot],\omega)$ be a phase space of a $3$-Lie algebra $(\h,[\cdot,\cdot,\cdot]_\h)$ and $(\h\oplus \h^*,\{\cdot,\cdot,\cdot\})$ the associated $3$-pre-Lie algebra. Then both $(\h,\{\cdot,\cdot,\cdot\}|_\h)$ and $(\h^*,\{\cdot,\cdot,\cdot\}|_{\h^*})$ are subalgebras of the $3$-pre-Lie algebra $(\h\oplus \h^*,\{\cdot,\cdot,\cdot\})$. \end{cor} \begin{cor} If $(\h\oplus\h^*,[\cdot,\cdot,\cdot],\omega)$ is a phase space of a $3$-Lie algebra $(\h,[\cdot,\cdot,\cdot]_\h)$ such that the $3$-Lie algebra $(\h\oplus\h^*,[\cdot,\cdot,\cdot])$ is a semidirect product $\h\ltimes_{\rho^*}\h^*$, where $\rho$ is a representation of $(\h,[\cdot,\cdot,\cdot]_\h)$ on $\h$ and $\rho^*$ is its dual representation, then $$\{x,y,z\}\triangleq \rho(x,y)z,\quad \forall x,y,z\in\h,$$ defines a $3$-pre-Lie algebra structure on $\h$. \end{cor} \pf For all $x,y,z\in\h$ and $\alpha\in\h^*$, we have \begin{eqnarray*} \langle \alpha,\{x,y,z\}\rangle&=&-\omega(\{x,y,z\},\alpha)=\omega(z,[x,y,\alpha]_{\rho^*})=\omega(z,\rho^*(x,y)\alpha)=-\langle \rho^*(x,y)\alpha,z\rangle\\ &=&\langle \alpha,\rho(x,y)z\rangle. \end{eqnarray*} Therefore, $\{x,y,z\}=\rho(x,y)z$. \qed
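To illustrate Theorem \ref{3-pre-Lie-phase-space}, we describe a concrete low-dimensional phase space. The compatibility of the $3$-pre-Lie structure used below can be checked directly; it also follows from the theory of strong abelian product structures developed in Section 5. \begin{ex}{\rm Let $(\h,[\cdot,\cdot,\cdot]_\h)$ be the $3$-dimensional $3$-Lie algebra with basis $\{e_1,e_2,e_3\}$ and non-zero bracket $[e_1,e_2,e_3]_\h=e_1$. Then $$\{e_1,e_2,e_3\}=-e_1,\quad \{e_1,e_3,e_2\}=-e_1,\quad \{e_2,e_3,e_1\}=e_1,$$ extended by skew-symmetry in the first two arguments, with all other products of basis vectors equal to zero, defines a compatible $3$-pre-Lie algebra structure on $\h$: in particular, $\{e_1,e_2,e_3\}+\{e_2,e_3,e_1\}+\{e_3,e_1,e_2\}=-e_1+e_1+e_1=e_1=[e_1,e_2,e_3]_\h$. Consequently, $\h$ has the phase space $\h\ltimes_{L^*}\h^*$, whose non-zero brackets of basis vectors (up to permutations) are $$[e_1,e_2,e_3]=e_1,\quad [e_1,e_2,e_1^*]=e_3^*,\quad [e_1,e_3,e_1^*]=e_2^*,\quad [e_2,e_3,e_1^*]=-e_1^*,$$ where $\{e_1^*,e_2^*,e_3^*\}$ is the dual basis of $\h^*$. } \end{ex}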
\begin{ex}{\rm Let $(A,\{\cdot,\cdot,\cdot\}_A)$ be a $3$-pre-Lie algebra. Since there is a semidirect product $3$-pre-Lie algebra structure $(A\ltimes_{L^*,0}A^*,\{\cdot,\cdot,\cdot\}_{L^*,0})$ on the phase space $T^*A^c=A^c\ltimes_{L^*}A^{*}$, one can construct a new phase space $T^*A^c\ltimes_{L^*}(T^*A^c)^*$. This process can be continued indefinitely. Hence, there exists a sequence of phase spaces $\{A_{(n)}\}_{n\ge2}:$ $$A_{(1)}=A^c,\,\,\,\,A_{(2)}=T^*A_{(1)}=A^c\ltimes_{L^*}A^{*},\cdots,\,\,\,\,A_{(n)}=T^*A_{(n-1)},\cdots.$$ $A_{(n)}~~(n\ge2)$ is called the symplectic double of $A_{(n-1)}.$ } \end{ex} At the end of this section, we introduce the notion of a Manin triple of 3-pre-Lie algebras. \begin{defi} A {\bf Manin triple of $3$-pre-Lie algebras} is a triple $(\huaA;A,A')$, where \begin{itemize} \item $(\huaA,\{\cdot,\cdot,\cdot\},\omega)$ is a quadratic $3$-pre-Lie algebra; \item both $A$ and $A'$ are isotropic subalgebras of $(\huaA,\{\cdot,\cdot,\cdot\})$; \item $\huaA=A\oplus A'$ as vector spaces; \item for all $x,y\in A$ and $\alpha,\beta\in A'$, there holds: \begin{equation}\label{eq:conMT} \{x,y,\alpha\}\in A',\quad \{\alpha, x,y\}\in A',\quad \{\alpha,\beta, x\}\in A,\quad \{ x,\alpha,\beta\}\in A. \end{equation} \end{itemize} \end{defi} In a Manin triple of $3$-pre-Lie algebras, since the skew-symmetric bilinear form $\omega$ is nondegenerate, $A'$ can be identified with $A^*$ via $$ \langle \alpha,x\rangle\triangleq \omega(\alpha,x),\quad\forall x\in A, \alpha\in A'. $$ Thus, $\huaA$ is isomorphic to $A\oplus A^*$ naturally and the bilinear form $\omega$ is exactly given by \eqref{phase-space}. By the invariant condition \eqref{eq:quadratic}, we can obtain the precise form of the 3-pre-Lie structure $\{\cdot,\cdot,\cdot\}$ on $A\oplus A^*$. \begin{pro}\label{pro:stuctureMP3preLie} Let $(A\oplus A^*;A,A^*)$ be a Manin triple of $3$-pre-Lie algebras, where the nondegenerate skew-symmetric bilinear form $\omega$ on the $3$-pre-Lie algebra is given by \eqref{phase-space}. Then we have \begin{eqnarray} \label{eq:m1}\{x,y,\alpha\}&=&(L^*-R^*\tau+R^*)(x,y)\alpha,\\ \label{eq:m2}\{\alpha,x,y\}&=&-R^*(x,y)\alpha,\\ \label{eq:m3}\{\alpha,\beta,x\}&=&(\huaL^*-\huaR^*\tau+\huaR^*)(\alpha,\beta)x,\\ \label{eq:m4}\{x,\alpha,\beta\}&=&-\huaR^*(\alpha,\beta)x. \end{eqnarray} \end{pro} \pf For all $x,y,z\in A,\alpha\in A^*$, we have \begin{eqnarray*} \langle\{x,y,\alpha\},z\rangle&=&\omega(\{x,y,\alpha\},z)=-\omega(\alpha,[x,y,z]_C)\\ &=&-\omega(\alpha,\{x,y,z\}+\{y,z,x\}+\{z,x,y\})\\ &=&-\omega(\alpha,L(x,y)z-R(y,x)z+R(x,y)z)\\ &=&-\langle\alpha,L(x,y)z-R(y,x)z+R(x,y)z\rangle\\ &=&\langle(L^*-R^*\tau+R^*)(x,y)\alpha,z\rangle, \end{eqnarray*} which implies that \eqref{eq:m1} holds. We have \begin{eqnarray*} \langle\{\alpha,x,y\},z\rangle&=&\omega(\{\alpha,x,y\},z)=-\omega(y,[\alpha,x,z]_C)=\omega(y,[z,x,\alpha]_C)=- \omega(\{z,x,y\},\alpha)\\ &=&\langle\alpha,R(x,y)z\rangle=-\langle R^*(x,y)\alpha,z\rangle, \end{eqnarray*} which implies that \eqref{eq:m2} holds. Similarly, we can deduce that \eqref{eq:m3} and \eqref{eq:m4} hold. \qed \begin{thm}\label{thm:MT-ps} There is a one-to-one correspondence between Manin triples of $3$-pre-Lie algebras and perfect phase spaces of $3$-Lie algebras. More precisely, if $(A\oplus A^*;A,A^*)$ is a Manin triple of $3$-pre-Lie algebras, then $(A\oplus A^*,[\cdot,\cdot,\cdot]_C,\omega)$ is a symplectic $3$-Lie algebra, where $\omega$ is given by \eqref{phase-space}.
Conversely, if $(\h\oplus \h^*,[\cdot,\cdot,\cdot],\omega)$ is a perfect phase space of a $3$-Lie algebra $(\h,[\cdot,\cdot,\cdot]_\h)$, then $(\h\oplus \h^*;\h,\h^*)$ is a Manin triple of $3$-pre-Lie algebras, where the $3$-pre-Lie algebra structure on $\h\oplus \h^*$ is given by \eqref{3-pre-Lie-omega}. \end{thm} \pf Let $(A\oplus A^*;A,A^*)$ be a Manin triple of $3$-pre-Lie algebras. Denote by $\{\cdot,\cdot,\cdot\}_A$ and $\{\cdot,\cdot,\cdot\}_{A^*}$ the 3-pre-Lie algebra structure on $A$ and $A^*$ respectively, and denote by $[\cdot,\cdot,\cdot]_A$ and $[\cdot,\cdot,\cdot]_{A^*}$ the corresponding sub-adjacent 3-Lie algebra structure on $A$ and $A^*$ respectively. By Proposition \ref{pro:stuctureMP3preLie}, it is straightforward to deduce that the corresponding 3-Lie algebra structure $[\cdot,\cdot,\cdot]_C$ on $A\oplus A^*$ is given by \begin{eqnarray} \nonumber [x+\alpha,y+\beta,z+\gamma]_C&=&[x,y,z]_A+\huaL^*(\alpha,\beta)z+\huaL^*(\beta,\gamma)x+\huaL^*(\gamma,\alpha)y\\ \label{eq:MP3Lie} &&+[\alpha,\beta,\gamma]_{A^*}+L^*(x,y)\gamma+L^*(y,z)\alpha+L^*(z,x)\beta. \end{eqnarray} For all $x_1,x_2,x_3,x_4\in A$ and $\alpha_1,\alpha_2,\alpha_3,\alpha_4\in A^*$, we have \begin{eqnarray*} &&\omega([x_1+\alpha_1,x_2+\alpha_2,x_3+\alpha_3]_C,x_4+\alpha_4)\\&=&\omega([x_1,x_2,x_3]_A+\huaL^*(\alpha_1,\alpha_2)x_3+\huaL^*(\alpha_2,\alpha_3)x_1+\huaL^*(\alpha_3,\alpha_1)x_2\\ &&+[\alpha_1,\alpha_2,\alpha_3]_{A^*}+L^*(x_1,x_2)\alpha_3+L^*(x_2,x_3)\alpha_1+L^*(x_3,x_1)\alpha_2,x_4+\alpha_4)\\ &=&\langle [\alpha_1,\alpha_2,\alpha_3]_{A^*}+L^*(x_1,x_2)\alpha_3+L^*(x_2,x_3)\alpha_1+L^*(x_3,x_1)\alpha_2,x_4\rangle\\ &&-\langle \alpha_4,[x_1,x_2,x_3]_A+\huaL^*(\alpha_1,\alpha_2)x_3+\huaL^*(\alpha_2,\alpha_3)x_1+\huaL^*(\alpha_3,\alpha_1)x_2\rangle\\ &=&\langle [\alpha_1,\alpha_2,\alpha_3]_{A^*},x_4\rangle-\langle \alpha_3,\{x_1,x_2,x_4\}_A\rangle-\langle \alpha_1,\{x_2,x_3,x_4\}_A\rangle-\langle \alpha_2,\{x_3,x_1,x_4\}_A\rangle\\ &&-\langle \alpha_4,[x_1,x_2,x_3]_A\rangle+\langle\{\alpha_1,\alpha_2,\alpha_4\}_{A^*},x_3\rangle+\langle\{\alpha_2,\alpha_3,\alpha_4\}_{A^*},x_1\rangle +\langle\{\alpha_3,\alpha_1,\alpha_4\}_{A^*},x_2\rangle. 
\end{eqnarray*} Similarly, we have \begin{eqnarray*} &&\omega([x_2+\alpha_2,x_3+\alpha_3,x_4+\alpha_4],x_1+\alpha_1)\\&=&\langle [\alpha_2,\alpha_3,\alpha_4]_{A^*},x_1\rangle-\langle \alpha_4,\{x_2,x_3,x_1\}_A\rangle-\langle \alpha_2,\{x_3,x_4,x_1\}_A\rangle-\langle \alpha_3,\{x_4,x_2,x_1\}_A\rangle\\ &&-\langle \alpha_1,[x_2,x_3,x_4]_A\rangle+\langle\{\alpha_2,\alpha_3,\alpha_1\}_{A^*},x_4\rangle+\langle\{\alpha_3,\alpha_4,\alpha_1\}_{A^*},x_2\rangle +\langle\{\alpha_4,\alpha_2,\alpha_1\}_{A^*},x_3\rangle,\\ &&\omega([x_3+\alpha_3,x_4+\alpha_4,x_1+\alpha_1],x_2+\alpha_2)\\&=&\langle [\alpha_3,\alpha_4,\alpha_1]_{A^*},x_2\rangle-\langle \alpha_1,\{x_3,x_4,x_2\}_A\rangle-\langle \alpha_3,\{x_4,x_1,x_2\}_A\rangle-\langle \alpha_4,\{x_1,x_3,x_2\}_A\rangle\\ &&-\langle \alpha_2,[x_3,x_4,x_1]_A\rangle+\langle\{\alpha_3,\alpha_4,\alpha_2\}_{A^*},x_1\rangle+\langle\{\alpha_4,\alpha_1,\alpha_2\}_{A^*},x_3\rangle +\langle\{\alpha_1,\alpha_3,\alpha_2\}_{A^*},x_4\rangle,\\ &&\omega([x_4+\alpha_4,x_1+\alpha_1,x_2+\alpha_2],x_3+\alpha_3)\\&=&\langle [\alpha_4,\alpha_1,\alpha_2]_{A^*},x_3\rangle-\langle \alpha_2,\{x_4,x_1,x_3\}_A\rangle-\langle \alpha_4,\{x_1,x_2,x_3\}_A\rangle-\langle \alpha_1,\{x_2,x_4,x_3\}_A\rangle\\ &&-\langle \alpha_3,[x_4,x_1,x_2]_A\rangle+\langle\{\alpha_4,\alpha_1,\alpha_3\}_{A^*},x_2\rangle+\langle\{\alpha_1,\alpha_2,\alpha_3\}_{A^*},x_4\rangle +\langle\{\alpha_2,\alpha_4,\alpha_3\}_{A^*},x_1\rangle. \end{eqnarray*} By $\{x_1,x_2,x_3\}_A=-\{x_2,x_1,x_3\}_A$ and $\{\alpha_1,\alpha_2,\alpha_3\}_{A^*}=-\{\alpha_2,\alpha_1,\alpha_3\}_{A^*}$, we deduce that $\omega$ is a symplectic structure on the 3-Lie algebra $(A\oplus A^*,[\cdot,\cdot,\cdot]_C)$. Moreover, by \eqref{eq:MP3Lie} we have $[x,y,\alpha]_C=L^*(x,y)\alpha\in A^*$ and $[\alpha,\beta,x]_C=\huaL^*(\alpha,\beta)x\in A$, so the conditions \eqref{eq:conperfectPS} are satisfied. Therefore, $(A\oplus A^*,[\cdot,\cdot,\cdot]_C,\omega)$ is a perfect phase space. Conversely, let $(\h\oplus \h^*,[\cdot,\cdot,\cdot],\omega)$ be a perfect phase space of the $3$-Lie algebra $(\h,[\cdot,\cdot,\cdot]_\h)$. By Proposition \ref{3-pre-Lie-under-3-Lie}, there exists a $3$-pre-Lie algebra structure $ \{\cdot,\cdot,\cdot\}$ on $ \h\oplus \h^*$ given by \eqref{3-pre-Lie-omega} such that $(\h\oplus \h^*,\{\cdot,\cdot,\cdot\},\omega)$ is a quadratic 3-pre-Lie algebra. By Corollary \ref{3-pre-Lie-sub}, $(\h,\{\cdot,\cdot,\cdot\}|_{\h})$ and $(\h^*,\{\cdot,\cdot,\cdot\}|_{\h^*})$ are $3$-pre-Lie subalgebras of $(\h\oplus \h^*,\{\cdot,\cdot,\cdot\})$. It is obvious that both $\h$ and $\h^*$ are isotropic. Thus, we only need to show that \eqref{eq:conMT} holds. By \eqref{eq:conperfectPS}, for all $x_1,x_2\in\h$ and $\alpha_1,\alpha_2\in\h^*$, we have \begin{eqnarray*} \omega(\{x_1,x_2,\alpha_1\},\alpha_2)=-\omega(\alpha_1,[x_1,x_2,\alpha_2])=0, \end{eqnarray*} since $[x_1,x_2,\alpha_2]\in\h^*$ and $\h^*$ is isotropic with respect to $\omega$; this implies that $\{x_1,x_2,\alpha_1\}\in\h^*$. Similarly, we can show that the other conditions in \eqref{eq:conMT} also hold. The proof is finished. \qed \begin{rmk} The notions of a matched pair of $3$-Lie algebras and a Manin triple of $3$-Lie algebras were introduced in \cite{BGS}. By \eqref{eq:MP3Lie}, we obtain that $(A^c,{A^*}^c;L^*,\huaL^*)$ is a matched pair of $3$-Lie algebras and the phase space is exactly the double of this matched pair. However, one should note that a Manin triple of $3$-pre-Lie algebras does not give rise to a Manin triple of $3$-Lie algebras. \end{rmk} \begin{rmk} For pre-Lie algebras, there are equivalent descriptions among Manin triples of pre-Lie algebras, matched pairs of pre-Lie algebras associated to the dual representations of the regular representations, and pre-Lie bialgebras \cite{Baibialgebra}. Here we only study Manin triples of $3$-pre-Lie algebras, which are closely related to phase spaces of $3$-Lie algebras and to the paraK\"{a}hler $3$-Lie algebras studied in Section 8. We postpone the study of matched pairs of $3$-pre-Lie algebras and $3$-pre-Lie bialgebras to future work. \end{rmk}
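We end this section with a simple observation, which follows directly from the definitions. \begin{rmk}{\rm The phase space $A^c\ltimes_{L^*}A^*$ constructed in the proof of Theorem \ref{3-pre-Lie-phase-space} is always perfect: by \eqref{eq:sum} we have $[x,y,\alpha]_{L^*}=L^*(x,y)\alpha\in A^*$ and $[\alpha,\beta,x]_{L^*}=0\in A^c$ for all $x,y\in A^c$ and $\alpha,\beta\in A^*$, so \eqref{eq:conperfectPS} is satisfied. Consequently, by Theorem \ref{thm:MT-ps}, every $3$-pre-Lie algebra $A$ gives rise to a Manin triple of $3$-pre-Lie algebras on $A\oplus A^*$. } \end{rmk}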
\section{Product structures on $3$-Lie algebras} In this section, we introduce the notion of a product structure on a 3-Lie algebra using the Nijenhuis condition as the integrability condition. We find four special integrability conditions, each of which gives a special decomposition of the original 3-Lie algebra. At the end of this section, we introduce the notion of a (perfect) paracomplex structure on a $3$-Lie algebra and give examples. \begin{defi} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a $3$-Lie algebra. An {\bf almost product structure} on the $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ is a linear endomorphism $E:\g\lon\g$ satisfying $E^2=\Id$ $(E\not=\pm\Id)$. An almost product structure is called a {\bf product structure} if the following integrability condition is satisfied: \begin{eqnarray}\nonumber\label{product-structure} E[x,y,z]_\g&=&[Ex,Ey,Ez]_\g+[Ex,y,z]_\g+[x,Ey,z]_\g+[x,y,Ez]_\g\\ &&-E[Ex,Ey,z]_\g-E[x,Ey,Ez]_\g-E[Ex,y,Ez]_\g. \end{eqnarray} \end{defi} \begin{rmk} One can understand a product structure on a $3$-Lie algebra as a Nijenhuis operator $E$ on a $3$-Lie algebra satisfying $E^2=\Id.$ \end{rmk} \begin{thm}\label{product-structure-subalgebra} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a $3$-Lie algebra. Then $(\g,[\cdot,\cdot,\cdot]_\g)$ has a product structure if and only if $\g$ admits a decomposition: \begin{eqnarray} \g=\g_+\oplus\g_-, \end{eqnarray} where $\g_+$ and $\g_-$ are subalgebras of $\g$. \end{thm} \pf Let $E$ be a product structure on $\g$. By $E^2=\Id$, we have $\g=\g_+\oplus\g_-$, where $\g_+$ and $\g_-$ are the eigenspaces of $\g$ associated to the eigenvalues $\pm1$. For all $x_1,x_2,x_3\in\g_+$, we have \begin{eqnarray*} E[x_1,x_2,x_3]_\g&=&[Ex_1,Ex_2,Ex_3]_\g+[Ex_1,x_2,x_3]_\g+[x_1,Ex_2,x_3]_\g+[x_1,x_2,Ex_3]_\g\\ &&-E[Ex_1,Ex_2,x_3]_\g-E[x_1,Ex_2,Ex_3]_\g-E[Ex_1,x_2,Ex_3]_\g\\ &=&4[x_1,x_2,x_3]_\g-3E[x_1,x_2,x_3]_\g. \end{eqnarray*} Thus, we have $[x_1,x_2,x_3]_\g\in\g_{+}$, which implies that $\g_+$ is a subalgebra. Similarly, we can show that $\g_-$ is a subalgebra. Conversely, we define a linear endomorphism $E:\g\lon\g$ by \begin{eqnarray}\label{eq:productE} E(x+\alpha)=x-\alpha,\,\,\,\,\forall x\in\g_+,\alpha\in\g_-. \end{eqnarray} Obviously we have $E^2=\Id$.
Since $\g_+$ is a subalgebra of $\g$, for all $x_1,x_2,x_3\in\g_+$, we have \begin{eqnarray*} &&[Ex_1,Ex_2,Ex_3]_\g+[Ex_1,x_2,x_3]_\g+[x_1,Ex_2,x_3]_\g+[x_1,x_2,Ex_3]_\g\\ &&-E[Ex_1,Ex_2,x_3]_\g-E[x_1,Ex_2,Ex_3]_\g-E[Ex_1,x_2,Ex_3]_\g\\ &=&4[x_1,x_2,x_3]_\g-3E[x_1,x_2,x_3]_\g=[x_1,x_2,x_3]_\g\\ &=&E[x_1,x_2,x_3]_\g, \end{eqnarray*} which implies that \eqref{product-structure} holds for all $x_1,x_2,x_3\in\g_+$. Similarly, we can show that \eqref{product-structure} holds for all $x,y,z\in\g$. Therefore, $E$ is a product structure on $\g$. \qed \begin{lem} Let $E$ be an almost product structure on a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. If $E$ satisfies the following equation \begin{eqnarray}\label{abel-product-0} E[x,y,z]_\g=[Ex,y,z]_\g, \end{eqnarray} then $E$ is a product structure on $\g$ such that $[\g_+,\g_+,\g_-]_\g=0$ and $[\g_-,\g_-,\g_+]_\g=0,$ i.e. $\g$ is the $3$-Lie algebra direct sum of $\g_+$ and $\g_-.$ \end{lem} \pf By \eqref{abel-product-0} and $E^2=\Id$, we have \begin{eqnarray*} &&[Ex,Ey,Ez]_\g+[Ex,y,z]_\g+[x,Ey,z]_\g+[x,y,Ez]_\g\\ &&-E[Ex,Ey,z]_\g-E[x,Ey,Ez]_\g-E[Ex,y,Ez]_\g\\ &=&[Ex,Ey,Ez]_\g+E[x,y,z]_\g+[x,Ey,z]_\g+[x,y,Ez]_\g\\ &&-[E^2x,Ey,z]_\g-[Ex,Ey,Ez]_\g-[E^2x,y,Ez]_\g\\ &=&E[x,y,z]_\g. \end{eqnarray*} Thus, $E$ is a product structure on $\g$. For all $x_1,x_2\in\g_+,\alpha_1\in\g_-$, on one hand we have \begin{eqnarray*} E[\alpha_1,x_1,x_2]_\g=[E\alpha_1,x_1,x_2]_\g=-[\alpha_1,x_1,x_2]_\g. \end{eqnarray*} On the other hand, we have \begin{eqnarray*} E[\alpha_1,x_1,x_2]_\g=E[x_1,x_2,\alpha_1]_\g=[Ex_1,x_2,\alpha_1]_\g=[x_1,x_2,\alpha_1]_\g. \end{eqnarray*} Thus, we obtain $[\g_+,\g_+,\g_-]_\g=0$. Similarly, we have $[\g_-,\g_-,\g_+]_\g=0$. The proof is finished. \qed \begin{defi}{\bf (Integrability condition I)} An almost product structure $E$ on a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ is called a {\bf strict product structure} if \eqref{abel-product-0} holds. \end{defi} \begin{cor} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a $3$-Lie algebra. Then $(\g,[\cdot,\cdot,\cdot]_\g)$ has a strict product structure if and only if $\g$ admits a decomposition: $$ \g=\g_+\oplus\g_-, $$ where $\g_+$ and $\g_-$ are subalgebras of $\g$ such that $[\g_+,\g_+,\g_-]_\g=0$ and $[\g_-,\g_-,\g_+]_\g=0.$ \end{cor} \pf We leave the details to the reader. \qed
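The following example of a strict product structure is immediate from the above corollary, and can also be verified directly from \eqref{abel-product-0}. \begin{ex}{\rm Let $\g=\g_1\oplus\g_2$ be the direct sum of the $3$-dimensional $3$-Lie algebra $\g_1$ with basis $\{e_1,e_2,e_3\}$ and non-zero bracket $[e_1,e_2,e_3]_\g=e_1$, and the $1$-dimensional abelian $3$-Lie algebra $\g_2$ with basis $\{e_4\}$. Then the linear map $E$ defined by $E(e_i)=e_i~(i=1,2,3)$ and $E(e_4)=-e_4$ is a strict product structure on $\g$ with $\g_+=\g_1$ and $\g_-=\g_2$: every bracket involving $e_4$ vanishes, so both sides of \eqref{abel-product-0} vanish on such triples, while on $\g_1$ the identity \eqref{abel-product-0} holds trivially since $E|_{\g_1}=\Id$. } \end{ex}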
\begin{lem} Let $E$ be an almost product structure on a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. If $E$ satisfies the following equation \begin{eqnarray}\label{abel-product} [x,y,z]_\g=-[x,Ey,Ez]_\g-[Ex,y,Ez]_\g-[Ex,Ey,z]_\g, \end{eqnarray} then $E$ is a product structure on $\g$. \end{lem} \pf By \eqref{abel-product} and $E^2=\Id$, we have \begin{eqnarray*} &&[Ex,Ey,Ez]_\g+[Ex,y,z]_\g+[x,Ey,z]_\g+[x,y,Ez]_\g\\ &&-E[Ex,Ey,z]_\g-E[x,Ey,Ez]_\g-E[Ex,y,Ez]_\g\\ &=&-[Ex,E^2y,E^2z]_\g-[E^2x,Ey,E^2z]_\g-[E^2x,E^2y,Ez]_\g\\ &&+[Ex,y,z]_\g+[x,Ey,z]_\g+[x,y,Ez]_\g+E[x,y,z]_\g\\ &=&E[x,y,z]_\g. \end{eqnarray*} Thus, $E$ is a product structure on $\g$. \qed \begin{defi}{\bf (Integrability condition II)} An almost product structure $E$ on a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ is called an {\bf abelian product structure} if \eqref{abel-product} holds. \end{defi} \begin{cor}\label{abelian-product-structure} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a $3$-Lie algebra. Then $(\g,[\cdot,\cdot,\cdot]_\g)$ has an abelian product structure if and only if $\g$ admits a decomposition: $$ \g=\g_+\oplus\g_-, $$ where $\g_+$ and $\g_-$ are abelian subalgebras of $\g$. \end{cor} \pf Let $E$ be an abelian product structure on $\g$. For all $x_1,x_2,x_3\in\g_{+}$, we have \begin{eqnarray*} [x_1,x_2,x_3]_\g&=& -[Ex_1,Ex_2,x_3]_\g-[x_1,Ex_2,Ex_3]_\g-[Ex_1,x_2,Ex_3]_\g\\ &=&-3[x_1,x_2,x_3]_\g, \end{eqnarray*} which implies that $[x_1,x_2,x_3]_\g=0$. Similarly, for all $\alpha_1,\alpha_2,\alpha_3\in\g_{-}$, we also have $[\alpha_1,\alpha_2,\alpha_3]_\g=0$. Thus, both $\g_+$ and $\g_-$ are abelian subalgebras. Conversely, define a linear endomorphism $E:\g\lon\g$ by \eqref{eq:productE}. Then it is straightforward to deduce that $E$ is an abelian product structure on $\g$. \qed \begin{lem} Let $E$ be an almost product structure on a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. If $E$ satisfies the following equation \begin{eqnarray}\label{product-integrability} [x,y,z]_\g&=&E[Ex,y,z]_\g+E[x,Ey,z]_\g+E[x,y,Ez]_\g, \end{eqnarray} then $E$ is an abelian product structure on $\g$ such that $[\g_+,\g_+,\g_-]_\g\subset\g_+$ and $[\g_-,\g_-,\g_+]_\g\subset\g_-.$ \end{lem} \pf By \eqref{product-integrability} and $E^2=\Id$, we have \begin{eqnarray*} &&[Ex,Ey,Ez]_\g+[Ex,y,z]_\g+[x,Ey,z]_\g+[x,y,Ez]_\g\\ &&-E[Ex,Ey,z]_\g-E[x,Ey,Ez]_\g-E[Ex,y,Ez]_\g\\ &=&E[x,Ey,Ez]_\g+E[Ex,y,Ez]_\g+E[Ex,Ey,z]_\g+E[x,y,z]_\g\\ &&-E[Ex,Ey,z]_\g-E[x,Ey,Ez]_\g-E[Ex,y,Ez]_\g\\ &=&E[x,y,z]_\g. \end{eqnarray*} Thus, we obtain that $E$ is a product structure on $\g$. For all $x_1,x_2,x_3\in\g_+$, by \eqref{product-integrability}, we have \begin{eqnarray*} [x_1,x_2,x_3]_\g&=&E[Ex_1,x_2,x_3]_\g+E[x_1,Ex_2,x_3]_\g+E[x_1,x_2,Ex_3]_\g\\ &=&3E[x_1,x_2,x_3]_\g=3[x_1,x_2,x_3]_\g. \end{eqnarray*} Thus, we obtain $[\g_+,\g_+,\g_+]_\g=0$. Similarly, we have $[\g_-,\g_-,\g_-]_\g=0$. By Corollary \ref{abelian-product-structure}, $E$ is an abelian product structure on $\g$. Moreover, for all $x_1,x_2\in\g_+,\alpha_1\in\g_-$, we have \begin{eqnarray*} [x_1,x_2,\alpha_1]_\g&=&E[Ex_1,x_2,\alpha_1]_\g+E[x_1,Ex_2,\alpha_1]_\g+E[x_1,x_2,E\alpha_1]_\g\\ &=&E[x_1,x_2,\alpha_1]_\g, \end{eqnarray*} which implies that $[\g_+,\g_+,\g_-]_\g\subset\g_+$. Similarly, we have $[\g_-,\g_-,\g_+]_\g\subset\g_-$. \qed \begin{defi}{\bf (Integrability condition III)} An almost product structure $E$ on a $3$-Lie algebra $ (\g,[\cdot,\cdot,\cdot]_\g)$ is called a {\bf strong abelian product structure} if \eqref{product-integrability} holds. \end{defi} \begin{cor} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a $3$-Lie algebra. Then $(\g,[\cdot,\cdot,\cdot]_\g)$ has a strong abelian product structure if and only if $\g$ admits a decomposition: $$ \g=\g_+\oplus\g_-, $$ where $\g_+$ and $\g_-$ are abelian subalgebras of $\g$ such that $[\g_+,\g_+,\g_-]_\g\subset\g_+$ and $[\g_-,\g_-,\g_+]_\g\subset\g_-.$ \end{cor} \begin{rmk} Let $E$ be a strong abelian product structure on a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. Then we can define $\nu_+:\g_+\longrightarrow \Hom(\wedge^2\g_-,\g_-)$ and $\nu_-:\g_-\longrightarrow \Hom(\wedge^2\g_+,\g_+)$ by $$ \nu_+(x)(\alpha,\beta)=[\alpha,\beta,x]_\g,\quad \nu_-(\alpha)(x,y)=[x,y,\alpha]_\g,\quad\forall x,y\in\g_+, \alpha,\beta\in\g_-. $$ It turns out that $\nu_+$ and $\nu_-$ are generalized representations of the abelian $3$-Lie algebras $\g_+$ and $\g_-$ on $\g_-$ and $\g_+$ respectively. See \cite{Liu-Makhlouf-Sheng} for more details about generalized representations of $3$-Lie algebras. \end{rmk} More surprisingly, a strong abelian product structure is an $\huaO$-operator as well as a Rota-Baxter operator \cite{RB3Lie,PBG}.
Thus, some $\huaO$-operators and Rota-Baxter operators on 3-Lie algebras can serve as integrability conditions. \begin{pro} Let $E$ be an almost product structure on a $3$-Lie algebra $ (\g,[\cdot,\cdot,\cdot]_\g)$. Then $E$ is a strong abelian product structure on $\g$ if and only if $E$ is an $\huaO$-operator associated to the adjoint representation $(\g,\ad)$. Furthermore, there exists a compatible $3$-pre-Lie algebra $(\g,\{\cdot,\cdot,\cdot\})$ on the $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$, where the $3$-pre-Lie algebra structure on $\g$ is given by \begin{eqnarray} \{x,y,z\}=E[x,y,Ez]_\g,\,\,\,\,\forall x,y,z\in\g. \end{eqnarray} \end{pro} \pf By \eqref{product-integrability}, for all $x,y,z\in\g$ we have \begin{eqnarray*} [Ex,Ey,Ez]_\g&=&E[E^2x,Ey,Ez]_\g+E[Ex,E^2y,Ez]_\g+E[Ex,Ey,E^2z]_\g\\ &=&E(\ad_{Ex,Ey}z+\ad_{Ey,Ez}x+\ad_{Ez,Ex}y). \end{eqnarray*} Thus, $E$ is an $\huaO$-operator associated to the adjoint representation $(\g,\ad)$. Conversely, if for all $x,y,z\in\g$, we have \begin{eqnarray*} [Ex,Ey,Ez]_\g&=&E(\ad_{Ex,Ey}z+\ad_{Ey,Ez}x+\ad_{Ez,Ex}y)\\ &=&E([Ex,Ey,z]_\g+[x,Ey,Ez]_\g+[Ex,y,Ez]_\g), \end{eqnarray*} then $ [x,y,z]_\g=E[x,y,Ez]_\g+E[Ex,y,z]_\g+E[x,Ey,z]_\g $ by $E^{-1}=E$. Thus, $E$ is a strong abelian product structure on $\g$. Furthermore, by $E^{-1}=E$ and Proposition \ref{3-Lie-compatible-3-pre-Lie}, there exists a compatible $3$-pre-Lie algebra structure on $\g$ given by $ \{x,y,z\}=E\ad_{x,y}E^{-1}(z)=E[x,y,Ez]_\g. $ The proof is finished. \qed\vspace{3mm} There is a new phenomenon that an involutive automorphism of a 3-Lie algebra can also serve as an integrability condition. \begin{lem} Let $E$ be an almost product structure on a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. If $E$ satisfies the following equation \begin{eqnarray}\label{product-integrability-1} E[x,y,z]_\g=[Ex,Ey,Ez]_\g, \end{eqnarray} then $E$ is a product structure on $\g$ such that \begin{equation}\label{eq:coherenceconPP} [\g_+,\g_+,\g_-]_\g\subset\g_-,\quad [\g_-,\g_-,\g_+]_\g\subset\g_+. \end{equation} \end{lem} \pf By \eqref{product-integrability-1} and $E^2=\Id$, we have \begin{eqnarray*} &&[Ex,Ey,Ez]_\g+[Ex,y,z]_\g+[x,Ey,z]_\g+[x,y,Ez]_\g\\ &&-E[Ex,Ey,z]_\g-E[x,Ey,Ez]_\g-E[Ex,y,Ez]_\g\\ &=&E[x,y,z]_\g+[Ex,y,z]_\g+[x,Ey,z]_\g+[x,y,Ez]_\g\\ &&-[E^2x,E^2y,Ez]_\g-[Ex,E^2y,E^2z]_\g-[E^2x,Ey,E^2z]_\g\\ &=&E[x,y,z]_\g. \end{eqnarray*} Thus, $E$ is a product structure on $\g$. Moreover, for all $x_1,x_2\in\g_+,\alpha_1\in\g_-$, we have \begin{eqnarray*} E[x_1,x_2,\alpha_1]_\g=[Ex_1,Ex_2,E\alpha_1]_\g=-[x_1,x_2,\alpha_1]_\g, \end{eqnarray*} which implies that $[\g_+,\g_+,\g_-]_\g\subset\g_-$. Similarly, we have $[\g_-,\g_-,\g_+]_\g\subset\g_+$. \qed \begin{defi}{\bf (Integrability condition IV)} An almost product structure $E$ on a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ is called a {\bf perfect product structure} if \eqref{product-integrability-1} holds. \end{defi} \begin{cor} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a $3$-Lie algebra. Then $(\g,[\cdot,\cdot,\cdot]_\g)$ has a perfect product structure if and only if $\g$ admits a decomposition: $$ \g=\g_+\oplus\g_-, $$ where $\g_+$ and $\g_-$ are subalgebras of $\g$ such that $[\g_+,\g_+,\g_-]_\g\subset\g_-$ and $[\g_-,\g_-,\g_+]_\g\subset\g_+.$ \end{cor} \pf We leave the details to the reader. \qed \begin{cor} A strict product structure on a $3$-Lie algebra is a perfect product structure. \end{cor} \begin{rmk} Let $E$ be a product structure on a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$.
By Theorem \ref{product-structure-subalgebra}, $\g_+$ and $\g_-$ are subalgebras. However, the brackets of mixed terms are very complicated. But a perfect product structure $E$ on $(\g,[\cdot,\cdot,\cdot]_\g)$ ensures $[\g_+,\g_+,\g_-]_\g\subset\g_-$ and $[\g_-,\g_-,\g_+]_\g\subset\g_+$. Note that this is exactly the condition required in the definition of a matched pair of $3$-Lie algebras \cite{BGS}. Thus, $E$ is a perfect product structure if and only if $(\g_+,\g_-)$ is a matched pair of $3$-Lie algebras. This type of product structure is very important in our later studies. \end{rmk} \begin{defi} \begin{itemize} \item[\rm(i)] A {\bf paracomplex structure} on a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ is a product structure $E$ on $\g$ such that the eigenspaces of $\g$ associated to the eigenvalues $\pm1$ have the same dimension, i.e. $\dim(\g_+)=\dim(\g_-)$. \item[\rm(ii)] A {\bf perfect paracomplex structure} on a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ is a perfect product structure $E$ on $\g$ such that the eigenspaces of $\g$ associated to the eigenvalues $\pm1$ have the same dimension, i.e. $\dim(\g_+)=\dim(\g_-)$. \end{itemize} \end{defi} \begin{pro}\label{paracomplex-3-pre-Lie} Let $(A,\{\cdot,\cdot,\cdot\})$ be a $3$-pre-Lie algebra. Then, on the semidirect product $3$-Lie algebra $ A^c\ltimes_{L^*}A^*$, there is a perfect paracomplex structure $E:A^c\ltimes_{L^*}A^*\lon A^c\ltimes_{L^*}A^*$ given by \begin{eqnarray}\label{eq:defiE} E(x+\alpha)=x-\alpha,\,\,\,\,\forall x\in A^c, \alpha\in A^*. \end{eqnarray} \end{pro} \pf It is obvious that $E^2=\Id$. Moreover, we have $(A^c\ltimes_{L^*}A^*)_+=A$, $(A^c\ltimes_{L^*}A^*)_-=A^*$ and they are two subalgebras of the semidirect product $3$-Lie algebra $A^c\ltimes_{L^*}A^*$. By Theorem \ref{product-structure-subalgebra}, $E$ is a product structure on $A^c\ltimes_{L^*}A^*$. Since $A$ and $A^*$ have the same dimension, $E$ is a paracomplex structure on $A^c\ltimes_{L^*}A^*$. It is obvious that $E$ is perfect. \qed\vspace{3mm} At the end of this section, we give some examples of product structures. \begin{ex}{\rm There is a unique (up to isomorphism) non-trivial $3$-dimensional $3$-Lie algebra.
It has a basis $\{e_1,e_2,e_3\}$ with respect to which the non-zero product is given by $$[e_1,e_2,e_3]=e_1.$$ Then $E=\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&-1\end{array}\right)$ and $E=\left(\begin{array}{ccc}1&0&0\\ 0&-1&0\\ 0&0&1\end{array}\right)$ are strong abelian product structures and $E=\left(\begin{array}{ccc}-1&0&0\\ 0&1&0\\ 0&0&1\end{array}\right)$ is a perfect product structure. } \end{ex} \begin{ex}\label{ex:A4product}{\rm Consider the $4$-dimensional Euclidean $3$-Lie algebra $A_4$ given in Example \ref{ex:A4symplectic}. Then \begin{eqnarray*}E_1=\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&-1&0\\ 0&0&0&-1\end{array}\right),~ E_2=\left(\begin{array}{cccc}1&0&0&0\\ 0&-1&0&0\\ 0&0&1&0\\ 0&0&0&-1\end{array}\right),~ E_3=\left(\begin{array}{cccc}1&0&0&0\\ 0&-1&0&0\\ 0&0&-1&0\\ 0&0&0&1\end{array}\right),\\ E_4=\left(\begin{array}{cccc}-1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&-1\end{array}\right),~ E_5=\left(\begin{array}{cccc}-1&0&0&0\\ 0&1&0&0\\ 0&0&-1&0\\ 0&0&0&1\end{array}\right), E_6=\left(\begin{array}{cccc}-1&0&0&0\\ 0&-1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{array}\right) \end{eqnarray*}are perfect and abelian product structures. } \end{ex}
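As an illustration of the relation between strong abelian product structures and $3$-pre-Lie algebras, we compute the compatible $3$-pre-Lie structure induced by one of the strong abelian product structures above. \begin{ex}{\rm Consider the $3$-dimensional $3$-Lie algebra with non-zero bracket $[e_1,e_2,e_3]_\g=e_1$ and the strong abelian product structure $E$ given by $E(e_1)=e_1$, $E(e_2)=e_2$, $E(e_3)=-e_3$. Then $\{x,y,z\}=E[x,y,Ez]_\g$ yields $$\{e_1,e_2,e_3\}=-e_1,\quad \{e_1,e_3,e_2\}=-e_1,\quad \{e_2,e_3,e_1\}=e_1,$$ and all remaining products of basis vectors are either determined by skew-symmetry in the first two arguments or vanish. In particular, $\{e_1,e_2,e_3\}+\{e_2,e_3,e_1\}+\{e_3,e_1,e_2\}=e_1=[e_1,e_2,e_3]_\g$, so the sub-adjacent $3$-Lie algebra is the original one; this recovers the $3$-pre-Lie structure used in the phase space example in Section 4. } \end{ex}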
\section{Complex structures on $3$-Lie algebras} In this section, we introduce the notion of a complex structure on a real 3-Lie algebra using the Nijenhuis condition as the integrability condition. Parallel to the case of product structures, we also find four special integrability conditions. \begin{defi}\label{complex} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a real $3$-Lie algebra. An {\bf almost complex structure} on $\g$ is a linear endomorphism $J:\g\lon\g$ satisfying $J^2=-\Id$. An almost complex structure is called a {\bf complex} structure if the following integrability condition is satisfied: \begin{eqnarray}\nonumber\label{complex-structure} J[x,y,z]_\g&=&-[Jx,Jy,Jz]_\g+[Jx,y,z]_\g+[x,Jy,z]_\g+[x,y,Jz]_\g\\ &&+J[Jx,Jy,z]_\g+J[x,Jy,Jz]_\g+J[Jx,y,Jz]_\g. \end{eqnarray} \end{defi} \begin{rmk} One can understand a complex structure on a $3$-Lie algebra as a Nijenhuis operator $J$ on a $3$-Lie algebra satisfying $J^2=-\Id.$ \end{rmk} \begin{rmk} One can also use Definition \ref{complex} to define the notion of a complex structure on a complex $3$-Lie algebra, considering $J$ to be $\mathbb C$-linear. However, this is not very interesting since for a complex $3$-Lie algebra, there is a one-to-one correspondence between such $\mathbb C$-linear complex structures and product structures (see Proposition \ref{equivalent}). \end{rmk} Consider $\g_{\mathbb C}=\g\otimes_{\mathbb R} \mathbb C\cong\{x+iy|x,y\in\g\}$, the complexification of the real $3$-Lie algebra $\g$, which turns out to be a complex $3$-Lie algebra by extending the $3$-Lie bracket on $\g$ complex trilinearly; we denote it by $(\g_{\mathbb C},[\cdot,\cdot,\cdot]_{\g_{\mathbb C}})$. We have an equivalent description of the integrability condition given in Definition \ref{complex}. We denote by $\sigma$ the conjugation in $\g_{\mathbb C}$ with respect to the real form $\g$, that is, $\sigma(x+iy)=x-iy,\,\,x,y\in\g$. Then, $\sigma$ is a complex antilinear, involutive automorphism of the complex vector space $\g_{\mathbb C}$. \begin{thm}\label{complex-structure-subalgebra} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a real $3$-Lie algebra. Then $\g$ has a complex structure if and only if $\g_{\mathbb C}$ admits a decomposition: \begin{eqnarray} \g_{\mathbb C}=\frkq\oplus\frkp, \end{eqnarray} where $\frkq$ and $\frkp=\sigma(\frkq)$ are complex subalgebras of $\g_{\mathbb C}$. \end{thm} \pf We extend the complex structure $J$ complex linearly, which is denoted by $J_{\mathbb C}$, i.e. $J_{\mathbb C}:\g_{\mathbb C}\longrightarrow \g_{\mathbb C}$ is defined by \begin{equation}\label{eq:JC} J_{\mathbb C}(x+iy)=Jx+iJy,\quad \forall x,y\in\g. \end{equation} Then $J_{\mathbb C}$ is a complex linear endomorphism on $\g_{\mathbb C}$ satisfying $J_{\mathbb C}^2=-\Id$ and the integrability condition \eqref{complex-structure} on $\g_{\mathbb C}$. Denote by $\g_{\pm i}$ the eigenspaces of $\g_{\mathbb C}$ associated to the eigenvalues $\pm i$, so that \begin{eqnarray*} \g_{\mathbb C}=\g_{i}\oplus\g_{-i}. \end{eqnarray*} It is straightforward to see that $\g_{i}=\{x-iJx|x\in\g\}$ and $\g_{-i}=\{x+iJx|x\in\g\}$; indeed, $J_{\mathbb C}(x\mp iJx)=Jx\pm ix=\pm i(x\mp iJx)$. Therefore, we have $\g_{-i}=\sigma(\g_{i})$. For all $X,Y,Z\in\g_{i}$, we have \begin{eqnarray*} J_{\mathbb C}[X,Y,Z]_{\g_{\mathbb C}}&=&-[J_{\mathbb C}X,J_{\mathbb C}Y,J_{\mathbb C}Z]_{\g_{\mathbb C}}+[J_{\mathbb C}X,Y,Z]_{\g_{\mathbb C}}+[X,J_{\mathbb C}Y,Z]_{\g_{\mathbb C}}+[X,Y,J_{\mathbb C}Z]_{\g_{\mathbb C}}\\ &&+J_{\mathbb C}[J_{\mathbb C}X,J_{\mathbb C}Y,Z]_{\g_{\mathbb C}}+J_{\mathbb C}[X,J_{\mathbb C}Y,J_{\mathbb C}Z]_{\g_{\mathbb C}}+J_{\mathbb C}[J_{\mathbb C}X,Y,J_{\mathbb C}Z]_{\g_{\mathbb C}}\\ &=&4i[X,Y,Z]_{\g_{\mathbb C}}-3J_{\mathbb C}[X,Y,Z]_{\g_{\mathbb C}}, \end{eqnarray*} which gives $J_{\mathbb C}[X,Y,Z]_{\g_{\mathbb C}}=i[X,Y,Z]_{\g_{\mathbb C}}$. Thus, we have $[X,Y,Z]_{\g_{\mathbb C}}\in\g_{i}$, which implies that $\g_i$ is a subalgebra. Similarly, we can show that $\g_{-i}$ is also a subalgebra. Conversely, we define a complex linear endomorphism $J_{\mathbb C}:\g_{\mathbb C}\lon\g_{\mathbb C}$ by \begin{eqnarray}\label{defi-complex-structure} J_{\mathbb C}(X+\sigma(Y))=iX-i\sigma(Y),\,\,\,\,\forall X,Y\in\frkq. \end{eqnarray} Since $\sigma$ is a complex antilinear, involutive automorphism of $\g_{\mathbb C}$, we have \begin{eqnarray*} J_{\mathbb C}^2(X+\sigma(Y))=J_{\mathbb C}(iX-i\sigma(Y))=J_{\mathbb C}(iX+\sigma(iY))=i(iX)-i\sigma(iY)=-X-\sigma(Y), \end{eqnarray*} i.e. $J_{\mathbb C}^2=-\Id$. Since $\frkq$ is a subalgebra of $\g_{\mathbb C}$, for all $X,Y,Z\in\frkq$, we have \begin{eqnarray*} &&-[J_{\mathbb C}X,J_{\mathbb C}Y,J_{\mathbb C}Z]_{\g_{\mathbb C}}+[J_{\mathbb C}X,Y,Z]_{\g_{\mathbb C}}+[X,J_{\mathbb C}Y,Z]_{\g_{\mathbb C}}+[X,Y,J_{\mathbb C}Z]_{\g_{\mathbb C}}\\ &&+J_{\mathbb C}[J_{\mathbb C}X,J_{\mathbb C}Y,Z]_{\g_{\mathbb C}}+J_{\mathbb C}[X,J_{\mathbb C}Y,J_{\mathbb C}Z]_{\g_{\mathbb C}}+J_{\mathbb C}[J_{\mathbb C}X,Y,J_{\mathbb C}Z]_{\g_{\mathbb C}}\\ &=&4i[X,Y,Z]_{\g_{\mathbb C}}-3J_{\mathbb C}[X,Y,Z]_{\g_{\mathbb C}}=i[X,Y,Z]_{\g_{\mathbb C}}\\ &=&J_{\mathbb C}[X,Y,Z]_{\g_{\mathbb C}}, \end{eqnarray*} which implies that $J_{\mathbb C}$ satisfies \eqref{complex-structure} for all $X,Y,Z\in\frkq$. Since $\g_{\mathbb C}=\frkq\oplus\frkp$, we can write any $\huaX\in\g_{\mathbb C}$ as $\huaX=X+\sigma(Y)$ for some $X,Y\in\frkq$, and, treating the remaining cases in the same way, we can show that $J_{\mathbb C}$ satisfies \eqref{complex-structure} for all $\huaX,\huaY,\huaZ\in\g_{\mathbb C}$. Since $\sigma$ is a complex antilinear, involutive automorphism of $\g_{\mathbb C}$, we have \begin{eqnarray*} (J_{\mathbb C}\circ\sigma)(X+\sigma(Y))=J_{\mathbb C}(Y+\sigma(X))=iY-i\sigma(X)=\sigma(iX-i\sigma(Y))=(\sigma\circ J_{\mathbb C})(X+\sigma(Y)), \end{eqnarray*} which implies that $J_{\mathbb C}\circ\sigma=\sigma\circ J_{\mathbb C}$. Moreover, since $\sigma(\huaX)=\huaX$ is equivalent to $\huaX\in\g$, the set of fixed points of $\sigma$ is the real vector space $\g$.
By $J_{\mathbb C}\circ\sigma=\sigma\circ J_{\mathbb C}$, there is a well-defined $J\in\gl(\g)$ given by $$J\triangleq J_{\mathbb C}|_{\g}.$$ Since $J_{\mathbb C}$ satisfies \eqref{complex-structure} and $J_{\mathbb C}^2=-\Id$ on $\g_{\mathbb C}$, it follows that $J$ is a complex structure on $\g$. \qed \begin{lem} Let $J$ be an almost complex structure on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. If $J$ satisfies \begin{eqnarray}\label{adapt} J[x,y,z]_\g=[Jx,y,z]_\g,\,\,\,\,\forall x,y,z\in\g, \end{eqnarray} then $J$ is a complex structure on $(\g,[\cdot,\cdot,\cdot]_\g)$. \end{lem} \pf By \eqref{adapt} and $J^2=-\Id$, we have \begin{eqnarray*} &&-[Jx,Jy,Jz]_\g+[Jx,y,z]_\g+[x,Jy,z]_\g+[x,y,Jz]_\g\\ &&+J[Jx,Jy,z]_\g+J[x,Jy,Jz]_\g+J[Jx,y,Jz]_\g\\ &=&-[Jx,Jy,Jz]_\g+J[x,y,z]_\g+[x,Jy,z]_\g+[x,y,Jz]_\g\\ &&+[J^2x,Jy,z]_\g+[Jx,Jy,Jz]_\g+[J^2x,y,Jz]_\g\\ &=&J[x,y,z]_\g. \end{eqnarray*} Thus, we obtain that $J$ is a complex structure on $\g$. \qed \begin{defi}{\bf (Integrability condition I)} An almost complex structure $J$ on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ is called a {\bf strict complex structure} if \eqref{adapt} holds. \end{defi}
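Before characterizing strict complex structures, we note a natural class of examples: if $\g$ is the underlying real $3$-Lie algebra of a complex $3$-Lie algebra, then $Jx\triangleq ix$ is a strict complex structure on $\g$. Indeed, $J^2=-\Id$ and, the bracket being complex trilinear, $$J[x,y,z]_\g=i[x,y,z]_\g=[ix,y,z]_\g=[Jx,y,z]_\g,$$ so \eqref{adapt} holds.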
\begin{cor} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a real $3$-Lie algebra. Then there is a strict complex structure on $(\g,[\cdot,\cdot,\cdot]_\g)$ if and only if $\g_{\mathbb C}$ admits a decomposition: \begin{eqnarray} \g_{\mathbb C}=\frkq\oplus\frkp, \end{eqnarray} where $\frkq$ and $\frkp=\sigma(\frkq)$ are complex subalgebras of $\g_{\mathbb C}$ such that $[\frkq,\frkq,\frkp]_{\g_{\mathbb C}}=0$ and $[\frkp,\frkp,\frkq]_{\g_{\mathbb C}}=0,$ i.e. $\g_{\mathbb C}$ is the direct sum of the $3$-Lie algebras $\frkq$ and $\frkp$. \end{cor} \pf Let $J$ be a strict complex structure on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. Then, $J_{\mathbb C}$ is a strict complex structure on the complex $3$-Lie algebra $(\g_{\mathbb C},[\cdot,\cdot,\cdot]_{\g_{\mathbb C}})$. For all $X,Y\in\g_{i}$ and $\sigma(Z)\in\g_{-i}$, on the one hand we have \begin{eqnarray*} J_{\mathbb C}[X,Y,\sigma(Z)]_{\g_{\mathbb C}}=[J_{\mathbb C}X,Y,\sigma(Z)]_{\g_{\mathbb C}}=i[X,Y,\sigma(Z)]_{\g_{\mathbb C}}. \end{eqnarray*} On the other hand, we have \begin{eqnarray*} J_{\mathbb C}[X,Y,\sigma(Z)]_{\g_{\mathbb C}}=J_{\mathbb C}[\sigma(Z),X,Y]_{\g_{\mathbb C}}=[J_{\mathbb C}\sigma(Z),X,Y]_{\g_{\mathbb C}}=-i[\sigma(Z),X,Y]_{\g_{\mathbb C}}. \end{eqnarray*} Thus, we obtain $[\g_i,\g_i,\g_{-i}]_{\g_{\mathbb C}}=0$. Similarly, we can show $[\g_{-i},\g_{-i},\g_{i}]_{\g_{\mathbb C}}=0$. Conversely, define a complex linear endomorphism $J_{\mathbb C}:\g_{\mathbb C}\lon\g_{\mathbb C}$ by \eqref{defi-complex-structure}. Then it is straightforward to deduce that $J_{\mathbb C}^2=-\Id$. Since $\frkq$ is a subalgebra of $\g_{\mathbb C}$, for all $X,Y,Z\in\frkq$, we have \begin{eqnarray*} J_{\mathbb C}[X,Y,Z]_{\g_{\mathbb C}}=i[X,Y,Z]_{\g_{\mathbb C}}=[J_{\mathbb C}X,Y,Z]_{\g_{\mathbb C}}, \end{eqnarray*} which implies that $J_{\mathbb C}$ satisfies \eqref{adapt} for all $X,Y,Z\in\frkq$. Similarly, using $[\frkq,\frkq,\frkp]_{\g_{\mathbb C}}=0$ and $[\frkp,\frkp,\frkq]_{\g_{\mathbb C}}=0$, we can show that $J_{\mathbb C}$ satisfies \eqref{adapt} for all $\huaX,\huaY,\huaZ\in\g_{\mathbb C}$. By the proof of Theorem \ref{complex-structure-subalgebra}, we obtain that $J\triangleq J_{\mathbb C}|_{\g}$ is a strict complex structure on the real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. The proof is finished. \qed\vspace{2mm} Let $J$ be an almost complex structure on a real 3-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. We can define a complex vector space structure on the real vector space $\g$ by \begin{eqnarray}\label{complex-space} (a+bi)x\triangleq ax+bJx,\,\,\,\forall a,b\in\mathbb R,x\in\g. \end{eqnarray} Define two maps $\varphi:\g\lon\g_i$ and $\psi:\g\lon\g_{-i}$ as follows: \begin{eqnarray*} \varphi(x)&=&\frac{1}{2}(x-iJx),\\ \psi(x) &=&\frac{1}{2}(x+iJx). \end{eqnarray*} It is straightforward to deduce that $\varphi$ is a complex linear isomorphism and $\psi=\sigma\circ\varphi$ is a complex antilinear isomorphism between complex vector spaces. Let $J$ be a strict complex structure on a real 3-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. Then, with the complex vector space structure defined above, $(\g,[\cdot,\cdot,\cdot]_\g)$ is a complex $3$-Lie algebra. Indeed, the complex trilinearity of the $3$-Lie bracket follows from \begin{eqnarray*} [(a+bi)x,y,z]_\g&=&[ax+bJx,y,z]_\g=a[x,y,z]_\g+b[Jx,y,z]_\g\\ &=&a[x,y,z]_\g+bJ[x,y,z]_\g=(a+bi)[x,y,z]_\g \end{eqnarray*} using \eqref{adapt} and \eqref{complex-space}. \vspace{2mm} Let $J$ be a complex structure on $\g$. Define a new bracket $[\cdot,\cdot,\cdot]_J:\wedge^3\g\lon\g$ by \begin{eqnarray}\label{J-bracket} [x,y,z]_J\triangleq \frac{1}{4}([x,y,z]_\g-[x,Jy,Jz]_\g-[Jx,y,Jz]_\g-[Jx,Jy,z]_\g),\,\,\,\,\forall x,y,z\in\g. \end{eqnarray} \begin{pro}\label{subalgebra-iso} Let $J$ be a complex structure on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. Then $(\g,[\cdot,\cdot,\cdot]_J)$ is a real $3$-Lie algebra. Moreover, $J$ is a strict complex structure on $(\g,[\cdot,\cdot,\cdot]_J)$ and the corresponding complex $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_J)$ is isomorphic to the complex $3$-Lie algebra $\g_{i}$. \end{pro} \pf One can show that $(\g,[\cdot,\cdot,\cdot]_J)$ is a real $3$-Lie algebra directly. Here we use a different approach to prove this result. By \eqref{complex-structure}, for all $x,y,z\in\g$, we have \begin{eqnarray} \nonumber[\varphi(x),\varphi(y),\varphi(z)]_{\g_{\mathbb C}}&=&\frac{1}{8}[x-iJx,y-iJy,z-iJz]_{\g_{\mathbb C}}\\ \nonumber &=&\frac{1}{8}([x,y,z]_\g-[x,Jy,Jz]_\g-[Jx,y,Jz]_\g-[Jx,Jy,z]_\g)\\ \nonumber&&-\frac{1}{8}i([x,y,Jz]_\g+[x,Jy,z]_\g+[Jx,y,z]_\g-[Jx,Jy,Jz]_\g)\\ \nonumber &=&\frac{1}{8}([x,y,z]_\g-[x,Jy,Jz]_\g-[Jx,y,Jz]_\g-[Jx,Jy,z]_\g)\\ \nonumber &&-\frac{1}{8}iJ([x,y,z]_\g-[x,Jy,Jz]_\g-[Jx,y,Jz]_\g-[Jx,Jy,z]_\g)\\ \label{eq:Jiso}&=&\varphi[x,y,z]_J. \end{eqnarray} Thus, we have $[x,y,z]_J=\varphi^{-1}[\varphi(x),\varphi(y),\varphi(z)]_{\g_{\mathbb C}}$. Since $J$ is a complex structure, $\g_i$ is a $3$-Lie subalgebra. Therefore, $(\g,[\cdot,\cdot,\cdot]_J)$ is a real $3$-Lie algebra.
By \eqref{complex-structure}, for all $x,y,z\in\g$, we have \begin{eqnarray*} J[x,y,z]_J&=&\frac{1}{4}J([x,y,z]_\g-[x,Jy,Jz]_\g-[Jx,y,Jz]_\g-[Jx,Jy,z]_\g)\\ &=&\frac{1}{4}(-[Jx,Jy,Jz]_\g+[Jx,y,z]_\g+[x,Jy,z]_\g+[x,y,Jz]_\g)\\ &=&[Jx,y,z]_J, \end{eqnarray*} which implies that $J$ is a strict complex structure on $(\g,[\cdot,\cdot,\cdot]_J)$. By \eqref{eq:Jiso}, $\varphi$ is a complex $3$-Lie algebra isomorphism. The proof is finished. \qed \begin{pro} Let $J$ be a complex structure on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. Then $J$ is a strict complex structure on $(\g,[\cdot,\cdot,\cdot]_\g)$ if and only if $[\cdot,\cdot,\cdot]_J=[\cdot,\cdot,\cdot]_\g.$ \end{pro} \pf If $J$ is a strict complex structure on $(\g,[\cdot,\cdot,\cdot]_\g)$, by $J[x,y,z]_\g=[Jx,y,z]_\g$, we have \begin{eqnarray*} [x,y,z]_J=\frac{1}{4}([x,y,z]_\g-[x,Jy,Jz]_\g-[Jx,y,Jz]_\g-[Jx,Jy,z]_\g)=[x,y,z]_\g. \end{eqnarray*} Conversely, if $[\cdot,\cdot,\cdot]_J=[\cdot,\cdot,\cdot]_\g$, we have $$-3[x,y,z]_\g=[x,Jy,Jz]_\g+[Jx,y,Jz]_\g+[Jx,Jy,z]_\g.$$ Applying this identity to the triple $(Jx,y,z)$ and using $J^2=-\Id$, we get $$-3[Jx,y,z]_\g=[Jx,Jy,Jz]_\g-[x,y,Jz]_\g-[x,Jy,z]_\g.$$ Then by the integrability condition of $J$, we obtain \begin{eqnarray*} 4J[x,y,z]_J&=&-[Jx,Jy,Jz]_\g+[Jx,y,z]_\g+[x,Jy,z]_\g+[x,y,Jz]_\g\\ &=&3[Jx,y,z]_\g+[Jx,y,z]_\g\\ &=&4[Jx,y,z]_\g, \end{eqnarray*} which implies that $J[x,y,z]_\g=[Jx,y,z]_\g$. The proof is finished. \qed \begin{lem} Let $J$ be an almost complex structure on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. If $J$ satisfies the following equation \begin{eqnarray}\label{abel-complex} [x,y,z]_\g=[x,Jy,Jz]_\g+[Jx,y,Jz]_\g+[Jx,Jy,z]_\g, \end{eqnarray} then $J$ is a complex structure on $\g$. \end{lem} \pf By \eqref{abel-complex} and $J^2=-\Id$, we have \begin{eqnarray*} &&-[Jx,Jy,Jz]_\g+[Jx,y,z]_\g+[x,Jy,z]_\g+[x,y,Jz]_\g\\ &&+J[Jx,Jy,z]_\g+J[x,Jy,Jz]_\g+J[Jx,y,Jz]_\g\\ &=&-[Jx,J^2y,J^2z]_\g-[J^2x,Jy,J^2z]_\g-[J^2x,J^2y,Jz]_\g\\ &&+[Jx,y,z]_\g+[x,Jy,z]_\g+[x,y,Jz]_\g+J[x,y,z]_\g\\ &=&J[x,y,z]_\g. \end{eqnarray*} Thus, we obtain that $J$ is a complex structure on $\g$. \qed \begin{defi}{\bf (Integrability condition II)} An almost complex structure $J$ on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ is called an {\bf abelian complex structure} if \eqref{abel-complex} holds. \end{defi} \begin{rmk} Let $J$ be an abelian complex structure on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. Then $(\g,[\cdot,\cdot,\cdot]_J)$ is an abelian $3$-Lie algebra. \end{rmk} \begin{cor} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a real $3$-Lie algebra. Then $\g$ has an abelian complex structure if and only if $\g_{\mathbb C}$ admits a decomposition: $$ \g_{\mathbb C}=\frkq\oplus\frkp, $$ where $\frkq$ and $\frkp=\sigma(\frkq)$ are complex abelian subalgebras of $\g_{\mathbb C}$. \end{cor} \pf Let $J$ be an abelian complex structure on $\g$. By Proposition \ref{subalgebra-iso}, we obtain that $\varphi$ is a complex $3$-Lie algebra isomorphism from $(\g,[\cdot,\cdot,\cdot]_J)$ to $(\g_{i},[\cdot,\cdot,\cdot]_{\g_{\mathbb C}})$. Since $J$ is abelian, $(\g,[\cdot,\cdot,\cdot]_J)$ is an abelian $3$-Lie algebra. Therefore, $\frkq=\g_{i}$ is an abelian subalgebra of $\g_{\mathbb C}$.
Since $\frkp=\g_{-i}=\sigma(\g_{i})$, for all $x_1+iy_1,x_2+iy_2,x_3+iy_3\in\g_i$, we have \begin{eqnarray*} &&[\sigma(x_1+iy_1),\sigma(x_2+iy_2),\sigma(x_3+iy_3)]_{\g_{\mathbb C}}\\&=&[x_1-iy_1,x_2-iy_2,x_3-iy_3]_{\g_{\mathbb C}}\\ &=&([x_1,x_2,x_3]_\g-[x_1,y_2,y_3]_\g-[y_1,x_2,y_3]_\g-[y_1,y_2,x_3]_\g)\\ &&-i([x_1,x_2,y_3]_\g+[x_1,y_2,x_3]_\g+[y_1,x_2,x_3]_\g-[y_1,y_2,y_3]_\g)\\ &=&\sigma[x_1+iy_1,x_2+iy_2,x_3+iy_3]_{\g_{\mathbb C}}\\ &=&0. \end{eqnarray*} Thus, $\frkp$ is an abelian subalgebra of $\g_{\mathbb C}$. Conversely, by Theorem \ref{complex-structure-subalgebra}, there is a complex structure $J$ on $\g$. Moreover, by Proposition \ref{subalgebra-iso}, we have a complex $3$-Lie algebra isomorphism $\varphi$ from $(\g,[\cdot,\cdot,\cdot]_J)$ to $(\frkq,[\cdot,\cdot,\cdot]_{\g_{\mathbb C}})$. Thus, $(\g,[\cdot,\cdot,\cdot]_J)$ is an abelian $3$-Lie algebra. By the definition of $[\cdot,\cdot,\cdot]_J$, we obtain that $J$ is an abelian complex structure on $\g$. The proof is finished. \qed \begin{lem} Let $J$ be an almost complex structure on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. If $J$ satisfies the following equation \begin{eqnarray}\label{complex-integrability} [x,y,z]_\g&=&-J[Jx,y,z]_\g-J[x,Jy,z]_\g-J[x,y,Jz]_\g, \end{eqnarray} then $J$ is a complex structure on $\g$. \end{lem} \pf By \eqref{complex-integrability} and $J^2=-\Id$, we have \begin{eqnarray*} &&-[Jx,Jy,Jz]_\g+[Jx,y,z]_\g+[x,Jy,z]_\g+[x,y,Jz]_\g\\ &&+J[Jx,Jy,z]_\g+J[x,Jy,Jz]_\g+J[Jx,y,Jz]_\g\\ &=&J[J^2x,Jy,Jz]_\g+J[Jx,J^2y,Jz]_\g+J[Jx,Jy,J^2z]_\g+J[x,y,z]_\g\\ &&+J[Jx,Jy,z]_\g+J[x,Jy,Jz]_\g+J[Jx,y,Jz]_\g\\ &=&J[x,y,z]_\g. \end{eqnarray*} Thus, $J$ is a complex structure on $\g$. \qed \begin{defi}{\bf (Integrability condition III)} An almost complex structure $J$ on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ is called a {\bf strong abelian complex structure} if \eqref{complex-integrability} holds. \end{defi} \begin{cor} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a real $3$-Lie algebra. Then $\g$ has a strong abelian complex structure if and only if $\g_{\mathbb C}$ admits a decomposition: $$ \g_{\mathbb C}=\frkq\oplus\frkp, $$ where $\frkq$ and $\frkp=\sigma(\frkq)$ are abelian complex subalgebras of $\g_{\mathbb C}$ such that $[\frkq,\frkq,\frkp]_{\g_{\mathbb C}}\subset\frkq$ and $[\frkp,\frkp,\frkq]_{\g_{\mathbb C}}\subset\frkp.$ \end{cor} Parallel to the case of strong abelian product structures on a 3-Lie algebra, strong abelian complex structures on a $3$-Lie algebra are also $\huaO$-operators associated to the adjoint representation. \begin{pro} Let $J$ be an almost complex structure on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. Then $J$ is a strong abelian complex structure on a $3$-Lie algebra $ (\g,[\cdot,\cdot,\cdot]_\g)$ if and only if $-J$ is an $\huaO$-operator on $(\g,[\cdot,\cdot,\cdot]_\g)$ associated to the adjoint representation $(\g,\ad)$. Furthermore, there exists a compatible $3$-pre-Lie algebra $(\g,\{\cdot,\cdot,\cdot\})$ on the $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$, where the $3$-pre-Lie algebra structure on $\g$ is given by \begin{eqnarray} \{x,y,z\}=-J[x,y,Jz]_\g,\,\,\,\,\forall x,y,z\in\g. \end{eqnarray} \end{pro} \pf By \eqref{complex-integrability}, for all $x,y,z\in\g$ we have \begin{eqnarray*} [-Jx,-Jy,-Jz]_\g&=&J[J^2x,Jy,Jz]_\g+J[Jx,J^2y,Jz]_\g+J[Jx,Jy,J^2z]_\g\\ &=&-J(\ad_{-Jx,-Jy}z+\ad_{-Jy,-Jz}x+\ad_{-Jz,-Jx}y). \end{eqnarray*} Thus, $-J$ is an $\huaO$-operator associated to the adjoint representation $(\g,\ad)$.
Conversely, assume that for all $x,y,z\in\g$ we have \begin{eqnarray*} [-Jx,-Jy,-Jz]_\g&=&-J(\ad_{-Jx,-Jy}z+\ad_{-Jy,-Jz}x+\ad_{-Jz,-Jx}y)\\ &=&-J([-Jx,-Jy,z]_\g+[x,-Jy,-Jz]_\g+[-Jx,y,-Jz]_\g). \end{eqnarray*} Then we obtain $[x,y,z]_\g=-J[x,y,Jz]_\g-J[Jx,y,z]_\g-J[x,Jy,z]_\g$ by $({-J})^{-1}=J$. Furthermore, by $(-J)^{-1}=J$ and Proposition \ref{3-Lie-compatible-3-pre-Lie}, there exists a compatible $3$-pre-Lie algebra on $\g$ given by $ \{x,y,z\}=-J\ad_{x,y}({-J}^{-1}(z))=-J[x,y,Jz]_\g. $ The proof is finished. \qed \begin{lem} Let $J$ be an almost complex structure on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. If $J$ satisfies the following equation \begin{eqnarray}\label{complex-integrability-1} J[x,y,z]_\g=-[Jx,Jy,Jz]_\g, \end{eqnarray} then $J$ is a complex structure on $\g$. \end{lem} \pf By \eqref{complex-integrability-1} and $J^2=-\Id$, we have \begin{eqnarray*} &&-[Jx,Jy,Jz]_\g+[Jx,y,z]_\g+[x,Jy,z]_\g+[x,y,Jz]_\g\\ &&+J[Jx,Jy,z]_\g+J[x,Jy,Jz]_\g+J[Jx,y,Jz]_\g\\ &=&J[x,y,z]_\g+[Jx,y,z]_\g+[x,Jy,z]_\g+[x,y,Jz]_\g\\ &&-[J^2x,J^2y,Jz]_\g-[Jx,J^2y,J^2z]_\g-[J^2x,Jy,J^2z]_\g\\ &=&J[x,y,z]_\g. \end{eqnarray*} Thus, $J$ is a complex structure on $\g$. \qed \begin{defi}{\bf (Integrability condition IV)} An almost complex structure $J$ on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ is called a {\bf perfect complex structure} if \eqref{complex-integrability-1} holds. \end{defi} \begin{cor} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a real $3$-Lie algebra. Then $\g$ has a perfect complex structure if and only if $\g_{\mathbb C}$ admits a decomposition: $$ \g_{\mathbb C}=\frkq\oplus\frkp, $$ where $\frkq$ and $\frkp=\sigma(\frkq)$ are complex subalgebras of $\g_{\mathbb C}$ such that $[\frkq,\frkq,\frkp]_{\g_{\mathbb C}}\subset\frkp$ and $[\frkp,\frkp,\frkq]_{\g_{\mathbb C}}\subset\frkq.$ \end{cor} \begin{cor} Let $J$ be a strict complex structure on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. Then $J$ is a perfect complex structure on $\g$. \end{cor} \begin{ex}\label{ex:A4complex}{\rm Consider the $4$-dimensional Euclidean $3$-Lie algebra $A_4$ given in Example \ref{ex:A4symplectic}. Then \begin{eqnarray*} J_1=\left(\begin{array}{cccc}0&0&-1&0\\ 0&0&0&-1\\ 1&0&0&0\\ 0&1&0&0\end{array}\right),~ J_2=\left(\begin{array}{cccc}0&-1&0&0\\ 1&0&0&0\\ 0&0&0&-1\\ 0&0&1&0\end{array}\right),~ J_3=\left(\begin{array}{cccc}0&-1&0&0\\ 1&0&0&0\\ 0&0&0&1\\ 0&0&-1&0\end{array}\right),~\\ J_4=\left(\begin{array}{cccc} 0&1&0&0\\ -1&0&0&0\\ 0&0&0&-1\\ 0&0&1&0\end{array}\right),~ J_5=\left(\begin{array}{cccc} 0&1&0&0\\ -1&0&0&0\\ 0&0&0&1\\ 0&0&-1&0\end{array}\right),~ J_6=\left(\begin{array}{cccc} 0&0&1&0\\ 0&0&0&1\\ -1&0&0&0\\ 0&-1&0&0\end{array}\right)\end{eqnarray*}are abelian complex structures. Moreover, $J_1,J_6$ are strong abelian complex structures and $J_2,J_3,J_4,J_5$ are perfect complex structures. } \end{ex} \section{Complex product structures on $3$-Lie algebras} In this section, we add a compatibility condition between a complex structure and a product structure on a 3-Lie algebra to introduce the notion of a complex product structure. We construct complex product structures using 3-pre-Lie algebras. First we illustrate the relation between a complex structure and a product structure on a complex $3$-Lie algebra. \begin{pro}\label{equivalent} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a complex $3$-Lie algebra. Then $E$ is a product structure on $\g$ if and only if $J=iE$ is a complex structure on $\g$. \end{pro} \pf Let $E$ be a product structure on $\g$.
We have $J^2=i^2E^2=-\Id.$ Thus, $J$ is an almost complex structure on $\g$. Since $E$ satisfies the integrability condition \eqref{product-structure}, we have \begin{eqnarray*} J[x,y,z]_\g&=&iE[x,y,z]_\g\\ &=&-[iEx,iEy,iEz]_\g+[iEx,y,z]_\g+[x,iEy,z]_\g+[x,y,iEz]_\g\\ &&+iE[iEx,iEy,z]_\g+iE[x,iEy,iEz]_\g+iE[iEx,y,iEz]_\g. \end{eqnarray*} Thus, $J$ is a complex structure on the complex $3$-Lie algebra $\g$. The converse part can be proved similarly and we omit details. \qed \begin{cor}\label{complex-to-special-paracomplex} Let $J$ be a complex structure on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. Then, $-iJ_{\mathbb C}$ is a paracomplex structure on the complex $3$-Lie algebra $(\g_{\mathbb C},[\cdot,\cdot,\cdot]_{\g_{\mathbb C}})$, where $J_{\mathbb C}$ is defined by \eqref{eq:JC}. \end{cor} \pf By Theorem \ref{complex-structure-subalgebra}, $\g_{\mathbb C}=\g_{i}\oplus\g_{-i}$ and $\g_{-i}=\sigma(\g_{i})$, where $\g_{i}$ and $\g_{-i}$ are subalgebras of $\g_{\mathbb C}$. It is obvious that $\dim(\g_i)=\dim(\g_{-i})$. By Theorem \ref{product-structure-subalgebra}, there is a paracomplex structure on $\g_{\mathbb C}$. On the other hand, it is obvious that $J_{\mathbb C}$ is a complex structure on $\g_{\mathbb C}$. By Proposition \ref{equivalent}, $-iJ_{\mathbb C}$ is a product structure on the complex $3$-Lie algebra $(\g_{\mathbb C},[\cdot,\cdot,\cdot]_{\g_{\mathbb C}})$. It is straightforward to see that $\g_i$ and $\g_{-i}$ are eigenspaces of $-iJ_{\mathbb C}$ corresponding to the eigenvalues $+1$ and $-1$. Thus, $-iJ_{\mathbb C}$ is a paracomplex structure. \qed \begin{defi} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a real $3$-Lie algebra. A {\bf complex product} structure on the $3$-Lie algebra $\g$ is a pair $\{J,E\}$ of a complex structure $J$ and a product structure $E$ satisfying \begin{equation}\label{eq:compro} J\circ E=-E\circ J. \end{equation} If $E$ is perfect, we call $\{J,E\}$ a {\bf perfect complex product} structure on $\g.$ \end{defi} \begin{rmk} Let $\{J,E\}$ be a complex product structure on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. For all $x\in\g_+$, by \eqref{eq:compro}, we have $E(Jx)=-Jx$, which implies that $J(\g_+)\subset\g_-$. Analogously, we obtain $J(\g_-)\subset\g_+$. Thus, we get $J(\g_-)=\g_+$ and $J(\g_+)=\g_-$. Therefore, $\dim(\g_+)=\dim(\g_-)$ and $E$ is a paracomplex structure on $\g$. \end{rmk} \begin{thm} Let $(\g,[\cdot,\cdot,\cdot]_\g)$ be a real $3$-Lie algebra. Then the following statements are equivalent: \begin{itemize} \item[\rm(i)] $\g$ has a complex product structure; \item[\rm(ii)] $\g$ has a complex structure $J$ and can be decomposed as $\g=\g_+\oplus\g_-$, where $\g_+,\g_-$ are $3$-Lie subalgebras of $\g$ and $\g_-=J\g_+$. \end{itemize} \end{thm} \pf Let $\{J,E\}$ be a complex product structure and let $\g_{\pm}$ denote the eigenspaces corresponding to the eigenvalues $\pm1$ of $E$. By Theorem \ref{product-structure-subalgebra}, both $\g_+$ and $\g_-$ are $3$-Lie subalgebras of $\g$ and $J\circ E=-E\circ J$ implies $\g_-=J\g_+.$ Conversely, we can define a linear map $E:\g\lon\g$ by \begin{eqnarray*} E(x+\alpha)=x-\alpha,\,\,\,\,\forall x\in\g_+,\alpha\in\g_-. \end{eqnarray*} By Theorem \ref{product-structure-subalgebra}, $E$ is a product structure on $\g$. By $\g_-=J\g_+$ and $J^2=-\Id$, we have \begin{eqnarray*} E(J(x+\alpha))=E(J(x)+J(\alpha))=-J(x)+J(\alpha)=-J(E(x+\alpha)). \end{eqnarray*} Thus, $\{J,E\}$ is a complex product structure on $\g$. The proof is finished.
\qed\vspace{2mm} \begin{ex}\label{ex:A4cp}{\rm Consider the product structures and the complex structures on the $4$-dimensional Euclidean $3$-Lie algebra $A_4$ given in Example \ref{ex:A4product} and Example \ref{ex:A4complex} respectively. Then $\{J_i,E_i\}$ for $i=1,2,3,4,5,6$ are complex product structures on $A_4$. } \end{ex} We give a characterization of a perfect complex product structure on a 3-Lie algebra. \begin{pro}\label{3-pre-Lie-complex-product} Let $E$ be a perfect paracomplex structure on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. Then there is a perfect complex product structure $\{J,E\}$ on $\g$ if and only if there exists a linear isomorphism $\phi:\g_+\lon\g_-$ satisfying the following equation \begin{eqnarray}\label{complex-perfect-product} \nonumber\phi[x,y,z]_\g&=&-[\phi(x),\phi(y),\phi(z)]_\g+[\phi(x),y,z]_\g+[x,\phi(y),z]_\g+[x,y,\phi(z)]_\g\\ &&+\phi[\phi(x),\phi(y),z]_\g+\phi[x,\phi(y),\phi(z)]_\g+\phi[\phi(x),y,\phi(z)]_\g,\quad \forall x,y,z\in\g_+. \end{eqnarray} \end{pro} \pf Let $\{J,E\}$ be a perfect complex product structure on $\g$. Define a linear isomorphism $\phi:\g_+\lon\g_-$ by $\phi\triangleq J|_{\g_+}:\g_+\lon\g_-$. By the compatibility condition \eqref{complex-structure} that the complex structure $J$ satisfies and the coherence condition \eqref{eq:coherenceconPP} that a perfect product structure $E$ satisfies, we deduce that \eqref{complex-perfect-product} holds. Conversely, we define an endomorphism $J$ of $\g$ by \begin{eqnarray}\label{complex-perfect-product-structure} J(x+\alpha)=-\phi^{-1}(\alpha)+\phi(x),\,\,\,\,\forall x\in\g_+,\alpha\in\g_-. \end{eqnarray} Then $J$ is an almost complex structure on $\g$ satisfying $J\circ E=-E\circ J$; indeed, $J^2(x+\alpha)=J(-\phi^{-1}(\alpha)+\phi(x))=-x-\alpha$ and $J(E(x+\alpha))=\phi^{-1}(\alpha)+\phi(x)=-E(J(x+\alpha))$. For all $\alpha,\beta,\gamma\in\g_-$, let $x,y,z\in\g_+$ such that $\phi(x)=\alpha,\phi(y)=\beta$ and $\phi(z)=\gamma$. By \eqref{complex-perfect-product} and \eqref{eq:coherenceconPP}, we have \begin{eqnarray*} &&-[J\alpha,J\beta,J\gamma]_\g+[J\alpha,\beta,\gamma]_\g+[\alpha,J\beta,\gamma]_\g+[\alpha,\beta,J\gamma]_\g\\ &&+J[J\alpha,J\beta,\gamma]_\g+J[\alpha,J\beta,J\gamma]_\g+J[J\alpha,\beta,J\gamma]_\g\\ &=&[x,y,z]_\g-[x,\phi(y),\phi(z)]_\g-[\phi(x),y,\phi(z)]_\g-[\phi(x),\phi(y),z]_\g\\ &&-\phi^{-1}[x,y,\phi(z)]_\g-\phi^{-1}[\phi(x),y,z]_\g-\phi^{-1}[x,\phi(y),z]_\g\\ &=&-\phi^{-1}[\phi(x),\phi(y),\phi(z)]_\g\\ &=&J[\alpha,\beta,\gamma]_\g, \end{eqnarray*} which implies that \eqref{complex-structure} holds for all $\alpha,\beta,\gamma\in\g_-$. Similarly, we can deduce that \eqref{complex-structure} holds for all the other cases. Thus, $J$ is a complex structure and $\{J,E\}$ is a perfect complex product structure on the 3-Lie algebra $\g$. \qed\vspace{3mm} At the end of this section, we construct perfect complex product structures using 3-pre-Lie algebras. A nondegenerate symmetric bilinear form $\huaB\in A^*\otimes A^*$ on a real $3$-pre-Lie algebra $(A,\{\cdot,\cdot,\cdot\})$ is called {\bf invariant} if \begin{eqnarray}\label{3-pre-Lie-symmetric-bilinear} \huaB(\{x,y,z\},w)=-\huaB(z,\{x,y,w\}),\,\,\,\,\forall x,y,z,w\in A.
\end{eqnarray} Then $\huaB$ induces a linear isomorphism $\huaB^{\sharp}:A\lon A^*$ by \begin{eqnarray} \langle\huaB^{\sharp}(x),y\rangle=\huaB(x,y),\,\,\,\,\forall x,y\in A. \end{eqnarray} \begin{pro}\label{pro:compro} Let $(A,\{\cdot,\cdot,\cdot\})$ be a real $3$-pre-Lie algebra with a nondegenerate symmetric invariant bilinear form $\huaB$. Then there is a perfect complex product structure $\{J,E\}$ on the semidirect product $3$-Lie algebra $ A^c\ltimes_{L^*}A^*$, where $E$ is given by \eqref{eq:defiE} and the complex structure $J$ is given as follows: \begin{eqnarray}\label{3-pre-Lie-complex} J(x+\alpha)=-{\huaB^{\sharp}}^{-1}(\alpha)+\huaB^{\sharp}(x),\,\,\,\,\forall x\in A,\alpha\in A^*. \end{eqnarray} \end{pro} \pf By Proposition \ref{paracomplex-3-pre-Lie}, $E$ is a perfect paracomplex structure on $A^c\ltimes_{L^*}A^*$. For all $x,y,z\in A^c$, we have \begin{eqnarray*} &&-[\huaB^{\sharp}(x),\huaB^{\sharp}(y),\huaB^{\sharp}(z)]_{L^*}+[\huaB^{\sharp}(x),y,z]_{L^*}+[x,\huaB^{\sharp}(y),z]_{L^*}+[x,y,\huaB^{\sharp}(z)]_{L^*}\\ &&+\huaB^{\sharp}[\huaB^{\sharp}(x),\huaB^{\sharp}(y),z]_{L^*}+\huaB^{\sharp}[x,\huaB^{\sharp}(y),\huaB^{\sharp}(z)]_{L^*}+\huaB^{\sharp}[\huaB^{\sharp}(x),y,\huaB^{\sharp}(z)]_{L^*}\\ &=&[\huaB^{\sharp}(x),y,z]_{L^*}+[x,\huaB^{\sharp}(y),z]_{L^*}+[x,y,\huaB^{\sharp}(z)]_{L^*}\\ &=&L^*(x,y)\huaB^{\sharp}(z)+L^*(y,z)\huaB^{\sharp}(x)+L^*(z,x)\huaB^{\sharp}(y). \end{eqnarray*} By \eqref{3-pre-Lie-symmetric-bilinear}, we have \begin{eqnarray*} \langle \huaB^{\sharp}[x,y,z]_C,w\rangle&=&\langle \huaB^{\sharp}\{x,y,z\},w\rangle+\langle \huaB^{\sharp}\{y,z,x\},w\rangle+\langle \huaB^{\sharp}\{z,x,y\},w\rangle\\ &=&\huaB(\{x,y,z\},w)+\huaB(\{y,z,x\},w)+\huaB(\{z,x,y\},w)\\ &=&-\huaB(z,\{x,y,w\})-\huaB(x,\{y,z,w\})-\huaB(y,\{z,x,w\})\\ &=&-\langle\huaB^{\sharp}(z),\{x,y,w\}\rangle-\langle\huaB^{\sharp}(x),\{y,z,w\}\rangle -\langle\huaB^{\sharp}(y),\{z,x,w\}\rangle\\ &=&\langle L^*(x,y)\huaB^{\sharp}(z),w\rangle+\langle L^*(y,z)\huaB^{\sharp}(x),w\rangle+\langle L^*(z,x)\huaB^{\sharp}(y),w\rangle, \end{eqnarray*} which implies that $$ \huaB^{\sharp}[x,y,z]_C=L^*(x,y)\huaB^{\sharp}(z)+L^*(y,z)\huaB^{\sharp}(x)+L^*(z,x)\huaB^{\sharp}(y). $$ Thus, we have \begin{eqnarray*} \huaB^{\sharp}[x,y,z]_C&=&-[\huaB^{\sharp}(x),\huaB^{\sharp}(y),\huaB^{\sharp}(z)]_{L^*}+[\huaB^{\sharp}(x),y,z]_{L^*}+[x,\huaB^{\sharp}(y),z]_{L^*} +[x,y,\huaB^{\sharp}(z)]_{L^*}\\ &&+\huaB^{\sharp}[\huaB^{\sharp}(x),\huaB^{\sharp}(y),z]_{L^*}+\huaB^{\sharp}[x,\huaB^{\sharp}(y),\huaB^{\sharp}(z)]_{L^*}+\huaB^{\sharp}[\huaB^{\sharp}(x),y,\huaB^{\sharp}(z)]_{L^*}. \end{eqnarray*} By Proposition \ref{3-pre-Lie-complex-product}, we obtain that $\{J,E\}$ is a perfect complex product structure on $ A^c\ltimes_{L^*}A^*.$ \qed\vspace{3mm} Let $(A,\{\cdot,\cdot,\cdot\})$ be a real $3$-pre-Lie algebra. On the real $3$-Lie algebra $\aff(A)=A^c\ltimes_L A$, we consider two endomorphisms $J$ and $E$ given by \begin{eqnarray} J(x,y)=(-y,x),\,\,\,\,E(x,y)=(x,-y),\,\,\,\,\forall x,y\in A. \end{eqnarray} \begin{pro} With the above notations, $\{J,E\}$ is a perfect complex product structure on the $3$-Lie algebra $\aff(A)$. \end{pro} \pf It is obvious that $E$ is a perfect product structure on $\aff(A)$. Moreover, we have $J^2=-\Id$ and $J\circ E=-E\circ J$. Obviously, $\aff(A)_+=\{(x,0)|x\in A\}$ and $\aff(A)_-=\{(0,y)|y\in A\}$. Define $\phi:\aff(A)_+\lon\aff(A)_-$ by $\phi\triangleq J|_{\aff(A)_+}:\aff(A)_+\lon\aff(A)_-$. More precisely, $\phi(x,0)=(0,x)$. Then for all $(x,0),(y,0),(z,0)\in\aff(A)_+$, we have \begin{eqnarray*} &&-[\phi(x,0),\phi(y,0),\phi(z,0)]_L+[\phi(x,0),(y,0),(z,0)]_L+[(x,0),\phi(y,0),(z,0)]_L\\ &&+[(x,0),(y,0),\phi(z,0)]_L +\phi[\phi(x,0),\phi(y,0),(z,0)]_L+\phi[(x,0),\phi(y,0),\phi(z,0)]_L\\ &&+\phi[\phi(x,0),(y,0),\phi(z,0)]_L\\ &=&[\phi(x,0),(y,0),(z,0)]_L+[(x,0),\phi(y,0),(z,0)]_L+[(x,0),(y,0),\phi(z,0)]_L\\ &=&(0,\{y,z,x\})+(0,\{z,x,y\})+(0,\{x,y,z\})\\ &=&\phi[(x,0),(y,0),(z,0)]_L. \end{eqnarray*} By Proposition \ref{3-pre-Lie-complex-product}, $\{J,E\}$ is a perfect complex product structure on the $3$-Lie algebra $\aff(A)$. \qed \section{Para-K\"{a}hler structures on $3$-Lie algebras} In this section, we add a compatibility condition between a symplectic structure and a paracomplex structure on a 3-Lie algebra to introduce the notion of a para-K\"{a}hler structure on a $3$-Lie algebra. A para-K\"{a}hler structure gives rise to a pseudo-Riemannian structure. We introduce the notion of a Levi-Civita product associated to a pseudo-Riemannian 3-Lie algebra and give its precise formulas using the decomposition of the original 3-Lie algebra. \begin{defi} Let $\omega$ be a symplectic structure and $E$ a paracomplex structure on a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. The triple $(\g,\omega,E)$ is called a {\bf para-Kähler} $3$-Lie algebra if the following equality holds: \begin{equation}\label{eq:pk} \omega(Ex,Ey)=-\omega(x,y),\quad \forall x,y\in\g. \end{equation} If $E$ is perfect, we call $(\g,\omega,E)$ a {\bf perfect para-Kähler} $3$-Lie algebra. \end{defi} \begin{pro} Let $(A,\{\cdot,\cdot,\cdot\})$ be a $3$-pre-Lie algebra. Then $(A^c\ltimes_{L^*}A^*,\omega,E)$ is a perfect para-Kähler $3$-Lie algebra, where $\omega$ is given by \eqref{phase-space} and $E$ is defined by \eqref{eq:defiE}. \end{pro} \pf By Theorem \ref{3-pre-Lie-phase-space}, $(A^c\ltimes_{L^*}A^*,\omega)$ is a symplectic 3-Lie algebra. By Proposition \ref{paracomplex-3-pre-Lie}, $E$ is a perfect paracomplex structure on the phase space $T^*A^c$. For all $x_1,x_2\in A,\alpha_1,\alpha_2\in A^*$, we have \begin{eqnarray*} \omega(E(x_1+\alpha_1),E(x_2+\alpha_2))&=&\omega(x_1-\alpha_1,x_2-\alpha_2)=\langle -\alpha_1, x_2\rangle-\langle -\alpha_2, x_1\rangle\\ &=&-\omega(x_1+\alpha_1,x_2+\alpha_2). \end{eqnarray*} Therefore, $(T^*A^c=A^c\ltimes_{L^*}A^*,\omega,E)$ is a perfect para-Kähler 3-Lie algebra. \qed\vspace{3mm} Similarly to the case of para-Kähler Lie algebras, we have the following equivalent description of a para-Kähler $3$-Lie algebra. \begin{thm} Let $(\g,\omega)$ be a symplectic $3$-Lie algebra.
Then there exists a paracomplex structure $E$ on the $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ such that $(\g,\omega,E)$ is a para-Kähler $3$-Lie algebra if and only if there exist two isotropic $3$-Lie subalgebras $\g_+$ and $\g_-$ such that $\g=\g_+\oplus\g_-$ as the direct sum of vector spaces. \end{thm} \pf Let $(\g,\omega,E)$ be a para-Kähler $3$-Lie algebra. Since $E$ is a paracomplex structure on $\g$, we have $ \g=\g_+\oplus\g_-, $ where $\g_+$ and $\g_-$ are $3$-Lie subalgebras of $\g$. For all $x_1,x_2\in\g_+$, by \eqref{eq:pk}, we have \begin{eqnarray*} \omega(Ex_1,Ex_2)=\omega(x_1,x_2)=-\omega(x_1,x_2), \end{eqnarray*} which implies that $\omega(\g_+,\g_+)=0$. Thus, $\g_+$ is isotropic. Similarly, $\g_-$ is also isotropic. Conversely, since $\g_+$ and $\g_-$ are subalgebras and $\g=\g_+\oplus \g_-$ as vector spaces, there is a product structure $E$ on $\g$ defined by \eqref{eq:productE}. Moreover, since $\g=\g_+\oplus \g_-$ as vector spaces and both $\g_+$ and $\g_-$ are isotropic, we obtain that $\dim\g_+=\dim\g_-$. Thus, $E$ is a paracomplex structure on $\g$. For all $x_1,x_2\in\g_+,\alpha_1,\alpha_2\in\g_-$, since $\g_+$ and $\g_-$ are isotropic, we have \begin{eqnarray*} \omega(E(x_1+\alpha_1),E(x_2+\alpha_2))&=&\omega(x_1-\alpha_1,x_2-\alpha_2)=-\omega(x_1,\alpha_2)-\omega(\alpha_1,x_2)\\ &=&-\omega(x_1+\alpha_1,x_2+\alpha_2). \end{eqnarray*} Thus, $(\g,\omega,E)$ is a para-Kähler $3$-Lie algebra. The proof is finished. \qed \begin{ex}\label{ex:A4pK}{\rm Consider the symplectic structures and the perfect paracomplex structures on the $4$-dimensional Euclidean $3$-Lie algebra $A_4$ given in Example \ref{ex:A4symplectic} and Example \ref{ex:A4product} respectively. Then $\{\omega_i,E_i\}$ for $i=1,2,3,4,5,6$ are perfect para-Kähler structures on $A_4$. } \end{ex} \begin{ex}\label{ex:standardpK}{\rm Let $(\h,[\cdot,\cdot,\cdot]_\h)$ be a 3-Lie algebra and $(\h\oplus \h^*,\omega)$ its (perfect) phase space, where $\omega$ is given by \eqref{phase-space}. Then $E:\h\oplus \h^*\longrightarrow\h\oplus \h^*$ defined by \begin{equation} \label{eq:Ephasespace} E(x+\alpha)=x-\alpha,\quad \forall x\in\h,\alpha\in\h^*, \end{equation} is a (perfect) paracomplex structure and $(\h\oplus \h^*,\omega,E)$ is a (perfect) para-Kähler $3$-Lie algebra. } \end{ex} Let $(\g,\omega,E)$ be a para-Kähler $3$-Lie algebra. Then it is obvious that $\g_-$ is isomorphic to $\g_+^*$ via the symplectic structure $\omega$. Moreover, it is straightforward to deduce the following result. \begin{pro}\label{pro:standardpK} Any para-Kähler $3$-Lie algebra is isomorphic to the para-Kähler $3$-Lie algebra associated to a phase space of a $3$-Lie algebra. \end{pro} In the sequel, we study the Levi-Civita product associated to a perfect para-Kähler 3-Lie algebra. \begin{defi} A {\bf pseudo-Riemannian $3$-Lie algebra} is a $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$ endowed with a nondegenerate symmetric bilinear form $S$. The associated Levi-Civita product is the product on $\g$, $\nabla:\otimes^3\g\longrightarrow\g$ with $(x,y,z)\longmapsto \nabla_{x,y}z$, given by the following formula: \begin{eqnarray}\label{Levi-Civita product} 3S(\nabla_{x,y}z,w)=S([x,y,z]_\g,w)-2S([x,y,w]_\g,z)+S([y,z,w]_\g,x)+S([z,x,w]_\g,y). \end{eqnarray} \end{defi}
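Two remarks on this definition are perhaps worth recording. First, since $S$ is nondegenerate, formula \eqref{Levi-Civita product} determines $\nabla_{x,y}z$ uniquely, so it may be regarded as a ternary analogue of the Koszul formula for the Levi-Civita connection. Second, as a degenerate sanity check, if the $3$-Lie algebra is abelian, then the right-hand side of \eqref{Levi-Civita product} vanishes identically and hence $\nabla=0$ for every pseudo-Riemannian metric $S$.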
\begin{pro} Let $(\g,S)$ be a pseudo-Riemannian $3$-Lie algebra. Then the Levi-Civita product $\nabla$ satisfies the following equations: \begin{eqnarray} \nabla_{x,y}z&=&-\nabla_{y,x}z,\\ \nabla_{x,y}z+\nabla_{y,z}x+\nabla_{z,x}y&=&[x,y,z]_\g. \end{eqnarray} \end{pro} \pf For all $w\in\g$, it is obvious that \begin{eqnarray*} 3S(\nabla_{y,x}z,w)&=&S([y,x,z]_\g,w)-2S([y,x,w]_\g,z)+S([x,z,w]_\g,y)+S([z,y,w]_\g,x)\\ &=&-3S(\nabla_{x,y}z,w). \end{eqnarray*} By the nondegeneracy of $S$, we obtain $\nabla_{x,y}z=-\nabla_{y,x}z.$ For all $x,y,z,w\in\g$, we have \begin{eqnarray*} 3S(\nabla_{x,y}z,w)&=&S([x,y,z]_\g,w)-2S([x,y,w]_\g,z)+S([y,z,w]_\g,x)+S([z,x,w]_\g,y),\\ 3S(\nabla_{y,z}x,w)&=&S([y,z,x]_\g,w)-2S([y,z,w]_\g,x)+S([z,x,w]_\g,y)+S([x,y,w]_\g,z),\\ 3S(\nabla_{z,x}y,w)&=&S([z,x,y]_\g,w)-2S([z,x,w]_\g,y)+S([x,y,w]_\g,z)+S([y,z,w]_\g,x). \end{eqnarray*} Adding up the three equations, we obtain $$3S(\nabla_{x,y}z+\nabla_{y,z}x+\nabla_{z,x}y,w)=3S([x,y,z]_\g,w),$$ which implies that $\nabla_{x,y}z+\nabla_{y,z}x+\nabla_{z,x}y=[x,y,z]_\g.$ The proof is finished. \qed\vspace{3mm} Let $(\g,\omega,E)$ be a perfect para-Kähler $3$-Lie algebra. Define a bilinear form $S$ on $\g$ by \begin{eqnarray} S(x,y)\triangleq \omega(x,Ey),\,\,\,\,\forall x,y\in\g. \end{eqnarray} \begin{pro} With the above notations, $(\g,S)$ is a pseudo-Riemannian $3$-Lie algebra. Moreover, the associated Levi-Civita product $\nabla$ and the perfect paracomplex structure $E$ satisfy the following compatibility condition: \begin{equation} E\nabla _{x,y}z=\nabla_{Ex,Ey}Ez. \end{equation} \end{pro} \pf Since $\omega$ is skew-symmetric and $\omega(Ex,Ey)=-\omega(x,y)$, we have \begin{eqnarray*} S(y,x)=\omega(y,Ex)=-\omega(Ey,E^2x)=-\omega(Ey,x)=\omega(x,Ey)=S(x,y), \end{eqnarray*} which implies that $S$ is symmetric. Moreover, since $\omega$ is nondegenerate and $E^2=\Id$, it is obvious that $S$ is nondegenerate. Thus, $S$ is a pseudo-Riemannian metric on the 3-Lie algebra $\g$. Moreover, we have \begin{eqnarray*} &&3S(\nabla_{Ex,Ey}Ez,w)\\&=&S([Ex,Ey,Ez]_\g,w)-2S([Ex,Ey,w]_\g,Ez)+S([Ey,Ez,w]_\g,Ex)+S([Ez,Ex,w]_\g,Ey)\\ &=& S(E[x,y,z]_\g,w)-2S(E[x,y,Ew]_\g,Ez)+S(E[y,z,Ew]_\g,Ex)+S(E[z,x,Ew]_\g,Ey)\\ &=&-( S([x,y,z]_\g,Ew)-2S([x,y,Ew]_\g,z)+S([y,z,Ew]_\g,x)+S([z,x,Ew]_\g,y))\\ &=&-3S(\nabla_{x,y}z,Ew)\\ &=&3S(E\nabla_{x,y}z,w). \end{eqnarray*} Thus, we have $ E\nabla _{x,y}z=\nabla_{Ex,Ey}Ez.$\qed\vspace{3mm}
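It is instructive to compute $S$ in the basic example: for the perfect para-Kähler $3$-Lie algebra $(\h\oplus\h^*,\omega,E)$ of Example \ref{ex:standardpK}, with $\omega$ given by \eqref{phase-space} and $E(y+\beta)=y-\beta$, we get $$S(x+\alpha,y+\beta)=\omega(x+\alpha,y-\beta)=\langle\alpha,y\rangle+\langle\beta,x\rangle,\quad\forall x,y\in\h,\,\alpha,\beta\in\h^*,$$ i.e. $S$ is the natural pairing between $\h$ and $\h^*$. In particular, $\h$ and $\h^*$ are isotropic for $S$ and $S$ has signature $(n,n)$, where $n=\dim\h$.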
The following two propositions clarify the relationship between the Levi-Civita product and the 3-pre-Lie multiplication on a para-Kähler $3$-Lie algebra. \begin{pro}\label{Levi-Civita-3-pre-Lie} Let $(\g,\omega,E)$ be a para-Kähler $3$-Lie algebra and $\nabla$ the associated Levi-Civita product. Then for all $x_1,x_2,x_3\in\g_+$ and $\alpha_1,\alpha_2,\alpha_3\in\g_-$, we have $$ \nabla_{x_1,x_2}x_3=\{x_1,x_2,x_3\},\quad \nabla_{\alpha_1,\alpha_2}\alpha_3=\{\alpha_1,\alpha_2,\alpha_3\}. $$ \end{pro} \pf Since $(\g,\omega,E)$ is a para-Kähler $3$-Lie algebra, the $3$-Lie subalgebras $\g_+$ and $\g_-$ are isotropic and $\g=\g_+\oplus\g_-$ as vector spaces. For all $x_1,x_2,x_3,x_4\in\g_+$, we have \begin{eqnarray*} &&3\omega(\nabla_{x_1,x_2}x_3,x_4)\\&=&3S(\nabla_{x_1,x_2}x_3,Ex_4)=3S(\nabla_{x_1,x_2}x_3,x_4)\\ &=&S([x_1,x_2,x_3]_\g,x_4)-2S([x_1,x_2,x_4]_\g,x_3)+S([x_2,x_3,x_4]_\g,x_1)+S([x_3,x_1,x_4]_\g,x_2)\\ &=&\omega([x_1,x_2,x_3]_\g,x_4)-2\omega([x_1,x_2,x_4]_\g,x_3)+\omega([x_2,x_3,x_4]_\g,x_1)+\omega([x_3,x_1,x_4]_\g,x_2)\\ &=&0. \end{eqnarray*} By $(\g_+)^{\perp}=\g_+$, we obtain $\nabla_{x_1,x_2}x_3\in\g_+$. Similarly, for all $\alpha_1,\alpha_2,\alpha_3\in\g_-$, $\nabla_{\alpha_1,\alpha_2}\alpha_3\in\g_-$. Furthermore, for all $x_1,x_2,x_3\in\g_+$ and $\alpha\in\g_-$, we have \begin{eqnarray*} &&3\omega(\nabla_{x_1,x_2}x_3,\alpha)\\&=&3S(\nabla_{x_1,x_2}x_3,E\alpha)=-3S(\nabla_{x_1,x_2}x_3,\alpha)\\ &=&-S([x_1,x_2,x_3]_\g,\alpha)+2S([x_1,x_2,\alpha]_\g,x_3)-S([x_2,x_3,\alpha]_\g,x_1)-S([x_3,x_1,\alpha]_\g,x_2)\\ &=&\omega([x_1,x_2,x_3]_\g,\alpha)+2\omega([x_1,x_2,\alpha]_\g,x_3)-\omega([x_2,x_3,\alpha]_\g,x_1)-\omega([x_3,x_1,\alpha]_\g,x_2)\\ &=&\omega([\alpha,x_1,x_2]_\g,x_3)+2\omega([x_1,x_2,\alpha]_\g,x_3)\\ &=&-3\omega(x_3,[x_1,x_2,\alpha]_\g)\\ &=&3\omega(\{x_1,x_2,x_3\},\alpha). \end{eqnarray*} Thus, $\nabla_{x_1,x_2}x_3=\{x_1,x_2,x_3\}.$ Similarly, we have $\nabla_{\alpha_1,\alpha_2}\alpha_3=\{\alpha_1,\alpha_2,\alpha_3\}$. The proof is finished.\qed \begin{pro} Let $(\g,\omega,E)$ be a perfect para-Kähler $3$-Lie algebra and $\nabla$ the associated Levi-Civita product. Then for all $x_1,x_2\in\g_+$ and $\alpha_1,\alpha_2\in\g_-$, we have \begin{eqnarray} \label{eq:conn1}\nabla_{x_1,x_2}\alpha_1&=&\{x_1,x_2,\alpha_1\}+\frac{2}{3}(\{x_2,\alpha_1,x_1\}+\{\alpha_1,x_1,x_2\}),\\ \label{eq:conn2}\nabla_{\alpha_1,x_1}x_2&=&-\frac{1}{3}\{\alpha_1,x_1,x_2\}+\frac{2}{3}\{x_2,\alpha_1,x_1\},\\ \label{eq:conn3}\nabla_{\alpha_1,\alpha_2}x_1&=&\{\alpha_1,\alpha_2,x_1\}+\frac{2}{3}(\{\alpha_2,x_1,\alpha_1\}+\{x_1,\alpha_1,\alpha_2\}),\\ \label{eq:conn4}\nabla_{x_1,\alpha_1}\alpha_2&=&-\frac{1}{3}\{x_1,\alpha_1,\alpha_2\}+\frac{2}{3}\{\alpha_2,x_1,\alpha_1\}. \end{eqnarray} \end{pro} \pf Since $(\g,\omega,E)$ is a perfect para-Kähler $3$-Lie algebra, the $3$-Lie subalgebras $\g_+$ and $\g_-$ are isotropic and $\g=\g_+\oplus\g_-$ as vector spaces. Thus, we have $S(\g_+,\g_+)=S(\g_-,\g_-)=0.$ For all $x_1,x_2\in\g_+$ and $\alpha_1,\alpha_2\in\g_-$, we have \begin{eqnarray*} &&3S(\nabla_{x_1,x_2}\alpha_1,\alpha_2)\\ &=&S([x_1,x_2,\alpha_1]_\g,\alpha_2)-2S([x_1,x_2,\alpha_2]_\g,\alpha_1) +S([x_2,\alpha_1,\alpha_2]_\g,x_1)+S([\alpha_1,x_1,\alpha_2]_\g,x_2)=0. \end{eqnarray*} Since $S$ is nondegenerate, we have $\nabla_{x_1,x_2}\alpha_1\in\g_-.$ Moreover, for all $x_1,x_2,x_3\in\g_+$ and $\alpha_1\in\g_-$, we have \begin{eqnarray*} &&3\omega(\nabla_{x_1,x_2}\alpha_1,x_3)\\&=&3S(\nabla_{x_1,x_2}\alpha_1,Ex_3)=3S(\nabla_{x_1,x_2}\alpha_1,x_3)\\ &=&S([x_1,x_2,\alpha_1]_\g,x_3)-2S([x_1,x_2,x_3]_\g,\alpha_1) +S([x_2,\alpha_1,x_3]_\g,x_1)+S([\alpha_1,x_1,x_3]_\g,x_2)\\ &=&\omega([x_1,x_2,\alpha_1]_\g,Ex_3)-2\omega([x_1,x_2,x_3]_\g,E\alpha_1) +\omega([x_2,\alpha_1,x_3]_\g,Ex_1)+\omega([\alpha_1,x_1,x_3]_\g,Ex_2)\\ &=&\omega([x_1,x_2,\alpha_1]_\g,x_3)+2\omega([x_1,x_2,x_3]_\g,\alpha_1) +\omega([x_2,\alpha_1,x_3]_\g,x_1)+\omega([\alpha_1,x_1,x_3]_\g,x_2)\\ &=&\omega([x_1,x_2,\alpha_1]_\g,x_3)+2\omega(\{x_1,x_2,\alpha_1\},x_3) +\omega(\{x_2,\alpha_1,x_1\},x_3)+\omega(\{\alpha_1,x_1,x_2\},x_3). \end{eqnarray*} Thus, we obtain \begin{eqnarray*} \nabla_{x_1,x_2}\alpha_1&=&\{x_1,x_2,\alpha_1\}+\frac{2}{3}(\{x_2,\alpha_1,x_1\}+\{\alpha_1,x_1,x_2\}), \end{eqnarray*} which implies that \eqref{eq:conn1} holds. For all $x_1,x_2\in\g_+$ and $\alpha_1,\alpha_2\in\g_-$, we have \begin{eqnarray*} &&3S(\nabla_{\alpha_1,x_1}x_2,\alpha_2)\\ &=&S([\alpha_1,x_1,x_2]_\g,\alpha_2)-2S([\alpha_1,x_1,\alpha_2]_\g,x_2) +S([x_1,x_2,\alpha_2]_\g,\alpha_1)+S([x_2,\alpha_1,\alpha_2]_\g,x_1)=0.
\end{eqnarray*} Since $S$ is nondegenerate, we have $\nabla_{\alpha_1,x_1}x_2\in\g_-.$ Moreover, for all $x_1,x_2,x_3\in\g_+$ and $\alpha_1\in\g_-$, we have \begin{eqnarray*} &&3\omega(\nabla_{\alpha_1,x_1}x_2,x_3)\\&=&3S(\nabla_{\alpha_1,x_1}x_2,Ex_3)=3S(\nabla_{\alpha_1,x_1}x_2,x_3)\\ &=&S([\alpha_1,x_1,x_2]_\g,x_3)-2S([\alpha_1,x_1,x_3]_\g,x_2) +S([x_1,x_2,x_3]_\g,\alpha_1)+S([x_2,\alpha_1,x_3]_\g,x_1)\\ &=&\omega([\alpha_1,x_1,x_2]_\g,x_3)-2\omega([\alpha_1,x_1,x_3]_\g,x_2) -\omega([x_1,x_2,x_3]_\g,\alpha_1)+\omega([x_2,\alpha_1,x_3]_\g,x_1)\\ &=&\omega([\alpha_1,x_1,x_2]_\g,x_3)-2\omega(\{\alpha_1,x_1,x_2\},x_3) -\omega(\{x_1,x_2,\alpha_1\},x_3)+\omega(\{x_2,\alpha_1,x_1\},x_3). \end{eqnarray*} Thus, we obtain \begin{eqnarray*} \nabla_{\alpha_1,x_1}x_2&=&-\frac{1}{3}\{\alpha_1,x_1,x_2\}+\frac{2}{3}\{x_2,\alpha_1,x_1\}, \end{eqnarray*} which implies that \eqref{eq:conn2} holds. Equations \eqref{eq:conn3} and \eqref{eq:conn4} can be proved similarly; we omit the details. The proof is finished. \qed\vspace{3mm} Under the isomorphism given in Proposition \ref{pro:standardpK} and the correspondence given in Theorem \ref{thm:MT-ps}, using the formulas provided in Proposition \ref{pro:stuctureMP3preLie}, we get \begin{cor} For the perfect para-Kähler $3$-Lie algebra $(\h\oplus \h^*,\omega,E)$ given in Example \ref{ex:standardpK}, for all $x_1,x_2\in\h$ and $\alpha_1,\alpha_2\in\h^*$, we have \begin{eqnarray} \label{eq:conn11}\nabla_{x_1,x_2}\alpha_1&=&(L^*(x_1,x_2)-\frac{1}{3}R^*(x_2,x_1)+\frac{1}{3}R^*(x_1,x_2))\alpha_1,\\ \label{eq:conn22}\nabla_{\alpha_1,x_1}x_2&=&(\frac{1}{3}R^*(x_1,x_2)+\frac{2}{3}R^*(x_2,x_1))\alpha_1,\\ \label{eq:conn33}\nabla_{\alpha_1,\alpha_2}x_1&=&(\huaL^*(\alpha_1,\alpha_2)-\frac{1}{3}\huaR^*(\alpha_2,\alpha_1)+\frac{1}{3}\huaR^*(\alpha_1,\alpha_2))x_1,\\ \label{eq:conn44}\nabla_{x_1,\alpha_1}\alpha_2&=&(\frac{1}{3}\huaR^*(\alpha_1,\alpha_2)+\frac{2}{3}\huaR^*(\alpha_2,\alpha_1))x_1. \end{eqnarray} \end{cor} \section{Pseudo-K\"{a}hler structures on $3$-Lie algebras} In this section, we add a compatibility condition between a symplectic structure and a complex structure on a 3-Lie algebra to introduce the notion of a pseudo-K\"{a}hler structure on a $3$-Lie algebra. The relation between para-K\"{a}hler structures and pseudo-K\"{a}hler structures on a 3-Lie algebra is investigated. \begin{defi} Let $\omega$ be a symplectic structure and $J$ a complex structure on a real $3$-Lie algebra $(\g,[\cdot,\cdot,\cdot]_\g)$. The triple $(\g,\omega,J)$ is called a real {\bf pseudo-Kähler} $3$-Lie algebra if \begin{equation}\label{eq:pK} \omega(Jx,Jy)=\omega(x,y),\quad \forall x,y\in\g. \end{equation} \end{defi} \begin{ex}\label{ex:A4sK}{\rm Consider the symplectic structures and the complex structures on the $4$-dimensional Euclidean $3$-Lie algebra $A_4$ given in Example \ref{ex:A4symplectic} and Example \ref{ex:A4complex} respectively. Then $\{\omega_i,J_i\}$ for $i=1,2,3,4,5,6$ are pseudo-Kähler structures on $A_4$. } \end{ex} \begin{pro} Let $(\g,\omega,J)$ be a real pseudo-Kähler $3$-Lie algebra. Define a bilinear form $S$ on $\g$ by \begin{eqnarray} S(x,y)\triangleq \omega(x,Jy),\,\,\,\,\forall x,y\in\g. \end{eqnarray} Then $(\g,S)$ is a pseudo-Riemannian $3$-Lie algebra. \end{pro} \pf By \eqref{eq:pK}, we have \begin{eqnarray*} S(y,x)=\omega(y,Jx)=\omega(Jy,J^2x)=-\omega(Jy,x)=\omega(x,Jy)=S(x,y), \end{eqnarray*} which implies that $S$ is symmetric. Moreover, since $\omega$ is nondegenerate and $J^2=-\Id$, it is obvious that $S$ is nondegenerate.
Thus, $S$ is a pseudo-Riemannian metric on the 3-Lie algebra $\g$. \qed \begin{defi} Let $(\g,\omega,J)$ be a real pseudo-Kähler $3$-Lie algebra. If the associated pseudo-Riemannian metric is positive definite, we call $(\g,\omega,J)$ a real {\bf Kähler} $3$-Lie algebra. \end{defi} \begin{thm} Let $(\g,\omega,E)$ be a complex para-Kähler $3$-Lie algebra. Then $(\g_{\mathbb R},\omega_{\mathbb R},J)$ is a real pseudo-Kähler $3$-Lie algebra, where $\g_{\mathbb R}$ is the underlying real $3$-Lie algebra, $J=iE$ and $\omega_{\mathbb R}=\re(\omega)$ is the real part of $\omega.$ \end{thm} \pf By Proposition \ref{equivalent}, $J=iE$ is a complex structure on the complex $3$-Lie algebra $\g$. Thus, $J$ is also a complex structure on the real $3$-Lie algebra $\g_{\mathbb R}$. It is obvious that $\omega_{\mathbb R}$ is skew-symmetric. If $\omega_{\mathbb R}(x,y)=0$ for all $x\in\g$, then we have \begin{eqnarray*} \omega(x,y)=\omega_{\mathbb R}(x,y)+i\omega_{\mathbb R}(-ix,y)=0. \end{eqnarray*} By the nondegeneracy of $\omega$, we obtain $y=0$. Thus, $\omega_{\mathbb R}$ is nondegenerate. Therefore, $\omega_{\mathbb R}$ is a symplectic structure on the real $3$-Lie algebra $\g_{\mathbb R}$. By $\omega(Ex,Ey)=-\omega(x,y)$, we have \begin{eqnarray*} \omega_{\mathbb R}(Jx,Jy)=\re(\omega(iEx,iEy))=\re(-\omega(Ex,Ey))=\re(\omega(x,y))=\omega_{\mathbb R}(x,y). \end{eqnarray*} Thus, $(\g_{\mathbb R},\omega_{\mathbb R},iE)$ is a real pseudo-Kähler $3$-Lie algebra. \qed Conversely, we have \begin{thm} Let $(\g,\omega,J)$ be a real pseudo-Kähler $3$-Lie algebra. Then $(\g_{\mathbb C},\omega_{\mathbb C},E)$ is a complex para-Kähler $3$-Lie algebra, where $\g_{\mathbb C}=\g\otimes_{\mathbb R}\mathbb C$ is the complexification of $\g$, $E=-iJ_{\mathbb C}$ and $\omega_{\mathbb C}$ is the complexification of $\omega$, more precisely, \begin{eqnarray}\label{complex-omega} \omega_{\mathbb C}(x_1+iy_1,x_2+iy_2)=\omega(x_1,x_2)-\omega(y_1,y_2)+i\omega(x_1,y_2)+i\omega(y_1,x_2), \quad\forall x_1,x_2,y_1,y_2\in\g. \end{eqnarray} \end{thm} \pf By Corollary \ref{complex-to-special-paracomplex}, $E=-iJ_{\mathbb C}$ is a paracomplex structure on the complex $3$-Lie algebra $\g_{\mathbb C}$. It is obvious that $\omega_{\mathbb C}$ is skew-symmetric and nondegenerate. Moreover, since $\omega$ is a symplectic structure on $\g$, we deduce that $\omega_{\mathbb C}$ is a symplectic structure on $\g_{\mathbb C}.$ Finally, by $\omega(Jx,Jy)=\omega(x,y)$, we have \begin{eqnarray*} \omega_{\mathbb C}(E(x_1+iy_1),E(x_2+iy_2))&=&\omega_{\mathbb C}(Jy_1-iJx_1,Jy_2-iJx_2)\\ &=&\omega(Jy_1,Jy_2)-\omega(Jx_1,Jx_2)-i\omega(Jx_1,Jy_2)-i\omega(Jy_1,Jx_2)\\ &=&\omega(y_1,y_2)-\omega(x_1,x_2)-i\omega(x_1,y_2)-i\omega(y_1,x_2)\\ &=&-\omega_{\mathbb C}(x_1+iy_1,x_2+iy_2). \end{eqnarray*} Therefore, $(\g_{\mathbb C},\omega_{\mathbb C},-iJ_{\mathbb C})$ is a complex para-Kähler $3$-Lie algebra. \qed\vspace{3mm} At the end of this section, we construct a Kähler $3$-Lie algebra using a $3$-pre-Lie algebra with a symmetric and positive definite invariant bilinear form. \begin{pro} Let $(A,\{\cdot,\cdot,\cdot\})$ be a real $3$-pre-Lie algebra with a symmetric and positive definite invariant bilinear form $\huaB$. Then $(A^c\ltimes_{L^*}A^*,\omega,-J)$ is a real Kähler $3$-Lie algebra, where $J$ is given by \eqref{3-pre-Lie-complex} and $\omega$ is given by \eqref{phase-space}.
\end{pro} \pf By Theorem \ref{3-pre-Lie-phase-space} and Proposition \ref{pro:compro}, $\omega$ is a symplectic structure and $J$ is a perfect complex structure on the semidirect product 3-Lie algebra $( A^c\ltimes_{L^*}A^*,[\cdot,\cdot,\cdot]_{L^*})$. Obviously, $-J$ is also a perfect complex structure on $A^c\ltimes_{L^*}A^*$. Let $\{e_1,\cdots,e_n\}$ be a basis of $A$ such that $\huaB(e_i,e_j)=\delta_{ij}$ and let $e_1^*,\cdots,e_n^*$ be the dual basis of $A^*$. Then for all $i,j,k,l$, we have \begin{eqnarray*} \omega(e_i+e_j^*,e_k+e_l^*)&=&\delta_{jk}-\delta_{li},\\ \omega(-J(e_i+e_j^*),-J(e_k+e_l^*))&=&\omega(e_j-e_i^*,e_l-e_k^*)=-\delta_{il}+\delta_{kj}, \end{eqnarray*} which implies that $\omega(-J(x+\alpha),-J(y+\beta))=\omega(x+\alpha,y+\beta)$ for all $x,y\in A$ and $\alpha,\beta\in A^*$. Therefore, $(A^c\ltimes_{L^*}A^*,\omega,-J)$ is a pseudo-Kähler $3$-Lie algebra. Finally, let $x=\sum_{i=1}^{n}\lambda_ie_i\in A,\alpha=\sum_{i=1}^{n}\mu_ie_i^*\in A^*$ such that $x+\alpha\not=0.$ We have \begin{eqnarray*} S(x+\alpha,x+\alpha)&=&\omega(x+\alpha,-J(x+\alpha))\\ &=&\omega\big(\sum_{i=1}^{n}\lambda_ie_i+\sum_{i=1}^{n}\mu_ie_i^*,\sum_{i=1}^{n}\mu_ie_i-\sum_{i=1}^{n}\lambda_ie_i^*\big)\\ &=&\sum_{i=1}^{n}\mu_i^2+\sum_{i=1}^{n}\lambda_i^2>0. \end{eqnarray*} Thus, $S$ is positive definite. Therefore, $(A^c\ltimes_{L^*}A^*,\omega,-J)$ is a real Kähler $3$-Lie algebra. \qed
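The linear-algebraic part of this construction is easy to check numerically. The following sketch (ours, not part of the original argument; it tests only the bilinear-form conditions used in the proof, not the $3$-Lie brackets) realizes $\omega$ and $J$ as matrices in the basis $(e_1,\dots,e_n,e_1^*,\dots,e_n^*)$, using $\omega(e_i,e_l^*)=-\delta_{il}$, $\omega(e_j^*,e_k)=\delta_{jk}$, $Je_i=e_i^*$ and $Je_i^*=-e_i$ as read off from the proof, and verifies that $J^2=-\Id$, that $\omega$ is $(-J)$-invariant and that $S(x,y)=\omega(x,-Jy)$ is symmetric and positive definite:
\begin{verbatim}
# a sketch: sanity check of the Kahler construction on A + A*
import numpy as np

n = 4                                 # dim A; any n works
I, Z = np.eye(n), np.zeros((n, n))
# basis ordering: (e_1,...,e_n, e_1*,...,e_n*)
Omega = np.block([[Z, -I], [I, Z]])   # omega(e_i,e_l*) = -delta_il, omega(e_j*,e_k) = delta_jk
J = np.block([[Z, -I], [I, Z]])       # J e_i = e_i*, J e_i* = -e_i

assert np.allclose(J @ J, -np.eye(2 * n))          # J^2 = -Id
assert np.allclose((-J).T @ Omega @ (-J), Omega)   # omega(-Jx,-Jy) = omega(x,y)
S = Omega @ (-J)                                   # matrix of S(x,y) = omega(x,-Jy)
assert np.allclose(S, S.T)
assert np.all(np.linalg.eigvalsh(S) > 0)           # positive definite
print(S)   # the identity matrix: S(x+a,x+a) = sum mu_i^2 + sum lambda_i^2
\end{verbatim}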
\section{Introduction}\label{intro} Weak-field limits of theories involving long-range modifications of gravity recently put forth to address the issues of dark energy and dark matter \citep{DGP,All05,NvA05,NvA06,NvA06b,Apo06,Cap06,Noj07} are important because such exotic corrections to the Newtonian potential allow, in principle, for tests to be performed on local, astronomical scales, independently of the galactic/cosmological effects \citep{Cap07} which motivated such alternative theories and which, otherwise, would represent their only justification. In this paper we will show how to obtain phenomenologically tight constraints on the viability of some of such modified theories by suitably using the latest observational results from Solar System planetary motions \citep{Pit05a, Pit05b}. In Section \ref{alf} we will consider the four-dimensional models obtained from inverse powers of some curvature invariants \citep{NvA05}. After working out the analytic expression of the secular, i.e. averaged over one orbital revolution, perihelion precession induced by the Newtonian limit of such models, we will compare it to the phenomenologically estimated corrections to the usual Newton-Einstein precessions of the perihelia of the inner planets of the Solar System. By taking the ratio of them for different pairs of planets we will find that the predicted exotic effects are ruled out. In Section \ref{bet} and Section \ref{gam} we repeat the same procedure for the four-dimensional model based on the logarithm of some invariants of curvature \citep{NvA06b} and for the multidimensional braneworld model by \citet{DGP}, respectively, finding that these models do not pass our test either. In Section \ref{delt} we apply the same strategy to the general relativistic gravitomagnetic field \citep{LT}, finding that it is, instead, compatible with the ratio of the perihelion precessions for all the considered pairs of planets. Section \ref{concl} is devoted to the conclusions. \section{The inverse-power curvature invariants models}\lb{alf} In this Section we will address the long-range modifications of gravity obtained by including in the action inverse powers of some invariants of curvature not vanishing in the Schwarzschild solution \citep{NvA05,NvA06,NvA06b}. From the correction to the Newtonian potential \citep{NvA05} \begin{equation} V = \frac{\alpha GM}{ {r_c}^{6k+4} }r^{6k+3},\ r\ll{r_c}, \lb{pot}\end{equation} where $k$ is a positive integer number, a purely radial acceleration follows: \begin{equation}\bds A = -\rp{\alpha GM(6k+3)}{{r_c}^{6k+4}}r^{6k+2}\ \bds{\widehat{r}},\ r\ll{r_c} \lb{aAvA}.\end{equation} The length scale $r_c$ depends, among other things, on a parameter $\mu$ which must assume a certain value in order for the model by \citet{NvA05} to be able to reproduce the cosmic acceleration \citep{Car05,Mena06} without dark energy; it is just such a value of $\mu$ which makes $r_c \approx 10$ pc ($k=1$) for a Sun-like star \citep{NvA05}. Since \citep{NvA06}\begin{equation}\alpha =\rp{k(1+k)}{(6k+3)2^{4k}3^k}\end{equation} and ${r_c}\approx 10$ pc ($k=1$), the condition $r\ll{r_c}$ for which the expansion in $r/r_c$ yielding \rfr{pot} retains its validity is fully satisfied in the Solar System, and \rfr{aAvA} can be treated as a small correction to the Newtonian monopole with the standard perturbative techniques of celestial mechanics.
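The numerical size of these quantities is easily made explicit; the following short sketch (ours) evaluates $\alpha(k)$ and the ratio of \rfr{aAvA} to the Newtonian monopole, $A/A_{\rm N}=\alpha(6k+3)(r/r_c)^{6k+4}$, at $r=1$ AU for $k=1$ and $r_c=10$ pc:
\begin{verbatim}
# a sketch: alpha(k) and the size of the exotic acceleration for k = 1
AU = 1.49597870691e11            # m
pc = 3.0856775814e16             # m

def alpha(k):
    return k * (1 + k) / ((6 * k + 3) * 2 ** (4 * k) * 3 ** k)

print([alpha(k) for k in (1, 2, 3)])   # alpha(1) ~ 4.6e-3
k, r, rc = 1, 1.0 * AU, 10.0 * pc
print(alpha(k) * (6 * k + 3) * (r / rc) ** (6 * k + 4))   # ~3e-65, << 1
\end{verbatim}
With $r_c\approx 10$ pc the fractional correction at $1$ AU is thus utterly negligible, so that the perturbative treatment of \rfr{aAvA} is amply justified.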
The Gauss equation for the variation of the pericentre $\omega$ of a test particle acted upon by an entirely radial disturbing acceleration $A$ is \begin{equation}\dert\omega t = -\rp{\sqrt{1-e^2}}{nae}A\cos f,\lb{gaus}\end{equation} where $a$ and $e$ are the semimajor axis and the eccentricity, respectively, of the orbit of the test particle, $n=\sqrt{GM/a^3}$ is the unperturbed Keplerian mean motion and $f$ is the true anomaly reckoned from the pericentre. The secular precession of the pericentre $\left\langle\dot\omega\right\rangle$ can be worked out by evaluating the right-hand side of \rfr{gaus} on the unperturbed Keplerian ellipse \begin{equation} r=a(1-e\cos E),\end{equation} where $E$ is the eccentric anomaly, and by subsequently performing an integration over one full orbital period. To this aim, the following relations are useful \begin{equation} dt = \left(\rp{1-e\cos E}{n}\right)dE,\end{equation} \begin{equation} \cos f = \rp{\cos E-e}{1-e\cos E}.\end{equation} Let us start with the case $k=1$; the extra-acceleration becomes \begin{equation} \bds A_{k=1}=-\rp{9\alpha GM}{{r_c}^{10}}r^8\ \bds {\widehat{r}}.\end{equation} By proceeding as previously outlined and using the exact result \begin{equation}\int_0^{2\pi}(\cos E-e)(1-e\cos E)^8 dE=-\rp{5e\pi}{64}\left[128+7e^2\left(128+160e^2+40e^4+e^6\right)\right],\end{equation} it is possible to obtain the exact formula \begin{equation}\left\langle\dot\omega\right\rangle_{k=1} = -\rp{45\alpha}{{r_c}^{10}}\sqrt{GMa^{17}(1-e^2)}\left[1+7e^2\left(1+\rp{5}{4}e^2 +\rp{5}{16}e^4 +\rp{1}{128}e^6 \right)\right]\lb{peri}.\end{equation} It is important to note the dependence of $\left\langle\dot\omega\right\rangle$ on a positive power of the semimajor axis $a$: this fact will be crucial in setting up our test. The predicted extra-precession of \rfr{peri} can be fruitfully compared to the corrections to the usual Newton-Einstein perihelion rates of the inner planets of the Solar System phenomenologically estimated by \citet{Pit05a}, in a least-squares sense, as solve-for parameters of a global solution in which a huge amount of modern planetary data of all types, covering about one century, was compared with the dynamical force models of the EPM2004 ephemerides \citep{Pit05b}. Such corrections are quoted in Table \ref{tavola}. \begin{table} \caption{Semimajor axes $a$, in AU (1 AU$=1.49597870691\times 10^{11}$ m), and phenomenologically estimated corrections to the Newtonian-Einsteinian perihelion rates, in arcseconds per century ($''$ cy$^{-1}$), of Mercury, the Earth and Mars \citep{Pit05a}. Also the associated errors are quoted: they are in m for $a$ \citep{Pit05b} and in $''$ cy$^{-1}$\ for $\dot\varpi$ \citep{Pit05a}. For the semimajor axes they are the formal, statistical ones, while for the perihelia they are realistic in the sense that they were obtained from comparison of many different solutions with different sets of parameters and observations (Pitjeva, private communication 2005).
The results presented in the text do not change if $\delta a$ are re-scaled by a factor 10 in order to get more realistic uncertainties.}\label{tavola} \begin{tabular}{ccccc} \noalign{\hrule height 1.5pt} Planet & $a$ (AU) & $\delta a$ (m) & $\dot\varpi$ ($''$ cy$^{-1}$) & $\delta\dot\varpi$ ($''$ cy$^{-1}$) \\ \hline Mercury & 0.38709893 & 0.105 & -0.0036 & 0.0050\\ Earth & 1.00000011 & 0.146 & -0.0002 & 0.0004 \\ Mars & 1.52366231 & 0.657 & 0.0001 & 0.0005\\ \hline \noalign{\hrule height 1.5pt} \end{tabular} \end{table} They were determined in a model-independent way, without modeling this or that particular model of modified gravity: only known Newton-Einstein accelerations\footnote{With the exception of the general relativistic gravitomagnetic interaction, yielding the Lense-Thirring effect, and of the Kuiper Belt Objects.} were, in fact, modeled so that the estimated perihelion extra-rates account, in principle, for all the unmodeled forces present in Nature. Since July 2005 \citep{Ior07a}, many other authors have used the extra-precessions of the perihelia of the inner planets of the Solar System estimated by \citet{Pit05a} to put constraints on modified models of gravity \citep{Gan06,IorDGPb,San06,Ad07,Rug07,Ior07b}, the cosmological constant \citep{Ior06a,Ser06b}, various cosmological issues \citep{Adetal07,Fay07,Nes07a,Nes07b,Ser07}, the dark matter distribution \citep{Ior06b,Khri06,Ser06a,Fre07,Khri07}, trans-Neptunian bodies \citep{Ior07c}, and general relativity \citep{Wil06,Ior07a}; a common feature of all such analyses is that they always used the perihelia separately for each planet, or combined them linearly, by assuming that the exotic effects investigated were included in the estimated corrections to the perihelion precessions, and by using their errors to constrain the parameters of the extra-forces. Concerning the reliability of the results by \citet{Pit05a}, \citet{Ior07b} made an independent check by assessing the total mass of the Kuiper Belt Objects, getting results compatible with other ones obtained with different methods not based on the dynamics of the inner planets. It must be noted that more robustness could be reached if and when other teams of astronomers estimate their own corrections to the perihelion precessions. On the other hand, an alternative approach would consist in re-fitting the entire data set by including an ad-hoc parameter accounting for just the exotic effect one is interested in. However, such a procedure would not only be quite time-consuming, because of the need to modify the software's routines by including the extra-accelerations, but it would also be model-dependent, perhaps introducing the temptation to more or less consciously tweak the data and/or the procedure in order to obtain just the outcome one expects a priori. Here we will not use one perihelion at a time for each planet. Indeed, let us consider a pair of planets A and B and take the ratio of their estimated extra-rates of perihelia: if \rfr{peri} is responsible for them, then the quantity\footnote{It turns out that the multiplicative term depending on the eccentricities has a negligible effect on our conclusions.} \begin{equation} \Gamma_{\rm AB} =\left|\rp{\dot\omega^{\rm A}}{ \dot\omega^{\rm B} }- \left(\rp{a^{\rm A}}{a^{\rm B}}\right)^{17/2} \right|\lb{gam}\end{equation} must be compatible with zero, within the errors.
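As a cross-check of the figures quoted below, $\Gamma_{\rm AB}$ and its uncertainty can be reproduced directly from the entries of Table \ref{tavola}; a short sketch (ours), using the conservative, linearly-summed error propagation given in the first bullet point below:
\begin{verbatim}
# a sketch: the ratio test of Eq. (gam) from the Table 1 entries
AU = 1.49597870691e11   # m

# planet: (a [AU], delta_a [m], pidot ['' cy^-1], delta_pidot ['' cy^-1])
tab = {'Mercury': (0.38709893, 0.105, -0.0036, 0.0050),
       'Earth':   (1.00000011, 0.146, -0.0002, 0.0004),
       'Mars':    (1.52366231, 0.657,  0.0001, 0.0005)}

def ratio_test(A, B, p=17.0 / 2.0):
    aA, daA, wA, dwA = tab[A]
    aB, daB, wB, dwB = tab[B]
    val = abs(wA / wB - (aA / aB) ** p)
    err = (abs(wA / wB) * (dwA / abs(wA) + dwB / abs(wB))
           + abs(p) * (aA / aB) ** p * (daA / (aA * AU) + daB / (aB * AU)))
    return val, err

for A, B in (('Mars', 'Mercury'), ('Mars', 'Earth'), ('Earth', 'Mercury')):
    print(A, B, ratio_test(A, B))
\end{verbatim}
The same routine, run with different exponents $p$, reproduces the analogous tests of the following sections; the numbers agree with those quoted in the text at the level of the rounding of the table entries.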
The figures of Table \ref{tavola} tell us that it is definitely not so: indeed, for A=Mars, B=Mercury we have \begin{equation}\Gamma_{\rm MaMe}=10^5\pm 0.1.\end{equation} The situation is slightly better for A=Mars and B=Earth: \begin{equation} \Gamma_{\rm MaE}=38\pm 3.5.\end{equation} An intermediate case occurs for A=Earth and B=Mercury: \begin{equation}\Gamma_{\rm EMe}=10^3 \pm 0.2.\end{equation} It is important to note that \begin{itemize} \item The uncertainty in $\Gamma_{\rm AB}$ has been conservatively estimated as \begin{equation}\delta\Gamma_{\rm AB}\leq \left|\rp{\dot\omega^{\rm A}}{\dot\omega^{\rm B}}\right|\left(\rp{\delta\dot\omega^{\rm A}}{|\dot\omega^{\rm A}|} + \rp{\delta\dot\omega^{\rm B}}{|\dot\omega^{\rm B}|}\right) + \rp{17}{2}\left(\rp{a^{\rm A}}{a^{\rm B}}\right)^{17/2}\left( \rp{\delta a^{\rm A}}{a^{\rm A}} + \rp{\delta a^{\rm B}}{a^{\rm B}} \right)\end{equation} by linearly adding the individual terms coming from the propagation of the errors in $\dot\omega$ and $a$ in \rfr{gam}; this is justified by the existing correlations among the estimated Keplerian orbital elements\footnote{The correlations among the perihelion rates are low, with a maximum of 20$\%$ between the Earth and Mercury (Pitjeva, private communication, 2005).} \item The results presented here do not change if we re-scale by a factor 10 the formal errors in the semimajor axes \citep{Pit05b} quoted in Table \ref{tavola}. The same holds also for the errors in the perihelia rates which, however, are not the mere statistical ones but are to be considered as realistic, as explained in the caption of Table \ref{tavola} \item The constraints obtained here with \rfr{gam} are independent of $\alpha$ and ${r_c}$; should one use \rfr{peri} for each planet separately to constrain ${r_c}$, it turns out that, for $\alpha=4\times 10^{-3}$ ($k=1$), ${r_c}\gtrsim 4.5$ AU. Note that with such a value the condition $r\ll{r_c}$, with which \rfr{pot} and, thus, \rfr{peri} were derived, holds for all the inner planets \item For $k>1$ the situation is even worse because of the resulting higher powers with which the semimajor axis enters the formulas for the perihelion rates \end{itemize} \section{The logarithmic curvature invariants models}\lb{bet} The same approach can be fruitfully used for the model by \citet{NvA06b} based on an action depending on the logarithm of some invariants of the curvature in order to obtain a modification of gravity at the MOND \citep{Mil83} characteristic scale \citep{San02}, so as to address in a unified way the dark energy and dark matter problems; in this model the length scale $r_c$ amounts to about 0.04 pc for the Sun. The correction to the Newtonian potential is \begin{equation} V\propto \rp{GMr^3}{{r_c}^4},\end{equation} which yields the perturbing acceleration \begin{equation} \bds A \propto \rp{r^2}{{r_c}^4}\bds {\widehat{r}}.\lb{newac}\end{equation} By using \begin{equation} \int_0^{2\pi}(\cos E-e)(1-e\cos E)^2dE = -e\pi(4+e^2),\end{equation} the secular precession of the perihelion induced by \rfr{newac} is \begin{equation}\left\langle\dot\omega\right\rangle \propto \rp{\sqrt{GM a^5 (1-e^2)}}{{r_c}^4}(4+e^2);\lb{logperi}\end{equation} also in this case it depends on a positive power of the semimajor axis; cf. the approximated result by \citet{NvA06b} for the shift per orbit, i.e. $2\pi\left\langle\dot\omega\right\rangle /n$.
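Both closed-form angular integrals employed here and in the previous section can be checked by direct numerical quadrature; a sketch (ours):
\begin{verbatim}
# a sketch: numerical check of the two angular integrals
import numpy as np
from scipy.integrate import quad

e = 0.3                      # any eccentricity 0 <= e < 1 works
f8 = lambda E: (np.cos(E) - e) * (1 - e * np.cos(E)) ** 8
f2 = lambda E: (np.cos(E) - e) * (1 - e * np.cos(E)) ** 2
lhs8, _ = quad(f8, 0, 2 * np.pi)
rhs8 = -(5 * e * np.pi / 64) * (128 + 7 * e ** 2
        * (128 + 160 * e ** 2 + 40 * e ** 4 + e ** 6))
lhs2, _ = quad(f2, 0, 2 * np.pi)
rhs2 = -e * np.pi * (4 + e ** 2)
print(np.isclose(lhs8, rhs8), np.isclose(lhs2, rhs2))   # True True
\end{verbatim}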
By taking the ratio of \rfr{logperi} for a pair of planets and comparing it to the ratio of the estimated extra-precessions by \citet{Pit05a} one obtains \begin{equation} \Delta_{\rm AB} = \left|\rp{\dot\omega^{\rm A}}{ \dot\omega^{\rm B} }- \left(\rp{a^{\rm A}}{a^{\rm B}}\right)^{5/2} \right|\lb{DEL}. \end{equation} The test is not passed. Indeed, for A=Mars and B=Mercury we have \begin{equation} \Delta_{\rm MaMe}=30.7\pm 0.1;\end{equation} the pair A=Earth, B=Mercury yields \begin{equation} \Delta_{\rm EMe}=10.6\pm 0.2,\end{equation} while for A=Mars, B=Earth $\Delta$ is marginally compatible with zero: \begin{equation} \Delta_{\rm MaE}=3.4\pm 3.5.\end{equation} Note that, even if the real errors in the estimated extra-precessions of perihelia were up to 10 times larger than those quoted by \citet{Pit05a}, the pair Mars-Mercury would still be able to rule out the logarithmic model by \citet{NvA06b}. \section{The multidimensional braneworld Dvali-Gabadadze-Porrati model}\lb{gam} Another modified model of gravity, aimed at explaining the cosmic acceleration without dark energy, is the multidimensional braneworld model DGP \citep{DGP} which predicts, among other things, an extra-rate of perihelion independent of the planetary semimajor axis\footnote{The only dependence on the features of the planetary orbits occurs through a correction quadratic in the eccentricity $e$ \citep{IorDGP} which turns out to be negligible in this case.} \citep{LS,IorDGP}. It is incompatible with the test of the ratio of perihelia as well, although less dramatically than the previously examined models. Indeed, by defining \begin{equation}\Psi_{\rm AB}=\left|\rp{\dot\omega^{\rm A}}{\dot\omega^{\rm B}}-1\right|,\end{equation} for A=Mars, B=Mercury we have \begin{equation} \Psi_{\rm MaMe}=1.0\pm 0.2,\end{equation} while A=Earth, B=Mercury yields \begin{equation} \Psi_{\rm EMe}=0.9\pm 0.2.\end{equation} Errors in the determined extra-rates of perihelion 5 times larger than those quoted in Table \ref{tavola} would allow the DGP model to pass the test. The pair A=Mars, B=Earth gives a result compatible with zero: \begin{equation} \Psi_{\rm MaE}=1.5\pm 3.5;\end{equation} the same holds for the other three combinations in which A and B denote the planets with the smaller and larger semimajor axes, respectively. Until now, the DGP model had not been found in disagreement with the Solar System data because the perihelia were used separately for each planet \citep{IorDGPb}. \section{General relativistic effects: gravitomagnetism and the cosmological constant}\lb{delt} It may be interesting to note that, contrary to the exotic effects induced by the modified models of gravity previously examined, the Lense-Thirring effect \citep{LT} induced by the general relativistic gravitomagnetic field of the Sun, not modeled by \citet{Pit05a}, does pass our test based on the ratio of the perihelia. Indeed, since the Lense-Thirring perihelion precessions are proportional to a negative power of the semimajor axis, i.e. \begin{equation}\left\langle\dot\omega\right\rangle\propto a^{-3},\end{equation} the quantity \begin{equation} \Lambda_{\rm AB}=\left|\rp{\dot\omega^{\rm A}}{\dot\omega^{\rm B}}-\left(\rp{a_{\rm B}}{a_{\rm A}}\right)^3\right|\end{equation} must be considered. It turns out that it is compatible with zero for all six combinations which can be constructed with the data of Table \ref{tavola}.
This result reinforces the analysis by \citet{Ior07a} in which the extra-rates of the perihelia were used one at a time for each planet and linearly combined, finding the general relativistic predictions for the Lense-Thirring precessions compatible with them. \section{Conclusions}\lb{concl} In this paper we used the corrections to the Newton-Einstein secular precessions of the perihelia of the inner planets of the Solar System, estimated by \citet{Pit05a} in a least-squares sense as phenomenological solve-for parameters of a global solution in which almost one century of data were fitted with the EPM2004 ephemerides, to put tight constraints on several models of modified gravity recently proposed to explain the dark energy/dark matter issues. By using the ratio of the perihelion precessions for different pairs of planets, instead of taking one perihelion at a time for each planet as done so far, we were able to rule out all the considered long-range models of modified gravity, in particular the ones based on inverse powers of curvature invariants by \citet{NvA05} and on the logarithm of some curvature invariants \citep{NvA06b}, even after re-scaling the errors in the estimated perihelion extra-rates by a factor of 10. The situation is less dramatic for the DGP \citep{DGP} braneworld model since, if the real errors in the perihelion precessions were, in fact, 5 times larger than the ones released, it would become compatible with the data. Only the general relativistic Lense-Thirring effect passed the test. However, it must be noted that our results are based only on the extra-rates of perihelia determined by \citet{Pit05a}: it would be highly desirable to use corrections to the secular motion of perihelia estimated by other teams of astronomers as well. If and when they become available, our test will become more robust. \section*{Acknowledgments} I am grateful to G.E. Melki for useful remarks.
\section{Introduction} There is presently a strong interest in computation and information processing based on fundamental principles of quantum mechanics \cite{Nielsen}. Quantum information technology has the potential both to address problems that cannot be solved by standard, classical information technology as well as to radically improve the performance of existing classical schemes. The prospect of scalability and integrability with conventional electronics makes solid state systems a likely future arena for quantum information processing. Of particular interest is the entanglement between the elementary charge carriers, quasiparticles, in meso- or nanoscopic solid state conductors. Entanglement, or quantum mechanical correlations, constitutes a resource for any quantum information process. Moreover, due to controllable system properties and coherent transport conditions, conductors on the meso and nano scale constitute ideal systems for the generation and detection of quasiparticle entanglement. This opens up the possibility of quantum bits based on the spin or orbital quantum states of individual electrons, the ultimate building blocks for solid state quantum information processing. To date, however, quasiparticle entanglement has remained experimentally elusive. In particular, there is no unambiguous experimental demonstration of entanglement between two spatially separated quasiparticles. A class of mesoscopic systems that appears promising for a successful entanglement experiment consists of conductors without direct interactions between the quasiparticles. It was shown by Beenakker {\it et al.} \cite{Been03} that fermions emitted from a thermal source can, in contrast to bosons, be entangled by scattering at a beam-splitter. This was originally discussed for electron-hole pairs \cite{Been03} and shortly afterward for pairs of electrons \cite{Sam04,Been04a}. Since then there has been a large number of works on entanglement of non-interacting particles, see e.g. \cite{nonint1,nonint2,nonint3,Titov,Kindermann,Frustaglia} for a number of representative papers and also \cite{Beenrev} for a review. Several of the entanglement proposals have been based on electrical analogs of optical interferometers and beam-splitter geometries. Such electronic systems are conveniently implemented in conductors in the quantum Hall regime, where electrons propagate along chiral edge states \cite{halp,mb88} and quantum point contacts constitute reflectionless beam-splitters \cite{BS1,BS2,BS3} with controllable transparency, see e.g. \cite{Buttpap}. Recent experimental progress on electronic Mach-Zehnder \cite{MZ1,MZ2,MZ3,MZ4,MZ5} and Hanbury Brown Twiss \cite{Neder} interferometers has provided further motivation for a theoretical investigation of entanglement in such systems. In addition, the experimental realization \cite{Feve} of time-controlled single-electron emitters \cite{singem1,singem2} in quantum Hall systems has opened up the possibility for a dynamical generation of entangled quasiparticles, entanglement on demand \cite{Timeent1,Timeent2,Timeent3,Timeent4}. In this work we will focus on the electronic two-particle, or Hanbury Brown Twiss, interferometer. An implementation of this two-particle interferometer (2PI) in a conductor in the quantum Hall regime was proposed by two of us, P.S. and M.B., together with E. V. Sukhorukov in Ref. \cite{Sam04}.
Recently, the Heiblum group, including one of us, I.N., was able to realize the 2PI in a versatile system which could be electrically tuned between two independent Mach-Zehnder interferometers and a 2PI. In perfect agreement with the theoretical predictions \cite{Sam04}, the two-particle interference pattern was visible in the current correlations but not in the average current. As discussed in Ref. \cite{Sam04}, there is an intimate relation between two-particle interference and entanglement in the fermionic 2PI. Under ideal conditions, i.e. zero temperature and perfect coherence, two-particle interference implies that the two-particle wave function is of the form \begin{equation} |\Psi_s\rangle=\frac{1}{\sqrt{2}}\left[|1\rangle_A|2\rangle_B-|2\rangle_A|1\rangle_B\right]. \label{introsing} \end{equation} Here $1,2$ denote the sources and $A,B$ the sites of detection, as shown in Fig. \ref{system}. The wavefunction $|\Psi_s\rangle$ is maximally entangled; it is a singlet in the orbital, or pseudo spin, space $\{|1\rangle, |2\rangle \}$. However, in the experiment \cite{Neder}, $\sim 25\%$ visibility of the current correlation oscillations was observed. This indicates that both decoherence and finite temperature are important. Dephasing can qualitatively be accounted for \cite{Sam03,Turkbeen,Turksam} by a suppression of the off-diagonal components of the density matrix $|\Psi_s\rangle\langle \Psi_s|$. It was shown that at zero temperature the entanglement survives for arbitrarily strong dephasing. The effect of finite temperature was not investigated at the time of the experiment. The experimental findings thus raised two important questions: are the electrons reaching the detectors at A and B entangled and, if so, can this two-particle entanglement be unambiguously detected by measurements of currents and current correlators, the standard quantities accessible in electronic transport measurements? In our recent work \cite{Sam09} we provided a positive answer to both these questions. We first calculated the entanglement of the emitted two-particle state and found that the state was clearly entangled. Thereafter we showed that under very general conditions the entanglement of the reduced two-particle density matrix provides a lower bound for the entanglement of the emitted two-particle state. Since the reduced density matrix can be reconstructed tomographically by current and current correlation measurements \cite{tomo}, this provides an unambiguous way to detect the entanglement of the emitted state. In the present paper we discuss these findings in more detail. \section{The two-particle interferometer in optics and electronics} Interference is most often investigated in structures that lead to a superposition of amplitudes of a single particle. However, in 1956, Hanbury Brown and Twiss (HBT) invented an optical interferometer based on correlations of light intensities \cite{HBT1,HBT2}, an optical 2PI, see Fig. \ref{system}. The intensity interferometer allowed HBT to determine the angular diameter of a number of visual stars, something not possible with the available single-particle, or Michelson, interferometers. The HBT intensity interferometer displays two distinct but fundamentally interrelated features: \\ $\bullet$ First, there is a direct statistical effect since photons from a thermal light source tend to bunch, whereas fermions would anti-bunch.
This effect has been used in a large number of experiments in different fields of physics such as elementary particles \cite{Baym}, solid state \cite{BS1,BS2,BS3} and free \cite{vacuum} electrons and, recently, cold atoms \cite{Coldat}.\\ $\bullet$ Second, light from two different, completely uncorrelated sources gives rise to an interference effect in intensity correlations but not in the intensities themselves. This is the two-particle interference effect. In optics, various aspects of two-particle interference have been investigated extensively since the HBT-experiment, see e.g. \cite{Mandel} for a short review, and remain a subject of interest \cite{Zeil}. In electronics, only very recently was a fermionic two-particle interferometer realized \cite{Neder}, the subject of this work. \\ Fundamentally, both of these effects are related to the symmetry of the multiparticle wave function under exchange of two particles. We note that although the HBT-experiment could be explained by a classical electromagnetic theory, a compelling quantum mechanical picture based on individual photons was put forth soon after the experiment \cite{Purcell}. Importantly, for fermions no classical theory exists. \begin{figure}[h] \centerline{\psfig{figure=figsys.eps,width=11.0cm}} \caption{a) Schematic of the Hanbury Brown Twiss intensity interferometer used to measure the angular diameter of stars. Two uncorrelated points 1,2 on the star act as sources. The signal is detected at A and B. b) Schematic of the topologically equivalent two-particle interferometer (2PI) \cite{Sam04} with beam splitters C,D and biased, active (grounded, inactive) source contacts 1,2 (3,4). Detector regions A and B (red shaded) contain beam splitters and grounded contacts $\pm$.} \label{system} \end{figure} To obtain a qualitative understanding of the physics of two-particle interferometers it is rewarding to compare the properties of optical, bosonic interferometers and electronic, fermionic interferometers. In Fig. \ref{system} a schematic of a two-particle interferometer, topologically equivalent to the HBT-interferometer, is shown. A natural measure of the correlations between the particles at $A$ and $B$ is the probability to jointly detect one particle at $A$ and one at $B$. An expression for this joint detection probability for photons was derived by Glauber \cite{Glauber}. In Ref. \cite{Sam04} this was adapted to the detection of electrons. Here we consider the probability to detect one photon/electron in detector $A\alpha$, $\alpha=\pm$, at time $t$ and one in detector $B\beta$, $\beta=\pm$, at a time $t+\tau$, given by \begin{eqnarray} P_{A\alpha B\beta}(\tau)\propto\langle b^{\dagger}_{B\beta}(t)b^{\dagger}_{A\alpha}(t+\tau)b_{A\alpha}(t+\tau)b_{B\beta}(t)\rangle \label{jdp} \end{eqnarray} The photon/electron creation operators at A are $b^{\dagger}_{A\alpha}(t)=\int dE~\mbox{exp}(iEt/\hbar)b_{A\alpha}^{\dagger}(E)$, with $b_{A\alpha}^{\dagger}(E)$ creating a particle in $A\alpha$ at energy $E$ and similarly at B. For photons we consider thermal sources in $1$ and $2$ while $3$ and $4$ are left empty. A detector frequency window of size $\Delta \omega$ is assumed, over which the distribution functions of the sources are constant, i.e. $\hbar\Delta \omega \ll kT$. For electrons we assume zero temperature and the sources $1$ and $2$ biased at $eV$ while sources $3$ and $4$ are grounded. Only quasiparticle excitations, $E \geq 0$, are considered. The probabilities are normalized such that $\sum_{\alpha,\beta=\pm} P_{A\alpha B\beta}=1$.
Following the scattering theory for intensity/current correlations for bosons/fermions emitted from thermal sources \cite{mb90,mb92}, we get \begin{eqnarray} P_{A\alpha B\beta}(\tau)&\propto &|s_{A\alpha 1}|^2|s_{B\beta 1}|^2\left[1\pm g(\tau)\right]+|s_{A\alpha 2}|^2|s_{B\beta 2}|^2\left[1\pm g(\tau)\right] \nonumber \\ &+&|s_{A\alpha 1}|^2|s_{B\beta 2}|^2+|s_{A\alpha 2}|^2|s_{B\beta 1}|^2 \nonumber \\ &\pm & g(\tau)\left[s_{A\alpha 1}^*s_{B\beta 2}^*s_{B\beta 1}s_{A\alpha 2}+s_{A\alpha 1}s_{B\beta 2}s_{B\beta 1}^* s_{A\alpha 2}^*\right] \label{jdpt} \end{eqnarray} where $g(\tau)=\sin^2[\tau/(\pi\tau_C)]/[\tau/(\pi\tau_C)]^2$ contains the time dependence, with the coherence time $\tau_C=h/eV$ for electrons and $\tau_C=2/(\pi \Delta\omega)$ for photons. Here $s_{A\alpha 2}$ is the amplitude to scatter from source 2 to detector $A\alpha$ etc. The upper/lower signs $\pm$ correspond to electrons/photons. Several interesting conclusions can be drawn directly from Eq. (\ref{jdpt}): 1) For $\tau \gg \tau_C$, $g(\tau)$ approaches zero and $P_{A\alpha B\beta}$ is just proportional to the product of the two mean currents/intensities. The fermionic versus bosonic statistics of the particles plays no role. 2) For shorter times, $\tau \leq \tau_C$, $g(\tau)$ is finite and the statistics is important. Note, as pointed out above, that the statistics of the particles enters in two different ways.\\ i) The first two terms in Eq. (\ref{jdpt}) describe a direct bunching (+) or anti-bunching (-) effect for two particles emitted from the same reservoir within a time $\tau \leq \tau_C$. This effect would still be present if one of the sources $1$ or $2$ is removed. \\ ii) The last two terms describe the two-particle, or exchange \cite{mb90,mb92}, interference, where the $\pm$ sign explicitly follows from the interchange of the two detected particles. This two-particle interference is only present when both sources are active. For semitransparent beam-splitters $A,B,C$ and $D$ and coincident detection $\tau\ll \tau_C$ we have \begin{eqnarray} P_{A\alpha B\beta}=\left\{ \begin{array}{cc} \frac{1}{4}\left[1+\alpha\beta\cos \phi\right] & \mbox{fermions} \\ \frac{1}{4}\left[1+\frac{\alpha\beta}{2}\cos \phi\right] & \mbox{bosons} \end{array}\right. \label{jdpt2} \end{eqnarray} where $\phi$ is a scattering phase. From this expression a very important difference between bosonic and fermionic thermal sources is apparent: the visibility \begin{equation} \nu=\frac{P_{A\alpha B\beta}^{max}-P_{A\alpha B\beta}^{min}}{P_{A\alpha B\beta}^{max}+P_{A\alpha B\beta}^{min}} \end{equation} of the oscillations is $1$ for fermions but only $1/2$ for bosons. This is directly related to the fact that while the emitted fermionic two-particle state is maximally entangled, the bosonic state is unentangled \cite{yurke}. \section{Fermionic two particle interferometer: theory} In Ref. \cite{Sam04} we proposed an implementation of an electronic 2PI in a conductor in the quantum Hall regime, with electrons propagating along single, spin polarized edge states (see Fig. \ref{HBTferm}). Two electronic reservoirs $1,2$ biased at $eV$ act as sources for electrons while the reservoirs $3,4$ as well as the detector reservoirs are grounded. All reservoirs are kept at the same temperature $T$. Moreover, we consider here only the linear regime in voltage, where electron-electron interactions can be neglected. This regime is relevant for the experiment \cite{Neder}.
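For orientation, the coherence time at the bias applied in the experiment ($7.8~\mu$V, quoted in the experimental section below) is easily evaluated; a two-line sketch (ours):
\begin{verbatim}
# a sketch: coherence time tau_C = h/eV at the experimental bias
h = 4.135667696e-15      # Planck constant in eV s
V = 7.8e-6               # bias in V, so eV = 7.8 ueV
print(h / V)             # tau_C ~ 5.3e-10 s, about half a nanosecond
\end{verbatim}
Joint detection events separated by much more than this time scale are, according to point 1) above, uncorrelated.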
The QPCs at $A,B,C$ and $D$ act as beam splitters with transparencies $T_A,T_B,T_C$ and $T_D$ respectively. \begin{figure}[h] \centerline{\psfig{figure=HBTtot.eps,width=13.0cm}} \caption{a) Fermionic 2PI implemented in a conductor in the quantum Hall regime, from \cite{Sam04}. See text for details. b) Schematic of the source part of the 2PI, showing the orbital states $|1\rangle_A,|2\rangle_A,|1\rangle_B$ and $|2\rangle_B$ for particles emitted from the source towards the detectors.} \label{HBTferm} \end{figure} The scattering amplitude $s_{A+1}=\sqrt{T_AR_C}e^{i\phi_{AC}}$, where $R_C=1-T_C$ and $\phi_{AC}$ is the scattering phase picked up by the electron when traveling from C to A. Similar relations hold for the other scattering amplitudes. Note that the total phase $\phi=\phi_{AC}-\phi_{AD}+\phi_{BD}-\phi_{BC} $ is, up to a constant term, given by $2\pi \Phi/\Phi_0$ where $\Phi$ is the magnetic flux threading the 2PI and $\Phi_0=h/e$ is the single particle flux quantum. Importantly, the Corbino geometry in Fig. \ref{HBTferm} with unidirectional edge states and reflectionless beam-splitters is topologically equivalent to the 2PI shown in Fig. \ref{system}. \subsection{Two particle Aharonov-Bohm effect} The standard tools for investigating transport properties in mesoscopic electronic systems are average electrical current and current correlation measurements \cite{Buttrev}. A scattering theory calculation \cite{mb86} gives the average current at contact $A\alpha$ \begin{eqnarray} I_{A\alpha} &=&\frac{e}{h}\int dE \left(|s_{A\alpha 1}|^2+|s_{A\alpha 2}|^2\right)\left[f_V(E)-f(E)\right] \nonumber \\ \label{curr} \end{eqnarray} and similarly at $B \beta$. Here $f_V=1/(1+e^{(E-eV)/kT})$ and $f=1/(1+e^{E/kT})$ are the Fermi distributions of the biased, $1,2$ and the grounded, $3,4$ reservoirs respectively. The irreducible zero frequency correlator \begin{equation} S_{A\alpha B\beta}=\int dt \langle \Delta I_{A\alpha}(0)\Delta I_{B \beta}(t)\rangle \end{equation} between currents $I_{A \alpha}(t)=I_{A \alpha}+\Delta I_{A \alpha}(t)$ and $I_{B \beta}(t)=I_{B \beta}+\Delta I_{B \beta}(t)$ \cite{mb92} becomes \begin{equation} S_{A\alpha B\beta}=\frac{e^2}{h}\int dE \left(|s_{A\alpha 1}^*s_{B\beta 1}+s_{A\alpha 2}^*s_{B\beta 2}|^2\right)\left[f_V(E)-f(E)\right]^2 \label{noise} \end{equation} These expressions are valid for arbitrary temperature but for the rest of the discussion in this section we only consider the zero temperature case. In particular, for the simplest possible case, with all beam-splitters semitransparent and energy-independent scattering amplitudes, we have \begin{eqnarray} I_{A\alpha}=I_{B\beta}=\frac{e^2V}{2h}, \hspace{0.5cm} S_{A\alpha B\beta}=\frac{e^3V}{4h}\left[1+\alpha\beta\cos\phi\right] \label{currnoise} \end{eqnarray} While the average current is a function of QPC-transparencies only, the current cross correlator depends also on the phase $\phi$. Since this phase is proportional to the magnetic flux $\Phi$ threading the 2PI, we call this a two-particle Aharonov-Bohm (AB) effect. Interestingly, we can directly relate the coincident detection probability in Eq. (\ref{jdpt}) at times $\tau\ll\tau_C$ with the currents in Eq. (\ref{curr}) and the zero frequency noise correlators in Eq.
(\ref{noise}) as $[g(0)=1]$ \begin{eqnarray} P_{A\alpha B\beta}(0) &\propto& |s_{A\alpha 1}s_{B\beta 1}^*+s_{A\alpha 2} s_{B\beta 2}^*|^2+\left(|s_{A\alpha 1}|^2+|s_{A\alpha 2}|^2\right)\left(|s_{B\beta 1}|^2+|s_{B\beta 2}|^2\right) \nonumber \\ & \propto& S_{A\alpha B\beta}+2\tau_CI_{A\alpha}I_{B \beta} \label{jdptot} \end{eqnarray} This is a direct consequence of fermionic anti-bunching, leading to a filled stream of electrons emitted from the source reservoirs and hence making long time observables an effective average of many individual, short time, single and two-particle events. \subsection{Entanglement} The connection between this two-particle Aharonov-Bohm effect and entanglement can be seen by considering the many-body ground state $|\Psi_{in}\rangle$ of the electrons injected into the 2PI. Electrons at different energies are independent and the many-body state at zero temperature is thus a product state in energy \begin{equation} |\Psi_{in}\rangle=\prod_{0\leq E \leq eV} a_1^{\dagger}(E)a_2^{\dagger}(E)|\bar 0\rangle \end{equation} where $|\bar 0\rangle$ is the filled Fermi sea and $a^{\dagger}_1(E)$ creates an electron at energy $E$, incident from reservoir $1$. Adopting the formalism of Ref. \cite{Been03}, we first define the injected state at energy $E$, $|\Psi_{in}(E)\rangle=a_1^{\dagger}(E)a_2^{\dagger}(E)|\bar 0\rangle$. We have the scattering relations at the two source beam splitters, suppressing energy notation, \begin{equation} \left(\begin{array}{c} b_{A1} \\ b_{B1} \end{array} \right) =\left(\begin{array}{cc} r_C & t_C' \\ t_C & r_C' \end{array} \right) \left(\begin{array}{c} a_{1} \\ a_{3} \end{array} \right), \hspace{0.5cm} \left(\begin{array}{c} b_{A2} \\ b_{B2} \end{array} \right) =\left(\begin{array}{cc} r_D & t_D' \\ t_D & r_D' \end{array} \right) \left(\begin{array}{c} a_{2} \\ a_{4} \end{array} \right) \label{scatrel} \end{equation} for incoming (a's) and outgoing (b's) electrons. The primed scattering amplitudes thus describe particles incoming from the unbiased sources. This gives the emitted state for the electrons at energy $E$, after beam-splitters $C,D$ but before impinging on the detector beam splitters $A,B$, as \begin{equation} |\Psi_{out}(E)\rangle=\left(r_Cb_{A1}^{\dagger}+t_Cb_{B1}^{\dagger}\right)\left(r_Db_{A2}^{\dagger}+t_Db_{B2}^{\dagger}\right)|\bar 0\rangle \end{equation} Since we are interested in entanglement between particles in the two, spatially separated detector regions A and B, we project out the part of the wave function with one particle in A and one in B, yielding the normalized wavefunction \begin{equation} |\Psi_{AB}(E)\rangle=\frac{1}{\sqrt N}\left(r_Ct_D b_{A1}^{\dagger}b_{B2}^{\dagger}-r_Dt_Cb_{A2}^{\dagger}b_{B1}^{\dagger}\right)|\bar 0\rangle \end{equation} with $N=|r_Dt_C|^2+|r_Ct_D|^2=R_CT_D+R_DT_C$ the normalization constant. Here we introduced the transmission and reflection probabilities of the source beam splitters as $T_C=|t_C|^2=|t_C'|^2$ and $R_C=|r_C|^2=|r_C'|^2=1-T_C$ for C and similarly for D. To make this more transparent we can, since the two particles live in well separated Hilbert spaces, introduce the Dirac notation $|1\rangle_A\equiv b_{A1}^{\dagger}|\bar 0\rangle$ etc., and write \begin{equation} |\Psi_{AB}(E)\rangle=\frac{1}{\sqrt{N}}\left[r_Ct_D|1\rangle_A|2\rangle_B-t_Cr_D|2\rangle_A|1\rangle_B\right] \label{pureAB} \end{equation} which for semi-transparent beam splitters (and scattering phase $\phi=0$) reduces to the singlet state $|\Psi_s\rangle$ in Eq. (\ref{introsing}).
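This reduction can be cross-checked numerically. A minimal sketch (ours; it uses the Wootters concurrence, recalled in the following paragraph) builds $|\Psi_{AB}\rangle$ in the basis $\{|1\rangle_A|1\rangle_B,|1\rangle_A|2\rangle_B,|2\rangle_A|1\rangle_B,|2\rangle_A|2\rangle_B\}$ and evaluates its entanglement, assuming for simplicity real beam splitter amplitudes:
\begin{verbatim}
# a sketch: concurrence of the pure state |Psi_AB> of Eq. (pureAB)
import numpy as np

def concurrence_pure(TC, TD):
    rC, tC = np.sqrt(1 - TC), np.sqrt(TC)   # real amplitudes for simplicity
    rD, tD = np.sqrt(1 - TD), np.sqrt(TD)
    N = (rC * tD) ** 2 + (rD * tC) ** 2
    psi = np.array([0, rC * tD, -tC * rD, 0]) / np.sqrt(N)
    sysy = np.array([[0, 0, 0, -1], [0, 0, 1, 0],
                     [0, 1, 0, 0], [-1, 0, 0, 0]])   # sigma_y x sigma_y
    return abs(np.conj(psi) @ sysy @ np.conj(psi))

print(concurrence_pure(0.5, 0.5))   # 1.0: the singlet of Eq. (introsing)
print(concurrence_pure(0.9, 0.1))   # 0.2195...: asymmetric beam splitters
\end{verbatim}
Both values agree with the closed-form result $C=2\sqrt{R_CT_CR_DT_D}/N$ derived below.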
The orbital states are shown in Fig. \ref{HBTferm}. The entanglement of the state $|\Psi_{AB}(E)\rangle$ can conveniently be quantified in terms of the concurrence $C$ \cite{Wooters}, which ranges from zero for an unentangled state to unity for a maximally entangled state. Working in the computational basis $\{|1\rangle_A|1\rangle_B,|1\rangle_A|2\rangle_B,|2\rangle_A|1\rangle_B,|2\rangle_A|2\rangle_B\}$, for the pure state $|\Psi_{AB}\rangle$ in Eq. (\ref{pureAB}) we have \begin{equation} C=|\langle \Psi_{AB}|(\sigma_y\otimes \sigma_y)|\Psi_{AB}^*\rangle| \end{equation} where $|\Psi_{AB}^*\rangle$ is $|\Psi_{AB}\rangle$ with all coefficients complex conjugated, $\sigma_y$ a Pauli matrix and $\otimes$ the direct, tensor product. We thus find for $|\Psi_{AB}\rangle$ the concurrence \begin{equation} C=\frac{2}{N}|r_Ct_Cr_Dt_D|=\frac{2}{N}\sqrt{R_CT_CR_DT_D} \end{equation} which reaches unity for semitransparent beam splitters, i.e. for the singlet state in Eq. (\ref{introsing}). Note that the normalization factor $N$ is maximal, equal to $1/2$, for semitransparent beam splitters. This demonstrates that at most only half of the particles injected from $1$ and $2$ lead to split pairs, with one particle emitted towards $A$ and one towards $B$, i.e. a maximal pair emission rate of $1/2$. For a measurement during a time $\tau$ the maximum concurrence production \cite{Beenrev} is thus ${\mathcal N}/2$, where ${\mathcal N}=\tau eV/h$ is the number of pairs injected from $1$ and $2$ in the time $\tau$ and energy interval $0\leq E\leq eV$. \subsection{Dephasing} There are several microscopic mechanisms that can lead to dephasing, typically suppressing the two-particle interference. For low temperatures it is commonly believed that the dominating mechanism for dephasing is electron-electron interactions, but this is still a topic of ongoing research and goes beyond the scope of the present work. Here we consider no specific mechanism but model dephasing qualitatively by coupling one of the interferometer arms to a dephasing voltage probe \cite{probe1,probe2,probe3,probe4}. In this context we point out a recent experiment \cite{probeexp}: a voltage probe was coupled, via a tunable quantum point contact, to one arm of a Mach Zehnder interferometer in the quantum Hall regime, demonstrating controllable dephasing. Considering semitransparent beam splitters, the coupling to the dephasing probe, parametrized by $0 \leq \gamma \leq 1$, leads to a modification of the current correlator in Eq. (\ref{currnoise}) to \cite{Vanessa} \begin{eqnarray} S_{A\alpha B\beta}^{deph}=\frac{e^3V}{4h}\left[1+\gamma\alpha\beta\cos\phi\right] \end{eqnarray} From this expression it is clear that $\gamma$ enters as a decoherence parameter; decreasing $\gamma$ from $1$ to $0$ leads to a suppression of the phase dependence of the current correlator. In the presence of dephasing the emitted state is no longer a pure state; it is instead a mixed state described by a density matrix $\sigma_{AB}$.
Considering zero temperature and working in the computational basis, the result for $S_{A\alpha B\beta}^{deph}$ corresponds to a suppression of the off-diagonal components of $|\Psi_{AB}\rangle \langle \Psi_{AB}| \rightarrow \sigma_{AB}$ as \begin{equation} \sigma_{AB}=\frac{1}{2}\left(\begin{array}{cccc} 0 & 0 & 0 &0 \\ 0 & 1 & -\gamma & 0 \\ 0 & -\gamma & 1 & 0\\ 0 & 0 & 0 &0 \end{array}\right) \end{equation} The concurrence for a mixed state is \cite{Wooters} \begin{equation} C=\mbox{max}\left\{\sqrt{\lambda_1}-\sqrt{\lambda_2}-\sqrt{\lambda_3}-\sqrt{\lambda_4},0\right\} \end{equation} where $\lambda_i$, $i=1,\dots,4$, are the eigenvalues in decreasing order of $\sigma_{AB}(\sigma_y\otimes \sigma_y) \sigma^*_{AB} (\sigma_y \otimes \sigma_y)$. We then have \begin{equation} C=\gamma \end{equation} This means that the entanglement persists even for very strong dephasing \cite{Sam03, Turkbeen,Turksam}. This is a consequence of the 2PI-geometry, where scattering between the arms, i.e. pseudo spin-flip scattering, is prohibited. \subsection{Fermionic two particle interferometer: experiment} Very recently the electronic 2PI was realized experimentally by Neder {\it et al.} \cite{Neder}. In the experiment, in the quantum Hall regime, it was possible to electrically tune the system between two individual Mach Zehnder interferometers and a 2PI, as shown schematically in Fig. \ref{expfig}. \begin{figure}[h] \centerline{\psfig{figure=MZ2.eps,width=8.0cm}} \caption{Fermionic two-particle interferometer implemented in a conductor in the quantum Hall regime in Ref. \cite{Neder}. a) Figure reproduced from Ref. \cite{Neder}. Micrograph of the sample. b) Left: The system in the two Mach Zehnder interferometers configuration. Right: The system in the 2PI configuration.} \label{expfig} \end{figure} The authors first tuned the system to two Mach-Zehnder interferometers and measured the single particle interference in the average current for each interferometer. They found a very large visibility in both interferometers, around $80\%$. They also determined the periods of the single particle AB-oscillations as a function of both the area and the magnetic flux enclosed by the interferometers. Thereafter the system was tuned to a single 2PI. As predicted by theory \cite{Sam04}, no single-particle AB-oscillations in the average current were observed, but the current cross correlations displayed clear two-particle AB-oscillations, with an amplitude $25\%$ of the predicted coherent, zero temperature value. By measuring also the period of the two-particle oscillations as a function of interferometer area and enclosed flux and comparing to the sum of the periods for the two Mach Zehnder interferometers, the two-particle nature of the AB-oscillations could be established beyond doubt. \begin{figure}[h] \centerline{\psfig{figure=2PIosc.eps,width=12.0cm}} \caption{Figure reproduced from Ref. \cite{Neder}. Experimental demonstration of the two-particle AB-effect. Current cross correlation displaying clear oscillations as a function of the effective interferometer area and enclosed magnetic flux.} \label{expfig2} \end{figure} In the experiment semitransparent beam splitters were used, $T_C=T_D=1/2$. For the current cross correlations, theory for finite temperature and dephasing \cite{Vanessa} predicts, for $A+,B+$, \begin{equation} S_{A+B+}=-\frac{e^3V}{4h}H\left[1-\gamma \sin \phi \right].
\label{noise2} \end{equation} The temperature dependence is fully contained in \begin{equation} H=\coth\left(\frac{eV}{2kT}\right)-\frac{2kT}{eV}, \end{equation} varying from unity for $kT\ll eV$ to zero for $kT \gg eV$. The effect of finite temperature is thus to suppress the overall amplitude of the current cross correlation oscillations. In the experiment, the applied bias was $7.8~\mu$V. The electron temperature was estimated from independent auto-correlation measurements to be $10$~mK. This yields the temperature suppression factor $H=0.78$. A direct comparison to Eq. (\ref{noise2}) then gives the oscillation amplitude $H\gamma=0.25$, i.e. $\gamma=0.32$, a substantial dephasing. \section{Finite temperature state} The main aim of this work is to theoretically investigate the effects of finite temperature on the entanglement of the state emitted out from the source, towards the detectors. A prerequisite is to obtain both a qualitative and a quantitative description of the emitted many-body state at finite temperature. We consider the experimentally relevant situation with all source and detector reservoirs kept at the same temperature $T$. Due to the finite temperature, not only the electrons emitted from the source in the energy range $0\leq E \leq eV$ are of interest; we must in principle take into account particles emitted from all reservoirs at all possible energies. However, due to the chiral geometry of the 2PI in Fig. \ref{HBTferm}, particles emitted from the detectors can never scatter back to the detectors, i.e. detector cross talk is topologically prohibited. The particles arriving at the detectors thus all originate from the source reservoirs and we can focus on the many body state emitted by sources $1$ to $4$. We note that in the slightly different geometry realized experimentally \cite{Neder}, there is the possibility for scattering between the detectors. It can however be shown \cite{Samnew} that this does not influence the entanglement of the emitted state. At finite temperature the state injected from the sources is mixed and described by a density matrix \cite{Beenrev} \begin{eqnarray} \rho_{in}&=&\prod_{E}\rho_{in}(E) \nonumber \\ \rho_{in}(E)&=&\prod_{\kappa=1}^4\left[[1-f_{\kappa}(E)]|0\rangle\langle 0|+f_{\kappa}(E)a^{\dagger}_{\kappa}(E)|0\rangle\langle 0|a_{\kappa}(E)\right] \label{rhoin} \end{eqnarray} where $f_{\kappa}(E)$ is the Fermi distribution of source reservoir $\kappa=1-4$. The outgoing state is then obtained by inserting the scattering relations of Eq. (\ref{scatrel}) into Eq. (\ref{rhoin}). One can see from Eq. (\ref{rhoin}) that the effect of finite temperature is to give rise to states with 0 to 4 particles emitted at a given energy. For the terms of interest, i.e. with at least one particle at both A and B, there is at finite temperature the possibility of, e.g., two particles at A and one at B. These terms are of central importance in the discussion below. \section{Projected two-particle density matrix} A theory for entanglement production in non-interacting \cite{Been03} conductors at finite temperature was presented by Beenakker \cite{Beenrev} and along similar lines in closed condensed matter systems by Dowling, Doherty and Wiseman \cite{Wiseman}. At a given energy, only the component of the emitted many-body state with one particle in detector region A and one in B has nonzero entanglement. Moreover, as emphasized in Ref.
\cite{Wiseman}, only this term describes two particles which each live in a well defined two-dimensional Hilbert space, at A and B respectively, i.e. two coupled orbital qubits. We point out that this definition does not take into account occupation-number, or Fock-space, entanglement. The first step is thus to project out the two-particle component from the many-body wave function, which is accomplished by the projection operator \begin{equation} \Pi=\Pi_A\otimes\Pi_B, \hspace{0.5cm} \Pi_{\alpha}=n_{\alpha 1}(1-n_{\alpha 2})+n_{\alpha 2}(1-n_{\alpha 1}) \end{equation} where $n_{Aj}=b_{Aj}^{\dagger}b_{Aj}$ with $j=1,2$ etc. is the number operator (suppressing energy notation). This yields the projected density matrix \begin{equation} \rho_{p}(E)=\Pi\rho(E)\Pi \end{equation} The elements of the density matrix $\rho_{p}(E)$ are conveniently calculated from the relation \cite{Wiseman} \begin{equation} [\rho_{p}(E)]_{ij,kl}=\langle \Pi b_{Ai}^{\dagger}b_{Bj}^{\dagger}b_{Bk}b_{Al} \Pi \rangle \end{equation} where, for any operator $X$, $\langle X \rangle=\mbox{tr}[X \rho]$ is the standard quantum-statistical average. Some algebra gives the projected density matrix, formally equivalent to the density matrix calculated in \cite{Beenrev}, Eqs. (B9)--(B13), \begin{equation} \rho_p(E)=(1-f)^2f_V^2\left(\begin{array}{cccc} \chi & 0 & 0 & 0 \\ 0 & c_{12}^{12} & c_{12}^{21} & 0 \\ 0 & c_{21}^{12} & c_{21}^{21} & 0 \\ 0 & 0& 0 & \chi \end{array} \right) \label{projected} \end{equation} where $\chi=e^{-eV/kT}$ and $f$ and $f_V$ are the Fermi distribution functions of the grounded and biased source reservoirs respectively. The coefficients are \begin{eqnarray} c_{12}^{12}&=&(R_C[1-\chi]+\chi)(T_D[1-\chi]+\chi), \nonumber \\ c_{21}^{21}&=&(T_C[1-\chi]+\chi)(R_D[1-\chi]+\chi), \nonumber \\ c_{12}^{21}&=&(c_{21}^{12})^*=-\gamma \sqrt{R_CT_CR_DT_D}e^{i\phi_0}(1-\chi)^2 \end{eqnarray} with $\phi_0$ an overall scattering phase of the beam splitters C and D. Thus, only the prefactor $f_V^2(1-f)^2$ depends on energy. As for the zero temperature case we have introduced dephasing as a suppression of the off-diagonal components of the density matrix. It follows from Eq. (\ref{projected}) that finite temperature leads to \\ i) an overall modification of the energy-dependent probability for two-particle emission via the prefactor $(1-f)^2f_V^2$. \\ ii) a suppression $\sim (1-\chi)^2$ of the off-diagonal components, equivalent to the effect of dephasing. \\ iii) a finite amplitude for the diagonal density matrix elements $[\rho_p(E)]_{11,11}$ and $[\rho_p(E)]_{22,22}$, i.e. for two particles being emitted from either sources 1,3 or 2,4. \\ Additional insight follows from writing the projected density matrix as \begin{equation} \rho_p(E)=(1-f)^2f_V^2\left[\chi \rho_p^{diag}+(1-\chi)^2\rho^{int}\right] \label{projected2} \end{equation} where the diagonal density matrix \begin{equation} \rho_p^{diag}=\chi \hat 1\otimes \hat 1+(1-\chi)[\rho_A\otimes \hat 1+\hat 1\otimes\rho_B] \end{equation} with the zero temperature single particle density matrices $\rho_A=R_C|1\rangle\langle 1|+R_D|2\rangle\langle 2|$ and $\rho_B=T_C|1\rangle\langle 1|+T_D|2\rangle\langle 2|$. The density matrix \begin{eqnarray} \rho^{int}&=&R_CT_D|12\rangle\langle 12|+R_DT_C|21\rangle\langle 21| \nonumber \\ &-&\gamma \sqrt{T_CR_CT_DR_D}[e^{i\phi_0}|12\rangle\langle 21|+e^{-i\phi_0}|21\rangle\langle 12|] \end{eqnarray} results from the two-particle interference.
Here we used the shorthand notation $|12\rangle\equiv|1\rangle_{A}|2\rangle_{B}$ with $\langle 21|=(|12\rangle)^{\dagger}$ etc. Note that the effect of decoherence enters as a suppression of the two-particle interference $|\Psi^{int}\rangle\langle \Psi^{int}| \rightarrow \rho^{int}$, where $|\Psi^{int}\rangle=\sqrt{R_CT_D}|12\rangle-e^{i\phi_0}\sqrt{T_CR_D}|21\rangle$. Writing $\rho_p(E)$ in the form in Eq. (\ref{projected2}) shows that, leaving the energy dependent prefactor $f_V^2(1-f)^2$ aside, the effects of finite temperature can be viewed as follows: First, the amplitude of the two-particle interference component $\rho^{int}$ is suppressed with increasing temperature as $\sim (1-\chi)^2$. Second, the density matrix acquires a purely diagonal component $\rho_p^{diag}$ with an amplitude $\sim \chi$ (note that $\mbox{tr}[\rho_p^{diag}]=4$, independent of temperature). For the entanglement, following \cite{Beenrev} we introduce $\sigma_p$ and $w_p(E)$, the normalized density matrix and the emission probability of the emitted two-particle state respectively, defined from \begin{eqnarray} \rho_p(E)&=&w_p(E)\sigma_p, \nonumber \\ w_p(E)&=&\mbox{tr}[\rho_p(E)]=(1-f)^2f_V^2[(R_CT_D+T_CR_D)(1-\chi)^2+4\chi] \end{eqnarray} where we note that $\sigma_p$ is independent of energy. The emission probability $w_p(E)$ is thus the probability, per unit energy, that the (normalized) two-particle state $\sigma_p$ is emitted. The concurrence production per unit energy is then \begin{eqnarray} C_p(E)&\equiv& w_p(E)C(\sigma_p)=\frac{(1-\chi)^2f_V^2(1-f)^2}{2} \nonumber \\ &\times& \mbox{max}\left\{4\gamma\sqrt{R_CT_CR_DT_D}-\frac{1}{\sinh^2(eV/2kT)},0\right\} \label{conceproj} \end{eqnarray} and the total entanglement production during a time $\tau$, $C_p=(\tau/h) \int dE C_p(E)$, is then (${\mathcal N}=\tau eV/h$) \begin{equation} C_p=\frac{{\mathcal N}H}{2}\mbox{max}\left\{4\gamma\sqrt{T_CR_CT_DR_D}-\frac{1}{\sinh^2(eV/2kT)},0\right\}. \label{concproj} \end{equation} We denote this the projected entanglement. As shown in Fig. \ref{fig2}, $C_p$ decreases monotonically as a function of $T$. It reaches zero at a critical temperature $T_c^p$ given by \begin{equation} kT_c^p=eV \left[\ln \left(\frac{\sqrt{1+4\gamma\sqrt{R_CT_CR_DT_D}}+1}{\sqrt{1+4\gamma \sqrt{R_CT_CR_DT_D}}-1}\right)\right]^{-1} \label{critTc} \end{equation} \begin{figure}[h] \centerline{\psfig{figure=fig2.eps,width=12.0cm}} \caption{a) Entanglement production $C_p/{\mathcal N}$ (blue, transparent) and $C_r/{\mathcal N}$ (green, opaque) as functions of temperature $kT/eV$ and coherence $\gamma$ for the semi-transparent 2PI. b) Parameter $Q$ as a function of $kT/eV$ (blue line). Values $0.25,10^{-1},10^{-2},10^{-3},10^{-4}$ shown (gray lines). Figure reproduced from Ref. \cite{Sam09}.} \label{fig2} \end{figure} For semi-transparent beam-splitters and zero dephasing, $\gamma=1$, the entanglement thus survives up to \cite{Beenrev} $kT_c^p=0.57\,eV$. Inserting the parameter values from the experiment, we get $C_p \approx 0.1{\mathcal N}$ and $C(\sigma_p)\approx 0.3$, i.e. {\it the state emitted by the 2PI is clearly entangled}. Importantly, the effect of finite temperature is essentially negligible; the reduction in entanglement comes from decoherence. The entanglement of the projected density matrix is the entanglement one could access, had one been able to perform arbitrary local operations and classical communication between A and B, i.e. fully energy and particle resolved measurements.
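These numbers are straightforward to reproduce from Eqs. (\ref{concproj}) and (\ref{critTc}); a short sketch (ours), for the semitransparent 2PI with the parameters of the experiment:
\begin{verbatim}
# a sketch: projected entanglement for V = 7.8 uV, T = 10 mK, gamma = 0.32
import numpy as np

kB = 8.617333262e-5                  # Boltzmann constant in eV/K
V, T, gamma = 7.8e-6, 10e-3, 0.32    # bias (V), temperature (K), coherence
x = V / (2 * kB * T)                 # eV/2kT ~ 4.5
H = 1 / np.tanh(x) - 1 / x           # coth(eV/2kT) - 2kT/eV ~ 0.78
q = 4 * gamma * np.sqrt(0.5 ** 4)    # 4*gamma*sqrt(RC TC RD TD) = gamma here
Cp = 0.5 * H * max(q - 1 / np.sinh(x) ** 2, 0)   # C_p / N ~ 0.12
chi = np.exp(-V / (kB * T))
Csigma = ((1 - chi) ** 2 * max(q - 1 / np.sinh(x) ** 2, 0) / 2
          / (0.5 * (1 - chi) ** 2 + 4 * chi))    # C(sigma_p) ~ 0.32
kTc = V / np.log((np.sqrt(1 + q) + 1) / (np.sqrt(1 + q) - 1))
print(H, Cp, Csigma, kTc / V)        # kT_c^p/eV -> 0.57 for gamma = 1
\end{verbatim}
The thermal correction term $1/\sinh^2(eV/2kT)\approx 5\times 10^{-4}$ is indeed negligible compared to $\gamma$, confirming that the entanglement reduction is dominated by decoherence.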
Under realistic conditions such fully resolved measurements are not possible; the accessible physical quantities are currents and current cross correlators. Is it possible to determine the projected entanglement with such measurements? The answer to this question is no, for two main reasons: \\ i) As discussed above, at nonzero temperatures not only the biased source reservoirs emit particles but also the grounded ones. As a consequence, there is a finite amplitude for emitted states with two particles at A and/or at B. These unentangled states contribute to currents and current correlators, which results in a detectable state with suppressed entanglement. \\ ii) The currents and current correlators provide information on the energy integrated properties of the many-body state, not on the emitted state at each energy. This lack of energy-resolved information leads to a further suppression of the detectable entanglement. \\ Clearly, these effects of the thermally excited Fermi sea constitute generic problems when trying to detect entanglement in mesoscopic conductors. As a remedy for these finite temperature read-out problems it was suggested to work with detectors at very low temperatures \cite{Beenrev}. Another idea was recently presented by Hannes and Titov \cite{Titov}. They investigated detection of entanglement at finite temperatures via a Bell inequality and proposed to introduce energy filters at the drains. However, both schemes \cite{Beenrev,Titov} would lead to additional experimental complications in systems which are already experimentally very challenging. Our idea is instead to investigate what information about the projected entanglement can actually be deduced from current and current correlation measurements. In this context we also mention the recent proposal by Kindermann \cite{Kindermann} to produce and detect entangled electron-hole pairs in graphene via a Bell inequality formulated in terms of the transport part of the current cross correlators \cite{mb92}, i.e. by subtracting the thermal equilibrium correlators from the finite bias ones. In our work \cite{Sam09} we proposed a similar scheme for a general mesoscopic conductor. However, as was pointed out in \cite{Sam09} and is further discussed below, it is important to perform a detailed comparison of the projected entanglement and the entanglement obtained from current cross correlation measurements. Without such a comparison, there is the possibility that one concludes, based on correlation measurements, finite entanglement where there is none, i.e. where the projected entanglement is zero. \section{Reduced two-particle density matrix} We first consider the expressions for the currents and zero-frequency current cross correlators at contacts $A+$ and $B+$ at finite temperatures. We have \begin{eqnarray} I_{A+}&=&\frac{e}{h}\int dE \left[\langle n_{A+}\rangle-f\right], \hspace{0.5cm} I_{B+}=\frac{e}{h}\int dE \left[\langle n_{B+}\rangle-f\right], \nonumber \\ S_{A+B+}&=&\frac{e^2}{h}\int dE \langle \Delta n_{A+} \Delta n_{B+}\rangle \label{currnoisered} \end{eqnarray} where $\langle \Delta n_{A+} \Delta n_{B+} \rangle=\langle n_{A+}n_{B+}\rangle-\langle n_{A+}\rangle \langle n_{B+} \rangle$ is the irreducible correlator. As discussed above, the many-body state incident on the detectors originates from the sources.
It is the properties of this state that determine the observables $\langle n_{A+}\rangle,\langle n_{B+}\rangle$ and $\langle \Delta n_{A+} \Delta n_{B+}\rangle$ and thus establish a connection between the emitted state and the physical quantities accessible in a measurement. \subsection{Energy resolved reduced density matrix} In order to better understand the readout problem described above, we first consider the energy resolved properties of the emitted state. If one had access to energy filters, as proposed in \cite{Titov}, or were working at zero temperature, it would be possible, by combining currents and current cross correlations, to get direct access to the energy resolved quantities $\langle n_{A+}\rangle,\langle n_{B+}\rangle$ and $\langle \Delta n_{A+} \Delta n_{B+}\rangle$. As is discussed below, by a suitable set of measurements with different settings of the beam splitters at A and B one could then tomographically reconstruct the (unnormalized) density matrix of the state emitted from the source beam splitters C and D, $\rho_r^E$, with elements given by \begin{equation} [\rho_r^E]_{ij,kl}=\langle b_{Ai}^{\dagger}b_{Bj}^{\dagger}b_{Bk}b_{Al} \rangle \end{equation} We denote $\rho_r^E$ the energy resolved reduced density matrix. Comparing $\rho_r^E$ with the expression for the projected density matrix in Eq. (\ref{projected}), we see that it differs only by the projection operators. Consequently, the reduced density matrix also contains the contributions from processes with more than one particle at A and/or at B. After some algebra we find the density matrix \begin{equation} \rho_r^E=(1-f)^2f_V^2\left(\begin{array}{cccc} \tilde\chi & 0 & 0 & 0 \\ 0 & \tilde c_{12}^{12} & c_{12}^{21} & 0 \\ 0 & c_{21}^{12} & \tilde c_{21}^{21} & 0 \\ 0 & 0& 0 & \tilde \chi \end{array} \right) \label{redendens} \end{equation} where we introduced $\tilde \chi=\chi/[(1-f_V)(1-f)]$ and the coefficients \begin{eqnarray} \tilde c_{12}^{12}&=&(R_C[1-\chi]+\chi)(T_D[1-\chi]+\tilde \chi), \nonumber \\ \tilde c_{21}^{21}&=&(T_C[1-\chi]+\chi)(R_D[1-\chi]+\tilde \chi). \end{eqnarray} A comparison to the projected density matrix in Eq. (\ref{projected}) shows that $\rho_r^E$ differs formally from $\rho_p(E)$ only by the change $\chi \rightarrow \tilde \chi$ at a number of places. This has the consequence that the normalized density matrix $\sigma_r^E=\rho_r^E/w_r^E$, with $w_r^E=\mbox{tr}[\rho_r^E]$, depends on energy. That is, in contrast to $\rho_p$, both the normalized, emitted two-particle state and the emission probability depend on energy. Qualitatively, as discussed above, the difference between $\rho_r^E$ and $\rho_p(E)$ arises from the fact that states with more than one particle at A and/or B also contribute to $\rho_r^E$ but not to $\rho_p(E)$. Writing $\rho_r^E$ in a form similar to Eq. (\ref{projected2}), one sees that these three- and four-particle states contribute only to the diagonal part of $\rho_r^E$. Turning to the entanglement, the concurrence production $C_r^E=w_r^EC(\sigma_r^E)$ at energy $E$ is then \begin{eqnarray} C_r^E&=&\frac{(1-\chi)^2f_V^2(1-f)^2}{2} \nonumber \\ &\times& \mbox{max}\left\{4\gamma\sqrt{R_CT_CR_DT_D}-\frac{1}{\sinh^{2}(eV/2kT)}\frac{1}{(1-f_V)(1-f)},0\right\} \label{concered} \end{eqnarray} From the expression for the concurrence it becomes clear that the separable three- and four-particle states are detrimental to the entanglement.
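The comparison between Eqs. (\ref{conceproj}) and (\ref{concered}) is easily made quantitative. The short Python sketch below evaluates both concurrence production rates on an energy grid and locates the onset energy $E_0$ numerically; the standard Fermi functions for the grounded and biased reservoirs are assumed, and the parameter values are for illustration only:
\begin{verbatim}
import numpy as np

TC = TD = 0.5; gam = 1.0               # semi-transparent 2PI, no dephasing
t = 0.2                                # kT/eV; energies in units of eV
RC, RD = 1.0 - TC, 1.0 - TD

E   = np.linspace(-2.0, 4.0, 12001)
f   = 1.0/(np.exp(E/t) + 1.0)          # grounded source reservoir
fV  = 1.0/(np.exp((E - 1.0)/t) + 1.0)  # biased source reservoir
chi = np.exp(-1.0/t)
pref = 0.5*(1.0 - chi)**2*fV**2*(1.0 - f)**2
s2   = np.sinh(0.5/t)**2

Cp_E = pref*np.maximum(4*gam*np.sqrt(RC*TC*RD*TD) - 1.0/s2, 0.0)
Cr_E = pref*np.maximum(4*gam*np.sqrt(RC*TC*RD*TD)
                       - 1.0/(s2*(1.0 - fV)*(1.0 - f)), 0.0)

onset = E[Cr_E > 0]
print(onset[0] if onset.size else None)  # E_0, cf. Eq. (encond)
\end{verbatim}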
Hence, finite temperature leads to a stronger suppression of the entanglement of the reduced, energy resolved density matrix than of the projected one. This is illustrated in Fig. \ref{redeconc}, where the corresponding concurrences are plotted for semi-transparent beam-splitters and different values of $kT/eV$. \begin{figure}[h] \centerline{\psfig{figure=conctempvolt.eps,width=8.0cm}} \caption{A comparison of the concurrence production rates $C_r^E$ (dashed) and $C_p(E)$ (solid), as a function of energy for $T_C=T_D=1/2$ and different ratios $eV/kT$.} \label{redeconc} \end{figure} As is clear from the figure, there is an energy $E_0$ above which the concurrence is finite (up to $E \rightarrow \infty$). The energy $E_0$ is given by the condition $C_r^E(E_0)=0$, as \begin{equation} E_0=kT \left(\ln[2]-\ln \left[(1-\chi)\sqrt{1+4\sqrt{R_CT_CR_DT_D}}-(1+\chi)\right]\right) \label{encond} \end{equation} What is moreover clear from Fig. \ref{redeconc} is that, for all energies, $C_r^E(E)<C_p(E)$. The difference is obvious for energies $E<E_0$, where $C_r^E=0$. At these energies the probability for emission of separable three- and four-particle states is thus large enough to completely suppress the entanglement of the reduced density matrix. Importantly, the relation $C_r^E(E)<C_p(E)$ holds for all settings of the beam splitters $T_C$ and $T_D$, as is clear by comparing Eqs. (\ref{conceproj}) and (\ref{concered}). The reason for this is that the reduced density matrix contains contributions from all individual particle density matrices $\sigma_{ij}$ with $i,j\geq 1$ (e.g. $\sigma_{12}$ describes one particle at A and two at B) while the projected density matrix only depends on $\sigma_{11}$. Since all $\sigma_{12},\sigma_{21},\sigma_{22}$ are separable and the concurrence is a convex quantity, i.e. $C(p_1\sigma_1+p_2\sigma_2)\leq p_1 C(\sigma_1)+p_2 C(\sigma_2)$ for $p_1+p_2=1$, the concurrence $C_r^E$ is always smaller than $C_p(E)$. We point out that this carries over to the total concurrence production found by integrating Eq. (\ref{concered}) over energy (result not presented here). It follows from Eq. (\ref{encond}) that at a critical temperature $T_c^{rE}$ the energy $E_0 \rightarrow \infty$, i.e. the entanglement is zero at any energy. Interestingly, this happens at the same temperature as for the projected concurrence, Eq. (\ref{critTc}). \subsection{Finite temperature reduced density matrix} Importantly, at finite temperature, without any energy filters, we do not have access to the energy resolved quantities discussed above, but only to the total currents and current correlators measured at contacts $A\alpha,B\beta$. In Ref. \cite{tomo} it was discussed how to tomographically reconstruct the reduced density matrix at zero temperature using currents and current correlations. Extending this scheme to nonzero temperatures, it is natural to define the finite temperature reduced density matrix $\rho_r$ via the relation \begin{eqnarray} &&\frac{I_{A\alpha}I_{B\beta}}{(Ve^2/h)^2}+\frac{S_{A\alpha B\beta}}{2Ve^3/h}= \mbox{tr}\left\{\left[I_{A\alpha}^O\otimes I_{B\beta}^O \right]\rho_r\right\}. \label{noisecurrrel} \end{eqnarray} We emphasize that $\rho_r$ is reconstructed from observables already integrated over energy and hence does not depend on energy. Also note that $\rho_r$ is not given by integrating $\rho_r^E$ over energy; in fact, the difference between the two density matrices is further discussed below.
In Eq. (\ref{noisecurrrel}) the orbital current operators in the local basis $\{|1 \rangle, |2\rangle \}$, including the rotations at the detector splitters, are $I_{A\alpha}^O=(\hat 1+\alpha {\mathbf n}_A\cdot \hat \sigma)/2$ and $I_{B\beta}^O=(\hat 1+\beta {\mathbf n}_B\cdot \hat \sigma)/2$, with ${\mathbf n}_A\cdot \hat \sigma=S_A\sigma_zS_A^{\dagger}$ and ${\mathbf n}_B\cdot \hat \sigma=S_B\sigma_zS_B^{\dagger}$, where $\hat \sigma=[\sigma_x,\sigma_y,\sigma_z]$ is a vector of Pauli matrices and $S_A~(S_B)$ is the scattering matrix of the beam splitter at A (B). Making use of the results for finite temperature currents and current correlations in \cite{Vanessa}, we obtain the reduced density matrix \begin{equation} \rho_r=\left(\begin{array}{cccc} R_CT_C(1-H) & 0 & 0 & 0 \\ 0 & R_CT_D & d^{12}_{21} & 0 \\ 0 & d_{12}^{21} & R_DT_C & 0 \\ 0 & 0& 0 & R_DT_D(1-H) \end{array} \right) \label{reduced} \end{equation} where $d^{12}_{21}=(d_{12}^{21})^*=-H\gamma \sqrt{R_CT_CR_DT_D} e^{i\phi_0}$. Comparing $\rho_r$ to both $\rho_p(E)$ and $\rho_r^E$ in Eqs. (\ref{projected}) and (\ref{redendens}), it is clear that the qualitative effect of finite temperature is the same for the reduced density matrix. The quantitative effects are, however, different. First, the temperature dependence enters via $H$ rather than via $\chi$, giving a much stronger effect of finite temperature. This is the effect of having access to energy integrated quantities only. Second, in the expression for the average current in Eq. (\ref{currnoisered}), one subtracts $f$ in the integrand, which arises due to particles flowing out of the detector reservoirs. This yields smaller diagonal terms, to be further discussed below. It is illuminating, just as for $\rho_p(E)$, to write $\rho_r$ as a sum of a diagonal and an interference part, \begin{equation} \rho_r=(1-H)[\rho_A\otimes\rho_B]+H\rho^{int}. \label{reduced2} \end{equation} From this we see that the effect of increasing temperature is to monotonically increase the amplitude of the separable product state $\rho_A \otimes \rho_B$, while the amplitude of the interference component is suppressed. We can thus conclude the following properties for all three density matrices $\rho_p(E), \rho_r^E$ and $\rho_r$: \\ i) At zero temperature they all reduce to the same expression, $\rho^{int}$. \\ ii) Increasing temperature leads to a monotonic suppression of the two-particle interference component. \\ iii) Finite temperature introduces an additional diagonal component, different for the three density matrices. Turning to entanglement, introducing the normalized reduced density matrix $\sigma_r$ we can write \begin{eqnarray} \rho_r&=&w_r\sigma_r \nonumber \\ w_r&=&\mbox{tr}[\rho_r]=[R_CT_C+R_DT_D](1-H)+R_CT_D+R_DT_C. \end{eqnarray} We then define the total entanglement production during a time $\tau$ as $C_{r}\equiv {\mathcal N} w_rC(\sigma_r)$. It reads \begin{equation} C_r=2{\mathcal N}\mbox{max}\{\sqrt{T_CR_CT_DR_D}[H(1+\gamma)-1],0\} \label{concred} \end{equation} here called the reduced entanglement. Like $C_p$, $C_r$ decreases monotonically with increasing $T$. It reaches zero at a critical temperature $T_c^r$ given by the relation \begin{equation} H(T_c^r)=\frac{1}{1+\gamma} \end{equation} For perfect coherence, $\gamma=1$, we have $kT_c^r=0.28eV$, close to one half of $kT_c^p$. Importantly, in contrast to $T_c^p$, $T_c^r$ is independent of the setting of the beam splitters.
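The concurrence entering Eq. (\ref{concred}) can be checked directly from the matrix in Eq. (\ref{reduced}) with Wootters' formula. A minimal numerical sketch is given below; the thermal factor $H$ introduced earlier in the text is supplied here simply as a number in $[0,1]$:
\begin{verbatim}
import numpy as np

def concurrence(rho):
    # Wootters' concurrence of a (normalized) two-qubit density matrix
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def rho_r(TC, TD, gam, H, phi0=0.0):
    # reduced density matrix of Eq. (reduced), basis {11, 12, 21, 22}
    RC, RD = 1.0 - TC, 1.0 - TD
    d = -H*gam*np.sqrt(RC*TC*RD*TD)*np.exp(1j*phi0)
    return np.array([[RC*TC*(1 - H), 0, 0, 0],
                     [0, RC*TD, d, 0],
                     [0, d.conjugate(), RD*TC, 0],
                     [0, 0, 0, RD*TD*(1 - H)]], dtype=complex)

rho = rho_r(0.5, 0.5, 1.0, H=0.8)
w_r = np.trace(rho).real
print(w_r*concurrence(rho/w_r))   # = 0.3, agreeing with Eq. (concred)
\end{verbatim}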
By comparing the expressions for the two quantities of main interest, the projected and reduced concurrences, $C_p$ in Eq. (\ref{concproj}) and $C_r$ in Eq. (\ref{concred}), we can conclude the following: \\ i) For both $C_p$ and $C_r$ the origin of the entanglement is the two-particle interference; in fact the component $\rho^{int}$ gives rise to the positive term $2{\mathcal N}H\gamma\sqrt{T_CR_CT_DR_D}$, identical for $C_p$ and $C_r$. \\ ii) For both $C_p$ and $C_r$ finite temperature introduces a negative term, $-{\mathcal N}H/[2\sinh^2(eV/2kT)]$ for $C_p$ and $-2{\mathcal N}(1-H)\sqrt{T_CR_CT_DR_D}$ for $C_r$, which leads to a suppression of the concurrence. These terms arise from the separable, diagonal components of the corresponding density matrices. \section{Entanglement bound} Comparing Eqs. (\ref{concproj}) and (\ref{concred}) quantitatively, we find that $C_p\geq C_r$ for \begin{equation} Q(T)=\frac{H}{4(1-H)\sinh^2(eV/2kT)}\leq \sqrt{T_CR_CT_DR_D}, \label{bondcond} \end{equation} independent of $\gamma$ (see Fig. \ref{fig2}). Consequently, for beam splitters away from the strongly asymmetrical (tunneling) limit, {\it the reduced entanglement constitutes a lower bound for the projected entanglement}. In the tunneling limit, however, the reduced entanglement is larger than the projected one. Thus, in contrast to the energy-resolved reduced density matrix $\rho_r^E$, $\rho_r$ can be more entangled than $\rho_p$. The origin of this difference is, as pointed out above, that when calculating (and measuring) $\rho_r$ the average currents flowing out from the detector reservoirs are subtracted, yielding a smaller diagonal component and hence a larger entanglement $C_r$. Importantly, since the transparencies $T_C$ and $T_D$ can be controlled and measured via average currents in the experiment, it is always possible to verify independently that the condition in Eq. (\ref{bondcond}) is satisfied. Turning to the experiment \cite{Neder}, for the relevant parameters we have $Q(T)\approx 4\times 10^{-4} \ll \sqrt{R_CT_CR_DT_D}\approx 0.25$, showing the validity of the bound. However, $C_r\approx 0.01{\mathcal N}$, and based on the measurement \cite{Neder} no conclusive statement can be made about $C_r$, and hence none about $C_p$. In order to detect entanglement via measurements of currents and current correlations, one thus needs to work at even lower temperatures and further reduce the dephasing in the experiment. A more detailed understanding of this finite temperature readout problem can be obtained by comparing the properties of $\sigma_p$ and $\sigma_r$. For perfect coherence $\gamma=1$ and identical beam splitters $T_C=T_D={\mathcal T}=1-{\mathcal R}$ one can (up to a local phase rotation) write \begin{equation} \sigma_{p/r}=\frac{1}{4}\xi_{p/r}\hat 1\otimes \hat 1+(1-\xi_{p/r})|\Psi_s\rangle\langle \Psi_s| \end{equation} a Werner state \cite{Werner}, with singlet weight [$|\Psi_s\rangle$ is the singlet in Eq. (\ref{introsing})] \begin{equation} 1-\xi_p=\frac{2\mathcal{RT}\sinh^2(2eV/kT)}{1+2\mathcal{RT}\sinh^2(2eV/kT)}, \hspace{0.5cm} 1-\xi_r=\frac{H}{2-H} \label{singweights} \end{equation} As $kT/eV$ increases from zero, $\xi_p\approx 2e^{-4eV/kT}/(\mathcal{RT})$ remains exponentially small while $\xi_r \approx kT/eV$ grows linearly. These qualitatively different behaviors, clearly illustrated in Fig. \ref{fig2}, are a striking signature of how a small $kT/eV$, having negligible effect on $C(\sigma_p)$, leads to a large suppression of $C(\sigma_r)$.
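This contrast is easy to reproduce numerically. The sketch below evaluates the two singlet weights of Eq. (\ref{singweights}); since the closed form of the thermal factor $H(T)$ is not repeated here, it enters simply as a number taken from its definition earlier in the text:
\begin{verbatim}
import numpy as np

def xi_p(T, t):
    # Werner weight of sigma_p, Eq. (singweights); t = kT/eV, R = 1 - T
    s = 2.0*(1.0 - T)*T*np.sinh(2.0/t)**2
    return 1.0/(1.0 + s)              # exponentially small for small t

def xi_r(H):
    # Werner weight of sigma_r from 1 - xi_r = H/(2 - H)
    return 1.0 - H/(2.0 - H)          # grows roughly linearly with kT/eV

print(xi_p(0.5, 0.1))                 # ~3e-17 at kT/eV = 0.1
print(xi_r(0.9))                      # already ~0.18 for H = 0.9
\end{verbatim}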
From Eqs. (\ref{concproj}) and (\ref{concred}) a counter-intuitive result also follows: {\it finite amplitude of the AB-oscillations is no guarantee for finite two-particle entanglement}. This is apparent for $\sigma_r$ in the limit of no decoherence $\gamma=1$ and identical beam splitters $T_C=T_D$, since a separable Werner state, $\xi_r>2/3$, can be decomposed \cite{decomp} as \begin{equation} \sigma_r=\frac{1}{4}\sum_{n=1}^4|\phi^A_n\rangle\langle\phi_n^A|\otimes|\phi_n^B\rangle\langle\phi_n^B| \label{sepstate} \end{equation} with the normalized states at $A$ and $B$ \begin{eqnarray} |\phi^{A/B}_n\rangle&=&\cos \theta^{A/B}_n|1\rangle+e^{i\pi[1-2n]/4}\sin \theta^{A/B}_n|2\rangle, \nonumber \\ \theta_1^{A/B}&=&\theta_3^{A/B}=\mbox{atan}[y^{A/B}], \hspace{0.5cm} \theta_2^{A/B}=\theta_4^{A/B}=-\mbox{acot}[y^{A/B}] \nonumber \\ y^{A/B}&=&\frac{\sqrt{2-\xi_r}+\sqrt{3\xi_r-2}}{\sqrt{\xi_r}\pm \sqrt{4-3\xi_r}}, \hspace{0.5cm} +(-)~ \mbox{for}~ A(B) \end{eqnarray} This {\it classically} correlated state gives, via Eq. (\ref{noisecurrrel}), AB-oscillations with amplitude $2(1-\xi_r)/(2-\xi_r)=H$. Moreover, the reduced local single particle states are completely featureless, $\mbox{tr}_B(\sigma_r)=\mbox{tr}_A(\sigma_r)=\hat 1/2$, which means that there is no single-particle Aharonov-Bohm effect. The existence of classically correlated two-particle states giving rise to Aharonov-Bohm oscillations in the current cross correlations but not in the currents provides further motivation for a complete tomographic reconstruction of the reduced density matrix in order to provide an unambiguous experimental demonstration of entanglement. \section{Detecting entanglement: Quantum State Tomography and Bell Inequality} \subsection{Quantum state tomography} As pointed out at several places above, the reduced density matrix can be reconstructed by a suitable set of current and current correlation measurements with different settings of the beam splitter parameters, i.e. different ${\mathbf n}_A,{\mathbf n}_B$. A detailed description of this scheme is given in \cite{tomo}. Here we only emphasize that the necessary tools, controllable reflectionless electronic beam splitters and phase gates, are experimentally available, as demonstrated in e.g. \cite{MZ1,MZ2,MZ3,MZ4,MZ5,Neder}. \subsection{Bell Inequality} Another widely discussed \cite{BI1,Sam03,Been03,Sam04,nonint1,nonint2} approach to detect the entanglement in mesoscopic conductors is to use a Bell inequality. Violation of a CHSH-Bell inequality \cite{CHSH} formulated in terms of currents and low-frequency current correlations demonstrates finite entanglement of $\rho_r$. We point out that an optimal Bell test, requiring control over all three components of ${\bf n_A}$ and ${\bf n_B}$, demands the same number of measurements and the same level of experimental complexity as a tomographic reconstruction of $\rho_r$. The CHSH-Bell inequality is \begin{equation} \Omega_{Bp/r}\leq 2 \end{equation} where $\Omega_{Bp/r}$ is the Bell parameter for the projected/reduced state. The Bell parameter is formally determined by the projected/reduced density matrix $\sigma_{p/r}$ and different settings of the detector beam splitters, reaching its maximum value $\Omega_{Bp/r}^{max}$ for an optimal setting of ${\bf n_A}$ and ${\bf n_B}$. From $\sigma_p$ and $\sigma_r$ above, we can, using Ref. \cite{Horodecki}, calculate the maximal Bell parameters.
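A minimal numerical sketch of this calculation is given below. It implements the criterion of Ref. \cite{Horodecki}, $\Omega^{max}=2\sqrt{u_1+u_2}$ with $u_1,u_2$ the two largest eigenvalues of $T^{\mathrm T}T$, where $T_{ij}=\mbox{tr}[\sigma\,\sigma_i\otimes\sigma_j]$; the state is built in Werner form with the off-diagonal coherences damped by $\gamma$, an assumption consistent with the closed-form result quoted below:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bell_max(rho):
    # Horodecki criterion for the maximal CHSH value of a two-qubit state
    T = np.array([[np.trace(rho @ np.kron(si, sj)).real
                   for sj in (sx, sy, sz)] for si in (sx, sy, sz)])
    u = np.sort(np.linalg.eigvalsh(T.T @ T))
    return 2.0*np.sqrt(u[-1] + u[-2])

def sigma(xi, gam):
    # Werner-form state, singlet coherences suppressed by gamma
    psi = np.array([[0, 0, 0, 0],
                    [0, 0.5, -0.5*gam, 0],
                    [0, -0.5*gam, 0.5, 0],
                    [0, 0, 0, 0]], dtype=complex)
    return 0.25*xi*np.eye(4) + (1.0 - xi)*psi

xi, gam = 0.2, 0.8
print(bell_max(sigma(xi, gam)))              # numerical maximum
print(2.0*np.sqrt(1.0 + gam**2)*(1.0 - xi))  # closed form given below
\end{verbatim}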
For symmetric beam splitters, $T_C=T_D={\mathcal T}$, we have the simple result \begin{eqnarray} \Omega_{Bp/r}^{max}&=&2\sqrt{1+\gamma^2}(1-\xi_{p/r}) \end{eqnarray} where the singlet weights $1-\xi_p$ and $1-\xi_r$ are given in Eq. (\ref{singweights}). This shows that the effects of decoherence and finite temperature enter separately into the Bell parameter. Moreover, as pointed out in Refs. \cite{Sam03,Turkbeen,Turksam}, at zero temperature a Bell inequality can in principle be violated for arbitrary dephasing. We also point out that a detailed investigation of the conditions for violation of a Bell inequality in the presence of dephasing, in the solid state, was recently performed in Ref. \cite{Kofman}. The limiting value for violation, $\Omega_{Bp/r}^{max}=2$ for ${\mathcal T}=1/2$, is plotted in Fig. \ref{fig2}. It is clear that for the values of $kT/eV$ and $\gamma$ of the 2PI-experiment, while the inequality $\Omega_{Bp} \leq 2$ can in principle be violated, a detection of entanglement by violating $\Omega_{Br} \leq 2$ is not possible. This demonstrates in a striking way the known fact \cite{Werner,Verstrate} that there are entangled states that do not violate a Bell inequality. \section{Conclusions} In conclusion, we have investigated the effect of finite temperature on the entanglement production and detection in the fermionic two-particle interferometer, presenting an extended discussion of the results in Ref. \cite{Sam09}. A calculation of the entanglement of the two-particle state projected out from the emitted, finite temperature many-body state shows that the state emitted in the two-particle interferometer in the experiment by Neder et al. \cite{Neder} is clearly entangled. By comparing the entanglement of the projected two-particle state with the entanglement of the reduced two-particle state, accessible via quantum state tomography based on current and current correlation measurements, we establish that the entanglement of the reduced state constitutes a lower bound for the entanglement of the projected state. In the two-particle interferometer experiment the reduced state is, however, only marginally entangled. Moreover, a finite temperature Bell inequality formulated in terms of currents and current correlators cannot be violated in the experiment. This shows that an unambiguous demonstration of the entanglement via measurements of currents and current correlations requires a reduction of the dephasing and the temperature. \section{Acknowledgements} The work was supported by the Swedish VR, the Israeli SF, the MINERVA foundation, the German Israeli Foundation (GIF) and Project Cooperation (DIP), the US-Israel Binational SF, the Swiss NSF and MaNEP.
\section{Introduction} \label{sec:introduction} In order to constrain models of structure formation and to investigate the star formation history of the universe, large samples of galaxies at high redshift are needed. The clustering properties on large scales at these high redshifts can only be studied with contiguous fields of considerable size. Furthermore, to overcome cosmic variance, different lines of sight should be probed. The Lyman-break technique is an efficient method to select galaxies at high redshift from multi-colour optical data. The largest survey of Lyman-break galaxies (LBGs) at $z\sim3$ to date \citep{2003ApJ...592..728S} covers 0.38 square degrees in 17 widely separated fields yielding more than 2000 LBG candidates of which 940 have been confirmed spectroscopically. On a sub-area, this group has published 244 $G$-dropouts (48 spectroscopically confirmed), candidates for $z\sim4$ galaxies \citep{1999ApJ...519....1S}. \citet{2003A&A...409..835F} published results from the Canada-France deep fields identifying $\sim 1300$ $U$-dropouts on a shallower but larger field. Very recently, \citet{2004ApJ...611..660O} obtained a sample of $\sim2000$ $B$-dropouts in deep Suprime-Cam imaging and observed 85 of them spectroscopically. In this paper we investigate large samples of $U$- and $B$-dropouts in very deep wide-field images of the CDFS. This investigation on one field is a pilot study for a much larger survey, the ESO Deep-Public-Survey (DPS), of LBGs on mosaic CCD data. Special attention is paid to the careful selection of candidates and the comparison with other successful studies of LBGs. While still smaller than some other samples because of the limited area, our LBG population will grow significantly in the near future with the analysis of the whole survey. Here the methods that will be applied to the complete dataset are presented and evaluated. In Sect.~\ref{sec:Observations_and_data_reduction} the observations, the data reduction, and the catalogue extraction are described. Sect. \ref{sec:sample_selection} deals with the photometric selection of dropouts in our data. After that the properties of the two dropout samples are presented in Sect.~\ref{sec:properties-samples}. Photometric redshift estimates, the distributions of apparent magnitudes, and the clustering properties are shown there. Concluding remarks and an outlook are given in Sect. \ref{sec:conclusions}. \section{Observations and data reduction} \label{sec:Observations_and_data_reduction} \subsection{Observations} The Chandra Deep Field South ($\alpha=03^\mathrm{h}\,32^\mathrm{m}\,29^\mathrm{s}$, $\delta=-27\degr\,48\arcmin\,47\arcsec$) was observed with the WFI@MPG/ESO2.2m for several programmes. Data were taken for the GOODS project \citep{2004ApJ...600L..93G}, the COMBO-17 survey \citep{2004A&A...421..913W}, and the ESO-Imaging-Survey (EIS) \citep{2001A&A...379..740A}. All these data are available from the ESO archive. Erben et al. (ESO Press Photos 02a-d/03) have produced very deep images in $BVR$ with a field-of-view of $34'\times33'$ using the Bonn WFI reduction pipeline (\citeauthor{2003A&A...407..869S} \citeyear{2003A&A...407..869S}; \citeauthor{2005astro.ph..1144E} \citeyear{2005astro.ph..1144E}). Additionally, $U$- and $I$-band images were published by the EIS-team \citep{2001A&A...379..740A}. Their properties are summarised in Table~\ref{tab:CDFS}.\footnote{If not otherwise specified we use Vega magnitudes in this paper.} \begin{table*} \caption{Properties of the CDFS WFI-data. 
The limiting magnitudes in columns 3 and 4 are calculated with equation \ref{equ:mag_lim}.} \label{tab:CDFS} \centering \begin{tabular}{c c c c c c c c} \hline\hline Band & ESO-Id & exposure & $3\sigma$ limits in a $2''$ diam. aperture & $1\sigma$ limits in $2\times$ FWHM diam.& AB & FWHM & source \\ & & time [s] & (Vega mags) & (Vega mags) & correction & [$\arcsec$] & \\ \hline $U$ & U/50 & 43\,600 & 25.6 & 26.8 & 0.9 & 1.07 & EIS\\ $B$ & B/99 & 57\,000 & 28.0 & 29.2 & $-$0.1 & 0.99 & Bonn/GaBoDS\\ $V$ & V/89 & 56\,000 & 27.5 & 28.7 & 0.0 & 0.93 & Bonn/GaBoDS\\ $R$ & Rc/162 & 57\,100 & 27.6 & 28.7 & 0.2 & 0.81 & Bonn/GaBoDS\\ $I$ & Ic/lwp & 26\,900 & 25.1 & 26.3 & 0.5 & 0.95 & EIS\\ \hline \end{tabular} \end{table*} The $BVR$ images were coadded with \emph{drizzle} \citep{2002PASP..114..144F}. The astrometric calibration was done with respect to the USNO-A2.0 \citep{1998yCat.1252....0M} and the photometric calibration is based on the COMBO-17 CDFS data \citep{2004A&A...421..913W}. The properties of the $U$- and $I$-band images from EIS are described in detail in \citet{2001A&A...379..740A}. The astrometric solution for these images is recalculated on the basis of our $R$-band catalogue. \subsection{Image preparation and catalogue extraction} Since the EIS images come from a different reduction pipeline it is necessary to resample them again to exactly the same output grid with the same centre coordinates in order to use the dual image mode of \emph{SExtractor} \citep{1996A&AS..117..393B} described below. This is done by \emph{SWarp} \citep{2003SWarp} which minimises the introduction of additional noise by applying a reverse mapping technique combined with an advanced kernel function (Lanczos-3). The catalogues are created using \emph{SExtractor} in dual-image mode. In this mode objects are detected and their shapes are measured on the $R$-band image. The flux in the other bands is measured on the corresponding images at the positions derived from the $R$-band. The $R$-band is chosen as the detection image since it is very deep, has very good seeing, and the targeted LBGs are comparatively bright in this band. An object is detected in the $R$-band if the flux in five adjacent pixels exceeds the standard deviation of the local sky background fluctuations by a factor of three. This conservative criterion is chosen because the handling of dropouts in blue bands requires clear detections in redder bands. To account for the different seeing properties of the images, aperture magnitudes are used with the size of the aperture in one band scaled to the seeing of that image (diameter of the aperture $=2\times$FWHM) when colours are measured. This approach is justified for the investigation of LBGs since these objects are usually not resolved in ground-based images. Thus, our approach delivers correct colours as long as the seeing is not too different in the images used (see Table~\ref{tab:CDFS}). When magnitudes are cited in the following, the \emph{SExtractor} parameter MAG\_AUTO is used which corresponds to flexible elliptical apertures described in \citet{1980ApJS...43..305K}. The aperture magnitudes are used only for colour estimation. When objects are detected in the $R$-band image and the flux is measured in the other bands, it is necessary to separate detected from non-detected objects in the bands different from the $R$-band. 
For that purpose, limiting magnitudes for the apertures defined above are calculated: \begin{equation} \label{equ:mag_lim} mag_{\mathrm{lim}}=ZP-2.5 \log \left(\sqrt{N_{\mathrm{pix}}}\cdot\sigma\right) \; . \end{equation} $ZP$ is the photometric zeropoint of the image, $N_{\mathrm{pix}}$ is the number of pixels in the aperture, and $\sigma$ gives the global RMS pixel-to-pixel fluctuations of the sky background in the image considered. In Table~\ref{tab:CDFS} two different limiting magnitudes are given for every image, $3\sigma$ limits in a $2''$ diameter aperture and the $1\sigma$ limits in an aperture with $2\times$FWHM diameter. The latter are used to set a lower/upper limit to the colour index of objects that are not detected in one band. Our final catalogue contains $\sim\!57\,000$ $R$-band detected objects of which $\sim\!10\,000$ have no significant flux in $U$, $\sim\!300$ are not detected in $B$, $<\!100$ are not detected in $V$ (mostly $R$-band image defects), and $\sim\!4\,500$ are not detected in $I$ (due to the shallower depth of this image). No star-galaxy separation is performed. LBGs are of such small apparent size that a considerable fraction of them would possibly be misclassified as stars and rejected if this were done. \section{Sample selection} \label{sec:sample_selection} Whenever selecting a sub-population from a large catalogue, attention must be paid to maximising completeness and efficiency. Often these goals work in opposite directions and a good compromise must be chosen. While the real efficiency can usually be quantified with spectroscopic data at hand, the completeness is hard to quantify. Even defining completeness and efficiency can be somewhat ambiguous in this context, as we show in the following. Here we are searching for high-redshift galaxies, which means that stars or low-redshift interlopers can contaminate the catalogues, thus reducing the efficiency. The case of intermediate redshift ($1\!<\!z\!<\!2$) galaxies is more difficult. In principle, we are highly interested in these objects since few are known up to now \citep[see][]{2004ApJ...604..534S}. But for the clustering analysis and for obtaining luminosity functions in redshift slices, these objects are also contaminants and should be separated. To guarantee a reasonable efficiency, the colours of model galaxies are investigated, both for high-redshift galaxies with a pronounced Lyman-break and for low-redshift ellipticals that are nearby in colour space. Furthermore, our selection follows other successful studies of LBGs cited above. Completeness, however, is a different issue. If our goal were to select every galaxy at e.g. $z\sim3$ in our data, we would not be very complete with the method described below. In fact we are searching for LBGs, which are easy to detect because of their pronounced Lyman-break. More dusty galaxies are much harder to separate from low-redshift objects, and if they are common at high redshift we will miss many of them with our selection criteria. Today it is known that LBGs are common at high redshift and not rare objects, representing a considerable fraction of the total galaxy population at these epochs \citep{2002ARA&A..40..579G}. \subsection{Colours of high-redshift galaxies} The publicly available photometric redshift code \emph{Hyperz} \citep{2000A&A...363..476B} is used to estimate the colours of high-redshift galaxies.
Template spectra from the library of \citet{1993ApJ...405..538B} are taken and convolved with the instrumental response of the WFI (see Fig.~\ref{fig:WFI_filter_curves_plus_CCD}). The spectral energy distribution (SED) of a galaxy with constant star formation rate (spectral type: Im) is chosen, as it has a pronounced Lyman-break. Different amounts of reddening are taken into account by applying the dust extinction law of \citet{2000ApJ...533..682C}. The opacity of the intergalactic medium is included by applying the estimates from \citet{1995ApJ...441...18M}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{2544fi01.eps}} \caption{\label{fig:WFI_filter_curves_plus_CCD}Instrumental response of WFI in the different filters.} \end{figure} Furthermore, the colours of elliptical galaxies at low redshift are calculated in an identical way in order to estimate their contamination of our samples of LBGs. In Fig.~\ref{fig:models} the colour distributions of the model galaxies are shown. \begin{figure*} \centering \includegraphics[width=17cm]{2544fi02.eps} \caption{\label{fig:models}Colours of model galaxies in the $U-V$ vs. $V-R$ two-colour diagramme used for $U$-dropout selection (\emph{left}) and in the $B-R$ vs. $R-I$ two-colour diagramme for $B$-dropout selection (\emph{right}). The solid lines represent galaxies (spectral type Im) with no dust reddening, the dashed lines represent galaxies (spectral type Im) with an extinction in the visual of $A_V=1.5$~mag, and the dotted lines represent elliptical galaxies (spectral type E) at low redshift. The points correspond to intervals of $\Delta z=0.1$. The boxes define our selection boundaries for high-$z$ galaxy candidates.} \end{figure*} \subsection{Selection of Candidates} \label{sec:selection-candidates} We based our selection criteria for high-redshift objects on the predicted colours of model galaxies as outlined above. Given our filter set (see Fig.~\ref{fig:WFI_filter_curves_plus_CCD}) and the data quality in the different bands, objects with $z\sim3$ are selected most efficiently in a $U-V$, $V-R$ colour-colour diagramme. More distant galaxies at $z\sim4$ are preferentially picked up in the $B-R$, $R-I$ space. In principle those populations can also be selected in $U-B$, $B-V$ and $B-V$, $V-R$ space respectively. However, due to the significant wavelength overlaps between the $B$, $V$ and $R$ filters, and due to the small gap between $U$ and $B$, an efficient discrimination between galaxies with and without a pronounced Lyman-break is not possible in such diagrammes. This can be seen in Fig.~\ref{fig:Up_B_V_models}, where the redshift tracks run more diagonally than in Fig.~\ref{fig:models} due to the fact that an object that `drops out' of the $U$-band has already become significantly fainter in the $B$-band. For the same reason a search for $V$-dropouts in our data is difficult, although our very deep wide-field $V$-band is well suited to such a project. Deep infrared data from the GOODS project with ISAAC@VLT \citep{2004ApJ...600L..93G} are available for the innermost part of our field and will help in searching for $V$-dropouts (see Sect.~\ref{sec:conclusions}). \begin{figure} \resizebox{\hsize}{!}{\includegraphics{2544fi03.eps}} \caption{\label{fig:Up_B_V_models}Colours of model galaxies.
The solid line represents galaxies (spectral type Im) with no dust reddening, the dashed line represents galaxies (spectral type Im) with an extinction in the visual of $A_V=1.5$~mag, and the dotted lines represent elliptical galaxies (spectral type E) at low redshift. The points correspond to intervals of $\Delta z=0.1$. Here the effect of overlapping filters can be seen, resulting in slightly diagonal tracks which make it difficult to choose a selection box.} \end{figure} The selection must always be a compromise between completeness and efficiency. Galaxies that are too red in $V-R$ cannot be included in the $z\sim3$ selection box, for example, if one wants to avoid contamination by low-redshift elliptical galaxies. The same is true for the $z\sim4$ sample. Photometric errors will scatter the data-points of faint galaxies in the two-colour diagrammes, so that it is not possible to predict a precise redshift distribution of the samples. Furthermore, the redshift distribution will change with intrinsic spectral shape because of complex selection effects. Based on these considerations the following selection criteria are chosen (see Fig.~\ref{fig:models}). For the $U$-dropout selection, \begin{eqnarray} \label{equ:U_dropout_selection} 1 &\le&(U-V) \; , \nonumber \\ -0.5 &\le& (V-R)\le1.5 \; ,\\ 3\cdot(V-R) &\le& (U-V)-0.5 \; , \nonumber \end{eqnarray} and for the $B$-dropout selection, \begin{eqnarray} \label{equ:B_dropout_selection} 2 &\le&(B-R) \; , \nonumber \\ && (R-I)\le1.5 \; ,\\ 2.5\cdot(R-I) &\le& (B-R)-1.25 \; . \nonumber \end{eqnarray} Applied to our catalogues, we get 1167 $z\sim3$ $U$-dropout candidates and 613 $z\sim4$ $B$-dropout candidates (see Fig.~\ref{fig:objects}). All $U$-dropout candidates are detected in the $B$-, $V$-, and $R$-band images, so that their colour selection is not influenced by the depth of these images. 101 of them are not detected in $I$, being fainter than $I=26.3$, the detection limit in that band. The colour selection of the intrinsically fainter $B$-dropouts is also not seriously influenced by that effect, although 172 of them are not detected in $I$ (this is the reason for the spike running from the lower right to the upper left in the selection box of Fig.~\ref{fig:objects}). Their $(R-I)$ colour is an upper limit. There are, however, some objects that lie to the right of our selection box, which could have bluer $(R-I)$ colours. So, efficiency is not affected while completeness suffers from the lower depth of the $I$-band image. Thumbnail pictures in the five WFI bands are created for all selected objects. Some examples are shown in Fig.~\ref{fig:GOODS_thumbnails} and \ref{fig:GEMS_thumbnails}. Every candidate is checked by eye and some spurious detections such as bad pixels, cosmic rays, reflections, or other image defects are rejected. After that our catalogues still contain 1070 $U$-dropouts and 565 $B$-dropouts. Our dropout catalogues are freely available to the scientific community in the electronic version of A\&A (Tables~\ref{tab:cat_U} and \ref{tab:cat_B})\footnote{Tables~\ref{tab:cat_U} and \ref{tab:cat_B} are also available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/}. The spatial distribution of the two samples is shown in Fig.~\ref{fig:spatial_dist}.
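In practice, the cuts in Eqs. (\ref{equ:U_dropout_selection}) and (\ref{equ:B_dropout_selection}) amount to simple boolean masks on the aperture colours. A minimal sketch, with illustrative array names standing in for our catalogue columns, could look as follows:
\begin{verbatim}
import numpy as np

# U, B, V, R, I: aperture magnitudes from the catalogue (random values
# here for illustration; non-detections would enter via their limits)
rng = np.random.default_rng(1)
U, B, V, R, I = rng.uniform(20, 28, size=(5, 1000))

u_drop = ((U - V >= 1.0) &
          (V - R >= -0.5) & (V - R <= 1.5) &
          (3.0*(V - R) <= (U - V) - 0.5))        # U-dropout cuts

b_drop = ((B - R >= 2.0) &
          (R - I <= 1.5) &
          (2.5*(R - I) <= (B - R) - 1.25))       # B-dropout cuts

print(u_drop.sum(), b_drop.sum())
\end{verbatim}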
\begin{figure*} \centering \includegraphics[width=17cm]{2544fi04_small.eps} \caption{\label{fig:spatial_dist}Spatial distribution of our $U$-dropouts (\emph{left}) and our $B$-dropouts (\emph{right}).} \end{figure*} \subsection{Observations with other telescopes} The whole WFI field is covered with data from the Advanced Camera for Surveys (ACS) on board the Hubble Space Telescope (HST). The inner part of our $34'\times33'$ field was observed for the GOODS programme \citep{2004ApJ...600L..93G} in the four bands $BVIZ$ (F435W, F606W, F775W and F850LP) and the outer regions were observed for the GEMS project \citep{2004ApJS..152..163R} in $V$ and $Z$ (F606W and F850LP). From these data, thumbnail pictures for nearly every dropout candidate are created and examples are also shown in Fig.~\ref{fig:GOODS_thumbnails} and \ref{fig:GEMS_thumbnails}. Since the GOODS data cover the $BVIZ$ filters, they are well suited to check our $B$-dropout selection criteria. The ACS thumbnails of all the $B$-dropouts inside the GOODS area are checked by eye for contamination by stars (point-like objects) or objects that are clearly visible in the ACS $B$-band image. Seven out of 66 objects are classified as possible contaminants. Thus, we roughly estimate the efficiency of our $B$-dropout selection at $\sim 90\%$. \begin{figure*} \includegraphics[width=17cm]{2544fi05_small.eps} \caption{\label{fig:objects}$(U-V)$ vs. $(V-R)$ (\emph{left}) and $B-R$ vs. $R-I$ (\emph{right}) colours of galaxies in the CDFS WFI catalogue. The boxes represent the selection criteria given in equation \ref{equ:U_dropout_selection} (\emph{left}) and \ref{equ:B_dropout_selection} (\emph{right}). The spike running from the lower right to the upper left inside the $B$-dropout selection box (\emph{right}) is an artifact due to the inferior depth of the $I$-band image (see text). It does not affect the efficiency of the dropout selection.} \end{figure*} \begin{figure*} \includegraphics[width=17cm]{2544fi06_small.eps} \caption{\label{fig:GOODS_thumbnails}Examples of a $U$-dropout (upper row) and a $B$-dropout (lower row) in the GOODS area. Thumbnail pictures in the WFI-$UBVRI$ and ACS-$BVIZ$ filters (from left to right) of size $10''\times 10'' $.} \end{figure*} \begin{figure*} \includegraphics[width=17cm]{2544fi07_small.eps} \caption{\label{fig:GEMS_thumbnails}Examples of a $U$-dropout (upper row) and a $B$-dropout (lower row) in the GEMS area. Thumbnail pictures in the WFI-$UBVRI$ and ACS-$VZ$ filters (from left to right) of size $10'' \times 10'' $. In the ACS images the irregular nature of this $U$-dropout is clearly revealed, while the $B$-dropout is barely visible in the WFI-$I$-band because of the limited depth of this image.} \end{figure*} Six of our $U$-dropouts have been observed spectroscopically in the VVDS \citep{2004astro-ph...0403628} and one of them also for the GOODS programme \citep{2004astro-ph...0406591}. Three of them are at $z>3$, while the other three are low-redshift interlopers. This is not surprising since all of these objects are quite bright ($R\sim23$) and contamination plays an important role in these magnitude ranges \citep[see][]{2003ApJ...592..728S}. The redshift of one of those interlopers is not yet determined unambiguously; estimates range from $z=0.22$ (GOODS) to $z=0.64$ (VVDS). None of our $B$-dropouts have been observed spectroscopically so far.
\section{Properties of the samples} \label{sec:properties-samples} In this section we test our selection criteria in detail by analysing and comparing our LBG samples against those of other studies. \subsection{Photometric redshift distributions} Photometric redshifts for all candidates are estimated from their $UBVRI$ photometry with the programme \emph{Hyperz} \citep{2000A&A...363..476B}. Again the template SEDs by \citet{1993ApJ...405..538B} are chosen. The programme calculates galaxy colours in the WFI filter set at different redshifts for every template, incorporating different ages, different amounts of reddening \citep{2000ApJ...533..682C}, and absorption by the Lyman-$\alpha$ forest (calculated as a function of redshift according to \citeauthor{1995ApJ...441...18M} \citeyear{1995ApJ...441...18M}). Every object is assigned the redshift of the best-fit SED, the primary solution phot-$z$. Furthermore, a weighted mean redshift is computed in the 99\% confidence interval around the primary solution. The distributions of these quantities for all of our dropouts are shown in Fig.~\ref{fig:z_dist}. There are clear peaks at the targeted redshifts of $z\sim3$ and $z\sim4$. Furthermore, there is a secondary peak in the redshift distribution of the $U$-dropouts at lower redshift ($z\sim1.7$) which is more pronounced in the distribution of the primary redshifts and `washed-out' in the distribution of the weighted mean redshifts. The programme \emph{Hyperz} can also output the probability (associated with the $\chi^2$ value) of an object to be located at the different redshift values. Investigating the redshift-probability distributions of all $U$-dropouts with assigned redshift $\mbox{phot-}z<2$, it becomes clear that for most of them no unique solution is found. There are multiple solutions (often another peak at $z\sim3$) or plateaus (see Fig.~\ref{fig:log_phot}), which results in a washed-out distribution of the weighted mean redshifts. For comparison, the unambiguous redshift probability distribution of an object with assigned redshift $\mbox{phot-}z=3.29$ is also shown in Fig. \ref{fig:log_phot}. Most of the objects with assigned redshift $\mbox{phot-}z\sim3$ show similar distributions, even though not as perfect as this example. It is possible that a significant fraction of those objects with $\mbox{phot-}z<2$ are indeed LBGs at redshifts around 3, but are not unambiguously identified as such by \emph{Hyperz}. This assumption is strengthened by the fact that the redshift-probability distribution of many of those galaxies shows a secondary peak at $z\sim3$ (see Fig. \ref{fig:log_phot}). If those objects are indeed at a lower redshift of $z\sim2$, they would fall into the so-called `redshift desert', where a determination of photometric and spectroscopic redshifts is very difficult due to the absence of prominent spectral features such as strong breaks in the $UBVRI$ range. \citet{2004ApJ...604..534S} have identified a large number of galaxies in this so-called `redshift desert' applying a technique very similar to the Lyman-break technique, with a selection box just below the LBG selection box \citep{2004ApJ...607..226A}. Photometric errors and differences between their filter set and ours could scatter some lower redshift objects into our selection box. Some of these objects will certainly be included in the spectroscopic follow-up survey described in Sect.~\ref{sec:conclusions}.
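The weighted mean redshift can be illustrated with a few lines of code. The sketch below is a simplified version of this procedure, assuming a tabulated probability curve $P(z)$ as returned by a photometric redshift code; the exact interval construction in \emph{Hyperz} may differ in detail:
\begin{verbatim}
import numpy as np

def weighted_mean_z(z, P, conf=0.99):
    # grow an interval around the primary solution (the P(z) maximum)
    # until it encloses `conf` of the total probability, then return
    # the probability-weighted mean redshift inside that interval
    P = P/np.trapz(P, z)
    lo = hi = int(np.argmax(P))
    while np.trapz(P[lo:hi+1], z[lo:hi+1]) < conf:
        if lo > 0 and (hi == len(z) - 1 or P[lo-1] >= P[hi+1]):
            lo -= 1
        else:
            hi += 1
    w, zz = P[lo:hi+1], z[lo:hi+1]
    return np.trapz(w*zz, zz)/np.trapz(w, zz)

# bimodal example: peaks at z ~ 1.7 and z ~ 3
z = np.linspace(0.0, 6.0, 601)
P = np.exp(-0.5*((z - 1.7)/0.2)**2) + 0.9*np.exp(-0.5*((z - 3.0)/0.3)**2)
print(weighted_mean_z(z, P))   # pulled away from the primary peak
\end{verbatim}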
\begin{figure*} \includegraphics[width=12cm]{2544fi08.eps} \caption{\label{fig:z_dist}Photometric redshift distributions of our $U$-dropouts (\emph{left}) and our $B$-dropouts (\emph{right}). The solid lines correspond to the distributions of the primary solutions and the dashed lines correspond to the distributions of the weighted mean redshifts (see text).} \end{figure*} \begin{figure*} \includegraphics[width=17cm]{2544fi09.eps} \caption{\label{fig:log_phot}Redshift vs. probability (associated with the $\chi^2$ value) for three different $U$-dropouts. \emph{Left:} An object with assigned redshift $\mbox{phot-}z=1.81$. This example illustrates that the assignment of a single number for the photometric redshift can be misleading. The peak at $z=2.8$ has nearly the same probability. \emph{Middle:} An object with assigned redshift $\mbox{phot-}z=1.85$. This example illustrates that sometimes the photometric redshift estimation fails completely but a primary solution is nevertheless output. \emph{Right:} An object with assigned redshift $\mbox{phot-}z=3.29$. The ideal case of an object with a definite single redshift estimate.} \end{figure*} \subsection{Distribution of apparent magnitudes} In order to compare our number-counts to other studies, the total $R$-band Vega magnitudes of the $U$-dropouts are converted to Steidel's $\mathcal{R}_{AB}$-band using the transformation equation in \citet{1993AJ....105.2017S} and an AB correction of 0.2 magnitudes. The total $I$-band Vega magnitudes of the $B$-dropouts are converted to the AB system using an AB correction of 0.5 magnitudes. Both AB corrections are calculated with \emph{Hyperz} \citep{2000A&A...363..476B}. In Fig. \ref{fig:number-counts} our results are shown in comparison to \citet{1999ApJ...519....1S}. In general there is good agreement between the two studies. \begin{figure*} \includegraphics[width=17cm]{2544fi10.eps} \caption{\label{fig:number-counts}The diagrammes show number-counts for $U$-dropouts (\emph{left}) and $B$-dropouts (\emph{right}) of our catalogue (squares) and that of \citet{1999ApJ...519....1S} (triangles). The CDFS-WFI data points are slightly offset for clarity.} \end{figure*} The few deviations, however, can be explained. On the one hand, \citet{1999ApJ...519....1S} correct their number-counts for contamination by low-$z$ interlopers and stars using their large spectroscopic database, which reduces the numbers at the brighter magnitudes. On the other hand, their images go slightly deeper ($0.5-1$~mag in the $U$-band depending on the field; see \citeauthor{2003ApJ...592..728S}, \citeyear{2003ApJ...592..728S}), which increases their number of LBGs at the faint end. \subsection{Angular correlation functions} For calculating the angular correlation function we apply the estimator of \citet{1993ApJ...412...64L}, \begin{eqnarray} \label{eqn:estimator_Landy} \omega(\theta)=\frac{\mathrm{DD}-2\,\mathrm{DR}+\mathrm{RR}}{\mathrm{RR}}\;. \end{eqnarray} DD, DR, and RR represent the number-counts of galaxy pairs with a separation between $\theta$ and $\theta+\delta\theta$, each normalised by the corresponding total number of pairs, in catalogues extracted from the data (DD), from a random distribution of galaxies (RR) with the same survey geometry (including masked out regions), and between data and random catalogues (DR).
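A minimal sketch of this estimator, assuming pre-computed histograms of pair separations and the standard pair normalisation, reads:
\begin{verbatim}
import numpy as np

def landy_szalay(dd, dr, rr, n_data, n_rand):
    # dd, dr, rr: raw pair counts per theta bin; normalise by the
    # total number of pairs before applying the estimator above
    DD = dd/(n_data*(n_data - 1)/2.0)
    DR = dr/(n_data*n_rand)
    RR = rr/(n_rand*(n_rand - 1)/2.0)
    w = (DD - 2.0*DR + RR)/RR
    err = np.sqrt((1.0 + w)/dd)   # Poissonian variance (weak clustering)
    return w, err
\end{verbatim}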
The errors of the angular correlation function are estimated following the Poissonian variance approach of \citet{1993ApJ...412...64L}, which is justified in the weak clustering regime, \begin{eqnarray} \label{eqn:err_estimator_Landy} \delta\omega(\theta)=\sqrt{\frac{1+\omega(\theta)}{\mathrm{DD}}}\;. \end{eqnarray} A power law $\omega(\theta)=A_{\omega}\,\theta^{-\delta}$ with fixed slope $\delta=0.8$ is fitted to the data for angular scales smaller than $\sim2\farcm 5$. For larger scales the finite size of the fields begins to play a role, which can be accounted for by an additional constant called the `integral constraint'. In Fig.~\ref{fig:correlation}, the angular correlation functions for the $U$- and $B$-dropouts are shown, the latter still suffering from small number statistics. The power law fits to our data yield amplitudes at a scale of $1''$ of $A_{\omega}=0.71\pm0.13$ for the $U$-dropouts and $A_{\omega}=2.31\pm0.78$ for the $B$-dropouts, respectively. Next, we discuss the influence of inhomogeneous depth in our data on the correlation analysis. All of our $U$-dropouts are brighter than $R=26$ and $V=25.8$. Given the depth of the $V$- and $R$-band image (see Table~\ref{tab:CDFS}), small fluctuations in limiting magnitude over the field will have no impact on our selection. In the $U$-band image, however, there are some regions which are significantly shallower and could influence our selection. At the left edge there is a vertical stripe and in the middle there is a horizontal stripe where the $1\sigma$ limiting magnitude drops to $U\sim26.4$. Thus some objects which are faint in $V$ could be misclassified as $U$-dropouts in these regions. For three reasons we believe that this is not the case. First, investigating the distributions of apparent magnitudes in the $V$- and $R$-band, there is no noticeable difference between the whole sample and the subsample in the shallower regions (288 $U$-dropouts). If misclassification were present, one would see an excess in the faint $V$-band counts for the subsample. Second, the number density of $U$-dropouts does not change from deep to shallow regions. Finally, as \citet{2004ApJ...604..534S} showed, objects that are near in colour space are mostly also near in redshift space, so that no spurious clustering signal is expected. A similar consideration applies to the $B$-dropout sample. \begin{figure*} \includegraphics[width=17cm]{2544fi11.eps} \caption{\label{fig:correlation}Angular correlation functions for our $U$-dropouts (\emph{left}) and our $B$-dropouts (\emph{right}). The errors are Poissonian errors and the lines represent power-law $\chi^2$ fits to the data with a fixed slope of $\delta=0.8$. The fitted amplitude at a scale of $1''$ then becomes $A_\omega=0.71\pm0.13$ for the $U$-dropouts and $A_\omega=2.31\pm0.78$ for the $B$-dropouts.} \end{figure*} For a known redshift distribution the angular correlation function $\omega(\theta)$ can be related to the real-space 3D correlation function $\xi(r)$ using the Limber equation \citep[see][]{1980lssu.book.....P} for a flat universe, \begin{eqnarray} \omega(\theta)=\int_0^\infty \mathrm{d}\overline{w} \: p^2(\overline{w}) \int_{-\infty}^\infty \mathrm{d}\Delta w \: \xi \bigl( \sqrt{(\overline{w}\theta)^2+\Delta w^2} \bigr)\; , \end{eqnarray} where $w$ is the comoving distance, and $\overline{w}$ and $\Delta w$ are the mean and difference of the comoving distances of the two galaxies considered. $p(\overline{w})$ is the normalised distribution of the galaxies in comoving distance.
Usually the real-space correlation function is fitted with a power law with slope $\gamma=\delta+1$ and correlation length $r_0$: \begin{eqnarray} \xi(r)=\Bigl(\frac{r}{r_0}\Bigr)^{-\gamma}\; . \end{eqnarray} Thus the second integral can be solved analytically, and the correlation length $r_0$ then becomes: \begin{eqnarray} \label{eq:r_0} r_0=\left[A_{\omega;\mathrm{rad}}\cdot \frac{\Gamma(\gamma/2)}{\Gamma(1/2)\Gamma(\gamma/2-1/2)} \cdot \left(\int_0^\infty \mathrm{d}\overline{w} \: p^2(\overline{w})\:\overline{w}^{1-\gamma}\right)^{-1}\right]^{1/\gamma} \end{eqnarray} with $A_{\omega;\mathrm{rad}}$ being the amplitude of the angular correlation function at a scale of one radian (extrapolation), and $\Gamma$ the Euler Gamma function. In order to relate the angular correlation function to the real-space correlation function we need to make an assumption about the redshift distribution. For our dropout samples we choose different redshift distributions to investigate the impact of this uncertain quantity on the correlation lengths. First we assume flat distributions of the source redshifts with different widths. Then we fit a Gaussian to each redshift distribution in Fig.~\ref{fig:z_dist}, neglecting the secondary peak in the $U$-dropout redshift distribution. We find that the $U$-dropout data are well fitted by a Gaussian with mean $z=3.03$ and $\mathrm{FWHM}=0.54$, and the $B$-dropout data by a Gaussian with mean $z=3.83$ and $\mathrm{FWHM}=0.34$. The errors for the correlation lengths are estimated from the errors of the amplitudes $A_\omega$ only; no effects of the slope or the redshift distributions are taken into account. In Table \ref{tab:clustering_measurements} the results are shown in comparison to other studies by \citet{2004ApJ...611..685O} and \citet{2001ApJ...550..177G}. Given the large uncertainties in our redshift distribution and the different depths of the surveys, the differences are not significant. Furthermore, within the uncertainties, we do not see an evolution of the correlation length from our $U$-dropout sample to our $B$-dropout sample. It should be kept in mind that the two populations do not probe the same part of the luminosity function. With a distance modulus of $0.6$~mag in a $\Lambda\mathrm{CDM}$-cosmology between $z=3$ and $z=3.8$ and a negligible k-correction between the $R$-band at $z=3$ and the $I$-band at $z=3.8$\footnote{Relation between the central wavelengths (CWL): $\mathrm{CWL}_{R}=652\mathrm{nm}\approx\frac{(1+3)}{(1+3.8)}\cdot\mathrm{CWL}_{I}=\frac{(1+3)}{(1+3.8)}\cdot784\mathrm{nm}$}, the $U$-dropout sample is slightly deeper in terms of absolute magnitude. It would be desirable to cut the two samples at the same $L/L^*$-value. But with the available samples being cut at brighter magnitudes (e.g. $R\la24.6$ for $L\ga L^{*}$ for the $U$-dropouts), the statistical errors are still too large to reach significant conclusions. A more sophisticated clustering analysis with dropout samples cut at the same absolute magnitude will be presented when more fields of the DPS are available and LBG numbers have increased.
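The deprojection in Eq. (\ref{eq:r_0}) is straightforward to evaluate numerically. The sketch below does so for the $U$-dropout Gaussian redshift distribution, assuming a flat $\Lambda$CDM cosmology with $\Omega_m=0.3$ and $\Omega_\Lambda=0.7$ (values not quoted above) and distances in Mpc$\,h^{-1}$:
\begin{verbatim}
import numpy as np
from math import gamma as G

c_H0 = 2997.92458                        # c/H0 in Mpc/h
z = np.linspace(1e-3, 6.0, 6000)
dwdz = c_H0/np.sqrt(0.3*(1 + z)**3 + 0.7)
w = c_H0*z[0] + np.concatenate(
    ([0.0], np.cumsum(0.5*(dwdz[1:] + dwdz[:-1])*np.diff(z))))

p_z = np.exp(-0.5*((z - 3.03)/0.27)**2)  # Gaussian N(z) of the U-dropouts
p_z /= np.trapz(p_z, z)

gam = 1.8                                # gamma = delta + 1
A_rad = 0.71*(180.0/np.pi*3600.0)**(-0.8)      # A_omega at 1 rad from A(1'')
I_w = np.trapz(p_z**2*w**(1.0 - gam)/dwdz, z)  # integral in Eq. (r_0)
r0 = (A_rad*G(gam/2)/(G(0.5)*G((gam - 1)/2))/I_w)**(1.0/gam)
print(r0)    # ~2.6 Mpc/h, cf. the table below
\end{verbatim}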
The $I$-band limiting magnitude of our dropout sample is not well known since some objects are not detected in $I$ (see Sect.~\ref{sec:selection-candidates}).} \label{tab:clustering_measurements} \centering \begin{tabular}{c c c c c} \hline\hline sample & mean redshift & redshift distribution & limiting magnitude & $r_0$ [Mpc$\cdot h^{-1}$] \\ \hline $U$-dropouts (this paper) & 3.0 & flat $2.7<z<3.3$ & $R_{\mathrm{WFI,Vega}}<26$ & $2.0\pm0.2$\\ $U$-dropouts (this paper) & 3.0 & flat $2.6<z<3.4$ & $R_{\mathrm{WFI,Vega}}<26$ & $2.4\pm0.2$\\ $U$-dropouts (this paper) & 3.0 & gauss $\mu=3.03$, $\sigma=0.27$& $R_{\mathrm{WFI,Vega}}<26$ & $2.6\pm0.3$\\ \citet{2001ApJ...550..177G} & 3.0 & from spectroscopic subsample & $\mathcal{R}_{AB}<25.5\hat=R_{\mathrm{WFI,Vega}}\sim25.1$& $3.2\pm0.7$\\ \hline $B$-dropouts (this paper) & 3.8 & flat $3.7<z<4.2$ & $I_{\mathrm{WFI,Vega}}\la26.3$ & $3.2\pm0.6$\\ $B$-dropouts (this paper) & 3.8 & flat $3.6<z<4.3$ & $I_{\mathrm{WFI,Vega}}\la26.3$ & $3.8\pm0.7$\\ $B$-dropouts (this paper) & 3.8 & gauss $\mu=3.83$, $\sigma=0.17$ & $I_{\mathrm{WFI,Vega}}\la26.3$ & $3.5\pm0.7$\\ \citet{2004ApJ...611..685O} & 4.0 & from simulations & $i'_{AB}<26$ & $4.1\pm0.2$\\ \hline \end{tabular} \end{table*} \section{Conclusions and Outlook} \label{sec:conclusions} We find 1070 $U$- and 565 $B$-dropout candidates in deep wide-field images of the CDFS taken with the WFI@MPG/ESO2.2m. The photometric redshift distributions are narrowly peaked around $z=3$ and $z=4$, as expected. Our number counts of dropouts in apparent magnitude bins are consistent with previous studies. The angular correlation functions are calculated from the data and correlation lengths are derived taking into account the photometric redshift estimates of the samples. These results are also in good agreement with previous studies, showing no evolution from $z\sim3$ to $z\sim4$, although large systematic errors remain. The dropout samples in the CDFS will be investigated further. In Sect. \ref{sec:selection-candidates} it was mentioned that ACS@HST images are available for the whole WFI field. The morphology of every candidate will be classified with the help of the high angular resolution of these data; this will yield the largest catalogue of morphologically studied LBGs. Furthermore, infrared data from the GOODS project \citep{2004ApJ...600L..93G} are publicly available. The innermost part of the field (50 arcmin$^2$) is covered with deep $JHK_s$ images from ISAAC@VLT which will help to improve the photometric redshift accuracy considerably. A larger fraction of the area is covered with shallower data from SOFI@NTT. For the brighter dropouts these data will also be sufficient to improve the photometric redshift estimates. The aim of this study was to test techniques on the CDFS that will be applied to a much larger dataset, the ESO Deep-Public-Survey (DPS). This survey covers three square degrees in total, distributed over three fields of four adjacent WFI pointings each. Deep coverage in the $UBVRI$ bands was intended. Unfortunately, the survey was not finished, so that at present only five pointings (1.25 square degrees) are complete in all five bands. We proposed the completion of five further, nearly complete fields for ESO period 75. First investigations of the four other fields yield numbers of $U$-dropouts per field comparable to the CDFS, and we proposed a spectroscopic run with VIMOS on one subfield for ESO period 76.
Several hundred LBG spectra will then be available for analysis, enabling us to quantify the contamination of our samples, to investigate their redshift distributions, and to study the astrophysical properties in detail. The area of 1.25 square degrees that is completely covered in all five optical bands already yields a larger LBG sample at $z\!\sim\!3$ than any other study to date. If the DPS is completed, there will be $\sim\!10\,000$ $U$-dropouts in the survey on two contiguous fields one degree wide and one field 0.5 degrees wide. From these, the clustering properties can be studied with unprecedented accuracy on the largest scales probed so far, and the statistics of LBG properties will improve significantly. \begin{acknowledgements} This work was supported by the German Ministry for Education and Science (BMBF) through the DLR under the project 50 OR 0106, by the BMBF through DESY under the project 05AE2PDA/8, and by the Deutsche Forschungsgemeinschaft (DFG) under the project SCHN342/3--1. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Throughout this article $X$ stands for a Banach space, the symbol $B(X)$ denotes the space of bounded linear operators defined on $X$, and $X^*$ is the space of continuous linear functionals on $X$. Given $T\in B(X)$, we denote the Cesàro mean by $$ M_n(T)x:=\frac{1}{n+1}\sum_{k=0}^n T^kx $$ for all $x\in X$. We need to recall some definitions concerning the behaviour of the sequence of Cesàro means $(M_n(T))_{n\in \NN}$. \begin{definition} A linear operator $T$ on a Banach space $X$ is called \begin{enumerate} \item \emph{Uniformly ergodic} if $M_n(T)$ converges uniformly. \item \emph{Mean ergodic} if $M_n(T)$ converges in the strong topology of $X$. \item \emph{Weakly ergodic} if $M_n(T)$ converges in the weak topology of $X$. \item \emph{Absolutely Ces\`aro bounded} if there exists a constant $C > 0$ such that $$ \sup_{N \in \mathbb{N}} \frac{1}{N} \sum_{j=1}^N \|T^j x\| \leq C \|x\|\;, $$ for all $x\in X$. \item \emph{Ces\`{a}ro bounded} if the sequence $(M_n(T))_{n\in \NN}$ is bounded. \end{enumerate} \end{definition} An operator $T$ is said to be \emph{power bounded} if there is a $C>0$ such that $\|T^n\| <C$ for all $n$. \ \par The class of absolutely Cesàro bounded operators was introduced by Hou and Luo in \cite{HL}. \begin{definition} An operator $T$ is said to be \begin{enumerate} \item \emph{Uniformly Kreiss bounded} if there is a $C>0$ such that $$ \left\|\sum_{k=0} ^{n} \lambda^{-k-1} T^k\right\| \le \frac{ C}{|\lambda|-1} \;\; \mbox { for all } |\lambda |>1 \mbox{ and } n=0,1,2, \cdots $$ \item \emph{Strongly Kreiss bounded} if there is a $C>0$ such that $$ \|(\lambda I-T)^{-k}\| \le \frac{ C}{(|\lambda|-1)^k} \;\; \mbox { for all } |\lambda |>1 \mbox{ and } k=1, 2, \cdots $$ \item \emph{Kreiss bounded} if there is a $C>0$ such that $$ \|(\lambda I-T)^{-1}\| \le \frac{ C}{|\lambda|-1} \;\; \mbox { for all } |\lambda |>1. $$ \end{enumerate} \end{definition} \begin{remark} {\rm \begin{enumerate} \item In \cite{MSZ}, it is proved that an operator $T$ is uniformly Kreiss bounded if and only if there is a $C$ such that $$ \|M_{n}(\lambda T)\| \le C \;\; \mbox{ for } |\lambda |=1 \mbox{ and } n=0,1,2, \cdots. $$ \item We recall that $T$ is strongly Kreiss bounded if and only if $$ \|e^{zT}\| \le M e^{|z|}, \mbox{ for all } z \in \mathbb{C}. $$ \item In \cite{GZ08}, it is shown that every strongly Kreiss bounded operator is uniformly Kreiss bounded. MacCarthy (see \cite {Shi}) proved that if $T$ is strongly Kreiss bounded, then $\|T^n\|\le Cn^{\frac{1}{2}}$. \item There exist Kreiss bounded operators which are not Ces\`{a}ro bounded, and conversely \cite{SZ}. \item On finite-dimensional Hilbert spaces, the classes of uniformly Kreiss bounded, strongly Kreiss bounded, Kreiss bounded and power bounded operators coincide. \item Any absolutely Ces\`{a}ro bounded operator is uniformly Kreiss bounded. \end{enumerate} } \end{remark} Let $X$ be the space of all bounded analytic functions $f$ on the unit disk of the complex plane such that their derivatives $f'$ belong to the Hardy space $H^1$, endowed with the norm $$ \|f\| = \|f\|_{\infty} + \|f'\|_{H^1}\;. $$ Then the multiplication operator $M_z$, acting on $X$, is Kreiss bounded but fails to be power bounded. Moreover, this operator is not uniformly Kreiss bounded (see \cite{SW}).
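Let us record a quick verification of two of the implications displayed in the diagram below. If $\|T^n\|\le C$ for all $n$, then for every $x\in X$ and every $N$, $$ \frac{1}{N}\sum_{j=1}^N\|T^jx\|\le C\|x\|\;, $$ so every power bounded operator is absolutely Ces\`{a}ro bounded; moreover, for $|\lambda|>1$ and every $n$, $$ \Bigl\|\sum_{k=0}^{n}\lambda^{-k-1}T^k\Bigr\|\le C\sum_{k=0}^{\infty}|\lambda|^{-k-1}=\frac{C}{|\lambda|-1}\;, $$ so every power bounded operator is uniformly Kreiss bounded.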
Furthermore, for the Volterra operator $V$ acting on $L^p[0,1]$, $1\le p\le \infty$, the operator $I-V$ is uniformly Kreiss bounded, and for $p=2$ it is even power bounded (see \cite{MSZ}); there it is asked whether every uniformly Kreiss bounded operator on a Hilbert space is power bounded. This is related to the following question in \cite[page 279]{AS} (see also \cite{Su}): \begin{question}\label{pregunta1} If $T$ is a uniformly Kreiss bounded operator on a Banach space, does it follow that $\lim_n\| \frac{T^n}{n}\|=0$? \end{question} The following diagram displays the implications between the above notions. \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.3,>=stealth] \node[right] at (25,15) {Power bounded}; \node[right] at (10,10) {Strongly Kreiss bounded}; \node[right] at (40,10) {Absolutely Ces\`{a}ro bounded}; \node[right] at (23,5) {Uniformly Kreiss bounded}; \node[right] at (10,0) {Kreiss bounded}; \node[right] at (45,0) {Ces\`{a}ro bounded}; \node[right] at (26,-5) {\mbox {$\|T^n\|= O(n)$}}; \draw[double, ->] (30,14) -- (15,11); \draw[double, ->] (30,14) -- (50,11); \draw[double, ->] (15,9) -- (30,6); \draw[double, ->] (50,9) -- (30,6); \draw[double, ->] (30,4) -- (15,1); \draw[double, ->] (30,4) -- (50,1); \draw[double, ->] (15,-1) -- (30,-4); \draw[double, ->] (50,-1) -- (30,-4); \end{tikzpicture} \caption{Implications among the different notions related to Kreiss boundedness and Cesàro boundedness for operators on Banach spaces. } \end{figure} We recall some definitions that allow us to study properties of orbits related to the behavior of the sequence $(M_n(T))_{n\in \NN}$. \begin{definition} Let $T\in B(X)$. $T$ is \emph{topologically mixing} if for any pair $U,V$ of non-empty open subsets of $X$, there exists some $n_0 \in \NN$ such that $T^n(U) \cap V \neq \emptyset$ for all $n \geq n_0$. \end{definition} Examples of absolutely Ces\`{a}ro bounded mixing operators on $\ell^p(\mathbb{N})$ are given in \cite{MOP13} (see Section 3.7 in \cite{beltran14}), \cite{HL}, and \cite{BBMP} (see \cite{BBPW}). \bigskip Let $H$ be a Hilbert space. For a positive integer $m$, an operator $T\in B(H)$ is called an \emph{$m$-isometry} if for any $x\in H$, $$ \sum _{k=0}^m (-1)^{m-k}{ m\choose k}\|T^{k}x\|^2 =0\; . $$ We say that $T$ is a \emph{strict $m$-isometry} if $T$ is an $m$-isometry but it is not an $(m-1)$-isometry. \ \par \begin{remark} {\rm \begin{enumerate} \item For $m\ge 2$, strict $m$-isometries are not power bounded. Moreover, $\|T^n\| =O(n)$ for $3$-isometries and $\|T^n\| =O(n^{\frac{1}{2}})$ for $2$-isometries. \item There are no strict $m$-isometries on finite dimensional spaces for even $m$. See \cite[Proposition 1.23]{AgS}. \item An example of a weakly ergodic $3$-isometry is provided in \cite{AS}. \end{enumerate} } \end{remark} \ \par The paper is organized as follows: In Section 2, we prove the optimal asymptotic behavior of $\| T^n\|$ for absolutely Cesàro bounded operators and for uniformly Kreiss bounded operators. In particular, we prove that, for any $0< \varepsilon <\frac{1}{p}$, there exists an absolutely Ces\`{a}ro bounded mixing operator $T$ on $\ell^p(\mathbb{N})$, $1\le p < \infty$, with $\|T^n\|= (n+1)^{\frac{1}{p}-\varepsilon}$. Moreover, we show that any absolutely Ces\`{a}ro bounded operator on a Banach space, and any uniformly Kreiss bounded operator on a Hilbert space, satisfies $\|T^n\|=o(n)$. For absolutely Ces\`{a}ro bounded operators $T$ on Hilbert spaces we get $\| T^n\|=o(n^{\frac{1}{2}})$.
Section 3 studies ergodic properties of $m$-isometries on finite- or infinite-dimensional Hilbert spaces. For example, strict $m$-isometries with $m>3$ are not Cesàro bounded, and we give new examples of weakly ergodic 3-isometries. In Section 4 we analyze numerical hypercyclicity of $m$-isometries. In particular, we obtain that the adjoint of any unilateral forward weighted shift on $\ell ^2(\NN)$ that is a strict $m$-isometry is hypercyclic. Moreover, we prove that weakly ergodic $3$-isometries are weakly numerically hypercyclic. \section{Absolutely Ces\`{a}ro bounded operators} It is immediate that any power bounded operator is absolutely Ces\`{a}ro bounded. In general, the converse is not true. We denote by $e_n=(\delta_{n\; k})_{k\in \NN}:=(0, \ldots, 0,1,0,\ldots)$, $n\in \NN$, the standard canonical basis in $\ell^p(\NN)$ for $1\leq p<\infty $. The following theorem gives a variety of absolutely Ces\`{a}ro bounded operators with different behavior on $\ell^p (\NN)$. \begin{theorem}\label{ejemplos} Let $T$ be the unilateral weighted backward shift on $\ell ^p(\NN)$ with $1\leq p<\infty$ defined by $Te_1:=0$ and $Te_k:=w_ke_{k-1}$ for $k>1$. If $w_k:=\displaystyle \left( \frac{k}{k-1}\right)^{\alpha} $ with $0<\alpha <\frac{1}{p}$, then $T$ is absolutely Ces\`{a}ro bounded on $\ell^p(\NN)$. \end{theorem} \begin{proof} Denote $ \varepsilon := 1-\alpha p$. Then $\varepsilon >0$ and $ \alpha =\frac{1-\varepsilon}{p}$. Fix $x\in \ell^p(\NN)$ with $\|x\|=1$ given by $x:=\displaystyle \sum_{j=1}^\infty \alpha _j e_j $ and $N\in \NN$. Then \begin{eqnarray} \sum_{n=1}^N \|T^nx\|_p^p &=& \sum_{n=1}^N \sum_{j=n+1}^\infty |\alpha_j|^p\Big(\frac{j}{j-n}\Big)^{1-\varepsilon} \nonumber \\ &=& \sum_{j=2}^\infty |\alpha_j|^p\, j^{1-\varepsilon} \sum_{n=1}^{\min\{N,\;j-1\}}({j-n})^{\varepsilon-1} \nonumber \\ &=& \sum_{j=2}^{2N} |\alpha_j|^p\, j^{1-\varepsilon} \sum_{n=1}^{\min\{ N, \;j-1\}}({j-n})^{\varepsilon-1} + \sum_{j=2N+1}^\infty |\alpha_j|^p \sum_{n=1}^N \Big(\frac{j}{j-n}\Big)^{1- \varepsilon}\nonumber \\ &\le &\sum_{j=2}^{2N} |\alpha_j|^p\, j^{1-\varepsilon}\, \sum_{n=1}^{j-1}({j-n})^{\varepsilon -1} + \sum_{j=2N+1}^\infty |\alpha_j|^p \sum_{n=1}^N \Big(\frac{j}{j-n}\Big)^{1- \varepsilon} \;. \label{des} \end{eqnarray} Notice that for $j>2N$ and $n\leq N$, we have that $$ \left( \frac{j}{j-n} \right)^{1-\varepsilon}\leq 2^{1-\varepsilon}<2\;. $$ Hence $$ \sum_{j=2N+1}^\infty |\alpha_j|^p \sum_{n=1}^N \Big(\frac{j}{j-n}\Big)^{1- \varepsilon}<2N\sum_{j=2N+1}^\infty |\alpha _j|^p\leq 2N \;. $$ We can estimate the first term of (\ref{des}) in the following way: \begin{eqnarray*} \sum_{n=1}^{j-1} (j-n)^{\varepsilon -1} &= & \sum_{n=1}^{j-1} n^{\varepsilon -1} < 1+ \int _1^{j-1} t^{\varepsilon -1}dt \\ &\le& \frac{(j-1)^\varepsilon}{\varepsilon} <\frac{j^\varepsilon }{\varepsilon} \;. \end{eqnarray*} Thus \begin{eqnarray*} \sum_{n=1}^N \| T^nx\|^p_p &\leq & \sum_{j=2}^{2N} |\alpha _j|^pj^{1-\varepsilon} \frac{ j^{\varepsilon }}{\varepsilon} + \sum_{j=2N+1} ^\infty |\alpha _j|^p 2 N\\ &=& \sum_{j=2}^{2N} |\alpha _j|^p \frac{j}{\varepsilon} + 2N \sum_{j=2N+1} ^\infty |\alpha _j|^p \\ &\leq & \frac{2N}{\varepsilon} \sum_{j=2}^{2N} |\alpha _j|^p + 2N \sum_{j=2N+1} ^\infty |\alpha _j|^p \\ & \leq & 2N \left( \frac{1}{\varepsilon} +1\right) \;. \end{eqnarray*} By Jensen's inequality $$ \left( \frac{1}{N} \sum_{n=1}^N \|T^nx\|_p\right)^p \leq \frac{1}{N} \sum_{n=1}^N \| T^nx\|_p^p\leq 2 \left( \frac{1}{\varepsilon } +1\right) \; , $$ which yields the result.
\end{proof} As a consequence of the above theorem, we obtain the following. \begin{corollary} There exist absolutely Ces\`{a}ro bounded operators which are not power bounded. \end{corollary} \begin{proof} It is an immediate consequence of Theorem \ref{ejemplos}, since the operators constructed there satisfy $\|T^n\|\geq \|T^ne_{n+1}\|=(n+1)^{\alpha}\to\infty$. \end{proof} \begin{corollary} For $1<p<2$, there exist absolutely Ces\`{a}ro bounded operators on $\ell^p(\NN)$ which are not strongly Kreiss bounded. \end{corollary} \begin{proof} In view of \cite[Remark 3]{Shi}, if $T$ is a strongly Kreiss bounded operator, then $\|T^n\|\le Cn^{\frac{1}{2}}$. The conclusion follows from Theorem \ref{ejemplos} by taking $\frac{1}{2}<\alpha<\frac{1}{p}$, which is possible since $p<2$: then $\|T^n\|=(n+1)^{\alpha}$ grows faster than $n^{\frac{1}{2}}$. \end{proof} \begin{corollary}\label{mixing} Let $1\leq p<\infty$ and $0<\varepsilon<1$. Then there exists an absolutely Ces\`{a}ro bounded operator $T$ on $\ell^p(\NN)$ which is mixing and satisfies $\|T^n\| =(n+1)^{\frac{(1-\varepsilon)}{p}}$ for all $n\in\NN$. \end{corollary} \begin{proof} By Theorem \ref{ejemplos}, applied with $\alpha=\frac{1-\varepsilon}{p}$, we have that $T$ is absolutely Cesàro bounded and \begin{equation}\label{ejl} \| T^n\|=(n+1)^{\frac{(1-\varepsilon)}{p}} \;. \end{equation} Moreover, by \cite[Theorem 4.8]{GEP11}, $T$ is mixing if $\left( \prod_{k=2}^n w_k \right)^{-1} \to 0$ as $n\to \infty $. Indeed $$ \left( \prod_{k=2}^n w_k \right)^{-1}=\frac{1}{n^\alpha } \to 0\;, $$ hence $T$ is mixing. \end{proof} Further consequences can be obtained for operators on Hilbert spaces. \begin{corollary} There exists a uniformly Kreiss bounded Hilbert space operator that is not absolutely Cesàro bounded. \end{corollary} \begin{proof} Let $H$ be a separable infinite-dimensional Hilbert space with an orthonormal basis $(u_k)_{k\in \NN}$. Let $0<\alpha<1/2$. Let $T\in B(H)$ be defined by $Tu_{k}:= \left( \frac{k+1}{k}\right)^\alpha u_{k+1}$. A straightforward computation gives that $T$ is not absolutely Cesàro bounded, since $\| T^nu_1\| =(n+1)^\alpha \to\infty$. Note that its adjoint $T^*$ is given by $T^* u_k=\left(\frac{k}{k-1}\right)^\alpha u_{k-1}$ for $k>1$ and $T^*u_1=0$. By Theorem \ref{ejemplos}, $T^*$ is absolutely Cesàro bounded, and hence uniformly Kreiss bounded. Since uniform Kreiss boundedness is preserved under taking adjoints, we deduce that $T$ is uniformly Kreiss bounded. \end{proof} \bigskip It is easy to check that \begin{equation}\label{media} \frac{T^n}{n+1} = M_n(T)-\frac{n}{n+1} M_{n-1}(T)\;. \end{equation} We notice that Cesàro bounded operators satisfy $\| T^n\|=O(n)$. Moreover, Theorem \ref{ejemplos} gives an example of a uniformly Kreiss bounded operator on $\ell^1(\NN)$ such that $\| T^n\| =(n+1)^{1-\varepsilon}$ with $0<\varepsilon <1$. We concentrate now on Question~\ref{pregunta1} for operators on Hilbert spaces. \begin{theorem}\label{kreiss} Let $T$ be a uniformly Kreiss bounded operator on a Hilbert space $H$. Then $\lim_{n\to\infty}n^{-1}\|T^n\|=0$. \end{theorem} \begin{proof} Let $C>0$ satisfy $\bigl\|\displaystyle \sum_{j=0}^{N-1}(\la T)^j\bigr\|\le CN$ for all $\la, |\la|=1$ and all $N$. We need several claims. \begin{claim}\label{lemma1} Let $x\in H$, $\|x\|=1$ and $N\in\NN$. Then $$ \sum_{j=0}^{N-1}\|T^jx\|^2\le C^2N^2. $$ \end{claim} \begin{proof} Consider the normalized Lebesgue measure on the unit circle. We have $$ C^2N^2\ge \int_{|\la|=1} \bigl\|(I+\la T+\cdots+ (\la T)^{N-1})x\bigr\|^2 d\la $$ $$ = \sum_{j,k=0}^{N-1}\int_{|\la|=1}\bigl\langle (\la T)^jx,(\la T)^kx\bigr\rangle d\la= \sum_{j=0}^{N-1}\int_{|\la|=1}\bigl\langle (\la T)^jx,(\la T)^jx\bigr\rangle d\la= \sum_{j=0}^{N-1}\|T^jx\|^2. $$ \end{proof} \begin{claim}\label{lemma2} Let $0<M<N$ and $x\in H$, $\|x\|=1$.
Then $$ \sum_{j=0}^{M-1}\frac{\|T^Nx\|^2}{\|T^{N-j}x\|^2}\le C^2M^2. $$ \end{claim} \begin{proof} Set $y=T^Nx$. Since $T^*$ is also uniformly Kreiss bounded, we have $$ \int_{|\la|=1} \bigl\|(I+(\bar\la T^*)+\cdots+(\bar\la T^*)^{M-1})y\bigr\|^2 d\la\le C^2M^2\|y\|^2. $$ On the other hand, as in Claim \ref{lemma1} we have $$ \int_{|\la|=1} \bigl\|(I+(\bar\la T^*)+\cdots+(\bar\la T^*)^{M-1})y\bigr\|^2 d\la= \sum_{j=0}^{M-1}\|T^{*j}y\|^2 $$ $$ \ge \sum_{j=0}^{M-1}\Bigl|\Bigl\langle T^{*j}y,\frac{T^{N-j}x}{\|T^{N-j}x\|}\Bigr\rangle\Bigr|^2= \sum_{j=0}^{M-1}\Bigl|\Bigl\langle y, \frac{T^Nx}{\|T^{N-j}x\|}\Bigr\rangle\Bigr|^2 \ge\|y\|^2 \sum_{j=0}^{M-1}\frac{\|T^Nx\|^2}{\|T^{N-j}x\|^2}. $$ Hence $$ \sum_{j=0}^{M-1}\frac{\|T^Nx\|^2}{\|T^{N-j}x\|^2}\le C^2M^2. $$ \end{proof} \begin{claim}\label{lemma3} Let $x\in H$, $\|x\|=1$ and $N\in\NN$. Then $$ \sum_{j=0}^{N-1}\frac{1}{\|T^jx\|}\ge \frac{\sqrt{N}}{C}. $$ \end{claim} \begin{proof} Let $a_j=\|T^jx\|$. By Claim \ref{lemma1}, $\sum_{j=0}^{N-1}a_j^2\le C^2N^2$. So $$ \sum_{j=0}^{N-1}a_j\le\Bigl(\sum_{j=0}^{N-1} a_j^2\Bigr)^{1/2}\cdot\sqrt {N}\le CN^{3/2}. $$ Let $B=N\Bigl(\sum_{j=0}^{N-1}\frac{1}{a_j}\Bigr)^{-1}$ and $A=N^{-1}\sum_{j=0}^{N-1}a_j$ be the harmonic and arithmetic means of $a_j$'s for $j\in\{ 0, \ldots , N-1\}$, respectively. By the well-known inequality between these two means, we have $$ \sum_{j=0}^{N-1}\frac{1}{\|T^jx\|}= \frac{N}{B}\ge \frac{N}{A}= N^2\Bigl(\sum_{j=0}^{N-1}a_j\Bigr)^{-1}\ge \frac{N^2}{CN^{3/2}}= \frac{\sqrt{N}}{C}. $$ \end{proof} \begin{claim}\label{lemma4} Let $0<M_1<M_2<N$ and $\|x\|=1$. Then $$ \sum_{j=M_1}^{M_2-1}\frac{\|T^{N-j}x\|^2}{\|T^Nx\|^2}\ge \frac{(M_2-M_1)^2}{C^2M_2^2}. $$ \end{claim} \begin{proof} Let $a_j=\frac{\|T^{N-j}x\|^2}{\|T^Nx\|^2}$. By Claim \ref{lemma2}, $$ \sum_{j=M_1}^{M_2-1}\frac{1}{a_j}\le \sum_{j=0}^{M_2-1}\frac{1}{a_j}\le C^2M_2^2. $$ Let $A$ and $B$ be the arithmetic and harmonic mean of $a_j$'s for $j\in \{ M_1, \ldots , M_2-1\}$, respectively. We have $$ \sum_{j=M_1}^{M_2-1} a_j= (M_2-M_1)A\ge (M_2-M_1)B= (M_2-M_1)^2\Bigl(\sum_{j=M_1}^{M_2-1}\frac{1}{a_j}\Bigr)^{-1}\ge \frac{(M_2-M_1)^2}{C^2M_2^2}. $$ \end{proof} \noindent{\it Proof of Theorem \ref{kreiss}.} Suppose on the contrary that $\limsup_{n\to\infty}n^{-1}\|T^n\|>c>0$. Choose $K>8C^6c^{-2}$. Find $N>2^{K+1}$ with $\|T^N\|>cN$ and $x\in H$, $\|x\|=1$ with $$ \|T^Nx\|>cN. $$ For $|\la|=1$ let $y_\la=\sum_{j=0}^{N-1}\frac{(\la T)^jx}{\|T^jx\|}$. Then $$ \int_{|\la|=1}\|y_{\la}\|^2 d\la=N $$ and $$ \int_{|\la|=1}\bigl\|(I+\la T+\cdots+(\la T)^{N-1})y_{\la}\bigr\|^2 d\la\le C^2N^2 \int_{|\la|=1}\|y_{\la}\|^2 d\la=C^2N^3. $$ On the other hand, $$ \int_{|\la|=1}\bigl\|(I+\la T+\cdots+(\la T)^{N-1})y_{\la}\bigr\|^2 d\la $$ $$ = \int_{|\la|=1}\Bigl\|\sum_{j=0}^{2N-2} (\la T)^jx \sum_{r=\max\{0,\,j-N+1\}}^{\min\{N-1,j\}}\frac{1}{\|T^rx\|}\Bigr\|^2 d\la $$ $$ = \sum_{j=0}^{2N-2} \|T^jx\|^2 \Bigl(\sum_{r=\max\{0,\,j-N+1\}}^{\min\{N-1,j\}}\frac{1}{\|T^rx\|}\Bigr)^2\ge \sum_{j=N-2^K}^{N-1} \|T^jx\|^2 \Bigl(\sum_{r=0}^{N-2^K}\frac{1}{\|T^rx\|}\Bigr)^2, $$ where $$ \sum_{r=0}^{N-2^K}\frac{1}{\|T^rx\|}\ge \frac{\sqrt{N-2^K}}{C}\ge\frac{\sqrt{N}}{C\sqrt{2}} $$ and $$ \sum_{j=N-2^K}^{N-1} \|T^jx\|^2\ge \|T^Nx\|^2\sum_{k=0}^{K-1} \sum_{j=N-2^{k+1}}^{N-2^k-1} \frac{\|T^jx\|^2}{\|T^Nx\|^2}\ge c^2N^2\sum_{k=0}^{K-1} \frac{2^{2k}}{C^2 2^{2k+2}}= \frac{c^2N^2K}{4C^2}. $$ Hence $$ \int_{|\la|=1}\bigl\|(I+\la T+\cdots+(\la T)^{N-1})y_{\la}\bigr\|^2 d\la\ge \frac{c^2N^2K}{4C^2}\cdot\frac{N}{2C^2}=\frac{c^2KN^3}{8C^4}> C^2N^3, $$ a contradiction. This finishes the proof.
\end{proof} \begin{corollary} Any uniformly Kreiss bounded operator on a Hilbert space is mean ergodic. \end{corollary} We are interested in the behavior of $\frac{\| T^n\|}{n}$ when $T$ is an absolutely Cesàro bounded operator. The following result provides an answer. \begin{theorem}\label{residual} Let $X$ be a Banach space, $C>0$ and let $T\in B(X)$ satisfy $\|T^n\|\le Cn$ for all $n\in\NN$. Then either $\displaystyle \lim_{n\to\infty}n^{-1}\|T^n\|=0$ or the set $$ \Bigl\{x\in X: \sup_N N^{-1}\sum_{n=1}^N\|T^nx\|=\infty\Bigr\} $$ is residual in $X$. \end{theorem} \begin{proof} Suppose that $\frac{\|T^n\|}{n}\not\to 0$. So there exists $c>0$ such that $$ \limsup_{n\to\infty}n^{-1}\|T^n\|>c. $$ For $s\in\NN$ let $$ M_s=\Bigl\{x\in X: \sup_N N^{-1}\sum_{n=1}^N\|T^nx\|>s\Bigr\}. $$ Clearly $M_s$ is open. We show first that each $M_s$ contains a unit vector. Let $s\in\NN$. Find $N>\exp\Bigl(\frac{Cs}{c}\Bigr)+1$ with $\|T^N\|>cN$. Find a unit vector $x\in X$ such that $\|T^Nx\|> cN$. For $k=1,\dots,N-1$ we have $\|T^Nx\|\le \|T^k\|\cdot\|T^{N-k}x\|$, and so $$ \|T^{N-k}x\|\ge\frac{\|T^Nx\|}{\|T^k\|}\ge\frac{cN}{Ck}. $$ Thus $$ N^{-1}\sum_{k=1}^N\|T^kx\|\ge \sum_{k=1}^{N-1}\frac{c}{Ck}\ge \frac{c}{C}\ln(N-1)>s, $$ and so $x\in M_s$. We show that in fact each $M_s$ is dense. Fix $s\in\NN$, $y\in X$ and $\e>0$. Let $s'>\frac{s}{\e}$. Find $x\in M_{s'}$, $\|x\|=1$. For each $j\in\NN$ we have $$ \|T^j(y+\e x)\|+\|T^j(y-\e x)\|\ge 2\e \|T^jx\|. $$ So $$ \sup_N N^{-1}\sum_{j=1}^N\|T^j(y+\e x)\|+ \sup_N N^{-1}\sum_{j=1}^N\|T^j(y-\e x)\|\ge \sup_N \frac{2\e}{N}\sum_{j=1}^N\|T^jx\|> 2\e s'>2s. $$ Hence either $y+\e x\in M_s$ or $y-\e x\in M_s$. Since $\e>0$ was arbitrary, $M_s$ is dense. By the Baire category theorem, $$ \bigcap_{s=1}^\infty M_s=\Bigl\{x\in X: \sup_N N^{-1}\sum_{j=1}^N\|T^jx\|=\infty\Bigr\} $$ is a residual set. \end{proof} \begin{corollary}\label{ACB} Let $T\in B(X)$ be an absolutely Ces\`{a}ro bounded operator. Then $\displaystyle \lim_{n\to\infty}\frac{\|T^n\|}{n}=0$. \end{corollary} \begin{proof} There exists $C>0$ such that $$ \|T^nx\|\leq \sum_{k=1}^n \| T^kx\|\leq Cn\|x\| $$ for all $x\in X$. By Theorem \ref{residual}, we have that $\displaystyle\lim_{n\to\infty}\frac{\|T^n\|}{n}=0$, since the second possibility in Theorem \ref{residual} contradicts the assumption that $T$ is absolutely Ces\`{a}ro bounded. \end{proof} As a consequence, we obtain a result that, for operators on reflexive Banach spaces, slightly improves Lorch's theorem \cite{ABR09}. \begin{corollary} Any absolutely Ces\`{a}ro bounded operator on a reflexive Banach space is mean ergodic. \end{corollary} Hence, by Corollary \ref{mixing}, we obtain the following. \begin{corollary} There exist mean ergodic and mixing operators on $\ell^p(\mathbb{N})$ for $1< p <\infty$. \end{corollary} It is worth mentioning that results of this type already appear in the PhD Thesis of Mar\'{\i}a Jos\'e Beltr\'an Meneu \cite{beltran14}, provided by the fourth author (see Section 3.7 in \cite{beltran14}), and in \cite{AS}. For $0<\varepsilon<\frac{1}{2}$, Theorem \ref{ejemplos} provides an example of an absolutely Ces\`{a}ro bounded operator on $\ell^2(\mathbb{N})$ such that $\|T^n\|= (n+1)^{\frac{1}{2}-\varepsilon}$. On the other hand, if an operator $T$ on a Hilbert space $H$ satisfies $\|T^n\|\ge Cn^{\frac{1}{2}+\varepsilon}$ for all $n$ and some $\varepsilon >0$, then by \cite[Theorem 3]{MV} there exists $x\in H$ such that $\|T^nx\|\rightarrow \infty$; thus $T$ is not absolutely Ces\`{a}ro bounded.
Hence it is natural to ask: does every absolutely Ces\`{a}ro bounded operator on a Hilbert space satisfy $\lim_{n\to\infty} n^{-1/2}\|T^n\|=0$? \begin{theorem} \label{acbhilbert} Let $H$ be a Hilbert space and let $T\in B(H)$ be an absolutely Ces\`{a}ro bounded operator. Then $\displaystyle \lim_{n\to\infty}\displaystyle \frac{ \|T^n\|}{n^{1/2}}=0$. \end{theorem} \begin{proof} Let $C>0$ satisfy $N^{-1}\sum_{n=0}^{N-1}\|T^nx\|<C\|x\|$ for all $N\in\NN$ and $x\in H$. Suppose on the contrary that $\limsup_{n\to\infty} n^{-1/2}\|T^n\|>0$. We distinguish two cases: \medskip \leftline{{\it Case I}. Suppose that $\limsup_{n\to\infty} n^{-1/2}\|T^n\|=\infty$.} Then there exist positive integers $N_1<N_2<\cdots$ and positive constants $K_1<K_2<\cdots$ with $\lim_{m\to\infty}K_m=\infty$ such that $\|T^{N_m}\|>K_m N_m^{1/2}$ and $$ \|T^j\|\le 2K_m j^{1/2}\qquad(j\le N_m). $$ Let $x_m\in H$ be a unit vector satisfying $\|T^{N_m}x_m\|> K_m N_m^{1/2}$. Let $N_m'=\Bigl[\frac{N_m}{6}\Bigr]$ \ \ (the integer part). Consider the set $$ \{\|T^jx_m\|: 2N_m'\le j< 4N_m'\}. $$ Let $A$ be the median of this set. More precisely, we have $$ \card\{j: 2N_m'\le j< 4N_m', \|T^jx_m\|\ge A\} \ge N_m'\qquad\hbox{and} $$ $$ \card\{j: 2N_m'\le j< 4N_m', \|T^jx_m\|\le A\} \ge N_m'. $$ We have $$ 4N_m' C\ge \sum_{j=0}^{4N_m'-1}\|T^jx_m\|\ge \sum_{j=2N_m'}^{4N_m'-1}\|T^jx_m\|\ge N_m'A. $$ So $A\le 4C$ \ \ (note that this estimate does not depend on $m$). For $\la\in\CC$, $|\la|=1$ let $$ y_{m,\la}=\sum_{j=1}^{N_m}\frac{(\la T)^jx_m}{\|T^jx_m\|}. $$ Then $$ \int\|y_{m,\la}\|^2 d\la= \int\sum_{j,j'=1}^{N_m} \frac{\langle \la^j T^jx_m, \la^{j'} T^{j'}x_m\rangle} {\|T^jx_m\|\cdot\|T^{j'}x_m\|} d\la $$ $$ = \int\sum_{j=1}^{N_m} \frac{\langle T^jx_m, T^{j} x_m\rangle} {\|T^jx_m\|^2} d\la=N_m. $$ Let $$ u_{m,\la}=(I+\la T+\cdots+ (\la T)^{N_m-1})y_{m,\la}. $$ Then $\|u_{m,\la}\|\le CN_m\|y_{m,\la}\|$ and $$ \int\|u_{m,\la}\|^2 d\la\le C^2N_m^2\int\|y_{m,\la}\|^2 d\la= C^2N_m^3. $$ On the other hand, $$ u_{m,\la}=\sum_{j=1}^{N_m} (\la T)^j x_m \sum_{k=1}^j \frac{1}{\|T^kx_m\|}+ \sum_{j=N_m+1}^{2N_m-1}(\la T)^jx_m\sum_{k=j-N_m+1}^{N_m}\frac{1}{\|T^kx_m\|}. $$ As above, $$ \int \|u_{m,\la}\|^2 d\la\ge \sum_{j=1}^{N_m} \|T^j x_m\|^2\Bigl(\sum_{k=1}^{j}\frac{1}{\|T^kx_m\|}\Bigr)^2 \ge \|T^{N_m}x_m\|^2\Bigl(\sum_{k=2N_m'}^{4N_m'-1}\frac{1}{\|T^kx_m\|}\Bigr)^2 $$ $$ \ge K_m^2 N_m \cdot \Bigl(\frac{N'_m}{A}\Bigr)^2\ge K_m^2\cdot{\rm const}\cdot N_m^3. $$ Since $K_m\to\infty$, this is a contradiction. \medskip {\it Case II.} Let $K$ satisfy $0<K<\limsup_{n\to\infty}n^{-1/2}\|T^n\|<2K$. Let $N_0$ satisfy $n^{-1/2}\|T^n\|\le 2K\quad(n\ge N_0)$. Find an increasing sequence $(N_m)$ of positive integers such that $\|T^{N_m}\|>KN_m^{1/2}$. Find $x_m$, $\|x_m\|=1$ such that $\|T^{N_m}x_m\|>KN_m^{1/2}$. As in case I, let $N_m'=\Bigl[\frac{N_m}{6}\Bigr]$ and let $A$ be the median of the set $$ \{\|T^jx_m\|: 2N_m'\le j< 4N_m'\}. $$ Again one has $A\le 4C$. As in case I, for $|\la|=1$ let $$ y_{m,\la}=\sum_{j=1}^{N_m}\frac{(\la T)^jx_m}{\|T^jx_m\|} $$ and $$ u_{m,\la}=(I+\la T+\cdots+ (\la T)^{N_m-1})y_{m,\la}. $$ Again we have $\displaystyle\int\|y_{m,\la}\|^2 d\la=N_m$ and $$ \int\|u_{m,\la}\|^2 d\la\le C^2N_m^3.
$$ On the other hand, $$ u_{m,\la}= \sum_{j=1}^{N_m} (\la T)^j x_m \sum_{k=1}^j \frac{1}{\|T^kx_m\|}+ \sum_{j=N_m+1}^{2N_m-1}(\la T)^jx_m\sum_{k=j-N_m+1}^{N_m}\frac{1}{\|T^kx_m\|} $$ and $$ \int \|u_{m,\la}\|^2 d\la \ge \sum_{j=1}^{N_m} \|T^j x_m\|^2\Bigl(\sum_{k=1}^{j}\frac{1}{\|T^kx_m\|}\Bigr)^2 \ge \sum_{j=4N_m'}^{N_m-1} \|T^j x_m\|^2\Bigl(\sum_{k=2N_m'}^{4N_m'-1}\frac{1}{\|T^kx_m\|}\Bigr)^2 $$ $$ \ge \sum_{j=4N_m'}^{N_m-1} \|T^j x_m\|^2\Bigl(\frac{N_m'}{A}\Bigr)^2. $$ Moreover, for $4N_m'\le j< N_m$ we have $$ KN_m^{1/2}< \|T^{N_m}x_m\|\le \|T^{N_m-j}\|\cdot\|T^jx_m\|\le 2K(N_m-j)^{1/2}\|T^jx_m\|. $$ So $$ \sum_{j=4N_m'}^{N_m-1} \|T^j x_m\|^2\ge \sum_{j=4N_m'}^{N_m-1} \frac{N_m}{4(N_m-j)}\ge \frac{N_m}{4}\sum_{j=1}^{2N_m'}\frac{1}{j}\ge \frac{N_m\ln{(2N_m')}}{4}. $$ Hence $$ \int\|u_{m,\la}\|^2 d\la\ge {\rm const} \cdot N_m^3 \ln{(2N_m')}, $$ a contradiction. \end{proof} The following picture summarizes the implications between the properties studied here and the behaviour of $\|T^n\|$. \begin{center} \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.3] \node at (20,10) {absolutely Ces\`{a}ro bounded}; \node at (0,10) {Uniformly Kreiss bounded}; \node at (0,0) {\mbox {$\|T^n\|= o(n)$}}; \node at (14,0) {\mbox {$\|T^n\|= o(n)$}}; \node at (26,0) {\mbox {$\|T^n\|= o(n^{1/2})$}}; \draw[double, <-] (7.9,10) -- (12,10); \draw[double, ->] (0,7) -- (0,2); \draw[double, ->] (18,7) -- (14,2); \draw[double, ->] (22,7) -- (26,2); \node[left] at (0,4.5) {Hilbert space}; \node[left] at (16,4.5) {Banach space }; \node[right] at (24,4.5) {Hilbert space}; \end{tikzpicture} \caption{Behavior of $\| T^n\|$ for uniformly Kreiss bounded and absolutely Ces\`{a}ro bounded operators. } \end{figure} \end{center} We finish this section with a couple of questions. \begin{question} Are there absolutely Ces\`{a}ro bounded operators on Hilbert spaces which are not strongly Kreiss bounded? \end{question} \begin{question} Are there strongly Kreiss bounded operators which are not absolutely Ces\`{a}ro bounded? \end{question} \section{Ergodic properties for $m$-isometries} The following implications among various concepts in ergodic theory hold for operators on reflexive Banach spaces; they are direct consequences of the corresponding definitions together with the classical mean ergodic theorem: \begin{center} \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.3] \node at (5.7,0) {Power bounded}; \node at (16.5,0) {Mean ergodic}; \node at (16.5,-5) {$\left\| \frac{T^nx}{n}\right\| \to 0 \;\;\; \forall x\in X$}; \node at (26.5,0) {Weakly ergodic}; \node at (37,0) {Cesàro bounded}; \draw[double, ->] (11,0) -- (12.5,0); \draw[double, ->] (20.5,0) -- (22,0); \draw[double, ->] (31,0) -- (32.5,0); \draw[double, ->] (16.5,-1) -- (16.5,-3.5); \end{tikzpicture} \caption{Implications between different notions in ergodic theory.\label{figure2}} \end{figure} \end{center} In general, the converse implications of the above figure are not true. \ \par The purpose of this section is to study $m$-isometries within the framework of these definitions. It is clear that isometries ($1$-isometries) are power bounded. It is natural to ask how strict $m$-isometries fit into the scheme of Figure \ref{figure2} on finite- or infinite-dimensional Hilbert spaces. The following example is due to Assani. See \cite[page 10]{E} and \cite[Theorem 5.4]{AS} for more details. \begin{example}\label{ejemplo1} {\rm Let $H$ be $\RR^2$ or $\CC^2$ and $T=\left( \begin{array}{cc} -1 & 2 \\ 0 & -1 \\ \end{array} \right) $.
It is clear that $$ T^n= \left( \begin{array}{cc} (-1)^n& (-1)^{n-1}2n \\ 0 & (-1)^n \\ \end{array} \right) \; $$ and $\sup_{n\in \NN} \| M_n(T)\| <\infty $. Then $T$ is Cesàro bounded, while $\frac{\|T^nx\|}{n}$ does not converge to 0 for some $x\in H$. Hence $T$ is not mean ergodic. Note that $T$ is a strict 3-isometry. } \end{example} The above example shows that on a 2-dimensional Hilbert space there exists a 3-isometry which is Cesàro bounded and not mean ergodic. This example can be generalized to any Hilbert space of dimension greater than or equal to 2. Let $H$ be a Hilbert space and $T\in B(H)$. Tomilov and Zemánek in \cite{TZ} considered the Hilbert space ${\cal H}= H\oplus H$ with the norm $$ \|x_1 \oplus x_2\|_{H\oplus H} = \sqrt{\|x_1\|^2+\|x_2\|^2} \;, $$ and the bounded linear operator ${\cal T}$ on $\cal H$ given by the matrix \[ {\cal T}:= \left( \begin{array}{cc} T& T-I \\ 0 & T \\ \end{array} \right) \;. \] In fact, they obtained the following relations between ergodic properties of the operators $\mathcal{T}$ and $T$. \begin{lemma}\label{lema1}\cite[Lemma 2.1]{TZ} Let $T\in B(H)$. Then \begin{enumerate} \item $\mathcal{T}$ is Cesàro bounded if and only if $T$ is power bounded. \item $\mathcal{T}$ is mean ergodic if and only if $T^n$ converges in the strong topology of $H$. \item $\mathcal{T}$ is weakly ergodic if and only if $T^n$ converges in the weak topology of $H$. \end{enumerate} \end{lemma} We recall some properties of $m$-isometries. \begin{lemma}\label{lema2} Let $T\in B(H)$ and $m\in \NN$. Then \begin{enumerate} \item \cite[Theorem 2.1]{BMNe} $T$ is a strict $m$-isometry if and only if $\| T^nx\|^2$ is a polynomial in $n$ of degree at most $m-1$ for all $x\in H$, and there exists $x_m\in H$ such that $\| T^nx_m\|^2 $ is a polynomial of degree exactly $m-1$. \item \cite[Theorem 2.7]{BMNo} If $H$ is a finite dimensional Hilbert space, then $T$ is a strict $m$-isometry with odd $m$ if and only if there exist a unitary $U\in B(H)$ and a nilpotent operator $Q\in B(H)$ of order $\frac{m+1}{2}$ such that $UQ=QU$ and $T=U+Q$. \item \cite[Theorem 2.2]{BMNo} If $A\in B(H)$ is an isometry and $Q\in B(H)$ is a nilpotent operator of order $n$ that commutes with $A$, then $A+Q$ is a strict $(2n-1)$-isometry. \end{enumerate} \end{lemma} \begin{example}\label{ejemplo2} {\rm Let $H$ be a Hilbert space and $T\in B(H)$ such that $T=I+Q$ where $Q^n=0$ for some $n\geq 2$ and $Q^{n-1}\neq 0$. Define the Hilbert space ${\cal H}$ and the bounded linear operator ${\cal T}$ on $\cal H$ as above. By construction $\mathcal{T}= A+\mathcal{Q}$, where $$ A:=\left( \begin{array}{cc} I & 0 \\ 0 & I \\ \end{array} \right) \;, \;\;\;\;\; \mathcal{Q}:=\left( \begin{array}{cc} Q & Q \\ 0 & Q \\ \end{array} \right) \;, $$ with $\mathcal{Q}^n=0$ and $\mathcal{Q}^{n-1} \neq 0$. By parts (3) and (1) of Lemma \ref{lema2}, $T$ is a strict $(2n-1)$-isometry and hence not power bounded. Thus, by Lemma \ref{lema1}, we have that $\mathcal{T}$ is not Cesàro bounded. It is also simple to verify, using Lemma \ref{lema2}, that $\mathcal{T}$ is a strict $(2n-1)$-isometry. } \end{example} \begin{example}\label{ejemplo3} {\rm Let $\lambda $ be a unimodular complex number different from 1. Then $$ {\cal A}:= \left( \begin{array}{lc} \la& \la-1 \\ 0 & \la \\ \end{array} \right) $$ is a Cesàro bounded operator (since $\sup_n|\la^n|<\infty$), it is not mean ergodic (since $\la^nx$ does not converge) and is a $3$-isometry on $\CC^2$, see Lemmas \ref{lema1} and \ref{lema2}.
} \end{example} Now we give some ergodic properties of $m$-isometries. Example \ref{ejemplo1} provides a Ces\`{a}ro bounded strict $3$-isometry. However, as a consequence of Theorem \ref{kreiss} and Lemma \ref{lema2}, we obtain the following. \begin{corollary} There is no uniformly Kreiss bounded strict $3$-isometry. \end{corollary} \begin{theorem}\label{Cesarobounded} Assume that $H$ is an $n$-dimensional Hilbert space. Then \begin{enumerate} \item If $n\geq 2$, then there exists a Cesàro bounded strict 3-isometry. \item The isometries are the only mean ergodic strict $m$-isometries on $H$. \end{enumerate} \end{theorem} \begin{proof} {\it (1)} Let $$ {\cal A}:= \left( \begin{array}{lc} \la& \la-1 \\ 0 & \la \\ \end{array} \right) $$ be the operator on $\CC^2$ considered in Example \ref{ejemplo3}. Write $H=\CC^2\oplus\CC^{n-2}$ and let ${\cal B}: = {\cal A}\oplus I_{\CC^{n-2}}$. Then $\cal B$ is a strict $3$-isometry which is Ces\`{a}ro bounded (and not power bounded). {\it (2)} Suppose that $T$ is a strict $m$-isometry with $m>1$ on a finite-dimensional Hilbert space; then $m$ is odd, and so $m\ge 3$. Using part (1) of Lemma \ref{lema2}, it is easy to prove that $\frac{\|T^nx\|}{n}$ does not converge to 0 for some $x\in H$. So $T$ is not mean ergodic. \end{proof} In infinite-dimensional Hilbert spaces we can say more. \begin{theorem} Let $T$ be a strict $m$-isometry. Then \begin{enumerate} \item If $m>3$, then $T$ is not Cesàro bounded. In particular there is no weakly ergodic strict $m$-isometry for $m>3$. \item If $m\geq 3$, then $T$ is not mean ergodic. \end{enumerate} \end{theorem} \begin{proof} By part (1) of Lemma \ref{lema2}, there exists $x\in H$ such that $\| T^nx\|^2$ is a polynomial in $n$ of degree exactly $m-1$. For $m>3$ this gives $\frac{\|T^nx\|}{n}\to\infty$, so by equation (\ref{media}) the sequence $(M_n(T))_{n\in\NN}$ is unbounded; for $m\geq 3$ the quotient $\frac{\|T^nx\|}{n}$ does not converge to 0, so $T$ is not mean ergodic. \end{proof} \begin{theorem}\label{Rachjman} There exists a Ces\`{a}ro bounded and weakly ergodic strict $3$-isometry. \end{theorem} \begin{proof} Let $U$ be the bilateral shift. Define $$ {\cal M} : = \left( \begin{array}{cc} U& U-I \\ 0 & U\\ \end{array} \right) \;. $$ First observe that ${\cal M}$ is Ces\`{a}ro bounded, by part (1) of Lemma \ref{lema1}. Since $U^n\to 0$ in the weak operator topology, $\cal M$ is weakly ergodic by part (3) of Lemma \ref{lema1}. Therefore, the conclusion is derived by part (3) of Lemma \ref{lema2}. \end{proof} In \cite{AS}, an example is given of a Ces\`{a}ro bounded strict $3$-isometry $T$ on a Hilbert space $H$ for which the sequence $\left(\displaystyle \frac{ \|T^nx\|}{n}\right)_{n\in \NN}$ is bounded below for all $x\in H \setminus \{0\}$. In particular, $ \left( M_n(T)x\right)_{n\in \NN}$ diverges in norm for each $x\in H \setminus \{0\}$, although $T$ is weakly ergodic. We give a characterization of this property. Given an $m$-isometry $T$, the \emph{covariance operator} of $T$ is defined by $$ \Delta_T: = \frac{1}{(m-1)!} \sum _{j=0} ^{m-1} (-1)^{m-1-j} \binom {m-1} {j} {T^*}^j T^j\; . $$ \begin{theorem} Let $T$ be a strict $3$-isometry on a Hilbert space $H$. Then the sequence $\left(\displaystyle \frac{ \|T^nx\|}{n}\right)_{n\in \NN}$ is bounded below for all $x\in H \setminus \{0\}$ if and only if the covariance operator $\Delta_T$ is injective. \end{theorem} \begin{proof} If $T$ is a strict $3$-isometry and $\Delta_T$ is injective, then $\displaystyle \inf _{n}\frac{\|T^nx\|}{n} >0 $ for all $x\in H\setminus \{0\}$ (see the proof of \cite[Theorem 3.4]{BMM}). If $\Delta_T$ is not injective, then there exists $x\ne 0$ such that $\langle \Delta_Tx,x \rangle =0$.
By \cite[Proposition 2.3]{BMM}, we have that $\displaystyle \frac{\|T^nx\|^2}{n^2} \rightarrow \langle \Delta_Tx,x \rangle =0$, and thus the sequence $\displaystyle \left(\frac {\|T^nx\|}{n}\right)_{n\in\NN}$ is not bounded below. \end{proof} There exist weakly ergodic strict $3$-isometries with injective covariance operator $\Delta_T$ (see \cite[Section 5.2]{AS}) as well as with non-injective $\Delta_T$ (see the proof of Theorem \ref{Rachjman}). The uniform ergodic theorem of Lin \cite[Theorem]{Li} asserts that if $\displaystyle \frac{\|T^n\|}{n} \rightarrow 0$, then $T$ is uniformly ergodic if and only if the range of $I-T$ is closed. On the other hand, $T$ is uniformly ergodic if and only if $\displaystyle \frac{\|T^n\|}{n} \rightarrow 0$ and 1 is a pole of the resolvent operator. \begin{corollary} For $m>1$, there is no uniformly ergodic strict $m$-isometry on a Hilbert space. \end{corollary} \begin{proof} Uniformly ergodic operators are mean ergodic, and there is no mean ergodic strict $m$-isometry for $m\ge 3$, so it remains to consider $m=2$. But any strict $2$-isometry $T$ satisfies $\sigma(T)=\overline{\mathbb{D}}$; thus 1 is not an isolated point of $\sigma (T)$, hence not a pole of the resolvent, and $T$ cannot be uniformly ergodic. \end{proof} There exists a strict $3$-isometry $T$ which is weakly ergodic (thus Ces\`{a}ro bounded), but not mean ergodic. For $2$-isometries more can be said. \begin{corollary} Let $H$ be an infinite dimensional Hilbert space and let $T$ be a strict 2-isometry. Then the following assertions are equivalent: \begin{enumerate} \item $T$ is mean ergodic. \item $T$ is weakly ergodic. \item $T$ is Ces\`{a}ro bounded. \end{enumerate} \end{corollary} \begin{proof} It is a consequence of part (1) of Lemma \ref{lema2}: since $\|T^nx\|^2$ is a polynomial in $n$ of degree at most 1, we have that $\frac{T^nx}{n}$ converges to zero for all $x\in H$, and a Ces\`{a}ro bounded operator with this property on a reflexive space is mean ergodic. \end{proof} The following example provides a $2$-isometry that is not Ces\`{a}ro bounded. \begin{example} On $\ell^2(\NN)$ we consider the operator $T$ given by $T(x_1, x_2,\ldots ): = (x_1, x_1, x_2,x_3,\ldots )$. Then $T$ is a strict $2$-isometry which is not Ces\`{a}ro bounded. \end{example} \begin{proposition} \label{Cesaro} Let $T$ be the weighted backward shift in $\ell^p(\mathbb{N})$ with $1\le p<\infty$ defined by $Te_1:=0$, $Te_j:=\Bigl(\frac{j}{j-1}\Bigr)^{1/p} e_{j-1}\quad(j>1)$. Then $T$ is not Ces\`{a}ro bounded. \end{proposition} \begin{proof} Let $x_n:=\frac{1}{n^{1/p}}\sum_{s=1}^n e_s$ for even $n$. It is clear that $\|x_n\|_p=1$. We have \begin{eqnarray*} \Bigl\|\frac{1}{n}\sum_{j=0}^{n-1} T^jx_n\Bigr\|^p_p &=& \frac{1}{n^{p+1}}\Bigl\|\sum_{j=0}^{n-1}\sum_{s=1}^n T^je_s\Bigr\|^p_p= \frac{1}{n^{p+1}}\Bigl\|\sum_{s=1}^{n}e_s\sum_{j=s}^n \Bigl(\frac{j}{s}\Bigr)^{1/p}\Bigr\|^p_p\\[1pc] & =& \frac{1}{n^{p+1}}\sum_{s=1}^{n}\Bigl(\sum_{j=s}^n \Bigl(\frac{j}{s}\Bigr)^{1/p}\Bigr)^p\ge \frac{1}{n^{p+1}}\sum_{s=1}^{n/2+1}\frac{1}{s}\Bigl(\sum_{j=n/2+1}^{n} j^{1/p}\Bigr)^p, \end{eqnarray*} where $$ \sum_{j=n/2+1}^{n} j^{1/p}\ge\int_{n/2}^n t^{1/p}dt\geq \frac{1}{p^{-1}+1}\Bigl(n^{1+p^{-1}}-\Bigl(\frac{n}{2}\Bigr)^{1+p^{-1}}\Bigr)=c n^{1+1/p} $$ with $c=\frac{p}{p+1}(1-\frac{1}{2^{1+p^{-1}}})>0$. So $$ \Bigl\|n^{-1}\sum_{j=0}^{n-1} T^jx_n\Bigr\|^p_p\ge \frac{1}{n^{p+1}}\sum_{s=1}^{n/2} \frac{c^p n^{p+1}}{s}\ge c^p\ln\frac{n}{2}\to\infty $$ as $n\to\infty$. Hence $T$ is not Ces\`{a}ro bounded. \end{proof} \begin{corollary} No weighted forward shift on $\ell^2(\mathbb{N})$ that is a strict $2$-isometry is Ces\`{a}ro bounded. \end{corollary} \begin{proof} Assume that $T$ is a weighted forward shift with weights $(w_n)_{n\in \NN}$.
By \cite[Theorem 1]{AL} (see also \cite[Remark 3.9]{BMNe}), if $T$ is a strict 2-isometry, then $$ |w_n|^2=\frac{p(n+1)}{p(n)}\;, $$ where $p$ is a polynomial of degree 1, that is, $p(n):=an+b$. First, suppose that $b=0$. Then, since $a\neq 0$, we get $|w_n|^2=\frac{n+1}{n}$, and hence $T^*e_n=\sqrt{\frac{n}{n-1}}\,e_{n-1}$ for $n>1$ and $T^*e_1=0$. By Proposition \ref{Cesaro}, $T^*$ is not Cesàro bounded. Since Cesàro boundedness is preserved by taking adjoints, $T$ is not Cesàro bounded. Now, assume that $b\neq 0$. Normalising $p$ we may take $w_n(c):=\sqrt{\frac{cn+1}{c(n-1)+1}}$ with $c\neq 0$, which corresponds to the polynomial $p(n)=c(n-1)+1$. Denote $T_ce_n:=w_n(c)e_{n+1}$ and let $V$ be the diagonal operator $Ve_n:= \alpha_n e_n$, where $\alpha _n:=\sqrt{\frac{c(n-1)+1}{n}}$. Then $V$ is invertible and satisfies $T_cV=VT_1$. Moreover, $T_1$ is not Cesàro bounded: its adjoint is the operator of Proposition \ref{Cesaro} for $p=2$, and Cesàro boundedness is preserved by taking adjoints. Using that Cesàro boundedness is also preserved by similarity, we obtain that $T_c$ is not Cesàro bounded. \end{proof} \begin{corollary} There is no absolutely Ces\`{a}ro bounded strict $2$-isometry on a Hilbert space. \end{corollary} \begin{proof} It is immediate by Theorem \ref{acbhilbert} and part (1) of Lemma \ref{lema2}. \end{proof} \begin{question} Is it possible to construct a Ces\`{a}ro bounded strict $2$-isometry on an infinite dimensional Hilbert space? \end{question} \section{ Numerically hypercyclic properties of $m$-isometries} In this section we study numerically hypercyclic $m$-isometries. For simplicity we discuss only operators on Hilbert spaces. \begin{definition} Let $H$ be a Hilbert space. An operator $T\in B(H)$ is called \emph{numerically hypercyclic} if there exists a unit vector $x\in H$ such that the set $\{\langle T^nx,x\rangle: n\in\NN\}$ is dense in $\CC$. \end{definition} Clearly, numerical hypercyclicity is preserved by unitary equivalence but in general not by similarity. This leads to the following definition: \begin{definition} Let $T\in B(H)$. It is said that $T$ is \emph{weakly numerically hypercyclic} if $T$ is similar to a numerically hypercyclic operator. \end{definition} In \cite[Proposition 1.5]{S}, Shkarin proved that $T\in B(H)$ is weakly numerically hypercyclic if and only if there exist $x,y\in H$ such that the set $\{\langle T^nx,y\rangle: n\in\NN\}$ is dense in $\mathbb{C}$. Faghih and Hedayatian proved in \cite{FaHe} that $m$-isometries on a Hilbert space are not weakly hypercyclic. Moreover, $m$-isometries on a Banach space are not 1-weakly hypercyclic \cite{BBF}. However, there are isometries that are weakly supercyclic \cite{Sanders05} (in particular cyclic). Thus the first natural question is the following: are there numerically hypercyclic $m$-isometries? \begin{theorem} There are no weakly numerically hypercyclic $m$-isometries in $B(\mathbb{C}^n)$ for $n\le 3$. \end{theorem} \begin{proof} For $n=1$ there are no weakly numerically hypercyclic operators. Let $n=2$. By \cite[Theorem 1.13]{S}, if $T\in B(\mathbb{C}^2)$ is a weakly numerically hypercyclic operator, then there exists $\lambda\in \sigma(T)$ with $|\lambda|>1$; since the spectrum of an $m$-isometry is contained in $\overline{\mathbb{D}}$, $T$ is not an $m$-isometry. The case $n=3$ follows in the same way from \cite[Theorem 1.14]{S}. \end{proof} We discuss the existence of weakly numerically hypercyclic $m$-isometries on $n$-dimensional spaces for $n\ge 4$.
We say that $\lambda_1,\lambda_2\in \mathbb{T}$ are \emph{rationally independent} if $ \lambda_1^{m_1}\lambda_2^{m_2}\ne 1$ for every non-zero pair $m=(m_1,m_2)\in \mathbb{Z}^2$, or equivalently, if $\lambda_j=e^{i\theta_j}$ with $\theta_j\in \mathbb{R}$ such that $\pi, \theta_1,\theta_2 $ are linearly independent over the field $\mathbb{Q}$ of rational numbers. If $T\in B(X)$ and there are rationally independent $\lambda_1,\lambda_2\in \mathbb{T}$ such that $ker(T-\lambda_jI)^2 \ne ker(T-\lambda_jI)$ for $j\in \{1,2\}$, then $T$ is weakly numerically hypercyclic \cite[Theorem 1.9]{S}. Moreover, if $X$ is a Hilbert space, then $T$ is numerically hypercyclic \cite[Proposition 1.12]{S}. The following result gives an answer to the above question for some $m$-isometries. \begin{theorem}\label{3-isometry} There exists a numerically hypercyclic strict $(2m-1)$-isometry in $B(\mathbb{C}^n)$, with $n\geq 4$, for $2\leq m\leq n-2$. \end{theorem} \begin{proof} Let $\ell \in \{ 2, 3, \ldots, n-2\}$. We will construct a numerically hypercyclic strict $(2\ell -1)$-isometry. Define $D$ to be the diagonal operator with diagonal $$ (\underbrace{\lambda _1, \cdots, \lambda_1}_{\ell }, \lambda _2, \lambda _2, \underbrace{1, \cdots , 1}_{n-\ell-2}) $$ where $\lambda _1$ and $\lambda _2$ are rationally independent complex numbers with modulus 1, and define $Q$ by \begin{eqnarray*} Q e_i :&= & e_{i-1} \mbox { for } i\in \{ 2, 3, \cdots, \ell \}\\ Q e_{\ell +2}:&=& e_{\ell +1} \mbox { and }\\ Q e_i:&= &0 \mbox{ for } i=1, i=\ell+1 \mbox{ and } i\geq \ell +3 \;. \end{eqnarray*} It is clear that $Q^\ell =0 $ and $Q^{\ell -1} e_\ell =e_1\neq 0$. Moreover, \begin{eqnarray*} QDe_i&=& DQe_i= \lambda _1 e_{i-1} \mbox{ for } 2\leq i\leq \ell \\ QDe_{\ell +2} &=& DQe_{\ell +2} =\lambda _2 e_{\ell +1} \\ QDe_i&=&DQe_i=0 \mbox{ for } i=1,\ i=\ell+1 \mbox{ and } i\ge\ell +3\;. \end{eqnarray*} By part (3) of Lemma \ref{lema2}, $T:=D+Q$ is a strict $(2\ell -1)$-isometry for any $\ell \in \{ 2, 3, \cdots, n-2\}$. Let us prove that $T$ satisfies $Ker (\lambda_i -T)\neq Ker (\lambda_i -T)^2$ for $i=1,2$. By definition $e_2\in Ker (\lambda_1 -T)^2\setminus Ker (\lambda_1 -T)$ and $e_{\ell +2} \in Ker (\lambda_2 -T)^2\setminus Ker (\lambda_2 -T)$. So, by \cite[Theorem 1.9 and Proposition 1.12]{S}, $T$ is numerically hypercyclic. \end{proof} As a consequence of the proof of Theorem \ref{3-isometry}, we obtain \begin{corollary} Let $H$ be a complex Hilbert space with dimension at least 4. Then there exists a numerically hypercyclic strict 3-isometry on $H$. \end{corollary} \begin{theorem} An $n$-dimensional Hilbert space supports no weakly numerically hypercyclic strict $(2n-3)$- or $(2n-1)$-isometries. \end{theorem} \begin{proof} Let $H$ be a finite-dimensional Hilbert space, $\dim H=n<\infty$. Suppose on the contrary that $T\in B(H)$ is a weakly numerically hypercyclic $(2n-1)$-isometry. Since $\|T^kx\|^2$ grows polynomially for each $x\in H$ and there exists $u\in H$ such that $\|T^ku\|^2$ is a polynomial of degree $2n-2$, the Jordan form of $T$ has only one block corresponding to an eigenvalue $\la$ with $|\la|=1$. Thus $T=\la I+Q$ where $Q^n=0$. Thus $$ T^k =\sum_{j=0}^{n-1}\binom{k}{j}\la^{k-j}Q^{j}= \la^k\sum_{j=0}^{n-1} {\binom{k}{j}}\la^{-j}Q^j $$ for all $k\in\NN$. Let $x,y\in H$ and suppose that the set $\{\langle T^kx,y\rangle:k\in\NN\}$ is dense in $\CC$. We have $\langle T^kx,y\rangle= \la^k p(k)$ for some polynomial $p$ of degree $\le n-1$.
If $\deg p\ge 1$, then $|\langle T^kx,y\rangle|\to\infty$, so the set $\{\langle T^kx,y\rangle:k\in\NN\}$ is not dense in $\CC$. If $\deg p=0$, then the set $\{\langle T^kx,y\rangle:k\in\NN\}$ is bounded and again not dense in $\CC$. Hence $T$ is not weakly numerically hypercyclic. The case of $(2n-3)$-isometries can be treated similarly. If $T\in B(H)$ is a strict $(2n-3)$-isometry, then the Jordan form of $T$ has two blocks: one of dimension $n-1$ corresponding to an eigenvalue $\la$, $|\la|=1$, and a second one-dimensional block corresponding to an eigenvalue $\mu$, $|\mu|=1$. For $x,y\in H$ we have $\langle T^kx,y\rangle=\la^kp(k)+ a\mu^k$ for some polynomial $p$, $\deg p\le n-2$, and a number $a\in \CC$. Again one can show easily that the set $\{\langle T^kx,y\rangle:k\in\NN\}$ cannot be dense in $\CC$. Hence there are no weakly numerically hypercyclic $(2n-3)$-isometries on $H$. \end{proof} \begin{theorem} For $m \ge 2$, there exists a numerically hypercyclic strict $m$-isometry on $\ell^2(\mathbb{N})$. \end{theorem} \begin{proof} For $m\ge 2$, no strict $m$-isometry is power bounded \cite[Theorem 2]{COT}. Also, by \cite[Theorem 1]{AL}, there exist forward weighted shifts on $\ell^2(\mathbb{N})$ that are strict $m$-isometries for $m\ge 2$. Now, using that if $1<p<\infty$ and $T$ is a forward weighted shift on $\ell^p(\mathbb{N})$, then $T$ is numerically hypercyclic if and only if $T$ is not power bounded (see \cite{KPS} and \cite{S}), we obtain the result. \end{proof} Since both numerical hypercyclicity and the property of being an $m$-isometry are preserved by unitary equivalence, we obtain the following. \begin{corollary} Let $H$ be an infinite dimensional separable complex Hilbert space and $m\ge 2$. Then there exists a numerically hypercyclic $m$-isometry on $H$. \end{corollary} \begin{theorem} There exists a numerically hypercyclic Ces\`{a}ro bounded strict $3$-isometry on $\mathbb{C}^4$. \end{theorem} \begin{proof} Consider the operator $$ T:=\left( \begin{array}{cccc} \lambda _1 & \lambda _1 -1 & 0& 0 \\ 0& \lambda _1&0 &0 \\ 0& 0&\lambda _2 & \lambda _2 -1\\ 0&0 & 0& \lambda _2 \end{array} \right) \;, $$ where $\lambda_1,\lambda_2\in \mathbb{T}$ are rationally independent; it is a variant of the operator constructed in the proof of Theorem \ref{3-isometry}. As there, $T$ is numerically hypercyclic, since $Ker(\lambda_i-T)\ne Ker(\lambda_i-T)^2$ for $i=1,2$. Since both blocks $$ \left( \begin{array}{cc} \lambda _1 & \lambda_1 -1 \\ 0& \lambda _1 \\ \end{array} \right) \qquad\hbox{and}\qquad \left( \begin{array}{cc} \lambda _2 & \lambda_2 -1 \\ 0& \lambda _2 \\ \end{array} \right) $$ are Cesàro bounded by Lemma \ref{lema1}, it is easy to see that $T$ is Cesàro bounded. \end{proof} We know that there exist examples of numerically hypercyclic $3$-isometries and weakly ergodic $3$-isometries. The following result goes further in this direction. \begin{theorem} Any weakly ergodic strict $3$-isometry on a Hilbert space is weakly numerically hypercyclic. \end{theorem} \begin{proof} If $T$ is a weakly ergodic strict $3$-isometry, then there exists $x$ such that $\displaystyle \frac{T^nx}{n}$ is weakly convergent but not norm convergent. Indeed, by (\ref{media}) weak ergodicity implies that $\frac{T^nx}{n}$ converges weakly for every $x$, while for a strict $3$-isometry $T$ there exists $x$ such that $\displaystyle \frac{T^nx}{n}$ does not converge to zero in norm. Then, since $\displaystyle x_n =\frac{T^nx}{n}$ is weakly convergent but not norm convergent, by \cite[Lemma 6.1]{S} there is $y\in H$ such that $\{n \langle x_n,y\rangle: n\in \mathbb{N}\}$ is dense in $\mathbb{C}$. Hence $T$ is weakly numerically hypercyclic.
\end{proof} In particular, the example of a weakly ergodic $3$-isometry defined in \cite[Section 5.2]{AS} is weakly numerically hypercyclic. \begin{question} Do there exist numerically hypercyclic weakly ergodic $3$-isometries? \end{question} Let $T$ be an $m$-isometry. What can we say about dynamical properties of $T^*$? Some particular classes of operators allow the study of the (chaotic) dynamics of their adjoints. \begin{theorem} Let $S_w$ be a forward weighted shift on $\ell^{2}(\mathbb{N})$ which is a strict $m$-isometry. Then \begin{enumerate} \item $S_w^*$ is mixing if and only if $ m\ge 2$. \item $S_w^*$ is chaotic if and only if $ m\ge 3$. \end{enumerate} \end{theorem} \begin{proof} By \cite[Theorem 1]{AL}, a unilateral weighted forward shift on a Hilbert space is an $m$-isometry if and only if there exists a polynomial $p$ of degree at most $m-1$ such that for any integer $n\ge 1$ we have $p(n)>0$ and $|w_n|^2 =\displaystyle \frac{p(n+1)}{p(n)}$; for a strict $m$-isometry the degree of $p$ is exactly $m-1$. Thus for $m\geq 2$, $S_w^*$ satisfies condition (ii) of (c) from \cite[Theorem 4.8]{GEP11} and $S_w^*$ is mixing, and for $m\ge 3$, $S_w^*$ satisfies the corresponding chaoticity condition of \cite[Theorem 4.8]{GEP11} and $S_w^*$ is chaotic. Conversely, for $m=1$ the polynomial $p$ is constant, so $S_w$ is an isometry and $S_w^*$ is a contraction, hence not mixing; and for $m=2$ the polynomial $p$ has degree 1, so $\sum_n \bigl(\prod_{k=1}^n |w_k|\bigr)^{-2}=\sum_n \frac{p(1)}{p(n+1)}$ diverges and $S_w^*$ is not chaotic. \end{proof} Notice that, if $S_w$ is a unilateral forward weighted shift and a strict $m$-isometry on $\ell^{2}(\mathbb{N})$ with $m\ge 2$, then $S_w^*$ is a hypercyclic operator. Since on $\ell^{2}(\mathbb{Z})$ bilateral forward weighted shifts are strict $m$-isometries only for odd $m$, we have the following. \begin{theorem} Let $S_w$ be a bilateral forward weighted shift strict $m$-isometry on $\ell^{2}(\mathbb{Z})$ with $m>1$. Then $S_w^*$ is chaotic. \end{theorem} \begin{proof} By \cite[Theorem 19 \& Corollary 20]{AL}, a bilateral weighted forward shift on a Hilbert space is a strict $m$-isometry if and only if $m$ is an odd integer and there exists a polynomial $p$ of degree exactly $m-1$ such that for any integer $n$ we have $p(n)>0$ and $|w_n|^2 =\displaystyle \frac{p(n+1)}{p(n)}$. Hence, for $m\ge 3$, $S_w^*$ satisfies condition (ii) of (c) from \cite[Theorem 4.13]{GEP11}. Thus $S_w^*$ is chaotic. \end{proof}
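For instance, the polynomial $p(n)=n^2+1$ is positive for every $n\in\mathbb{Z}$ and has degree 2, so the bilateral forward shift on $\ell^{2}(\mathbb{Z})$ with weights $|w_n|^2=\frac{(n+1)^2+1}{n^2+1}$ is a strict $3$-isometry whose adjoint is chaotic by the theorem above.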
\section{Introduction} The Local Volume of the Universe comprises almost a thousand galaxies with distance estimates within 11~Mpc\footnote{http://www.sao.ru/lv/lvgdb} (Karachentsev et al. 2013). Near the far edge of this volume, at a distance of 9.55~Mpc (McQuinn et al. 2016), lies the bright galaxy M\,104 (also known as NGC\,4594 or the Sombrero galaxy). With an apparent $K$-band magnitude of $m_K = 5\fm0$ it has a luminosity of $\log(L_K/L_{\odot}) = 11.32$, about four times that of the Milky Way (10.70~dex) or M\,31 (10.73~dex). Thanks to its luminosity and, by inference, its stellar mass, the Sombrero is the most outstanding galaxy of the Local Volume. Over recent years several attempts have been undertaken to determine the total (virial) mass of Sombrero using radial velocities and projected separations of its companions. Estimates of $\log(M_T/M_{\odot})$ vary widely: 10.90~dex (Makarov \& Karachentsev, 2011), 13.17~dex (Karachentsev \& Nasonova, 2013), 13.45~dex (Karachentsev \& Kudrya, 2014), and 13.96~dex (Kourkchi \& Tully, 2017). The main reason for this scatter is the uncertainty in the gravitational binding of Sombrero to galaxies that neighbour it in projection. The Sombrero galaxy is located near the equator of the Local Supercluster, where galaxies are concentrated in a filamentary structure, the Virgo Southern Extension (VirgoSE; Tully, 1982; Kourkchi \& Tully, 2017). Many galaxies in the VirgoSE have radial velocities similar to that of Sombrero, but lie at greater distances typical of the Virgo cluster (15--20~Mpc). At a distance of 8 Mpc from the core of the Virgo cluster, the Sombrero galaxy lies at the edge of the zero-velocity surface bounding the cluster infall domain, a property shared by other galaxies in the VirgoSE over an extended range in distances. To reveal the true satellites of Sombrero among the neighbouring galaxies we need to measure their distances, preferably with an error $\Delta D \la 1$~Mpc. At present there is only one galaxy, KKSG\,29, in the Sombrero vicinity with an accurately measured distance (9.82$\pm$0.32~Mpc; Karachentsev et al. 2018), determined via the tip of the red giant branch (TRGB) method. This distance identifies the dwarf galaxy KKSG\,29 as a physical satellite of the Sombrero galaxy. In this work we present measurements of TRGB distances for two more dwarf galaxies, UGCA\,307 (or DDO\,153) and KKSG\,30 (or LEDA\,3097708), situated close to Sombrero. The distances of both galaxies are consistent with membership in the family of Sombrero satellites. The new distance measurements, together with other less reliable distance estimates, allow us to refine the virial (orbital) mass of Sombrero. \section{TRGB distances to UGCA\,307 and KKSG\,30} The dwarf galaxies UGCA\,307 ($12^h53^m56\fs8$--$12\degr06\arcmin21\arcsec$) and KKSG\,30 ($12^h37^m35\fs9$--$08\degr52\arcmin01\arcsec$) have apparent $B$ magnitudes of $14\fm6$ and $16\fm3$, respectively, and projected separations of $\sim3\degr$ with respect to Sombrero. Their radial velocities in the Local Group rest frame, 731~km\,s$^{-1}$ (UGCA\,307) and 918~km\,s$^{-1}$ (KKSG\,30), are close to the radial velocity of Sombrero, 892~km\,s$^{-1}$. The galaxies were observed with the Advanced Camera for Surveys (ACS) aboard the Hubble Space Telescope (HST) on December 5, 2019, and March 13, 2020, as a part of the SNAP project 15922 (PI R.B. Tully).
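The conversion of a measured TRGB magnitude into a distance, applied in this section, amounts to a few lines of arithmetic. The sketch below illustrates it with the UGCA\,307 values derived here; the $F814W$ extinction coefficient is an assumed round number rather than a value quoted in this paper:
\begin{verbatim}
# TRGB distance sketch with the UGCA 307 numbers from this section.
m_trgb = 25.76           # apparent F814W magnitude of the TRGB
M_trgb = -4.09           # absolute TRGB magnitude (Rizzi et al. 2007)
ebv = 0.049              # E(B-V) from Schlafly & Finkbeiner (2011)
A_f814w = 1.85 * ebv     # assumed A_F814W/E(B-V) ratio of 1.85

mu0 = m_trgb - A_f814w - M_trgb   # true distance modulus (m - M)_0
D = 10.0 ** ((mu0 - 25.0) / 5.0)  # distance in Mpc
print(f"(m-M)_0 = {mu0:.2f}, D = {D:.2f} Mpc")
# -> (m-M)_0 = 29.76, D = 8.95 Mpc, consistent within the errors with
#    the 29.78 and 9.03 Mpc derived in the text.
\end{verbatim}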
\begin{figure*} \begin{tabular}{cc} \includegraphics[width=\textwidth]{images.eps} \end{tabular} \caption{ HST/ACS combined images of UGCA 307 and KKSG 30. The image size is $1\farcm6\times1\farcm4$. North is up and east is left.} \label{m104:fig01} \end{figure*} Two exposures for each object were made in a single orbit with the filters $F606W$ (750~s) and $F814W$ (750~s). The $F814W$ images of the galaxies are presented in Fig.~\ref{m104:fig01}. We used the ACS module of the DOLPHOT package by Dolphin (2002) to perform photometry of resolved stars following the recommended recipe and parameters. Only stars with good-quality photometry were included in the analysis. We selected the stars with a signal-to-noise ratio $S/N > 4$ in both filters, and with DOLPHOT parameters $crowd_{F606W} + crowd_{F814W} < 0.8$ and $(sharp_{F606W}+sharp_{F814W})^2 < 0.075$. Artificial stars were inserted and recovered using the same reduction procedures to accurately estimate photometric errors. Subsequent analysis included only those image regions that contain stars of the galaxies themselves. We selected a region of $1.6 \times 1.6$ arcmin around UGCA\,307 and of $2.8 \times 1.5$ arcmin around KKSG\,30. The resulting colour-magnitude diagrams in $F606W$--$F814W$ versus $F814W$ are plotted in Fig.~\ref{m104:fig02}. A maximum-likelihood method by Makarov et al. (2006) was applied to estimate the magnitude of the TRGB. We found $F814W$(TRGB) to be $25.76^{+0.19}_{-0.11}$ for UGCA\,307 and $25.96^{+0.08}_{-0.07}$ for KKSG\,30. Following the zero-point calibration of the absolute magnitude of the TRGB developed by Rizzi et al. (2007), we obtained $M$(TRGB) values of $-4.09$ (UGCA\,307) and $-4.08$ (KKSG\,30). Assuming foreground reddening values $E(B-V)$ of 0.049 (UGCA\,307) and 0.028 (KKSG\,30) from Schlafly \& Finkbeiner (2011), we derived true distance moduli of $(m- M)_0 = 29.78^{+0.20}_{-0.12}$, i.e. a distance $D = 9.03^{+0.84}_{-0.51}$~Mpc, for UGCA\,307 and $(m- M)_0 = 29.94^{+0.10}_{-0.09}$, i.e. a distance $D = 9.72^{+0.44}_{-0.41}$~Mpc, for KKSG\,30. \begin{figure*} \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{UGCA307cmd.eps} \includegraphics[width=0.5\textwidth]{KKSG30cmd.eps} \end{tabular} \caption{Colour--magnitude diagrams of UGCA\,307 and KKSG\,30. The TRGB position is indicated by the horizontal line.} \label{m104:fig02} \end{figure*} \section{Companions of Sombrero and background objects} \begin{table*} \caption{Galaxies around Sombrero with $r_p < 6\degr$ and $V_{LG} <1400$ km s$^{-1}$.} \label{m104:tab01} \begin{tabular}{lcrlrrlc} \hline Name & RA Dec & $V_{LG}$& Type& $B_T$ & $D$ & meth. &Ref. 
\\ \hline (1)& (2)& (3)& (4)& (5)& (6)& (7)& (8) \\ \hline DDO116 & 121628.6$-$113141& 959& Sm & 16.02 & 23.28&TFb$^1$& (1) \\ DDO118 & 121711.9$-$114041& 1064& Irr & 16.2 & 14.22& TFb & (1) \\ PGC104765 & 122143.7$-$123957& 1221& Irr & 16.92 & & & \\ KKSG27 & 122205.7$-$094801& 1128& Im & 17.70 & 6.64& TFb & (1) \\ UGCA278 & 122310.4$-$135644& 942& Irr & 16.20 & 18.2 & TF & (1) \\ PGC970571 & 122549.4$-$110305& 1134& BCD & 16.33 & & & \\ PGC157820 & 123001.8$-$114731& 909& S0 & 16.13 & & & \\ NGC4487 & 123104.5$-$080314& 847& Sc & 11.76 & 20.1 & SN & (2) \\ NGC4504 & 123217.4$-$073348& 812& Sc & 12.12 & 17.5 & TF & (3) \\ UGCA287 & 123355.4$-$104047& 852& Sm & 15.36 & 20.5 & TF & (4) \\ UGCA289 & 123537.8$-$075236& 800& Sd & 14.46 & 14.6 & TF & (3) \\ PGC970397 & 123539.4$-$110402& 928& Irr & 16.87 & 10.00& TF & (1) \\ KKSG29 & 123714.1$-$102952& 562& Irr & 16.54 & 9.82& TRGB & (5) \\ KKSG30 & 123735.9$-$085201& 918& Irr & 16.30 & 9.72& TRGB & (8) \\ dw1239-1143 & 123915.3$-$114308& 1171& dE & 16.80 & 7.9 & SBF & (6) \\ PGC1024539 & 123944.7$-$070519& 744& Irr & 17.90 & 15.6 & TFb & (1) \\ NGC4594 & 123959.4$-$113723& 892& S0a & 9.00 & 9.55& TRGB & (7) \\ SUCD1 & 124003.1$-$114004& 1109& dE & 18.40 & 9.55& mem & (1) \\ NGC4597 & 124012.9$-$054757& 866& Sm & 12.87 & 10.10& TF & (4) \\ dw1240-1140 & 124017.6$-$114046& 1097& dSph& 19.50 & 9.55& mem & \\ PGC042730 & 124248.9$-$122327& 829& dEn & 14.78 & 9.55& mem & \\ UA295 & 124453.9$-$090731& 1197& Sm & 15.10 & 22.9 & TF & (1) \\ DDO146 & 124541.4$-$060408& 1304& Sm & 13.01 & 17.3 & TF & (3) \\ NGC4674 & 124603.5$-$083920& 1318& Sab & 13.96 & & & \\ PGC1003283 & 124750.9$-$082816& 860& S0 & 15.89 & & & \\ PGC104868 & 124854.1$-$114042& 1171& BCD & 15.0 & 11.17& TF & (1) \\ NGC4700 & 124908.1$-$112435& 1219& Sd & 12.60 & 7.30& TF & (4) \\ PGC043345 & 124923.6$-$100712& 1124& Sdm & 12.79 & 16.6 & TF & (4) \\ PGC1019240 & 124955.9$-$072527& 1236& Sm & 15.52 & & & \\ NGC4731 & 125101.4$-$062339& 1323& SBc & 11.88 & 13.6 & TF & (3) \\ NGC4723 & 125103.0$-$131413& 1109& Sm & 15.38 & 15.3 & TF & (4) \\ PGC043526 & 125113.3$-$063334& 1327& Im & 13.26 & 13.6 & mem & \\ NGC4742 & 125148.0$-$102717& 1088& E & 12.11 & 15.8 & SBF & (9) \\ NGC4757 & 125250.1$-$101836& 664& S0 & 14.54 & & & \\ UGCA 307 & 125356.8$-$120621& 731& Im & 14.60 & 9.03& TRGB & (8) \\ NGC4781 & 125423.7$-$103214& 1080& Scd & 11.39 & 15.5 & TF & (4) \\ NGC4790 & 125451.9$-$101452& 1175& Sc & 12.41 & 16.9 & TF & (4) \\ UGCA308 & 125531.1$-$102350& 1140& Irr & 16.30 & 16.6 & TFb & (4) \\ NGC4802 & 125549.6$-$120319& 843& S0 & 12.39 & 11.5 & SBF & (10) \\ IC3908 & 125640.6$-$073346& 1127& Scd & 13.33 & 23.9 & TF & (4) \\ NGC4818 & 125648.9$-$083131& 892& Sab & 11.99 & 11.3 & TF & (3) \\ UGCA311 & 125746.8$-$093802& 1306& Scd & 14.00 & 19.8 & TF & (4) \\ PGC044460 & 125828.3$-$103437& 1173& Sdm & 14.70 & 8.70& TF & (4)\\ UGCA312 & 125906.5$-$121340& 1121& Irr & 15.88 & 12.0 & TFb & (4)\\ NGC4856 & 125921.3$-$150232& 1145& S0a & 11.44 & 24.0 & TF & (4)\\ UGCA314 & 130017.2$-$122048& 1397& Im & 14.64 & 24.8 & TF & (4)\\ PGC936912 & 130107.0$-$133106& 1120& Im & 15.40 & 14.6 & TF & (4)\\ NGC4920 & 130204.2$-$112243& 1155& Im & 14.15 & 18.2 & TF & (4)\\ \hline \multicolumn{8}{p{0.5\textwidth}}{(1) Kashibadze+2018, (2) Pejcha+2015, (3) Tully+2016, (4) Karachentsev+2013, (5) Karachentsev+2018, (6) Carlsten+2020, (7) McQuinn+2016, (8) present paper, (9) Blakeslee+2001, (10) Tonry+2001.} \\ \multicolumn{8}{p{0.5\textwidth}}{$^1$ "TRGB" -- the luminosity of tip of the red giant branch, "SBF" 
-- surface brightness fluctuations, "TF" and "TFb" -- the classic Tully \& Fisher (1977) relation or its baryonic version; "SN" -- the luminosity of a supernova; "mem" -- assumed membership in a group with a known distance} \end{tabular} \end{table*} Judging by the large stellar mass of the Sombrero galaxy, the virial radius of its halo can reach about 400~kpc. To search for presumed satellites of Sombrero we examined a region of radius $r_p = 6\degr$ around it, which corresponds to a linear projected radius of $R_p = 1.0$~Mpc at the distance of 9.55~Mpc. In this area there are 48 galaxies with radial velocities $V_{LG} < 1400$~km\,s$^{-1}$. The data are presented in Table~\ref{m104:tab01}. The table columns contain: (1) galaxy name; (2) equatorial coordinates J2000.0; (3) radial velocity in the Local Group rest frame (km\,s$^{-1}$); (4) morphological type; (5) apparent $B$ magnitude from the Lyon Extragalactic Database (LEDA, Makarov et al. 2014) or the NASA Extragalactic Database (NED); (6) distance to the galaxy in Mpc; (7) method used for the distance estimate; (8) reference to the source of the distance. As seen from these data, 41 of the 48 galaxies have distance estimates. Most of them were made by the Tully-Fisher method, with uncertainties of 30--35\% for these low-luminosity galaxies. Accordingly, we consider only galaxies with distances $D < 12$~Mpc as probable members of the Sombrero group. The distance and radial velocity distributions of galaxies around Sombrero are shown in Figure~\ref{m104:fig03}. In total, 15 galaxies are probable satellites of Sombrero, each of them more than an order of magnitude fainter in luminosity than Sombrero itself. An empty volume (mini-void) is visible in front of the group. The background galaxies have radial velocities substantially overlapping the Sombrero group velocities. Due to significant TF distance errors, $\Delta D$ of about 3--5~Mpc, the membership of some galaxies (UGCA\,312, PGC\,104868), whether in the group or in the background, may be subject to revision. \begin{figure} \centering \includegraphics[width=\hsize]{somb_dist.eps} \caption{ Distribution of assumed members of the Sombrero group (filled circles) and background galaxies (open circles) according to their distance and radial velocity in the Local Group rest frame. The galaxies satisfy the conditions $V_{LG} < 1400$~km\,s$^{-1}$ and projected separation $r_p < 6\degr$ with respect to Sombrero (asterisk).} \label{m104:fig03} \end{figure} \begin{figure} \centering \includegraphics[width=\hsize]{somb_sg.eps} \caption{Distribution of assumed Sombrero satellites (filled circles) and background galaxies (open circles) in supergalactic coordinates. All galaxies have $V_{LG} < 1400$~km\,s$^{-1}$.} \label{m104:fig04} \end{figure} The distribution of galaxies from Table~\ref{m104:tab01} in supergalactic coordinates SGL, SGB is presented in Figure~\ref{m104:fig04}. The symbols are the same as in the previous figure. Sombrero's satellites, as well as the background galaxies, are distributed asymmetrically. In both subsamples there is a noticeable increase in galaxy number towards the supergalactic equator and towards the Virgo cluster centre (SGL = 103$\degr$, SGB = $-2 \degr$). The reason for this asymmetry in the case of the Sombrero group members remains unclear to us. 
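As a side note on reproducibility, the supergalactic positions shown in Figure~\ref{m104:fig04} can be recomputed directly from the equatorial coordinates of Table~\ref{m104:tab01}; the snippet below is our own minimal sketch using the astropy package (not part of the original analysis), with the coordinates of NGC\,4594 as input.
\begin{verbatim}
# Convert equatorial (J2000) coordinates to supergalactic SGL, SGB
from astropy.coordinates import SkyCoord

# position of NGC 4594 from Table 1
c = SkyCoord('12h39m59.4s', '-11d37m23s', frame='icrs')
sg = c.transform_to('supergalactic')
print(sg.sgl.deg, sg.sgb.deg)  # supergalactic longitude/latitude (deg)
\end{verbatim}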
Figure~\ref{m104:fig05} presents the distribution of assumed satellites of Sombrero (filled circles) and background galaxies (open circles) according to the angular projected separation, $r_p,$ and the absolute value of the radial velocity difference, $|\Delta V|$. Sombrero satellites dominate within $r_p < 2.4\degr$ (i.e. 400~kpc), while at larger separations the assumed satellites are lost among the numerous background galaxies. This confusion between the two categories of galaxies makes it difficult to estimate the virial mass of the Sombrero galaxy. \begin{figure} \centering \includegraphics[width=\hsize]{somb_separ.eps} \caption{ Radial velocity difference and angular projected separation for assumed members of the Sombrero group (filled circles) and background galaxies (open circles), taken with respect to the Sombrero galaxy.} \label{m104:fig05} \end{figure} \section{Total (orbital) mass of Sombrero} \begin{table} \caption{Satellites of Sombrero with known radial velocities.} \label{m104:tab02} \begin{tabular}{llcrr}\hline Name & Type& $r_p$,& $R_p$,& $\Delta V$ \\ \hline & & $\degr$ & kpc &km s$^{-1}$ \\ \hline SUCD1 & dE & 0.04 & 7 & 217 \\ dw1240-1140& dSph& 0.09 & 15 & 205 \\ dw1239-1143& dE & 0.22 & 37 & 279 \\ PGC042730 & dE & 1.03 & 171 & -63 \\ PGC970397 & Irr & 1.21 & 201 & 36 \\ KKSG29 & Irr & 1.32 & 220 &-330 \\ PGC104868 & BCD & 2.18 & 362 & 279 \\ NGC4700 & Sd & 2.24 & 372 & 327 \\ KKSG30 & Irr & 2.84 & 471 & 26 \\ UGCA 307 & Im & 3.45 & 573 &-161 \\ NGC4802 & S0 & 3.89 & 646 & -49 \\ PGC044460 & Sdm & 4.64 &770 &281 \\ KKSG27 & Im & 4.78 & 794 & 236 \\ NGC4818 & Sab & 5.17 & 858 & 0 \\ NGC4597 & Sm & 5.85 & 971 & -26 \\ \hline\end{tabular} \end{table} The list of 15 assumed satellites of Sombrero with known radial velocities is presented in Table~\ref{m104:tab02}. The galaxies are ranked according to their projected separation from Sombrero. The average linear projected separation of the satellites is $\langle R_p\rangle = 431$~kpc, the mean difference of their radial velocities with respect to the principal galaxy is $+62\pm54$~km\,s$^{-1}$, and the dispersion of radial velocities is $\sigma_v = 204$~km\,s$^{-1}$. We estimated the virial (orbital) mass of the Sombrero galaxy assuming Keplerian motion of the satellites, as test particles, around the massive central body. For a random orientation of the satellite orbits with a mean-square eccentricity of $\langle e^2\rangle = 1/2$ (Barber et al. 2014), the estimate of the orbital mass can be written (Karachentsev \& Kudrya, 2014) as \begin{equation} M_{\rm orb} = (16/\pi) G^{-1} \langle \Delta V^2 R_p\rangle, \end{equation} where $G$ is the gravitational constant. If $R_p$ is expressed in kpc and $\Delta V$ in km\,s$^{-1}$, then \begin{equation} \log(M_{\rm orb}/M_{\odot}) = \log \langle\Delta V^2 R_p\rangle + 6.07. \end{equation} Using all 15 assumed satellites from Table~\ref{m104:tab02} we obtain a mean orbital mass of $(17.2\pm6.1)\times 10^{12} M_{\odot}$. For the three satellites with accurate TRGB distances this quantity is $(15.3\pm8.1)\times 10^{12} M_{\odot}$, while for the five dwarfs with TRGB and SBF distances the average orbital mass drops to $(10.3\pm5.4)\times 10^{12} M_{\odot}$. Attributing different weights to the 15 satellite galaxies ($w=1$ for TRGB distances, $w=1/2$ for SBF distances and $w=1/4$ for TF and mem distances), we derive the weighted mean $(15.5\pm4.9)\times 10^{12} M_{\odot}$. We adopt this quantity as the optimal estimate of the total mass of the Sombrero group. 
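The unweighted estimate above is easily reproduced from the $(R_p, \Delta V)$ pairs of Table~\ref{m104:tab02} using equation (2); the short sketch below is our own rewriting of this arithmetic (not the original pipeline) and returns $\log(M_{\rm orb}/M_{\odot}) \approx 13.2$, i.e. $\sim 1.7\times10^{13} M_{\odot}$.
\begin{verbatim}
# Orbital mass from Eq. (2): log(M_orb/M_sun) = log<DV^2 R_p> + 6.07
import numpy as np

rp = np.array([7, 15, 37, 171, 201, 220, 362, 372, 471,
               573, 646, 770, 794, 858, 971], dtype=float)  # kpc
dv = np.array([217, 205, 279, -63, 36, -330, 279, 327, 26,
               -161, -49, 281, 236, 0, -26], dtype=float)   # km/s

log_m_orb = np.log10(np.mean(dv**2 * rp)) + 6.07
print(f"log(M_orb/M_sun) = {log_m_orb:.2f}")
\end{verbatim}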
As seen from Table~\ref{m104:tab02}, the early-type galaxies are concentrated towards Sombrero much more tightly than the spiral and irregular galaxies. Apart from the S0 galaxy NGC\,4802, all other early-type companions reside in the central zone $R_p < 200$~kpc. This known effect of morphological segregation is even more pronounced if probable Sombrero satellites without radial velocities are taken into account. Table~\ref{m104:tab03} lists 12 dwarf galaxies of low and very low surface brightness that have been detected near Sombrero by different authors (Karachentsev et al. 2000; Javanmardi et al. 2016; Karachentsev et al. 2020; Carlsten et al. 2020). None of them is detected in the HI line, and all are classified as spheroidal dwarfs. These objects with quenched star formation have projected separations $R_p < 200$~kpc, reinforcing the segregation effect. \begin{table} \caption{Assumed satellites of Sombrero without radial velocities.} \label{m104:tab03} \begin{tabular}{lclrl} \hline Name & RA (2000.0) DEC & Type &D, Mpc & meth\\ \hline dw1237-1125 & 123711.6-112559 & dSph & 7.5 & SBF \\ KKSG31 & 123833.7-102925 & dSph & 9.55& mem \\ dw1239-1152 & 123909.0-115237 & dSph & 8.2 & SBF \\ dw1239-1159 & 123909.1-115912 & dSph &11.3 & SBF \\ N4594-DGSAT-3 & 123932.8-111338 & dSph & 7.9 & SBF \\ Sombrero DwA & 123951.5-112029 & dSph & 9.7 & SBF \\ KKSG32 & 123955.0-114448 & dSph & 9.0 & SBF \\ KKSG33 & 124008.9-122153 & dSph & 9.55& mem \\ dw1240-1118 & 124009.4-111850 & dSph & 8.8 & SBF \\ dw1241-1131 & 124102.8-113144 & dSph & 7.2 & SBF \\ Sombrero DwB & 124112.0-115333 & dSph &11.2 & SBF \\ KKSG34 & 124118.9-115539 & dSph & 9.0 & SBF \\ \hline\end{tabular} \end{table} \section{Concluding remarks} Radial velocities and projected separations of 15 assumed satellites of Sombrero yield a weighted estimate of its total mass of $(M_T/M_{\odot})= (1.55\pm0.49)\times10^{13}$. With $M_*/L_K = 1 M_{\odot}/L_{\odot}$ (Bell et al. 2003), the stellar mass of Sombrero is $2.1\times10^{11} M_{\odot}$. Accounting for the luminosity of all the satellites increases the stellar mass of the group to $M_* = 2.4\times 10^{11} M_{\odot}$. Therefore, the Sombrero halo has a total-mass-to-stellar-mass ratio of $M_T/M_* = 65\pm20$, which is much higher than the cosmic baryonic ratio, $M_{\rm halo}/M_b = 6$. Karachentseva et al. (2011) undertook a search for faint companions around 2MASS Isolated Galaxies (2MIG). They found 214 faint neighbours around 125 2MIG galaxies with radial velocity differences of $|\Delta V| < 500$~km\,s$^{-1}$ and projected separations of $R_p < 500$~kpc. For 60 companions around E and S0 galaxies the median ratio $M_{\rm orb}/M_*$ turns out to be 63, while for the remaining 154 companions of spiral galaxies the median ratio is only 17. A similar search for companions around late-type spiral galaxies without bulges was performed by Karachentsev \& Karachentseva (2019). Based on 43 companions around 30 Sc-Scd-Sd galaxies, they found a mean ratio $M_{\rm orb}/M_* = 20\pm3.$ The factor-of-three difference in the halo-mass-to-stellar-mass ratio between early-type and late-type luminous galaxies attests to their different dynamical histories. In the vicinity of Sombrero there are still more than a dozen galaxies with unreliable or even unknown distance estimates. Measurements of their TRGB distances with HST would help us to study the structure and dynamics of this group. \begin{acknowledgements} We are grateful to the referee for helpful advice. This work is based on observations made with the NASA/ESA Hubble Space Telescope. 
STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. The work in Russia is supported by RFBR grant 18-02-00005. \end{acknowledgements}
\section{Introduction} \subsection{Complexity and hierarchy} Complex systems with emergent properties produced by self-organization processes also exhibit, most of the time, some kind of hierarchical structure. Although the term hierarchy has several different definitions and uses in very different disciplines, ranging from political science \citep{crumley1987dialectical} to physics \citep{10.1371/journal.pone.0033799}, it seems to be intrinsically linked with complexity. \cite{lane2006hierarchy} classifies four frequent uses of the term hierarchy, namely (i) order hierarchy, corresponding to the existence of an order relation for a set of elements, (ii) inclusion hierarchy, which is a recursive inclusion of elements within each other, (iii) control hierarchy, which is the ``common sense'' use of the term as ranked entities controlling other entities with lower rank, and (iv) level hierarchy, which captures the multi-scale nature of complex systems as ontologically distinct levels (or scales). For the particular study of social systems, he concludes that hierarchical levels may be entangled, that upward and downward causations are both essential, and that at least three levels (micro, meso, macro) are generally needed to capture the complexity of such systems. In a more philosophical account of complexity, \cite{morin1980methode} constructs a hierarchical method of interdisciplinary knowledge, insists on the tension between dependency and interdependency or between opening and closing (echoing ideas from \cite{holland2012signals}), and develops an implicit hierarchy of social systems when hypothesizing the emergence of third-type societies (swarm intelligence among humans). Different types of complexity may be related to different types of hierarchy, as \cite{raimbault:halshs-02089520} proposes, and hierarchy would indeed be endogenous to theories of complexity. \cite{allen2017multiscale} develop a multiscale information theory in which the information profile across scales, or hierarchical levels, allows quantifying the complexity of a system. The complex adaptive system theory of \cite{holland2012signals} considers complex systems as systems of boundaries that filter signals, implying an inclusion and scale hierarchy between boundaries. Theories of scaling, such as the one synthesized by \cite{west2017scale}, rely on the quantification of hierarchy in certain dimensions of systems, captured by the exponents of scaling laws. Hierarchy may be endogenous to complexity, or to knowledge of the complex itself, since for example \cite{fanelli2013bibliometric} provides empirical evidence of a ``hierarchy of sciences'', in the sense of the possibility of reaching theoretical and methodological consensus. This corresponds in some sense to the ``ontological complexity'' of \cite{pumain2003approche}, which relies on the number of viewpoints needed to grasp a system, or the number of perspectives in an applied perspectivism framework \citep{raimbault2020relating}. Whether linked to systems themselves or to models and theories of them, hierarchy appears to be tightly linked to complexity. 
\subsection{Territorial systems and hierarchy} Urban systems, and more generally territorial systems, are particularly linked to hierarchy \citep{pumain2006hierarchy}: they indeed encompass all the aforementioned meanings (order hierarchy between settlement sizes for example, inclusion hierarchy between territorial boundaries, control hierarchy through governance structures, and more importantly level hierarchy through their multi-scalar nature). \cite{batty2006hierarchy} shows that hierarchies are inherent to urban systems, as fat-tailed distributions of settlement sizes are already produced by simple models of urban growth, and suggests also that urban design processes imply underlying overlapping hierarchies. \cite{pumain2006alternative} links hierarchical selection and hierarchical diffusion of innovation across cities to the long-term dynamics of urban systems. \cite{pumain:halshs-02303136} recalls that interactions in systems of cities are tightly linked to the emergence of urban hierarchies. Generally, scaling laws in urban systems can be considered as systematic manifestations of a hierarchical structure \citep{pumain2004scaling}, which is more complex than a simple order hierarchy, since scaling patterns vary with the definition of cities \citep{cottineau2017diverse}. Hierarchical properties can be observed in several dimensions of urban systems. For example, transportation systems are hierarchical in their structure \citep{yerra2005emergence} but also in their patterns of use, such as transportation flows \citep{jiang2009street}. Urban hierarchies are tightly related to the hierarchies of their transportation links \citep{bigotte2010integrated}, and this concerns different modes of transportation networks, including the air network \citep{dang2012hierarchy}. The global distribution of multinational firms also exhibits strong hierarchical patterns \citep{godfrey1999ranking}. Governance structures are organized following both an inclusion hierarchy for administrative areas \citep{li2015administrative} and level hierarchies, for example for economic processes \citep{liao2017opening}. Territorial systems are therefore intrinsically hierarchical in their multiple dimensions, which is tightly linked to their different complexities \citep{2019arXiv190109869R}. \subsection{Co-evolution and hierarchy} Hierarchy in complex systems is furthermore intrinsically linked to the concept of co-evolution. Following \cite{lane2006hierarchy}, the approach to complex adaptive systems proposed by \cite{holland2012signals} integrates levels and nested hierarchies, since it considers complex systems as ensembles of boundaries that filter signals. \cite{holland2012signals} formalizes complex adaptive systems as these structures of boundaries, which form co-evolution niches for the elements and subsystems within a given boundary. The concept is slightly different from the concept of an ecological niche, which more generally designates a region in a parameter space quantifying the environment in which a species can live. In ecology, \cite{pires2011food} show that the emergence of mutualistic species networks implies some feeding hierarchy. In the context of economic and geographical processes, \cite{volberda2003co} distinguish, for the co-evolution of firms, between a genealogical hierarchy (evolutionary processes in the biological sense) and an ecological hierarchy (co-evolutionary economic processes). \cite{liu2013exploring} suggest that air networks co-evolve with firm networks and that their hierarchies are thereby related. 
\cite{raimbault2019modeling} introduces a co-evolution model to study interactions between transportation networks and territories, which from an urban systems viewpoint in the sense of \cite{pumain2006evolutionary} relates to urban hierarchies. \cite{levinson2007co} confirm a correspondence between urban and network hierarchies in a co-evolution model. Within the SimpopNet model for the co-evolution of cities and networks \citep{schmitt2014modelisation}, discrete hierarchical levels of network links, corresponding to successively improved transportation technologies, are a core component of the simulation rules. \cite{raimbault2020unveiling} furthermore showed that the level of initial urban hierarchy, in terms of rank-size slope, had significant impacts on model outcomes. Studying hierarchies in the context of the co-evolution of transportation networks and territories is thus a relevant entry point to the underlying concepts, including complexity, hierarchy, co-evolution and territorial systems. \subsection{Proposed approach} \cite{pumain2006introduction} recalls, in the context of social systems, some remaining open methodological questions: how are hierarchies produced? How do hierarchies evolve? What discriminates between continuous and discrete hierarchical organisations? Our contribution brings new elements of an answer to the first two questions above, in the particular case of the co-evolution of transportation networks and territories. It is situated at the intersection of the three contexts given previously, namely hierarchy in complex systems and more particularly in territorial systems, seen through the prism of co-evolutive processes. More precisely, we propose to systematically explore a macroscopic co-evolution model for cities and networks, and to study its properties regarding the hierarchies of both components, in terms of the final hierarchies produced but also in terms of the relations between these hierarchies. Establishing links between microscopic processes and emergent hierarchical patterns through model exploration informs on possible drivers of these macroscopic patterns. Our contribution relies on three aspects: (i) we introduce a comprehensive set of indicators tailored to the study of hierarchy in territorial systems; (ii) we systematically explore the physical-network version of the co-evolution model introduced by \cite{raimbault2018modeling}, where only the virtual network was studied extensively; and (iii) we apply a novelty search algorithm to establish the feasible space of hierarchy patterns which can be produced by the model. The rest of this chapter is organized as follows. We first describe the model used and introduce a novel set of indicators to quantify hierarchy in territorial systems. We then describe the results of a grid exploration of the co-evolution model using these indicators, both with the physical and virtual networks, and establish the feasible space of model outputs. We finally discuss the implications of these results for hierarchy within co-evolutionary processes. \section{Co-evolution model} \subsection{Context} The issue of interactions between transportation networks and territories remains an open question, for which different approaches have been proposed \citep{offner1993effets,espacegeo2014effets}. \cite{raimbault2018caracterisation} has explored a co-evolution approach, in the sense that both dynamics have circular causal relationships. 
More precisely, \cite{raimbault2019modeling} introduces a definition of co-evolution in that particular context, based on the aforementioned co-evolution niches \citep{holland2012signals}, for which an empirical characterization method based on lagged correlations is developed \citep{raimbault2017identification}. As its application to empirical data yields mixed or inconclusive results, simulation models provide a medium to indirectly link microscopic processes with a potentially emergent co-evolution, both at the mesoscopic scale \citep{raimbault2019urban} and at the macroscopic scale \citep{raimbault2018modeling}. This last model is the one used in this study. \subsection{Model description} The co-evolution model for cities and transportation networks at the macroscopic scale extends the spatial interaction model introduced by \cite{raimbault2018indirect} by adding dynamical speeds to network links. A system of cities is represented by cities as agents and network links between them. Interaction flows are determined with a spatial interaction model and drive city growth rates, while network links evolve according to the flow traversing them. See \cite{raimbault2018modeling} for a full mathematical description of the model. We describe below the specification and parameters used here; a stylized implementation of one time step is also sketched below. More precisely, a time step of the simulation model consists of the following steps: \begin{enumerate} \item cities evolve their populations following gravity interaction flows of unit weight $w_G$, as a scaling function of population with exponent $\gamma_G$, and with a distance decay parameter $d_G$; cities do not have endogenous growth in our setting (Gibrat model) as we focus our study on interactions; \item flows are assigned to network links, either (i) to the direct link between the two cities in the case of the virtual network, or (ii) through a shortest path assignment algorithm (betweenness centrality) in the case of a physical network; \item links evolve their speed with a thresholded self-reinforcement function of flows, with a maximal travel time decrease $g_M$; a threshold for flows, above which (resp. below which) speeds increase (resp. decrease), determined by a flow quantile parameter $\phi^{(q)}_0$; and a scaling relation to relative flows with exponent $\gamma_N$. \end{enumerate} The model can be initialized with real data or by generating a synthetic initial configuration, which has its own parameters \citep{raimbault2019space}. In our case, $N = 30$ cities are randomly distributed in a uniform space of width $W=200$~km, and populations follow a rank-size law with parameter $\alpha_S$. In the virtual network case, all pairwise links are initialized with pace one, while in the physical network case a perturbed grid network is used, as described in \cite{raimbault2018modeling}. We show in Fig.~\ref{fig:examples} model runs for synthetic virtual and physical networks, and for the French system of cities with railway data. We visually observe that the number of important links is smaller in the physical case, as could be expected since infrastructure is shared by neighboring flows. For the real system, the most important emerging links correspond roughly to the actually existing high-speed lines. 
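To make the above description more concrete, the following sketch implements one stylized time step of the virtual-network variant. It is our own schematic rewriting of the verbal rules above: the exact functional forms (in particular the sigmoid-like thresholded speed update) are given in \cite{raimbault2018modeling}, so the expressions used here should be read as illustrative assumptions rather than the exact model equations.
\begin{verbatim}
import numpy as np

def coevolution_step(P, d_eff, w_G, gamma_G, d_G, g_M, gamma_N, phi0_q):
    """One stylized time step: P are city populations, d_eff the
    effective pairwise network distances (virtual-network case)."""
    # 1. gravity interaction flows computed on effective distances
    Pn = P / P.sum()
    phi = w_G * np.outer(Pn, Pn) ** gamma_G * np.exp(-d_eff / d_G)
    np.fill_diagonal(phi, 0.0)
    # populations grow with interaction flows (no endogenous growth)
    P = P * (1.0 + phi.sum(axis=1))
    # 2. virtual network: each flow is assigned to its direct link
    # 3. thresholded self-reinforcement: links with a flow above the
    # phi0_q quantile get faster (up to a relative travel time
    # decrease g_M), the other links decay
    thr = np.quantile(phi[phi > 0], phi0_q)
    rel = (phi / thr) ** gamma_N
    d_eff = d_eff * (1.0 - g_M * (rel - 1.0) / (rel + 1.0))
    return P, d_eff
\end{verbatim}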
\begin{figure} \includegraphics[width=0.95\linewidth]{Fig1.png} \caption{\footnotesize\textbf{Examples of different setups for the co-evolution model.} \textit{(Top row)} Synthetic system of cities with virtual network, initial configuration (left) and after $t_f=30$ time steps (right), with parameters $\alpha_S = 1$, $\phi^{(q)}_0 = 0.9$, $g_M = 0.01$, $\gamma_N=2$, $w_G=4.7e-3$, $d_G=248$, $\gamma_G=0.9$; \textit{(Middle row)} Synthetic system of cities with physical network, initial configuration (left) and after $t_f=30$ time steps (right), with parameters $\phi^{(q)}_0 = 0.7$, $g_M = 0.05$ and the same other parameters as the first configuration; \textit{(Bottom row)} French system of cities simulated between 1975 (left) and 1999 (right) with three time steps, with parameters $\phi^{(q)}_0 = 0.8$, $g_M = 0.2$, $\gamma_N=4$ and same others. City color and size give the population, and link thickness the speed (rescaled at each time step).\label{fig:examples}} \end{figure} In our exploration settings, the model thus has seven parameters (for which we give practical boundaries in experiments): the initial population hierarchy $\alpha_S \in \left[0.1; 2.0\right]$, the gravity interaction weight $w_G \in \left[1e-4; 1e-2 \right]$, the gravity interaction hierarchy $\gamma_G \in \left[0.0 ; 5.0 \right]$, the gravity decay $d_G \in \left[1.0; 500.0 \right]$, the network maximal speed growth $g_M \in \left[0.0; 0.05 \right]$, the network growth hierarchy $\gamma_N \in \left[0.0; 5.0\right]$, and the network threshold quantile $\phi_0^{(q)} \in \left[0;1\right]$. \subsection{Quantifying hierarchy in systems of cities} Indicators to understand macroscopic trajectories in simulated systems of cities have been introduced by \cite{raimbault2020unveiling}. They include some indicators related to hierarchy but are not specifically focused on this aspect. We now propose a broad set of indicators to capture different dimensions of hierarchy. \subsubsection{Static quantification of hierarchy} The most straightforward way to quantify hierarchy is to use the Zipf rank-size law in the case of population, or more generally scaling laws for other dimensions of the urban system. Let $Y_i$ be the variable for which the hierarchy is estimated. Assuming the $Y_i$ are sorted in decreasing order, the Ordinary Least Squares estimation of $\log \left(Y_i\right) \sim \log \left( i\right)$ gives an estimate of the rank-size slope $\alpha \left[Y\right]$, which is a proxy of hierarchy. Additional indicators to describe the size distribution more accurately include for example the primacy index. We take a generic approach to this issue of adding degrees of freedom to capture the distribution, and use a piecewise linear regression implementing the algorithm of \cite{muggeo2003estimating}. Given the distributions observed empirically and the ones generated by simulation models, going beyond one breakpoint does not bring significant improvement. We thus consider the estimated slopes and breakpoint as refined indicators of the hierarchy, given as $\alpha_1 \left[Y\right]$, $\alpha_2 \left[Y\right]$ and $\Psi \left[Y\right]$. Finally, to quantify interactions between two aspects, a correlation between two hierarchies informs on how well they correspond in terms of ranks; it is computed as $r_s\left[X_i,Y_i\right]$ for two variables $X_i,Y_i$, with $r_s$ an estimator of the Spearman rank correlation. 
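As an illustration, the rank-size slope $\alpha\left[Y\right]$ requires only a few lines of code; the sketch below (our addition) uses a plain OLS fit in log-log space. The segmented one-breakpoint variant giving $\alpha_1$, $\alpha_2$ and $\Psi$ can be obtained analogously with implementations of the algorithm of \cite{muggeo2003estimating}.
\begin{verbatim}
import numpy as np

def rank_size_slope(y):
    """OLS slope of log(Y_i) ~ log(i), with Y sorted in decreasing
    order; a more negative slope means a more hierarchical system."""
    y = np.sort(np.asarray(y, dtype=float))[::-1]
    ranks = np.arange(1, len(y) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(y), 1)
    return slope

# populations drawn from an exact rank-size law with exponent 1
print(rank_size_slope(1e6 / np.arange(1, 31)))  # -> -1.0
\end{verbatim}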
\subsubsection{Dynamical indicators} The rank correlation between the initial and final distributions of a variable measures how much an order hierarchy was modified, which is different from the variation of hierarchy given by the variations of the previous indicators, such as the rank-size slope. Dynamical indicators for hierarchy regimes can furthermore be defined in several ways: dynamics of the rank correlation between two variables, time-series properties of rank-size trajectories, lagged rank correlations. Studying these extensively is out of the scope of this chapter, and we will consider differences between initial and final hierarchies to capture dynamics. \subsubsection{Spatialized indicators} Finally, some spatial extensions of hierarchy indicators can be introduced. A spatially non-stationary version of a scaling law would write $Y_i (\vec{x}) \sim \left(\frac{X_i(\vec{x})}{X_0 (\vec{x})}\right)^{\alpha (\vec{x})}$, where $\vec{x}$ is the spatial position, assuming that samples can be defined at each point in space. In practice, a discrete version could be more relevant, for which center points $\vec{x}_k$ are defined, samples consist of the points within the Thiessen polygons of the centers, and the exponents $\alpha (\vec{x}_k)$ are estimated for each center. Some heuristics should be developed to estimate such a discrete non-parametric scaling law; this also remains out of our scope here. \section{Results} \subsection{Implementation} The model is implemented in NetLogo \citep{tisue2004netlogo}, which is a good compromise between performance and interactivity, the former being necessary for a model with such a spatialized network. The model is explored using the OpenMOLE model exploration software \citep{reuillon2013openmole}, for its integrated designs of experiments (DOE) and exploration methods, but also for the seamless access it provides to high-performance computing infrastructures. Source code for the model, exploration scripts, result analysis, and results are available on a git repository at \texttt{https://github.com/JusteRaimbault/CoevolutionNwTerritories}. Large datasets of simulation results are available on the Dataverse at \texttt{https://doi.org/10.7910/DVN/6GUKOX}. \subsection{Hierarchy patterns} We now turn to a first basic exploration of the model, using a grid exploration of the parameter space. A first broad grid varying all parameters with 3 steps each and 20 model repetitions, for both the virtual and physical models, allows identifying the dimensions along which no significant or qualitative variation in model behavior occurs. We then run a more targeted exploration by taking $g_M = 0.05$ and $\gamma_N = 1$ and varying $\alpha_S \in \{0.5, 1.0, 1.5 \}$, $\phi_0^{(q)} \in \{0.1, 0.5, 0.9 \}$, $\gamma_G \in \left[0.5;1.5\right]$ with a step of 0.2, and $d_G \in \left[10; 210 \right]$ with a step of 50. We consider the static indicators of hierarchy and their variation between initial and final time, applied to city populations $P$ and city closeness centralities $C$. \begin{figure} \includegraphics[width=\linewidth]{Fig2.png} \caption{\textbf{Patterns of hierarchy in the model with a virtual network.} Each indicator is shown as a function of $d_G$ for varying $\gamma_G$ (color), varying $\phi_0^{(q)}$ (columns) and varying $\alpha_S$ (rows). 
\textit{(Top Left)} Difference in the rank-size exponent for populations between final time and initial time; \textit{(Top Right)} Difference in the rank-size exponent for centralities; \textit{(Bottom Left)} Difference in the rank correlation between populations and centralities; \textit{(Bottom Right)} Difference in the breakpoint of the hierarchy of centralities. \label{fig:gridexplo-virtual}} \end{figure} The variations of some indicators exhibiting an interesting behavior are shown for the model with the virtual network in Fig.~\ref{fig:gridexplo-virtual}. The evolution of the city population hierarchy, captured by $\alpha_{\Delta}\left[P\right] = \alpha \left[P\right](t_f) - \alpha_S$ (top left panel of Fig.~\ref{fig:gridexplo-virtual}), exhibits a low qualitative sensitivity to the initial hierarchy $\alpha_S$ (rows), but the subplots are translated and a significant quantitative sensitivity is observed: in other words, more hierarchical systems produce more hierarchy, as can be expected in such self-reinforcing processes. The consistently negative values mean that hierarchy always increases. As a function of the gravity decay $d_G$, a systematic decrease in absolute value is observed for the lowest values: very local interactions mitigate the increase in hierarchy. The gravity interaction hierarchy $\gamma_G$ has a monotonic and expected effect, systematically increasing the hierarchy. Finally, an effect of the co-evolution with network distances, which yields non-monotonic behavior, is worth noticing: when the network threshold $\phi_0^{(q)}$ increases, a minimum of $\alpha_{\Delta}\left[P\right]$ is observed for high $\gamma_G$ values and low initial hierarchy. In that context, an intermediate range of spatial interaction yields more hierarchical systems. As only a few links increase their speed at this value of the network threshold, this means that long-range interactions are no longer amplified by the network. Thus, the evolution of city hierarchies depends on several parameters, in a non-monotonic way when interacting with network processes. Regarding the evolution of network hierarchies $\alpha_{\Delta}\left[C\right]$ (top right panel of Fig.~\ref{fig:gridexplo-virtual}), the most significant effect is that of the network threshold $\phi_0^{(q)}$, which exhibits an inversion of the direction of variation as a function of the distance decay $d_G$ when the network threshold increases. When all links are allowed to grow their speed, longer-span interactions lead to more hierarchical networks: indeed, the probability for two large cities to interact is then higher, and their flow will be favored in terms of network growth. But when only a small proportion of links improve their travel time while most of them decay, then higher network hierarchies are produced by the most local interactions. In a setting of scarce network investments, taking long-range interactions into account thus gives a more balanced network than a local approach, which is somewhat counter-intuitive. The behavior of the rank correlation between population and centrality $\rho_r \left[P, C \right]$ (bottom left panel of Fig.~\ref{fig:gridexplo-virtual}) informs on the co-evolution processes between the territory and the transportation network. Empirically, the better connectivity of larger cities has been suggested by \cite{bretagnolle2003vitesse} as a signature of co-evolution processes. 
Our results confirm that such co-evolution processes indeed produce a correspondence between city and network hierarchies, as high correlation values are attained for interaction spans over 100~km. The interaction distance furthermore systematically increases the correlation, and local interactions yield a close-to-zero correlation, except for initially highly hierarchical systems with a high network threshold (in which case some very large cities still construct a local interaction system). The correlation is maximal at an intermediate value of $\phi_0^{(q)}$, which means that the link selection process plays a role in the synchronization between the two hierarchies, and that the co-evolution process captures more than just self-reinforcement. Finally, as we introduced segmented regression as a finer characterization of hierarchical patterns in a system of cities, we observe an interesting behavior for the variation of the breakpoint for centralities $\Psi_{\Delta}\left[C\right]$. The breakpoint always shifts in time to lower values, meaning that the distribution becomes more unequal in time regarding the most dominating links (in the sense that fewer links are included in the head of the hierarchy). The shift is stronger when the interaction distance is larger and the network threshold is higher, meaning that favoring fewer long-range links breaks the hierarchy in a more uneven way. \begin{figure} \includegraphics[width=\linewidth]{Fig3.png} \caption{\textbf{Patterns of hierarchy in the model with a physical network.} With the same design of experiments as in Fig.~\ref{fig:gridexplo-virtual}, each indicator is shown as a function of $d_G$ for varying $\gamma_G$ (color), varying $\phi_0^{(q)}$ (columns) and varying $\alpha_S$ (rows). \textit{(Top Left)} Difference in the rank-size exponent for populations between final time and initial time; \textit{(Top Right)} Difference in the rank-size exponent for centralities; \textit{(Bottom Left)} Difference in the rank correlation between populations and centralities; \textit{(Bottom Right)} Difference in the breakpoint of the hierarchy of centralities.\label{fig:gridexplo-physical}} \end{figure} Our second experiment to study patterns of hierarchy uses the exact same design of experiments as the previous one, but with the physical network. We show in Fig.~\ref{fig:gridexplo-physical} the same indicators for the same parameter space. The discrepancy between the two behaviors is particularly relevant from a thematic point of view, as it reveals the role of spatializing and assigning network flows, even in such a case where no congestion is included. Some patterns are similar, but some important differences can be observed. Globally, the behaviors of the population hierarchy, the rank correlation, and the centrality hierarchy breakpoint are qualitatively similar. The minimum which existed for population hierarchies at short-range interactions mostly disappears (although still slightly present for $\gamma_G = 1.5$, $\phi_0^{(q)} = 0.9$ and $\alpha_S = 1$): spatializing the network removes some output complexity in that case. The rank correlation (bottom panel of Fig.~\ref{fig:gridexplo-physical}), i.e. the correspondence between population and centrality hierarchies, still grows as a function of $d_G$ and exhibits a maximum at the intermediate value of $\phi_0^{(q)}$. However, the effect of the interaction hierarchy $\gamma_G$ is much stronger in this case: more uniform interactions (low $\gamma_G$) lead to a much smaller correlation. 
This means that the approximation of using a virtual network accurately captures the hierarchy correspondence of physical networks only for flows with a superlinear scaling exponent: depending on the type of activities generating the flows, the spatial structure of the network is more or less important. The hierarchy of centralities also behaves quite differently when switching to a physical network (top right panel of Fig.~\ref{fig:gridexplo-physical}). It is in that case roughly insensitive to any parameter when all links are growing ($\phi_0^{(q)} = 0.1$), and always grows as a function of the distance decay $d_G$: longer-range interactions diffuse through most network links and yield less inequality between their speeds. Increasing the hierarchy of interactions still increases the hierarchy of centralities, but the effect is weaker. In a nutshell, constraining the links spatially and making them share flows through the assignment procedure restricts the degrees of freedom of their speed dynamics. \subsection{Hierarchy regimes} \begin{figure} \includegraphics[width=\linewidth]{Fig4.png} \caption{\textbf{Feasible space of hierarchy regimes obtained with the PSE algorithm.} Scatter plots of the three objective dimensions. The color level gives the value of $d_G$, and distributions and correlations between indicators are also stratified following $d_G$. Patterns were filtered to have at least ten stochastic repetitions.\label{fig:pse}} \end{figure} After having inspected the link between parameters and emerging hierarchical patterns through a basic grid exploration, we now turn to a specific experiment aimed at establishing the feasible hierarchy regimes that the model can produce. Indeed, in such complex simulation models, simple DOEs may only capture a part of the potential behavior and miss strong non-linearities. Therefore, the Pattern Space Exploration (PSE) algorithm was introduced by \cite{cherel2015beyond} as a heuristic to approximate the feasible space of a model output, based on a novelty search algorithm \citep{lehman2008exploiting}. We apply this algorithm here with the following three-dimensional pattern space: evolution of the population hierarchy $\alpha_{\Delta}\left[P\right]$, evolution of the centrality hierarchy $\alpha_{\Delta}\left[C\right]$, and final rank correlation between population and centrality hierarchies $\rho_r\left[P,C\right]$. For the first two, looking at dynamics is important to control for the artificial initial level of hierarchy $\alpha_S$ for populations, while the initial centrality hierarchy is solely linked to geometry and exhibits a narrowly peaked distribution with average $-0.2$ (similar pattern for the virtual and physical cases, the physical distribution being a bit wider). These three dimensions capture not only which hierarchies are produced along the two aspects included in the model, but also the relation they have in terms of rank correlation. We run the PSE algorithm using OpenMOLE and distribute the computation on a grid with an island scheme. The grid for patterns, set from previous exploration results, is taken as $\alpha_{\Delta}\left[P\right] \in \left[-0.2;0.2\right]$ with step $0.02$, $\alpha_{\Delta}\left[C\right] \in \left[-0.2;0.2\right]$ with step $0.1$, and $\rho_r \left[P,C\right] \in \left[-1.0,1.0\right]$ with step $0.1$. The variable parameters are the aforementioned model parameters, with the addition of $g_M \in \left[0.0;0.05\right]$. 
The algorithm was run on 500 parallel islands (termination time 10 minutes), for $30,000$ generations (a reasonable convergence in terms of the number of patterns discovered). We show in Fig.~\ref{fig:pse} the scatterplot of the obtained feasible space, conditional on having at least 10 stochastic repetitions (robust patterns). We find that the closeness hierarchy dynamics have a much wider range of possible values than the population hierarchy dynamics, confirming what was obtained with the grid experiment. Furthermore, the possible correlations also have a large span, from $-0.19$ to $0.84$, which means that the model can combine the production of a broad set of hierarchies for population and network, but also of their correlations. These correlations take mostly positive values as expected (mutual reinforcement of hierarchies), but they can also vanish and even be negative: in such a setting, the lowest cities of the urban hierarchy have the highest centralities. These cases correspond to a very low initial hierarchy ($\langle\alpha_S\rangle=0.18$, where the average is taken over points with a negative correlation), a high network reinforcement exponent ($\langle\gamma_N\rangle=3.2$), a low interaction hierarchy ($\langle\gamma_G\rangle=0.88$), and long-range interactions ($\langle d_G\rangle=228$). This can be interpreted as diffuse and uniform interactions in a weakly hierarchical system which is mostly dominated by network processes. We can also observe in Fig.~\ref{fig:pse}, for the $(\alpha_{\Delta}\left[C\right],\rho_r\left[P,C\right])$ point cloud, that around 75\% of the covered surface corresponds to short-range interactions and to extreme values: normal-range interactions produce a restricted output space. Finally, it is interesting to note the upper and lower boundaries of the $(\alpha_{\Delta}\left[P\right],\rho_r\left[P,C\right])$ point cloud: the increase in population hierarchy fixes roughly linear upper and lower bounds on correlations: a high absolute increase of hierarchy implies high correlations, while correlations cannot be too high for small variations of the population hierarchy. Altogether, this experiment shows the high diversity of hierarchy regimes that the model can produce. \begin{table} \caption{Linear regression analysis of model behavior based on PSE patterns. Each model is estimated with Weighted Least Squares, with weights being the number of stochastic samples. 
Significance levels: (***) $p \simeq 0$; (*) $p < 0.01$; () $p > 0.1$.\label{tab:regpse}} \centering \begin{tabular}{@{\extracolsep{5pt}}|l|cc|cc|cc|} \hline Model & $\alpha_{\Delta}\left[P\right]$ & & $\alpha_{\Delta}\left[C\right]$ & & $\rho_r\left[P,C\right]$ & \\ \hline Constant & $1.04\cdot 10^{-2}$ & *** & $0.15$ & *** & $-0.27$ & *** \\ $\alpha_S$ & $-7.2\cdot 10^{-3}$ & *** & $-6.9\cdot 10^{-3}$ & & $-1.4\cdot 10^{-2}$ & \\ $\phi_0^{(q)}$ & $-8.6\cdot 10^{-4}$ & & $-0.32$ & *** & $8.4\cdot 10^{-2}$ & *** \\ $g_M$ & $7.5 \cdot 10^{-2}$ & * & $-6.3$ & *** & $1.6$ & *** \\ $\gamma_N$ & $-2.1\cdot 10^{-3}$ & *** & $-4.2\cdot 10^{-2}$ & *** & $3.2\cdot 10^{-2}$ & *** \\ $w_G$ & $-6.9$ & *** & $15.6$ & *** & $65.5$ & *** \\ $\gamma_G$ & $-8.2\cdot 10^{-3}$ & *** & $-2.9\cdot 10^{-3}$ & * & $7.6\cdot 10^{-2}$ & *** \\ $d_G$ & $4.6\cdot 10^{-5}$ & *** & $5.0\cdot 10^{-4}$ & *** & $-5.2\cdot 10^{-4}$ & *** \\ \hline Observations & 5208 & & 5208 & & 5208 & \\ Adjusted R$^{2}$ & 0.40 & & 0.70 & & 0.41 & \\ \hline \end{tabular} \end{table} Finally, as the outputs produced by the PSE algorithm are assumed to be mostly representative of what the model can produce, we can expect statistical models linking parameters and indicators to capture most of its behavior. We therefore propose in Table~\ref{tab:regpse} a linear regression analysis of model behavior. The estimation is done on the full PSE population, but with weighting according to the number of stochastic samples, in order to avoid biases due to non-robust patterns. Most of the variations observed in the grid experiment are confirmed, such as hierarchies increasing with $d_G$ or decreasing with $\gamma_G$. The overall behavior of the correlation is opposite to what was observed as a function of $d_G$, since it decreases. It is also interesting to note that the centrality hierarchy and the correlation are not significant in $\alpha_S$, while the population hierarchy is not significant in $\phi_0^{(q)}$: on these dimensions, the entanglement between cities and the transportation network is not statistically detectable by a linear model (since these non-significant links occur between a city indicator and a network parameter on the one hand, and between a network indicator and a city parameter on the other hand). \section{Discussion} Our model exploration results have implications for the thematic question of hierarchy in urban systems and the role of the co-evolution between cities and networks in its dynamics. 
We showed several stylized facts which have non-trivial implications, including: (i) the fact that urban hierarchy depends on network processes, and that in some cases this link is non-monotonic, which introduces an additional complexity in planning such infrastructures at a macroscopic scale when put in a long-time co-evolutionary context; (ii) the fact that the correlation between urban hierarchy and network hierarchy is most of the time positive, but that it can take a broad range of values and even be negative, which also challenges the reductionist view of a direct correspondence between the hierarchy of a city and its accessibility, since the link depends on several parameters and on the type of interactions considered; (iii) the fact that the conclusions obtained with the physical network model are globally qualitatively similar to the conclusions obtained with the virtual network, but that the behavior still differs significantly in some regions of the parameter space for some indicators, which means that in some cases such a simplification is acceptable while in others it will miss some crucial processes; (iv) the fact that the realm of possible hierarchy regimes is very broad, surely much broader than the existing regimes. This last point opens the issue of comparing this approach with data and possibly identifying hierarchy regimes in existing urban systems. \cite{raimbault2018modeling} applied this model to real population data and real rail network distance matrices in the case of the French urban system, by calibrating it on population and distance trajectories. As the model is fitted on a moving window in time, the temporal trajectory of the fitted parameters may inform on the actual regime the urban system is in. However, such conclusions would be more robust if applied to different urban systems, as \cite{raimbault2019evolutionary} does for six large urban systems when benchmarking similar interaction growth models. A purely empirical characterization of hierarchy regimes, using the indicators introduced here, would also be a relevant entry point to this issue, but the lack of transportation data over long time scales and broad spatial spans remains a difficult obstacle to overcome. The methodology to understand hierarchical patterns in systems of cities, and the model itself, are also open to several potential developments. For example, the idea of spatial non-stationarity in estimating scaling laws, which would in a sense be linked to the existence of urban subsystems with their own hierarchical patterns, should be developed in methodological terms. A heuristic to optimize the adjustment of such a non-stationary model has to be introduced, and may be difficult to elaborate, since spatial neighborhood is not necessarily the rule in constituting subsystems of cities (large global metropolises may form a subsystem more tightly linked than any of them is with its hinterland, in Europe for example). This also relates to the issue of the relevant scales at which to identify hierarchies. Regarding the model itself, it remains very simple and not fully realistic, in the sense that, similarly to \cite{xie2009topological}, no links are added; only the speeds of existing links are updated. On the contrary, road network growth models at other scales, such as \cite{raimbault2019urban}, focus on the addition of links. Bridging these two approaches would be a relevant extension of the model studied here. Finally, our results can be put into a wider theoretical perspective. 
As explained in the introduction, hierarchies, in the sense of the imbrication of subsystems at multiple levels, are endogenous to complex systems. At a fixed scale, quantitative indicators such as the ones we used capture emergent patterns of this organisation, such as the hierarchical structure of systems of cities in terms of scaling laws. Thus, to understand and manage such systems in a resilient and adaptive way, multi-scale approaches embracing these hierarchies are necessary, as put forward by \cite{rozenblat2018conclusion}. Our model is a first suggestion of scale integration, since in the physical network case cities are at the macroscopic scale while the network is at a finer, mesoscopic scale. \section{Conclusion} We explored here the concept of hierarchy in the particular context of the co-evolution of transportation networks and cities. More particularly, we introduced a set of indicators to quantify hierarchy patterns, and systematically studied a co-evolution model for cities and networks, at two abstraction levels for the network. Our exploration results provide some non-trivial stylized facts and inform on the diversity of regimes the model can produce. This provides an illustration of how to study hierarchy in territorial systems along two complementary dimensions, in terms of how each component hierarchically organizes and of the actual correspondence between the two hierarchies. \section{Acknowledgements} Results obtained in this paper were computed on the vo.complex-system.eu virtual organization of the European Grid Infrastructure (http://www.egi.eu). We thank the European Grid Infrastructure and its supporting National Grid Initiatives (France-Grilles in particular) for providing the technical support and infrastructure. This work was funded by the Urban Dynamics Lab grant EPSRC EP/M023583/1.
\section{Introduction} In normal metal / superconductor (N/S) junctions, Andreev reflection (AR)\cite{Andreev} is one of the most important processes for low-energy transport. AR is a process in which an electron with up spin injected from N at an energy below the energy gap $\Delta$ is converted into a reflected hole with up spin. To describe the charge transport in N/S junctions, Blonder, Tinkham and Klapwijk (BTK) proposed a formula for the calculation of the tunneling conductance\cite{BTK}. A gap-like structure and the doubling of the tunneling conductance appear in the voltage dependence due to the AR. This method has been extended to ferromagnet / superconductor (F/S) junctions and used to estimate the spin polarization of the F layer experimentally\cite{Tedrow,Upadhyay,Soulen}. In F/S junctions, AR is suppressed because the retro-reflectivity is broken by the exchange field in the F layer\cite{de Jong}. As a result, the conductance of the junctions is suppressed\cite{FS}. Spin-dependent transport in F/S junctions is an important subject in the field of spintronics, which aims to fabricate novel devices manipulating the electron's spin. Spintronics has recently received much attention because of its potential impact on electronic devices and quantum computing\cite{Zutic}. Among recent works, many efforts have been devoted to studying the effect of spin-orbit coupling on transport properties of the two-dimensional electron gas (2DEG)\cite{Hirsch,Governale,Streda,Mishchenko,Schliemann,Sinova}. The pioneering work by Datta and Das suggested a way to control the precession of the spins of electrons by the Rashba spin-orbit coupling (RSOC)\cite{Rashba} in F/2DEG/F junctions\cite{Datta}. This spin-orbit coupling depends on the applied electric field and can be tuned by a gate voltage. On the other hand, spin-dependent transport based only on spin-orbit coupling, without ferromagnets, e.g., the spin Hall effect, is also a hot topic\cite{Edelstein,Inoue,Watson}. As in the case of the exchange field in F/S junctions, RSOC may affect the tunneling conductance in 2DEG/S junctions because RSOC mixes spin-up and spin-down states. The RSOC induces an energy splitting which lifts the spin degeneracy, but this splitting does not break time-reversal symmetry, unlike the exchange splitting in a ferromagnet (see Fig. \ref{f1}). Therefore transport properties in 2DEG/S junctions may be qualitatively different from those in F/S junctions. However, in 2DEG/S junctions the effect of RSOC on transport phenomena has not been studied well. It is desirable to construct a formalism incorporating the effect of the RSOC in these junctions, and a BTK-like formula would be a natural starting point. However, the BTK derivation of the conductance cannot be directly extended to 2DEG/S junctions because the velocity operator acquires off-diagonal components due to RSOC. In this paper we present a general method to derive the conductance in superconducting junctions which is applicable to an arbitrary velocity operator with off-diagonal components. Applying it, we calculate the tunneling conductance in 2DEG/S junctions, compare it with that in F/S junctions and clarify how RSOC affects the AR and normal reflection probabilities. The obtained results can be useful for the design of mesoscopic 2DEG/S junctions and for a better understanding of related experiments.
\begin{figure}[htb] \begin{center} \scalebox{0.4}{ \includegraphics[width=30.0cm,clip]{fig1.eps}} \end{center} \caption{(color online) Schematic illustration of Rashba and Zeeman splitting.} \label{f1} \end{figure} The organization of this paper is as follows. In section II, we provide the detailed derivation of the expression for the conductance. In section III, the results of calculations are presented for various types of junctions. In section IV, the summary of the obtained results is given. In the present paper we confine ourselves to zero temperature. \section{Formulation} We consider a ballistic 2DEG/S junction where the 2DEG/S interface is located at $x=0$ (along the $y$-axis) and has an infinitely narrow insulating barrier described by the delta function $U(x)=U\delta (x)$. \begin{figure}[htb] \begin{center} \scalebox{0.4}{ \includegraphics[width=28.0cm,clip]{fig2.eps}} \end{center} \caption{(color online) Schematic illustration of scattering processes.}\label{f2}\end{figure} The effective Hamiltonian with RSOC is given by \begin{equation} H = \left( {\begin{array}{*{20}c} {\xi _k } & {i\lambda k_ - \theta \left( { - x} \right)} & 0 & {\Delta \theta \left( x \right)} \\ { - i\lambda k_ + \theta \left( { - x} \right)} & {\xi _k } & { - \Delta \theta \left( x \right)} & 0 \\ 0 & { - \Delta \theta \left( x \right)} & { - \xi _k } & { - i\lambda k_ + \theta \left( { - x} \right)} \\ {\Delta \theta \left( x \right)} & 0 & {i\lambda k_ - \theta \left( { - x} \right)} & { - \xi _k } \\ \end{array}} \right) \end{equation} with $k_ \pm = k_x \pm ik_y $, the energy gap $\Delta$, $\xi _k = \frac{{\hbar ^2 }}{{2m}}\left( {k^2 - k_F^2 } \right)$, the Fermi wave number $k_F$, the Rashba coupling constant $\lambda$, and the step function $\theta(x)$. The velocity operator in the $x$-direction is given by\cite{Molenkamp} \begin{equation} v_x = \frac{{\partial H}}{{\hbar \partial k_x }} = \left( {\begin{array}{*{20}c} {\frac{\hbar }{{mi}}\frac{\partial }{{\partial x}}} & {\frac{{i\lambda }}{\hbar }\theta \left( { - x} \right)} & 0 & 0 \\ { - \frac{{i\lambda }}{\hbar }\theta \left( { - x} \right)} & {\frac{\hbar }{{mi}}\frac{\partial }{{\partial x}}} & 0 & 0 \\ 0 & 0 & { - \frac{\hbar }{{mi}}\frac{\partial }{{\partial x}}} & { - \frac{{i\lambda }}{\hbar }\theta \left( { - x} \right)} \\ 0 & 0 & {\frac{{i\lambda }}{\hbar }\theta \left( { - x} \right)} & { - \frac{\hbar }{{mi}}\frac{\partial }{{\partial x}}} \\ \end{array}} \right). \end{equation} As shown in Fig. \ref{f2}, the wave function $\psi(x)$ for $x \le 0$ is represented using eigenfunctions of the Hamiltonian: \begin{equation} \begin{array}{l} \psi(x \le 0) = e^{ik_y y} \left[ {\frac{1}{{\sqrt 2 }}e^{ik_{1(2)} \cos \theta _{1(2)} x} \left( {\begin{array}{*{20}c} {\left( - \right)i\frac{{k_{1(2) - } }}{{k_{1(2)} }}} \\ 1 \\ 0 \\ 0 \\ \end{array}} \right) + \frac{{a_{1(2)} }}{{\sqrt 2 }}e^{ik_1 \cos \theta _1 x} \left( {\begin{array}{*{20}c} 0 \\ 0 \\ {i\frac{{k_{1 + } }}{{k_1 }}} \\ 1 \\ \end{array}} \right) + \frac{{b_{1(2)} }}{{\sqrt 2 }}e^{ik_2 \cos \theta _2 x} \left( {\begin{array}{*{20}c} 0 \\ 0 \\ { - i\frac{{k_{2 + } }}{{k_2 }}} \\ 1 \\ \end{array}} \right)} \right. \\ \left.
{ + \frac{{c_{1(2)} }}{{\sqrt 2 }}e^{ - ik_1 \cos \theta _1 x} \left( {\begin{array}{*{20}c} { - i\frac{{k_{1 + } }}{{k_1 }}} \\ 1 \\ 0 \\ 0 \\ \end{array}} \right) + \frac{{d_{1(2)} }}{{\sqrt 2 }}e^{ - ik_2 \cos \theta _2 x} \left( {\begin{array}{*{20}c} {i\frac{{k_{2 + } }}{{k_2 }}} \\ 1 \\ 0 \\ 0 \\ \end{array}} \right)} \right] \end{array} \end{equation} for an injection wave with wave number $k_{1(2)}$, where $ k_1 = - \frac{{m\lambda }}{{\hbar ^2 }} + \sqrt {\left( {\frac{{m\lambda }}{{\hbar ^2 }}} \right)^2 + k_F^2 } $, $ k_2 = \frac{{m\lambda }}{{\hbar ^2 }} + \sqrt {\left( {\frac{{m\lambda }}{{\hbar ^2 }}} \right)^2 + k_F^2 } $ and $k_{1(2) \pm } = k_{1(2)} e^{ \pm i\theta _{1(2)} } $. $a_{1(2)}$ and $ b_{1(2)}$ are AR coefficients. $c_{1(2)}$ and $d_{1(2)}$ are normal reflection coefficients. $\theta _{1(2)}$ is the angle of the wave with wave number $k_{1(2)}$ with respect to the interface normal. Similarly, for $x \ge 0$, $\psi(x)$ is given by \begin{equation} \psi(x \ge 0) = e^{ik_y y} \left[ {e_{1(2)} e^{ik_F \cos \theta x} \left( {\begin{array}{*{20}c} u \\ 0 \\ 0 \\ v \\ \end{array}} \right) + f_{1(2)} e^{ik_F \cos \theta x} \left( {\begin{array}{*{20}c} 0 \\ u \\ { - v} \\ 0 \\ \end{array}} \right) + g_{1(2)} e^{ - ik_F \cos \theta x} \left( {\begin{array}{*{20}c} v \\ 0 \\ 0 \\ u \\ \end{array}} \right) + h_{1(2)} e^{ - ik_F \cos \theta x} \left( {\begin{array}{*{20}c} 0 \\ { - v} \\ u \\ 0 \\ \end{array}} \right)} \right] \end{equation} with \begin{equation} u = \sqrt {\frac{1}{2}\left( {1 + \frac{{\sqrt {E^2 - \Delta ^2 } }}{E}} \right)}, \quad \quad v = \sqrt {\frac{1}{2}\left( {1 - \frac{{\sqrt {E^2 - \Delta ^2 } }}{E}} \right)} \end{equation} where $E$ is the quasiparticle energy and $\theta$ is the angle of the wave with wave number $k_{F}$ with respect to the interface normal. $e_{1(2)}, f_{1(2)}, g_{1(2)}$ and $h_{1(2)}$ are transmission coefficients. Note that since translational symmetry holds in the $y$-direction, the momenta parallel to the interface are conserved: $k_y=k_F \sin \theta = k_1 \sin \theta _1 = k_2 \sin \theta _2 $. The wave function obeys the boundary conditions\cite{Molenkamp}: \begin{equation} \begin{array}{l} \left. {\psi \left( x \right)} \right|_{x = + 0} = \left. {\psi \left( x \right)} \right|_{x = - 0} \\ \left. {v_x \psi \left( x \right)} \right|_{x = + 0} - \left. {v_x \psi \left( x \right)} \right|_{x = - 0} = \frac{\hbar }{{mi}}\frac{{2mU}}{{\hbar ^2 }}\tau _3 \psi \left( 0 \right) \\ \tau _3 = \left( {\begin{array}{*{20}c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & { - 1} & 0 \\ 0 & 0 & 0 & { - 1} \end{array}} \right). \end{array} \end{equation} Now we derive a formula for the tunneling conductance. Before giving the detailed calculation, we present the essential idea of the derivation: first we calculate the expectation value of the current for each member of the complete set of eigenfunctions; next we sum these expectation values weighted by the corresponding distribution functions; this yields the net current through the junction. The detailed derivation is given in the Appendix.
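Before stating the result, it is worth pausing on the source of the difficulty: the off-diagonal entries of the velocity operator. As a quick symbolic sanity check, one may differentiate the momentum-space Hamiltonian of Eq. (1) on the 2DEG side ($\theta(-x)=1$, $\Delta=0$) with respect to $k_x$ and recover the entries of Eq. (2). The following minimal sketch assumes Python with the sympy library; it is an illustration only and not part of the derivation.
\begin{verbatim}
import sympy as sp

kx, ky = sp.symbols('k_x k_y', real=True)
kF, lam, hbar, m = sp.symbols('k_F lambda hbar m', positive=True)
xi = hbar**2*(kx**2 + ky**2 - kF**2)/(2*m)
km, kp = kx - sp.I*ky, kx + sp.I*ky

# momentum-space Hamiltonian on the 2DEG side (x < 0), Eq. (1) with Delta = 0
H = sp.Matrix([[xi,           sp.I*lam*km,  0,            0],
               [-sp.I*lam*kp, xi,           0,            0],
               [0,            0,           -xi,          -sp.I*lam*kp],
               [0,            0,            sp.I*lam*km, -xi]])

vx = sp.simplify(H.diff(kx)/hbar)
sp.pprint(vx)
# the diagonal entries hbar*k_x/m are the momentum-space form of (hbar/mi) d/dx;
# the off-diagonal entries +/- i*lambda/hbar reproduce Eq. (2) for x < 0
\end{verbatim}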
Finally we obtain the dimensionless conductance represented in the form: \begin{equation*} \begin{array}{l} \sigma _S = N_1 \int_{ - \theta _C }^{\theta _C } {\frac{1}{2}\left[ {\left( {1 + \frac{{k_2 }}{{k_1 }}} \right) + \left| {a_1 } \right|^2 \left( {1 + \frac{{k_2 }}{{k_1 }}} \right) + \left| {b_1 } \right|^2 \left( {1 + \frac{{k_1 }}{{k_2 }}} \right)\lambda _{21} - \left| {c_1 } \right|^2 \left( {1 + \frac{{k_2 }}{{k_1 }}} \right) - \left| {d_1 } \right|^2 \left( {1 + \frac{{k_1 }}{{k_2 }}} \right)\lambda _{21} } \right]} \cos \theta d\theta \\ + N_2 \int_{ - \frac{\pi }{2}}^{\frac{\pi }{2}} {{\mathop{\rm Re}\nolimits} \frac{1}{2}\left[ {\left( {1 + \frac{{k_1 }}{{k_2 }}} \right) + \left| {a_2 } \right|^2 \left( {1 + \frac{{k_2 }}{{k_1 }}} \right)\lambda _{12} + \left| {b_2 } \right|^2 \left( {1 + \frac{{k_1 }}{{k_2 }}} \right) - \left| {c_2 } \right|^2 \left( {1 + \frac{{k_2 }}{{k_1 }}} \right)\lambda _{12} - \left| {d_2 } \right|^2 \left( {1 + \frac{{k_1 }}{{k_2 }}} \right)} \right]} \cos \theta d\theta \\ = \int_{ - \theta _C }^{\theta _C } {\left[ {1 + \left| {a_1 } \right|^2 + \left| {b_1 } \right|^2 \frac{{k_1 }}{{k_2 }}\lambda _{21} - \left| {c_1 } \right|^2 - \left| {d_1 } \right|^2 \frac{{k_1 }}{{k_2 }}\lambda _{21} } \right]} \cos \theta d\theta \\ + \int_{ - \frac{\pi }{2}}^{\frac{\pi }{2}} {{\mathop{\rm Re}\nolimits} \left[ {1 + \left| {a_2 } \right|^2 \frac{{k_2 }}{{k_1 }}\lambda _{12} + \left| {b_2 } \right|^2 - \left| {c_2 } \right|^2 \frac{{k_2 }}{{k_1 }}\lambda _{12} - \left| {d_2 } \right|^2 } \right]} \cos \theta d\theta \\ \end{array} \end{equation*} \begin{equation} \equiv \left( {1 + A_1 + B_1 + C_1 + D_1 } \right)\int_{ - \theta _C }^{\theta _C } {\cos \theta d\theta } + 2\left( {1 + A_2 + B_2 + C_2 + D_2 } \right) \end{equation} where \begin{equation} N_1 =\frac{1}{{1 + \frac{{m\lambda }}{{\hbar ^2 k_1 }}}} \quad \quad N_2 = \frac{1}{{1 - \frac{{m\lambda }}{{\hbar ^2 k_2 }}}}. \end{equation} $N_1$ and $N_2$ are the densities of states, normalized by those with $\lambda=0$, for wave numbers $k_1$ and $k_2$, respectively. $\lambda _{12}$ and $\lambda _{21}$ are defined in the Appendix. The critical angle $\theta _C$ is defined as $\cos \theta _C = \sqrt {\frac{{2m\lambda }}{{\hbar ^2 k_1 }}} $. $\sigma _N$ is given by the conductance for normal states, i.e., $\sigma _S$ for $\Delta=0$. We define the normalized conductance as $\sigma _T =\sigma _S /\sigma _N$ and the parameters $\beta = \frac{{2m\lambda }}{{\hbar ^2 k_F }}$ and $Z = \frac{{2mU}}{{\hbar ^2 k_F }}$. For example, in InGaAs heterostructures, $\beta$ is estimated as $\beta \sim 0.2$.\cite{Grundler,Sato} Here we choose the same effective mass in the 2DEG and S. In most cases the effective mass in the 2DEG is much smaller than that in S. However, we can show that this effect is equivalent to an increase of $Z$. Thus we neglect the difference of the effective masses in the present paper. \section{Results} First we study the normalized tunneling conductance $\sigma _T$ as a function of bias voltage $V$ in Fig. \ref{f3}. For $Z=10$, where the AR probability is low, $\sigma _T$ is almost zero within the energy gap and independent of $\beta$. In contrast, for $Z=1$, $\sigma _T$ is slightly enhanced with the increase of $\beta$ around zero voltage. For $Z=0$, where the AR probability is very high, $\sigma _T$ becomes two for $\beta=0$ within the energy gap. It is reduced by the increase of $\beta$ within the energy gap.
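As an elementary consistency check, at $\lambda = 0$ ($\beta=0$) and normal incidence the reflection probabilities entering Eq. (7) reduce to the classic BTK expressions, for which the normalized conductance can be evaluated directly. The following minimal numerical sketch assumes Python with numpy; the sub-gap and above-gap coefficients are the standard BTK ones quoted from Ref. \cite{BTK}, not computed from Eq. (7). It reproduces the limiting behaviors discussed above: $\sigma_T = 2$ inside the gap for $Z=0$ and a nearly vanishing sub-gap conductance for $Z=10$.
\begin{verbatim}
import numpy as np

def btk_conductance(E, Z, Delta=1.0):
    """Dimensionless conductance 1 + A - B at normal incidence for lambda = 0,
    with the Andreev (A) and normal (B) reflection probabilities of BTK."""
    E = np.asarray(E, dtype=float)
    A = np.empty_like(E); B = np.empty_like(E)
    sub = E < Delta                      # sub-gap: Andreev + normal reflection only
    A[sub] = Delta**2 / (E[sub]**2 + (Delta**2 - E[sub]**2)*(1 + 2*Z**2)**2)
    B[sub] = 1.0 - A[sub]
    e = E[~sub]                          # above the gap
    u2 = 0.5*(1 + np.sqrt(e**2 - Delta**2)/e)
    v2 = 1.0 - u2
    gamma = u2 + Z**2*(u2 - v2)
    A[~sub] = u2*v2/gamma**2
    B[~sub] = (u2 - v2)**2 * Z**2 * (1 + Z**2) / gamma**2
    return 1.0 + A - B

E = np.linspace(1e-6, 2.0, 5)            # bias eV in units of Delta
for Z in (0.0, 1.0, 10.0):
    sigma_N = 1.0/(1.0 + Z**2)           # normal-state conductance
    print(Z, btk_conductance(E, Z)/sigma_N)
\end{verbatim}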
\begin{figure}[htb] \begin{center} \scalebox{0.4}{ \includegraphics[width=18.0cm,clip]{fig3.eps}} \end{center} \caption{(color online) Normalized tunneling conductance with $Z=10$ in (a), $Z=1$ in (b), and $Z=0$ in (c).} \label{f3} \end{figure} Next we study the difference between the effect of the Rashba splitting and that of the Zeeman splitting. We have calculated the conductance in F/S junctions following Ref. \cite{FS}. We plot the tunneling conductance for superconducting states $\sigma _S$ at zero voltage for 2DEG/S junctions in (a)-(c) and for F/S junctions in (d)-(f) of Fig. \ref{f8}, with $Z=10$ in (a) and (d), $Z=1$ in (b) and (e), and $Z=0$ in (c) and (f). In (a)-(c) we show the dependence of $\sigma _S$, normalized by $\sigma _N$ for $\beta=0$, on $\beta$ for various $Z$. For $Z=10$ it has an exponential dependence on $\beta$ but its magnitude is very small, while it has a reentrant behavior as a function of $\beta$ for $Z=1$. For $Z=0$ it decreases linearly as a function of $\beta$. On the other hand, in F/S junctions, the dependence of $\sigma _S$ on the exchange field $U$, normalized by the Fermi energy $E_F$, is qualitatively different. We plot $\sigma _S$ normalized by $\sigma _N$ at $U=0$. The exchange field suppresses $\sigma _S$ independently of $Z$, as shown in (d)-(f). This is because the AR probability is reduced by the exchange field. Therefore the effect of the Rashba splitting on the conductance is essentially different from that of the Zeeman splitting. This can be explained as follows. The Zeeman splitting creates an imbalance in the populations of up- and down-spin electrons. Thus it suppresses the AR, in which pairs of spin-up and spin-down electrons are transmitted into S. On the other hand, the Rashba splitting never causes such an imbalance. Thus it cannot suppress the AR, which results in the various $\beta$ dependences of the conductance. \begin{figure}[htb] \begin{center} \scalebox{0.4}{ \includegraphics[width=28.0cm,clip]{fig8.eps}} \end{center} \caption{ Tunneling conductance for superconducting states at zero voltage as a function of RSOC in 2DEG/S junctions (left panels) and the exchange field in F/S junctions (right panels) with $Z=10$ in (a) and (d), $Z=1$ in (b) and (e), and $Z=0$ in (c) and (f). Here $\beta = \frac{{2m\lambda }}{{\hbar ^2 k_F }}$.} \label{f8} \end{figure} \begin{figure}[htb] \begin{center} \scalebox{0.4}{ \includegraphics[width=27.0cm,clip]{fig6.eps}} \end{center} \caption{(color online) The angular averaged Andreev and normal reflection probabilities for $Z=10$. $A_1$, $A_2$, $B_1$ and $B_2$ are AR probabilities. $C_1$, $C_2$, $D_1$ and $D_2$ are normal reflection probabilities. } \label{f6} \end{figure} In order to explain the line shapes of the conductances, we check the angular averaged normal reflection and AR probabilities as a function of voltage. For large $Z$, AR probabilities are small and normal reflection probabilities reflect the dependence of the densities of states on $\beta$: $N_1$ is a decreasing function of $\beta$, while $N_2$ is an increasing function of $\beta$. Therefore $C_1$ and $C_2$ are reduced and $D_1$ and $D_2$ are enhanced with the increase of $\beta$. Figure \ref{f6} shows the probabilities for $Z=10$. AR probabilities ($A_1$, $A_2$, $B_1$ and $B_2$) are slightly enhanced around $eV=\Delta$ and have similar structures with the increase of $\beta$, while normal reflection probabilities $D_1$ and $D_2$ increase with the increase of $\beta$.
On the other hand, normal reflection probabilities $C_1$ and $C_2$ are reduced with the increase of $\beta$. In other words, reflected waves with wave number $k_1 (k_2)$ are suppressed (enhanced) by RSOC. The enhancement and the suppression compete with each other. Thus the conductance is almost independent of RSOC. For small $Z$, normal reflection probabilities are small. AR probabilities $A_1$ and $B_2$ are reduced with increasing $\beta$. This stems from the mismatch of the Fermi surfaces between the 2DEG and S caused by the increase of $\beta$. In fact, for $Z=0$ (see Fig. \ref{f7}) normal reflection probabilities $C_2$ and $D_1$, and AR probabilities $B_1$ and $B_2$, are slightly enhanced with the increase of $\beta$. Normal reflection probabilities $C_1$ and $D_2$ increase with the increase of $\beta$. On the other hand, AR probabilities $A_1$ and $B_2$ within the energy gap are reduced with the increase of $\beta$. This means that only eigenfunctions with the same wave number as the injection wave are affected by RSOC. From Fig. \ref{f7}, we can understand the suppression of the tunneling conductance by RSOC (see Eq. (7)). \begin{figure}[htb] \begin{center} \scalebox{0.4}{ \includegraphics[width=26.0cm,clip]{fig7.eps}} \end{center} \caption{(color online) The angular averaged Andreev and normal reflection probabilities for $Z=0$. $A_1$, $A_2$, $B_1$ and $B_2$ are AR probabilities. $C_1$, $C_2$, $D_1$ and $D_2$ are normal reflection probabilities. } \label{f7} \end{figure} \clearpage \section{Conclusions} In the present paper we have studied the tunneling conductance in two-dimensional electron gas / insulator / superconductor junctions with RSOC. We have extended the BTK formula and calculated the tunneling conductance. It is found that for a low insulating barrier the tunneling conductance is suppressed by the RSOC, while for a high insulating barrier the tunneling conductance is almost independent of it. We also found a reentrant behavior of the conductance at zero voltage as a function of RSOC for intermediate insulating barrier strength. These phenomena are essentially different from those found in F/S junctions, where the tunneling conductance is suppressed by the exchange field independently of the barrier strength. The present derivation of the conductance is applicable to an arbitrary velocity operator with off-diagonal components. The results suggest the possibility of controlling the AR probability by a gate voltage. We believe that the obtained results are useful for the design of mesoscopic 2DEG/S junctions and for a better understanding of related experiments. In this paper we focus on ballistic 2DEG/S junctions. In diffusive 2DEG/S junctions, the proximity effect plays an important role. The RSOC breaks the inversion symmetry and hence mixes the parity. As a result, "triplet" pairing may be induced in the 2DEG region\cite{Edelstein2}, as predicted in diffusive F/S junctions\cite{Bergeret}. The study in this direction is now in progress. The authors appreciate useful and fruitful discussions with A. Golubov. This work was supported by the NAREGI Nanoscience Project, the Ministry of Education, Culture, Sports, Science and Technology, Japan, the Core Research for Evolutional Science and Technology (CREST) of the Japan Science and Technology Corporation (JST) and a Grant-in-Aid for the 21st Century COE "Frontiers of Computational Science".
The computational aspect of this work has been performed at the Research Center for Computational Science, Okazaki National Research Institutes and the facilities of the Supercomputer Center, Institute for Solid State Physics, University of Tokyo and the Computer Center. \section*{Appendix} Here we give a detailed derivation of the conductance which is applicable to an arbitrary velocity operator with off-diagonal components. For an electron injection and a hole injection from the 2DEG (represented by $\psi _e$ and $\psi _h$, respectively), the resulting currents $j_e$ and $j_h$ in the 2DEG region are given by \begin{equation} j_e = {\mathop{\rm Re}\nolimits} (\psi_e^\dag v_x \tau _3 \psi _e ) \propto \left( {1 + A^{he} + B^{he} - C^{ee} - D^{ee} } \right) \end{equation} \begin{equation} j_h = {\mathop{\rm Re}\nolimits} (\psi _h ^\dag v_x \tau _3 \psi _h ) \propto \left( {1 + A^{eh} + B^{eh} - C^{hh} - D^{hh} } \right). \end{equation} Similarly, for an electron and a hole injection from S (represented by $\psi'_e$ and $\psi'_h$, respectively), the corresponding currents $j_e ^\prime$ and $j_h ^\prime$ in the 2DEG region read \begin{equation} j_e ^\prime = {\mathop{\rm Re}\nolimits} (\psi_{e }^{\prime \dag} v_x \tau _3 \psi '_e ) \propto \left( {F^{ee} + G^{ee} - H^{he} - J^{he} }\right) \end{equation} \begin{equation} j_h ^\prime = {\mathop{\rm Re}\nolimits} (\psi _{h }^{\prime \dag} v_x \tau _3 \psi '_h ) \propto \left( {F^{hh} + G^{hh} - H^{eh} - J^{eh} } \right). \end{equation} Here $A^{he}$ and $B^{he}$ denote AR probabilities and $C^{ee}$ and $D^{ee}$ normal reflection probabilities for electron injection. $F^{ee}$ and $G^{ee}$ are normal transmission probabilities for electron injection. $H^{he}$ and $J^{he}$ are transmission probabilities for the injection of an electron converted into a hole at the interface. Other notations are defined in a similar way. Note that there are four independent eigenfunctions: two kinds of electron-like quasiparticles and two kinds of hole-like quasiparticles. The total current reads \begin{equation*} I = \int_{ - \infty }^\infty {\left( {j_e f(E - eV) - j_h f(E + eV) + j_e ^\prime f(E) - j_h ^\prime f(E)} \right)} dE \end{equation*} \begin{equation} \propto \int_{ - \infty }^\infty {\left( {1 + A^{he} + B^{he} - C^{ee} - D^{ee} } \right)\left( {f(E - eV) - f(E + eV)} \right)} dE. \end{equation} Then the differential conductance at zero temperature has the form: \begin{equation} \frac{{dI}}{{dV}} \propto \left( {1 + A^{he} + B^{he} - C^{ee} - D^{ee} } \right) \end{equation} with the Fermi distribution function $f(E)$ and bias voltage $V$. Here we assume particle-hole symmetry, which results in the relations $X^{ee}=X^{hh}$ and $ X^{he}=X^{eh} (X=A,B,C,D,F,G,H,J)$. The original BTK method\cite{BTK} cannot treat a velocity operator with off-diagonal components. However, the derivation given here is applicable to an arbitrary velocity operator with off-diagonal components. Let us apply the above procedure to our model. For an injection wave with wave number $k_{1}$, the current reads \begin{equation} j_e^1 = \frac{{\hbar ^2 }}{{2m}}\left( {1 + \frac{{k_2 }}{{k_1 }}} \right)k_1 \cos \theta _1 \left( {1 + \left| {a_1 } \right|^2 + \left| {b_1 } \right|^2 \frac{{k_1 }}{{k_2 }}\lambda _{21} - \left| {c_1 } \right|^2 - \left| {d_1 } \right|^2 \frac{{k_1 }}{{k_2 }}\lambda _{21} } \right).
\end{equation} For an injection wave with wave number $k_{2}$, the current is \begin{equation} j_e^2 = \frac{{\hbar ^2 }}{{2m}}\left( {1 + \frac{{k_1 }}{{k_2 }}} \right)k_2 \cos \theta _2 \left( {1 + \left| {a_2 } \right|^2 \frac{{k_2 }}{{k_1 }}\lambda _{12} + \left| {b_2 } \right|^2 - \left| {c_2 } \right|^2 \frac{{k_2 }}{{k_1 }}\lambda _{12} - \left| {d_2 } \right|^2 } \right). \end{equation} Here we define $\lambda _{12}$ and $\lambda _{21}$ as \begin{equation} \lambda _{12} = \frac{{k_1 \cos \theta _1 }}{{k_2 \cos \theta _2 }} \quad \quad \lambda _{21} = \frac{{k_2 \cos \theta _2 }}{{k_1 \cos \theta _1 }}. \end{equation}
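The kinematic inputs to these expressions follow from $\beta$ alone. A small helper sketch (assuming Python with numpy; \texttt{rashba\_kinematics} is a hypothetical name used only for illustration) tabulates $k_1$, $k_2$, the critical angle $\theta_C$ and $\lambda_{21}$:
\begin{verbatim}
import numpy as np

def rashba_kinematics(beta, theta, kF=1.0):
    """Rashba-split wave numbers k1 <= kF <= k2, the critical angle theta_C
    (cos theta_C = sqrt(2 m lambda / hbar^2 k1) = sqrt(beta kF / k1)) and the
    ratio lambda_21 = k2 cos(theta2) / (k1 cos(theta1)), for a given S-side
    angle theta; valid for propagating waves, i.e. kF*sin(theta) <= k1."""
    k1 = kF * (-beta/2 + np.sqrt((beta/2)**2 + 1))
    k2 = kF * ( beta/2 + np.sqrt((beta/2)**2 + 1))
    theta_C = np.arccos(np.sqrt(beta*kF/k1))
    theta1 = np.arcsin(kF*np.sin(theta)/k1)   # conservation of k_y
    theta2 = np.arcsin(kF*np.sin(theta)/k2)
    lam21 = k2*np.cos(theta2) / (k1*np.cos(theta1))
    return k1, k2, theta_C, lam21

print(rashba_kinematics(0.2, 0.3))  # beta ~ 0.2 as estimated for InGaAs
\end{verbatim}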
\section{Introduction} We consider spatially periodic, incompressible viscous fluids governed by the Navier-Stokes equations \begin{eqnarray}\label{NSE} \partial_t {{u}}^\nu + \nabla \cdot ({{u}}^\nu\otimes {{u}}^\nu) \!\! &=& \!\! -\nabla p^\nu + \nu \Delta {{u}}^\nu + f^\nu,\\ \label{incom} \nabla \cdot u^\nu \!\!&=& \!\!0, \end{eqnarray} with solenoidal initial data $u^\nu|_{t=0}=u_0^\nu\in L^2({\mathbb T}^d)$ and body forcing $f^\nu \in L^2(0,T;L^2(\mathbb{T}^d))$. The parameter $\nu>0$ is the kinematic viscosity of the fluid. Upon nondimensionalization, it is replaced by the inverse Reynolds number $\mathsf{Re}^{-1} = \nu/\mathsf{ U}\mathsf{ L}$, where $\mathsf{ U}$ is a characteristic velocity and $\mathsf{ L}$ a characteristic length. If equations \eqref{NSE} and \eqref{incom} are understood as holding in the sense of distributions on $[0,T]\times \mathbb{T}^d$, then solutions of class $L^\infty(0,T;L^2({\mathbb T}^d))\cap L^2(0,T;H^1({\mathbb T}^d))$, known as Leray solutions \cite{L34}, exist for all time $T>0$ but are not known to be unique. A fundamental property of these solutions is that they satisfy a global energy inequality. This means that energy dissipation due to the viscosity of the fluid cannot exceed the difference in initial and final kinetic energies plus the energy input by forcing. This inequality can be restated as an equality by accounting for the dissipation arising from an inertial cascade to small scales caused by (hypothetical) singularities in the Leray weak solutions \cite{DR00}: \begin{eqnarray}\label{viscousDiss} \int_0^T\int_{\mathbb{T}^d} \! \varepsilon^\nu[{{u}}^\nu] \ {\rm d} { {x} } {\rm d} t = \frac{1}{2}\int_{\mathbb{T}^d} \! | {{u}}_0^\nu|^2{\rm d} { {x} } -\frac{1}{2}\int_{\mathbb{T}^d} \! |{{u}}^\nu(\cdot,T)|^2{\rm d} { {x} } +\int_0^T\int_{\mathbb{T}^d} \! {{u}}^\nu\cdot f^\nu\ {\rm d} { {x} } {\rm d} t, \end{eqnarray} for almost every $T\geq 0$, where the total energy dissipation rate is \begin{equation}\label{epsDef} \varepsilon^\nu[{{u}}^\nu] := \nu |\nabla {{u}}^\nu|^2+ D[{{u}}^\nu]. \end{equation} The dissipation due to possible singularities, $D[{{u}}^\nu]$, is a non-negative distribution (Radon measure). A consequence of \eqref{viscousDiss} is that the cumulative energy dissipation $\varepsilon^\nu[{{u}}^\nu]$ is bounded by norms of the data and forcing. A striking feature of high-$Re$ turbulence is that energy dissipation does not vanish in the limit of viscosity going to zero. Namely, there exists a number ${\varepsilon}>0$ independent of the viscosity $\nu$ such that \begin{equation}\label{zerothLaw} \int_0^T\int_{\mathbb{T}^d} \varepsilon^\nu[{{u}}^\nu] {\rm d} x {\rm d} t \geq {{\varepsilon}}>0. \end{equation} See, e.g. \cite{BO95,TBS02,KRS98,KIYIU03,KRS84,PKW02}. This phenomenon, known as anomalous dissipation, is so fundamental to our modern understanding of turbulence that it has been termed the ``zeroth law" \cite{F95}. It should be emphasized, however, that to this day no single mathematical example of \eqref{zerothLaw} is available, although there has been great progress in understanding similar behavior in some model problems such as 1D conservation laws and compressible flows \cite{KMS00,Dshock21,ED15, DE18}, shell models \cite{CFP09,MSV07,FGV16,AM16b}, and passive scalars \cite{BGK98,LJR02,DEIJ19,BBPS19}.
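In direct numerical simulations, the viscous part of \eqref{epsDef} is monitored across a sequence of decreasing viscosities in order to test \eqref{zerothLaw}. A minimal spectral sketch of this diagnostic (assuming Python with numpy and a velocity field sampled on a uniform periodic grid; an illustration of the definition, not of any particular dataset):
\begin{verbatim}
import numpy as np

def viscous_dissipation(u, nu, L=2*np.pi):
    """Instantaneous nu * <|grad u|^2> for a periodic field u of shape
    (d, N, N, N), computed spectrally via Parseval's identity."""
    N = u.shape[1]
    k = 2*np.pi*np.fft.fftfreq(N, d=L/N)              # angular wavenumbers
    k2 = sum(np.meshgrid(k**2, k**2, k**2, indexing='ij'))
    uh = np.fft.fftn(u, axes=(1, 2, 3))
    grad2 = np.sum(k2[None] * np.abs(uh)**2) / N**6   # space mean of |grad u|^2
    return nu * grad2
\end{verbatim}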
Despite its conjectural status from the point of view of mathematics, under the experimentally corroborated assumption that behavior \eqref{zerothLaw} occurs together with some heuristic assumptions on statistical properties (homogeneity, isotropy, monofractal scaling), Kolmogorov \cite{K41} made a remarkable prediction about the structure of turbulent velocity fields at high Reynolds number, namely that \begin{equation}\label{SFscaling} S_p^\|(\ell):= \langle (\delta_\ell u^\nu\cdot \hat{\ell} )^p\rangle \sim ({\varepsilon} |\ell|)^{p/3} \qquad \text{for} \qquad \ell_\nu \ll \ell \ll L \end{equation} where $\delta_\ell u^\nu(x,t):= u^\nu(x+\ell,t)-u^\nu(x,t)$, $\hat{\ell}= \ell/|\ell|$ and where $\langle\cdot \rangle$ represents some suitable combination of space, time and ensemble averages. The length $\ell_\nu$, known as the Kolmogorov scale, represents a small-scale dissipative cutoff and the integral scale $L$ represents the size of the largest eddy in the flow. The range $\ell_\nu \ll \ell \ll L$ over which the scaling \eqref{SFscaling} holds is known as the \emph{inertial range}. The objects $S_p^\|(\ell)$ are called $p$th-order longitudinal structure functions since they measure the (signed) variation of $p$th powers of the velocity increments in the direction of their separation vectors. See Figure \ref{figure1} for evidence of such persistent inertial-range scaling from numerical simulations of homogeneous isotropic Navier-Stokes turbulence \cite{ISY20}. \begin{figure} [h!] \includegraphics[width=0.47\linewidth]{fig1a} \hspace{3mm} \includegraphics[width=0.47\linewidth]{fig1b} \caption{Second-order \emph{longitudinal} (a) and \emph{absolute} (b) structure functions computed from direct numerical simulation of forced homogeneous isotropic turbulence with Taylor scale Reynolds numbers ($R_\lambda:= U \lambda/\nu$ where $U:= \langle |u|^2\rangle^{1/2} $ and $\lambda:= \langle |u|^2\rangle^{1/2}/ \langle |\nabla u|^2\rangle^{1/2}$ ) ranging from $R_\lambda =240 \ (\textcolor{ao(english)}{{\rm green}}), 650 \ (\textcolor{blue}{{\rm blue}}), 1300 \ (\textcolor{red}{{\rm red}})$. They exhibit scaling over an inertial range which extends as Reynolds increases. A best-fit power-law exponent $\zeta_2$ for the power-law $|r|^{\zeta_2}$ in this range is included. Data from \cite{ISY20}, Fig. 3(a). } \label{figure1} \end{figure} Onsager took a further step by recognizing that the behavior \eqref{zerothLaw} requires the fluid to develop singularities as $\nu\to 0$ in a mathematically precise sense. Specifically, for \eqref{zerothLaw} to occur on sequences of Navier-Stokes solutions, the $p$th order absolute structure functions \emph{cannot} satisfy a bound of the type \begin{equation}\label{SFbnd} S_p(\ell) = \int_0^T \int_{\mathbb{T}^d} |u^\nu(x+\ell,t)-u^\nu(x,t)|^p {\rm d} x {\rm d} t \leq C |\ell|^{\zeta_p} , \qquad \forall |\ell|\leq L, \end{equation} for any $\zeta_p>p/3$, $ p\geq 3$ and a constant $C$ independent of viscosity. This assertion, originally stated by Onsager \cite{O49} about weak solutions of the Euler equation and in the slightly more restrictive setting of H\"{o}lder spaces, has since been rigorously proved \cite{GLE94,CET94,CCFS}.
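In practice, bounds such as \eqref{SFbnd} are tested by measuring structure functions on gridded velocity data. A minimal estimator sketch (assuming Python with numpy; separations are taken along a single coordinate direction, whereas a production estimate would also average over directions, time and realizations):
\begin{verbatim}
import numpy as np

def structure_functions(u, p, shifts):
    """Signed longitudinal and absolute p-th order structure functions of a
    periodic field u of shape (d, N, ..., N); the separation l = s*dx*e_1 is
    along the first grid axis, so component 0 of delta_l u is longitudinal."""
    Sp_long, Sp_abs = [], []
    for s in shifts:
        du = np.roll(u, -s, axis=1) - u               # increment delta_l u
        Sp_long.append(np.mean(du[0]**p))             # (delta_l u . l_hat)^p
        Sp_abs.append(np.mean(np.sum(du**2, axis=0)**(p/2)))
    return np.array(Sp_long), np.array(Sp_abs)
\end{verbatim}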
In fact, energy dissipation must vanish as viscosity goes to zero for any family of solutions $\{u^\nu\}_{\nu>0}$ which are uniformly bounded in the Besov space\footnote{A vector field $v$ belongs to the Besov space $B_p^{\sigma,\infty}({\mathbb T}^d)$ for $p\geq 1,$ $\sigma\in (0,1)$ at time $t$ if and only if \begin{equation} \|v(\cdot,t)\|_{L^p}^p<C_0(t), \qquad S_p(\ell,t) \leq C_1(t)\left|\frac{\ell}{L}\right|^{\zeta_p}, \ \forall |\ell|\leq L \label{space-struc-fun} \end{equation} with $\zeta_p=\sigma p$, $L>0$ and $C_0,C_1\in L^1(0,T)$. Uniform boundedness of the family $\{u^\nu\}_{\nu>0}$ in $L^{p}(0,T;B_p^{\sigma,\infty}({\mathbb T}^d))$ is equivalent to the condition that coefficients $C_0(t),$ $C_1(t)$ independent of $\nu>0$ exist so that the bounds (\ref{space-struc-fun}) are satisfied for a.e. $t\in [0,T]$. } $L^p(0,T;B_p^{1/3+, \infty}(\mathbb{T}^d))$ for $p\geq 3$ \cite{DE19}. Thus, Kolmogorov's 1941 theory corresponds to turbulent solutions possessing the maximal degree of smoothness consistent with their ability to anomalously dissipate energy. It is well known that real fluids do not conform exactly to Kolmogorov's prediction.\footnote{However, weak Euler solutions with less regularity $u\in C^{1/3-}([0,T]\times\mathbb{T}^d)$ and which do not conserve energy have been constructed \cite{I18} after a long series of works \cite{Sch93,Shn97,LS12} and they can be made strictly (globally) dissipative \cite{BLSV17}. See \cite{BV19} for a very nice recent review of the subject. In a sense, these solutions exhibit exact K41-type behavior, although they are not known to arise as physical limits of Leray solutions of Navier-Stokes as required to make contact with real-world high-$Re$ flows.} Intermittency, or spottiness / non-uniformity of the velocity's roughness and the energy dissipation rate, results in deviations of the scaling exponents $\zeta_p^\|$ (and $\zeta_p$) from a linear behavior in $p$ \cite{B93,SSJ93,A84,S93,S94,S96}.\footnote{In fact, there is a rigorous connection between these two irregularities: Isett proved \cite{I17} that if $\zeta_p\geq p/3$ for some $p>3$, then the dissipation would have to take place on a full measure set. As experiments indicate that the dissipation takes place on a measure-zero set of (spatial) fractal dimension $\approx 2.87$ \cite{MS91}, this is consistent with velocity intermittency $\zeta_p<p/3$ for all $p>3$. On the other hand, velocity irregularity is not enough to sustain anomalous dissipation: Shvydkoy \cite{S09} proved that ``ordered singularities" with $\zeta_p= 1$ for $p\geq 1$ such as tangential velocity discontinuities across smooth co-dimension one hypersurfaces (regular vortex sheets) conserve energy. } Experiments do, however, indicate that for $p$ near three, the formula $\zeta_p\approx p/3$ approximately holds with $\zeta_2\in \frac{2}{3} + [0.03,0.06]$ and $\zeta_3\approx 1$. For example, in flow past a sphere $\zeta_2\approx 0.701$ is reported in \cite{B93} and $\zeta_2\approx 0.71$ in \cite{A84} (see Table 2 therein). Recent high-resolution numerical simulations report $\zeta_2\approx 0.725$ (see Figures \ref{figure1} and \ref{figure3}). Although there are slight variations, all these results conform to $\zeta_2 \gtrapprox 2/3$ and $\zeta_3 \lessapprox 1$. These observations motivate: \begin{quest} Why does high-Reynolds number turbulence seem to be as rough as required to support anomalous dissipation of energy but not much rougher?
\end{quest} In this direction, we note that Kolmogorov's prediction \eqref{SFscaling} in the case $p=3$ has a privileged status in that it can be derived (under certain technical assumptions, see Prop. \ref{prop45}) from the equations of motion \eqref{NSE}--\eqref{incom} rather than being merely a consequence of statistical hypotheses. Specifically, Kolmogorov established the ``4/5--law" (in dimension three) under only the assumption of anomalous dissipation \eqref{zerothLaw}: \begin{equation}\label{45law} S_3^\|(\ell) :=\langle (\delta_\ell u\cdot \hat{\ell} )^3\rangle \approx -\frac{12}{d(d+2)}{\varepsilon} \ell, \end{equation} which holds in the limit of large Reynolds number $\nu\to 0$ and subsequently small scales $\ell \to 0$. In practice, \eqref{45law} is observed to hold approximately over the inertial range; see Figure \ref{figure2} for evidence from \cite{ISY20}. The 4/5--law captures some aspects of the turbulent cascade: energy is transferred through scale by a cubic nonlinear flux term related to $S_3^\|(\ell)$ until it is removed, ${\varepsilon}>0$, from the system by the infinitesimal viscosity. \begin{figure} [h!] \centering \includegraphics[width=0.55\linewidth]{fig2b} \caption{The quantity $-S_3^{\|}(\ell)/(\frac{4}{5}\ell \langle {\varepsilon} \rangle)$ is plotted for Reynolds number $R_\lambda =1300$. The range of scales over which a value near unity is achieved visibly extends as the Reynolds number increases. Data from \cite{ISY20}, Fig. 1. } \label{figure2} \end{figure} In fact, the 4/5--law \eqref{45law} fixes the scaling exponent $p/3$ in \eqref{SFscaling} via the statistical assumption of monofractal scaling in Kolmogorov's theory. On its face, this indicates that the turbulent fluid velocity satisfies $\delta_\ell u \sim \ell^{1/3}$ and so is ``$1/3$ differentiable", at least in some averaged sense. More precisely, the fact that ${\varepsilon}$ is a priori bounded by initial data and forcing via equation \eqref{viscousDiss} ensures that $S_3^\|(\ell) /\ell$ is controlled uniformly for small $\ell$. This suggests that some kind of a priori regularity information -- a `turbulent energy estimate' -- might be extracted from the 4/5--law. Unfortunately, aside from justifying the assumptions necessary for a rigorous derivation of \eqref{45law}, there are two obstructions to realizing this hope: (i) the longitudinal structure function does not measure velocity variations in all directions, only those aligned with the separation vector, and (ii) the integrand is not sign-definite. In particular, although a certain skewness is implied by the 4/5--law (positivity of ${\varepsilon}$ means negativity of $\delta_\ell u \cdot \hat{\ell}$ in an averaged sense), it is conceivable that there are large fluctuations which cancel in the integral to yield \eqref{45law} but would disturb this relation if the increments were replaced by their absolute values. Both of these issues prevent the control on the third-order longitudinal structure function afforded by \eqref{45law} from being coercive, and so it seems that no direct information about the regularity of the velocity can be immediately extracted. Here we explore possible nonlinear mechanisms to extract regularity from the 4/5--law. Our results will be of a conditional nature, involving hypotheses which are unproved but which are corroborated by experiment and simulations of turbulence.
Roughly, our main assumptions (Hypothesis \ref{hyp45thsLaw} below) are that \begin{enumerate}[label=(\alph*)] \item the Kolmogorov $4/5$--ths law holds \item the Kolmogorov $4/3$--rds law holds \end{enumerate} as well as (Hypothesis \ref{hypothesis} below) \begin{enumerate}[label=(\alph*)] \item there exists an $\alpha\in [0,1)$ and $C>0$ independent of $\nu$ such that for all scales $\ell>0$ \begin{equation}\label{bigass*} - \int_0^T\int_{\mathbb{T}^d} \left\langle ( \delta_\ell u^\nu \cdot \hat{\ell})^3\right\rangle_{ang}{\rm d} x {\rm d} t \geq C |\ell|^\alpha \int_0^T\int_{\mathbb{T}^d} \left\langle | \delta_\ell u^\nu \cdot \hat{\ell}|^3\right\rangle_{ang}{\rm d} x {\rm d} t, \end{equation} \item there exists a $\beta\in [0,1)$ and $C'>0$ independent of $\nu$ such that for all scales $\ell>0$ \begin{equation}\label{bigass2*} - \int_0^T\int_{\mathbb{T}^d} \left\langle (\delta_\ell u^\nu \cdot \hat{\ell}) | \delta_\ell u^\nu|^2\right\rangle_{ang} {\rm d} x {\rm d} t \geq C' |\ell|^\beta \int_0^T\int_{\mathbb{T}^d} \left\langle | \delta_\ell u^\nu|^3\right\rangle_{ang}{\rm d} x {\rm d} t. \end{equation} \end{enumerate} The first hypotheses assert the validity of the Kolmogorov laws for weak solutions. Although anticipated to be true (the $4/5$--ths law is sometimes referred to as the only ``exact" law in turbulence), to this day it has not been unconditionally established (see \cite{E02,B18} for conditional validations and Figure \ref{figure2} for empirical evidence). The second hypotheses concern some effective alignment properties of the velocity increments with their separation vectors. A detailed discussion is deferred to the subsequent section. As stated above, they are slightly stronger than Hypothesis \ref{hypothesis} required in the proof, but may be more convenient to verify numerically or in experiment. In fact, there has already been direct evidence of the behavior \eqref{bigass*} with $\alpha\approx 0.03$ from experiment \cite{S93,S94,S96} (see extended discussion in Remark \ref{experiment} below). In practice, \eqref{bigass*} and \eqref{bigass2*} need only be checked over the finite range of scales in the inertial range $\ell_\nu \ll \ell \ll L$. We prove \begin{thm}\label{theorem} Let $u$ be a weak solution of the Euler equations of class $L^3(0,T;L^3(\mathbb{T}^d))$. Then \begin{enumerate}[label=(\alph*)] \item if Hypotheses \ref{hyp45thsLaw}(a) and \ref{hypothesis}(a) hold, then $u\in L^2(0,T;B_2^{(1-\alpha)/3,\infty}(\mathbb{T}^d))$, \item if Hypotheses \ref{hyp45thsLaw}(b) and \ref{hypothesis}(b) hold, then $u\in L^3(0,T;B_3^{(1-\beta)/3,\infty}(\mathbb{T}^d))$. \end{enumerate} \end{thm} Theorem \ref{theorem}(b) is not very surprising since Hypothesis \ref{hypothesis}(b) (nearly) assumes control on the absolute structure function by the longitudinal. We include it because there is experimental evidence of such control, and because $L^3$ seems to be the natural scale for regularization by the energy cascade. It also applies unconditionally to entropy solutions of the Burgers equation (see Remark \ref{Burgers}). On the other hand, Theorem \ref{theorem}(a) produces regularity in $L^2$ \emph{without} assuming that the full velocity increment can be controlled by the longitudinal component. This is due to the following Lemma, which shows that the dynamical law (Euler or Navier-Stokes equations) of the fluid can be used to deduce information on the full velocity increment from partial information on the behavior of the component in the direction of its separation vector.
\begin{lemma}\label{lemma1} A weak solution of the incompressible Euler equations is of class $ L^2(0,T;B_2^{\zeta_2^\|/2,\infty} (\mathbb{T}^d))$ with $\zeta_2^\|\in (0,2]$ if and only if the longitudinal structure function defined by \eqref{structurefunctions} satisfies \begin{equation}\label{s2bound} \int_0^T \left\langle S_2^\|(\ell)\right\rangle_{ang} {\rm d} t \lesssim |\ell|^{\zeta_2^\|}, \qquad \forall\ |\ell|>0. \end{equation} \end{lemma} \begin{rem}[Weak solutions as zero-viscosity limits] Lemma \ref{lemma1} has some implications for the weak inviscid limit. In particular, uniform boundedness of the family $\{u^\nu\}_{\nu>0}$ in $L^2(0,T;B_2^{\zeta_2^\|/2,\infty} (\mathbb{T}^d))$ is equivalent to a bound of the form \eqref{s2bound} independent of viscosity. In fact, as in Lemma 1 of \cite{DN18}, for Leray solutions $u^\nu \in L^\infty(0,T;L^2(\mathbb{T}^d))\cap L^2(0,T;H^1(\mathbb{T}^d))$ the condition \eqref{s2bound} is equivalent to \begin{equation}\label{s2boundfin} \int_0^T \left\langle S_2^\|(\ell;\nu)\right\rangle_{ang} {\rm d} t \lesssim |\ell|^{\zeta_2^\|}, \qquad \eta(\nu)\leq |\ell|\leq L, \end{equation} where $\eta(\nu)= \nu^{1/2(1-s)}$. Thus, a uniform scaling with any positive exponent of the longitudinal structure function in the ``inertial range" suffices to obtain weak Euler solutions in the inviscid limit (see Thm 1 of \cite{DN18}). We emphasize that the bound \eqref{s2boundfin} is not naively a compactness statement, although for equations structurally similar to Navier-Stokes, Lemma \ref{lemma1} transforms it into one. See Figure \ref{figure3} for empirical verification of Lemma \ref{lemma1} as it applies to inviscid limits of Navier-Stokes turbulence, relating the bounds (scalings) of the absolute and longitudinal structure functions. \end{rem} \begin{figure} [h!] \centering \includegraphics[width=0.50\linewidth]{fig3} \caption{ Best-fit exponents within the inertial range are plotted for absolute $\zeta_2$ and longitudinal $\zeta_2^{\|}$ structure functions. The K41 value of $\zeta_{2}^{k41}:=2/3$ is given for reference. The exponents $\zeta_2, \zeta_2^{\|}$ appear to saturate at $0.725$, which serves as the exponent which provides the uniform (in $Re$) bounds \eqref{SFbnd} and \eqref{s2bound}. Data from \cite{ISY20}, Fig. 3(b). } \label{figure3} \end{figure} \section{Kolmogorov 4/5--law and Alignment Hypotheses} We here recall a rigorous formulation of the Kolmogorov 4/5--law for weak solutions of the Euler equations arising as zero viscosity limits and introduce the precise Hypotheses under which our Theorem is established. As discussed above, Onsager conjectured \cite{O49} that sufficiently rough, dissipative weak solutions of the Euler equations are candidate descriptions of high-Reynolds number flows exhibiting the behavior \eqref{zerothLaw}. Onsager's vision of weak Euler solutions as a framework to study zero-viscosity limits is realized given sufficient compactness. Indeed, if a family of Leray solutions $\{u^\nu\}_{\nu>0}$ is precompact in $L^3(0,T;L^3(\mathbb{T}^d))$, then strong space-time $L^3$--limits exist $u^\nu\to u\in L^3(0,T;L^3(\mathbb{T}^d))$ and are weak solutions of the Euler equations.\footnote{Such compactness is implied if the family of Leray solutions $\{u^\nu\}_{\nu>0}$ is uniformly--in--$\nu$ bounded in $L^3(0,T;B_3^{s,\infty}(\mathbb{T}^d))$ \emph{for any} $s>0$ \cite{DE19,DN18}.
As discussed above, this is robustly observed in experiments and simulations \cite{B93,SSJ93,S93,S94,S96}.} In fact, this assumption also guarantees the existence of a limiting dissipation measure $\varepsilon[{{u}}]$, \begin{equation} {\mathcal{D}'}\mbox{-}\lim_{\nu\to 0} \varepsilon^\nu[{{u}}^\nu] = \varepsilon[{{u}}] \geq 0, \end{equation} where the limit is understood in the sense of distributions (it holds upon pairing with any smooth test function and is denoted by ${\mathcal{D}'}\mbox{-}\lim$). Furthermore, Duchon and Robert \cite{DR00} showed that any weak solution of the Euler equations $u$ of class $L^3(0,T;L^3(\mathbb{T}^d))$ satisfies a (weak) energy balance \begin{equation} \partial_t\left(\frac{1}{2}|u|^2\right)+\nabla\cdot\left[\left(\frac{1}{2}|u|^2+p\right)u\right] = -D[u] \label{Ebal} \end{equation} where the `inertial dissipation' $D[u]$ is defined by the distributional limit \begin{equation}\label{DuchonRobertAnom} D[u] : = {\mathcal{D}'}\mbox{-}\lim_{\ell \to 0} \frac{1}{4\ell} \int_{\mathbb{T}^d}\! (\nabla \varphi)_\ell(r) \cdot \delta_r{{u}}(x,t) |\delta_r{{u}}(x,t)|^2 {\rm d} r \end{equation} with $\varphi$ an arbitrary standard mollifier, $(\nabla \varphi)_\ell(r)= \ell^{-d} \nabla \varphi(r/\ell)$ and $\delta_r{{u}}(x,t)=u(x+r,t)- u(x,t)$. The distribution defined by \eqref{DuchonRobertAnom} represents the flux of energy into or out of the fluid due to a nonlinear inertial cascade to zero length-scale facilitated, as Onsager envisioned, by sufficiently irregular velocity fields. As a consequence of \eqref{Ebal}, the inertial dissipation matches onto the viscous dissipation anomaly \begin{equation} D[u] =\varepsilon[u], \label{flux-anom2} \end{equation} and the distribution $ D[u]$ must be non-negative and independent of the mollifier $\varphi$. This independence can be seen directly provided that $u$ has some additional spatial continuity, which is made precise by the following \begin{hyp}\label{hyp45thsLaw} Let $u$ be any weak solution of the Euler equation of class $L^3(0,T;L^3(\mathbb{T}^d))$. Suppose \begin{enumerate}[label=(\alph*)] \item the following version of the Kolmogorov $4/5$--law holds \begin{align}\label{lim45ths} {\mathcal{D}'}\mbox{-}\lim_{|\ell|\to 0} \frac{1}{|\ell|} \left\langle ( \delta_\ell u \cdot \hat{\ell})^3\right\rangle_{ang} &=D_{4/5}^*[u], \end{align} \item the following version of the Kolmogorov $4/3$--law holds \begin{align}\label{lim43rds} {\mathcal{D}'}\mbox{-}\lim_{|\ell|\to 0} \frac{1}{|\ell|}\left\langle (\delta_\ell u \cdot \hat{\ell}) | \delta_\ell u|^2\right\rangle_{ang} &=D_{4/3}^*[u], \end{align} \end{enumerate} where the angle average denotes $\left\langle f(\ell) \right\rangle_{ang} := \fint_{S^{d-1}} f(\ell) \ {\rm d} \omega(\hat{\ell})$ and $ {\rm d} \omega$ is the measure on solid angles. \end{hyp} With Hypothesis \ref{hyp45thsLaw} in hand, we see explicitly that $D[u]$ does not depend on the arbitrary mollifier and the standard versions of the Kolmogorov laws hold: \begin{prop}[\cite{DR00} \& \cite{E02}]\label{prop1}\label{prop45} Under Hypothesis \ref{hyp45thsLaw}, the following distributional equalities hold \begin{equation}\label{45thand43rd} D_{4/5}^*[u]= -\frac{12}{d(d+2)} D[u], \qquad D_{4/3}^*[u]= -\frac{4}{d} D[u]. \end{equation} \end{prop} See also discussion in \cite{D19}, which gives a Lagrangian interpretation of these distributions.
In combination with \eqref{flux-anom2}, which holds for strong vanishing viscosity limits, Proposition \ref{prop1} constitutes precise versions of the celebrated Kolmogorov 4/5 and 4/3--laws (upon setting $d=3$ in \eqref{45thand43rd}): \begin{equation}\label{viscouslimlaws} D_{4/5}^*[u]= -\frac{12}{d(d+2)} \varepsilon[u], \qquad D_{4/3}^*[u]= -\frac{4}{d} \varepsilon[u]. \end{equation} Equation \eqref{viscouslimlaws} is a rigorous version of \eqref{45law}. It should be noted that the above relationships are \emph{local} in that they hold in the sense of space-time distributions. Moreover, they show that the fluxes $D_{4/5}^*[u]$ and $D_{4/3}^*[u]$ are, in fact, non-positive as distributions, implying a form of skewness of the velocity field. Now, for any weak Euler solution of class $L^3(0,T;L^3(\mathbb{T}^d))$, the inertial dissipation $D[u]$ must be finite upon averaging in space-time. However, it need not have a definite sign. One consequence of \eqref{flux-anom2} is that the inertial dissipation inherits an a priori bound in terms of initial data and forcing, and is non-negative for high-Reynolds number flows exhibiting anomalous dissipation \eqref{zerothLaw}: \begin{equation}\label{fluxbnd} 0 < \int_0^T \int_{\mathbb{T}^d} D[u]\ {\rm d} x {\rm d} t <\infty. \end{equation} The balance \eqref{flux-anom2}, which leads to \eqref{fluxbnd}, is related to the direct energy cascade and can be interpreted as the statement that a nonlinear transfer of energy can be sustained even to infinitesimally small scales, where an infinitesimal viscosity can efficiently remove energy from the system. Our main thesis is that \eqref{fluxbnd} can provide a partial explanation for the discussed smoothness of the weak Euler solutions, provided that the solutions additionally possess certain structural properties. More precisely, we adopt the following assumption. \begin{hyp}\label{hypothesis} Let $u$ be a weak solution of Euler satisfying Hypothesis \ref{hyp45thsLaw}. Suppose in addition that \begin{enumerate}[label=(\alph*)] \item there exists an $\alpha\in [0,1)$ and $C:=C(d,T,u_0,f)>0$ such that \begin{equation}\label{bigass} -\int_0^T \int_{\mathbb{T}^d} D_{4/5}^*[u] {\rm d} x {\rm d} t\geq \limsup_{|\ell|\to 0} \frac{C}{ |\ell|^{1-\alpha}} \int_0^T\int_{\mathbb{T}^d} \left\langle | \delta_\ell u \cdot \hat{\ell}|^3\right\rangle_{ang}{\rm d} x {\rm d} t. \end{equation} \item there exists a $\beta\in [0,1)$ and $C':=C'(d,T,u_0,f)>0$ such that \begin{equation}\label{bigass2} -\int_0^T \int_{\mathbb{T}^d} D_{4/3}^*[u] {\rm d} x {\rm d} t \geq \limsup_{|\ell|\to 0} \frac{C'}{ |\ell|^{1-\beta}} \int_0^T \int_{\mathbb{T}^d} \left\langle | \delta_\ell u |^3 \right\rangle_{ang} {\rm d} x{\rm d} t. \end{equation} \end{enumerate} \end{hyp} \begin{rem} Clearly Hypothesis \ref{hypothesis}(b) is the stronger assumption since $ | \delta_\ell u \cdot \hat{\ell} |^3 \leq | \delta_\ell u |^3 $. In turn, we will show that it leads to a stronger form of regularization on the limit Euler solution. It should be noted that Hypothesis \ref{hypothesis}(a) is an assumption on the possible cancellations of the angle average rather than a brute-force control of a piece of the velocity increment by the full increment as in Hypothesis \ref{hypothesis}(b). Such statements are assumptions on the average \emph{anti-alignment} of velocity increments with their separation vectors. Additionally we remark that the choice of $D_{4/3}^*[u]$ in Hypothesis \ref{hypothesis}(b) was not important.
For our purpose (Theorem \ref{theorem}(b)), it is sufficient that it hold for either distribution $D_{4/3}^*[u]$ or $D_{4/5}^*[u]$ -- the important properties are the finiteness and positivity upon space-time averaging. In any case, if Hypothesis \ref{hyp45thsLaw} holds, with both limits \eqref{lim45ths} and \eqref{lim43rds} existing, then the distributions are interchangeable in the statements \eqref{bigass} and \eqref{bigass2}. \end{rem} \begin{rem}[Evidence of Anti--Alignment]\label{experiment} There is some experimental evidence \cite{S93,S94,S96} for the type of alignment assumed in Hypothesis \ref{hypothesis}(a), in particular that absolute differences do differ in scaling as in \eqref{bigass}, but very slightly. Specifically, for absolute third-order longitudinal structure functions, experiments find \cite{S94} that (a slightly strengthened version of) Hypothesis \ref{hypothesis}(a) holds with $\alpha \approx 0.03$, i.e. $-\langle (\delta_\ell u)^3\rangle \sim \ell^\alpha \langle |\delta_\ell u|^3\rangle$ in the inertial range; see Table 1 therein. This behavior $\alpha \ll 1$ has also been observed in a number of other experiments \cite{B93,SSJ93}. It should be noted that the experimental measurements are inferred from data of the velocity field along a one-dimensional cut, computing the longitudinal structure functions by appealing to Taylor's hypothesis, ergodicity and statistical isotropy and homogeneity. \end{rem} \begin{rem}[Stochastic Setting] Rigorously establishing alignment properties such as those appearing in Hypothesis \ref{hypothesis} seems to be a very difficult task. Moreover, even if true generically, it is quite conceivable that it is false pathwise in the setting of deterministic Navier-Stokes solutions due to non-generic events. Thus, such properties might be easier to establish in the stochastically forced or random-data setting. For instance, one might be able to prove the existence of statistically stationary, homogeneous isotropic martingale solutions. It is then plausible that \eqref{bigass} and \eqref{bigass2} hold for such solutions upon ensemble averaging. See \cite{B18} for some interesting developments concerning the validity of Hypothesis \ref{hyp45thsLaw}. \end{rem} The fact that inertial dissipation and the direct energy cascade can provide a regularization mechanism for the weak solutions is well understood in some model problems, such as 1-dimensional conservation laws \cite{GP11,J09,TT07} as well as the dyadic (Desnyansky--Novikov) shell model of turbulence \cite{CZ16}. For three-dimensional Navier-Stokes, no form of uniform fractional regularity or self-regularization has ever been rigorously established from first principles; however, experiments and simulations ubiquitously indicate that solutions do possess some form of these phenomena \cite{B93,SSJ93,S93,S94,S96}. In particular, as discussed above, measurements of multifractal structure function scaling exponents from over the last 60 years indicate that some turbulent solutions of Navier-Stokes enjoy some limited uniform fractional regularity in $L^p$ spaces. Under Hypotheses \ref{hyp45thsLaw} and \ref{hypothesis}, the latter being of a quantitative nature, we capture some of the smoothing effect of the nonlinearity and obtain a self-regularizing property of dissipative weak Euler solutions by Theorem \ref{theorem}.
Thus, the 4/5--law together with some alignment properties implies regularization for any such weak Euler solution with a finite positive inertial dissipation (in particular, vanishing viscosity limits). Of course, our Theorem \ref{theorem} is of a conditional nature in that it relies on two major Hypotheses \ref{hyp45thsLaw} and \ref{hypothesis}, both of which seem very difficult to prove a priori. However, the validity of these Hypotheses can be checked in Nature through controlled experiment and in direct numerical simulation (DNS) of the Navier-Stokes equations at high Reynolds number. In Remark \ref{experiment}, we recalled some existing experimental results concerning the validity of Hypothesis \ref{hypothesis}(a). We hope that our Hypotheses \ref{hyp45thsLaw} and \ref{hypothesis} will be subject to much further testing and scrutiny. \begin{rem}[Burgers Equation]\label{Burgers} The two Hypotheses \ref{hyp45thsLaw} and \ref{hypothesis} are true for entropy solutions of the 1-dimensional Burgers equation. In particular, the so-called 1/12th law states \begin{equation}\label{1/12thlaw} \lim_{|\ell|\to 0} \frac{1}{12}\int_0^T\int_{\mathbb{T}^d}\frac{1}{|\ell|} \left\langle (\delta_\ell u)^3\right\rangle_{ang} {\rm d} x {\rm d} t = -\int_0^T\int_{\mathbb{T}^d} \varepsilon(x,t) {\rm d} x{\rm d} t, \end{equation} where the one-dimensional angle average is defined by $ \left\langle (\delta_\ell u)^3\right\rangle_{ang} = \frac{1}{2}\left[ (\delta_\ell u)^3(|\ell|) + (\delta_\ell u)^3(-|\ell|)\right].$ Equation \eqref{1/12thlaw} is the analogue of the 4/5--law in the setting of Burgers and is rigorously established for vanishing viscosity limits. If there are countably many shocks, the following can be explicitly computed \begin{align*} \lim_{|\ell|\to 0} \int_0^T\int_{\mathbb{T}^d}\frac{1}{|\ell|} \left\langle (\delta_\ell u)^3\right\rangle_{ang} {\rm d} x {\rm d} t &= \int_0^T \sum_i (\Delta u_i(t))^3 \, {\rm d} t, \end{align*} where $\Delta u_i(t)$ is the jump at the $i$th shock. The Lax entropy condition is that $u^->u^+$ at shocks, or $\Delta u_i(t)<0$. This means that $\Delta u_i(t)= -|\Delta u_i(t)|$ and our Hypothesis \ref{hypothesis} holds with $\alpha=\beta = 0$. This is an example of perfect ``anti--alignment" (a numerical illustration is sketched at the end of the proofs section below). In accord with Theorem \ref{theorem}(b), we obtain $u\in L^3(0,T;B^{1/3,\infty}_3(\mathbb{T}))$. In light of the inclusion $(L^\infty \cap BV)(\mathbb{T}^d)\subset B_p^{1/p,\infty}(\mathbb{T}^d)$, this is consistent with the well-known BV regularity of entropy solutions of 1D hyperbolic conservation laws \cite{GP11,J09,TT07}. \end{rem} \section{Proofs} \begin{proof}[Proof of Lemma \ref{lemma1}] Let $s=\zeta_2^\| /2$. \textbf{(1) $\implies$ (2)}. This direction is trivial since \begin{equation} \left\langle S_2^\|(\ell)\right\rangle_{ang} \leq \sup_{|\ell'|\leq |\ell|} \int_{\mathbb{T}^d} |\delta_{\ell'} u\cdot \hat{\ell'}|^2 {\rm d} x \leq \sup_{|\ell'|\leq |\ell|} S_2(\ell')\leq C |\ell|^{2s} \end{equation} where the Besov regularity $u\in L^2(0,T;B_2^{s,\infty} (\mathbb{T}^d))$ was used in the final inequality.\\ \noindent \textbf{(2) $\implies$ (1)}. Let $u\in L^2(0,T;L^2(\mathbb{T}^d))$ be a weak solution of the incompressible Euler equations: \begin{equation}\label{weakform} \int_0^T \int_{\mathbb{T}^d} u \partial_t \varphi \ {\rm d} x{\rm d} t + \int_0^T \int_{\mathbb{T}^d} u\otimes u : \nabla \varphi \ {\rm d} x{\rm d} t + \int_0^T \int_{\mathbb{T}^d} f\cdot \varphi \ {\rm d} x{\rm d} t =0, \end{equation} where $\varphi := \varphi(x,t) \in C_0^\infty([0,T]\times \mathbb{T}^d)$ is a compactly supported divergence-free test function and $f$ is the body forcing.
Define $\varphi_\ell:=\varphi(x-\ell)$. Introducing the increment field $\delta_\ell u := u(x+\ell)-u(x)$ and choosing the test function $\varphi_\ell- \varphi$ in \eqref{weakform} shows that \begin{equation}\label{incrementweak} (\partial_t + u \cdot \nabla )\delta_\ell u= - \nabla_x \delta_\ell p - \delta_\ell u \cdot \nabla_\ell \delta_\ell u +\delta_\ell f \end{equation} holds in the sense of distributions. To obtain this, we denote $u'=u(x+\ell)$ and $u=u(x)$, and derive a weak form of the `doubling variables' identity \begin{align*} \int_{\mathbb{T}^d} (u\otimes u : \nabla \varphi_\ell - u\otimes u : \nabla \varphi) \ {\rm d} x &= \int_{\mathbb{T}^d} ( \delta_\ell u \otimes u' - u\otimes \delta_\ell u) : \nabla \varphi \ {\rm d} x\\ &= \int_{\mathbb{T}^d} ( \delta_{-\ell} u \otimes u :\nabla \varphi_\ell - u\otimes \delta_\ell u: \nabla \varphi)\ {\rm d} x \\ &= \int_{\mathbb{T}^d} ( \delta_{-\ell} u \otimes u :\nabla_\ell \varphi_\ell - u\otimes \delta_\ell u: \nabla \varphi)\ {\rm d} x \\ &= \nabla_\ell\cdot \int_{\mathbb{T}^d} \delta_{\ell} u \otimes u' \cdot \varphi\ {\rm d} x - \int_{\mathbb{T}^d} u\otimes \delta_\ell u: \nabla \varphi\ {\rm d} x \\ &= \nabla_\ell\cdot \int_{\mathbb{T}^d} \delta_{\ell} u \otimes \delta_{\ell} u \cdot \varphi\ {\rm d} x - \int_{\mathbb{T}^d} u\otimes \delta_\ell u: \nabla \varphi\ {\rm d} x, \end{align*} where we used the fact that $u$ is distributionally divergence-free. This establishes that \eqref{incrementweak} holds in the sense of distributions. Dotting the above equation with $\ell$, we find \begin{equation} (\partial_t + u \cdot \nabla)(\delta_\ell u\cdot \ell)= - \ell \cdot\nabla_x \delta_\ell p - \delta_\ell u \cdot \nabla_\ell (\delta_\ell u\cdot \ell)+ |\delta_\ell u|^2 + \delta_\ell f\cdot \ell. \end{equation} Integrating this balance over the torus, we have \begin{equation}\label{intbalance} \int_{\mathbb{T}^d} |\delta_\ell u|^2 {\rm d} x=- \nabla_\ell \cdot \int_{\mathbb{T}^d}\delta_\ell u (\delta_\ell u\cdot \ell)\ {\rm d} x, \end{equation} where we used only the periodicity and incompressibility of the solution fields. Averaging (in the separation vector $\ell$) equation \eqref{intbalance} over a ball of radius $L$ centered at zero, we find \begin{equation}\label{ident1} \fint_{B_L(0)} \int_{\mathbb{T}^d} |\delta_\ell u|^2 {\rm d} x{\rm d} \ell= \fint_{S^{d-1}} \left.\left(\int_{\mathbb{T}^d} |\delta_\ell u\cdot\hat{\ell}|^2\ {\rm d} x\right)\right|_{|\ell|=L} {\rm d} \omega(\hat{\ell}) \end{equation} where $S^{d-1}$ is the unit sphere in $d$-dimensions, $ {\rm d} \omega$ is the measure on solid angles (unit Haar measure on $S^{d-1}$), and $\fint_{A}:= \frac{1}{|A|} \int_{A}$. For $p\geq 1$, we define the absolute and longitudinal structure functions to be \begin{equation}\label{structurefunctions} S_p(\ell):=\int_{\mathbb{T}^d} |\delta_\ell u|^p {\rm d} x, \qquad S_p^\|(\ell):=\int_{\mathbb{T}^d} (\delta_\ell u\cdot \hat{\ell})^p {\rm d} x. \end{equation} Introducing the angle-averaging operation \begin{equation} \left\langle f(\ell) \right\rangle_{ang} := \fint_{S^{d-1}} f(\ell) \ {\rm d} \omega(\hat{\ell}), \end{equation} from \eqref{ident1} we deduce the identity \begin{equation}\label{S2ident} \fint_{B_L(0)}S_2(\ell)\ {\rm d} \ell = \left\langle S_2^\|(L\hat{\ell})\right\rangle_{ang}.
\end{equation} \begin{rem} In general, taking the tensor product with $\ell$ yields \begin{equation}\nonumber (\partial_t + u \cdot \nabla_x)(\delta_\ell u\otimes \ell) + \delta_\ell u \cdot \nabla_\ell (\delta_\ell u\otimes \ell) = \delta_\ell u\otimes \delta_\ell u - \ell \otimes \nabla_x \delta_\ell p. \end{equation} Thus, integrating over space and separation vectors yields the tensorial identity \begin{equation}\label{zeroident} \left\langle \int_{\mathbb{T}^d} (\delta_\ell u\cdot \hat{\ell}) (\delta_\ell u\otimes \hat{\ell}) {\rm d} x\right \rangle_{ang}= \int_{B_\ell(0)} \int_{\mathbb{T}^d} (\delta_{\ell'} u\otimes\delta_{\ell'} u) {\rm d} x {\rm d} \ell'. \end{equation} Taking the trace gives the scalar identity \eqref{S2ident}. We also obtain, as a special case in 3D, \begin{equation}\label{zeroident3d} \left\langle \int_{\mathbb{T}^d} (\delta_\ell u\cdot \hat{\ell}) (\delta_\ell u\times \hat{\ell}) {\rm d} x\right \rangle_{ang}=0. \end{equation} \end{rem} Using the identity \eqref{S2ident} together with the assumed bound \eqref{s2bound}, we have \begin{equation} \fint_{B_L(0)}S_2(\ell)\ {\rm d} \ell \leq C L^{2s}. \end{equation} This inequality holds for all $L> 0$. Let $f(\ell):= \| \delta_\ell u\|_{L^2}$ and note that for any $\ell'\in \mathbb{T}^d$, we have \begin{align*} |f(\ell)-f(\ell')| &=| \| \delta_\ell u\|_{L^2}-\| \delta_{\ell'} u\|_{L^2}| \leq \|\delta_\ell u - \delta_{\ell'} u\|_{L^2}\\ &= \sqrt{\int_{\mathbb{T}^d} | u(x+\ell) - u(x+ \ell')|^2 {\rm d} x} = \| \delta_{\ell'-\ell} u\|_{L^2}. \end{align*} Thus we have the bound \begin{equation} \fint_{B_L(\ell')} |f(\ell)- f(\ell')|^2\ {\rm d} \ell \leq \fint_{B_L(\ell')} |f(\ell'-\ell)|^2\ {\rm d} \ell = \fint_{B_L(0)} |f(\ell)|^2\ {\rm d} \ell. \end{equation} Using Jensen's inequality for the first step, we conclude that for any $\ell'\in \mathbb{T}^d$ and $L>0$, \begin{equation} \left( \fint_{B_L(\ell')} |f(\ell)- f(\ell')|\ {\rm d} \ell\right)^2\leq \fint_{B_L(\ell')} |f(\ell)- f(\ell')|^2\ {\rm d} \ell\leq C L^{2s}. \end{equation} We finally appeal to the following basic fact. \begin{lemma}\label{lemHolder} Assume that there exists $C>0$ and $\alpha \in (0,1]$ such that for every $x_0\in \mathbb{T}^d$ and $r>0$, \begin{equation} \frac{1}{|B_r(x_0)|} \int_{B_r(x_0)} |f(x)- f(x_0)| \ {\rm d} x < C r^\alpha. \end{equation} Then $f$ is H\"{o}lder continuous with exponent $\alpha$. \end{lemma} \begin{proof} Let $x_0, y_0\in \mathbb{T} ^d$ and set $r=|x_0-y_0|$. Then $B_r(x_0)\subset B_{2r}(x_0)$ and $B_r(x_0)\subset B_{2r}(y_0)$. Thus, \[ \begin{aligned} 2C(2r)^\alpha &\ge \frac{1}{|B_{2r}(x_0)|} \int_{B_{2r}(x_0)} |f(x)- f(x_0)|dx+\frac{1}{|B_{2r}(y_0)|} \int_{B_{2r}(y_0)} |f(x)- f(y_0)|dx\\ &= (c_0(2r)^d)^{-1} \left\{\int_{B_{2r}(x_0)} |f(x)- f(x_0)|dx+ \int_{B_{2r}(y_0)} |f(x)- f(y_0)|dx\right\}\\ &\ge (c_0 (2r)^d)^{-1}\int_{B_r(x_0)} |f(x)- f(x_0)|+|f(x)-f(y_0)|dx\\ &\ge (c_0 (2r)^d)^{-1}\int_{B_r(x_0)} |f(x_0)-f(y_0)|dx=2^{-d}|f(x_0)-f(y_0)| \end{aligned} \] where $c_0$ denotes the volume of the unit ball. It follows that \[ |f(x_0)-f(y_0)|\le 2^{d+1+\alpha}C|x_0-y_0|^\alpha \le 2^{d+2}C|x_0-y_0|^\alpha \] for any $x_0, y_0\in \mathbb{T} ^d$. This completes the proof. \end{proof} \noindent Lemma \ref{lemHolder}, applied to $f(\ell)=\| \delta_\ell u\|_{L^2}$, shows that $f$ is H\"{o}lder continuous in $\ell$ with exponent $s$; since $f(0)=0$, this gives $S_2(\ell)\lesssim |\ell|^{2s}$ and \begin{equation}\nonumber \|u\|_{L^2(0,T;L^2(\mathbb{T}^d))}<C_0, \ \ \ \| \delta_\ell u\|_{L^2(0,T;L^2(\mathbb{T}^d))} \leq C_1 |\ell|^s \ \ \implies \ \ \|u\|_{L^2(0,T; B_2^{s,\infty}(\mathbb{T}^d))} \leq C_2.
\end{equation} \end{proof} \begin{proof}[Proof of Theorem \ref{theorem}(a)] By Jensen's inequality, we have \begin{align} \left(\frac{1}{|\ell|^{2(1-\alpha)/3}}\int_0^T\left\langle S_2^\|(\ell)\right\rangle_{ang} {\rm d} t\right)^{3/2} &\leq \frac{1}{|\ell|^{1-\alpha}} \int_0^T \left\langle\int_{\mathbb{T}^d} | \delta_{\ell} u \cdot \hat{\ell}|^3{\rm d} x\right\rangle_{ang} {\rm d} t. \end{align} Now, for any $\epsilon>0$, we can choose $\ell_\epsilon$ sufficiently small such that for all $|\ell|\leq \ell_\epsilon$, we have \begin{align}\nonumber \left(\frac{1}{|\ell|^{2(1-\alpha)/3}}\int_0^T\left\langle S_2^\|(\ell)\right\rangle_{ang} {\rm d} t\right)^{3/2} & \leq \sup_{|\ell|\leq \ell_\epsilon} \frac{1}{ |\ell|^{1-\alpha}} \int_0^T\int_{\mathbb{T}^d} \left\langle | \delta_\ell u \cdot \hat{\ell}|^3\right\rangle_{ang}{\rm d} x {\rm d} t \\ \nonumber &\leq \limsup_{|\ell|\to 0} \frac{1}{ |\ell|^{1-\alpha}} \int_0^T\int_{\mathbb{T}^d} \left\langle | \delta_\ell u \cdot \hat{\ell}|^3\right\rangle_{ang}{\rm d} x {\rm d} t +\epsilon \\ & \leq -\int_0^T \int_{\mathbb{T}^d} D_{4/5}^*[u] {\rm d} x {\rm d} t + \epsilon, \end{align} where the final inequality follows from Hypothesis \ref{hypothesis}(a). It then follows from Hypothesis \ref{hyp45thsLaw} and Proposition \ref{prop1} that \begin{align}\nonumber \left(\frac{1}{|\ell|^{2(1-\alpha)/3}}\int_0^T\left\langle S_2^\|(\ell)\right\rangle_{ang} {\rm d} t\right)^{3/2} & \leq \frac{12}{d(d+2)}\int_0^T \int_{\mathbb{T}^d} D[u] {\rm d} x {\rm d} t + \epsilon. \end{align} Since $f(x)= x^{2/3}$ is monotone increasing for $x>0$, we have for all $ \ell$ sufficiently small that \begin{equation} \int_0^T\left\langle S_2^\|(\ell)\right\rangle_{ang} {\rm d} t \leq C_0 |\ell|^{2(1-\alpha)/3}, \end{equation} where $C_0:=C_0(d,T, u_0, f)$ depends on the magnitude of the inertial energy dissipation (correspondingly, of the anomalous viscous dissipation if the Euler solution is obtained as a vanishing viscosity limit). The claimed regularity follows by applying Lemma \ref{lemma1}. \end{proof} \begin{proof}[Proof of Theorem \ref{theorem}(b)] We employ the following definition of Besov spaces. Fix $\ell>0$ and consider the ``truncated ball'' $B_T:=\{n:1/2<|n|<1\}$. The truncated-ball mean is then \begin{equation} ({ \mathcal{V}}_\ell f)(x):= \frac{1}{|B_T|} \int_{B_T} \!\! \delta f(\ell n;x) \ {\rm d} n, \end{equation} where $\delta f(\ell n;x):= f(x+\ell n)-f(x)$. The norm for the Besov space $B_p^{s,\infty}(\mathbb{T}^d)$ can then be defined as \begin{equation} \|f\|_{B_p^{s,\infty}} := \|f\|_p + |f|_{B_p^{s,\infty}}, \qquad |f|_{B_p^{s,\infty}} := \sup_{N\geq 0} 2^{sN} \| { \mathcal{V}}_{2^{-N}} f\|_p. \end{equation} See Appendix C of \cite{E96} and Section 2.5.11--12 of \cite{HT83}. Our aim is to obtain a non-trivial $L^3$--integrable upper bound for the Besov norm $ \|u(t)\|_{B_3^{s,\infty}}$ for some $s>0$. By assumption $u\in L^3(0,T;L^3({\mathbb T}^d))$, so we need only find an integrable upper bound for the Besov semi-norm $|u(t)|_{B_3^{s,\infty}}^3$. First, by Jensen's inequality, the $p$th power of the ball-average is bounded by the ball-average of the $p$th power: \begin{equation}\label{jensen} \| { \mathcal{V}}_{\ell} f\|_p^p \leq \frac{1}{|B_T|} \int_{B_T} \| \delta f(\ell n;\cdot)\|_p^p \ {\rm d} n, \qquad p\geq 1.
\end{equation} Thus, letting $\ell_N:= 2^{-N}$, we have \begin{eqnarray} |u(t)|_{B_3^{s,\infty}}^3&=& \ \Big(\sup_{N\geq 0} \ell_N^{-s} \| { \mathcal{V}}_{\ell_N} u\|_3\Big)^3 \nonumber \\ &\leq& \ \sup_{N\geq 0} \ell_N^{-3s}\frac{1}{|B_T|} \int_{B_T} \| \delta u(\ell_N n;\cdot,t)\|_3^3 \ {\rm d} n\nonumber\\ &=& \ \sup_{N\geq 0} \ell_N^{-3s} \frac{\ell_N^{-d}}{|B_T|} \int_{\ell_{N+1}}^{\ell_N} {\rm d} \rho\ \rho^{d-1} \int_{S^{d-1}} \|\delta u(\rho \hat{r};\cdot,t)\|_3^3 \ {\rm d}\omega(\hat{r}) \end{eqnarray} where $r = \ell_N n$, $\rho=|r|$, $\hat{r} = r/\rho$, and where we have used the upper bound \eqref{jensen}. Fix any scale $\ell_0$ and split into small and large scales: \begin{eqnarray}\nonumber |u(t)|_{B_3^{s,\infty}}^3 &\leq & \sup_{|\ell| \leq \ell_0} |\ell|^{-3s } \int_{\mathbb{T}^d} \left\langle | \delta_\ell u(x,t) |^3\right\rangle_{ang}{\rm d} x + \sup_{|\ell| > \ell_0} |\ell|^{-3s } \int_{\mathbb{T}^d} \left\langle | \delta_\ell u(x,t) |^3\right\rangle_{ang}{\rm d} x \\ &\leq & \sup_{|\ell| \leq \ell_0} |\ell|^{-3s } \int_{\mathbb{T}^d} \left\langle | \delta_\ell u(x,t) |^3\right\rangle_{ang}{\rm d} x + 2\ell_0^{-3s } \| u(t)\|_{L^3}^3. \end{eqnarray} For $\beta\in (0,1)$, we have \begin{eqnarray} \nonumber \int_0^T |u(t)|_{B_3^{s,\infty}}^3{\rm d} t \leq \sup_{|\ell| \leq \ell_0} |\ell|^{(1-\beta)-3s } \left(\frac{1}{|\ell|^{1-\beta}} \int_0^T\int_{\mathbb{T}^d} \left\langle | \delta_\ell u(x,t) |^3\right\rangle_{ang}{\rm d} x {\rm d} t\right)+ 2\ell_0^{-3s } \| u\|_{L^3(0,T;L^3(\mathbb{T}^d))}^3. \end{eqnarray} It follows that for any $s\leq (1-\beta)/3$ and any $\epsilon>0$, there exists an $\ell_\epsilon$ such that for all $\ell_0\leq \ell_\epsilon$, \begin{eqnarray} \nonumber \int_0^T |u(t)|_{B_3^{s,\infty}}^3{\rm d} t &\leq& \ell_0^{(1-\beta)-3s } \sup_{|\ell| \leq \ell_0} \frac{1}{|\ell|^{1-\beta}} \int_0^T\int_{\mathbb{T}^d} \left\langle | \delta_\ell u(x,t) |^3\right\rangle_{ang}{\rm d} x {\rm d} t + 2\ell_0^{-3s } \| u\|_{L^3(0,T;L^3(\mathbb{T}^d))}^3 \\\nonumber &\leq& \ell_0^{-3s } \left( \limsup_{|\ell| \to 0} \frac{1}{|\ell|^{1-\beta}} \int_0^T\int_{\mathbb{T}^d} \left\langle | \delta_\ell u(x,t) |^3\right\rangle_{ang}{\rm d} x {\rm d} t+ \epsilon + 2 \| u\|_{L^3(0,T;L^3(\mathbb{T}^d))}^3\right)\\ \nonumber &\leq& \ell_0^{-3s } \left(-\int_0^T \int_{\mathbb{T}^d} D_{4/3}^*[u] {\rm d} x {\rm d} t +\epsilon + 2 \| u\|_{L^3(0,T;L^3(\mathbb{T}^d))}^3 \right)\\ &=& \ell_0^{-3s }\left( \frac{4}{d} \int_0^T \int_{\mathbb{T}^d} D[u] {\rm d} x {\rm d} t +\epsilon + 2 \| u\|_{L^3(0,T;L^3(\mathbb{T}^d))}^3 \right) \end{eqnarray} where we used the fact that the distributional limit exists by Hypothesis \ref{hyp45thsLaw}, and employed our main Hypothesis \ref{hypothesis}(b) in passing to the second-to-last line. We remark that the bound on the Besov semi-norm depends on the magnitude of the inertial (or anomalous) dissipation. \end{proof} \subsection*{Acknowledgments} I am enormously grateful to K. Iyer for providing Figures \ref{figure1}, \ref{figure2} and \ref{figure3}, as well as for enlightening discussions. I would also like to thank P. K. Yeung for the DNS data, which used the Extreme Science and Engineering Discovery Environment (XSEDE) resource Stampede2 at the Texas Advanced Computing Center through allocation PHY200084. I am grateful to G. L. Eyink for many useful conversations; it was he who suggested that there should be a connection between the 4/5--law and self-regularization. I also thank P. Constantin, H. Q. Nguyen and V. Vicol for insightful discussions.
This research was partially supported by NSF-DMS grant 2106233.
\section{Introduction} \label{sec:Intro} The lure of Grand Unified Theories (GUTs) is that the Standard Model (SM) gauge symmetry, $SU(3)_c \times SU(2)_L \times U(1)_Y$, is unified into a single gauge group, so that the three SM gauge interactions originate from a single theory. Accordingly, the SM quarks and leptons are unified into certain representations of the GUT gauge group, leading to the quantization of their electric charges \cite{GUT}. Supersymmetric (SUSY) GUT models have been commonly studied in the literature, motivated by the fact that the three SM gauge couplings are successfully unified at the GUT scale $M_{GUT} \simeq 10^{16}$ GeV with weak-scale SUSY \cite{SUSYGUT}. However, there is no evidence of weak-scale SUSY in the current data of the Large Hadron Collider experiments. This fact has driven renewed interest in non-SUSY GUTs in recent years. Among GUT models, the $SO(10)$ framework is arguably one of the most appealing scenarios \cite{nonSUSYGUT}, where the SM fermions in each generation are nicely unified into a single ${\bf 16}$ representation of the $SO(10)$ gauge group along with a SM singlet right-handed neutrino (RHN). In the non-SUSY $SO(10)$ GUT framework, we may consider the spontaneous symmetry breaking (SSB) of $SO(10)$ in two steps down to the SM gauge group \cite{S010nonSUSY1, S010nonSUSY}: For example, the $SO(10)$ group is first broken down to the Pati-Salam (PS) group $SU(4)_c \times SU(2)_L\times SU(2)_R$ at $M_{GUT}\simeq 10^{16}$ GeV. Next, the PS gauge group is broken to the SM gauge group $SU(3)_c \times SU(2)_L \times U(1)_Y$ at an intermediate scale $M_I \simeq 10^{11}$ GeV. Associated with the PS SSB, Majorana masses for the RHNs are generated, which play a key role in the seesaw mechanism \cite{Seesaw} for generating the light SM neutrino masses; an RHN mass at the intermediate scale is natural for the seesaw mechanism. Leptogenesis \cite{leptogenesis1} is a very simple mechanism to generate the observed baryon asymmetry through the CP-violating out-of-equilibrium decay of the Majorana RHNs, and this scenario is automatically implemented in the $SO(10)$ GUT framework. Using a minimal set of Higgs fields, one ${\bf 10}$-plet and one ${\bf 126}$-plet, realistic fermion mass matrices can be reproduced (see, for example, Ref.~\cite{S010nonSUSY}). In general, GUT SSB produces stable topological defects such as monopoles and strings \cite{Tdefects, monopole1, monopole2, monopole3}. In the above example of two-step $SO(10)$ breaking, both the $SO(10)$ and the PS SSBs produce monopoles with masses of the order of the corresponding SSB scales \cite{monopole2}. Since such super-heavy monopoles would be over-abundant before Big Bang Nucleosynthesis \cite{monopole3}, a mechanism to significantly reduce the monopole density is necessary for reproducing our universe. One of the original motivations of the cosmological inflation scenario was to solve this monopole problem by diluting the monopole density \cite{Guth}. To sufficiently dilute the monopoles, inflation must take place after the SSB or, equivalently, the Hubble parameter during inflation ($H_{inf}$) must be smaller than the SSB scale. For well-known simple inflation scenarios, such as inflation with a Coleman-Weinberg type potential \cite{Shafi:2006cs} and quartic inflation with a non-minimal gravitational coupling, we estimate $H_{inf} \simeq 10^{13-14}$ GeV \cite{Okada:2010jf, Okada:2014lxa}.
Although such inflationary scenarios can inflate away the GUT-scale monopoles, the intermediate-scale monopoles still survive if $M_I < H_{inf}$ \cite{Senoguz:2015lba}. Hence, we need a ``low-scale inflation scenario'' with $H_{inf} < M_I$ to dilute the intermediate-scale monopoles. Hybrid inflation \cite{Linde:1993cn} is a well-known example of a low-scale inflation scenario, where the introduction of multiple scalar fields is crucial for realizing inflation. Another interesting example is the so-called inflection-point inflation (IPI) scenario, which can be realized with a single scalar field. In IPI, the inflaton potential exhibits an approximate inflection-point, and slow-roll inflation occurs in its vicinity. In Ref.~\cite{IPI}, a successful IPI scenario has been proposed in the context of a $U(1)$ Higgs-Yukawa model where the Higgs field is identified with the inflaton field. In that model, the renormalization group (RG) improved effective potential of the inflaton/Higgs field realizes an approximate inflection-point at a scale $M$ if the running inflaton/Higgs quartic coupling $\lambda$ exhibits a minimum with an almost vanishing value at $M$, namely $\lambda (\phi =M) \simeq 0$ and $\beta_\lambda (\phi = M)\simeq 0$ for its beta-function. To satisfy these conditions, it is crucial for the inflaton field to have both gauge and Yukawa interactions, and the gauge and Yukawa couplings at $\phi = M$ must be balanced to achieve $\beta_\lambda (\phi = M)\simeq 0$. The successful IPI scenario of Ref.~\cite{IPI} leads to an upper bound, $H_{\rm inf} \lesssim 10^{10}$ GeV. In this paper, we propose a simple non-SUSY GUT model based on the gauge group $SO(10) \times U(1)_\psi$. In addition to the $SO(10)$ ${\bf 16}$-plet SM fermions with a $U(1)_\psi$ charge of $+1$, the model includes three generations of $SO(10)$ ${\bf 10}$-plet and $SO(10)$ singlet fermions with $U(1)_\psi$ charges $-2$ and $+4$, respectively. Each generation of these fermions can be embedded into a ${\bf 27}$ representation of the $E_6$ group, and hence our model is free from all the gauge and mixed gauge-gravitational anomalies. As previously mentioned, we consider a two-step SSB of the $SO(10)$ gauge group down to the SM gauge symmetry, with the PS gauge symmetry appearing at an intermediate scale. The $U(1)_\psi$ symmetry is also broken at the intermediate scale by the vacuum expectation value (VEV) of an $SO(10)$ singlet Higgs field. This field is identified with the inflaton, which drives IPI in our model, such that all monopoles associated with the GUT and PS SSBs are adequately diluted. After inflation, the inflaton decays into SM particles to reheat the universe. We show that a suitable parameter choice yields a reheating temperature smaller than the PS SSB scale but large enough to thermalize the RHNs for successful baryogenesis via leptogenesis \cite{leptogenesis2}. The $SO(10)$ group has a center ${\bf Z}_4$ with a subgroup ${\bf Z}_2$. In our model, all the Higgs representations are ${\bf Z}_2$-even; hence the ${\bf Z}_2$ symmetry remains unbroken even after the SSB down to the SM \cite{SO10Z2}, and as a result the lightest mass eigenstate among the electrically neutral components of the new ${\bf 10}$-plet and singlet fermions serves as the dark matter (DM) in our universe (for an axion DM scenario in the context of $SO(10)$ models, see, for example, Ref.~\cite{S010nonSUSY}).
If the DM particle is mostly composed of an $SO(10)$ singlet fermion, it communicates with the SM particles mainly through SM Higgs portal interactions. We identify the allowed parameter region for this Higgs-portal fermion DM scenario, which will be fully explored by direct DM detection experiments in the near future. In addition to the IPI and DM scenarios, we consider other phenomenological constraints and theoretical consistency requirements, such as successful gauge coupling unification, the proton decay constraint, and the stability of the effective SM Higgs potential. We identify a model parameter space for which our GUT model is phenomenologically viable and theoretically consistent. The rest of this paper is organized as follows. In the next section, we define our $SO(10) \times U(1)_\psi$ GUT model. In Sec.~\ref{sec:S010Inf}, we first give a brief review of the IPI scenario and then implement the IPI in our model; we conclude the section with an evaluation of the reheating temperature after inflation. In Sec.~\ref{sec:GCUandPD}, we examine gauge coupling unification in the presence of the new fermions and Higgs fields, and we investigate its consistency with the current lower bound on the proton lifetime. In Sec.~\ref{sec:DM}, we discuss the DM scenario in our model and identify a parameter region that reproduces the observed DM relic density while remaining consistent with the current direct DM detection bound. In Sec.~\ref{sec:HiggsStab}, we examine the stability of the effective SM Higgs potential and find a parameter region which can stabilize the SM Higgs potential up to the PS SSB scale. Our conclusions are summarized in Sec.~\ref{sec:conc}. \section{$SO(10) \times U(1)_\psi$} \label{sec:model} \begin{table}[t] \begin{center} \begin{tabular}{ll|c|c|c} \textbf{} & & $SO(10)$ & $U(1)_\psi$ & ${\bf Z}_4$ \\ \hline \multicolumn{1}{l|}{\multirow{3}{*}{\textbf{Fermions}}} &$ { 16}_{SM}^{(i)} $ & {\bf 16} & + 1 & $\omega$ \\ \multicolumn{1}{l|}{} &$ {10}_{E}^{(i)} $ & {\bf 10} & -- 2 & $\omega^2$ \\ \multicolumn{1}{l|}{} &$ {1}_{E}^{(i)} $ & {\bf 1} & + 4 & 1 \\ \hline \multicolumn{1}{l|}{\multirow{6}{*}{\textbf{Scalars}}} &$ {10}_{H} $ & {\bf 10} & -- 2 & $\omega^{2}$ \\ \multicolumn{1}{l|}{} &$ {45}_{H} $ & {\bf 45} & + 4 & 1 \\ \multicolumn{1}{l|}{} &$ {126}_{H} $ & {\bf 126} & + 2 & $\omega^2$ \\ \multicolumn{1}{l|}{} &$ {210}_{H} $ & {\bf 210} & \;\;\;0 & 1 \\ \cline{2-5} \multicolumn{1}{l|}{} &$ {\Phi}_A $ & {\bf 1} & + 4 & 1 \\ \multicolumn{1}{l|}{} &$ {\Phi}_B $ & {\bf 1} & -- 8 & 1 \\ \hline \end{tabular} \end{center} \caption{ Particle content of the $SO(10) \times U(1)_\psi$ model. Here, $\omega= e^{i \pi/2} = i $. } \label{tab:PC} \end{table} The particle content of the $SO(10) \times U(1)_\psi$ model is listed in Table~\ref{tab:PC}. The model includes three generations of fermions in ${\bf 16}$ ($+1$), ${\bf 10}$ ($-2$), and ${\bf 1}$ ($+4$) representations of $SO(10)\times U(1)_\psi$. Each ${\bf 16}$-plet fermion (${16}_{\rm SM}^{(i)}$, $i = 1,2,3$) includes the $i$-th generation SM fermions plus one SM singlet RHN. The ${\bf 10}$-plets (${10}_{\rm E}^{(i)}$) and singlets (${1}_{\rm E}^{(i)}$) are new fermions. With the $U(1)_\psi$ charge assignments for the fermions in Table~\ref{tab:PC}, each generation of these fermions can be embedded into a ${\bf 27}$ representation of the $E_6$ group, and hence the model is free from all the gauge and mixed gauge-gravitational anomalies.
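As a quick arithmetic check of this statement (a worked example we add for completeness; we normalize the $SO(10)$ Dynkin indices such that $T({\bf 10})=1$ and $T({\bf 16})=2$), the per-generation anomaly coefficients indeed vanish: \begin{eqnarray} \left[U(1)_\psi\right]^3 &:& \;\; 16\,(+1)^3 + 10\,(-2)^3 + 1\,(+4)^3 = 16 - 80 + 64 = 0, \nonumber \\ U(1)_\psi{\rm -grav.} &:& \;\; 16\,(+1) + 10\,(-2) + 1\,(+4) = 16 - 20 + 4 = 0, \nonumber \\ \left[SO(10)\right]^2 \, U(1)_\psi &:& \;\; T({\bf 16})\,(+1) + T({\bf 10})\,(-2) = 2 - 2 = 0. \nonumber \end{eqnarray}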
Various Higgs (scalar) representations introduced in the table break the $SO(10) \times U(1)_\psi$ group down to the SM gauge group via the intermediate PS gauge group. The $SO(10)$ group has a center ${\bf Z}_4$, under which a ${\bf 16}$-plet transforms as ${\bf 16} \to i {\bf 16}$. The ${\bf Z}_4$ charges of all other representations, listed in the last column of Table~\ref{tab:PC}, are fixed by this transformation law. Through the VEVs of the various Higgs fields in the table, the ${\bf Z}_4$ symmetry is broken to its subgroup ${\bf Z}_2$ \cite{SO10Z2}. Under this ${\bf Z}_2$ symmetry, all particles except the ${16}_{SM}^{(i)}$ are ${\bf Z}_2$-even. Because of the ${\bf Z}_2$ symmetry, the lightest mass eigenstate among the ${\bf 10}$-plet and singlet fermions is stable and hence a DM candidate. In fact, the DM candidate is stable even when higher-dimensional operators are introduced, because of $SO(10)$ and Lorentz invariance (see, for example, Ref.~\cite{Ferrari:2018rey} for a variety of DM candidates in the $SO(10)$ scenario). We assume a suitable Higgs potential for the Higgs fields listed in Table~\ref{tab:PC} such that their VEVs break $SO(10) \times U(1)_\psi$ to the SM gauge group. Consider the decomposition of the Higgs representations under the PS gauge group $SU(4)_c \times SU(2)_L\times SU(2)_R$: \begin{eqnarray} {\bf 210} &=& ({\bf 1},{\bf 1},{\bf 1}) \oplus ({\bf 15},{\bf 1},{\bf 1}) \oplus ({\bf 6}, {\bf 2},{\bf 2}) \oplus ({\bf 15},{\bf 3},{\bf 1}) \oplus ({\bf 15},{\bf 1},{\bf 3}) \oplus ({\bf 10},{\bf 2},{\bf 2}) \oplus (\overline{\bf 10},{\bf 2},{\bf 2}) , \nonumber \\ {\overline {\bf 126}} &=& ({\bf 6},{\bf 1},{\bf 1}) \oplus ({\bf 10},{\bf 1},{\bf 3}) \oplus ({\overline {\bf 10}}, {\bf 3}, {\bf 1}) \oplus ({\bf 15}, {\bf 2}, {\bf 2}), \nonumber \\ {\bf 45} &=& ({\bf 1},{\bf 1},{\bf 3}) \oplus ({\bf 1},{\bf 3},{\bf 1}) \oplus ({\bf 6},{\bf 2},{\bf 2})\oplus ({\bf 15},{\bf1},{\bf1}), \nonumber \\ {\bf 10}&=& ({\bf 1},{\bf 2},{\bf 2}) \oplus ({\bf 6},{\bf 1},{\bf 1}). \label{eq:PSdecomp} \end{eqnarray} We consider the following path for the SSBs: \begin{eqnarray} SO(10) \times U(1)_\psi & \xrightarrow{\langle{210}_{H}\rangle}& SU(4)_c \times SU(2)_L \times SU(2)_R \times U(1)_\psi \nonumber \\ & \xrightarrow[]{\langle{\overline {126}}_{H}\rangle, \langle{45}_{H}\rangle, \; \langle{ \Phi}_{A,B}\rangle }& SU(3)_c \times SU(2)_L \times U(1)_{Y} \nonumber \\ & \xrightarrow {\langle{10}_{H}\rangle}& SU(3)_c \times U(1)_{EM}. \label{eq:SB} \end{eqnarray} Here, the PS (and $U(1)_\psi$) singlet component of ${ 210}_{H}$, $({\bf 1},{\bf 1},{\bf 1})$, develops a GUT-scale VEV ($ \langle {210}_{H} \rangle = M_{\rm GUT}$), which spontaneously breaks the $SO(10)$ gauge symmetry to the intermediate PS gauge group at the GUT scale. The PS gauge group is then spontaneously broken to the SM gauge group when $({\bf 10},{\bf 1},{\bf 3})$ of ${\overline {126}}_{H}$, $({\bf 15},{\bf1},{\bf1})$ of ${45}_{H}$, and ${\Phi}_{A,B}$ develop VEVs. For simplicity, we fix a common intermediate-scale VEV, $\langle{\overline {126}}_{H}\rangle = \langle {45}_{H} \rangle = \langle{\Phi}_{A,B}\rangle \equiv M_I$. Under the PS group decomposition, we assume that only the Higgs components listed in Table~\ref{tab:HiggsVEVs} have intermediate-scale masses, while the other components have GUT-scale masses. The mass spectrum of the scalars ${\Phi}_{A,B}$ will be discussed later.
Under the SM gauge group, there are four Higgs doublets: two in $({\bf 1},{\bf 2},{\bf 2})$ of ${10}_{H}$ and the other two in $({\bf 15}, {\bf 2}, {\bf 2})$ of ${\overline {126}}_{H}$. We assume that all four Higgs doublets develop non-zero VEVs at the electroweak scale and that only one linear combination of the doublets is light (doublet-doublet Higgs mass splitting) \cite{S010nonSUSY}. The light Higgs doublet is identified with the SM Higgs doublet, while the other linear combinations are heavy with masses of order $M_I$.\footnote{ The electroweak-scale VEV for the $({\bf 15}, {\bf 2}, {\bf 2})$ can be realized by an induced VEV mechanism from a mixed scalar coupling ${126} \; {\overline {126}}\; {126} \; {10}_H$ \cite{monopole1}. } Following the $U(1)_\psi$ SSB, the $U(1)_\psi$ gauge boson ($Z^\prime$) acquires its mass, given by \begin{eqnarray} m_{Z^\prime} \simeq \; g \; \sqrt{16 \langle { \Phi}_{A}\rangle^2 + 64 \langle {\Phi}_{B}\rangle^2 + 4 \langle \overline{{126}}_H \rangle^2+ 16 \langle { 45}_H \rangle^2} = 10 g {M_I}, \label{eq:masses} \end{eqnarray} where $g$ is the $U(1)_\psi$ gauge coupling, $\langle {\Phi}_{A,B}\rangle = \langle{\overline {126}}_{H}\rangle = \langle {45}_{H} \rangle = M_I$, and we neglect the contributions from the electroweak-scale VEVs. \begin{table}[t] \begin{center} \begin{tabular}{ c | c } & $M_{I}$ \\ \hline $\overline{{126}}_H$ & $({\bf10},{\bf1},{\bf3})$, $({\bf15},{\bf2},{\bf2})$ \\ ${45}_H$ & $({\bf15},{\bf1},{\bf1})$ \\ ${10}_H$ & $({\bf 1},{\bf 2},{\bf 2})$ \\ \end{tabular} \end{center} \caption{The Higgs mass spectrum; all other components have GUT-scale masses.} \label{tab:HiggsVEVs} \end{table} Let us now consider the fermion masses in our model. The Yukawa couplings for the SM fermions are given by \begin{eqnarray} {\cal L} \supset {16}_{SM} \left(Y_{10} {10}_{H} + Y_{\overline{126}} {\overline {126}}_{H} \right) {16}_{SM}, \label{eq:SMY} \end{eqnarray} where the generation indices have been suppressed. This is the Yukawa structure of the so-called minimal $SO(10)$ model, which can generate realistic SM fermion mass matrices. A fit of the fermion masses and flavor mixings is beyond the scope of the present work; we refer to Ref.~\cite{S010nonSUSY} for a detailed analysis of realistic fermion mass matrices. In Eq.~(\ref{eq:SMY}), the $U(1)_\psi$ gauge symmetry forbids a Yukawa interaction of the form ${16}_{SM} {10}_{H}^* {16}_{SM}$, which is otherwise allowed in non-SUSY $SO(10)$ models. The Yukawa couplings of the new fermions are given by \begin{eqnarray} {\cal L} = &&\sum_i \frac{1}{2}Y_{A}^{(i)}{\Phi}_{A} {10}_{E}^{(i)} {10}_{E}^{(i)} + \sum_{i \neq j} \frac{1}{2}Y_{45}^{(ij)} {45}_H {10}_{E}^{(i)} {10}_{E}^{(j)} \nonumber \\ &&+\sum_i \frac{1}{2} Y_{B}^{(i)} {\Phi}_{B} { 1}_{E}^{(i)} { 1}_{E}^{(i)} + \sum_{i, j} { Y_H}^{(ij)} {1}_{E}^{(i)} { 10}_{E}^{(j)} {10}_{H}, \label{eq:ExoticY} \end{eqnarray} where $Y_{45}^{(ij)}$ is anti-symmetric. The mass spectrum of the new fermions will be discussed in Sec.~\ref{sec:DM}. \section{Inflation Scenario in $SO(10) \times U(1)_\psi$} \label{sec:S010Inf} As discussed in Sec.~\ref{sec:Intro}, a low-scale inflationary scenario with $H_{inf} < M_I$ is necessary to dilute the monopoles generated through the PS SSB at the intermediate scale $M_I$. In this section, we implement the IPI scenario of Ref.~\cite{IPI} in the $SO(10) \times U(1)_\psi$ model and identify the parameter space that realizes $H_{\rm inf} < M_{I}$.
The $U(1)_\psi$ gauge symmetry is crucial for a successful IPI scenario, in which the $SO(10)$ singlet Higgs field $\Phi_A$ is identified with the inflaton. \subsection{Inflection-point Inflation } For the reader's convenience, this subsection is devoted to outlining the general setup of the IPI scenario; see Ref.~\cite{IPI} for more details. The IPI is a low-scale inflation scenario driven by a single scalar field, in which the inflaton potential exhibits an approximate inflection-point at a scale $M$. Consider the Taylor expansion of the inflaton potential $V (\phi)$ around $\phi=M$ up to the cubic term: \begin{eqnarray} V(\phi)\simeq V_0 +V_1 (\phi-M)+\frac{V_2}{2} (\phi-M)^2+\frac{V_3}{6} (\phi-M)^3, \label{eq:PExp} \end{eqnarray} where $V_0 = V(M)$ and $V_n \equiv {\rm d}^{n}V/{\rm d} \phi^n |_{\phi =M}$. It will soon be clear that higher-order terms in the expansion can be neglected. Using the potential of Eq.~(\ref{eq:PExp}), the inflationary slow-roll parameters at the scale $M$ are expressed as \begin{eqnarray} \epsilon \simeq \frac{M_{P}^2}{2} \left( \frac{V_1}{V_0} \right)^2, \;\; \eta \simeq M_{P}^2 \left( \frac{V_2}{V_0} \right), \;\; \zeta^2 = M_{P}^4 \frac{V_1 V_3}{V_0^2}, \label{eq:IPa} \end{eqnarray} where $M_{P} = m_P/\sqrt{8\pi} = 2.43\times 10^{18}$ GeV is the reduced Planck mass. The inflationary predictions for the spectral index ($n_s$), the tensor-to-scalar ratio ($r$), and the running of the spectral index ($\alpha$) are expressed in terms of the slow-roll parameters as \begin{eqnarray} n_s = 1-6\epsilon+2\eta, \; \; r = 16 \epsilon , \;\; \alpha = 16 \epsilon \eta -24 \epsilon^2-2 \zeta^2. \label{eq:IPred} \end{eqnarray} The amplitude of the scalar perturbation ($\Delta^2_{\mathcal{R}}$) is given by \begin{equation} \Delta_{\mathcal{R}}^2 = \frac{1}{24 \pi^2} \frac{V_0}{M_P^4 \epsilon }. \label{eq:PSpec} \end{equation} Using the central values, $\Delta_{\mathcal{R}}^2= 2.195 \times 10^{-9}$ and $n_s = 0.9649$, from the Planck 2018 results \cite{Planck2018}, we can express $V_1$ and $V_2$ as \begin{eqnarray} \frac{V_1}{M^3}&\simeq& 1.96 \times 10^3 \left(\frac{M}{M_P}\right)^3\left(\frac{V_0}{M^4}\right)^{3/2}, \nonumber \\ \frac{V_2}{M^2}&\simeq& -1.76 \times 10^{-2} \left(\frac{M}{M_P} \right)^2 \left(\frac{V_0}{M^4}\right), \label{eq:FEq-V1V2} \end{eqnarray} where we have used $\epsilon(M) \ll |\eta(M)|$, valid in the IPI scenario \cite{IPI}, in deriving the second equation. Slow-roll inflation takes place as the inflaton field slowly rolls down the potential from $\phi = M $ to $\phi = \phi_E < M$, where $\phi_E$ is the inflaton value at the end of inflation, determined by $\epsilon (\phi_E) =1$. As derived in Ref.~\cite{IPI}, the number of e-foldings during inflation is approximately given by \begin{eqnarray} N\simeq \pi \frac{V_0}{M_{P}^2\sqrt{2 V_1 V_3}} \; . \label{eq:CV4} \end{eqnarray} To solve the horizon problem, we may set $N = 50$--$60$. Using Eqs.~(\ref{eq:FEq-V1V2}) and (\ref{eq:CV4}), we express $V_3$ as \begin{eqnarray} \frac{V_3}{M} \simeq 6.99 \times 10^{-7} \; \left( \frac{60}{N} \right)^2 \left( \frac{M}{M_P} \right) \left( \frac{V_0}{M^4} \right)^{1/2} . \label{eq:FEq-V3} \end{eqnarray} With the above expressions for $V_{1,2,3}$ in terms of $V_0$ and $M$, we find the IPI predictions for $r$ and $\alpha$ as follows: \begin{eqnarray} r &=& 3.08 \times 10^7 \; \left( \frac{V_0}{M_P^4} \right), \nonumber \\ \alpha &\simeq& - 2\zeta^2 = - 2.74 \times 10^{-3}\left(\frac{60}{N}\right)^2.
\label{eq:FEq-r} \end{eqnarray} In the IPI scenario, the prediction for $\alpha$ is uniquely determined once $N$ is specified. For $N=60$, this prediction is consistent with $\alpha = - 0.0045\pm 0.0067$ from the Planck 2018 results \cite{Planck2018}. Precision measurements of $\alpha$ in future experiments can reduce the error to $\pm 0.002$ \cite{RunningSpectral}, so that the IPI prediction can be tested in the foreseeable future. \subsection{Inflection-point Inflation in $SO(10) \times U(1)_\psi$} \label{sec:S010IPI} Let us now implement the IPI scenario in the $SO(10) \times U(1)_\psi$ model by identifying the $SO(10)$ singlet Higgs field $\Phi_{A}$ with the inflaton. Assuming $\Phi_A$ is very weakly coupled to the other Higgs fields, we consider the tree-level inflaton/Higgs potential given by \begin{eqnarray} V_{tree} = \lambda \left( \Phi_A^\dagger \Phi_A - \frac{M_I^2}{2} \right)^2 \simeq \frac{1}{4}\lambda \varphi^4, \end{eqnarray} where $\varphi =\sqrt{2} \Re[ \Phi_A]$ is the real component of $\Phi_A$, which we identify with the inflaton. To obtain the final expression for the inflaton potential, we have used $\varphi \gg M_I$ during inflation. Taking quantum corrections into account, we consider the RG-improved effective potential given by \begin{eqnarray} V(\varphi) = \frac{1}{4} \lambda (\varphi)\;\varphi^4. \label{eq:VEff} \end{eqnarray} Here, $\lambda (\varphi)$ is the solution to the following RG equations: \begin{eqnarray} \varphi \frac{d g}{d \varphi} &=& \frac{1}{16 \pi^2} \left( \frac{1448}{3} \right) g^3, \nonumber\\ \varphi \frac{d Y_{A}^{(i)}}{d \varphi} &=& \frac{1}{16 \pi^2}\left( 24 g^2 Y_{A}^{(i)} + Y_{A}^{(i)}\left(-48g^2 + \sum_j {Y_{A}^{(j)}}^2\right)\right), \nonumber\\ \varphi \frac{d \lambda}{d \varphi} &=& \beta_{\lambda}, \label{eq:RGEs} \end{eqnarray} where the $Y_{A}^{(i)}$ are the ${\bf 10}$-plet fermion Yukawa couplings, and the beta-function of the inflaton quartic coupling ($\beta_{\lambda}$) is given by \begin{eqnarray} \beta_{\lambda} = \frac{1}{16 \pi^2} \left( 5 \lambda^2 + 2 \lambda \left(- 48 g^2 + \sum_i {Y_{A}^{(i)}}^2 \right) + 6144 g^4 - 4 \sum_i {Y_{A}^{(i)}}^4 \right). \label{eq:BGen} \end{eqnarray} For simplicity, we have neglected the contribution of $Y_{45}^{(ij)}$ to $\beta_{\lambda}$ by assuming it to be sufficiently small. The constants $V_{1,2,3}$ in Eq.~(\ref{eq:PExp}) can be expressed in terms of $\lambda$ and $\beta_\lambda$ as follows: \begin{eqnarray} \frac{V_1}{M^3}&=& \left.\frac{1}{4} (4 \lambda + \beta_\lambda)\right|_{\varphi= M},\nonumber \\ \frac{V_2}{M^2}&=& \left.\frac{1}{4} (12\lambda + 7\beta_\lambda+M \beta_\lambda^\prime)\right|_{\varphi= M}, \nonumber \\ \frac{V_3}{M}&=& \left.\frac{1}{4} (24\lambda + 26\beta_\lambda+10M \beta_\lambda^\prime+M^2 \beta_\lambda^{\prime\prime})\right|_{\varphi= M}, \label{eq:ICons2} \end{eqnarray} where the prime denotes $d/d\varphi$. In order for the effective inflaton potential to exhibit an approximate inflection-point at $M$, we require $V_1/M^3\simeq 0$ and $V_2/M^2\simeq 0$, so that \begin{eqnarray} \beta_\lambda (M)\simeq -4\lambda(M), \qquad M\beta_\lambda^{\prime}(M)\simeq -12 \lambda(M) - 7\beta_\lambda(M) \simeq 16 \lambda (M).
\label{eq:Cond1} \end{eqnarray} For $g (M) , Y_A^{(i)} (M), \lambda (M)< 1$, we can approximate $M^2 \beta_\lambda^{\prime\prime}(M) = - M \beta_\lambda^{\prime}(M) + (M \beta_\lambda^{\prime}(M))^2 \simeq - M \beta_\lambda^{\prime}(M)$, where we have neglected contributions from higher-order terms, namely ${\cal O}(g^8)$, ${\cal O}((Y_A^{(i)})^8)$ and ${\cal O}(\lambda^4)$. Together with the relations in Eq.~(\ref{eq:Cond1}), this simplifies the last equation in Eq.~(\ref{eq:ICons2}) to $V_3/M \simeq 16 \;\lambda(M)$. Using Eq.~(\ref{eq:FEq-V3}) and $V_0\simeq (1/4) \lambda(M) M^4$, we arrive at \begin{eqnarray} \lambda(M)\simeq 4.8 \times 10^{-16} \left(\frac{M}{M_{P}}\right)^2\left(\frac{60}{N}\right)^4. \label{eq:FEq1} \end{eqnarray} For the rest of the analysis, we set $N=60$. With the inflaton quartic coupling determined by $M$, we express the tensor-to-scalar ratio ($r$) and the Hubble parameter during inflation ($H_{inf}$) as \begin{eqnarray} r &\simeq& 3.7 \times 10^{-9} \left(\frac{M}{M_{P}}\right)^6, \nonumber \\ H_{inf} &\simeq& \sqrt{\frac{V_0}{3 {M_P}^2}}\simeq 1.5\times 10^{10} \;{\rm GeV} \;\left(\frac{M}{M_P}\right)^3. \label{eq:FEqR} \end{eqnarray} Note that $H_{inf} \lesssim 10^{10}$ GeV for $M \lesssim M_P$. In Ref.~\cite{IPI}, an upper bound $M \lesssim 5.7 M_P$ has been obtained from theoretical consistency. In the following sections, we will find $M_I=10^{11}$--$10^{12}$ GeV in our model; therefore, the monopole problem is solved by taking $M \lesssim M_P$. For $M \lesssim M_P$, the predicted tensor-to-scalar ratio $r < 3.7 \times 10^{-9}$ is far below the current upper bound $r< 0.065$ from the Planck 2018 observation \cite{Planck2018}. The conditions in Eq.~(\ref{eq:Cond1}) for realizing the (approximate) inflection-point at $M$ allow us to derive a relation between the gauge and Yukawa couplings. For simplicity, we assume $Y_{A}^{(1,2)} \ll Y_{A}^{(3)} \equiv Y$. Since the running of the gauge and Yukawa couplings is independent of $\lambda$, we also assume $g, Y_{A}^{(3)} \gg \lambda$. In this case, the first condition in Eq.~(\ref{eq:Cond1}) with the very small $\lambda$ in Eq.~(\ref{eq:FEq1}) leads to $\beta_\lambda(M) \simeq 0$, such that \begin{eqnarray} Y(M)\simeq 6.3\;g(M). \label{eq:FEq3} \end{eqnarray} Employing this relation and explicitly evaluating the second condition in Eq.~(\ref{eq:Cond1}) using the RG equations in Eq.~(\ref{eq:RGEs}), we find the relation $\lambda(M)\simeq 26 \, g(M)^6$. Thus, we can express the $U(1)_\psi$ gauge coupling as \begin{eqnarray} g(M)\simeq 1.6\times 10^{-3} \;\left(\frac{M}{M_{P}}\right)^{1/3}. \label{eq:FEq2} \end{eqnarray} Hence, all couplings at the scale $M$, namely $g(M)$, $Y(M)$ and $\lambda(M)$, are determined in terms of $M/M_P$. \begin{figure}[t] \begin{center} \includegraphics[scale=0.88]{lambdarge.eps} \; \includegraphics[scale=0.62]{Potential.eps} \end{center} \caption{ The left panel shows the RG running of the inflaton quartic coupling as a function of $\varphi/M$. We have fixed $M = M_{P} $, so that $g (M)=1.6 \times 10^{-3}$, $Y (M) \simeq 1.0 \times 10^{-2}$, and $\lambda(M) \simeq 4.8 \times10^{-16}$. The dashed horizontal line corresponds to $\lambda=0$. The right panel shows the RG-improved effective inflaton potential with an (approximate) inflection-point at $\varphi \simeq M$. } \label{fig:InfPot} \end{figure} Next we evaluate the low-energy values of $g(\varphi)$, $Y(\varphi)$ and $\lambda (\varphi)$ by solving the RG equations.
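In fact, the coupled system in Eq.~(\ref{eq:RGEs}) is also straightforward to integrate numerically. The following minimal Python sketch (an illustrative cross-check that we add here; it assumes a single dominant Yukawa coupling $Y \equiv Y_A^{(3)}$ and the boundary values obtained above for $M = M_P$) reproduces the qualitative behavior of the running quartic coupling shown in the left panel of Fig.~\ref{fig:InfPot}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

LOOP = 16.0 * np.pi**2

def rges(t, y):
    # t = ln(phi/M); one-loop system (eq:RGEs) with a single dominant
    # Yukawa Y = Y_A^(3); beta_lambda as in (eq:BGen)
    g, Y, lam = y
    beta_g = (1448.0 / 3.0) * g**3 / LOOP
    beta_Y = (24.0 * g**2 * Y + Y * (-48.0 * g**2 + Y**2)) / LOOP
    beta_lam = (5.0 * lam**2 + 2.0 * lam * (-48.0 * g**2 + Y**2)
                + 6144.0 * g**4 - 4.0 * Y**4) / LOOP
    return [beta_g, beta_Y, beta_lam]

# Boundary values at phi = M, for M = M_P:
g0 = 1.6e-3                    # from (eq:FEq2)
Y0 = 1536.0**0.25 * g0         # (6144/4)^(1/4) g ~ 6.3 g, cf. (eq:FEq3)
lam0 = 4.8e-16                 # from (eq:FEq1) with N = 60

sol = solve_ivp(rges, (0.0, -20.0), [g0, Y0, lam0],
                dense_output=True, rtol=1e-10, atol=1e-30)

# lambda(phi) stays tiny near phi = M and grows roughly quadratically
# in ln(phi/M) away from its minimum
for t in (0.0, -5.0, -10.0, -16.7):
    print(f"ln(phi/M) = {t:6.1f}   lambda = {sol.sol(t)[2]:.2e}")
\end{verbatim}
For $M_I = 1.3\times 10^{11}$ GeV, the value at $\ln(\varphi/M)\simeq -16.7$ corresponds to $\lambda(M_I)$ and hence, via $m_\varphi=\sqrt{2\lambda(M_I)}\,M_I$, to the inflaton mass quoted below.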
Since $g(M), Y(M) \ll1$, it is easy to find approximate solutions to their RG equations: \begin{eqnarray} g(\varphi) &\simeq& g(M) +\beta_{g}(M) \ln \left[\frac{\varphi}{M}\right] ,\nonumber \\ Y(\varphi) &\simeq& Y(M)+ \beta_{Y}(M) \ln \left[\frac{\varphi}{M}\right], \label{eq:gYatVEV} \end{eqnarray} where $\beta_{g}(M)$ and $\beta_{Y}(M)$ are the beta-functions of $g$ and $Y$ evaluated at $M$, respectively. Since $\lambda (M)$ is extremely small, $\beta_{\lambda}$ is mainly controlled by the gauge and Yukawa couplings, \begin{eqnarray} \beta_{\lambda}(\varphi) &\simeq& \frac{1}{16 \pi^2}\left( 6144\; g(\varphi) ^4 - 4\; Y(\varphi) ^4\right) \nonumber \\ &\simeq& 16\, \lambda (M) \ln \left[\frac{\varphi}{M}\right], \label{eq:betaL} \end{eqnarray} where we have used $\beta_\lambda (M) \simeq 0$, the second condition in Eq.~(\ref{eq:Cond1}), and Eq.~(\ref{eq:gYatVEV}). Hence, we find the approximate solution, \begin{eqnarray} \lambda(\varphi) \simeq 3.8\times 10^{-15}\left(\frac{M}{M_P}\right)^2\left(\ln \left[\frac{\varphi}{M}\right] \right)^2. \label{eq:Lmu} \end{eqnarray} At the $U(1)_\psi$ symmetry breaking scale $\varphi= M_I$, we obtain the mass spectrum: \begin{eqnarray} m_{Z^\prime} &\simeq& 10 g(M_I) M_I \simeq 2.3 \times 10^{-2} \times M_I \;\left(\frac{M}{M_{P}}\right)^{1/3}, \nonumber \\ m_{\varphi}&=& \sqrt{ 2 \lambda(M_I)} M_I \simeq 8.7\times 10^{-8} \times M_I \left|\ln \left[\frac{M}{M_I}\right]\right| \left(\frac{M}{M_P}\right), \nonumber \\ m_{10}^{(3)} &\simeq& \frac{1}{2}Y(M_I) M_I \simeq \frac{1}{3} m_{Z^\prime}. \label{eq:mA} \end{eqnarray} In the following analysis, we fix $M = M_P$ for simplicity, so that the mass spectrum is uniquely determined by $M_I$. In Fig.~\ref{fig:InfPot}, we plot the running quartic coupling (left) and the RG-improved effective inflaton potential (right). Here, $\lambda(M)\simeq 4.8 \times10^{-16}$, $g(M) \simeq 1.6\times 10^{-3}$, and $Y(M)\simeq 1.0 \times 10^{-2}$ with our choice of $M = M_P$. In the left panel, the running quartic coupling exhibits a minimum at $\varphi \simeq M$. In the right panel, we can see that the inflaton potential exhibits an (approximate) inflection-point at $\varphi \simeq M$ (marked by the vertical dashed-dotted line). \subsection{Reheating Temperature and Thermal Leptogenesis} \label{sec:Reheat} To connect our inflation scenario with standard Big Bang cosmology, we consider reheating after inflation. After the end of inflation, the inflaton rolls down to the potential minimum and oscillates around it. Once the age of the universe reaches the inflaton lifetime, the inflaton decays into SM particles, and the inflaton energy is converted into radiation. Assuming that the decay products are instantly thermalized, we estimate the reheating temperature ($T_R$) by \begin{eqnarray} T_R \simeq \left(\frac{90}{\pi^2 g_*}\right)^{1/4} \sqrt{\Gamma M_P}, \label{eq:TR} \end{eqnarray} where $\Gamma$ is the decay width of the inflaton and $g_*$ is the total number of relativistic degrees of freedom of the thermal plasma. We may express the decay width of the inflaton as \begin{eqnarray} \Gamma \simeq 1.4 \; {\rm GeV} \times \sqrt{g_*} \left(\frac{T_R}{10^{10}\,{\rm GeV}}\right)^2 .
\label{eq:gamma1} \end{eqnarray} For a coupling between the inflaton and the SM particles, we consider the following gauge-invariant coupling in the scalar potential: \begin{eqnarray} V\supset \Lambda \Phi_A 10_H^2 \supset \Lambda \varphi H_u H_d \supset \frac{\Lambda \sin2\beta}{2} \varphi H^\dagger H, \label{eq:InfPot} \end{eqnarray} where $\Lambda > 0$ is a free mass parameter, and $10_H \supset ({\bf 1},{\bf 2},{\bf 2}) = H_u \; ({\bf 1},{\bf 2}, +1/2)$ $\oplus H_d \; ({\bf 1},{\bf 2}^*, -1/2)$. The SM Higgs doublet ($H$) is realized as a linear combination of $H_u$ and $H_d$, embedded as $H_{u} \supset H \sin\beta$ and $H_{d} \supset { H}^\dagger \cos\beta$, where $\tan\beta = v_u/v_d$ is the ratio of the $H_u$ and $H_d$ VEVs. The decay width of the inflaton into a pair of SM Higgs doublets is given by \begin{eqnarray} \Gamma (\varphi \to H^\dagger H ) \simeq \frac{\Lambda^2 \sin^{2}2\beta}{4\pi \, m_\varphi}, \label{eq:gamma2} \end{eqnarray} where we have neglected the Higgs doublet mass. For $M = M_P$ and $M_I = 1.3 \times 10^{11}$ GeV, we obtain $m_{\varphi} = 1.9\times 10^{5}$ GeV from Eq.~(\ref{eq:mA}), and thus the reheating temperature, \begin{eqnarray} T_R \simeq 10^{10} \;{\rm GeV} \left(\frac{\Lambda}{4.2 \times 10^5\,{\rm GeV}}\right), \label{Lambda} \end{eqnarray} with $g_* = 100$ and $\beta = \pi/3$. The inflaton can also decay into a pair of SM Higgs doublets through the quartic coupling $ \lambda_{mix}\Phi_A^\dagger \Phi_A 10_H^\dagger 10_H$, which we assume to be negligibly small. Another possibility for the inflaton decay is through the Yukawa coupling in Eq.~(\ref{eq:ExoticY}) if a ${\bf 10}$-plet fermion is light enough. Since the inflaton mass is much smaller than $M_I$, the Yukawa coupling $Y_A^{(i)}$ is very small whenever a ${\bf 10}$-plet fermion is lighter than the inflaton; thus, we neglect the partial decay width of the inflaton for this process. In our scenario, the Majorana RHN masses are generated by the PS SSB at the intermediate scale. This is a natural scale for the seesaw mechanism to generate the light neutrino masses, as well as for thermal leptogenesis. As pointed out in Ref.~\cite{leptogenesis2}, there is a lower bound of about $10^9$ GeV on the lightest RHN mass for a successful thermal leptogenesis scenario. Taking the lightest RHN mass to be $10^{9}$ GeV, we may set $\Lambda=4.2 \times 10^5$ GeV in Eq.~(\ref{Lambda}), so that $10^9 \,{\rm GeV} < T_R \simeq 10^{10}\,{\rm GeV} < M_I$, which realizes successful thermal leptogenesis while avoiding a restoration of the PS-symmetric vacuum. \section{Gauge Coupling Unification and Proton Decay} \label{sec:GCUandPD} As discussed before, the $SO(10)$ breaking to the SM proceeds in two steps. In the bottom-up picture, the SM gauge group is first unified into the PS gauge group $SU(4){_c} \times SU(2)_L \times SU(2)_R$ at the intermediate scale $M_I$, and the PS gauge group is then unified into the $SO(10)$ group at $M_{GUT}$. In this section, we examine the RG evolution of the gauge couplings and determine the mass spectrum of the new particles required to realize successful gauge coupling unification. We also consider a lower bound on $M_{GUT}$ from the current experimental lower bound on the proton lifetime. We first consider the contributions of the new particles to the RG running of the gauge couplings.
For the Higgs sector, the fields listed in Table~\ref{tab:HiggsVEVs} contribute to the RG evolution of the gauge couplings above the PS SSB scale, while only the SM Higgs doublet contributes to the RG equations of the SM gauge couplings below the PS SSB scale. The decomposition of the new $10_{E}$ fermions under the PS gauge group is given in Eq.~(\ref{eq:PSdecomp}); under the SM gauge group, \begin{eqnarray} {10}^{(i)}_{E} = D^{(i)} \; ({\bf1},{\bf2},+1/2) \oplus {\bar D}^{(i)} \; ({\bf1}, \bar{\bf2}, -1/2) \oplus T^{(i)} \; ({\bf3},{\bf1}, +1/3) \oplus {\bar T}^{(i)} \; ({\bar {\bf3}},{\bf1}, -1/3), \label{eq:10decomp} \end{eqnarray} where $D^{(i)}$ and ${\bar D}^{(i)}$ ($T^{(i)}$ and ${\bar T}^{(i)}$) are the SM $SU(2)_L$ doublets ($SU(3)_c$ triplets). In the previous section, we fixed the ${10}^{(3)}_{E}$ fermion mass ($m_{10}^{(3)}$) in Eq.~(\ref{eq:mA}) by the IPI analysis. For the other two ${\bf 10}$-plet fermions, we consider a mass splitting between the doublet and triplet components (the origin of this mass splitting will be discussed in the next section). It will turn out that this mass splitting is crucial for keeping the unification scale $M_{GUT} < M_{P}$. Let us now examine the RG evolution of the gauge couplings by solving their RG equations at the 1-loop level. For energy scales $\mu$ below the PS SSB scale ($\mu < M_I$), the running SM gauge couplings obey the following RG equations: \begin{eqnarray} \mu \frac{d \alpha_{1}}{d \mu} &=& \frac{1}{2\pi}\alpha_{1}^2 \left(\frac{41}{10}+ \sum_{j= 1,2} \frac{2}{5} \theta (\mu - m_{D}^{(j)})+ \sum_{j= 1,2} \frac{4}{15}\theta (\mu - m_{T}^{(j)}) + \frac{2}{3} \theta (\mu -m_{10}^{(3)})\right), \nonumber \\ \mu \frac{d \alpha_{2}}{d \mu} &=& \frac{1}{2\pi}\alpha_{2}^2 \left(-\frac{19}{6} + \sum_{j= 1,2}\frac{2}{3} \theta (\mu - m_{D}^{(j)})+ \frac{2}{3} \theta (\mu - m_{10}^{(3)})\right), \nonumber \\ \mu \frac{d \alpha_{3}}{d \mu} &=& \frac{1}{2\pi}\alpha_{3}^2 \left(-7+ \sum_{j= 1,2}\frac{2}{3}\theta (\mu - m_{T}^{(j)}) + \frac{2}{3}\theta (\mu - m_{10}^{(3)}) \right). \label{eq:betafun1} \end{eqnarray} Here, $\alpha_{2,3} = g_{2,3}^2/4\pi$ with $g_{2}$ and $g_3$ being the $SU(2)_L$ and $SU(3)_c$ gauge couplings, respectively, $\alpha_1 = g_1^2/4\pi$ with $g_{1} = \sqrt{5/3}\; g_Y$ and $g_Y$ the $U(1)_Y$ gauge coupling, $\theta$ is the Heaviside step function, $m_{10}^{(3)} \simeq 7.7 \times 10^{-3} M_I$ is fixed from Eq.~(\ref{eq:mA}) with $M=M_P$, and $m_{D}^{(j)}$ ($m_{T}^{(j)}$) are the doublet (triplet) component masses of the two ${\bf 10}$-plet fermions. In the following analysis, we fix $m_{D}^{(1)}= m_{D}^{(2)} \equiv m_D$ and $m_{T}^{(1)}= m_{T}^{(2)} \equiv m_T$ ($m_{D, T} < M_I$) for simplicity. In solving the RG equations, we employ the SM gauge couplings at $\mu = m_t = 172.44$ GeV \cite{Buttazzo:2013uya}: \begin{eqnarray} g_{1} (m_t) = \sqrt{5/3} \times 0.35830, \qquad g_{2} (m_t) = 0.64779 , \qquad g_{3} (m_t) = 1.1666. \end{eqnarray} In our analysis, $m_{D,T}$ and $M_I$ are free parameters. For $M_I < \mu < M_{GUT}$, our theory is based on the PS gauge group. The relation between the SM and PS gauge couplings at $\mu = M_I$ is given by the tree-level matching conditions: \begin{eqnarray} \alpha_{2} (M_I) = {\alpha}_{L} (M_I) , \qquad \alpha_{3} (M_I) = {\alpha}_{4} (M_I) , \qquad \alpha_{1}^{-1} (M_I) = \frac{3}{5} {\alpha}_{R}^{-1} (M_I) + \frac{2}{5}{\alpha}_{4}^{-1}(M_I), \end{eqnarray} where $\alpha_{4,L,R}$ denote the gauge couplings of $SU(4){_c}$, $SU(2)_L$, and $SU(2)_R$, respectively.
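The threshold-dependent one-loop running and matching just described are easy to transcribe into code. As an illustrative sketch (our own addition; it uses the inputs above, $m_{10}^{(3)} \simeq 7.7\times 10^{-3} M_I$, and the one-loop PS coefficients $(b_4, b_L, b_R) = (1, 4, 32/3)$ quoted below), one can scan over $M_I$ until all three PS couplings meet at a common scale:
\begin{verbatim}
import numpy as np

def inv_alphas_at_MI(MI, mD, mT):
    # One-loop running of (alpha_1, alpha_2, alpha_3)^(-1) from m_t
    # to M_I with the thresholds of (eq:betafun1); masses in GeV
    m10 = 7.7e-3 * MI
    inv = np.array([58.73, 29.95, 9.233])     # 4 pi / g_i^2 at m_t
    pts = [172.44] + sorted(m for m in (mD, mT, m10) if m < MI) + [MI]
    for lo, hi in zip(pts[:-1], pts[1:]):
        mid = np.sqrt(lo * hi)                # probe point in segment
        tD, tT, t10 = mid > mD, mid > mT, mid > m10
        b = np.array([41/10 + 4/5*tD + 8/15*tT + 2/3*t10,
                      -19/6 + 4/3*tD + 2/3*t10,
                      -7.0 + 4/3*tT + 2/3*t10])
        inv -= b / (2*np.pi) * np.log(hi / lo)
    return inv

def mismatch(MI, mD, mT):
    i1, i2, i3 = inv_alphas_at_MI(MI, mD, mT)
    iL, i4 = i2, i3
    iR = (5*i1 - 2*i4) / 3                    # tree-level matching
    t = 2*np.pi * (iL - i4) / 3.0             # ln(MGUT/MI) from a4 = aL
    return (iR - (32/3)*t/(2*np.pi)) - (i4 - t/(2*np.pi)), t

mD, mT = 2.0e3, 5.0e7                         # m_D = 2 TeV, m_T = 5e4 TeV
MI = min(np.logspace(10, 14, 4001),
         key=lambda x: abs(mismatch(x, mD, mT)[0]))
t = mismatch(MI, mD, mT)[1]
print(f"M_I ~ {MI:.2e} GeV,  M_GUT ~ {MI*np.exp(t):.2e} GeV")
\end{verbatim}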
With the initial values of the PS gauge couplings fixed by the matching conditions, we solve the following RG equations of the PS gauge couplings for $M_I < \mu < M_{GUT}$: \begin{eqnarray} \mu \frac{d {\alpha}_{4}}{d \mu} &=& \frac{1}{2\pi}{\alpha}_{4}^2 \left(+1\right), \nonumber \\ \mu \frac{d {\alpha}_{L}}{d \mu} &=& \frac{1}{2\pi}{\alpha}_{L}^2 \left(4\right), \nonumber \\ \mu \frac{d {\alpha}_{R}}{d \mu} &=& \frac{1}{2\pi}{\alpha}_{R}^2 \left(\frac{32}{3}\right). \label{eq:betafun2} \end{eqnarray} Here, the beta-functions include the contributions from all SM fermions, the ${10}^{(i)}_{E}$ ($i= 1,2,3$) fermions, the Higgs fields listed in Table~\ref{tab:HiggsVEVs}, and the PS gauge bosons. \begin{figure}[t] \begin{center} \includegraphics[scale =0.8]{RGE2.eps} \end{center} \caption{ For $m_{D} = 2$ TeV and $m_T = 5\times10^4$ TeV (solid lines) and $m_{T} = 2\times10^6$ TeV (dashed lines), the three solid and dashed lines from top to bottom correspond to $\alpha_{{1,2,3}}$ for $\mu < M_I$ and ${\alpha}_{{R,L,4}}$ for $M_I< \mu < M_{GUT}$. For $m_{T} = 5\times10^4 \; (2\times10^6)$ TeV, we find $M_I \simeq 2.4 \times 10^{11} \; (1.4 \times 10^{12})$ GeV and $M_{GUT} \simeq 3.4 \times 10^{16} \; (4.6 \times 10^{15})$ GeV. } \label{fig:gcu} \end{figure} The analytic solutions of the above RG equations at scale $\mu$ are obtained as functions of the three free parameters $m_{D}$, $m_{T}$, and $M_I$. Next, we require gauge coupling unification at $\mu=M_{GUT}$: $ \alpha_{L} (M_{GUT}) = \alpha_{R} (M_{GUT}) = \alpha_{4} (M_{GUT}) \equiv \alpha_{GUT}$. With the four free parameters $m_{D}$, $m_{T}$, $M_I$ and $M_{GUT}$, we can always find a solution satisfying the gauge coupling unification condition: once we fix the values of $m_{D}$ and $m_{T}$, the mass scales $M_I$ and $M_{GUT}$ are determined from the unification condition. In Fig.~\ref{fig:gcu}, we plot the RG running of the gauge couplings for a fixed value of $m_{D} = 2$ TeV and two different values of $m_T = 5\times10^4$ TeV (solid lines) and $m_{T} = 2\times10^6$ TeV (dashed lines). The three solid lines from top to bottom correspond to $\alpha_{{1,2,3}}$ for $\mu < M_I$ and ${\alpha}_{{R,L,4}}$ for $M_I< \mu < M_{GUT}$. For $m_{T} = 5\times10^4 \; (2\times10^6)$ TeV, we find $M_I \simeq 2.4 \times 10^{11} \; ( 1.4 \times 10^{12})$ GeV and $M_{GUT} \simeq 3.4 \times 10^{16} \; (4.6 \times 10^{15})$ GeV. The plot shows that as we increase the triplet fermion mass $m_T$, the values of $\alpha_{GUT}$ and $M_{GUT}$ decrease while the value of $M_I$ increases. \begin{figure}[t] \begin{center} \includegraphics[scale =0.65]{MIv3E.eps} \; \includegraphics[scale=0.58]{alphaGUT.eps}\\ \includegraphics[scale=0.95]{MGUTv3E.eps} \end{center} \caption{ Top-left and top-right panels show $M_{I}$ and $1/\alpha_{GUT}$ as a function of $m_{T}$, respectively, for $m_{D} = 1$ TeV, $2$ TeV, and $5$ TeV (solid lines from top to bottom). The bottom panel shows $M_{GUT}$ as a function of $m_{T}$ for $m_{D} = 1$ TeV, $2$ TeV, and $5$ TeV (solid lines from bottom to top). The gray shaded region is excluded by the lower bound on the proton lifetime from the Super-K result. The search reach in proton lifetime of the future Hyper-K experiment, $\tau_{HK} \simeq 10 \times \tau_{SK}$ \cite{Abe:2011ts}, is depicted as the dashed line. } \label{fig:MIandMGUT} \end{figure} Since quarks and leptons are unified into common representations of the unified gauge group and baryon number is violated, proton decay is a typical prediction of GUTs.
In our model, the main proton decay process, $p \to \pi^0 e^+$, is mediated by the $SO(10)$ GUT gauge bosons and the colored Higgs bosons in $10_H$. For the GUT gauge boson mediated process, the proton lifetime is estimated as $\tau_p \simeq (1/\alpha_{GUT}^2) M_{GUT}^4/m_p^5$ \cite{Nath:2006ut} in terms of the unified gauge coupling $\alpha_{GUT}$, the gauge coupling unification scale $M_{GUT}$, and the proton mass $m_p = 0.938$ GeV. For the colored Higgs mediated process, we estimate the proton lifetime as $\tau_p \simeq m_{HC}^4/\left(m_p^5 Y^2_u Y^2_d \right)$ \cite{Nath:2006ut}, where $Y_{u,d} \simeq 10^{-5}$ are the up and down quark Yukawa couplings, and $m_{HC}$ is the colored Higgs boson mass. Employing the lower bound on the proton lifetime for the process $p \to \pi^0 e^+$ from the Super-Kamiokande (Super-K) experiment, $\tau_{SK}> 1.6 \times 10^{34}$ years \cite{Miura:2016krn}, we find $M_{GUT}/\sqrt{\alpha_{GUT}} > 2.5 \times 10^{16}$ GeV and $m_{HC} > 4.5 \times 10^{11}$ GeV for the GUT gauge boson and colored Higgs mediated processes, respectively. The proton decay bound thus constrains the parameter region for $m_{D}$ and $m_{T}$. We have taken $m_{HC} =M_{GUT}$ for the analysis in this section; however, our results for the gauge coupling unification remain almost the same even for $4.5 \times 10^{11}$ GeV$ < m_{HC} <M_{GUT}$, since the colored Higgs contribution to the beta-functions is not large. In Fig.~\ref{fig:MIandMGUT}, we show our results for gauge coupling unification for various values of $m_D$ and $m_T$. The top panels depict $M_{I}$ (left panel) and $1/\alpha_{GUT}$ (right panel) as a function of $m_{T}$ for three fixed values of $m_{D} = 1$ TeV, $2$ TeV, and $5$ TeV from top to bottom. Gauge coupling unification is realized along the solid lines. In the bottom panel, we show $M_{GUT}$ as a function of $m_{T}$ for $m_{D} = 1$ TeV, $2$ TeV, and $5$ TeV from bottom to top, respectively. The gray shaded region is excluded by the Super-K result. Note that the Super-K constraint leads to an upper bound on the triplet fermion mass, $m_{T} < 2\times 10^6$ TeV, $8\times 10^5$ TeV and $4\times10^5$ TeV, respectively, for $m_{D} = 5$ TeV, $2$ TeV, and $1$ TeV. The search reach in proton lifetime of the future Hyper-Kamiokande (Hyper-K) experiment, $\tau_{HK} \simeq 10 \times \tau_{SK}$ \cite{Abe:2011ts}, is depicted as the dashed line. We conclude this section with a comment on the result for the degenerate mass spectrum, $m_D^{(i)}= m_T^{(i)}$ ($i=1,2$). In this case, we find $M_I \simeq 1.7\times 10^9$ GeV and $M_{GUT}\simeq 1.4 \times 10^{19}$ GeV. This result is independent of the value of the common (degenerate) mass, since the ${\bf 10}$-plet fermions then contribute to the gauge coupling beta-functions as complete $SO(10)$ multiplets. As shown in Figs.~\ref{fig:gcu} and \ref{fig:MIandMGUT}, the doublet-triplet mass splitting lowers the gauge coupling unification scale from near the Planck scale. \section{Dark Matter in $SO(10) \times U(1)_\psi$} \label{sec:DM} Because of the residual ${\bf Z}_2$ symmetry after the SSB, the lightest mass eigenstate among the linear combinations of the $SO(10)$ singlet and ${\bf 10}$-plet fermions is stable and is a suitable DM candidate if it is electrically and color neutral. In this section we consider the DM physics of our model. In the SM gauge group decomposition, the DM candidate is a linear combination of SM singlet and $SU(2)_L$ doublet fermions, realizing the so-called ``singlet-doublet DM'' (SD-DM) scenario \cite{SDDM}.
In the following, we identify the allowed parameter region to reproduce the observed DM relic density while satisfying the constraint from the direct DM detection experiments. \subsection{Doublet-Triplet Fermion Mass Splitting and Triplet Fermion Lifetime} \label{sec:DTS} Before the DM physics analysis in the next subsection, we consider the color triplet fermions included in the ${\bf 10}$-plets. Although they are unstable, their lifetime can be very long since they decay through the colored Higgs boson and the GUT gauge boson, which are very heavy. If the colored particles decay after Big Bang Nucleosynthesis (BBN), which begins when the age of the universe is around 1 second, the energetic decay products could destroy the light nuclei which have been successfully synthesized during BBN. We can simply avoid this problem if the lifetime of the colored fermions is shorter than 1 second. In this subsection, we discuss how to realize this situation. In Sec.~\ref{sec:GCUandPD}, we have investigated gauge coupling unification by introducing the mass splitting between the doublet and the triplet components in the ${\bf 10}$-plet fermions ($10_{E}^{(1,2)}$). We have found that this mass splitting results in gauge coupling unification below the Planck scale, $M_{GUT} < M_P$. This mass splitting is also important to shorten the lifetime of the color triplet fermions. We can generate the mass splitting by employing the Dimopoulos-Wilczek mechanism \cite{DWms1}. Consider the Yukawa interaction for the $10_{E}^{(1,2)}$ fermions with the ${45}_H$ in Eq.~(\ref{eq:ExoticY}). Following Ref.~\cite{DWms1}, we set the VEV for ${45}_H$ in the $B-L$ direction: $\langle { 45}_H\rangle = M_I \times diag (1, 1, 1 , 0 , 0) \times i \sigma_2$. Thus, the mass terms for the doublet and triplet components of the ${\bf 10}$-plets are expressed as \begin{eqnarray} {\cal L}_{\rm mass} \supset \begin{pmatrix} {\bar D}^{(1)} & {\bar D}^{(2)} \end{pmatrix} \begin{pmatrix} m_{10}^{(1)} & 0 \\ 0 & m_{10}^{(2)} \end{pmatrix} \begin{pmatrix} D^{(1)} \\ D^{(2)} \end{pmatrix} + \begin{pmatrix} {\bar T}^{(1)} & {\bar T}^{(2)} \end{pmatrix} \begin{pmatrix} m_{10}^{(1)} & m_{45} \\ m_{45} & m_{10}^{(2)} \end{pmatrix} \begin{pmatrix} T^{(1)} \\ T^{(2)} \end{pmatrix}, \label{eq:DWmass} \end{eqnarray} where $m_{10}^{(1,2)} = Y_A^{(1,2)} M_I $ and $m_{45} = Y_{45}^{(12)}M_I$. As in the previous section, we set $m_{10}^{(1)} = m_{10}^{(2)} \equiv m_D$, and the mass eigenvalues of the triplet fermions are $m_{T}^{(1,2)} = |m_D \pm m_{45}|$. Setting $m_{45} = m_T \gg m_D$, we obtain almost degenerate triplet fermion masses, $m_{T}^{(1,2)} \simeq m_T \gg m_D$. This is the setup in the previous section. Let us now estimate the lifetime of the color triplet fermions. A triplet fermion decays into a doublet fermion in the ${\bf 10}$-plet and a SM quark and lepton through an off-shell GUT gauge boson $({\bf 6, 2, 2}) \subset {\bf 45}$ in the PS gauge group decomposition. The partial lifetime of this process is calculated to be \begin{eqnarray} \tau_{T} &\simeq& 192 \pi^3 \frac{M_{GUT}^4}{m_{T}^5}. \label{eq:tautrip} \end{eqnarray} From Fig.~\ref{fig:MIandMGUT}, the proton lifetime constraint yields an upper bound on the triplet fermion mass for fixed $m_D$ values. Eq.~(\ref{eq:tautrip}) implies that the minimum lifetime of the triplets is determined by the upper bound on $m_T$. We find $m_D \lesssim 2$ TeV to satisfy the BBN constraint $\tau_T<1$ s for the corresponding maximum value of $m_T$.
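To translate Eq.~(\ref{eq:tautrip}) into seconds, the following one-line estimate can be used (a sketch; the benchmark point $m_T = 2\times10^6$ TeV with $M_{GUT} \simeq 3.4\times 10^{16}$ GeV is read off the dashed curves of Fig.~\ref{fig:gcu}):
\begin{verbatim}
import numpy as np

hbar = 6.582e-25                     # GeV * s

def tau_T_seconds(m_T, M_GUT):
    """Eq. (tautrip): tau_T ~ 192 pi^3 M_GUT^4 / m_T^5 in natural
    units (GeV^-1), converted to seconds via hbar."""
    return hbar * 192.0 * np.pi**3 * M_GUT**4 / m_T**5

# m_T = 2e6 TeV = 2e9 GeV, near the upper end of the allowed range
print("tau_T ~ %.2f s" % tau_T_seconds(2.0e9, 3.4e16))   # ~0.16 s < 1 s
\end{verbatim}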
A triplet fermion also decays into an $SO(10)$ singlet fermion, a top quark and a tau lepton through an off-shell colored Higgs boson. The partial lifetime of this process is calculated to be \begin{eqnarray} \tau_{T} &\simeq& \frac{192 \pi^3}{Y_{t}^2 {Y_H}^2} \frac{m_{HC}^4}{m_{T}^5} \simeq 1 \; {\rm s} \left(\frac{m_{HC}[{\rm GeV}]}{3.0 \times 10^{14}}\right)^4 \left(\frac{5.0 \times 10^4 }{m_{T}[{\rm TeV}]}\right)^5 \left(\frac{55}{m_0 [{\rm GeV}]}\right)^2, \label{eq:lifetime3E} \end{eqnarray} where $Y_{t} \simeq 1$ is the SM top Yukawa coupling, and we express $Y_H$ in terms of a new parameter $m_0$ defined as ${Y_H} = \left(\sqrt{2} m_0/v_h\right)$. This new parameter plays an important role in the DM physics analysis in the next subsection as well as in the analysis in Sec.~\ref{sec:HiggsStab}. For our benchmark values used in the following sections, $m_{T} = 5.0\times 10^4$ TeV and $m_0 = 55$ GeV, the BBN constraint of $\tau_T <1$ s leads to an upper bound on the colored Higgs boson mass. Combining with the lower bound on the colored Higgs boson mass from the proton lifetime constraint, we find \begin{eqnarray} 4.5 \times 10^{11}< m_{HC} [{\rm GeV}] < 3.0 \times 10^{14}. \label{eq:mHCbound} \end{eqnarray} As we have mentioned in the previous section, our results for the gauge coupling unification remain almost the same even for $m_{HC}$ values in this range. \subsection{Singlet-Doublet Fermion Dark Matter} \label{sec:SDDM} In our model, the DM candidate is a linear combination of the $SU(2)_L$ doublets in the ${\bf 10}$-plet and the singlet fermions. The doublet and singlet fermions individually acquire their masses from the VEVs of $\Phi_A$ and $\Phi_B$ as \begin{eqnarray} {\cal L} \supset \sum_i \left( m_{D}^{(i)} D^{(i)} {\bar D}^{(i)} + m_S^{(i)} 1_E^{(i)} 1_E^{(i)} \right), \end{eqnarray} where \begin{eqnarray} m_{D}^{(i)} = Y_{A}^{(i)} \langle \Phi_A\rangle = \frac{1}{\sqrt{2}} Y_{A}^{(i)} M_I, \qquad m_S^{(i)} = Y_{B}^{(i)} \langle \Phi_B\rangle= \frac{1}{\sqrt{2}} Y_{B}^{(i)} M_I. \end{eqnarray} In Sec.~\ref{sec:GCUandPD}, we have set $m_D^{(1)} = m_D^{(2)} = m_D = {\cal O} (1)$ TeV. In addition, the Yukawa interactions involving $10_H$ in Eq.~(\ref{eq:ExoticY}) generate the mixing masses between the doublets and the singlets after electroweak symmetry breaking: \begin{eqnarray} {\cal L} &\supset& \sum_{i, j} { Y_H}^{(ij)} {1}_{E}^{(i)} { 10}_{E}^{(j)} {10}_{H} \nonumber \\ &\supset& {Y_H}^{(ij)} \left(1_E^{(i)} D^{(j)} H_d+ 1_E^{(i)} {\bar D}^{(j)} H_u\right) \nonumber \\ &\supset&{Y_H}^{(ij)} ( \cos\beta \; 1_E^{(i)} D^{(j)} {H}^\dagger+ \sin\beta \; 1_E^{(i)} {\bar D}^{(j)} H) . \label{eq:YHD} \end{eqnarray} For simplicity, we choose only ${Y_H}^{(1,1)} \equiv Y_H$ to be sizable and real, and only consider the first generation for our DM physics discussion. Thus, the relevant Lagrangian is given by \begin{eqnarray} {\cal L} \supset m_{D} D {\bar D} + m_S S S + {Y_H} \left( \cos\beta D {H}^\dagger S + \sin\beta {\bar D} H S \right) + {\rm h.c.}, \label{eq:YukawaDM} \end{eqnarray} where we have introduced a new notation, $D^{(1)} \equiv D$ and $1_E^{(1)} \equiv S$.
Substituting $H = 1/\sqrt{2} (0, h + v_h)^T$ ($h$ is the SM Higgs boson and $v_h = 246$ GeV is the Higgs VEV), we obtain the mass matrix for the electrically neutral fermions: \begin{eqnarray} {\cal L} \supset \frac{1}{2} \begin{pmatrix} D_0 & {\bar D_0} & S \end{pmatrix} \begin{pmatrix} 0 & m_{D} & m_0 \sin\beta \\ m_{D} & 0 & m_0 \cos\beta \\ m_0 \sin\beta & m_0 \cos\beta & m_S \end{pmatrix} \begin{pmatrix} D_0 \\ {\bar D_0} \\ S \end{pmatrix}, \label{eq:massmat} \end{eqnarray} where $m_0 \equiv {Y_H} v_h/\sqrt{2}$. This symmetric mass matrix can be diagonalized by a single orthogonal matrix $U$ for the mass eigenstates $\psi_{1,2,3}$ with masses $m_{1,2,3}$ defined as $( \psi_1, \psi_2, \psi_3)^{T} = U^{-1} ( D_0, {\bar D}_0 , S)^{T}$. The lightest mass eigenstate is identified with the DM particle. To simplify the DM analysis, we consider two extreme cases: (i) ${ m_S \gg m_{D}}$, where the DM is mostly the doublet component (a linear combination of $\psi_1$ and $\psi_2$). (ii) ${m_S \ll m_{D}}$, where the DM is mostly the singlet component ($\psi_{3}$). The first case is similar to the Higgsino-like neutralino DM scenario in the Minimal Supersymmetric SM. This case has been well studied in the literature (see, for example, \cite{ArkaniHamed:2006mb}), where the correct DM relic density is reproduced with a DM mass of around 1 TeV. In the following, we will focus on case (ii). For $m_S, m_0 \ll m_{D}$ in this case, the mass eigenvalues can be approximated as \begin{eqnarray} m_{1,2} \simeq m_D, \qquad m_{3} \simeq m_S - m_0 \left(\frac{m_0}{m_{D}}\right) \sin2\beta. \label{eq:DMmass} \end{eqnarray} From Eq.~(\ref{eq:YukawaDM}), we extract the interactions involving the DM particle ($\psi_3$), \begin{eqnarray} {\cal L} &\supset& \frac{1}{2} \begin{pmatrix} \psi_1 & \psi_2 & \psi_3 \end{pmatrix} U^T \begin{pmatrix} 0 & 0 & \frac{m_0 \sin\beta}{v_h} h \\ 0 & 0 & \frac{m_0 \cos\beta}{v_h} h \\ \frac{m_0 \sin\beta}{v_h} h & \frac{m_0 \cos\beta}{v_h} h & 0 \end{pmatrix} U \begin{pmatrix} \psi_1 \\ \psi_2 \\ \psi_3 \end{pmatrix}, \nonumber \\ &\supset & \frac {1}{2} y_{33} h \psi_{3} \psi_{3} + \frac{1}{2} y_{31} h \psi_{3} \psi_{1} + \frac{1}{2} y_{32} h \psi_{3} \psi_{2}, \label{eq:Lint} \end{eqnarray} where the couplings $y_{31}$, $y_{32}$, and $y_{33}$ are determined by the elements of the mixing matrix $U$, $m_0$ and $\beta$. The thermal relic density of the DM particle is evaluated by solving the Boltzmann equation, \begin{eqnarray} \frac{dY}{dx} = - \frac{\langle \sigma v \rangle}{x^2}\frac{s (m_{3})}{H(m_{3})} \left( Y^2-Y_{EQ}^2 \right), \label{eq:Boltzman} \end{eqnarray} where $x = m_{3}/T$, $H(m_{3})$ is the Hubble parameter at $T = m_3$, and the yield ($Y = n/s$) is given by the ratio of the DM number density ($n$) and the entropy density ($s$), and $Y_{EQ}$ is the yield of the DM particle in thermal equilibrium: \begin{eqnarray} s(m_{3}) = \frac{2 \pi^2}{45} g_\star m_{3}^3, \; \; H(m_{3}) = \sqrt{\frac{\pi^2}{90} g_\star} \frac{m_{3}^2}{M_P}, \; \; Y_{EQ}(x) = \frac{g_{DM}}{2 \pi^2} \frac{x^2 m_{3}^3}{s(m_{3})} K_2(x), \end{eqnarray} with $K_2$ being the modified Bessel function of the second kind of order 2.
In Eq.~(\ref{eq:Boltzman}), $\langle\sigma v\rangle$ is the thermal average of the total pair annihilation cross section of the DM particles times their relative velocity: \begin{eqnarray} \langle \sigma v \rangle = \frac{g_{DM}^2}{64 \pi^4} \left(\frac{m_{3}}{x}\right) \frac{1}{n_{EQ}^{2}} \int_{4 m_{3}^2}^\infty ds \; 2 (s- 4 m_{3}^2) \sigma(s) \sqrt{s} K_1 \left(\frac{x \sqrt{s}}{m_{3}}\right), \label{eq:ThAvgSigma} \end{eqnarray} where $g_{DM} = 2$ denotes the degrees of freedom of the Majorana fermion DM particle, $n_{EQ}=s(m_{3}) Y_{EQ}/x^3$ is the equilibrium number density of the DM particle, $K_1$ is the modified Bessel function of the second kind of order 1, and $\sigma (s)$ is the total annihilation cross section of the DM particle. The DM particle density at the present time is evaluated from \begin{eqnarray} \Omega_{DM} h^2 =\frac{m_{3} s_0 Y(x\to\infty)} {\rho_c/h^2}, \end{eqnarray} where $\rho_c/h^2 =1.05 \times 10^{-5}$ GeV/cm$^3$ is the critical density, and $s_0 = 2890$ cm$^{-3}$ is the entropy density of the present Universe. In order to evaluate $\sigma (s)$, we consider two processes for the pair annihilation of the DM particles: the $t/u$-channel processes mediated by $\psi_{1,2}$ or the charged fermions in $D$ and ${\overline D}$, and the $s$-channel process mediated by the SM Higgs boson. For the $t/u$-channel processes with $m_D \gg m_3$, we consider the effective Lagrangian after integrating out $\psi_{1,2}$, \begin{eqnarray} {\cal L}_{eff} = \frac{1}{2} \left(\frac{y_{31}^2}{m_D}\right) h h \psi_{3} \psi_{3} + \frac{1}{2} \left(\frac{y_{32}^2}{m_D}\right) h h \psi_{3} \psi_{3}. \end{eqnarray} For example, the cross section for the $\psi_{1}$ mediated processes is estimated as \begin{eqnarray} \sigma_0 \simeq \left(\frac{1}{64\pi}\right) \left(\frac{y_{31}^2}{m_{D}}\right)^2 \simeq y_{31}^4 \left( \frac{1 {\rm TeV}}{m_{D}} \right)^2 {\rm pb}. \end{eqnarray} Here, we have assumed $m_3 > m_h$. Since the DM is mostly the singlet component, its coupling with the SM Higgs boson is suppressed, $y_{31}^4 \ll 1$. Therefore, the cross section for this process is much smaller than a typical cross section of $1$ pb for a thermal DM. We can apply the same discussion for the $\psi_2$ and charged fermion mediated processes, and conclude that the cross sections for all the $t/u$-channel processes are too small to reproduce the observed DM relic density. Let us next consider the $s$-channel process mediated by the SM Higgs boson. Although the DM coupling with the SM Higgs is suppressed, the $s$-channel cross section can be enhanced if the DM mass is close to the Higgs boson resonance point, $ m_3\simeq m_h/2$. For $m_3 \ll m_D$, this will turn out to be the only possibility for reproducing the observed DM relic density. The $s$-channel cross section is given by \begin{eqnarray} {\sigma} (s) = \frac{y_{33}^2}{64} \left( 3 \left(\frac{m_b}{v_h} \right)^2 + 3 \left(\frac{m_c}{v_h} \right)^2+ 3 \left(\frac{m_\tau}{v_h} \right)^2 \right) \frac{\sqrt{s(s-4 m_{3}^2)}}{\left(s- m_h^2\right)^2 + m_h^2 \Gamma_h^2}. \end{eqnarray} For the final states, we have considered a pair of bottom (b) quarks, charm (c) quarks, and tau ($\tau$) leptons with masses $m_b = 2.82$ GeV, $m_c = 685$ MeV, and $m_\tau = 1.75$ GeV \cite{Bora:2012tx}, respectively.
$\Gamma_h = \Gamma_h^{SM}+ \Gamma_h^{DM}$ is the total decay width of the SM Higgs boson, where $\Gamma_h^{SM} = 4.07$ MeV \cite{Denner:2011mq} is the total Higgs boson decay width in the SM and \begin{eqnarray} \Gamma_h^{DM} = \theta\left(m_h-2 m_3\right)\frac{y_{33}^2}{16\pi}m_h\left(1-\frac{4m_3^2}{m_h^2}\right)^{3/2} \end{eqnarray} is the partial decay width for the SM Higgs boson decay into a pair of DM particles, with $\theta$ the Heaviside step function. The annihilation cross section is determined by only two free parameters, $m_3$ and $y_{33}$. After numerically solving the Boltzmann equation with the $s$-channel cross section, we find the relation between $m_3$ and $y_{33}$ to reproduce the observed DM relic density of $\Omega_{DM}h^2 = 0.120$ \cite{Aghanim:2018eyx}. \subsection{Direct Detection Bound on Dark Matter} \label{sec:DD} \begin{figure}[t] \begin{center} \includegraphics[scale =0.91]{DM1.eps}\;\; \includegraphics[scale=0.85]{DM2.eps} \end{center} \caption{ Left panel: $y_{33}$ as a function of $m_{3}$ (solid curve) along which the observed DM density is reproduced. The gray shaded region is excluded by the XENON1T results and the horizontal dashed line marks the search reach of the future LUX-ZEPLIN experiment. Right panel: $m_D$ as a function of $m_3$ for different choices of $m_0 [{\rm GeV}] = 55, 45$ and $35$ (solid curves from top to bottom) and fixed $\beta= \pi/3$, corresponding to the solid curve in the left panel. The gray shaded region is excluded by the null LHC search results for a heavy charged lepton. } \label{fig:DM} \end{figure} Various experiments that directly search for the DM particles are in operation. The most severe constraint on the so-called spin-independent (SI) cross section of the DM particle scattering off nuclei is given by the XENON1T direct DM detection experiment \cite{Aprile:2018dbl}. We use this result to constrain the parameter space for $m_3$ and $y_{33}$. The SI elastic cross section for the DM scattering off a nucleon is given by \begin{eqnarray} \sigma_{\rm SI} \simeq \frac{1}{\pi} \left(\frac{y_{33}}{v_h}\right)^2 \left(\frac{\mu_{\rm eff}}{m_h^2}\right)^2 f_N^2, \label{eq:SIcs1} \end{eqnarray} where $\mu_{\rm eff} = m_N m_{3}/(m_N +m_{3})$ is the effective mass of the DM-nucleon system with a nucleon mass $m_N = 0.939$ GeV \cite{Patrignani:2016xqp}. The nuclear matrix element for a nucleon, $f_N$, is given by \begin{eqnarray} f_N = \left(\sum_{q = u,d,s} f_{T_q} + \frac{2}{9} f_{T_G}\right) m_N, \label{eq:NME} \end{eqnarray} where $f_{T_q}$ values are determined by lattice QCD analysis: $f_{T_u} +f_{T_d} \simeq 0.056$ \cite{Ohki:2008ff} for up (u) and down (d) quarks and $\left|f_{T_s}\right| \leq 0.08$ \cite{Ohki:2008ff} for the strange (s) quark, and $f_{T_G}$ is determined using the trace anomaly condition, $\sum_{q = u,d,s} f_{T_q} + f_{T_G} = 1$ \cite{QCDanomaly}. Using the conservative value $f_{T_s} =0$, we obtain $f_N^2 \simeq 0.0706 m_N^2$ and the SI cross section is given by \begin{eqnarray} \sigma_{\rm SI} \simeq 4.47 \times 10^{-7} {\rm pb} \times y_{33}^2, \label{eq:SIcs2} \end{eqnarray} where we have used $m_{3} \gg m_N$. In the left panel of Fig.~\ref{fig:DM}, we plot $y_{33}$ as a function of $m_{3}$ (solid black curve) along which the observed DM relic density, $\Omega_{DM}h^2 = 0.120$, is reproduced. For $m_{3} \simeq m_h/2$, the XENON1T constraint, $\sigma_{\rm SI} \leq 1.0\times 10^{-10}$ pb \cite{Aprile:2018dbl}, leads to an upper bound of $y_{33} \leq 1.50 \times 10^{-2}$ from Eq.~(\ref{eq:SIcs2}).
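As a quick arithmetic check of this bound (a sketch using only the numbers quoted above):
\begin{verbatim}
import numpy as np

# Eq. (SIcs2): sigma_SI ~ 4.47e-7 pb * y33^2, so the XENON1T limit
# sigma_SI <= 1.0e-10 pb translates directly into a bound on y33
y33_max = np.sqrt(1.0e-10 / 4.47e-7)
print("y33 <= %.2e" % y33_max)   # ~1.50e-2, as quoted in the text
\end{verbatim}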
The gray shaded region is excluded by XENON1T, and the allowed region for the DM mass lies in the range of $57.10 \lesssim m_{3}[{\rm GeV}] \lesssim 61.23$. The next generation LUX-ZEPLIN (LZ) experiment will improve the cross section bound significantly, $\sigma_{\rm SI} \leq 2.8\times 10^{-12}$ pb \cite{Akerib:2018lyp}, which corresponds to $y_{33} \leq 2.51 \times 10^{-3}$. This search reach is depicted as the horizontal dashed line. We can see that the current allowed region will be fully covered by the LZ experiment. Both $m_3$ and $y_{33}$ are determined in terms of the model parameters $\beta$, $m_0$, $m_S$, and $m_D$. As shown in the left panel of Fig.~\ref{fig:DM}, $y_{33}$ is determined as a function of $m_{3}$ in order to reproduce the observed DM relic density. Hence, $m_D$ is determined as a function of $m_0$, $\beta$, and $m_3$. In the right panel of Fig.~\ref{fig:DM}, we plot $m_D$ as a function of $m_3$ for different choices of $m_0 [{\rm GeV}] = 55, 45$ and $35$ (solid curves from top to bottom) and fixed $\beta= \pi/3$. The allowed range of the DM mass of $57.10 \lesssim m_{3}[{\rm GeV}] \lesssim 61.23$ is indicated by the two vertical dotted lines, which bound the allowed mass range of $m_D$ for different $m_0$ values. The gray shaded region ($m_D < 690$ GeV) is excluded by the CMS search for a heavy charged lepton at the LHC \cite{CMS:2018cgi}. \section{Stability of the SM Higgs Potential} \label{sec:HiggsStab} Because of the large top Yukawa coupling, the SM Higgs quartic coupling turns negative around $\mu \simeq 10^{10}$ GeV \cite{Buttazzo:2013uya}. This implies that the electroweak vacuum of the SM is unstable, which is known as the Higgs potential instability problem. It may not be a serious problem for the SM because the electroweak vacuum is meta-stable with a lifetime much longer than the age of the universe. However, in the GUT scenario, the SM Higgs is embedded inside a GUT Higgs multiplet and the negative quartic coupling of the SM Higgs may imply that some of the GUT Higgs multiplets have negative quartic couplings which can make the GUT vacuum unstable. To avoid this problem, we impose the condition that the SM Higgs quartic coupling remains positive up to the PS SSB scale. Let us evaluate the RG evolution of the SM Higgs quartic coupling ($\lambda_H$), to which the new ${\bf 10}$-plet fermions also contribute, in addition to the SM particles. As discussed in Sec.~\ref{sec:GCUandPD}, the ${\bf 10}$-plets modify the RG running of the SM gauge couplings, which in turn modifies the RG running of $\lambda_H$. In addition, the doublets in the ${\bf 10}$-plet fermions contribute to the beta-function of $\lambda_H$ through their Yukawa couplings with the SM Higgs doublets in Eq.~(\ref{eq:YukawaDM}). The RG equation of $\lambda_H$ is expressed as \begin{eqnarray} \mu \frac{d \lambda_H}{d \mu} &=& \frac{1}{16\pi^2} \left(\beta_{SM} +\theta(\mu - m_{D}) \left(4 \lambda_H Y_H^2 - 4 Y_H^4 \right)\right), \label{eq:HiggsRGE} \end{eqnarray} where ${Y_H} = \left(\sqrt{2} m_0/v_h\right)$, and $\beta_{SM}$ denotes the beta function of the SM. The contribution of the doublet fermions (the second term on the right-hand side of Eq.~(\ref{eq:HiggsRGE})) is analogous to the top quark contribution, $\beta_{SM} \supset 12 y_t^2 \lambda_H - 12 y_t^4$, where $y_t$ is the top-quark Yukawa coupling. The presence of such a coupling is effectively equivalent to the SM with a larger $y_t$.
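To make the competition between these terms concrete, the following is a minimal numerical sketch of Eq.~(\ref{eq:HiggsRGE}) at one loop. It evolves the SM couplings with their standard one-loop beta functions and adds only the ${\bf 10}$-plet doublet term above $\mu = m_D$; the ${\bf 10}$-plet shifts to the gauge beta-functions themselves are omitted, so the stabilizing effect discussed in the text is understated here:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

m_D, m0, vh = 2.0e3, 55.0, 246.0        # GeV
YH = np.sqrt(2.0) * m0 / vh             # Y_H = sqrt(2) m0 / v_h

def rge(t, y):
    # One-loop SM running of (g', g2, g3, y_t, lambda_H), t = ln(mu/GeV)
    g1, g2, g3, yt, lam = y
    k = 1.0 / (16 * np.pi**2)
    dg1 = k * (41.0/6.0) * g1**3
    dg2 = -k * (19.0/6.0) * g2**3
    dg3 = -k * 7.0 * g3**3
    dyt = k * yt * (4.5*yt**2 - 8*g3**2 - 2.25*g2**2 - (17.0/12.0)*g1**2)
    dlam = k * (24*lam**2 - 6*yt**4
                + 0.375*(2*g2**4 + (g2**2 + g1**2)**2)
                + lam*(12*yt**2 - 9*g2**2 - 3*g1**2))
    if np.exp(t) > m_D:                 # theta(mu - m_D) threshold term
        dlam += k * (4*lam*YH**2 - 4*YH**4)
    return [dg1, dg2, dg3, dyt, dlam]

y0 = [0.36, 0.65, 1.17, 0.94, 0.129]    # rough MSbar values at mu = m_t
sol = solve_ivp(rge, [np.log(173.0), np.log(2e11)], y0, dense_output=True)
for mu in (1e6, 1e9, 2e11):
    print("lambda_H(%.0e GeV) = %+.3f" % (mu, sol.sol(np.log(mu))[4]))
\end{verbatim}
In this truncation, the $-4Y_H^4$ term simply deepens the instability, mimicking a larger top Yukawa coupling.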
Hence, the Yukawa coupling may make the situation worse and destabilize the Higgs potential at an energy scale even lower than $\mu \simeq 10^{10}$ GeV. However, the presence of the ${\bf 10}$-plet fermions also modifies the running of the SM gauge couplings, which generates a positive contribution to $\beta_{SM}$. See, for example, Ref.~\cite{Gogoladze:2010in}, where the authors have shown that the Higgs potential stability problem can be solved in the presence of TeV-scale new fermions. We now show that the instability problem can also be solved in our model in the presence of the ${\bf 10}$-plet fermions. The RG running of $\lambda_H$ is determined by three parameters: $m_{D}$, $m_{T}$ and ${m_0}$. In the following analysis, we approximate $Y_H$ to be a constant. For fixed values of $m_{D}$, $m_{T}$ and ${m_0}$, we numerically solve the RG equations. In Fig.~\ref{fig:HStab}, we show the RG running of $\lambda_H$ for $m_{D} = 2$ TeV, $m_{T} = 5 \times 10^4$ TeV, and $m_0 [{\rm GeV}]=35$, $55$, and $60$ (solid lines from top to bottom). The horizontal dotted line represents $\lambda_H = 0$. From the gauge coupling unification analysis in Sec.~\ref{sec:GCUandPD}, we have found $M_I \simeq 2\times 10^{11}$ GeV for $m_{D} = 2$ TeV and $m_{T} = 5 \times 10^4$ TeV. In order to keep $\lambda_H (\mu) >0$ for $\mu< M_I$, we have found an upper bound of $m_0 \lesssim 55$ GeV. We have checked that the running of $Y_H$ can be ignored to a good approximation for $m_0 \lesssim 55$ GeV or, equivalently, $Y_H \lesssim 0.32$. \begin{figure}[t] \begin{center} \includegraphics[scale =0.8]{higgsRGE1.eps} \end{center} \caption{ RG running of $\lambda_H$ for $\mu < M_I \simeq 2 \times 10^{11}$ GeV, for $m_{D} = 2$ TeV and $m_{T} = 5\times 10^4$ TeV. The solid lines from top to bottom correspond to $m_0 [{\rm GeV}]=35$, $55$, and $60$, respectively. The dotted line depicts $\lambda_H = 0$. } \label{fig:HStab} \end{figure} \section{Conclusion} \label{sec:conc} We have proposed a simple non-supersymmetric GUT model based on the gauge group $SO(10) \times U(1)_\psi$. The model includes three generations of fermions in ${\bf 16}$ ($+1$), ${\bf 10}$ ($-2$) and ${\bf 1}$ ($+4$) representations. In addition to the ${\bf 16}$-plets that contain the SM fermions plus RHNs, the ${\bf 10}$-plet and singlet fermions are introduced. In the presence of the new fermions, the model is free from all the gauge and mixed gauge-gravitational anomalies. With the new fermions and a suitable set of Higgs fields, gauge coupling unification is achieved in two-step breaking of $SO(10)$ to the SM. Namely, the SM gauge couplings are partially unified into the PS group at the intermediate scale of $M_{I} = 10^{12}-10^{11}$ GeV, with the PS group subsequently unified into the $SO(10)$ group at $M_{GUT} = 5\times 10^{15}-10^{16}$ GeV. Since the Majorana masses for the RHNs are generated through the PS symmetry breaking, successful gauge coupling unification leads to the natural scale for the seesaw mechanism. We have found a correlation between $M_{GUT}$ and $M_{I}$, namely $M_{I}$ increases as $M_{GUT}$ decreases. Hence, the proton lifetime is predicted to be shorter for a higher $M_{I}$ value, which can be tested by the Hyper-Kamiokande experiment in the future. The new ${\bf 10}$-plet and singlet fermions have Yukawa couplings with two $SO(10)$-singlet $U(1)_\psi$ Higgs fields, and the fermion masses are generated once the $U(1)_\psi$ symmetry is broken by the $U(1)_\psi$ Higgs field VEVs.
The $U(1)_\psi$ Higgs field $\Phi_A$, which has the Yukawa coupling with the ${\bf 10}$-plet fermions, is identified with the inflaton. We have shown that through its gauge and Yukawa interactions, the effective inflaton potential exhibits an approximate inflection point and successful inflection-point inflation is realized. The Hubble parameter during inflation is found to be much smaller than the PS symmetry breaking scale, $H_{inf} < M_I$, so that the cosmologically unwanted monopoles generated by the breaking of the GUT and the PS symmetries are diluted away. With a suitable choice of the model parameters, the reheating temperature after inflation can be high enough for successful thermal leptogenesis while low enough not to restore the PS gauge symmetry. With the Higgs field content of our model, a ${\bf Z}_2$ symmetry remains unbroken after the GUT symmetry breaking, and the lightest Majorana mass eigenstate from linear combinations of the ${\bf 10}$-plet and singlet fermions is stable and thus a viable DM candidate of our model. We focus on the case in which the DM particle is mostly composed of the $SO(10)$ singlet fermion and communicates with the SM particles through the Higgs-portal interactions. For this Higgs-portal fermion DM scenario, we have identified the model parameter region that reproduces the observed DM relic density while satisfying the current constraint from the direct DM detection experiments. The present allowed region will be fully covered by future direct detection experiments such as the LZ experiment. Finally, we have shown that in the presence of the new fermions, the SM Higgs potential is stabilized up to $M_I$. \section*{Acknowledgements} N.O. would like to thank the Particle Theory Group of the University of Delaware for hospitality during his visit. This work is supported in part by the United States Department of Energy Grants DE-SC0012447 (N.O.) and DE-SC0013880 (D.R. and Q.S.) and Bartol Research Grant BART-462114 (D.R.).
\section{Introduction} \label{sec:intro} \input{introduction.tex} \vspace{-5pt} \section{Problem Formulation} \vspace{-5pt} \label{sec:ProblemFormulation} \input{problemFormulation.tex} \section{STATISTICAL DETECTION} \label{sec:proposedMethod} \input{proposedMethod.tex} \section{Experiments and Results} \label{sec:experimentsAndResults} \input{experimentsAndResults.tex} \section{Conclusions} \label{sec:conclusions} \input{conclusions.tex} \newpage \bibliographystyle{IEEEtran} \subsection{Directional statistics features} Let us consider the steered response power (SRP) approach to compute the spatial features at each time-frame $n$ for each randomly placed mobile device separately. Let $\bld{x}[n,k]=\left[x_{1}[n,k] \dots x_{M}[n,k] \right]^T$ denote the multi-channel speech signal in the short time Fourier transform (STFT) domain for a microphone array. Let $\bld{a}[\theta,k]$ denote the steering vector corresponding to a source at a spatial direction $\theta$ for the frequency bin $k$ with respect to a local coordinate system centered at the array. Assuming free field propagation and a compact array, we have \begin{equation} \bld{a}[\theta,k]=\left[ 1~e^{\left(-\frac{j2\pi k \tau_{21}(\theta)}{K}\right)}\dots e^{\left(-\frac{j2\pi k \tau_{M1}(\theta)}{K}\right)} \right]^T, \end{equation} where $\{\tau_{21}(\theta),\dots,\tau_{M1}(\theta) \}$ denote the TDOA values at the $M-1$ microphones with respect to the first microphone. The SRP method \cite{dibiase2000high} computes the spatial response function as \begin{equation} s[n,\theta]=\sum\limits_{k=1}^{K} \left|\bld{a}[\theta,k]^H \bld{x}_f[n,k] \right|^2, \end{equation} where $\bld{x}_f[n,k]=\frac{\bld{x}[n,k]}{|\bld{x}[n,k]|}$ is the signal phase vector obtained after PHAT filtering. \par The response function $s[n,\theta]$ is evaluated at $L$ discrete angular positions $\bs{\Theta}=\{\theta_1,\dots,\theta_L\}$ with respect to the array. Since the source can be assumed to be relatively stationary on the time scale of the STFT/SRP computation, we smooth the discrete SRP function across time using recursive averaging, \begin{equation} \tilde{s}[n,\theta_l]=\alpha \tilde{s}[n-1,\theta_l] + (1-\alpha) s[n,\theta_l]. \end{equation} The smoothed SRP function is then normalized to give the estimated directional statistics, which are used as features for the mixture density modeling. \par Let $\bld{s}[n] \triangleq \frac{1}{C} \left[\tilde{s}[n,\theta_1]\dots \tilde{s}[n,\theta_L] \right]^T$, where $C=\sum\limits_{l=1}^{L} \tilde{s}[n,\theta_l]$ is the normalization constant. The vector $\bld{s}[n]$ thus has nonnegative entries and sums to unity; hence, we can interpret $\bld{s}[n]$ as a PMF over the set $\bs{\Theta}$ at each time-frame $n$. \par In the present formulation, we compute the directional statistics independently for each mobile device, and obtain $P$ directional statistics features $\{\bld{s}_p[n]\}$, one per mobile device, at each time frame. However, due to reverberation in the enclosure and other recording noise, $\{\bld{s}_p[n]\}$ contain estimation errors, and hence a further statistical formulation is required to effectively combine the information from several recording devices.
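For concreteness, the feature extraction described above can be sketched in a few lines of NumPy. The array shapes and the TDOA table are illustrative assumptions; the column of \texttt{tdoa} corresponding to the reference microphone is zero:
\begin{verbatim}
import numpy as np

def srp_phat_stats(X, tdoa, alpha=0.9):
    """Per-frame directional PMFs s[n] for one device (a sketch).

    X    : (M, N, K) complex STFT, M mics, N frames, K frequency bins
    tdoa : (L, M) TDOAs (in samples) for L candidate directions,
           with zeros in the reference-microphone column
    """
    M, N, K = X.shape
    Xf = X / (np.abs(X) + 1e-12)            # PHAT whitening
    k = np.arange(K)
    # Steering phases a[theta_l, k] for all L directions: (L, M, K)
    A = np.exp(-2j * np.pi * k[None, None, :] * tdoa[:, :, None] / K)
    # SRP response s[n, theta_l] = sum_k |a(theta_l,k)^H x_f[n,k]|^2
    inner = np.einsum('lmk,mnk->lnk', A.conj(), Xf)
    s = (np.abs(inner) ** 2).sum(axis=2).T  # (N, L)
    # Recursive smoothing across frames, then per-frame normalization
    for n in range(1, N):
        s[n] = alpha * s[n - 1] + (1 - alpha) * s[n]
    return s / s.sum(axis=1, keepdims=True)
\end{verbatim}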
\subsection{Latent variable joint modeling} \begin{figure}[h] \centering \begin{minipage}[b]{0.48\linewidth} \centering \begin{tikzpicture}[scale=0.8,every node/.style={scale=0.8}] \draw[rounded corners=5pt] (3.8,-0.5) rectangle ++(2.5,4.0); \draw[rounded corners=5pt] (4.0,0) rectangle ++(2.0,2.0); \draw (5,4.1) node (priorParameters){$\{\pi_s\}$}; \node[draw,circle] at (5,2.8) (selectionVector) {$\bld{z}_n$}; \node[draw,circle,fill=gray] at (5,0.8) (observations) {$x_{m,p}[t]$}; \draw[->] (priorParameters) -- (selectionVector); \draw[->] (selectionVector) -- (observations); \draw (6.0,-0.3) node (n){$N$}; \draw (5.75,0.2) node (n){\scriptsize $MP$}; \end{tikzpicture} \centerline{(a)} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \begin{tikzpicture}[scale=0.8,every node/.style={scale=0.8}] \draw[rounded corners=5pt] (3.8,-0.5) rectangle ++(2.5,4.0); \draw[rounded corners=5pt] (4.0,0) rectangle ++(2.0,2.0); \draw (5,4.1) node (priorParameters){$\{\pi_s\}$}; \node[draw,circle] at (5,2.8) (selectionVector) {$\bld{z}_n$}; \node[draw,circle,fill=gray] at (5,0.8) (observations) {$\bld{s}_p[n]$}; \draw[->] (priorParameters) -- (selectionVector); \draw[->] (selectionVector) -- (observations); \draw (6.0,-0.3) node (n){$N$}; \draw (5.75,0.2) node (n){\scriptsize $P$}; \end{tikzpicture} \centerline{(b)} \end{minipage} \vspace{-5pt} \caption{Generative model of (a) the microphone observations and (b) the directional statistics.}\label{fig:graphicalModel} \end{figure} We model the set of distributions $\{\bld{s}_p[n]\}, 0\leq n \leq N-1$ jointly using a mixture model. A graphical model describing the generation of observations is shown in Fig. \ref{fig:graphicalModel}. The latent variable selection vector $\bld{z}_n$ selects a directional position (hence a source or a speaker) from a set of $S$ sources based on a categorical (1-of-$S$) distribution with parameters $\bs{\pi}$, i.e., $\mathbb{P}(\bld{z}_n|\bs{\pi})=\prod\limits_{s=1}^{S} \pi_s^{z_{ns}}$. Now the overall generative model of the statistical observations can be stated as follows: the signal from a selected direction/speaker results in the observed signals $\{x_{m,p}(t)\}$ at the microphones of the devices, or equivalently the derived independent directional statistics features $\{\bld{s}_p[n]\}$ at the $P$ devices, according to \begin{equation} \mathbb{P}(\{\bld{s}_p[n]\} |z_{ns}=1,\bs{\Delta})=\prod\limits_{p=1}^{P} \mathbb{P}(\bld{s}_p[n] |\bs{\delta}_{sp}), \end{equation} where $\bs{\Delta}=\{\bs{\delta}_{sp},\forall s,p\}$ is the set of parameters of all the distributions. A Dirichlet distribution \cite{bishop2006pattern} is used to model the directional statistics, to suit the discrete nature of the directional data and to simplify the EM derivation. Hence, \begin{equation} \mathbb{P}\left(\bld{s}_p[n] \big| \bs{\delta}_{sp}\right)=\mathcal{D}( \bld{s}_p[n];\bs{\delta}_{sp}), \end{equation} where the standard Dirichlet distribution has the form, \begin{equation} \mathcal{D}( \bld{s}_p[n];\bs{\delta}_{sp})= \frac{\Gamma\left(\sum\limits_{l=1}^{L} \delta_{sp}[l]\right)}{\prod\limits_{l=1}^{L} \Gamma \left( \delta_{sp}[l] \right) }\prod\limits_{l=1}^{L} \bld{s}_p[n,l]^{\delta_{sp}[l]-1}.
\end{equation} We assume the directional data to be independent across time, which results in the model \begin{equation} \mathbb{P}(\bld{S},\bld{Z}|\bs{\Delta},\bs{\pi})=\prod\limits_{n=0}^{N-1}\prod\limits_{s=1}^{S}\left[\pi_s \prod\limits_{p=1}^{P} \mathcal{D}(\bld{s}_p[n] |\bs{\delta}_{sp}) \right]^{z_{ns}}. \end{equation} The parameters $\bs{\Delta}$ and $\bs{\pi}$ are estimated by maximizing the total likelihood function using the expectation-maximization (EM) algorithm. At iteration-$i$, the EM algorithm involves computation of (i) the posterior distribution $\mathbb{P}\left(\bld{Z}|\bld{S},\bs{\Delta}^{(i)},\bs{\pi}^{(i)} \right)$, and (ii) maximization of the expected joint likelihood objective $Q(\bs{\Delta},\bs{\pi})=\mathbb{E} \{ \log \mathbb{P}(\bld{S},\bld{Z}|\bs{\Delta},\bs{\pi}) \}$. \par It can be shown that $\mathbb{P}\left(\bld{Z}|\bld{S},\bs{\Delta}^{(i)},\bs{\pi}^{(i)} \right)$ factorizes across time frames into independent categorical distributions, with \begin{equation} \mathbb{P} \left(z_{ns}=1|\{\bld{s}_p[n]\},\bs{\Delta}^{(i)},\bs{\pi}^{(i)} \right) = \frac{\pi_s^{(i)} \prod\limits_{p=1}^{P} \mathcal{D}(\bld{s}_p[n];\bs{\delta}_{sp}^{(i)}) }{\sum\limits_{s=1}^{S}\pi_s^{(i)} \prod\limits_{p=1}^{P} \mathcal{D}(\bld{s}_p[n];\bs{\delta}_{sp}^{(i)}) } \end{equation} and $\mathbb{E}\{z_{ns}\}\triangleq \gamma_{ns}^{(i+1)} = \mathbb{P}(z_{ns}=1 \big|\{\bld{s}_p[n]\},\bs{\Delta}^{(i)},\bs{\pi}^{(i)})$. \par In the maximization step, the function $Q(\bs{\Delta},\bs{\pi})$ is maximized: \begin{multline}\label{eqn:qFunction} Q(\bs{\Delta},\bs{\pi})=\sum\limits_{n=0}^{N-1} \sum\limits_{s=1}^{S} \gamma_{ns}^{(i)} \log \pi_s+\\ \sum\limits_{n=0}^{N-1} \sum\limits_{s=1}^{S} \gamma_{ns}^{(i)} \sum\limits_{p=1}^{P} \log \mathcal{D}(\bld{s}_p[n];\bs{\delta}_{sp}). \end{multline} Maximization of eqn. \eqref{eqn:qFunction} with respect to $\pi_s$ subject to the constraint $\sum\limits_{s=1}^{S} \pi_s=1$ results in the estimate, \begin{equation} {\pi}_s^{(i+1)}=\frac{N_s}{N},\mbox{~~where~~}N_s=\sum\limits_{n=0}^{N-1} \gamma_{ns}^{(i+1)}. \end{equation} Maximization of \eqref{eqn:qFunction} with respect to $\bs{\delta}_{sp}$ requires solving the problem: \begin{equation} {\bs{\delta}}_{sp}^{(i+1)}=\underset{\delta_{sp}}{\arg\max} \sum\limits_{n=0}^{N-1} \gamma_{ns}^{(i+1)} \log \mathcal{D}(\bld{s}_p[n] ;\bs{\delta}_{sp}). \end{equation} Substituting for $\mathcal{D}(\bld{s}_p[n] ;\bs{\delta}_{sp})$, we get the optimization problem as, \begin{multline} {\bs{\delta}}_{sp}^{(i+1)}= \underset{\bs{\delta}_{sp}}{\arg\max} \sum\limits_{n=0}^{N-1} \gamma_{ns}^{(i+1)} \left[ \log \Gamma\left(\sum\limits_{l=1}^{L} \delta_{sp}[l]\right) - \right.\\\left. \sum\limits_{l=1}^{L} \log \Gamma(\delta_{sp}[l]) + \sum\limits_{l=1}^{L} \left(\delta_{sp}[l]-1 \right) \log \bld{s}_{p}[n,l] \right]. \end{multline} A gradient-descent based algorithm is used to solve for $\{{\bs{\delta}}_{sp},~\forall~s,p\}$, as shown in \cite{minka2000estimating}. \subsection{Diarization} At convergence of the EM algorithm, the posterior parameter $\gamma_{ns}^{*}$ denotes the probability of the $s^{th}$ source being active at the $n^{th}$ time frame. The diarization information is obtained as the source label $s$ at each time frame $n$ using the max-rule over $s$, \begin{equation} \hat{s}[n]=\underset{s}{\arg\max}~\gamma_{ns}^{*}. \end{equation}
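A compact NumPy/SciPy sketch of the EM recursion above is given below; plain gradient ascent with a fixed step size stands in for the solver of \cite{minka2000estimating}, and a small positive floor keeps $\bs{\delta}_{sp}$ in the feasible region:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln, digamma

def log_dirichlet(S, delta):
    """log D(s_n; delta) for each row s_n of S (N, L); S entries > 0."""
    return (gammaln(delta.sum()) - gammaln(delta).sum()
            + ((delta - 1.0) * np.log(S)).sum(axis=1))

def em_dirichlet_mixture(S_list, n_src, n_iter=50, lr=1e-2, seed=0):
    """S_list: list of P arrays, each (N, L), directional PMFs per device."""
    rng = np.random.default_rng(seed)
    P, (N, L) = len(S_list), S_list[0].shape
    pi = np.full(n_src, 1.0 / n_src)
    delta = rng.uniform(0.5, 2.0, size=(n_src, P, L))
    for _ in range(n_iter):
        # E-step: log gamma_ns = log pi_s + sum_p log D(s_p[n]; delta_sp)
        logg = np.log(pi)[None, :] + sum(
            np.stack([log_dirichlet(S_list[p], delta[s, p])
                      for s in range(n_src)], axis=1)
            for p in range(P))
        logg -= logg.max(axis=1, keepdims=True)
        gamma = np.exp(logg)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: closed form for pi, gradient ascent for delta
        pi = gamma.mean(axis=0)
        for s in range(n_src):
            Ns = gamma[:, s].sum()
            for p in range(P):
                d = delta[s, p]
                grad = (Ns * (digamma(d.sum()) - digamma(d))
                        + (gamma[:, s][:, None]
                           * np.log(S_list[p])).sum(axis=0))
                delta[s, p] = np.maximum(d + lr * grad, 1e-3)
    return pi, delta, gamma.argmax(axis=1)   # labels \hat{s}[n]
\end{verbatim}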
\section{INTRODUCTION} The `Diffuse Ionized Medium' (DIM) is recognized as an important component of the ISM in galaxies. This gas, first discovered as the `Reynolds Layer' in our Galaxy (see, e.g. Reynolds 1990 for a review), seems to be ubiquitous in late-type spiral (e.g. Hoopes et al. 1996; Wang, Heckman \& Lehnert 1997--hereafter WHL) and irregular galaxies (e.g. Martin 1997). Indeed, the universal existence of the DIM has been inferred from integrated emission-line ratios of a large sample of normal late-type galaxies (Lehnert \& Heckman 1994). The observed properties of the DIM are characterized by relatively strong low ionization forbidden lines compared to normal HII regions, a low surface brightness, a rough spatial correlation with HII regions, and a significant contribution ( $\sim$20\%--40\%) to the global H$\alpha$ luminosity. The observations raise many interesting questions about the physical and dynamical state of the DIM that are still to be answered. The energy required to power the DIM suggests that the gas either soaks up nearly 100\% of the mechanical energy supplied by supernovae and stellar winds, or the topology of the interstellar medium must allow roughly 1/3 of the ionizing radiation produced by massive stars to escape HII regions and propagate into the disk. Although the diffuse nature of the DIM suggests that it maintains pressure balance with the rest of the ISM, little observational evidence has been collected to support this idea. Only in the Reynolds layer has the electron density been derived through observations of pulsar dispersion measures (e.g. Reynolds 1993) and seems consistent with the typical ISM thermal pressure of $\sim$3000~K~cm$^{-3}$ (Jenkins et al. 1983). Furthermore, it is still unknown how or if this pressure is regulated by the hotter coronal-phase gas created by supernovae. While these questions wait to be answered, a study of the general properties of the DIM in galaxies with widely differing star formation rates (SFRs) per unit area or per unit volume might provide us with more clues in this endeavor. The simple reason behind this is that star formation has a significant impact on the ISM. The energetics and dynamics of the DIM must therefore be strongly influenced (or even regulated) by the feedback from star formation (e.g. Elmegreen \& Parravano 1994). In starbursts, for example, the intense star formation can provide feedback to the ISM that might be significantly different {\it qualitatively} from the case in quiescent galaxies. Dynamically, the collective effect of supernovae exploding in a hot rarefied medium created by previous supernovae may minimize radiative losses and thus provide the energy to drive a galactic scale outflow in starbursts (e.g. Heckman, Armus \& Miley 1990). Thus, we would expect a more significant kinematic disturbance in the DIM in starbursts than in normal disks and a greater role of shock-heating in the energetics throughout the ISM. That is, the hot coronal-phase gas in starbursts should be more pervasive than in normal disks (as parameterized by the {\it porosity}, McKee \& Ostriker 1977) and would therefore have the potential to regulate the pressure of the ISM over a relatively larger volume. Since the DIM traces the heating and ionization of gas occupying a substantial fraction of the volume of the ISM and comprising much of its mass, the above issues are closely tied to our understanding of the DIM. 
We note that among the galaxies that have been searched for a DIM, most are normal spirals and irregulars. However, observational data suggest that there is a similar DIM component in starburst galaxies. For instance, in the nearby starburst NGC 253, Hoopes et al. (1996) found a faint H$\alpha$-emitting gas surrounding bright HII regions that is similar to the DIM in other quiescent spirals. More generally, the spectra of extra-nuclear regions of starbursts (e.g. Lehnert \& Heckman 1995) show that the relative strengths of the low-ionization lines are high---very similar to what has been observed in the DIM in normal galaxies. It is therefore worthwhile to systematically study the faint emission-line gas in starbursts and compare this gas to the DIM in normal galaxies. \section{SAMPLE SELECTION AND DATA DESCRIPTION} In this paper we will explore the emission-line properties of galaxies with a wide range of star formation rates (SFRs) using long-slit spectroscopic data. Our sample includes both normal, quiescent late-type spirals and IR-selected starbursts. Our data on normal galaxies come from a survey of the DIM in nearby face-on spirals. This subset involves seven galaxies selected on the basis of Hubble type (Sb and later), proximity (closer than 10 Mpc), large angular size ($>$ 10 arcmin), and relatively face-on orientation (inclination $<$ 65 degrees). Part of the data has been presented in Wang et al. (1997), and the rest can be found in Wang (1998). The reader can refer to these references for details of the observations and data reduction. Complete observational information is summarized in Table 1. The starbursts are a sample of infrared-bright (S$_{60\micron}$ $>$ 5.4 Jy), edge-on (a/b $\gtrsim$ 2), infrared-warm (S$_{60\micron}$/S$_{100\micron}$ $>$ 0.4) galaxies compiled by Lehnert (1993, dissertation, Johns Hopkins University) and also described by Lehnert and Heckman (1995). Analysis of both samples of data can help us understand the physical state and dynamics of the ionized gas over a wide range of star formation rates. We note that due to different sample selection criteria, the starbursts are generally much more distant and more inclined than our normal spirals. Otherwise, the major difference between the starbursts and the normal spirals is just the much larger star-formation rates per unit area ($\Sigma_{SFR}$) in the former (Lehnert \& Heckman 1996b). In order to compare our normal galaxies and starbursts, we used three quantities that have been measured with long-slit spectroscopy at many locations within each galaxy for both groups of galaxies. These are the line ratios represented by [SII]$\lambda\lambda$6716,6731/H$\alpha$, the H$\alpha$ surface brightnesses, and the H$\alpha$ or [NII]$\lambda$6584 linewidths. The [SII] doublet was selected to represent low ionization lines rather than [NII] because Nitrogen has a secondary nucleosynthetic origin, while Sulfur has a primary one. That is, the S/O ratio is independent of metallicity, while N/O $\propto$ metallicity (cf. Vila-Costas \& Edmunds 1993). As shown by WHL this results in systematic differences in the [NII]/H$\alpha$ ratio in the DIM in galaxies of different metallicity - differences that are not present in the [SII]/H$\alpha$ ratio. For our normal galaxies, we have multiple slit positions that spectroscopically sample representative regions in the disks of these galaxies (WHL; Wang 1998). 
We find that the emission-line properties in these galaxies vary spatially in a smooth way from the centers of HII regions out into the surrounding DIM. Data for the starbursts are in the form of long-slit spectra centered on the nuclei and oriented along both the minor and major axes. The spectra have been extracted using 3 pixel spacing starting from the nuclei and moving outward along the slit. We can then examine the variations in starbursts from bright starburst cores to fainter surrounding nebulae, by analogy with our analysis of the normal galaxies. The starbursts are generally much more distant than our normal spirals. Therefore we sample the gas in starbursts on a much larger scale, as each spatial resolution element would encompass a large number of HII regions. The [NII]$\lambda$6584 linewidth is used to represent kinematics in starbursts, while either H$\alpha$ linewidth (if EQW(H$\alpha$) $>$ 3$\AA$) or [NII] linewidth (if EQW(H$\alpha$) $<$ 3$\AA$) is used (see WHL) for our normal galaxy sample. This difference in the adopted linewidth is unlikely to affect our analysis significantly since the H$\alpha$ and [NII] linewidths correlate well with one another for the starburst sample (Lehnert \& Heckman 1995). The number of galaxies we can use in the starburst sample is limited by the availability of photometric data and other relevant information. We use 32 galaxies from this sample that have been observed spectroscopically under nearly photometric conditions and hence can provide us with line ratios, H$\alpha$ surface brightnesses, and linewidths. We have excluded from this analysis the Circinus galaxy, whose H$\alpha$ image in Lehnert \& Heckman (1995) is dominated by the central AGN, and NGC 5253, which is a dwarf galaxy unlike the members of our normal galaxy sample. In the following analysis we will attempt to normalize the star-formation rate in a given location of a galaxy by dividing the measured H$\alpha$ surface brightness at that location by the average H$\alpha$ surface brightness in the galaxy. To do so, we define a galaxy effective H$\alpha$ surface brightness $\Sigma_e$ as the ratio of half of the total H$\alpha$ flux to the solid angle subtended by the area within the H$\alpha$ half-light radius ($\pi r_e^2$). This is effectively the average surface brightness within the H$\alpha$ half-light radius. In order to do this scaling we further selected 19 objects out of the subsample of 32 galaxies for which Lehnert and Heckman (1995) have measured the H$\alpha$ half-light radii and total H$\alpha$ fluxes. The measured total H$\alpha$+[NII] flux is corrected to the H$\alpha$ flux based on the measured [NII]/H$\alpha$ ratio within $2 r_e$. Thus, $\Sigma_e$ can be estimated for these 19 starburst galaxies and used to scale the observed values of $\Sigma_{H\alpha}$ as measured at different positions in each galaxy. All the spectral data in the normal spiral subsample have been properly calibrated for absolute surface brightness. In addition, we have measured $r_e$ from the H$\alpha$ images (WHL and Wang et al. 1998 in preparation). Therefore all data from this subsample can be utilized in this analysis. We have not attempted to correct the observed H$\alpha$ surface brightnesses for the effects of internal reddening. This is likely to be significant. For example, Kennicutt (1983) estimates an H$\alpha$ extinction of about 1 magnitude for typical giant HII regions in normal galaxies, and Armus et al (1989) find a typical value of about 2 magnitudes for IR-selected galaxies.
Note that while an extinction-correction would increase the absolute values of the H$\alpha$ surface brightnesses (and implied star-formation rates per unit area) it will only affect the normalized surface-brightnesses defined above if the extinction varies spatially. We return to these issues later in the paper. \section{RESULTS} The relative strengths of the low-ionization emission-lines in starbursts correlate well with H$\alpha$ surface brightness (Figure 1). Data for HII regions and the DIM in our normal galaxies are also plotted for comparison. Both sets of data show higher strengths of low-ionization lines at lower surface brightness, and they all suggest a strong continuity in physical state between the high surface brightness and low surface brightness gas. In fact, the correlations can be roughly described as a power-law relation between [SII]/H$\alpha$ and $\Sigma_{H\alpha}$ with similar slopes for both samples. However, there is a noticeable offset between the two groups of galaxies: at a given line-ratio, the gas in the starbursts has an average surface brightness that is about an order-of-magnitude higher than the gas in the normal galaxies. This is not surprising as the starbursts have much higher H$\alpha$ surface brightnesses in general, and correspondingly higher $\Sigma_{SFR}$. We will further examine this systematic difference in surface brightness in the following paragraphs. We reject some DIM data points near the nucleus of M 81 due to heating processes other than pure photoionization by young stars in that region (cf. Devereux, Jacoby, \& Ciardullo 1995, 1996). While the line-ratios correlate strongly with surface-brightness, the starburst sample shows no correlation between linewidth and $\Sigma_{H\alpha}$, and only a weak correlation between linewidth and [SII]/H$\alpha$. Linewidths of the gas in starbursts are typically a few hundred kilometers per second, about a factor of 10 larger than those in normal galaxies. The data for the normal galaxies bear little resemblance to those of the starbursts with respect to these kinematic relations. It is likely that kinematics of the emission-line gas are different in the two types of galaxies for two major reasons. For the distant highly-inclined starbursts, the spectra sample a large number of emission line nebulae across starburst disks even within a small aperture, and the lines may be broadened by relative motions among the nebulae. Supernova-driven superwinds are also responsible for much of the linewidth broadening (Lehnert \& Heckman 1996a), especially along the minor axis. Thus we conclude that linewidths do not track well with other emission-line parameters for the combined galaxy sample. In starbursts the continuity between high surface brightness regions (starburst nuclei) and low surface brightness (outer) regions, together with the lack of correlation between kinematics and emission line intensity ratios, suggests that the gas is mainly photoionized instead of mechanically heated. While we have drawn the same conclusion for the gas in the normal galaxies (WHL), Figure 1 shows a more general trend that is applicable to the gas in starbursts as well. It is not surprising that most of the line emission results from photoionization since the ionizing radiation energy from OB stars is about an order of magnitude higher than the total kinetic energy released by supernovae and stellar winds (cf. Leitherer \& Heckman 1995).
To further explore the possibility of photoionization, we need to understand the H$\alpha$ surface brightness offset between the two groups of galaxies in Figure 1, which is presumably physically related to the much higher star formation rates per unit area in the IR-selected starbursts. To test this idea, we adopt the effective surface brightness $\Sigma_e$ defined in section 2 above to scale the observed values of $\Sigma_{H\alpha}$ and then plot the line-ratio [SII]/H$\alpha$ versus relative (dimensionless) surface brightness $\Sigma_{H\alpha}/\Sigma_e$ (Figure 2). We have excluded the starburst data beyond $2 r_e$ because those data are relatively noisy and are more likely affected by superwind-driven shock-heating processes in the galaxy halos (Lehnert \& Heckman 1996a). Figure 2 shows a remarkably universal pattern of line-ratio variation as a function of {\it normalized} surface brightness. The scaling by $\Sigma_e$ has successfully eliminated the systematic difference in surface brightness at a given line-ratio between the two groups of galaxies. Figure 2 demonstrates the interesting similarities in emission-line properties not only between HII regions in normal spirals and in starburst nuclei, but also between the DIM in normal galaxies and the relatively faint emission-line gas surrounding starburst nuclei. The universal continuity between high surface brightness and low surface brightness gas suggests that these emission-line properties all vary with a single parameter and therefore provides further support to the idea of photoionization as the dominant mechanism. Because of the similarity in emission-line properties, we propose that the fainter emission-line gas in starbursts and the DIM in normal galaxies have the same physical nature. While it remains to be confirmed observationally that the low surface brightness gas in the starbursts is generically diffuse rather than the sum of many faint, discrete HII regions, our conjecture is supported by H$\alpha$ imaging results of the nearest starburst galaxies like M 82, NGC 253, NGC 5253, and NGC 1569 (cf. Lehnert \& Heckman 1995; Marlowe et al. 1995; Martin 1997). In addition, we (Wang 1998; Wang et al. 1998) have analyzed our H$\alpha$ images of the normal galaxies and find that there is no {\it absolute} H$\alpha$ surface brightness limit that cleanly separates the DIM from bright HII regions. Instead, we show that defining the DIM in terms of a normalized surface-brightness analogous to that described above leads to a natural segregation that is independent of a galaxy's mean surface brightness in H$\alpha$. This implies that even in a starburst galaxy a DIM component would exist, but this diffuse gas will have a characteristically high absolute surface brightness. Since the mean $\Sigma_{H\alpha}$ seems to be a reasonable dividing point between the DIM and HII regions, we suggest that for the sample galaxies discussed in this paper, a crude surface brightness limit is the $\Sigma_{H\alpha}$ averaged within the half-light radius (=$\Sigma_e$). Therefore $\Sigma_{H\alpha} / \Sigma_e = 1.0$ could be used to isolate the `DIM' gas in Figure 2. For simplicity, in the following discussion we will tentatively adopt the same acronym DIM for the faint gas in the starbursts. We proceed to use a generic photoionization model to explain the correlation in Figure 2 and then address other implications of the data.
\section{DISCUSSION} \subsection{Photoionization of the DIM} The observed inverse correlation between the relative strength of the low-ionization lines and the H$\alpha$ surface brightness could have a simple physical explanation if the emitting gas clouds all have roughly the same density. In this case, the higher the value of the local intensity of the ionizing radiation field, the higher the value of the ionization parameter U (defined to be the ratio of the densities of ionizing photons and electrons within a photoionized gas cloud). Simple ionization equilibrium arguments show that U determines the ionization state of the gas, while recombination means that the H$\alpha$ surface brightness will be proportional to the intensity of the ionizing radiation field. Thus, the proportionality between $U$ and the H$\alpha$ surface brightness will naturally produce enhanced relative intensities of low-ionization lines in the faint gas (cf. Domg\"{o}rgen \& Mathis 1994). The {\it generalized} relation between line-ratio and $\Sigma_{H\alpha}/\Sigma_e$ for the DIM in Figure 2 could then be explained {\it if} there is a direct proportionality between the average thermal pressure in the diffuse interstellar medium and the average star-formation rate per unit area in the galaxy. That is, the ratio of the H$\alpha$ surface brightness at a particular location compared to the mean value in the galaxy would then be proportional to the local value of the intensity of the ionizing radiation field divided by a quantity that is proportional to the thermal pressure and hence the density of the photoionized cloud (T $\sim 10^4$ K). Thus, $\Sigma_{H\alpha}/\Sigma_e$ $\propto$ $U$. Later in this section, we will briefly discuss some of the physics that might lead to such a proportionality. Here we simply remark that there is good empirical evidence that this proportionality is roughly obeyed when extreme starbursts like M 82 are compared to the disks of normal spirals like the Milky Way. In the M 82 starburst, the thermal gas pressure is $P/k$ $\sim$10$^7$ K cm$^{-3}$ (Heckman, Armus, \& Miley 1990) and the star-formation rate per unit area is $\Sigma_{SFR}$ $\sim$ 30 M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$ for a Salpeter IMF extending from 0.1 to 100 M$_{\odot}$ (cf. Kennicutt 1998). Heckman, Armus, \& Miley (1990) and Lehnert \& Heckman (1996b) show these are typical values for both parameters in extreme starbursts. In comparison, in the local Milky Way disk the thermal gas pressure is $P/k$ $\sim$ 10$^{3.5}$ K cm$^{-3}$ (cf. Jenkins et al 1983; Reynolds 1993), and $\Sigma_{SFR}$ $\sim$ 4 $\times 10^{-3}$ M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$ (McKee \& Williams 1997 adjusted to our adopted IMF). Thus, $\Sigma_{SFR}$ is roughly 7000 times greater in M 82 and the pressure is roughly 3000 times greater. This agrees with Lord et al (1996) who estimate that both the thermal pressure and FUV intensity in M 82 are three-to-four orders of magnitude higher than in the local ISM. Let us for the moment then adopt the conjecture that $P \propto \Sigma_{SFR}$, and derive a relation between $\Sigma_{H\alpha}$, $U$, and $n_{e}$ utilizing photoionization models. Suppose the gas in the DIM is illuminated by an isotropic ionizing radiation field. Then the one-sided incident ionizing flux $\Phi_{Lyc}$ is related to the observed area-averaged H$\alpha$ surface brightness of the cloud by \begin{equation} \Phi_{Lyc} = 4 \pi \frac{\Sigma_{H\alpha}}{h\nu} \frac{1}{f_{H\alpha}} \frac{A_{proj}}{A_{tot}} e_{H\alpha} \end{equation} (Vogel et al. 
1995) where $e_{H\alpha}$ is the ratio of the intrinsic (extinction-corrected) and observed H$\alpha$ surface brightness and $f_{H\alpha}$ is the fraction of recombinations which produce H$\alpha$ photons (=0.46 for T = 10$^4$ K and Case B recombination). The ratio of observed area to total area $A_{proj} / A_{tot}$ is determined by the nebular geometry. It is 1/2 for a slab and 1/4 for a sphere. We use 1/3 to represent an average case. According to the definition of U, we can express $U = 4 \Phi_{Lyc} /n_e c$, and therefore \begin{equation} \Sigma_{H\alpha} = 5.9\times10^{-14} \left(\frac{n_e}{1\ {\rm cm^{-3}}}\right)\ U\ e_{H\alpha}^{-1} \ \ \ \ \ {\rm ergs\ s^{-1}\ cm^{-2}\ arcsec^{-2}} \end{equation} To compare this to our data, we need to understand how $n_e$ in the DIM is related to the mean star-formation-rate per unit area, as measured by $\Sigma_e$. Since photoionized gas is generically in thermal equlibrium at a temperature of roughly 10$^4$ K (e.g. Osterbrock 1989), relating $n_e$ to $\Sigma_e$ is equivalent to determining the constant of proportionality in the relation $P \propto \Sigma_{SFR}$. To do this, we will adopt a purely empirical approach for the moment and insist that this constant agree with values for $P$ and $\Sigma_{SFR}$ in the ISM of the Milky Way. Later we will explore the possible physical basis of this. In the local disk of the Milky Way, $\Sigma_{SFR}$ $\sim$ 4 $\times 10^{-3}$ M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$ implies an average intrinsic H$\alpha$ surface brightness for the disk of 1.2 $\times$ 10$^{-16}$ ergs s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ (where we have assumed continuous star-formation with a Salpeter IMF extending from 0.1 to 100 $M_{\odot}$ - Leitherer \& Heckman 1995). A thermal pressure of $P/k$ $\sim$ 10$^{3.5}$ K cm$^{-3}$ implies n$_e$ $\sim$ 0.16 cm$^{-3}$ in the photoionized gas. Thus, the predicted relation between $\Sigma_e$ and $n_e$ based on our own Galaxy is: \begin{equation} \Sigma_e = 7.4\times10^{-16} \left(\frac{n_e}{1\ {\rm cm^{-3}}}\right) e_{H\alpha}^{-1} \ \ \ \ \ {\rm ergs\ s^{-1}\ cm^{-2}\ arcsec^{-2}} \end{equation} Photoionization models of the DIM (Sokolowski 1993) are able to reproduce the observed emission-line ratios provided that the cosmically-abundant, refractory elements (i.e. Fe and Si) are largely locked-up in dust grains, as in the case of diffuse clouds in our own Galaxy (cf. Savage \& Sembach 1996 and references therein). The models also better match the data if the radiation field incident on the DIM has been hardened due to radiative transfer en route to the DIM (e.g. there is an optical depth of order unity at the Lyman edge between the DIM and the O stars). Adopting these `depleted and hardened' models, we then estimate an empirical relation \begin{equation} \frac {[SII]\lambda \lambda 6716,6731} {H\alpha} = 1.1\times10^{-2}\ U^{-0.58} \end{equation} appropriate for the range of the observed DIM line ratios ([SII]/H$\alpha$ $\approx$ 0.3 -- 1.5). This enables us to relate [SII]/H$\alpha$ approximately to $\Sigma_{H\alpha}/\Sigma_e$. The ratio of equations [2] and [3] implies that $\Sigma_{H\alpha}/\Sigma_e$ = 80 U. Thus, using Equation [4] above, we obtain: \begin{equation} \frac {[SII]\lambda \lambda 6716,6731} {H\alpha} = 0.14\ \left(\frac {\Sigma_{ H\alpha}} {\Sigma_e}\right) ^{-0.58} \end{equation} This relation is represented by the solid line in Figure 2. The data agree reasonably well with the prediction in the faint gas (e.g. $\Sigma_{H\alpha}/\Sigma_e$ $<$ 1).
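The numerical coefficients in Equation [5] follow directly from combining equations [2]--[4]; a two-line check (a sketch):
\begin{verbatim}
import numpy as np

# Eqs. [2]/[3]: Sigma_Halpha/Sigma_e = (5.9e-14 / 7.4e-16) U ~ 80 U,
# then substitute U into Eq. [4]: [SII]/Halpha = 1.1e-2 * U**-0.58
ratio_U = 5.9e-14 / 7.4e-16
coeff = 1.1e-2 * ratio_U ** 0.58
print("Sigma/Sigma_e = %.0f U ;  [SII]/Ha = %.2f (Sigma/Sigma_e)^-0.58"
      % (ratio_U, coeff))        # ~80 U and ~0.14, as in Eq. [5]
\end{verbatim}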
The deviation of the data from the prediction at higher surface brightnesses ($\Sigma_{H\alpha}/\Sigma_e$ $>$ 1) will be briefly discussed in section 4.4 below. As a `sanity check' we now estimate the values of n$_e$ in normal disks and starbursts that are implied by the measured values of $\Sigma_e$ based on Eq. [3]. One should keep in mind that $\Sigma_e$ may need to be corrected for inclination, so the values given here are upper limits, especially for the starburst sample where the inclination is high. We infer an average $n_e$ of 0.4 $e_{H\alpha}$ cm$^{-3}$ from the observed $\Sigma_e$ for the normal galaxies. Taking a typical extinction of 1 magnitude for H$\alpha$ (Kennicutt 1983) we obtain $n_e$ = 1.0 cm$^{-3}$. This is several times larger than the value $n_e$ $\sim$ 0.16 cm$^{-3}$ for the Reynolds layer (Reynolds 1993). There might be two major reasons for this discrepancy. Firstly, the H$\alpha$ extinction in the DIM may be less than the typical HII region value of 1 magnitude. Secondly, simple considerations of hydrostatic equilibrium (see below) imply that the total ISM pressure (e.g. the sum of thermal, turbulent, cosmic ray, and magnetic pressures) decreases with galactocentric distance, so that the thermal pressure and hence electron density in the DIM may be higher in the inner regions of galaxies. Now, $n_e$ for the Reynolds layer has been measured in the solar neighborhood (about 8 kpc from the Galactic center), while $r_e$ in our normal galaxies is typically 2--4 kpc (see Table 2). The electron density for the DIM in starbursts estimated from $\Sigma_e$ averages $\sim$~24 cm$^{-3}$, after correcting for two magnitudes of extinction in H$\alpha$ (Armus, Heckman, \& Miley 1989). This value for $n_e$ is then considerably higher than that in normal galaxies (as expected). To summarize, we have shown that a model in which the DIM in both starburst and normal galaxies is photoionized gas whose thermal pressure is proportional to the mean rate of star-formation per unit area in the galaxy can quantitatively reproduce the observed unified correlation between the ionization state of the DIM ([SII]$\lambda \lambda$ 6716,6731/H$\alpha$ line ratio) and the relative surface-brightness of the DIM shown in Figure 2. To make this test we have fixed the constant of proportionality in the relation $P \propto \Sigma_{SFR}$ to its value in the local disk of the Milky Way. We now turn to the possible physical basis of this relation. \subsection{A Supernova-Regulated ISM Pressure} Suppose we assume that the average thermal gas pressure $P$ within $r_e$ is maintained by the energy and mass released by supernovae (and stellar winds) inside $r_e$. We can then relate $P$ and therefore $n_e$ in the DIM to the effective surface brightness $\Sigma_e$. Chevalier and Clegg (1985) have shown that for the case of spherical symmetry and adiabatic conditions \begin{equation} P = 0.12\ \dot{M}^{1/2}\ \dot{E}^{1/2}\ r_e ^{-2} \end{equation} where $\dot{M}$ and $\dot{E}$ are the rates at which gas in the ISM is shocked and heated respectively. While this is exact for a spherically-symmetric case, its difference from a disk geometry can be shown to be negligible (provided that the gas is adiabatic - see below). The starburst models of Leitherer \& Heckman (1995) predict a simple scaling between the rate at which a starburst would return kinetic energy ($\dot{E}$) and ionizing photons (Q). Case B recombination gives the scaling from Q to the H$\alpha$ luminosity.
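Since equation [6] is written in cgs units, some unit bookkeeping is helpful before applying it. The sketch below (in Python; the injection rates and the `mass-loading' factor $m$ are the solar-neighborhood quantities quoted below, and the variable names are ours) makes the conversion explicit:
\begin{verbatim}
import math

# Solar-neighborhood injection rates (quoted below), within r = 3 kpc:
mdot_rate = 8e-4      # M_sun / yr / kpc^2 (stellar winds + supernovae)
edot_rate = 1.2e39    # ergs / s / kpc^2
r_kpc, m  = 3.0, 1.0  # radius; m = mass-loading factor (P scales as m**0.5)

MSUN, YR, KPC, KB = 1.989e33, 3.156e7, 3.086e21, 1.381e-16
area = math.pi * r_kpc**2
mdot = m * mdot_rate * area * MSUN / YR   # g / s heated by SNe and winds
edot = edot_rate * area                   # ergs / s
r    = r_kpc * KPC                        # cm

# Equation [6] (Chevalier & Clegg 1985), in cgs units:
P = 0.12 * math.sqrt(mdot) * math.sqrt(edot) / r**2
print(P / KB)   # ~2.2e3 * m**0.5 K cm^-3, as quoted below
\end{verbatim}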
Now $\dot{M}$ is the amount of mass per unit time that is heated by supernovae and stellar winds. This will be larger than the ejecta directly returned from the massive stars by a `mass-loading' factor $m$. For a standard Salpeter IMF extending up to 100 M$_{\odot}$ and a constant rate of star-formation for a time longer than 40 Myr, the Leitherer \& Heckman models and equation [6] above then imply that $n_e$ is related to $\Sigma_e$ by \begin{equation} \Sigma_e \simeq 1.4\times10^{-15} \left(\frac{n_e}{1\ {\rm cm^{-3}}}\right) \ m^{-1/2}\ e_{H\alpha}^{-1} \ \ \ \ \ {\rm ergs\ s^{-1}\ cm^{-2}\ arcsec^{-2}} \end{equation} where we have assumed $P/(2 n_e k) = 10^4$~K. Eq. [7] agrees with the scaling relation between $\Sigma_e$ and $n_e$ (eq. [3]) of the local disk, for an appropriate $m$ value (see below). We can compare equation [6] to conditions in the local disk of our own Galaxy. Based on the rates at which stellar winds and supernovae inject mass and kinetic energy ($\sim$8$\times$10$^{-4}$ M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$ and 1.2$\times$10$^{39}$ ergs s$^{-1}$ kpc$^{-2}$ respectively) within 3 kpc from the Sun (Abbott 1982; Jura \& Kleinmann 1989), the predicted thermal pressure of the hot gas is $\sim$2200 m$^{1/2}$ K cm$^{-3}$, compared to the representative value of $10^{3.5}$ K cm$^{-3}$ from observations of neutral (Jenkins et al. 1983) and ionized (Reynolds 1993) diffuse gas near the Galactic midplane. The required amount of mass heated per unit time is about twice as much as that injected directly by supernovae and stellar winds ($m \sim$ 2). Only about 1/3 of this returned mass comes from high-mass stars ($M > 5$ M$_{\odot}$), with the bulk coming from intermediate-mass AGB stars (Jura \& Kleinmann 1989). The situation in starbursts, where the mass is returned almost entirely by high-mass stars (Leitherer \& Heckman 1995), is therefore somewhat different. Values for $m$ $\sim$ 3 to 10 have been estimated in starburst galaxies based on the mass, luminosity, and temperature of the X-ray emitting gas (e.g. Suchkov et al. 1996; Della Ceca et al. 1997; Wang et al. 1997). Equation [6] assumes that the thermal gas pressure is determined by the deposition of mass and energy by supernovae and stellar winds, that the hot gas that results permeates the region of star-formation, and that radiative losses are negligible. These assumptions may be valid in starbursts driving superwinds (cf. Heckman, Lehnert, \& Armus 1993), but probably not in the ISM in normal galaxy disks where the interaction between stellar ejecta and the ISM is more complex (cf. Cioffi \& Shull 1991). We therefore consider next a different physical interpretation of the relation between gas pressure and star-formation intensity. \subsection{Hydrostatic Equilibrium and Pressure-Regulated Star Formation} As discussed above, the results in Figure 2 can be understood if the DIM in both normal and starburst galaxies is photoionized, has a roughly constant characteristic density in each galaxy, and the characteristic thermal pressure in the DIM is proportional to the rate of star-formation per unit area in that galaxy. In section 4.2, we considered the possibility that the physical coupling was provided by the energy and mass deposited in the ISM by supernovae and massive stars. Here, we consider a different interpretation, namely that the total pressure in the ISM is specified by hydrostatic equilibrium (e.g.
Boulares \& Cox 1990), and that the star-formation rate per unit area is related to, or perhaps limited by, this pressure (cf. Dopita 1985). In the case of simple hydrostatic equilibrium, the total (thermal, turbulent, cosmic ray, plus magnetic) mid-plane pressure in a disk galaxy is given by \begin{equation} P_{tot} \propto \Sigma_{gas} (\Sigma_{*} + \Sigma_{gas}) \propto \Sigma_{gas} \Sigma_{tot} \end{equation} On empirical grounds, it is well-established that the star-formation rate per unit area ($\Sigma_{SFR}$) in disk galaxies scales with both $\Sigma_{gas}$ and $\Sigma_{tot}$. This suggests that there might be a simple, direct scaling between $\Sigma_{SFR}$ and $P_{tot}$. In fact, Dopita \& Ryder (1994) parameterize the problem as \begin{equation} \Sigma_{SFR} \propto \Sigma_{gas}^{m} \Sigma_{tot}^{n} \end{equation} and find empirically that $m + n = 2.0 \pm 0.5$. Kennicutt (1998) finds that $\Sigma_{SFR} \propto \Sigma_{gas}^{1.4}$, while Figure 1 in Dopita \& Ryder (1994) implies $\Sigma_{SFR} \propto \Sigma_{*}^{0.6}$. Except in the most extreme starbursts, it is reasonable to take $\Sigma_{tot} \gg \Sigma_{gas}$, so that this last result means roughly that $\Sigma_{SFR} \propto \Sigma_{tot}^{0.6}$. Combining these results suggests that: \begin{equation} \Sigma_{SFR} \propto \Sigma_{gas}^{1.4} \Sigma_{tot}^{0.6} \propto P_{tot} (\Sigma_{gas}/\Sigma_{tot})^{0.4} \end{equation} Since there is only a small observed variation in $\Sigma_{gas}/\Sigma_{tot}$ (factors of a few) in the disks of late-type galaxies and typical starbursts, this implies that there should be a proportionality between $P_{tot}$ and $\Sigma_{SFR}$. If we now assume that the thermal component of the pressure scales with the total pressure, this is just what we require in order to understand Figure 2. As we emphasized in section 4.1 above, the rough quantitative agreement between photoionization models and the properties of the DIM in the starbursts and normal galaxies is independent of the nature of the physical, causal connection between the thermal pressure in the DIM and $\Sigma_{SFR}$. The agreement shown in Figure 2 is based simply on requiring that the constant of proportionality between these two quantities is consistent with values in the local disk of our Galaxy. \subsection{The High Surface-Brightness Gas} While the model of a photoionized, roughly isobaric DIM provides a satisfactory quantitative match to the data on the low surface brightness gas ($\Sigma_{H\alpha}/\Sigma_e$ $<$ 1), Figure 2 shows that there is a systematic offset between the predictions and the data in the high surface brightness range ($\Sigma_{H\alpha}/\Sigma_e$ $\sim$ 1 -- 10). This disagreement is in the sense that the observed emission-line ratio [SII]/H$\alpha$ is too high, and therefore that the actual ionization parameter $U$ in the gas must be smaller than predicted. This could be explained if the density in the high-surface-brightness gas is higher than estimated in the model. This is entirely plausible, since the high surface-brightness gas (the HII regions) will likely be significantly over-pressured with respect to the surrounding diffuse ISM (e.g. the HII regions may be self-gravitating or expanding into the lower-pressure DIM). More quantitatively, we note that the offset between the model and data for log([SII]/H$\alpha$) in the bright gas (a difference of $\sim$ 0.6 dex on average) would translate into a difference of a factor of $\sim$10 lower $U$ and hence higher $n_e$.
An additional factor is that the Sokolowski models we have utilized for the DIM: 1) adopted a dust-depleted abundance pattern and 2) assumed that the ionizing radiation field had been hardened as it propagated to the DIM. Partial depletion onto grains may occur in HII regions (cf. Garnett et al. 1995), but the assumption of spectral hardening is not appropriate for the HII regions. Dropping these assumptions would decrease the predicted ratio of [SII]/H$\alpha$ for a given $U$, and make the discrepancy worse in Figure 2. This would require a decrease in $U$ (and increase in $n_e$) by an additional factor of $\sim$ 3. Using the value for $n_{e}$ in the DIM in normal galaxies estimated from $\Sigma_e$ above, we would then require a density of $\sim$30 cm$^{-3}$ in the HII regions. This agrees reasonably well with the average values of $n_e$ in disk HII regions of $\sim$10--100 cm$^{-3}$ (e.g. O'Dell \& Casta\~{n}eda 1984; Kennicutt, Keel \& Blaha 1989) measured with the [SII] and [OII] doublets. Our measurements of [SII]$\lambda$6716/[SII]$\lambda$6731 for bright HII regions suggest similar values for $n_e$. Following the same reasoning, we would estimate that the required density in the high-surface-brightness gas in the centers of the starbursts must also be about 30 times higher than in the low-surface-brightness gas: $n_e \sim 700$ cm$^{-3}$. This is in satisfactory agreement with the directly measured central densities of 300 to 1000 cm$^{-3}$ (e.g. Heckman, Armus \& Miley 1990; Lehnert \& Heckman 1996a). \section{CONCLUSIONS} We have compared the emission-line properties of the low surface brightness gas in starburst galaxies with the DIM in normal spirals. Both samples show similar attributes of enhanced low-ionization forbidden-line strengths (as represented by [SII]/H$\alpha$) relative to typical HII region values and a strong inverse correlation between H$\alpha$ surface-brightness and the [SII]/H$\alpha$ line ratio (in the form of a smooth transition from high surface brightness to low surface brightness regions). The gas kinematics show no strong correlation with surface brightness and line ratio in the combined samples. Although the H$\alpha$ surface brightness corresponding to a given [SII]/H$\alpha$ line ratio is preferentially about an order-of-magnitude larger in starbursts than in normal galaxies, we have demonstrated that this can be understood as a consequence of the proportionately higher mean H$\alpha$ surface brightnesses of the starbursts. That is, we have shown that the {\it relative} surface brightness at a particular location, defined as the absolute surface brightness there ($\Sigma_{H\alpha}$) scaled by the mean surface brightness within the H$\alpha$ half-light radius ($\Sigma_e$) for the galaxy as-a-whole, exhibits a remarkably universal correlation with the [SII]/H$\alpha$ line ratio for normal and starburst galaxies alike. This suggests that the emission-line properties of the low surface brightness gas in both groups of galaxies can be unified into a simple relation between line ratio and relative surface brightness, and that the variations in line ratio and relative surface-brightness are controlled by a single parameter. We have constructed a simple photoionization model to explain the correlation between $\Sigma_{H\alpha}$/$\Sigma_e$ and line ratio. We have pointed out that the [SII]/H$\alpha$ line ratio has an inverse dependence on the ionization parameter $U$ (the local ratio of ionizing photons and electrons in the photoionized gas).
For simple recombination, $\Sigma_{H\alpha}$ is proportional to the local intensity of the ionizing radiation field. {\it If} the average thermal pressure in the diffuse ISM in a galaxy ($P$) is proportional to the average rate of star-formation per unit area ($\Sigma_{SFR}$), then since $\Sigma_{SFR}$ can be measured by $\Sigma_e$, it follows that $U \propto \Sigma_{H\alpha}/\Sigma_e$. We have argued that this result naturally explains the universal dependence of line ratios on relative surface brightness. Our simple model is able to quantitatively reproduce the data for normal and starburst galaxies provided that the constant of proportionality in the relation between $P$ and $\Sigma_{SFR}$ is consistent with the observed values for both quantities in the local Galactic ISM. Thus, we have emphasized that the agreement between our simple photoionization model and the data is independent of the detailed physical connection between star formation and ISM pressure. We have discussed two ways in which $P$ might be physically related to $\Sigma_{SFR}$. Following Chevalier \& Clegg (1985) we have first assumed that $P$ is regulated by the feedback of mass and energy from supernovae and massive stars. Scaling the amount of mass heated per supernova so that the predicted thermal pressure matches the observed pressure in the local Milky Way disk, we found that the photoionization model agrees roughly with the DIM data in both starburst and normal galaxies. As an alternative, we explored the possibility (e.g. Dopita 1985) that $\Sigma_{SFR}$ is determined (or limited) by the total (thermal, turbulent, cosmic ray, plus magnetic) pressure $P_{tot}$, and that $P_{tot}$ and $P$ are determined by a simple hydrostatic equilibrium condition in galactic disks, i.e. $P \propto P_{tot} \propto \Sigma_{tot} \Sigma_{gas}$. We used recent empirical results from Kennicutt (1998) and Dopita \& Ryder (1994) to argue that $\Sigma_{SFR} \propto P f_{gas}^{0.4}$, where $f_{gas}$ is the fractional gas mass in the disk. Since $f_{gas}$ varies only by small factors, this means that $\Sigma_{SFR}$ does roughly scale with $P$. The simple model cannot account for the emission-line ratios in the high surface-brightness gas (the giant HII regions) unless the densities and thermal pressures there are roughly 30 times larger than in the DIM. We argue that this is both reasonable physically and in agreement with measurements. We conclude that the low surface brightness gas in the starbursts shares a common nature with the DIM in the normal galaxies, and propose that the former can be regarded as the same gas phase as the latter. Further morphological observations of the low surface brightness gas in starbursts can confirm this suggestion. \acknowledgments We thank S. Baum, D. Calzetti, R. Kennicutt, R. Wyse, C. Norman, C. Martin, and A. Ferguson for useful discussions and an anonymous referee for constructive suggestions. \clearpage
\section{Phenomenology versus Theory} The most intriguing phenomena of hadron physics are confinement as well as dynamical and anomalous chiral symmetry breaking. Despite the fact that the theory of the Strong Interactions, Quantum Chromodynamics (QCD), has been known for decades we still lack a fundamental understanding of the corresponding physics. As a phenomenon confinement is easily described. On one hand, representing the Strong Interaction by a local Quantum Field Theory ({\it i.e.}~by QCD) necessitates the introduction of fundamental fields with a new quantum number, namely quarks and gluons with some ``colour''. The advantage of this approach is twofold: It provides a mathematical framework, and it orders the plethora of hadrons into a clearly arranged pattern. On the other hand, quarks and gluons have never been detected as particles, {\it i.e.} nobody has ever seen quarks and gluons making a track in a detector. The confinement hypothesis can therefore be formulated as: the colour-neutral hadrons, being a kind of bound state of coloured quarks and gluons, are the only strongly interacting particles; no ``coloured'' particles exist. This hypothesis has been extremely successful. The colour-charge version of ionization plainly does not occur. Even more, the concept of mutual forces by mutual polarization, the van-der-Waals forces, also does not have a colour-charge analogue. Thus as a phenomenon confinement seems to be plain and simple. As a theoretical concept confinement is astonishingly hard to put into precise terms. Even the question of how to obtain a concise definition of `charge' underwent severe discussions in the attempt to find a theoretically unequivocal definition of confinement. {\it E.g.} the Wilson loop provides an order parameter only in the absence of fundamental charges, {\it i.e.}~quarks. Despite all efforts such an order parameter has not been found in the real world with light quarks, and a satisfactory and detailed description of the underlying mechanisms of confinement stays elusive. The fact that for charges in higher representations there are common aspects with the Higgs mechanism complicates the issue even further. The situation is not drastically different when addressing dynamical Chiral Symmetry Breaking ($\chi$SB) and the $U_A(1)$ anomaly. As phenomena they are clearly identifiable: the first because of the relatively small pion mass and several patterns in the interaction of pions with themselves and other hadrons, the latter because of the large $\eta^\prime$ mass. When it comes to theoretically understanding $\chi$SB we also lack a lot of basic knowledge. We know that dynamical $\chi$SB comes along with the dynamical generation of ``constituent'' quark masses (which, however, depend on the momentum of the quarks). One may explain dynamical $\chi$SB and the $U_A(1)$ anomaly with two seemingly different approaches. One approach starts by considering quark zero modes in topologically non-trivial field configurations. A non-vanishing density of such zero modes in the limit of infinite volume signals dynamical $\chi$SB \cite{Banks:1979yr}. The non-vanishing topological susceptibility provides the explicit $U_A(1)$ symmetry breaking, see {\it e.g.\/} \cite{Leutwyler:1992yt} and references therein.
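In this first approach, the link between zero modes and the condensate is made quantitative by the Banks--Casher relation of ref.\ \cite{Banks:1979yr}: denoting by $\rho(\lambda)$ the infinite-volume spectral density (per unit volume) of the Dirac operator, one has
\begin{equation}
\langle \bar{q} q \rangle \, = \, - \pi \, \rho(0) \, ,
\end{equation}
{\it i.e.}, dynamical $\chi$SB is equivalent to an accumulation of near-zero Dirac eigenmodes.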
The other approach rests on a supercritical effective interaction between quarks \cite{Miransky:1985ib,Pennington:1998cj}, usually described in a covariant Green's function approach, see {\it e.g.\/} \cite{Alkofer:2000wg,Fischer:2006ub,Alkofer:2006jf,Aguilar:2008xm,Dudal:2008rm,Fischer:2009tn} and references therein. The mass generating mechanism then becomes similar to the generation of a gap in superconductors. In particular, if this interaction is infrared divergent, the effective coupling always exceeds the critical one and therefore dynamical $\chi$SB occurs. What is more astonishing is the fact that a confining-type infrared divergence in the effective quark-quark interaction results in a non-vanishing $\eta^\prime$ mass \cite{Kogut:1973ab,Alkofer:2008et}. Therefore it may well be that these two seemingly very different approaches are merely two distinct but correct ways of describing the related physics and aspects thereof. As we have no commonly accepted complete picture of the strongly interacting domain of QCD, the relation between confinement on the one hand and dynamical, resp., anomalous, $\chi$SB on the other hand is not firmly established. However, there are important hints that quark confinement and $\chi$SB are closely related. Even beyond the debated question of whether the corresponding phase transition(s) occur(s) at the same temperature (see {\it e.g.\/} \cite{Bazavov:2009zn} and references therein), an analysis of the so-called dual quark condensate and dressed Polyakov loops points to such a close relation \cite{Bilgici:2008qy,Fischer:2009wc,Bilgici:2009tx} by linking confinement to spectral properties of the Dirac operator \cite{Gattringer:2006ci}. Again such a close relation can be found in the approaches mentioned above: either when investigating topologically non-trivial, confining field configurations \cite{Di Giacomo:1999fa,Greensite:2003bk,Diakonov:2009jq} or when studying the infrared behaviour of QCD Green functions, and hereby especially the quark-gluon vertex in Landau gauge \cite{Alkofer:2008tt,Alkofer:2006gz}. But despite all evidence for a deep connection between confinement and $\chi$SB the situation is not yet conclusive. \section{Remarks on Quantum Field Theory} According to my understanding QCD is a {\bf local} Quantum Field Theory, as expressed in a clear way in the quote from Haag's book~\cite{Haag:1992hx}: {\sl ``The r\^ole of fields is to implement the principle of locality. The number and the nature of different basic fields needed in the theory is related to the charge structure, not to the empirical spectrum of particles.''} To put this understanding in a more precise setting: I assume validity of the Osterwalder-Schrader axioms \cite{Osterwalder:1973dx} except reflection positivity. This provides a well-defined mathematical framework as described in refs.\ \cite{Haag:1992hx,Nakanishi:1990qm} and a number of other monographs. It is important to note that all methods in Quantum Field Theory, including perturbation theory, lattice field theory, and functional approaches, rely on this framework. If it were true that QCD is not a local theory, more or less all attempts to understand hadron physics from QCD would be questionable. Fortunately, the results obtained from QCD provide evidence for the validity of locality. Gaining an understanding of physics is quite often related to developing intuitive pictures. In the case of confinement such a picture will be preferentially formulated with the help of the fundamental fields, the gluons and quarks.
But these are only valid elements of the theory after gauge-fixing. Of course, confinement as an observable phenomenon exists without reference to any gauge, and in different gauges picturing confinement might result in quite different scenarios. However, this is exactly the point. Everybody will agree that the hydrogen atom can be described by quantum mechanics independent of the gauge chosen for electromagnetism. For gaining an understanding of the laws of Quantum Mechanics, however, it was of utmost importance that the spectrum of the hydrogen atom is understood most easily when choosing Coulomb gauge. To learn in which gauge confinement is explained most easily would be a tremendous step forward. Consequently, fixing the gauge is likely to be helpful for an understanding of confinement. As already mentioned the Wilson loop gives only a clear criterion in the absence of quarks. So, what are the possibilities for a theoretically sound definition of confinement? A potential procedure may look as follows: \begin{itemize} \item Construct a colour charge operator, {\it e.g.} as described in \cite{Nakanishi:1990qm}, \item demonstrate it to be well-defined (``unbroken charge''), and \item check for a mass gap in the physical state space. \end{itemize} In case one obtains a well-defined charge with unbroken global symmetry and a mass gap in the physical state space, one has confinement \cite{vonSmekal:2008ws}. As pictorially presented\footnote{I thank Lorenz von Smekal for this figure.} in fig.~\ref{Conf}, an unbroken global symmetry without a mass gap provides the Coulomb-type phase, whereas a broken global symmetry with a mass gap gives the Higgs phase. \begin{figure}[htb] \includegraphics[width=75mm]{lorenz_plain.eps} \caption{A pictorial presentation of how the field around a test charge and the (non-)existence of a mass gap allow one to distinguish between the Coulomb, confinement and Higgs phases \cite{vonSmekal:2008ws}.} \label{Conf} \end{figure} To conclude this section let me emphasize the r\^ole of the Becchi-Rouet-Stora--Tyutin (BRST) symmetry in gauge-fixed quantum gauge field theories. The existence of BRST quartets and the construction of a BRST cohomology not only allows the generalization of the Gupta-Bleuler mechanism of QED to QCD but also very likely is instrumental in constructing the physical state space. The distinction between the complete and the positive-definite state space is hereby absolutely crucial in understanding the mathematical framework of quantum gauge field theories. An introduction to the subject can be found in ref.~\cite{Nakanishi:1990qm}, a short summary on how this may relate to the confinement problem in ref.~\cite{vonSmekal:2000pz}. \section{Requirements for an investigation of Confinement} First, confinement in four-dimensional field theories requires the dynamical generation of a physical mass scale. In the presence of such a mass scale, however, the renormalisation group (RG) equations imply the existence of essential singularities in physical quantities (such as the $S$-matrix) as functions of the coupling at $g = 0$. This is due to the dependence of the RG invariant confinement scale on the coupling and the renormalisation scale $\mu$ near the ultraviolet fixed point as given by \begin{eqnarray} \Lambda &=& \mu \exp \left( - \int ^g \frac {dg'}{\beta (g')} \right) \nonumber \\ &\stackrel{g\to 0}{\rightarrow } &\mu \exp \left( - \frac 1 {2\beta_0g^2} \right), \label{Lambda} \end{eqnarray} with $\beta_0>0$.
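To make this essential singularity tangible, consider the following small numerical illustration (a Python sketch; the one-loop coefficient is set to the pure SU(3) value merely for definiteness):
\begin{verbatim}
import math

def Lambda(g, mu=1.0, beta0=11.0/(16.0*math.pi**2)):
    # One-loop RG-invariant scale: Lambda = mu * exp(-1/(2*beta0*g^2)),
    # where beta0 is the leading coefficient of beta(g) = -beta0 g^3 + ...
    return mu * math.exp(-1.0/(2.0*beta0*g*g))

# Lambda/g**n -> 0 for any fixed power n as g -> 0: every Taylor
# coefficient of Lambda around g = 0 vanishes, so the confinement
# scale is invisible to perturbation theory order by order.
for g in (1.0, 0.5, 0.25):
    print(g, Lambda(g), Lambda(g)/g**8)
\end{verbatim}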
Therefore a truly non-perturbative method is needed for the study of confinement. Second, in some scenarios confinement is related to severe infrared divergences, {\it i.e.}, divergences which cannot be removed from physical cross sections by a suitable summation over degenerate states as in QED.\footnote{See, however, ref.\ \cite{Braun:2007bx} which shows that confinement criteria can be fulfilled without infrared divergences.} In any finite volume these infrared divergences could be detected only by a careful extrapolation to infinite volume. Therefore either such an analysis of lattice results and/or an ab initio continuum approach is needed for an understanding of such confinement scenarios. Third, confinement implies the suppression of long-wavelength propagation. Phrased otherwise, confinement is a true quantum phenomenon. Therefore a purely (semi-)classical description is necessarily incomplete; a quantum theoretical picture is needed for an investigation of confinement. \section{Criteria for a Confinement picture} A successful confinement scenario should explain many properties deduced either from hadron physics or from lattice calculations. One of them is \begin{itemize} \item {\em string formation.} \end{itemize} There are two distinct sorts of representation dependence of the static quark potential, depending on the static source separation: \begin{itemize} \item {\em Casimir Scaling.} Initially the slope of the linear potential (the string tension) is proportional to the quadratic Casimir of the group representation. \item {\em N-ality Dependence.} Asymptotically, the force between charged fields in an SU(N) gauge theory depends only on the so-called ``N-ality'' of the group representation, given by the number of boxes mod $N$ in the Young tableau of the representation. \end{itemize} Another such property is the \begin{itemize} \item {\em absence of van-der-Waals forces} \end{itemize} as discussed in the introductory section. Related to the issue of the mathematical framework of the theory is the property of \begin{itemize} \item {\em positivity violation} \end{itemize} and hereby \begin{itemize} \item {\em the BRST quartet mechanism} for tree-level-positive fields. \item {\em antiscreening beyond perturbation theory} as expressed in the Oehme--Zimmermann superconvergence relations\footnote{See {\it e.g.\/} refs.\ \cite{Oehme:1980ai,Alkofer:2000wg}.}. \end{itemize} And, last but not least, as a successful theory of confinement is a theory of Infrared QCD, it should include a description of \begin{itemize} \item {\em dynamical $\chi$SB} \end{itemize} and the \begin{itemize} \item {\em $U_A(1)$ anomaly}. \end{itemize} \section{Candidates for a Confinement picture} There are many proposals for the confinement mechanism. It is impossible to provide an exhaustive list in a short article, so I will cite only those proposals which in my opinion seem best supported by existing numerical studies or other arguments.\footnote{Some of these arguments are briefly reviewed in ref.\ \cite{Alkofer:2006fu}.} A line of thought is that the QCD functional integral is dominated by some special class of field configurations which cause the expectation value of a large Wilson loop to fall off exponentially with the minimal area of the loop. The leading candidates for these special configurations are magnetic monopoles \cite{Di Giacomo:1999fa}, dyons/calorons \cite{Diakonov:2009jq,Kraan:1998pm} or center vortices \cite{Greensite:2003bk}, although other objects have been suggested.
A different approach is based on the special properties of quantum fields in Coulomb gauge; hereby the gauge-fixing ambiguity, the Gribov problem, and the existence of a Gribov horizon play a special role \cite{Gribov:1977wm,Zwanziger:1998ez}. Another idea is, preferentially in Landau gauge, to solve non-perturbatively for quark and gluon propagators and vertex functions, analytically by an infrared expansion of the complete set of Schwinger-Dyson and Exact Renormalization Group equations, and numerically by solving a truncated set of these equations, see {\it e.g.} \cite{vonSmekal:1998is,Pawlowski:2003hq,Alkofer:2004it,Fischer:2008uz} and references therein. Finally, there is a fascinating relationship between gauge theory in $D=4$ dimensions and string theory quantized in a special ten-dimensional background geometry known as anti-de Sitter space. This is the AdS-CFT correspondence, see refs.\ \cite{Maldacena:1998im,Polchinski:2001tt} and many others. It has turned out that a number of these suggestions are related in interesting ways: monopole worldlines are found to lie on center vortex worldsheets, and center vortex worldsheets appear to be crucial in some ways to the confinement scenario in Coulomb gauge. Both Coulomb and Landau gauge investigations emphasize the importance of the Faddeev-Popov operator, and the infrared properties of the ghost propagator. \section{Outlook} In this contribution to a lively on-going discussion I tried to describe the difficulties encountered in the endeavour of studying infrared QCD. It is striking that after decades of effort we do not understand how the Strong Interaction really works at long distances. Nevertheless, there \emph{has} been appreciable progress in this subject. Step by step we uncover surprising details about confinement and dynamical, resp., anomalous, chiral symmetry breaking. Between the existing approaches there are relations which are not yet understood. Although many details are still missing, these relations make it plain that the different confinement pictures are definitely not mutually exclusive. Maybe we will learn that a non-trivial merger of all these scenarios of Infrared QCD will eventually fulfill all the criteria required for a consistent and convincing description. Even if this constitutes the major breakthrough for theory, one should keep in mind that even then a tough challenge is still left: find an experimentally accessible hadron observable to verify or falsify the presented picture of confinement and chiral symmetry breaking. \section*{Acknowledgement} I thank the organizers of this Winter School, my colleagues Christof Gattringer, Leonid Glozman, Christian Lang, Heimo Latal, and Leopold Mathelitsch for inviting me, and especially for all their efforts to make this outstanding school possible. My knowledge about this subject would not have been possible without the discussions I enjoyed with many outstanding colleagues. I am grateful to all of them, but amongst them Christian Fischer, Felipe Llanes Estrada, Jeff Greensite, Axel Maas, Jan Pawlowski, Lorenz von Smekal, and Dan Zwanziger deserve special mention. Last but not least, I thank Christian Fischer and Axel Maas for a critical reading of the manuscript.
\section{Conclusion} \label{s:conclusion} In this paper we presented a model for soft real-time job offloading over hybrid cloud topologies, along with offloading strategies that try to optimize (all or in part) execution time and total energy consumption, and to fulfill QoS requirements in the form of job deadlines. We instantiated the model in a software system, \textsc{Jay}{}, and used it to evaluate a variety of offloading strategies in clouds formed by mobile devices and two-tier hybrid clouds formed by a network of mobile devices and a cloudlet. \textsc{Jay}{} is designed with adaptive scenarios in mind. Offloading strategies are fed with the necessary runtime information to perform time- and energy-aware offloading on-the-fly. Moreover, \textsc{Jay}{} employs a modular architecture that allows multi-tier hybrid cloud topologies to be defined with customisable roles per tier or device regarding job generation and execution. The overall system flexibility was illustrated through experiments using a benchmark application configured to spawn jobs with different rates and different soft real-time deadlines, executed over different cloud configurations and offloading strategies. The results of these experiments show that offloading strategies sensitive to runtime conditions can effectively and dynamically adjust their offloading decisions to produce significant gains in execution time, energy consumption and fulfillment of job deadlines.
For future work, we consider two key directions: \begin{itemize} \item Regarding application scenarios, we are particularly interested in articulating computation offloading with data-placement awareness, as in systems like Oregano~\cite{Sanches2020}, \trackchange{Added reference to~\cite{huang2019multi}, as suggested by Reviewer 1.}{our previous work on systems for data dissemination for hybrid edge clouds~\cite{Rodrigues2018,fmec20_ramble}, which are particular instances of a class of systems that have multiple users, and employ multiple mobile devices, servers, and network tiers~\cite{huang2019multi}. } A challenge in these scenarios is that jobs may potentially require data stored at distinct hosts and/or tiers in the cloud, hence the interplay between computation and data offloading can potentially play a key role. A different challenge is the possibility of high device churn and intermittent connectivity over heterogeneous communication links (WiFi, Bluetooth, 4G/5G, etc.), requiring offloading to proceed opportunistically, to be articulated with fault tolerance mechanisms (e.g., job checkpointing or replication), and the overall handling of a more dynamic environment regarding computational resources, network bandwidth, and energy consumption. \item Regarding \textsc{Jay}{} as a system, it can be extended in a number of ways to support a richer set of offloading strategies and job workloads. Given its modular architecture, \textsc{Jay}{} can easily accommodate other multi-objective offloading strategies, of which the hybrid latency-energy offloading strategy is just an example, that account for additional aspects beyond execution time and energy consumption, e.g., the costs of using an infrastructure cloud or mobile device network traffic. \trackchange{Added reference to TRACTOR~\cite{tractor}, as suggested by Reviewer 1.}{ Moreover, even if \textsc{Jay}{} is adaptive over a variety of hybrid cloud architectures, we believe that awareness of the cloud system used for offloading can lead to novel adaptive offloading strategies, e.g., as in the TRACTOR algorithm~\cite{tractor} that accounts for aspects such as power consumption of network switches at the edge-cloud level for traffic and power-aware virtual machine placement. } \trackchange{}{Finally,} the system can also be improved for adaptivity in terms of resource awareness to cope with changeable cloud links due to mobility, and computational resources (e.g., GPUs could be used on the mobile devices by our deep learning benchmark). Mobile applications also commonly exhibit features that would require our job model to be richer, e.g., job precedences, job aggregation and their parallel execution, checkpointing to allow migration, etc. \end{itemize} \section{Evaluation\label{s:eval}} \vspace*{12pt} We now present the detailed experiments we conducted and the results we obtained. Their implications are also discussed. \subsection*{Experiments} Using the experimental setup described in the previous section, we conducted three sets of experiments: \begin{description}[style=unboxed,leftmargin=0cm] \item [1.] We first measured the baseline behavior, in terms of energy and computation time, for the set of devices at hand, when executing the benchmark application in our setup without considering any type of offloading. The goal was to allow a relative comparison between devices given their heterogeneity. \item [2.]
We then considered offloading experiments for the benchmark application in a network formed only by the Android devices\trackchange{REVIEW}{ \sout{.}, where each device acts both as a job generator and worker. We compare the use of the {\rm LOCAL}, {\rm TMIN} and {\rm HYBRID} strategies for job workloads with different values for mean job inter-arrival times and job deadlines. } \item [3.] \trackchange{REVIEW}{\sout{Finally, }T}he previous experiments were repeated for the benchmark application this time using a network that also includes a cloudlet \trackchange{REVIEW}{worker. Again we consider {\rm TMIN} and {\rm HYBRID} strategies, but also the {\rm SERVER} strategy that offloads all jobs to the cloudlet server. } \trackchange{REVIEW}{ \item [4.] Finally, we consider again a network with mobile devices, the effect of using the {\rm BALANCED} and ${\rm LF[f]}$ strategies versus {\rm TMIN} and {\rm HYBRID}, plus a different choice of configuration where jobs are generated and scheduled by a single external host and the mobile devices act only as workers, i.e., a Femtocloud-like configuration. } \end{description} \subsection*{Baseline experiments} For a baseline comparison between devices, we measured power consumption and time/energy consumption during job execution for all devices. We first ran scripts to measure (instantaneous) power consumption when devices were idle, uploading data and downloading data. Each script ran for 10 minutes and average power consumption results were gathered from 3 script executions. For the uploading/downloading power measurements the scripts continuously executed plain file uploads/downloads to/from a random host in the network. For job computation behavior, we ran a script that issued local object detection jobs continuously for 10 minutes, again for 3 rounds, and computed the average power consumption and job execution time. The results are listed, per device, in Table~\ref{tab:baseline}: power consumption (in watts, when devices are idle, uploading, downloading, or computing); job execution time (seconds), and; energy consumption (in milliwatt-hour, taking into account power consumption when computing and the execution time per job). Overall, the results clearly expose the heterogeneity of the devices used in the experiments. Looking at the power consumption results, it is clear that computation is the major factor of increase in power consumption: 2.3--4.1 times more power is drawn than when a device is idle, compared to just 1.1--3.5 times for uploading and 1.1--1.8 times for downloading. Compared to the Android devices, power consumption numbers for the cloudlet are an order of magnitude higher (approx.\ 10--40 times higher). Energy-wise, the two best-performing devices while computing are Google Pixel~4 and Xiaomi Mi~9T. Samsung Galaxy S7e is the most energy-conservative device when in idle mode. Regarding the results for execution time and energy consumption per job, the cloudlet stands out again: it is both the most efficient device in computation time, and the least efficient one in energy consumption: jobs run 1.9--5.8 times faster than on the Android devices while on the other hand consuming 2.4--15.6 times more energy. Among the Android devices, and for both time and energy, Google Pixel~4 is the most efficient device, followed by Xiaomi Mi~9T, Samsung Galaxy Tab~S5e, and Samsung Galaxy~S7e, with Google Nexus~9 being the least efficient. We note that the measures for energy consumption per job are more relevant for our purposes (cf.
Section~\ref{s:jay}) than those for instantaneous power consumption. Observe that Samsung Galaxy Tab S5e is more energy-efficient (consumes $5.0$ mWh per job) than Samsung Galaxy S7e (which consumes $5.5$ mWh per job, $10\%$ more), even if instantaneous power consumption is higher during computation ($4.5$~W vs. $3.7$~W, $21\%$ higher). The reason for this is that the higher power consumption is compensated in a larger proportion by faster job execution times in Samsung Galaxy Tab S5e ($4.0$~s vs. $5.4$~s, $33\%$ faster). As for the measured bandwidth during the duration of the experiment, we obtained values averaging 110~Mbit/s for download on all mobile devices, while for upload we verified two distinct behaviors: Nexus 9, Pixel 4 and Samsung Galaxy S7e connected with a 300~Mbit/s connection averaging 210~Mbit/s speeds, while Samsung Galaxy Tab S5e and Xiaomi Mi 9T connected to the router with a 150~Mbit/s connection, leading to an average upload speed of 119~Mbit/s. As for the cloudlet, it was connected to our router via gigabit ethernet and we obtained an average of 941~Mbit/s upload speed and 946~Mbit/s download speed. \begin{figure*}[h!] [width=\textwidth]{results/bw/new_total_energy_v2.pdf} \caption{Android devices scenario -- energy, time, and QoS.} \label{fig:total_energy} \end{figure*} \begin{figure*}[h!] \begin{subfigure}{0.45\textwidth} [width=\textwidth]{results/bw/new_execution_distribution.pdf} \caption{Local vs. offloaded jobs.} \label{fig:distribution:share} \end{subfigure} \begin{subfigure}{0.45\textwidth} [width=\textwidth]{results/bw/new_device_execution_distribution_greys_v2.pdf} \caption{Executed jobs per device.} \label{fig:distribution:device} \end{subfigure} \caption{Android devices scenario -- job distribution.} \label{fig:distribution} \end{figure*} \subsection*{Offloading among Android devices} We considered a network formed by the Android devices, each running the benchmark application generating jobs with mean inter-arrival times for the governing Poisson process of $\lambda$ equal to $3$, $6$, $9$ and $12$ seconds (which translates to $20$, $10$, $6.7$ and $5$ jobs per minute respectively), and values of $d = 3, 6, 9, 12$ for their relative deadlines up to the value of $\lambda$ (i.e., $d \leq \lambda$). In conjunction, we considered three offloading strategies, presented in Section~\ref{s:model}: {\rm LOCAL} (local execution only, no offloading), {\rm TMIN} (offloads jobs strictly seeking to minimize execution time) and {\rm HYBRID} (balances QoS constraints for task deadlines with energy efficiency). The benchmark was executed 6 times for each offloading strategy with the same job generation seed, and each execution was configured to generate jobs for 10 minutes. A first set of overall results for the experiment is presented in Figure~\ref{fig:total_energy}. We present plots for the energy consumption and execution time per job (left and middle in the figure, lower numbers are better), along with the corresponding quality-of-service (QoS) that is expressed as the percentage of jobs with a fulfilled deadline (right, higher numbers are better). The average values are plotted for each measure, along with the amplitude of the $95\%$ Gaussian confidence interval. Note that, for each configuration, the average energy consumption is obtained by measuring the total energy consumption in all of the devices, including idle time, divided by the number of jobs.
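For concreteness, the workload just described can be reproduced in a few lines. The sketch below (in Python; the names are ours and not part of the \textsc{Jay}{} code base) produces one 10-minute job trace and evaluates the QoS metric used in the plots:
\begin{verbatim}
import random

def job_trace(lam, d, duration=600.0, seed=1):
    # Poisson arrivals: exponential inter-arrival gaps with mean lam
    # seconds; each job carries a relative deadline of d seconds.
    random.seed(seed)
    t, jobs = 0.0, []
    while True:
        t += random.expovariate(1.0/lam)
        if t > duration:
            return jobs
        jobs.append({"release": t, "deadline": t + d})

def qos(jobs, completions):
    # QoS metric: fraction of jobs finished by their deadline.
    met = sum(c <= j["deadline"] for j, c in zip(jobs, completions))
    return met / len(jobs)

print(len(job_trace(lam=9.0, d=6.0)))   # ~600/9, i.e. ~67 jobs
\end{verbatim}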
Note also that lower values of $\lambda$ imply more jobs, hence the average energy consumption tends to decrease with~$\lambda$ (conversely, idle time grows with~$\lambda$). From the results, we can first observe that both TMIN and HYBRID generally outperform LOCAL both in energy consumption and QoS. This shows that offloading jobs pays off in both dimensions when compared to strictly local execution of jobs. The exception to this pattern is observed when the relative deadline has the tightest value, i.e., $d=3$, and only in terms of QoS. In fact, the overall system becomes incapable of achieving reasonable QoS in all configurations at this point: always below $30\%$, regardless of offloading strategy. In the more extreme case where $\lambda = d = 3$, the QoS is below $10\%$ and there is an extremely long execution time, due to the fact that jobs simply pile up in the system. In contrast, the QoS is always higher than~$50\%$ for all configurations with~$d>3$. \begin{figure*}[h!] \begin{subfigure}{0.45\textwidth} [width=\textwidth]{scripts/hybrid_12_9_matrix.pdf} \caption{HYBRID.} \label{fig:exec:HYBRID} \end{subfigure} \begin{subfigure}{0.45\textwidth} [width=\textwidth]{scripts/tmin_12_9_matrix.pdf} \caption{TMIN.} \label{fig:exec:TMIN} \end{subfigure} \caption{Android devices scenario -- flow of jobs for $\lambda=12$ and $d=9$.} \label{fig:exec} \end{figure*} Comparing TMIN and HYBRID, the results are very similar for~$d=3$ and~$d=6$ in all respects (energy, time, and QoS). Since these deadline values are the tightest, the HYBRID strategy has less scope for energy-efficient offloading choices and these tend to be similar to the choices made by TMIN. For $d>6$ there are noticeable differences though, highlighting that gains in energy consumption can be attained by the HYBRID strategy compared with TMIN at the cost of a slight penalty in QoS. The HYBRID strategy leads to a $10$--$20\%$ decrease in energy consumption compared to TMIN, while the QoS is only marginally higher for TMIN, at most by~$5\%$. At the same time, the execution time is slightly higher for HYBRID, given that the strategy does not pick the device estimated to run a job faster but, rather, the most energy-efficient among those that are estimated to comply with the job deadline. For example, when $\lambda=12$ and $d=9$, for HYBRID we observed: $13\%$ less energy consumption ($9.5$~mWh compared to ${\sim}10.8$~mWh for TMIN); jobs taking $15\%$ longer ($4.5$ s vs. $3.9$ s), but; a QoS degradation of only $2\%$ ($97\%$ vs. $99\%$). The behavior of TMIN and HYBRID is compared in more detail in Figure~\ref{fig:distribution}, regarding the fraction of offloaded jobs (\ref{fig:distribution:share}, left) and the fraction of jobs executed per device (\ref{fig:distribution:device}, right). These results again illustrate that there is no significant difference between both strategies for the tighter deadline of $d=3$. As the value of~$d$ grows, however, the offloaded job ratio tends to grow and be significantly higher for the HYBRID strategy, whereas there are only small variations for TMIN for each value of~$\lambda$. When~$\lambda=12$ for instance, the offloading ratio increases progressively in the case of HYBRID as $d$ grows from~${\sim}40\%$ when~$d=3$ up to~${\sim}80\%$ when~$d=12$, while for TMIN it is ${\sim}40\%$ for all values of~$d$.
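Schematically, the two selection rules compared above differ only in how the set of candidate hosts is ranked. The following sketch abstracts from \textsc{Jay}{}'s actual interfaces; each candidate carries hypothetical profiler estimates \texttt{t\_est} (completion time, in seconds) and \texttt{e\_est} (energy) for the job at hand:
\begin{verbatim}
def tmin(candidates, deadline=None):
    # TMIN: minimize the estimated completion time (deadline unused).
    return min(candidates, key=lambda h: h["t_est"])

def hybrid(candidates, deadline):
    # HYBRID: among the hosts expected to meet the deadline, pick the
    # most energy-efficient; fall back to TMIN if none is expected to.
    ok = [h for h in candidates if h["t_est"] <= deadline]
    return min(ok, key=lambda h: h["e_est"]) if ok else tmin(candidates)
\end{verbatim}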
If we look at the fraction of executed jobs per device (Figure~\ref{fig:distribution:device}), we see that they are overall in line with the baseline results, i.e., faster devices (which are also more energy-efficient) execute more jobs. For instance, Google Nexus 9, the slowest device, executes the fewest jobs, while Google Pixel 4, the fastest one, executes the most jobs. The total spread of jobs is more uniform in the case of TMIN than with HYBRID, while HYBRID tends to favor Google Pixel 4 significantly for $d\ge6$. \trackchange{Revised explanation, in line with reviewer comment.}{ These aspects are illustrated in particular for the flow of jobs when~$\lambda=12$ and~$d=9$, again comparing HYBRID (a) and TMIN (b), in Figure~\ref{fig:exec}. The job distribution is noticeably more biased towards Google Pixel 4 in the case of HYBRID: Google Pixel 4 executes $56\%$ of all jobs for HYBRID compared to $33\%$ for TMIN. TMIN offloads jobs more uniformly to the other devices (note that the size of the squares grows logarithmically), even if Nexus 9 only executes local jobs in the case of TMIN. } \begin{figure}[h!] [width=0.9\columnwidth]{results/bw/new_avg_prediction_error.pdf} \caption{Android devices scenario -- average error.} \label{fig:avg_prediction_error} \end{figure} \begin{figure*}[h!] [width=\textwidth]{results/bw/new_total_energy_cloudlet_v2.pdf} \caption{Cloudlet scenario -- energy, time and QoS.} \label{fig:total_energy_cloudlet} \end{figure*} \begin{figure*}[h!] \begin{subfigure}{0.45\textwidth} [width=\textwidth]{results/bw/new_execution_distribution_cloudlet.pdf} \caption{Local, device, and cloudlet job share.} \label{fig:execution_distribution_cloudlet} \end{subfigure} \begin{subfigure}{0.45\textwidth} [width=\textwidth]{results/bw/new_device_execution_distribution_greys_v2_cloudlet.pdf} \caption{Jobs per device.} \label{fig:device_execution_distribution_cloudlet} \end{subfigure} \caption{Cloudlet scenario -- job distribution.\label{fig:dist_cloudlet}} \end{figure*} We finish our analysis by highlighting estimation errors by \textsc{Jay}{}'s system profiler. Figure~\ref{fig:avg_prediction_error} depicts the average relative error in the estimated time for job execution, calculated as the difference between estimated and real execution time, expressed as a percentage of the real execution time. As shown, the values are on average negative, meaning that the estimates tend to be pessimistic. In amplitude, they are less than~$20$\% except for HYBRID when~$\lambda = 12,9$ and $d \geq 9$, and both strategies when $\lambda = 3$. This is partly explained by the fact that the estimate $\TE$ for execution time of a job at a \textsc{Jay}{} instance accounts for the current number of jobs including the current one, but not the time already spent executing the current job. This behavior, which can be mitigated in future developments of the \textsc{Jay}{} prototype, is amplified in configurations where one of the devices obtains a high share of jobs (Google Pixel 4 in the case of HYBRID, for $\lambda = 12,9$ with $d \geq 9$). On the other hand, when the system has a high load and is unable to cope (the case of $\lambda=3$), estimates also tend to be less reliable. \subsection*{Extended scenario using cloudlet} We now present results for an extension of the previous experiment that introduces a cloudlet server. The cloudlet acts only as a \textsc{Jay}{} job executor, while job generation proceeds as before for the Android devices.
As before, the HYBRID and TMIN strategies were evaluated but with the possibility of offloading from the devices to the cloudlet. We consider, in addition, the {\rm SERVER} strategy that uses the cloudlet as a standalone server that executes all jobs. We present measurements similar to the previous scenario and highlight the impact of the cloudlet. \begin{figure*}[h!] \begin{subfigure}{0.45\textwidth} [width=\textwidth]{scripts/hybrid_12_9_matrix_c.pdf} \caption{HYBRID.} \label{fig:cexec:HYBRID} \end{subfigure} \begin{subfigure}{0.45\textwidth} [width=\textwidth]{scripts/tmin_12_9_matrix_c.pdf} \caption{TMIN.} \label{fig:cexec:TMIN} \end{subfigure} \caption{Cloudlet scenario -- flow of jobs for $\lambda=12$ and $d=9$ (number of jobs).} \label{fig:cexec} \end{figure*} In Figure~\ref{fig:total_energy_cloudlet} we provide plots for energy consumption, execution time and QoS. Compared to the results of the scenario without cloudlet (cf. Figure~\ref{fig:total_energy}), an increase in energy consumption as well as in QoS is noticeable for the TMIN and HYBRID strategies. This would be expected, given that (in line with the baseline results) the cloudlet is the most time-efficient device but also the least energy-efficient one. The energy consumption is significantly higher, something that will always be true even if the cloudlet executes no jobs (in any case it will still actively consume energy). For example, the lowest energy consumption value is $23$~mWh for HYBRID and TMIN when $d=\lambda=3$, exceeding the value of the most energy-hungry configuration of the previous scenario, $12$~mWh for $\lambda=d=12$ in Figure~\ref{fig:total_energy}. On the other hand, the cloudlet improves QoS for HYBRID and TMIN significantly: it is now above $90\%$ for every configuration with $d\ge6$, and even $46\%$--$67\%$ for $\lambda=12,9,6$ when $d=3$, in comparison to the $10$--$15\%$ observed previously. As before, QoS is very poor only in the extreme $\lambda = d = 3$ case. Looking at the results for the SERVER strategy, they are generally worse than those obtained for HYBRID and TMIN. This is true for energy consumption in all configurations, and also for QoS, except for configurations with $\lambda=12$ where the SERVER strategy becomes competitive. This means that job execution/offloading by the Android devices pays off compared to using the cloudlet alone, much like in the previous scenario where it paid off when compared to using local job execution only. In this cloudlet scenario, energy consumption savings resulting from the use of HYBRID vs. TMIN can be more pronounced. In all configurations, HYBRID consumes less energy than TMIN, and the savings are noticeably more pronounced as~$d$ increases, e.g., for~$\lambda=12$, TMIN consumes just $4\%$ more energy when $d=3$ but $60\%$ more when $d=12$. On the other hand, on par with the decrease in energy consumption, HYBRID leads to noticeably longer job execution times as $d$ grows, e.g., again for $\lambda=12$ HYBRID causes jobs to last from $2\%$ longer when $d=3$ up to $214\%$ longer when $d=12$. The difference of behavior between HYBRID and TMIN is best understood looking at the job distribution results in Figure~\ref{fig:dist_cloudlet}, where we depict for all configurations the fractions of: (\ref{fig:execution_distribution_cloudlet}) locally executed jobs, jobs offloaded to Android devices, and jobs offloaded to the cloudlet, and; (\ref{fig:device_execution_distribution_cloudlet}) jobs per Android device and cloudlet.
Besides the fact that HYBRID tends to have a lower ratio of locally executed jobs, as in the previous scenario with Android devices only, the other major difference between HYBRID and TMIN is that HYBRID tends to offload significantly fewer jobs to the cloudlet than TMIN. Looking at the distribution per device, it is clear that with HYBRID Google Pixel 4 is the device executing the most jobs, whereas TMIN privileges the cloudlet. In fact, in some configurations, the fraction of jobs executed by the cloudlet can be negligible. The job flow for both strategies when $\lambda=12$ and $d=9$ is illustrated in Figure~\ref{fig:cexec}, and highlights this trend in one of the more extreme cases: the cloudlet executes less than~$1\%$ of all jobs for HYBRID while Google Pixel 4 executes~$57\%$, whereas for TMIN the fractions are $64\%$ for the cloudlet and $10\%$ for Google Pixel 4 (note that, as before, the size of the squares grows logarithmically). \begin{figure}[h!] [width=0.9\columnwidth]{results/bw/new_avg_prediction_error_cloudlet.pdf} \caption{Cloudlet scenario -- average error.} \label{fig:avg_prediction_error_cloudlet} \end{figure} Estimation errors by \textsc{Jay}{}'s system profiler are presented in Figure~\ref{fig:avg_prediction_error_cloudlet} for the cloudlet scenario, with trends similar to the Android devices' scenario (Figure~\ref{fig:avg_prediction_error}). The main difference is that estimation errors are not as high for the $\lambda=3$ case. For this configuration, the overall system copes much better with the high load in terms of job execution times, even if QoS is still low, and execution time estimation errors tend to be lower as a result. \trackchange{Additional Scenarios}{ \subsection*{Additional Scenarios} \begin{figure*}[h!] [width=\textwidth]{results/new_total_energy_extra.pdf} \caption{Additional scenarios -- energy, time and QoS.} \label{fig:total_energy_extra} \end{figure*} \begin{figure*}[h!] \begin{subfigure}{0.45\textwidth} [width=\textwidth]{results/new_execution_distribution_extra.pdf} \caption{Local and device job share.} \label{fig:execution_distribution_extra} \end{subfigure} \begin{subfigure}{0.45\textwidth} [width=\textwidth]{results/new_device_execution_distribution_greys_extra.pdf} \caption{Jobs per device.} \label{fig:device_execution_distribution_extra} \end{subfigure} \caption{Additional scenarios -- job distribution.\label{fig:dist_extra}} \end{figure*} A final set of results is now presented, considering again a network formed by mobile devices alone. We consider the effect of having a network with a Femtocloud configuration (FC), in which jobs are generated and scheduled by a single external host and the mobile devices act only as workers, in contrast to the mobile edge cloud configuration (MEC), where all devices act as job generators and workers. Furthermore, we present results for additional offloading strategies, {\rm BALANCED} and ${\rm LF[f]}$ (cf. Section~\ref{s:model}), that may potentially lead to different compromises in terms of time, energy, and job distribution among hosts. {\rm BALANCED} applies both in the FC and MEC cases, whereas ${\rm LF[f]}$ (``local-first'') by definition only applies in the MEC case (in the FC case, the external host does not act as a worker, hence it does not execute jobs locally). Jobs were generated with the same methodology as in the previous experiments for the MEC configuration, but results were gathered only for a job inter-arrival time of~$\lambda=9$ and deadlines~$d=6,9$.
In the FC case, similar deadlines are considered, but the external host generates jobs with a $\lambda=\frac{9}{5}$ inter-arrival time, so that the overall workload is equivalent to the use of~$\lambda=9$ by all~$5$ devices in the MEC configuration. We empirically found these workload parameterisations to be illustrative of the behavior of the system for the strategies considered. As in the previous experiments, the results are presented in terms of: average energy consumption, average completion time and QoS (Figure~\ref{fig:total_energy_extra}), and; job offloading rates and job share per device (Figure~\ref{fig:dist_extra}). \subsubsection*{Femtocloud setting} Looking first at the FC results, the energy consumption values are clearly the lowest, as shown in Figure~\ref{fig:total_energy_extra} (top-left). Compared to the MEC scenario, the energy consumption values for TMIN and HYBRID are $14$--$20\%$ lower for $d=6$ and $25\%$ lower for~$d=9$. These gains, however, come at the cost of higher execution times and lower QoS: for $d=6$, execution times are $7$--$13\%$ higher and QoS is $4$--$7\%$ lower; for~$d=9$, execution times are $11$--$12\%$ higher but the QoS differences are small, lower than $2\%$ in absolute value. Thus, the results are mixed, especially in the case of $d=6$. A priori, one would expect the centralised offloading decisions to be more reliable in the FC configuration, since it is free from the interference that arises from concurrent offloading decisions by all devices in the MEC case. However, in the MEC case jobs can execute locally (e.g., for $d=6$ the share of local jobs is $56\%$ for TMIN and $47\%$ for HYBRID, as depicted at the bottom of Figure~\ref{fig:execution_distribution_extra}), and estimation errors tend to be lower for locally executed jobs. In the FC case, all jobs are offloaded by definition, leading to higher estimation errors. These two factors influence the behavior in different directions. \subsubsection*{${\rm LF}[f]$ strategies} By definition, ${\rm LF}[f]$ strategies try to execute as many jobs as possible locally, resorting to offloading through strategy~$f$ only when the local device is unable to cope with the deadline of a job. Accordingly, as shown in Figure~\ref{fig:dist_extra}(a), the share of locally executed jobs is significantly higher for ${\rm LF}[f]$ when compared to $f$ in almost all cases, $15$--$49\%$ more, except for TMIN when $d=6$, where the difference is negligible ($< 1\%$). The results for ${\rm LF}[f]$ strategies are otherwise indicative of energy/time/QoS trade-offs, as illustrated in Figure~\ref{fig:total_energy_extra}. This happens especially for~$d=6$. In this case, when compared to TMIN, ${\rm LF}[{\rm TMIN}]$ leads to a decrease of $9\%$ in energy consumption but also an increase of $9\%$ in execution time and a decrease of $4\%$ in QoS. This is expected, as the base strategy, TMIN, seeks to minimize execution time and thus will tend to do better in this metric as well as in QoS. Again for $d=6$, but using HYBRID as the base strategy this time, ${\rm LF}[{\rm HYBRID}]$ degrades execution time and QoS by even more, $12\%$ and~$9\%$ respectively, even if energy consumption is roughly the same ($1\%$ difference between the two).
Given that the most energy-efficient devices tend to also be the faster ones in our configuration, the degradation of execution time and QoS is expected, as with TMIN; the difference in energy consumption is only noticeable for the larger deadline value of $d=9$, where the energy consumption of ${\rm LF}[{\rm HYBRID}]$ is $9\%$ higher. \subsubsection*{The {\rm BALANCED} strategy} Finally, the {\rm BALANCED} base strategy has the overall effect of smoothing the load distribution among devices, as intended; recall (from Section~\ref{s:model}) that the BALANCED strategy makes a random choice among devices that are estimated to comply with a job's deadline. Examining the numbers for the plot in Figure~\ref{fig:dist_extra}(b), for configurations that employ TMIN and HYBRID as a base or fallback strategy, the average shares of the jobs for the Xiaomi Mi 9T and Pixel 4 devices combined (the two devices that execute the most jobs) are $64\%$ for $d=6$ and $63\%$ for $d=9$. In comparison, for configurations that employ BALANCED as a base or fallback strategy, the combined share of these two devices is $3\%$ lower ($61\%$) for $d=6$ and, more noticeably, $10\%$ lower for $d=9$ ($53\%$). In more detail, for $d=9$, the average individual share grows for all of the three least-used devices in the case of BALANCED: from $3\%$ to $6\%$ for Nexus 9, from $12\%$ to $16\%$ for Galaxy S7e, and from $21\%$ to $25\%$ for Galaxy Tab S5e. At the same time, the average share in the case of BALANCED drops from $26\%$ to $24\%$ for Xiaomi Mi 9T, and from $37\%$ to $29\%$ for Pixel 4. Unlike the cases for the base strategies {\rm TMIN} and {\rm HYBRID}, the results for {\rm BALANCED} do not exhibit a clear trend with respect to energy/time/QoS trade-offs (Figure~\ref{fig:total_energy_extra}). All 6 executions per configuration use the same seed to guarantee a repeatable job generation pattern. The particular job pattern may be benefiting or hurting the behavior of {\rm BALANCED} in subtle ways, according to the configuration parameters and the actual random choices made by the {\rm BALANCED} strategy during execution. For instance, for $d=6$ and the FC setting, the {\rm BALANCED} strategy consumes $8$--$10\%$ more energy than TMIN and HYBRID, execution times are $5$--$6\%$ faster, and QoS is $4\%$ higher. The trend is, however, roughly reversed in the MEC setting: $9$--$11\%$ less energy, $10$--$12\%$ slower execution times, and a $7\%$ lower QoS value. } \section{Introduction\label{s:intro}} The last decade witnessed an impressive evolution in the storage and processing capabilities of mobile devices. Besides traditional processing cores, these microprocessors feature multiple GPU cores and also so-called neural cores optimized for machine-learning applications such as deep learning, and have reached performance levels comparable to laptop and even some desktop analogs~\cite{mobile_processors}. Despite these advancements, some computational jobs are too demanding for mobile devices. Mobile cloud computing~\cite{FERNANDO201384} has traditionally tackled this problem by offloading computation and data generated by mobile device applications to cloud infrastructures. This move spares the battery in the devices and, in principle, speeds up computation, as highly available, elastic cloud infrastructures can adapt to the computing and storage demands of the jobs spawned by the devices.
This offloading is, however, not without problems. Many mobile applications involve the processing of locally produced data (e.g., video), and uploading such large volumes of data to cloud infrastructures is time-consuming and may not even be feasible from a QoS point of view due to the high communication latencies involved. Also, from an energy point of view, offloading jobs and/or data to cloud infrastructures is globally highly inefficient. Mobile edge clouds~\cite{edgeclouds} and cloudlets~\cite{cloudlets}, on the other hand, try to harness the resources of local networks of devices and/or small servers at the edge, using device-to-device communication technologies such as Wi-Fi and Wi-Fi Direct, to perform demanding computational jobs, taking advantage of data locality to minimize latency and global energy consumption. In this approach, a given job is offloaded to a mobile device or a cloudlet in the network vicinity of the originating mobile device. The two approaches can be unified in a single, multi-tier architecture (Fig.~\ref{fig:network}), with: (a) local networks of devices (Tier 1), with less capable processing but fast and privileged access to raw data; (b) cloudlets directly accessible from the devices, with more processing muscle and storage (Tier 2), and; (c) traditional cloud infrastructures, featuring the highest performance and storage resources (Tier 3). \begin{figure}[t!] \centering [width=0.9\columnwidth]{images/Configurations_Simple_v3.pdf} \caption{A hybrid cloud environment.} \label{fig:network} \vspace{-0.2cm} \end{figure} Given this architecture and a mobile application that spawns computational jobs, we consider the problem of offloading these jobs over the tiers in such a way as to optimize runtime metrics such as total execution time, global energy consumption, and fulfillment of QoS requirements. In general, the decision to offload (or not) a job is supported by knowledge of observables reported by the participating devices and servers or inferred from data exchanges, namely available network bandwidth, computational load at each device and server, and the battery status of the devices. We previously introduced \textsc{Jay}{}~\cite{fmec20_jay} as a tool to instantiate and experiment with different such cloud configurations, offloading strategies, and mobile applications. In that paper we evaluated only latency-aware offloading strategies in several cloud configurations, from mobile edge clouds formed by Android devices up to 3-tier hybrid clouds, i.e., also including cloudlets and infrastructure cloud server instances. In this paper we put forward a unifying model for this architecture upon which we can precisely specify the infrastructure parameters (e.g., cloud tiers and topology), the application parameters (the rate and size distribution of jobs, offloading strategy, job deadlines), the observables (as described above), and the runtime metric function to optimize. We then use~\textsc{Jay}{} with the same object detection application as in~\cite{fmec20_jay} but with different model instances that include new QoS restrictions (jobs have deadlines) and different optimization functions such as total execution time, per-device energy consumption, and total energy consumption.
The cloud configurations we experiment with in this paper do not include tier-3 centralised cloud servers (e.g., Google Cloud, Amazon Web Services, or Microsoft Azure), as we would not be able to directly measure vital runtime observables such as energy consumption, or at least infer them with enough confidence from the underlying virtualisation infrastructure. Thus, the main contributions of this paper are the following: \begin{description}[style=unboxed,leftmargin=0cm] \item [1.] a model that specifies computational scenarios over hybrid edge/cloudlet/cloud topologies; \item [2.] a complete \textsc{Jay}{} instance of the model that enables the execution of mobile applications over such network topologies through the definition of offloading strategies and optimization functions coupled with observables gathered at runtime, and; \item [3.] a case study with an object detection application that generates jobs with deadlines while trying to optimize execution time, energy consumption, or both. \end{description} \textsc{Jay}{} and the model implementation presented here are available on GitHub~\footnote{\url{https://github.com/jqmmes/Jay}}. The remainder of this paper is structured as follows. \trackchange{Related work appears right after the introduction.}{Related work is discussed in Section~\ref{s:rwork}.} Section~\ref{s:model} provides a description of the model we use to describe the aforementioned hybrid architecture and the computations therein. Section~\ref{s:jay} describes the \textsc{Jay}{} framework. Section~\ref{s:setup} presents the scenarios we model in this paper and the experimental setup. Section~\ref{s:eval} presents the results from the experiments and discusses their implications. Finally, Section~\ref{s:conclusion} ends the paper with concluding remarks and a discussion of future work. \subsection*{System instantiation} We now present a sample system instantiation of \textsc{Jay}{}, later evaluated in the paper. It is composed of: a worker based on a FIFO job queue; a configurable scheduler that may implement any of the offloading strategies discussed in our system model, and; a system profiler that estimates time and energy consumption due to computation and network transmission at the local instance and aggregates it with the state information disseminated by remote instances. The result is a global snapshot of the state of the system that can be used to make adaptive offloading decisions. A summary of the model instantiation and associated notation is given in Table~\ref{t:notation}. \input{notation-table} \subsubsection*{Profiler overview} The profiler is responsible for the state estimation driving adaptive offloading, with the functionality illustrated in Figure~\ref{fig:profiling}. \begin{figure*}[h!] [width=0.7\textwidth]{images/profiling.pdf} \caption{State estimation by the system profiler.\label{fig:profiling}} \end{figure*} The first aim of the profiler is to estimate and disseminate the state of the local instance ($\hL$) and to aggregate similar state reported by the remote instances ($h\neq\hL$), as shown at the lower right in the figure. The second aim is to use the state information for all available hosts ($\hL$ and other hosts $h\neq\hL$) to compute, for every host, estimates of the time ($\TI$, $\TC$, and $\TO$) and energy ($\EI$, $\EC$, and $\EO$) to run a job, and to feed that information to the local scheduler, as illustrated in the lower left portion of the figure. The locally derived information comprises three components and their estimators (modules), also shown in the figure: $s_{\rm J}$, the state of local jobs, derived by the job state estimator; $s_{\rm E}$, the energy consumption state, derived by the energy state estimator; and~$s_{\rm B}$, the bandwidth for communication between~$\hL$ and every other host, derived by the bandwidth estimator. To accomplish their task, the estimators feed on notification events provided by the local worker and local broker, regarding job and transmission events respectively, and on runtime profiling of energy consumption and bandwidth measurements. Note that only $s_{\rm J}$ and~$s_{\rm E}$ need to be disseminated among instances, whereas the~$s_{\rm B}$ information for all hosts is derived locally at each instance.
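For concreteness, the following sketch models this state record in Kotlin (a JVM language, in line with \textsc{Jay}{}'s Android/JVM targets). It is a minimal illustration of the dissemination scheme just described; all type and field names are our own assumptions, not \textsc{Jay}{}'s actual API.

\begin{verbatim}
// Illustrative sketch (not Jay's actual API): the per-host state
// components described above. s_J and s_E are disseminated among
// instances; s_B is derived locally for each remote peer.
data class JobState(val queued: Int, val avgExecMs: Double)            // s_J
data class EnergyState(val pC: Double, val pU: Double, val pD: Double) // s_E, in W
data class BandwidthState(val upBps: Double, val downBps: Double)      // s_B

data class HostSnapshot(
    val host: String,
    val jobs: JobState,     // disseminated periodically
    val energy: EnergyState // disseminated periodically
)
\end{verbatim}

A scheduler can then combine a \texttt{HostSnapshot} per host with the locally derived \texttt{BandwidthState} to produce the per-host time and energy estimates discussed next.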
\subsubsection*{Job computation} The worker executes jobs in order of arrival, one at a time, non-preemptively and until completion. Pending jobs are kept on hold in a FIFO queue. This scheme is not adaptive to deadlines or other job characteristics. On the other hand, it allows for a simple estimation of the termination time of a released job that is not affected by the arrival of new jobs or, more generally, by overall variations in the system workload. Assume that job~$j_1$ starts running at time $t$, that jobs $j_2, \ldots, j_n$ are queued, and that there is an estimate~$\Delta_i$ for the time~$j_i$ takes to execute. Then an estimate for the termination time of~$j_i$ is simply~$t + \Delta_1 + \ldots + \Delta_i$. Note that this ``stable'' estimate, derived from limited information, would be impossible to achieve if we were to resort, for instance, to an earliest-deadline-first (EDF) scheme in preemptive or non-preemptive form. In that case, the arrival of new jobs could potentially invalidate a previous estimate made during an offloading decision, and raise the need to model/estimate a worst-case behavior for job arrivals. The worker interacts with the system profiler by supplying a notification whenever a job is queued, starts, and ends. With this information, the profiler can compute an estimate of the job execution time and the worker's queue size and composition. From these quantities we can in turn derive estimates for~$\TC$~and~$\TE$ (cf. Figure~\ref{fig:model}). Feeding on the local worker information, the current profiler estimates~$\TE$ using a moving average of the execution time of jobs, and $\TC$ as: $$ \TC(h) = \left(n(h) +1\right) \times \TE(h) $$ The formula above simply expresses that the time to execute a job~$j$ has to account for the wait for the~$n(h)$ jobs already queued or running at host~$h$, plus the time to actually execute~$j$. The estimate implicitly assumes, however, that job execution time tends to be uniform, i.e., there is only one class of job and their execution is regular. This is the case for the jobs we consider for evaluation later, but the scheme could be generalised, e.g., to handle several classes of jobs by accounting for the number of jobs per class, and irregular jobs by accounting for different job input sizes and/or considering execution time percentiles rather than a plain moving average.
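The bookkeeping above can be stated compactly in code. The sketch below is a minimal illustration under the stated assumptions (a single job class and an exponential moving average); it is not the prototype's actual code.

\begin{verbatim}
// Minimal sketch of the completion-time bookkeeping: T_E is kept as an
// exponential moving average of observed execution times, and
// T_C = (n + 1) * T_E accounts for one additional job behind the queue.
class JobTimeEstimator(private val alpha: Double = 0.2) {
    private var avgExecMs = 0.0 // moving average for T_E
    private var n = 0           // jobs currently queued or running

    fun onJobQueued() { n++ }
    fun onJobEnded(execMs: Double) {
        n--
        avgExecMs = if (avgExecMs == 0.0) execMs
                    else alpha * execMs + (1 - alpha) * avgExecMs
    }
    fun estimateTE() = avgExecMs
    fun estimateTC() = (n + 1) * avgExecMs // wait for n jobs, then execute
}
\end{verbatim}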
\subsubsection*{Network transmission times} In order to estimate network transmission times, the profiler issues periodic ping (round-trip) messages to all hosts in the network. The information gathered from these messages allows the bandwidth estimator at each host~$h$ to maintain a moving average for the upload and download bandwidth measures, $\BI(h)$ and~$\BO(h)$, respectively. These estimates are further refined with information gathered from broker notifications regarding the observed bandwidths when jobs (and their inputs) are uploaded to, or their outputs are downloaded from, a remote \textsc{Jay}{} instance. Assuming that the sizes of the inputs ($|j|_{\rm I}$) and outputs ($|j|_{\rm O}$) of a job $j$ are known, the profiler estimates $\TI$ and $\TO$ as follows: $$ \TI(h) = |j|_{\rm I} \,/\, \BI(h) $$ $$ \TO(h) = |j|_{\rm O} \,/\, \BO(h) $$ \subsubsection*{Power consumption estimates} The energy state monitor is responsible for maintaining running estimates for the power cost terms~$\PC$,~$\PU$, and~$\PD$, which correspond, respectively, to the power consumption when executing jobs at, uploading data from, and downloading data to the local host. In contrast to other approaches, \textsc{Jay}{} produces estimates without resorting to any a priori, usually device-specific, derived model for power consumption. As such, power consumption estimates may be cruder but, on the other hand, reflect more closely the energy dynamics of the system at any given moment. At any given time, a \textsc{Jay}{} instance may be idle, performing computation, or transmitting data. This can be inferred by listening to job events from the worker and transmission events from the broker or bandwidth monitor. Whenever the worker starts a job, the active job computation status flag is enabled, meaning that the ensuing energy consumption should be reflected in the $\PC$ estimate. The same flag is disabled whenever the job ends. The~$\PU$ and~$\PD$ estimates are derived similarly, using upload and download status flags that are enabled and disabled according to the start and end events for uploads and downloads by the broker or the bandwidth monitor. The power consumption estimates are updated as moving averages in accordance with the values of the status flags, and only while exactly one of the flags is active, so that consumption can be attributed unambiguously. Power consumption measures are obtained in a device-specific manner through an energy monitor. For instance, by measuring the current~$I$ and the voltage~$V$ in a device, a simple estimate for the power consumption would be~$P = I \times V$. With estimates for~$\PC$,~$\PU$, and~$\PD$ plus~$\TE$,~$\TI$, and~$\TO$ we can in turn express the corresponding energy costs for jobs as follows: $$ \EC(h) = \TE(h) \times \PC(h) $$ $$ \EI(h) = \TI(h) \times \left( \PU(\hL) + \PD(h)\right) $$ $$ \EO(h) =\TO(h) \times \left( \PD(\hL) + \PU(h)\right) $$ Note that, in line with the starting discussion for the system model, $\EC$ depends on~$\TE$ (the effective computation time at~$h$) rather than~$\TC$ (the entire time span the job is at~$h$), while~$\EI$ and~$\EO$ reflect the energy costs both at~$\hL$ and~$h$.
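A sketch of this flag-based accounting is given below; the names and the sampling interface are illustrative assumptions rather than the prototype's API.

\begin{verbatim}
// Illustrative sketch of the flag-based power accounting. A power
// sample P = I * V contributes to exactly one moving average (P_C,
// P_U, or P_D), and only while a single activity flag is set, so that
// consumption can be attributed unambiguously.
class PowerEstimator(private val alpha: Double = 0.2) {
    var computing = false; var uploading = false; var downloading = false
    var pC = 0.0; var pU = 0.0; var pD = 0.0 // running estimates, in W

    fun onPowerSample(watts: Double) {
        if (listOf(computing, uploading, downloading).count { it } != 1) return
        when {
            computing   -> pC = alpha * watts + (1 - alpha) * pC
            uploading   -> pU = alpha * watts + (1 - alpha) * pU
            downloading -> pD = alpha * watts + (1 - alpha) * pD
        }
    }
}

// Energy terms for a job at host h, transcribing the formulas above:
// E_C = T_E * P_C(h), E_I = T_I * (P_U(local) + P_D(h)),
// E_O = T_O * (P_D(local) + P_U(h)).
fun eC(tE: Double, pCh: Double) = tE * pCh
fun eI(tI: Double, pULocal: Double, pDh: Double) = tI * (pULocal + pDh)
fun eO(tO: Double, pDLocal: Double, pUh: Double) = tO * (pDLocal + pUh)
\end{verbatim}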
\subsection*{Architecture} \begin{figure*}[h!] [width=0.8\textwidth]{images/task_lifecycle_v2.pdf} \caption{Life cycle of a job in \textsc{Jay}{}.} \label{fig:job_lifecycle} \end{figure*} The architecture of \textsc{Jay}{} is illustrated in Figure~\ref{fig:job_lifecycle}. A single \textsc{Jay}{} instance runs on a network host, comprising four services: \emphsf{Broker}, \emphsf{Scheduler}, \emphsf{Worker}, and \emphsf{System Profiler}. The \emphsf{Broker} service mediates network interaction with external peers, wrapping up any necessary requests and data interchange with the other internal \textsc{Jay}{} services on the same local instance. Local applications interact with the broker for job execution, and broker-to-broker interaction occurs between \textsc{Jay}{} instances for job offloading and dissemination of state information. The internal state of each \textsc{Jay}{} instance over time is maintained by the \emphsf{System Profiler} service, reflecting, for instance, energy consumption, current job workload, and network transmissions. All instances disseminate their state periodically, hence the profiler is also aware of the (latest known) state of remote \textsc{Jay}{} instances. The system profiler (on each instance) is then able to construct a global snapshot of all \textsc{Jay}{} instances in the network at any given time. The goal is to use this dynamic global snapshot to guide offloading decisions while adapting to evolving runtime conditions. Jobs are dealt with by the \emphsf{Scheduler} and \emphsf{Worker} services. The scheduler is responsible for choosing the host on which to run a job submitted to the local instance by an application; in particular, it implements the offloading strategy. The scheduler's choice for assigning a job is taken from the set of all hosts having an active worker service, and can be based on state information as reported by the system profiler. Note that this set of active workers may include the local instance if it has an active worker service; the local worker state is likewise observed by the profiler and included in the construction of the global state snapshot. The worker is in turn responsible for the actual execution of jobs, regardless of whether they are local or incoming from other hosts through offloading requests. \textsc{Jay}{} instances running only one of the scheduler or worker services merely act as job execution clients or servers, respectively. On the other hand, instances may employ different implementations for the scheduler and/or worker. \subsection*{Job lifecycle} In line with the interplay between \textsc{Jay}{} services just described, we can trace the lifecycle of a job in terms of the stages indicated in Figure~\ref{fig:job_lifecycle}, as follows: \begin{description}[style=unboxed,leftmargin=0cm] \item[Job release:] an application first releases the job by placing an execution request to the broker service (step \emphsf{1}). For simplicity, we will only consider the case where the application resides on the same host as the broker, even if \textsc{Jay}{}'s architecture does not impose such a constraint and other setups may be interesting from an application standpoint, e.g., to accommodate jobs fired by IoT devices. \item[Offloading decision:] the job execution request is passed by the broker over to the local scheduler {(\emphsf{2})} to determine the host that should execute the job. The scheduler's decision (\emphsf{3}) may be that the job either executes locally (the local host was chosen) or needs to be offloaded to the target host. \item[Job execution:] for local execution, the broker passes the job for execution to the local worker (\emphsf{4a}), and when the job completes {(\emphsf{5a})} the job outputs are delivered to the application {(\emphsf{6a})}.
In the offloading case, the job is sent to the target host (\emphsf{4b}) for execution (\emphsf{5b}) and will, at some point, produce the job outputs (\emphsf{6b}), which are then returned to the originating host (\emphsf{7b}) and, finally, delivered to the application (\emphsf{8b}). \end{description} \section{The \textsc{Jay}{} framework} \label{s:jay} \input{jay-arch} \input{jay-adapt} \section*{Availability of data and materials} \textsc{Jay}{} is available as open-source software at \url{https://github.com/jqmmes/Jay/}. \section*{Competing interests} The authors declare that they have no competing interests. \section*{Funding} This work was partially funded by project SafeCities (POCI-01-0247-FEDER-041435) from Fundo Europeu de Desenvolvimento Regional (FEDER), through COMPETE 2020 and Portugal 2020. \section*{Authors' contributions} Joaquim Silva programmed \textsc{Jay}{} and conducted the evaluation experiments. All authors have participated in the conceptual design of \textsc{Jay}{} and the associated experiments, data analysis, and manuscript writing. \section*{Acknowledgements} The authors wish to thank the anonymous reviewers for the helpful feedback. \section*{Authors' information} All authors are affiliated with the Department of Computer Science, Faculty of Sciences, University of Porto (DCC/FCUP), and the Center for Research in Advanced Computing Systems at INESC TEC (CRACS/INESC-TEC). Joaquim Silva is a PhD student in Computer Science at DCC/FCUP, Eduardo R. B. Marques is an assistant professor at DCC/FCUP, Luís Lopes is an associate professor at DCC/FCUP, and Fernando Silva is a full professor at DCC/FCUP. All authors are researchers at CRACS/INESC TEC. \bibliographystyle{vancouver}
\section{System Model\label{s:model}} \begin{figure*}[h!] \centering [width=0.8\textwidth]{images/model2.pdf} \caption{Illustration of system model.\label{fig:model}} \end{figure*} \trackchange{Added subsection header for a clearer text organisation.}{ \subsection*{Overview} } We now put forward the system model for our adaptive offloading framework. The overall rationale is as follows.
We consider a set of hosts connected over a network, such that each host may generate and/or execute soft real-time jobs over time. A host may then execute a mixture of local jobs and offloaded jobs on behalf of other hosts, but it can also be that a host only generates or only executes jobs. In this setting, offloading decisions can be informed and adaptive to runtime conditions. Information broadcast amongst hosts regarding variables such as network bandwidth and latency, host job load, and available energy provides the required feedback. We consider that each job has a soft real-time nature, meaning that it has an associated relative deadline expressing the maximum tolerable completion time for good QoS, and also that it requires communication among hosts in the case of offloading, to supply job inputs (before the job's computation can proceed) and obtain job outputs (when the computation is done). \trackchange{}{ In what follows, we first lay out the base model concerning job characteristics and the time and energy costs for offloading, and then present sample offloading strategies over that model. } \trackchange{Added subsection header for a clearer text organisation.}{ \subsection*{Base definitions} } We associate each job~$j$ with a release time~$r(j)$, a termination time~$t(j) > r(j)$, a relative deadline~$d(j)$, an originating host~$\hL$ from a set of hosts~$\Hosts$, and, finally, a computation host~$\hE$ also in~$\Hosts$. When~$j$ is clear in context, these properties are simply denoted respectively as~$r$,~$t$,~$d$,~$\hL$, and~$\hE$. We say that the deadline of the job is fulfilled if~$t \le r + d$. Furthermore, we say the job executes locally at~$\hL$ when~$\hL = \hE$, and that it is offloaded from~$\hL$ to~$\hE$ when~$\hL \neq \hE$. In the scenario of runtime adaptive offloading, an offloading decision is made for job~$j$ at its release time~$r$ to determine~$\hE$. We assume that decision to be computed locally (at $\hL$) and to have negligible overhead. In the case of offloading ($\hL \neq \hE$), we assume that network communication needs to take place between~$\hL$ and~$\hE$ for the inputs of~$j$ (data but possibly also code) to be available at~$\hE$ before~$j$ starts, and, later, once the computation of~$j$ terminates, for the outputs to be transmitted back from~$\hE$ to $\hL$. This is illustrated in Figure~\ref{fig:model}, along with the formulation for time and energy overheads during offloading. We consider the offloading decision to be informed by estimates of completion time and energy consumption as follows. For each host~$h \in \Hosts$ (including $\hL$) we model the estimated completion time and energy consumption of a given job $j$, $\ttime(h)$ and $\ener(h)$, respectively, as: $$ \ttime(h) = \TI(h) + \TC(h) + \TO(h) $$ $$ \ener(h) = \EI(h) + \EC(h) + \EO(h) $$ where time ($\ttime$) and energy ($\ener$) are factored into a sum of three terms: $\TI$, $\EI$: the (time and energy) costs of uploading the inputs of job~$j$; $\TC$, $\EC$: the (time and energy) costs of the actual computation of job~$j$, and; $\TO$, $\EO$: the (time and energy) costs of downloading the outputs of job~$j$. Note that, given that there is no need for network communication when~$h = \hL$, we should necessarily have $\TI(\hL) = \EI(\hL) = \TO(\hL) = \EO(\hL) = 0$.
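This decomposition can be transcribed almost literally into code. The sketch below is a direct illustration of the definitions above, with names of our own choosing rather than the model's notation.

\begin{verbatim}
// Per-host cost decomposition: T(h) = T_I + T_C + T_O and
// E(h) = E_I + E_C + E_O; for the local host, the four transfer terms
// are zero by definition.
data class Costs(val tI: Double, val tC: Double, val tO: Double,
                 val eI: Double, val eC: Double, val eO: Double) {
    val time get() = tI + tC + tO   // T(h)
    val energy get() = eI + eC + eO // E(h)
}

// A job's deadline is fulfilled when t <= r + d.
fun fulfilled(r: Double, t: Double, d: Double) = t <= r + d
\end{verbatim}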
Regarding energy, our aim is not merely to account for the energy consumption at the originating host ($\hL$) of a job, but also at the computation host in the case of offloading ($\hE$). This means, first, that the network I/O expressed by~$\EI$ and~$\EO$ should account for the energy consumption both at~$\hL$ and at~$\hE$: sending inputs from~$\hL$ to~$\hE$ requires energy to be consumed by $\hL$ in uploading the inputs and by~$\hE$ in downloading them, and vice versa in the case of outputs. Since~$\EI$ and~$\EO$ depend, respectively, on the transmission times~$\TI$ and $\TO$ and on the power consumed while transmitting, we model $\EI$ and $\EO$ as follows: $$ \EI(h) = \TI(h) \times \PI(h) $$ $$ \EO(h) = \TO(h) \times \PO(h) $$ where~$\PI$ and~$\PO$ are estimates for the power consumed at both~$\hL$ and $\hE$\footnote{in these formulae, when clear from the context, the index of $h$ is omitted for the sake of simplicity}, when, respectively, sending job inputs from~$\hL$ to~$\hE$ and receiving job outputs at~$\hL$ from~$\hE$. Moreover, the $\EC$ term reflects the cost of executing the job remotely at~$\hE$, but, as illustrated in Figure~\ref{fig:model}, it should only account for an estimate of the energy consumption \emph{while}~$\hE$ is effectively performing the computation of job~$j$, for an amount of time~$\TE$, rather than the energy consumption over the total period~$\TC$, during which the job may at times be pending (e.g., waiting for other jobs in $\hE$ to complete). Thus, we write: $$ \EC(h) = \TE(h) \times \PC(h) $$ where~$\PC$ is an estimate for the power consumed at~$h$ due to the execution of the actual computation of~$j$ at~$h$. \trackchange{Added subsection header for a clearer text organisation.}{ \subsection*{Offloading strategies} } \begin{figure*}[h!] \centering [width=0.8\textwidth]{images/offloading-strategies.pdf} \caption{Illustration of offloading strategies.\label{fig:strategies}} \end{figure*} \trackchange{}{We can now express offloading strategies that may take into consideration multiple metrics to decide where a computation will take place, e.g., completion time and energy consumption. We define several such strategies, for which we present a thorough evaluation later in the paper. Figure~\ref{fig:strategies} illustrates the rationale behind the offloading strategies. The example at stake concerns an offloading decision for a job originating at host $\hL=h_0$ in an environment with four other hosts, $h_1$ to $h_4$, such that each host may have different values for the time and energy consumption estimates ($\ttime$ and $\ener$). } The simplest case is that of no offloading, which we designate as the {\rm LOCAL} strategy \trackchange{}{ (i.e., as illustrated in Figure~\ref{fig:strategies}, jobs always execute locally) }: $$ {\rm LOCAL} \quad \equiv \quad \hE = \hL $$ The choice may also be fixed to a special host $\hS \neq \hL$ (assuming $\hS$ does not generate jobs), for instance a cloud server that is responsible for executing all jobs \trackchange{}{ (in Figure~\ref{fig:strategies}, $\hS$ is host $h_1$)}, designated as the {\rm SERVER} strategy: $$ {\rm SERVER} \quad \equiv \quad \hE = \hS $$ \trackchange{}{ In the case illustrated in Figure~\ref{fig:strategies}, the {\rm LOCAL} and {\rm SERVER} strategies would not lead to optimal choices with respect to time and/or energy, as there are hosts that can execute the job faster than $\hL$ and $\hS$ ($h_2$ and $h_4$), or that may consume less energy than $\hL$ and $\hS$ to do so ($h_3$ and $h_4$). } Adaptiveness comes into play if we account for the $\ttime$ and/or $\ener$ estimates.
A strategy that seeks to minimize the completion time of a job, while ignoring energy consumption, can be defined as: $$ {\rm TMIN} \quad \equiv \quad \hE = {\rm argmin}_{h \:\in\: \Hosts} \: \ttime(h) $$ \trackchange{}{ Hence, in Figure~\ref{fig:strategies} we have $\hE = h_2$ for {\rm TMIN}. } In an analogous manner, a strategy that seeks to minimize energy consumption can be defined as: $$ {\rm EMIN} \quad \equiv \quad \hE = {\rm argmin}_{h \:\in\: \Hosts} \: \ener(h) $$ However, it will not attend to QoS requirements in terms of deadline fulfilment, i.e., $\hE$ may be chosen regardless of whether $\ttime(h_E) \le d$ or not. \trackchange{}{ This is illustrated in Figure~\ref{fig:strategies}, where $\hE = h_3$ for {\rm EMIN} but $\ttime(h_3) > d$. } Additionally, the most energy-efficient hosts will tend to be preferred. These hosts may possibly become congested with too many jobs, whose execution can therefore be much delayed in time. The above strategy can be refined meaningfully to counter these problems as: $$ {\rm HYBRID} \quad \equiv \quad \hE = {\rm argmin}_{h \:\in\: \Hosts \::\: \ttime(h) \le d} \: \ener(h) $$ balancing both time and energy costs and the fulfilment of $d$, as it expresses that $\hE$ is chosen as the host that consumes the least energy amongst those that can satisfy the job deadline ($h \:\in\: \Hosts \::\: \ttime(h) \le d$). \trackchange{}{ This is illustrated in Figure~\ref{fig:strategies}, where $\hE = h_4$ is the host with the lowest $\ener$ value among those with a $\ttime$ value not exceeding $d$ (all except $h_3$). } In the case where {\rm HYBRID} yields no result, i.e., no host is estimated to be able to satisfy the job deadline, the offloading decision may, for instance, fall back to {\rm TMIN}, trying to complete the job as fast as possible anyway, or simply cancel the job altogether. \trackchange{Extended discussion}{ The {\rm TMIN} and {\rm HYBRID} strategies may lead to an imbalance between host loads, in the sense that the most time-efficient and/or energy-efficient hosts will tend to have higher loads. This may be counter-productive if the hosts at stake are battery-constrained and we wish, for instance, to extend the battery lifetime of all hosts as fairly as possible. To spread the load more evenly, a balanced selection scheme over the hosts that can comply with a job's deadline can be defined. For instance, a balanced selection policy can be defined as: $$ {\rm BALANCED} \quad \equiv \quad \hE = {\rm random} \:\{ h \:\in\: \Hosts \::\: \ttime(h) \le d \:\} $$ i.e., $\hE$ is randomly selected among the hosts that can comply with the deadline. In this case, the offloading choice may not be energy-optimal or time-optimal, but the random choice will tend to promote a more balanced distribution of jobs. We could also refine ${\rm BALANCED}$ to be explicitly energy-aware by extending the definition with constraints on energy consumption or battery-level thresholds in addition to the job's deadline. Alternatively, a round-robin job distribution could be considered to enforce stricter load balancing. } \trackchange{}{Finally, we define a strategy that implements a form of ``restricted offloading'', a trait found in various systems discussed in Section~\ref{s:rwork}, such that jobs are only offloaded if local execution is deemed unsuitable. That is, a job is only offloaded if the local host~$\hL$ is judged to be incapable of fulfilling the job's QoS, like deadlines in our case, but also possibly other factors, e.g., those associated with network transmission in terms of energy, amount of data, or financial costs. In line with this rationale, we formulate the ``local-first'' strategy ${\rm LF}[f]$, where~$f$ is the policy to apply in the case of offloading, as: $$ {\rm LF} [f] \quad \equiv \quad \hE = \left\{ \begin{array}{@{}rl@{}} \hL, & \mbox{ if } \ttime(\hL) \le d \bigstrut \\ f, & \mbox{otherwise} \end{array} \right. $$ i.e., a job executes locally if the completion time estimate complies with the deadline; otherwise~$f$ is evaluated to decide where the job should run, e.g., we can define the ${\rm LF}[{\rm TMIN}]$, ${\rm LF}[{\rm HYBRID}]$, or ${\rm LF}[{\rm BALANCED}]$ strategies. }
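To summarise, the strategies above admit a near-literal transcription into code. The sketch below is illustrative; in particular, the {\rm TMIN} fallback for {\rm BALANCED} when no host meets the deadline is our assumption, since the model leaves that case open.

\begin{verbatim}
// Illustrative transcription of the offloading strategies: each
// strategy maps per-host estimates to a chosen execution host h_E.
data class HostEstimate(val host: String, val time: Double, val energy: Double)

fun tmin(hosts: List<HostEstimate>) = hosts.minByOrNull { it.time }
fun emin(hosts: List<HostEstimate>) = hosts.minByOrNull { it.energy }

// HYBRID: least energy among deadline-compliant hosts; falls back to
// TMIN when no host is estimated to meet the deadline (as discussed).
fun hybrid(hosts: List<HostEstimate>, d: Double) =
    hosts.filter { it.time <= d }.minByOrNull { it.energy } ?: tmin(hosts)

// BALANCED: random choice among deadline-compliant hosts; the TMIN
// fallback here is an assumption of this sketch.
fun balanced(hosts: List<HostEstimate>, d: Double) =
    hosts.filter { it.time <= d }.randomOrNull() ?: tmin(hosts)

// LF[f]: local execution if the local estimate meets the deadline,
// otherwise delegate the decision to strategy f.
fun localFirst(local: HostEstimate, hosts: List<HostEstimate>, d: Double,
               f: (List<HostEstimate>, Double) -> HostEstimate?) =
    if (local.time <= d) local else f(hosts, d)
\end{verbatim}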
\section*{Related work}\label{s:rwork} Many systems have been proposed to leverage the power of nearby devices to increase the computation capacity of a single mobile device. Cloudlets, introduced by Satyanarayanan et al.~\cite{Satyanarayanan2009}~\cite{Satyanarayanan2012}~\cite{Satyanarayanan2013}~\cite{Satyanarayanan2015a}, were amongst the first approaches to improve computational performance by offloading computations from a less capable mobile device to a dedicated workstation with higher computational power in the network vicinity. Models based on the cooperation of multiple mobile devices, forming an edge cloud, were also developed~\cite{hyraxmsc} to divide workloads that are too heavy for a single mobile device across a group of devices that, although individually limited in performance, can cooperatively perform heavy tasks. More recently, several similar approaches to edge computing have been published, such as fog computing~\cite{Bonomi2012}~\cite{Yousefpour2017} (or mist computing~\cite{Liyanage2016}), a highly virtualized platform that provides compute, storage, and networking services between end devices and traditional cloud computing data centers located at the edge of the network. Cyber foraging~\cite{Balan2002} (first applied to mobile devices in~\cite{Verbelen2012}) uses opportunistically discovered servers in the environment to improve the performance of interactive applications and distributed file systems on mobile clients. Volunteer computing~\cite{Anderson2004}~\cite{Ryden2014}~\cite{Anderson2019}~\cite{Alonso-Monsalve2018}~\cite{Marosi2013}~\cite{Funai2014} is a platform where users donate their computing power to a larger cause. This concept has been around since the early 2000s, when users shared their computers' power, and has been evolving towards mobile phones; nowadays, users can contribute to a larger project simply by installing an application on their device. Femto Clouds~\cite{Habak2015} are systems that provide a dynamic, self-configuring, multi-device mobile cloud out of a cluster of mobile devices. These systems are usually coordinated by a selected device (the coordinator) and are resilient to churn. Drop Computing~\cite{Ciobanu2019}, yet another take on edge computing, proposes the concept of decentralized computing over multilayered networks, combining cloud and wireless technologies over a social crowd formed between mobile and edge devices. Several systems, such as Hyrax~\cite{hyraxmsc}, mCrowd~\cite{Yan2009}, CDroid~\cite{Barbera2013}, COSMOS~\cite{Shi2014}, POMAC~\cite{Hassan2014}, MVR~\cite{Wei2017}, Cumulus~\cite{Gedawy2017}, ActorEdge~\cite{Aske2017}, HERMES~\cite{Kao2017},
the systems of Rodrigues et al.~\cite{Rodrigues2018}~\cite{rodrigues2019middleware}, Aura~\cite{Hasan2018}, Circa~\cite{Lin2020}, Oregano~\cite{Teofilo2017}~\cite{Sanches2020}, and \textsc{Jay}{}~\cite{fmec20_jay}, focus on increasing mobile device capabilities by offloading computations to other devices in the vicinity or to more powerful cloudlets and clouds, or by serving as cache servers, improving the overall QoS for end users. While all of those systems address computation distribution amongst groups of devices, energy concerns have accompanied mobile devices since they first emerged~\cite{Satyanarayanan1997}~\cite{Flinn1999a}~\cite{Flinn1999}~\cite{Flinn2000}~\cite{Flinn2002}~\cite{Flinn2004}~\cite{Satyanarayanan2005}~\cite{Gurun2006}, and since the end of the 2000s these concerns have been tackled by several systems targeting smartphones specifically. MAUI~\cite{Cuervoy2010} was one of the first frameworks to try to improve smartphone battery life via code offload. In this work, the authors explore the severity of the battery-duration problem for smartphones performing computationally intensive tasks. In the framework developed, programmers can annotate methods that can be executed on a cloud server. MAUI instruments each method, using its profiler to make smart decisions on whether it would be beneficial to offload a method or not. If a disconnection occurs, the method is executed locally. MAUI works on .NET Common Language Runtime (CLR) applications. Energy monitoring in MAUI was done using a hardware power meter attached to the mobile device's battery. To map CPU consumption to battery consumption, they collected CPU utilization samples and the corresponding energy consumption, building a simple linear model for power consumption using least-squares linear regression. The authors verified good improvements in energy consumption and performance using their framework with a close-by server. AIOLOS~\cite{Verbelen2012} is a cyber foraging middleware framework for Android. AIOLOS uses an estimation model that takes into account server resources and network state to decide at runtime whether or not a method call should be offloaded. AIOLOS focuses on two objectives: optimizing energy and optimizing execution time. In order to select which chunks of code should be candidates for offloading, programmers have to annotate the selected classes and follow a set of rules put in place by the middleware. CloneCloud~\cite{Chun2011} uses a different approach from MAUI in the sense that it performs an offline static and dynamic analysis of the application, creating code partitions that will either be run locally or offloaded. This decision is made during code partitioning and tries to minimize a cost function. For energy consumption estimation they use three variables (CPU, screen on, and network state), mapping them to a power value. This system runs on Android with a Dalvik VM modified by the authors and Android-x86 VMs that run on the server. As in MAUI, the authors show that for certain applications they can greatly improve energy consumption and execution time. In Cuckoo~\cite{Kemp2012} the authors developed a service that, at runtime, intercepts all method calls and decides whether or not to offload each one. In the version presented, the decision was to always offload methods, using only the reachability of the remote resource as context information. The Cuckoo server allows for multiple clients, which discover this resource via sharing of QR codes.
Synergy~\cite{Kharbanda2012} is a framework that allows tasks to be distributed amongst a group of devices (discovered using AllJoyn). In order to save energy, this framework performs frequency and voltage scaling to reduce the devices' CPU frequencies and voltages when running a task distributed across a group of devices. For an application to take any advantage of this framework it must be parallelizable, since portions of it are distributed in the network. ThinkAir~\cite{Kosta2012} exploits the concept of smartphone virtualization on the cloud and provides method-level computation offloading. In ThinkAir, developers have to annotate offloadable methods and compile their application using the authors' own customized NDK and code generator. This framework has four offloading policies: Execution Time; Energy; Execution Time and Energy; and Execution Time, Energy and Cost. The cost optimization factors in the cost of renting and using a cloud server. Energy estimation is based on PowerTutor~\cite{yang2012powertutor}: the power consumption of individual elements of a smartphone (e.g., GPS, screen, CPU) is summed to make an estimate. Phone2Cloud~\cite{Xia2014} is a computation offloading-based system for energy and performance improvements on smartphones that offloads computations to the cloud. This system is composed of seven components, including a bandwidth monitor, a resource monitor, and an execution time predictor. The offloading decider decides whether to offload the whole application or a part of it to a cloud, based on a customizable offloading-decision algorithm. To estimate the energy spent on the smartphone for local execution, they multiply CPU power consumption by the expected application time. To estimate the energy spent on the mobile phone if a computation is offloaded to the cloud, they account for the energy of sending and receiving data and the power consumed while waiting for the computation to finish. If the tradeoff of offloading versus executing locally is beneficial, they offload the application. ULOOF~\cite{Neto2018} is a computation offload framework for Android apps (APK). This framework offloads computations at the method level, using pre-processed APKs with annotated offloadable methods. This pre-processing generates a prepared APK with code modified to provide the offloading logic. The system is designed to offload computations to cloudlet or cloud machines that run Android-x86~\cite{huang2011android} VMs with a specific server application that has an execution environment similar to a smartphone Android version (Dalvik or ART). The authors tested ULOOF's energy consumption using simulation and a small testbed of real devices, but did not notice a large advantage of offloading to a cloudlet instead of a cloud. To test the framework, the authors implemented two applications that were not very data intensive, but that were computationally intensive and irregular. They also noticed that their energy-consumption estimates were much more accurate for remote execution than for local execution; this is because most of the energy spent on each method went into data transfers and not into the computation itself. The overhead of each method offloading decision was less than 40~ms, and for short-running methods the overhead reached at most 32\% of the execution time, which corresponded to 33~ms. The average execution time was 513.47~ms.
For future work, the authors intend to estimate user mobility to improve predictions and to address scenarios with multiple users per server. Honeybee~\cite{Fernando2013}~\cite{Fernando2019} is a mobile edge computing system that supports P2P work sharing among dynamic mobile nodes. The Honeybee API, a programming framework for developing mobile crowd computing applications, focuses on the dynamism of mobile edge networks, implementing a work-stealing algorithm for load balancing and multiple fault-tolerance and recovery strategies. While their main focus is computation speedup, the authors also verify that offloading computations had an impact on the task delegator, reducing its energy consumption when there were multiple workers in the network. RAMOS~\cite{Gedawy2020} is a Femto Cloud platform that schedules tasks according to two different metrics: energy efficiency and latency minimization. This platform was written in Java for the controller node, in Android for Android smartphones, and in Java for IoT devices (Raspberry Pis and Intel Edisons). The system consists of a controller that has several modules, such as a task originator interface, a task manager/scheduler, and a task characteristics estimator. The controller also contains a worker manager and an accuracy-enhancing module. Each worker has a resource monitor, a heartbeats manager, an energy profiler, and a tasks manager. Each mobile/IoT device has a limited \textit{energy budget}, is connected to a controller, and has a FIFO task queue, from which only one task is executed at a given time. To test the performance of the system, the authors used a desktop machine as the controller and a set of IoT and Android devices as worker nodes. They assign an energy budget to each worker and emulate an energy model with fixed consumptions per megabyte of data and per megaflop (transfer and execution energy costs). They gather results comparing the RAMOS platform with two naive schedulers: first-come-first-served and round-robin. They analyse the scheduler results according to Relative Completion Time, Relative Device Utilization, Computation Throughput, and Total Energy Consumption. The results show that the system can achieve up to a 40\% speedup when minimizing latency and up to 30\% higher energy efficiency when compared to the naive schedulers. \begin{table} \caption{Comparing~\textsc{Jay}{} with Related Work} \label{tab:rw_compare} \centering \resizebox{\textwidth}{!}{% \begin{tabular}{p{0.09\textwidth} p{0.09\textwidth} p{0.09\textwidth} p{0.09\textwidth} p{0.09\textwidth} p{0.09\textwidth} p{0.09\textwidth} p{0.09\textwidth} p{0.09\textwidth} p{0.09\textwidth} p{0.09\textwidth}} \toprule Name & Objective & Code Modifications & Granularity & External Coordination? & Devices Supported & Built-in Energy-Aware? & Load Balanced? & Modular? & Dynamic connections? & Extensible?\\ \midrule ~\textsc{Jay}{} & Minimize latency. Minimize energy. Custom objectives & Submit tasks to~\textsc{Jay}{} & Job & No & Android, x86 (JVM) & Yes, dynamic at runtime & Yes, Scheduler Specific & Yes & Yes & Yes \\ ULOOF & Energy efficiency. Latency minimization & Method annotations. Pre-processed APK & Method & No & Android, Android-x86 & Static energy curve per device & Yes & No & Yes & No\\ RAMOS & Energy efficiency. Latency minimization & Built around controller & Task & Yes & Android (\& IoT), x86 for Controller (Java) & Static budget per device & Yes & No & Yes & No \\ Honeybee & Performance gain.
Energy conservation & Submit tasks to delegator & Task & No & Android & No (\%) & Yes, work stealing & No & Yes & No\\ MAUI & Energy conservation. & Method annotations & Method & No & .NET CLR & Static linear curve & No & No & No & No\\ CloneCloud & Performance gain. Energy conservation & Off-line partitioning & Code Partitions & No & Android w/ mod.\ Dalvik, Android-x86 & Static curve & No & No & No & No\\ Synergy & Energy conservation & Parallelizable applications & Task & No & Android, Linux/ Windows & No/Energy budget & Yes & No & Yes & No\\ ThinkAir & Performance gain. Energy conservation. Cost reduction & Method annotations. Pre-processed & Method & No & Android, Android-VMs & Yes. PowerTutor & No & No & No & No\\ Phone2Cloud & Energy conservation & Application & Application & No & Android & Yes & No & No & No & No\\ \bottomrule \end{tabular}% } \end{table} \section{Related work}\label{s:rwork} \trackchange{Added reference to a few survey papers, including~\cite{shakarami2020survey} suggested by Reviewer 1}{ The general problem of computation offloading in mobile edge clouds received considerable attention in the last two decades, as documented in recent surveys~\cite{kumar2013survey,mach2017mobile,shakarami2020survey}. Our discussion of related work focuses on software } systems that, like \textsc{Jay}{}, conduct adaptive offloading, and in particular those \trackchange{typo}{that} implement energy-aware offloading policies. A number of systems focus on semi-automated offloading to an edge cloud or centralised cloud infrastructure, without collaborative offloading between mobile devices, in line with the mobile cloud computing paradigm~\cite{FERNANDO201384}. In some systems of this kind, e.g., Cuckoo~\cite{Kemp2012} or COSMOS~\cite{Shi2014}, offloading policies merely seek to minimize latency without any energy awareness, and gains in energy consumption at the mobile device level are at most a by-product of offloading computation. A number of other systems support energy-aware offloading strategies supported by runtime profiling, like AIOLOS~\cite{Verbelen2012}, MAUI~\cite{Cuervoy2010}, Phone2Cloud~\cite{Xia2014}, ThinkAir~\cite{Kosta2012}, or ULOOF~\cite{Neto2018}. In the system model of all these systems, energy consumption accounts only for the (local) mobile device that hosts applications, typically the energy consumed in network transmission during offloading; unlike \textsc{Jay}{}, they do not account, in network and computation terms, for the upper processing tiers, which may for instance also be battery-constrained (e.g.,~\cite{fmec20_ramble,Rodrigues2018}) and in any case may have restrictions regarding energy consumption (e.g., monetary costs).
In any case, the offloading policies of these systems still reflect a concern for energy consumption, possibly in conjunction with latency: AIOLOS allows one of two configurable policies that optimize either for latency or for energy; MAUI minimizes energy consumption subject to a latency threshold constraint; Phone2Cloud offloads jobs whenever the estimated latency for local execution exceeds a configurable threshold, or when it perceives that lower energy consumption by the mobile device is attainable; ThinkAir implements offloading policies that can seek to minimize only one of latency or energy consumption, or both latency and energy consumption (offloading must pay off in both dimensions compared to local execution), optionally also constrained by monetary costs due to the use of cloud services; and, finally, ULOOF evaluates local and remote execution cost functions that are parametrised by a weight factor that can be used to attain a balance between latency and energy consumption. Other types of systems enable collaborative offloading among mobile devices forming an edge cloud and, in some cases, also upper cloud tiers. There are systems of this kind which merely strive to optimize latency, like FemtoClouds~\cite{Habak2015}, Honeybee~\cite{honeybee}, Oregano~\cite{Sanches2020}, and P3-Mobile~\cite{p3mobile}, while others are explicitly energy-aware in diverse manners, \trackchange{}{discussed next}. CWC~\cite{cwc} is a system for volunteer computing, where jobs are disseminated to a pool of mobile devices. To prevent battery consumption and an intrusive user experience, jobs execute only when the devices are charging their batteries and have light computational loads, and may also be paused to minimize battery charging times. mClouds~\cite{mclouds} works over hybrid edge clouds that, like \textsc{Jay}{}, may be composed of mobile devices in an ad-hoc wireless network, cloudlets, and public clouds, offloading jobs to each of the tiers according to connectivity conditions, and also (when multiple choices are available) by a cost model with weights that balances execution time and energy consumption in a configurable manner. MDC~\cite{mdc} is a system for collaborative offloading among mobile devices that seeks to maximize the battery lifetime of the set of involved devices by balancing energy consumption among them; this concern could provide an interesting refinement to the HYBRID strategy in this paper, e.g., by factoring in the battery levels of devices in addition to their battery efficiency. RAMOS~\cite{Gedawy2020} offloads jobs over an edge cloud formed by heterogeneous mobile and IoT devices that act as job workers, a concept borrowed from FemtoClouds~\cite{Habak2015}. As in \textsc{Jay}{}, the RAMOS scheduler can be parametrised to minimize job latency or energy consumption, jobs have deadlines, and jobs are executed in FIFO order by workers. RAMOS' architecture is centralised, however: jobs originate and are scheduled in batches exclusively by a centralised controller node, in contrast to \textsc{Jay}{}'s distributed architecture. Synergy~\cite{Kharbanda2012} considers collaborative offloading between devices in a peer-to-peer ad-hoc network and, in order to maximise the devices' battery lifetime, balances latency and energy consumption by partitioning jobs among devices while at the same time scaling the devices' CPU frequencies.
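To make the flavour of the weighted time/energy policies surveyed above concrete, the following Kotlin fragment sketches a ULOOF-style decision; the linear cost shape, the weight semantics ($\alpha = 1$ for pure latency, $\alpha = 0$ for pure energy), and all names are our illustrative assumptions, not the actual cost functions of any of the cited systems: \begin{verbatim}
// Illustrative weighted time/energy offloading decision.
data class ExecutionEstimate(val timeSec: Double, val energyJoules: Double)

fun cost(e: ExecutionEstimate, alpha: Double): Double =
    alpha * e.timeSec + (1 - alpha) * e.energyJoules

fun shouldOffload(local: ExecutionEstimate,
                  remote: ExecutionEstimate,
                  alpha: Double): Boolean =
    cost(remote, alpha) < cost(local, alpha)

fun main() {
    val local = ExecutionEstimate(timeSec = 2.0, energyJoules = 6.0)
    // Remote estimate should already include transfer time and energy.
    val remote = ExecutionEstimate(timeSec = 1.2, energyJoules = 3.5)
    println("offload = " + shouldOffload(local, remote, alpha = 0.5))
}
\end{verbatim} The single weight makes the latency/energy trade-off explicit and tunable, which is precisely the knob that distinguishes these systems' policies from purely latency-driven ones.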
\trackchange{RW Table Description}{ \input{rwork-table} Summarising the above discussion, Table~\ref{tab:related_work} provides a comparative overview of \textsc{Jay}{} and the other systems mentioned. The table does so first in terms of cloud architecture, making a distinction between: mobile cloud computing (MCC) systems, where devices offload jobs to a centralised cloud infrastructure; mobile edge computing (MEC) systems, where there is collaborative offloading among mobile devices; and Femtocloud systems, where a set of mobile devices is used as a worker pool for jobs fired by an external host. The table next indicates awareness of runtime information regarding time and/or energy, and the support for job deadlines in the system model. The two remaining columns characterize the scheduler component responsible for offloading decisions, concerning its location and the granularity of its offloading decisions in terms of how many jobs are accounted for at once. A scheduler may either operate locally per device or run on a central peer, and the granularity type distinguishes between single-job, multiple-job, and parallel-job offloading by the scheduler. The latter is a special class of multiple-job offloading in which all jobs are bound by some type of parallel computation that is inherent to the job model. The main distinctive trait of \textsc{Jay}{} is that it is configurable in terms of target cloud architecture and scheduler operation. Thanks to a simple and flexible design, each of the peers in a \textsc{Jay}{} system instance may act as a scheduler, a worker, or both, as illustrated by the variety of evaluation scenarios we put forward later in the paper, meaning that one can use \textsc{Jay}{} for offloading using an MCC, MEC, or Femtocloud architecture, with per-device schedulers or a centralised one. The offloading strategies we instantiate and evaluate in \textsc{Jay}{} are partially illustrative of comparable time and/or energy-aware approaches found in other systems. However, \textsc{Jay}{} is not bound to any particular approach, since offloading strategies are configurable and there is a general design for monitoring runtime state information. Time-awareness is also reflected in \textsc{Jay}{} by the support of job deadlines, a feature supported by only a few other systems. Finally, \textsc{Jay}{} only supports single-job scheduling granularity, a characteristic that is more in line with on-the-fly offloading of independent jobs, as seen in most systems discussed. In contrast, multiple-job granularity is usually associated with the use of a centralised scheduler, a Femtocloud architecture, or a computation model that embodies parallelism. } \section{Experimental Setup} \label{s:setup} We used the \textsc{Jay}{} framework to evaluate the algorithms presented in Section~\ref{s:model}, namely: {\rm LOCAL}, {\rm SERVER}, {\rm TMIN} and {\rm HYBRID}. \subsection*{Devices} \begin{table*}[t!]
\caption{Device characteristics.} \label{tab:devices} \centering \begin{tabular}{@{}llllll@{}} \toprule Device & Year & CPU & RAM & Battery & OS \bigstrut \\% & Category\\ \midrule Cloudlet & 2015 & Intel i7-6700K, 4x4.0 GHz & 16 GB & N/A & Ubuntu 20.04 LTS\bigstrut \\% & N/A \\ Google Nexus 9 & 2014 & Nvidia Tegra K1, 2x2.3 GHz & 2 GB& 6.7 Ah& Android 8.1 \bigstrut \\% & Mid\\ Google Pixel 4 & 2019 & Snapdragon 855, 1x2.84/3x2.42/4x1.78 GHz & 6 GB & 2.8 Ah & Android 11 \bigstrut \\% & High\\ Samsung Galaxy S7e & 2016 & Exynos 8890 Octa, 4x2.3/4x1.6 GHz & 4 GB & 3.6 Ah& Android 10\bigstrut \\% & High\\ Samsung Galaxy Tab S5e & 2019 & Snapdragon 670, 2x2.0/6x1.7 GHz & 6 GB & 7.0 Ah& Android 10\bigstrut \\% & Mid\\ Xiaomi Mi 9T & 2019 & Snapdragon 730, 2x2.2/6x1.8 GHz & 6 GB & 4.0 Ah & Android 10\bigstrut \\% & Mid/High\\ \bottomrule \end{tabular}% \end{table*} The experimental setup consisted of~5 Android devices and a PC-based cloudlet. In experiments detailed in the next section, Android devices are used as job generators and executors, while the cloudlet is used as job executor only. Their characteristics are summarised in Table~\ref{tab:devices}. It can be seen that the Android devices are quite heterogeneous in terms of CPU, RAM and battery capacity, as well as in terms of their Android OS version. Another important aspect is that the cloudlet has significantly more RAM (16 GB) than all Android devices and, as illustrated in detail later, also uses a higher performance CPU configuration. All devices were connected to the same local network, via an ASUS RT-AC56U router, featuring a 2.4~GHz 300~Mbit/s WiFi connection for the Android devices and a 1~Gb/s Ethernet connection for the cloudlet. Prior to each experiment, all Android devices had their batteries charged by at least 50\%, to prevent interference from builtin power saving mechanisms. They were then disconnected from the power outlet using a smart plug controlled remotely by a script. For monitoring energy we used the standard Android {\tt BatteryManager} API\footnote{\url{https://developer.android.com/reference/android/os/BatteryManager}}. This API provides current intensity ($I$) and voltage ($V$) information, respectively, through the {\tt BATTERY\_PROPERTY\_CURRENT\_NOW} counter and the {\tt EXTRA\_VOLTAGE} notifications. The API is available and reliable across all Android versions and devices we tested, from which we estimated the instantaneous power consumption ($P = I \times V$). We used this approach uniformly for all devices, even if in some devices/Android versions the API provided a richer set of attributes such as the {\tt BATTERY\_PROPERTY\_CURRENT\_AVERAGE}, for average current intensity, and {\tt BATTERY\_PROPERTY\_ENERGY\_COUNTER}, for the remaining battery power. As for the cloudlet, it had a permanent 220~V power supply. Its power consumption was monitored using a Meross MSS310 energy plug. \subsection*{Benchmark Application} We used a benchmark application that fires jobs for object detection in images using deep learning, similar to one we employed in previous work~\cite{fmec20_jay}. As illustrated in Figure~\ref{fig:tfapp}, each object detection job takes an image as input and yields a set of objects detected in the image along with corresponding bounding boxes and confidence scores. Here, we used a ``headless'' variant of this computational job with no GUI or human intervention. 
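As a brief aside before detailing the workload: the {\tt BatteryManager}-based power sampling described in the Devices subsection amounts, in Kotlin, to roughly the following sketch. The function name and error handling are ours; the unit conversions (microamperes and millivolts to watts) follow the Android documentation, and the current's sign convention varies across vendors, hence the absolute value: \begin{verbatim}
import android.content.Context
import android.content.Intent
import android.content.IntentFilter
import android.os.BatteryManager
import kotlin.math.abs

fun sampleInstantPowerWatts(context: Context): Double {
    val bm = context.getSystemService(Context.BATTERY_SERVICE)
            as BatteryManager
    // Instantaneous current in microamperes.
    val microAmps =
        bm.getIntProperty(BatteryManager.BATTERY_PROPERTY_CURRENT_NOW)
    // Battery voltage in millivolts, read from the sticky
    // ACTION_BATTERY_CHANGED broadcast (null receiver = just peek).
    val sticky = context.registerReceiver(
        null, IntentFilter(Intent.ACTION_BATTERY_CHANGED))
    val milliVolts = sticky?.getIntExtra(BatteryManager.EXTRA_VOLTAGE, -1) ?: -1
    if (milliVolts <= 0) return 0.0 // voltage reading unavailable
    // P = I * V, converted from (uA, mV) to watts.
    return abs(microAmps.toDouble()) * 1e-6 * milliVolts * 1e-3
}
\end{verbatim}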
Overall, this type of computation is increasingly common for classifying static images or live video frames on mobile devices~\cite{tomm17,www19dl}. It makes for an interesting case study for offloading, since jobs can be computationally intensive and may require high network bandwidth to transfer images. This can happen if the number of spawned jobs is large, or if a QoS restriction is added to their execution (e.g., a deadline), or both. \begin{figure}[h!] \centering \includegraphics[width=0.5\columnwidth]{images/tf_od_app.png} \caption{Object detection in images.} \label{fig:tfapp} \end{figure} We make use of a MobileNet SSD model variant~\cite{mobilenet} trained with COCO~\cite{coco}, a popular image dataset used for benchmarking object detection models that contains 2.5 million labeled instances for 80 object types in more than 300,000 images. The specific model we use is \texttt{ssd\_mobilenet\_v1\_fpn\_coco}, available in standard TensorFlow (TF)~\cite{tensorflow} and TensorFlow Lite (TFLite) format from TensorFlow's Object Detection Zoo\footnote{\url{https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md}}. The object detection job code, adapted from a TensorFlow tutorial\footnote{\url{https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android}}, has been incorporated into two distinct Kotlin modules, each linked with the \textsc{Jay}{} core library. One module is used in the cloudlet (Linux) and employs the standard TF library. The other is used in the Android devices and employs TFLite. Besides CPUs, TF and TFLite may employ GPUs, if available. In the case of TFLite on Android, it can also use the specialised Google Neural Networks API\footnote{\url{https://developer.android.com/ndk/guides/neuralnetworks/}}. Nevertheless, we configured the devices' Kotlin module to use only CPUs, given that this basic option works for all devices and operating system versions. \input{baseline-table} The benchmark application runs on every device and fires object detection jobs according to a Poisson process with a configurable job inter-arrival time~$\lambda$, and relative deadlines~$d \le \lambda$. Each job takes as input a randomly selected Ultra-HD image taken from the UltraEye dataset~\cite{ultraeye}, and produces an object detection report of at most~$4$~KB. Each image has a pixel resolution of $3840\times2160$ and an average size of 2.2~MB. All images used were uploaded to the Android devices prior to benchmark execution (as mentioned earlier, the cloudlet does not generate jobs in our experiments; it only executes them on behalf of Android devices).
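The Poisson job generation just described boils down to sampling exponentially distributed inter-arrival times with mean~$\lambda$ by inverse-transform sampling. A minimal Kotlin sketch follows (ours, not the benchmark application's actual code; names and the blocking-sleep structure are illustrative assumptions): \begin{verbatim}
import kotlin.math.ln
import kotlin.random.Random

// Fires `jobs` jobs with exponentially distributed inter-arrival
// times of mean lambdaSec, i.e., a Poisson arrival process.
fun runPoissonGenerator(lambdaSec: Double, jobs: Int, fire: () -> Unit) {
    repeat(jobs) {
        // Inverse-transform sampling: dt = -lambda * ln(U), U in (0, 1].
        val u = 1.0 - Random.nextDouble()
        val dtMillis = (-lambdaSec * ln(u) * 1000).toLong()
        Thread.sleep(dtMillis)
        fire()
    }
}

fun main() {
    runPoissonGenerator(lambdaSec = 2.0, jobs = 5) {
        println("fire job at ${System.currentTimeMillis()}")
    }
}
\end{verbatim}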
2,869,038,155,423
arxiv
\section{Introduction} A black hole (BH) can be defined as the region $\mathcal{B}$ of the total space-time $\mathcal{M}$ which does not overlap with the causal past of future null infinity $J^- (\mathscr{I}^+)$~\cite{poi}: \begin{eqnarray} \mathcal{B} = \mathcal{M} - J^- (\mathscr{I}^+) \, . \end{eqnarray} The {\it event horizon} of a BH is the boundary delimiting the BH. Everything falling onto the BH and crossing the event horizon is lost for ever and cannot affect events happening outside the BH any more. However, it may be possible that event horizons never form in nature, and that only apparent horizons can be created~\cite{h}. An {\it apparent horizon} is a closed surface of zero expansion for a congruence of outgoing null geodesics orthogonal to the surface~\cite{poi}. Outward-pointing light rays behind an apparent horizon actually move inwards, and therefore they cannot cross the apparent horizon. In the special case of a stationary space-time, an event horizon is also an apparent horizon, but the reverse is not true in general. In particular, the event horizon is determined by the global properties of the space-time, while the apparent horizon depends on the observer. Astronomers have discovered at least two classes of astrophysical BH candidates~\cite{narayan}: stellar-mass objects in X-ray binary systems and super-massive objects at the center of every normal galaxy. These objects are thought to be BHs because they cannot be explained otherwise without introducing new physics: the stellar-mass BH candidates are too heavy to be neutron stars for any reasonable matter equation of state~\cite{ns}, while at least some of the super-massive objects in galactic nuclei are too heavy, compact, and old to be clusters of non-luminous bodies~\cite{maoz}. There is also a set of observations suggesting that BH candidates really have an event horizon~\cite{quiescent,outburst,sgra}. Basically, these objects seem to be able to swallow all the accreting gas without emitting any kind of electromagnetic radiation from their putative surface. In the case of low-mass X-ray binaries, we can compare systems in which the primary is thought to be a BH with the ones in which the primary is thought to be a neutron star. In the quiescent state, we can observe thermal radiation from the surface of neutron stars, while no such radiation is observed from BH candidates~\cite{quiescent}. Neutron star systems show type-I X-ray bursts (as an outcome of compression and heating of the gas accumulated on their surface), while the phenomenon has never been observed in binaries with BH candidates~\cite{outburst}. There are also strong constraints on the radiation emitted by the possible surface of the supermassive BH candidate at the center of our Galaxy~\cite{sgra}. This body of observations can be easily explained by the fact that BHs have no surface and that the gas crossing the event horizon cannot be seen by distant observers any more (see however Ref.~\cite{abra}). Strictly speaking, the confirmation of the existence of an event horizon would require the knowledge of the future null infinity of the Universe, which is clearly impossible for us. On the contrary, the non-observation of electromagnetic radiation emitted by the gas after falling into the compact object nicely meets the definition of an apparent horizon.
However, the geometry of the space-time around astrophysical BH candidates is practically stationary on the timescale of our observations, and that may make it impossible to discriminate an event horizon from an apparent horizon. \section{Electromagnetic constraint} Let us imagine a BH as a gas of particles packed in a small region by the gravitational force\footnote{The model of BH I will consider is reminiscent of the one discussed in Ref.~\cite{gia}. The radius of the compact object, $R$, is larger than the one corresponding to the event horizon of a (classical) BH with the same mass and spin.}. As this gas has a finite temperature, it must radiate. However, if the object is very compact, the emitted radiation is strongly redshifted when it reaches a distant observer, and the object can appear very faint. Here, I relax the quite common assumption of steady state, $L = \dot{M} c^2$~\cite{quiescent,sgra}, where $L$ is the surface luminosity and $\dot{M}$ is the mass accretion rate. That would require that the accreting gas hits the ``solid surface'' of the object and then radiates all its kinetic energy to infinity. If this were the case, a very compact object would not be able to increase its mass, or at least the process would be very inefficient, likely in contradiction with the observations of the super-massive objects in galactic nuclei. Moreover, there are no reasons to assume that BH candidates have a solid surface. In the picture in which we have a gas of particles packed in a small region by the gravitational force, the accreting gas enters the compact object and both its rest-mass and kinetic energy contribute to increasing the mass of the BH candidate. Let us now see the constraint we can obtain in this picture from the non-observation of a thermal spectrum from BH candidates. The specific energy flux density of the compact object (often measured in erg~cm$^{-2}$~s$^{-1}$~Hz$^{-1}$) as detected by a distant observer is \begin{eqnarray} F = \int I_{\rm o} d\Omega \, , \end{eqnarray} where $I_{\rm o}$ is the specific intensity of the radiation as measured by the distant observer and $d\Omega$ is the element of the solid angle subtended by the image of the object on the observer's sky. By Liouville's theorem, $I_{\rm x}/\nu_{\rm x}^3 = {\rm const.}$ along the photon path, where $\nu_{\rm x}$ is the photon frequency measured by any local observer on the photon path, and \begin{eqnarray} d\Omega = \frac{dxdy}{D^2} \, , \end{eqnarray} where $x$ and $y$ are the Cartesian coordinates on the observer's sky and $D$ is the distance of the compact object from the observer. The {\it equivalent isotropic luminosity} of the BH candidate is thus \begin{eqnarray} L = 4 \pi \int g^3 I_{\rm e} dx dy d\nu \, . \end{eqnarray} Here $g = \nu_{\rm o}/\nu_{\rm e}$ is the redshift factor, $\nu_{\rm o}$ is the photon frequency measured by the distant observer, and $\nu_{\rm e}$ and $I_{\rm e}$ are respectively the photon frequency and the specific intensity of the radiation measured by an observer located at the point of emission of the photon, on the surface of the compact object, and corotating with it. The emission should be like that of a blackbody; that is, \begin{eqnarray} I_{\rm e} = \frac{2h\nu^3_{\rm e}}{c^2}\frac{1}{\exp \left(\frac{h\nu_{\rm e}}{k_{\rm B}T_{\rm e}}\right) - 1} \, , \end{eqnarray} where $T_{\rm e}$ is the temperature of the surface of the BH candidate measured by a locally corotating observer. For the sake of simplicity, we now consider a spherically-symmetric non-rotating object.
The geometry of the space-time around the BH candidate will be described by the Schwarzschild solution, which is valid down to the radius of the compact object, $R$. The luminosity becomes \begin{eqnarray}\label{eq-l} L = 4 \sigma g^4 T_{\rm e}^4 \int dx dy \, , \end{eqnarray} where $\sigma$ is the Stefan-Boltzmann constant and \begin{eqnarray} g = \left( 1 - \frac{2M}{R} \right)^{1/2} \, . \end{eqnarray} Here $g$ is a constant, but it would be a function of $x$ and $y$ in a more general background. The integrand in Eq.~(\ref{eq-l}) is simply the area of the apparent image of the BH candidate on the observer's sky: \begin{eqnarray} \int dxdy = \pi R_{\rm app}^2 = \left\{ \begin{array}{ll} 27 \pi M^2 & R < 3 M \\ \pi \frac{R^2}{g^2} & R > 3 M \end{array} \right. \, . \end{eqnarray} The radius $r = 3M$ is the photon capture radius of the Schwarzschild space-time. Inside such a radius, the gravitational force is so strong that any light ray coming from infinity is captured by the compact object. A distant observer therefore sees an object with an apparent temperature \begin{eqnarray} T_{\rm app} = g T_{\rm e} \approx T_{\rm e} \left(\frac{\delta}{2M}\right)^{1/2} \, , \end{eqnarray} where I wrote $R = 2M + \delta$ and assumed $\delta$ to be positive and small. The most stringent constraint on $\delta$ can be inferred from the observations of the supermassive BH candidate at the center of our Galaxy. Infrared and near-infrared data require $T_{\rm app} < 0.01$~eV~\cite{sgra}. If we assume a local temperature as high as $k_{\rm B} T_{\rm e} \sim m_{\rm p} c^2 \sim 1$~GeV (roughly the gravitational binding energy of a proton), then, inverting the previous relation as $\delta \approx 2M (T_{\rm app}/T_{\rm e})^2$, we find \begin{eqnarray}\label{eq-c-em} \delta < 10^{-10} \; {\rm cm} \, , \end{eqnarray} as $M \approx 6 \cdot 10^{11}$~cm. With a lower temperature $T_{\rm e}$, the constraint would be weaker, while a higher temperature seems to be unlikely, as the object is old and the accreting gas would have already cooled it down. The proper distance of the boundary of the BH candidate from the event horizon of a Schwarzschild BH with the same mass is \begin{eqnarray}\label{eq-c-em2} \Delta \approx \frac{\delta}{\sqrt{1 - \frac{2M}{R}}} \approx \sqrt{2 M \delta} < 10 \; {\rm cm} \, . \end{eqnarray} Such a result should not change significantly if we consider a rotating object. \section{Stability constraint} The existence of event or apparent horizons in astrophysical BH candidates is also suggested by considerations concerning the stability of these objects. It is well known that rapidly-rotating very-compact objects may be affected by the {\it ergoregion instability}~\cite{ergo}. In the ergoregion, $g_{tt} > 0$ (if the metric has signature $-+++$) and the frame-dragging is so strong that stationary orbits are not allowed. That implies that in the ergoregion there are excitations with negative energy with respect to a stationary observer at infinity. These excitations can be seen as quasi-bound states: they are trapped by the gravitational potential on one side, and by the surface of the object (or by the center of the object, if the latter is made of matter non-interacting with the excitations) on the other side. As some modes can escape to infinity carrying positive energy, negative energy modes in the ergoregion can grow indefinitely, thus generating an instability. Objects with a horizon may instead be stable because there may not be quasi-bound states in the ergoregion: any excitation in the ergoregion is swallowed by the BH.
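As a parenthetical aside, it may help to spell out the arithmetic behind the electromagnetic bound of the previous section; the numbers below are just the ones already quoted there, collected in one place: \begin{eqnarray} \delta \approx 2M \left(\frac{T_{\rm app}}{T_{\rm e}}\right)^{2} \sim 2 \left(6 \cdot 10^{11}~{\rm cm}\right) \left(\frac{10^{-2}~{\rm eV}}{10^{9}~{\rm eV}}\right)^{2} \sim 10^{-10}~{\rm cm} \, , \nonumber \end{eqnarray} \begin{eqnarray} \Delta \approx \sqrt{2 M \delta} \sim \sqrt{\left(1.2 \cdot 10^{12}~{\rm cm}\right)\left(10^{-10}~{\rm cm}\right)} \sim 10~{\rm cm} \, , \nonumber \end{eqnarray} consistent with Eqs.~(\ref{eq-c-em}) and (\ref{eq-c-em2}). We now return to the role of the ergoregion in the stability argument.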
Let us notice, however, that the existence of a horizon is not sufficient in general to prevent the ergoregion instability~\cite{pani}. Roughly speaking, the instability timescale $\tau$ decreases as the angular velocity and the compactness of the compact object increase. For rotating very-compact objects, one typically finds that the instability is strong and occurs on a dynamical timescale $\tau \sim M$~\cite{cardoso}; that is, $\sim 1$~s for objects with a mass $M \sim 10$~$M_\odot$ and $\sim 10^7$~s if $M \sim 10^8$~$M_\odot$. While there are counter-examples in which rotating compact objects can be stable or very long-lived~\cite{rezzolla}, it seems difficult for the latter to meet observations requiring that astrophysical BH candidates can rotate very rapidly~\cite{spin,spin2} and have a high radiative efficiency~\cite{eta}. Let us notice, however, that the issue of the ergoregion instability can be discussed only within a well-defined theoretical model (gravity theory, internal structure and composition of the compact object, etc.) and that it has been studied only for a very limited number of specific cases. Considerations on the non-observation of electromagnetic radiation from the surface of BH candidates are much more model-independent and rely on a set of assumptions that can be violated only by invoking very exotic new physics. Here, I will discuss the ergoregion instability within the following picture. I assume that the geometry around an astrophysical BH candidate is exactly described by the Kerr solution up to the radius of the compact object, $R$. Considerations on the ergoregion instability indeed require a specific background, and we may think that possible deviations from the Kerr metric can be tested with other approaches~\cite{review}. In the case of a reflecting surface, the timescale for scalar instabilities can be estimated as~\cite{shinji} \begin{eqnarray} \tau \sim A(M,a_*) \left| \ln \left( \frac{R - R_{\rm H}}{2M \sqrt{1 - a^2_*}} \right) \right| \, , \end{eqnarray} where $R_{\rm H} = M(1 + \sqrt{1 - a_*^2})$ is the radius of the event horizon of a Kerr BH with mass $M$ and spin parameter $a_*$, and $A(M,a_*)$ is a function of $M$ and $a_*$. For moderate values of the spin parameter $a_*$, $A \sim M$; that is, the instability occurs on a dynamical timescale. For high values of $a_*$, $A$ decreases very quickly. In the case of a Kerr BH, $R = R_{\rm H}$ and the object is stable. On the other hand, if $R = R_{\rm H} + \delta$, the fact that we observe long-living rapidly-rotating BH candidates demands \begin{eqnarray}\label{eq-ei} \delta, \; \Delta \ll L_{\rm Pl} \approx 10^{-33} \, {\rm cm} \, , \end{eqnarray} where $\Delta$ is the physical distance encountered in the previous section. Eq.~(\ref{eq-ei}) essentially rules out the possibility that current BH candidates have no horizon, or at least nothing that behaves very much like a horizon for the unstable modes. The possibility of an exact Kerr background with $\delta$ so large that there is no ergoregion seems unlikely, as we know objects that, when the space-time around them is described by the Kerr solution, would have an accretion disk with inner edge inside the ergosphere~\cite{spin}. \section{Conclusions} In conclusion, we have observations suggesting that BH candidates have a horizon, or at least putting constraints on the possible distance between the boundary of these compact objects and the event horizon of a BH with the same mass and spin.
Such a distance can be seen as a measure of how close these objects are to having a horizon. From the non-observation of thermal radiation from the putative surface of astrophysical BH candidates, one can infer the constraint in Eqs.~(\ref{eq-c-em}) and (\ref{eq-c-em2}): actually, such a bound is not so stringent, as one may argue that new physics can show up at much shorter scales. However, the result seems to be quite robust -- it only supposes that the compact object must emit electromagnetic radiation due to its finite temperature -- and very exotic new physics would be necessary to change these conclusions or to get a different bound. Considerations on the ergoregion instability are instead to be taken with caution. The instability timescale strongly depends on the exact model, i.e., the gravity theory, the internal structure and composition of the object, and so on, which we do not know. However, we can optimistically arrive at the following conclusion. If the geometry around astrophysical BH candidates is very close to the Kerr solution, the existence of stable or long-living objects likely requires some kind of horizon. Otherwise, we can probably hope to discover deviations from the Kerr background with tests already proposed in the literature and possible in the near future with new observational facilities. \begin{acknowledgments} This work was supported by the Humboldt Foundation. \end{acknowledgments}
2,869,038,155,424
arxiv
\section{Introduction} Since the Euclidean bounce solution of a scalar field theory was introduced to describe first-order phase transitions, the scalar sector of a given model has mainly determined the metastable decay process~\cite{Col,CC}. For cosmological phase transitions in the early universe, two more ingredients need to be included: one is gravitation~\cite{CD} and the other may be temperature~\cite{Lin}. When a spontaneous continuous symmetry breaking is taken into account, an enhancement of the tunneling is expected~\cite{Kus}. However, the uniqueness of the bounce O(3)-symmetric solution as a nucleated bubble~\cite{Col2} was never doubted before three of the authors reported the global {\it monopole-bubble} solution in the model of a scalar triplet with global SO(3) symmetry~\cite{Kim,KMS}. In this paper, we will address the same question in a gauge theory with SO(3) symmetry, and show that there exists another {\it gauged monopole-bubble} solution whose center includes a 't Hooft-Polyakov monopole~\cite{HP}. The nucleation rate of gauged monopole-bubbles is lower than that of the bounce, but it is higher than that of global monopole-bubbles. In particular, when the mass of the monopole becomes small in the strong gauge coupling limit, the decay channel through a thick-wall gauged monopole-bubble is quite considerable in the high-temperature limit. If the size of a nucleated gauged monopole-bubble is smaller than the critical size, the bubble wall starts to shrink and the monopole at its center then disappears. In the opposite case, the bubble wall grows and the monopole survives safely. The remainder of this paper is organized as follows. In Sec. II we demonstrate the existence of a new gauged monopole-bubble solution in addition to the known bounce and the global monopole-bubble solution. In Sec. III we compute the nucleation rates of those solutions and present the evolution of bubbles. Section IV contains some brief concluding remarks. \section{Gauged Monopole-bubble Solution} The Euclidean action of the Georgi-Glashow model with an SO(3) gauge symmetry is \begin{eqnarray}\label{action} S=\int^{\beta}_{0}dt_{E}\int d^{3}x\biggl\{\frac{1}{4}F^{a\;2}_{\mu\nu} +\frac{1}{2}(D_{\mu}\phi^{a})^{2}+V\biggr\}, \end{eqnarray} where the field strength tensor is $F^{a}_{\mu\nu}\equiv\partial_\mu A^{a}_{\nu} - \partial_\nu A^{a}_{\mu}+ e\epsilon^{abc}A^{b}_{\mu}A^{c}_{\nu}$, and the covariant derivative is ${D_{\mu} \phi}^{a}\equiv\partial_\mu\phi^{a} + e\epsilon^{abc}A^{b}_{\mu} \phi^{c}$. In order to describe a first-order phase transition, we choose a $\phi^{6}$-potential: \begin{eqnarray}\label{pot} V(\phi)=\frac{\lambda}{v^{2}}(\phi^{2}+\alpha v^{2})(\phi^{2} -v^{2})^{2}, \end{eqnarray} where the scalar amplitude $\phi$ is defined by $\phi=\sqrt{\phi^{a}\phi^{a}}$ $(a=1,2,3)$, and a parameter $\alpha$ $\;(0<\alpha<1/2)$ governs the transition rate from the symmetric false vacuum at $\phi=0$ to the broken true vacuum at $\phi=v$. In the high-temperature limit, i.e., $\beta\rightarrow 0$, a static electrically-neutral object satisfies \begin{eqnarray}\label{seq1} (D_{i}^{2}\phi)^{a}=\frac{\partial V}{\partial\phi^{a}}\;\;, \end{eqnarray} \begin{eqnarray}\label{seq2} (D_{j}F_{ij})^{a}=e\epsilon_{abc}\phi^{b}(D_{i}\phi)^{c}. \end{eqnarray} The ordinary bounce configuration is obtained under the ansatz for the scalar field $\phi^{a}=(0,0,f(r))$ with boundary conditions $df/dr|_{r=0}=0$ and $f(r=\infty)=0$.
Here one may ask whether or not the gauge field affects the bounce configuration. Since the SO(3) gauge symmetry is spontaneously broken to an SO(2) symmetry and the first two components of the scalar field vanish, the gauge field decouples from the bounce solution. Therefore, even under a general ansatz for the electrically-neutral gauge field~\cite{Wit}, it is easy to show the decoupling of the gauge field from the bounce solution~\cite{KKMS}. In this paper we are interested in a different bubble solution, which is obtainable under the hedgehog ansatz: \begin{eqnarray}\label{an1} \phi^{a}=\hat{r}^{a}\phi(r),~~~A^{a}_{i}= \epsilon^{aij}\hat{r}^{j}\frac{1-K(r)}{er}. \end{eqnarray} Substituting Eq.~(\ref{an1}) into Eqs.~(\ref{seq1}) and (\ref{seq2}), we have \begin{eqnarray}\label{meq1} \frac{d^{2}\phi}{dr^{2}}+\frac{2}{r}\frac{d\phi}{dr} -2\frac{K^{2}}{r^{2}}\phi=\frac{dV}{d\phi}\;, \end{eqnarray} \begin{eqnarray}\label{meq2} \frac{d^{2}K}{dr^{2}}=K\Bigl(\frac{K^{2}-1}{r^{2}}+e^{2}\phi^{2}\Bigr)\;. \end{eqnarray} Note that the ordinary bounce solution $\phi_{\rm{b}}^{a}$ corresponds to the $K=0$ solution, where the Dirac monopole in Eq.~(\ref{an1}) decouples from this solution~\cite{Col}, and that the global monopole-bubble $\phi_{\rm{gm}}^{a}$ is obtained by replacing $K(r)^{2}$ in Eq.~(\ref{meq1}) by 1, which is consistent only when the gauge coupling $e$ vanishes because of Eq.~(\ref{meq2})~\cite{Kim,KMS}. The first-order transition proceeds from the symmetric false vacuum ($\phi=0$) to the broken true vacuum ($\phi=v$) when $0<\alpha<1/2$; the possible boundary conditions read as follows: regularity of the fields at the origin gives \begin{eqnarray}\label{bc0} \phi(r=0)=0,~~~K(r=0)=1, \end{eqnarray} and spatial infinity should remain in the initial setting before the transition: \begin{eqnarray}\label{bcinf} \phi(r\rightarrow\infty)=0,~~~K(r\rightarrow\infty)=1. \end{eqnarray} Of course, the trivial solution of the false symmetric vacuum, i.e., $\phi=0$ and $K=1$, is an unwanted one. Therefore, as shown in Fig.~\ref{fig1}, the nontrivial bubble solution of interest behaves as follows: $\phi(r)$ increases from $\phi=0$ at the origin, reaches a maximum value $\phi_{\rm max}$, and decreases to 0 as $r$ goes to infinity. On the other hand, $K(r)$ starts to decrease from 1, arrives at a minimum, and then grows back to 1 at spatial infinity. To read the physics of the obtained sphaleron-type monopole-bubble solution, let us examine the energy density: \begin{eqnarray} T^{0}_{\;0}=\frac{1}{e^{2}r^{2}}\biggl[\biggl(\frac{dK}{dr}\biggr)^{2}+ \frac{(K^{2}-1)^{2}}{2r^{2}}\biggr]+\frac{1}{2}\biggl(\frac{d\phi}{dr} \biggr)^{2}+\frac{K^{2}}{r^{2}}\phi^{2}+V. \label{ener} \end{eqnarray} \begin{figure} [Gnuplot-generated figure omitted: radial profiles of $\phi(r)$ and $K(r)$ for the gauged monopole-bubble solution, as described in the text.] \label{fig1} \end{figure}
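Equations~(\ref{meq1}) and (\ref{meq2}) with the boundary conditions (\ref{bc0}) and (\ref{bcinf}) form a two-point boundary-value problem, which in practice is solved numerically, e.g., by shooting on the small-$r$ behaviour $\phi \simeq a r$, $K \simeq 1 + b r^{2}$ dictated by regularity at the origin. Purely to illustrate this structure (and not as the code used to produce Fig.~\ref{fig1}), a minimal Runge-Kutta sketch follows, in Kotlin; the units $v = 1$, the coupling values, the trial parameters, and all names are our assumptions: \begin{verbatim}
// Illustrative RK4 integration of the radial equations (meq1)-(meq2)
// in units v = 1. State y = (phi, dphi/dr, K, dK/dr). Near r = 0,
// regularity fixes phi ~ a*r and K ~ 1 + b*r^2; (a, b) are shooting
// parameters to be tuned so that phi -> 0 and K -> 1 as r -> infinity.
const val LAMBDA = 1.0
const val E2 = 1.0     // e^2
const val ALPHA = 0.25 // must lie in (0, 1/2)

// dV/dphi for V = LAMBDA*(phi^2 + ALPHA)*(phi^2 - 1)^2 (with v = 1):
// dV/dphi = 2*LAMBDA*phi*(phi^2 - 1)*(3*phi^2 + 2*ALPHA - 1).
fun dVdPhi(phi: Double): Double =
    2.0 * LAMBDA * phi * (phi * phi - 1.0) * (3.0 * phi * phi + 2.0 * ALPHA - 1.0)

fun deriv(r: Double, y: DoubleArray): DoubleArray {
    val phi = y[0]; val dphi = y[1]; val k = y[2]; val dk = y[3]
    return doubleArrayOf(
        dphi,
        -2.0 / r * dphi + 2.0 * k * k / (r * r) * phi + dVdPhi(phi),
        dk,
        k * ((k * k - 1.0) / (r * r) + E2 * phi * phi)
    )
}

fun rk4Step(r: Double, y: DoubleArray, h: Double): DoubleArray {
    val k1 = deriv(r, y)
    val k2 = deriv(r + h / 2, DoubleArray(4) { y[it] + h / 2 * k1[it] })
    val k3 = deriv(r + h / 2, DoubleArray(4) { y[it] + h / 2 * k2[it] })
    val k4 = deriv(r + h, DoubleArray(4) { y[it] + h * k3[it] })
    return DoubleArray(4) { y[it] + h / 6 * (k1[it] + 2 * k2[it] + 2 * k3[it] + k4[it]) }
}

fun main() {
    val (a, b) = 0.5 to -0.1 // trial shooting parameters
    var r = 1e-3             // start slightly off r = 0
    var y = doubleArrayOf(a * r, a, 1.0 + b * r * r, 2.0 * b * r)
    val h = 1e-3
    while (r < 20.0) { y = rk4Step(r, y, h); r += h }
    println("phi(20) = ${y[0]}, K(20) = ${y[2]}")
}
\end{verbatim} The trial parameters $(a,b)$ must then be tuned, e.g., by bisection, until the integrated profiles approach $\phi \to 0$ and $K \to 1$ at large $r$, reproducing the behaviour described above for Fig.~\ref{fig1}.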
3 -1 V 3 -1 V 3 -1 V 2 -1 V 3 -1 V 3 -1 V 3 -1 V 3 -1 V 2 -1 V 3 -1 V 3 -1 V 3 -1 V 3 -1 V 2 -1 V 3 -1 V 3 -1 V 3 -1 V 3 -1 V 2 -1 V 3 -1 V 3 -1 V 3 -1 V 3 -1 V 2 -1 V 3 -1 V 3 -1 V 3 -1 V 3 -1 V 2 -1 V 3 -1 V 3 0 V 3 -1 V 3 -1 V 2 -1 V 3 -1 V 3 -1 V 3 -1 V 3 0 V 2 -1 V 3 -1 V 3 -1 V 3 -1 V 3 -1 V 3 0 V 2 -1 V 3 -1 V 3 -1 V 3 -1 V 3 0 V 2 -1 V 3 -1 V 3 -1 V 3 0 V 3 -1 V 2 -1 V 3 -1 V 3 0 V 3 -1 V 3 -1 V 2 0 V 3 -1 V 3 -1 V 3 0 V 3 -1 V 2 -1 V 3 0 V 3 -1 V 3 -1 V 3 0 V 2 -1 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 -1 V 3 0 V 3 -1 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 -1 V 2 0 V currentpoint stroke M 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 1 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 3 1 V 2 0 V 3 1 V 3 0 V 3 0 V 3 1 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 1 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 1 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 1 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 1 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 1 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 1 V 2 0 V 3 1 V 3 0 V 3 0 V 3 1 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 1 V 2 0 V 3 1 V 3 0 V 3 0 V 3 1 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 1 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 1 V 2 0 V 3 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 1 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 1 V 3 0 V 3 1 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 1 V 3 0 V 3 1 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 1 V 2 0 V currentpoint stroke M 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 1 
V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 1.000 UL LT0 450 400 M 3 9 V 3 9 V 2 9 V 3 9 V 3 9 V 3 9 V 3 9 V 2 9 V 3 9 V 3 9 V 3 10 V 3 9 V 2 9 V 3 9 V 3 9 V 3 9 V 3 9 V 2 9 V 3 9 V 3 9 V 3 9 V 3 9 V 2 10 V 3 9 V 3 9 V 3 9 V 3 9 V 2 9 V 3 9 V 3 9 V 3 10 V 3 9 V 2 9 V 3 9 V 3 9 V 3 9 V 3 10 V 3 9 V 2 9 V 3 9 V 3 9 V 3 9 V 3 10 V 2 9 V 3 9 V 3 9 V 3 9 V 3 10 V 2 9 V 3 9 V 3 9 V 3 9 V 3 10 V 2 9 V 3 9 V 3 9 V 3 9 V 3 10 V 2 9 V 3 9 V 3 9 V 3 9 V 3 9 V 2 10 V 3 9 V 3 9 V 3 9 V 3 9 V 2 9 V 3 9 V 3 9 V 3 9 V 3 9 V 2 9 V 3 9 V 3 9 V 3 9 V 3 9 V 2 8 V 3 9 V 3 9 V 3 9 V 3 9 V 2 8 V 3 9 V 3 8 V 3 9 V 3 9 V 2 8 V 3 8 V 3 9 V 3 8 V 3 9 V 2 8 V 3 8 V 3 8 V 3 8 V 3 8 V 2 8 V 3 8 V 3 8 V 3 8 V 3 8 V 2 7 V 3 8 V 3 8 V 3 7 V 3 7 V 3 8 V 2 7 V 3 7 V 3 7 V 3 8 V 3 7 V 2 6 V 3 7 V 3 7 V 3 7 V 3 6 V 2 7 V 3 6 V 3 7 V 3 6 V 3 6 V 2 6 V 3 6 V 3 6 V 3 6 V 3 6 V 2 6 V 3 5 V 3 6 V 3 5 V 3 5 V 2 6 V 3 5 V 3 5 V 3 5 V 3 5 V 2 4 V 3 5 V 3 5 V 3 4 V 3 4 V 2 5 V 3 4 V 3 4 V 3 4 V 3 4 V 2 4 V 3 3 V 3 4 V 3 4 V 3 3 V 2 3 V 3 4 V 3 3 V 3 3 V 3 3 V 2 3 V 3 2 V 3 3 V 3 3 V 3 2 V 2 2 V 3 3 V 3 2 V 3 2 V 3 2 V 2 2 V 3 2 V 3 1 V 3 2 V 3 2 V 2 1 V 3 1 V 3 2 V 3 1 V 3 1 V 3 1 V 2 1 V 3 1 V 3 0 V 3 1 V 3 0 V 2 1 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 -1 V 3 0 V 3 -1 V 3 -1 V 3 -1 V 2 0 V 3 -1 V 3 -1 V 3 -2 V 3 -1 V 2 -1 V 3 -2 V 3 -1 V 3 -2 V 3 -1 V 2 -2 V 3 -2 V 3 -2 V 3 -2 V 3 -2 V 2 -2 V 3 -2 V 3 -3 V 3 -2 V 3 -3 V 2 -2 V 3 -3 V 3 -3 V 3 -2 V 3 -3 V 2 -3 V 3 -3 V 3 -3 V 3 -4 V 3 -3 V 2 -3 V 3 -4 V 3 -3 V 3 -4 V 3 -3 V 2 -4 V 3 -4 V 3 -4 V 3 -4 V 3 -4 V 2 -4 V 3 -4 V 3 -4 V 3 -5 V 3 -4 V 2 -4 V 3 -5 V 3 -4 V 3 -5 V 3 -5 V 3 -4 V 2 -5 V 3 -5 V 3 -5 V 3 -5 V 3 -5 V 2 -5 V 3 -5 V 3 -5 V 3 -5 V 3 -6 V 2 -5 V 3 -5 V 3 -6 V 3 -5 V 3 -6 V 2 -5 V 3 -6 V 3 -6 V 3 -5 V 3 -6 V 2 -6 V 3 -5 V 3 -6 V 3 -6 V 3 -6 V 2 -6 V 3 -6 V 3 -6 V 3 -6 V 3 -6 V 2 -6 V 3 -6 V 3 -6 V 3 -6 V 3 -6 V 2 -6 V 3 -7 V 3 -6 V 3 -6 V 3 -6 V 2 -6 V 3 -7 V 3 -6 V 3 -6 V 3 -6 V 2 -6 V 3 -7 V 3 -6 V 3 -6 V 3 -6 V 2 -7 V 3 -6 V 3 -6 V 3 -6 V 3 -7 V 2 -6 V 3 -6 V 3 -6 V 3 -6 V 3 -6 V 2 -7 V 3 -6 V 3 -6 V 3 -6 V 3 -6 V 2 -6 V 3 -6 V 3 -6 V 3 -6 V 3 -6 V 3 -6 V 2 -6 V 3 -6 V 3 -6 V 3 -6 V 3 -6 V 2 -6 V 3 -6 V 3 -5 V 3 -6 V 3 -6 V 2 -5 V 3 -6 V 3 -6 V 3 -5 V 3 -6 V 2 -5 V 3 -6 V 3 -5 V 3 -6 V 3 -5 V 2 -6 V 3 -5 V 3 -5 V 3 -6 V 3 -5 V 2 -5 V 3 -5 V 3 -5 V 3 -6 V 3 -5 V 2 -5 V 3 -5 V 3 -5 V 3 -5 V 3 -5 V 2 -4 V 3 -5 V 3 -5 V 3 -5 V 3 -4 V 2 -5 V 3 -5 V 3 -4 V 3 -5 V 3 -4 V 2 -5 V 3 -4 V 3 -5 V 3 -4 V 3 -5 V 2 -4 V 3 -4 V 3 -4 V 3 -5 V 3 -4 V 2 -4 V 3 -4 V 3 -4 V 3 -4 V 3 -4 V 2 -4 V 3 -4 V 3 -4 V 3 -4 V 3 -3 V 2 -4 V 3 -4 V 3 -4 V 3 -3 V 3 -4 V 3 -4 V 2 -3 V 3 -4 V 3 -3 V 3 -4 V 3 -3 V 2 -4 V currentpoint stroke M 3 
-3 V 3 -3 V 3 -4 V 3 -3 V 2 -3 V 3 -3 V 3 -4 V 3 -3 V 3 -3 V 2 -3 V 3 -3 V 3 -3 V 3 -3 V 3 -3 V 2 -3 V 3 -3 V 3 -3 V 3 -3 V 3 -3 V 2 -2 V 3 -3 V 3 -3 V 3 -3 V 3 -2 V 2 -3 V 3 -3 V 3 -2 V 3 -3 V 3 -2 V 2 -3 V 3 -2 V 3 -3 V 3 -2 V 3 -3 V 2 -2 V 3 -3 V 3 -2 V 3 -2 V 3 -3 V 2 -2 V 3 -2 V 3 -2 V 3 -3 V 3 -2 V 2 -2 V 3 -2 V 3 -2 V 3 -2 V 3 -3 V 2 -2 V 3 -2 V 3 -2 V 3 -2 V 3 -2 V 2 -2 V 3 -2 V 3 -2 V 3 -1 V 3 -2 V 2 -2 V 3 -2 V 3 -2 V 3 -2 V 3 -1 V 3 -2 V 2 -2 V 3 -2 V 3 -1 V 3 -2 V 3 -2 V 2 -1 V 3 -2 V 3 -2 V 3 -1 V 3 -2 V 2 -2 V 3 -1 V 3 -2 V 3 -1 V 3 -2 V 2 -1 V 3 -2 V 3 -1 V 3 -2 V 3 -1 V 2 -2 V 3 -1 V 3 -1 V 3 -2 V 3 -1 V 2 -1 V 3 -2 V 3 -1 V 3 -1 V 3 -2 V 2 -1 V 3 -1 V 3 -1 V 3 -2 V 3 -1 V 2 -1 V 3 -1 V 3 -2 V 3 -1 V 3 -1 V 2 -1 V 3 -1 V 3 -1 V 3 -2 V 3 -1 V 2 -1 V 3 -1 V 3 -1 V 3 -1 V 3 -1 V 2 -1 V 3 -1 V 3 -1 V 3 -1 V 3 -1 V 2 -1 V 3 -1 V 3 -1 V 3 -1 V 3 -1 V 2 -1 V 3 -1 V 3 -1 V 3 -1 V 3 -1 V 2 0 V 3 -1 V 3 -1 V 3 -1 V 3 -1 V 2 -1 V 3 -1 V 3 0 V 3 -1 V 3 -1 V 3 -1 V 2 -1 V 3 -1 V 3 0 V 3 -1 V 3 -1 V 2 -1 V 3 0 V 3 -1 V 3 -1 V 3 -1 V 2 0 V 3 -1 V 3 -1 V 3 0 V 3 -1 V 2 -1 V 3 0 V 3 -1 V 3 -1 V 3 0 V 2 -1 V 3 -1 V 3 0 V 3 -1 V 3 -1 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 -1 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 -1 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 -1 V 3 0 V 3 -1 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 -1 V 3 0 V 3 -1 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 -1 V 3 0 V 3 -1 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 -1 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 -1 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 -1 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 0 V 3 -1 V 2 0 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 -1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 -1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V currentpoint stroke M 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 
V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 1.000 UL LT0 450 2060 M 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 -1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 -1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 -1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 -1 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 -1 V 3 0 V 3 -1 V 3 0 V 3 -1 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 -1 V 3 0 V 3 -1 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 -1 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 -1 V 3 0 V 3 -1 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 -1 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 -1 V 3 0 V 3 -1 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 -1 V 3 0 V 3 -1 V 2 -1 V 3 0 V 3 -1 V 3 0 V 3 -1 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 -1 V 3 0 V 3 -1 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 -1 V 3 0 V 3 -1 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 -1 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 -1 V 3 0 V 3 0 V 3 0 V 3 -1 V 2 0 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 -1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 -1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 -1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V currentpoint stroke M 3 1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 
3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 1 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 1 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V currentpoint stroke M 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 1 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 1 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 1 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 0 V 3 0 V 3 0 V 2 0 V 3 0 V 3 1 V stroke grestore end showpage } \put(1850,150){\makebox(0,0){\Large${vr}$}} \put(3550,1230){% \special{ps: gsave currentpoint currentpoint translate 270 rotate neg exch neg exch translate}% \makebox(0,0)[b]{\Large\shortstack{${K}$}}% \special{ps: currentpoint grestore moveto}% } \put(100,1230){% \special{ps: gsave currentpoint currentpoint translate 270 rotate neg exch neg exch translate}% \makebox(0,0)[b]{\Large\shortstack{${{\phi}/{v}}$}}% \special{ps: currentpoint grestore moveto}% } \put(3300,2060){\makebox(0,0)[l]{1}} \put(3300,1728){\makebox(0,0)[l]{0.8}} \put(3300,1396){\makebox(0,0)[l]{0.6}} \put(3300,1064){\makebox(0,0)[l]{0.4}} 
\vspace{3mm}
\caption{Plots of monopole-bubbles for $\lambda=1$ and $e=0.3$. The dashed
lines correspond to the scalar amplitude $\phi(r)/v$ and the gauge field
$K(r)$ of a thin-wall monopole-bubble ($\alpha=0.15$), and the solid lines
to those of a thick-wall monopole-bubble ($\alpha=0.35$).}
\label{fig1}
\end{figure}

For small $r$ we attempt a power series solution:
\begin{eqnarray}\label{pr0}
\phi(r)\approx\phi_{0}\biggl[r-\frac{1}{10}(4\kappa_{0}
-m^{2})r^{3}+\cdots\biggr],
\end{eqnarray}
which is formally odd under $r\rightarrow -r$, and
\begin{eqnarray}\label{kr0}
K(r)\approx 1-\kappa_{0}r^{2}+\frac{1}{10}(3\kappa^{2}_{0}+
e^{2}\phi_{0}^{2})r^{4}+\cdots,
\end{eqnarray}
which is formally even under $r\rightarrow -r$. Here $m$ denotes the mass of
scalar particles at the symmetric vacuum, i.e.,
$m=\sqrt{d^{2}V/d\phi^{2}|_{\phi=0}}=\sqrt{2\lambda(1-2\alpha)}\,v$. If
$\kappa_{0}$ is negative, then Eq.~(\ref{meq2}) says that $K(r)$ is a
monotonically increasing function bounded below 1. This means that a
solution with negative $\kappa_{0}$ cannot satisfy the boundary condition
(\ref{bcinf}) at spatial infinity, and therefore $\kappa_{0}$ must be
positive.

A characteristic of this new monopole-bubble, which distinguishes it from
the ordinary bounce configuration, is the matter lump at the center of the
bubble. If we look into the behavior of the energy density for small $r$ by
use of the formulas (\ref{pr0}) and (\ref{kr0}), we find
\begin{eqnarray}\label{ener2}
T^{0}_{\;0}&=& \frac{6\kappa_{0}^{2}}{e^{2}}+\frac{3\phi_{0}^{2}}{2}
+\lambda\alpha v^4
+\Biggl[m^{2}\phi_{0}^{2}
-2\kappa_{0}\biggl(\frac{4\kappa_{0}^2}{e^{2}}+3\phi_{0}^{2}\biggr)
\Biggr]r^{2}+\cdots.
\end{eqnarray}
The first term in Eq.~(\ref{ener2}) is always positive; it stems from the
winding between the 3-dimensional space and the SO(3) internal space, and it
is evidence of the formation of an extended object at the center of the
bubble. The behavior of the energy density at the center depends on the sign
of the second term in Eq.~(\ref{ener2}). If it is positive,
$T^{0}_{\;0}(r)$ increases from $r=0$, reaches a maximum, and then
decreases, as shown by the solid line of Fig.~\ref{fig2}. This structure is
like a domain wall of width $1/m_{\rm H}\sim 1/\sqrt{8\lambda(1+\alpha)}\,v$.
In the case of global monopole-bubbles, the existence of this wall is
automatic, since $\kappa_{0}$ is zero and the second term is always positive
\cite{Kim,KMS}. However, when the gauge coupling is sufficiently large so as
to make the second term negative, the strong gauge repulsion can sweep the
monopole wall away, and the energy density is a decreasing function near the
origin, as shown by the dashed line of Fig.~\ref{fig2}.
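The coefficients in Eqs.~(\ref{pr0}), (\ref{kr0}) and (\ref{ener2}) can be
checked order by order. The following is a minimal sympy sketch of such a
check; it assumes the field equations in the form implied by the reduced
action (\ref{action2}) below, $\phi''+(2/r)\phi'=2K^{2}\phi/r^{2}+dV/d\phi$
and $K''=K(K^{2}-1)/r^{2}+e^{2}\phi^{2}K$, together with a potential
$V=(\lambda/v^{2})(\phi^{2}+\alpha v^{2})(\phi^{2}-v^{2})^{2}$ chosen to be
consistent with $\Delta V=\lambda\alpha v^{4}$ and
$m^{2}=2\lambda(1-2\alpha)v^{2}$ quoted in the text.
\begin{verbatim}
# Sketch: verify the small-r series (pr0), (kr0) and the energy density
# expansion (ener2); the field equations and the potential V are the
# assumptions stated in the text above.
import sympy as sp

r, f = sp.symbols('r f', positive=True)
phi0, kap0, e, lam, a, v = sp.symbols('phi0 kappa0 e lambda alpha v',
                                      positive=True)
m2 = 2*lam*(1 - 2*a)*v**2                 # m^2 = d^2 V/d phi^2 at phi = 0
phi = phi0*(r - sp.Rational(1, 10)*(4*kap0 - m2)*r**3)          # Eq. (pr0)
K = 1 - kap0*r**2 + sp.Rational(1, 10)*(3*kap0**2
                                        + e**2*phi0**2)*r**4    # Eq. (kr0)

V = lam/v**2*(f**2 + a*v**2)*(f**2 - v**2)**2
dV = sp.diff(V, f).subs(f, phi)

eq1 = sp.diff(phi, r, 2) + 2/r*sp.diff(phi, r) - 2*K**2*phi/r**2 - dV
eq2 = sp.diff(K, r, 2) - K*(K**2 - 1)/r**2 - e**2*phi**2*K
# the residuals begin only at the first neglected orders, r^3 and r^4
assert sp.series(sp.expand(eq1), r, 0, 3).removeO() == 0
assert sp.series(sp.expand(eq2), r, 0, 4).removeO() == 0

# the energy density built from the integrand of (action2) gives (ener2)
T00 = (sp.diff(phi, r)**2/2 + K**2*phi**2/r**2 + V.subs(f, phi)
       + (1 - K**2)**2/(2*e**2*r**4) + sp.diff(K, r)**2/(e**2*r**2))
lead = sp.series(sp.expand(T00), r, 0, 3).removeO()
target = (6*kap0**2/e**2 + sp.Rational(3, 2)*phi0**2 + lam*a*v**4
          + (m2*phi0**2 - 2*kap0*(4*kap0**2/e**2 + 3*phi0**2))*r**2)
assert sp.expand(lead - target) == 0
\end{verbatim}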
\begin{figure}
% [Figure 2: gnuplot-generated plot omitted from source; horizontal axis
%  $vr$ (0--8), vertical axis $T^{0}_{0}/e^{2}v^{4}$ (0--1.6).]
\vspace{3mm}
\caption{Profiles of the energy density for $\lambda=1$ and $\alpha=0.35$.
The solid line stands for a monopole-bubble with a monopole wall ($e=0.1$),
and the dashed line for a monopole-bubble without a monopole wall
($e=0.65$).}
\label{fig2}
\end{figure}

In the asymptotic region of large $r$, the scalar field approaches the
boundary value in Eq.~(\ref{bcinf}) exponentially,
\begin{eqnarray}\label{pinf}
\phi\approx\phi_{\infty}\frac{1+mr}{(mr)^{2}}e^{-mr}.
\end{eqnarray}
The gauge field has a long-range power tail because the gauge boson becomes
massless in the symmetric phase,
\begin{eqnarray}\label{kinf}
K(r)\approx 1-\frac{\kappa_{\infty}}{r}+\frac{3\kappa^{2}_{\infty}}
{4r^{2}}+\cdots,
\end{eqnarray}
where $\kappa_{\infty}$ should also be nonnegative, for the same reason as
$\kappa_{0}$ in Eq.~(\ref{kr0}). Outside the bubble wall, the vacuum is in
the symmetric phase with $\big<\phi\big>=0$, and the SO(3) gauge field is
massless.
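The coefficient $3\kappa_{\infty}^{2}/4$ in Eq.~(\ref{kinf}) can be checked
in the same way: outside the wall $\phi\approx0$, so the gauge-field
equation reduces to $K''=K(K^{2}-1)/r^{2}$ (again assuming the form implied
by the reduced action (\ref{action2}) below), and a short sympy sketch
confirms the tail order by order:
\begin{verbatim}
# Sketch: the tail (kinf) solves K'' = K(K^2 - 1)/r^2, the phi -> 0 limit
# of the gauge-field equation assumed above; residual starts at O(1/r^5).
import sympy as sp

r, u, kinf = sp.symbols('r u kappa_inf', positive=True)
K = 1 - kinf/r + sp.Rational(3, 4)*kinf**2/r**2    # Eq. (kinf), truncated

residual = sp.expand(sp.diff(K, r, 2) - K*(K**2 - 1)/r**2)
# expand in u = 1/r and confirm that all terms up to u^4 cancel
assert sp.series(residual.subs(r, 1/u), u, 0, 5).removeO() == 0
\end{verbatim}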
The magnetic field in this exterior region looks like that of a dipole:
\begin{eqnarray}
F^{a}_{ij}\sim\frac{1}{e}(\epsilon^{ajk}\hat{r}^{i}-\epsilon^{aik}\hat{r}^{j}
)\frac{\hat{r}^{k}}{r}\frac{dK}{dr}\sim {\cal O} (1/r^{3}).
\end{eqnarray}
The corresponding leading term of the energy density, $T^{0}_{\;0}-V(0)$, in
Eq.~(\ref{ener}) is of order $1/r^{6}$, and this fast decay can also be read
off from Fig.~\ref{fig2}. This means that the monopole at the center is
screened by the bubble wall.

In the remainder of this section we present analytic arguments about the
properties of gauged monopole-bubbles under the thin-wall assumption. If the
energy gap between the false vacuum and the true vacuum, $\Delta V\equiv
V(0)-V(v)=\lambda\alpha v^{4}$, is small enough, the bubble radius becomes
much larger than both the monopole size and the width of the bubble wall,
and we may adopt the thin-wall approximation. The typical scale of the
monopole core or of the outer wall is set by the configuration of the Higgs
field, $\sim 1/m_{\rm H}$, and by the configuration of the gauge field,
$\sim 1/ev$. Here we assume that both scales are much smaller than the
bubble radius, and therefore we implicitly exclude the weak coupling case
$e\ll1$.

First, let us consider the behavior of the fields near the maximum scalar
amplitude $\phi_{\rm max}$, which is close to $v$ (see the solid lines in
Fig.~\ref{fig1}). Around $\phi=\phi_{\rm max}$, the derivative of the scalar
field ${d\phi}/{dr}$ is close to zero, and the slope of the potential
${dV}/{d\phi}$ is nearly zero as well. Therefore, if the corresponding value
of $K$ at the turning point $r_{\rm turn}$ with
$\phi(r_{\rm turn})=\phi_{\rm max}$ is negligibly small, which will indeed
turn out to be the case, the scalar field equation (\ref{meq1}) implies that
the second derivative of the potential $d^{2}V/d\phi^{2}$ is also very small
near $\phi_{\rm max}$. Thus $\phi$ stays near $\phi_{\rm max}$ over a long
range of $r$. In this case, the first term on the right-hand side of
Eq.~(\ref{meq2}) becomes negligible near $\phi\approx\phi_{\rm max}$, since
$r_{\rm turn}$ is large enough. Therefore, $K$ falls to zero exponentially,
i.e., $K\sim e^{-e\phi_{\rm max}r}$, where the scale
$e\phi_{\rm max}\approx ev$ in the exponential is nothing but the mass of
the gauge boson in the Higgs phase. We can also confirm that $\phi$
approaches the true vacuum expectation value $v$ exponentially, i.e.,
$\phi\approx v-\phi_{\rm turn}\,{e^{-m_{\rm H}r}}/{m_{\rm H}r}$, where
$\phi_{\rm turn}$ is a constant to be fixed by the boundary conditions
(\ref{bc0}) and (\ref{bcinf}).

Secondly, we discuss how the energy density is distributed in the region
between the monopole core and the bubble wall. Because this region is in a
symmetry-broken phase with breaking pattern SO(3)$\rightarrow$SO(2), one of
the three internal gauge degrees of freedom remains massless, and it is to
be identified as the photon field described by
\begin{eqnarray}\label{abel}
F_{\mu\nu}=\frac{\phi^{a}}{\phi}F^{a}_{\mu\nu}+\frac{1}{e\phi^{3}}
\epsilon_{abc}\phi^{a}(D_{\mu}\phi)^{b}(D_{\nu}\phi)^{c}.
\end{eqnarray}
Inserting the monopole ansatz (\ref{an1}) into Eq.~(\ref{abel}), we obtain a
vanishing electric field but a magnetic field with the monopole charge
$g=1/e$, i.e., $F_{ij}=\epsilon^{ija}\hat{r}^{a}/er^{2}$.
Since $\phi\approx v$ and $K\approx 0$ around $r=r_{\rm turn}$, one easily
sees that the magnetic field comes from the first term of Eq.~(\ref{abel}),
and one can then identify the matter lump produced at the center as a 't
Hooft-Polyakov monopole \cite{HP}. If we look at the expression for the
energy density (\ref{ener}) near $\phi=\phi_{\rm max}\sim v$,
\begin{eqnarray}\label{mener}
T^{0}_{\;0}\sim\frac{1}{2 e^{2}r^{4}}+V(v),
\end{eqnarray}
the leading term is interpreted as the magnetic energy of a 't
Hooft-Polyakov monopole, and its total energy is finite once we leave out
the contribution of the latent-heat term $V(v)$. This differs from the
global monopole-bubble case, where the produced global monopole contributes
a $1/r^{2}$ term to the energy density: here this scalar phase term, which
would make the energy diverge, is eaten up by the gauge field.

Finally, we argue how the gauge field affects the bubble size. Let us
estimate the radius by Coleman's action-minimum method \cite{Col,Lin2}.
After assuming spherical symmetry and the hedgehog configuration
(\ref{an1}), the action (\ref{action}) is reduced to
\begin{equation}\label{action2}
S=4\pi\beta\int^{\infty}_{0}r^2dr
\left\{\frac12{\phi'}^2+{K^2\phi^2\over r^2}+V
+{(1-K^2)^2\over 2e^2r^4}+{{K'}^2\over e^2r^2}\right\}.
\end{equation}
By the thin-wall assumption, the core is approximately described as a 't
Hooft-Polyakov monopole, and we may approximate $\phi=v$ and $K=0$ between
the monopole core and the bubble wall. Then the difference between the
action for a gauged monopole-bubble and that for the false vacuum is
\begin{eqnarray}
B_{\rm lm}&\equiv&S(\phi)-S(\phi=0) \nonumber\\
&=&\beta\left(-{4\pi\over 3}R^3\Delta V+4\pi\sigma_{\phi}R^2+M_K
-{2\pi\over e^2R}\right)+{\rm constant~core~term},
\end{eqnarray}
where $R$ is the bubble radius, and $\sigma_{\phi}$ and $M_K$ are constants
defined as
\begin{equation}
\sigma_{\phi}\equiv\frac12\int^{R+\epsilon}_{R-\epsilon}dr\left({d\phi\over
dr}\right)^2,
~~~
M_K\equiv{4\pi\over e^2}\int^{R+\epsilon}_{R-\epsilon}dr
\left({dK\over dr}\right)^2.
\end{equation}
Similarly, for a normal bubble and a global monopole-bubble, we obtain
\begin{eqnarray}
B_{\rm b}&=&\beta\left(-{4\pi\over 3}R^3\Delta V+4\pi\sigma_{\phi}R^2\right),
\\
B_{\rm gm}&=&\beta\left(-{4\pi\over 3}R^3\Delta V
+4\pi\sigma_{\phi}R^2+4\pi v^2R\right)+{\rm constant~core~term}.
\end{eqnarray}
Each radius is determined by the condition $dB/dR=0$:
\begin{equation}
R_{\rm b}={2\sigma_{\phi}\over\Delta V},
~~~
R_{\rm gm}\approx R_{\rm b}+{v^2\over 2\sigma_{\phi}}.
\end{equation}
What we need is an inequality among $R_{\rm b},~R_{\rm lm},~R_{\rm gm}$
rather than a complicated expression for $R_{\rm lm}$. Thus we evaluate
\begin{equation}\label{dB}
{dB_{\rm lm}\over dR}\Big|_{R=R_{\rm b}}
={2\pi\beta\over e^2R_{\rm b}^2}>0,
~~~
{dB_{\rm lm}\over dR}\Big|_{R=R_{\rm gm}}
=2\pi\beta\left({1\over 4e^2R_{\rm gm}^2}-v^2\right)<0,
\end{equation}
where the last inequality is supported by the initial assumption
$R\gg1/ev$. Because $R_{\rm lm}$ is a local maximum of $B_{\rm lm}$,
Eq.~(\ref{dB}) implies
\begin{equation}
R_{\rm b}<R_{\rm lm}<R_{\rm gm}.
\end{equation}
We may interpret this as saying that the bubble radius is mostly determined
by the energy inside the bubble.
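The first relation in Eq.~(\ref{dB}) is a simple piece of bookkeeping: at
$R=R_{\rm b}$ the volume and surface terms cancel, so only the magnetic term
survives. A minimal sympy sketch of this step, using the expressions for
$B_{\rm b}$ and $B_{\rm lm}$ displayed above:
\begin{verbatim}
# Sketch: R_b = 2 sigma_phi/Delta V and the first relation in Eq. (dB).
import sympy as sp

R, DV, sig, e, beta = sp.symbols('R DeltaV sigma_phi e beta', positive=True)
B_b = beta*(-sp.Rational(4, 3)*sp.pi*R**3*DV + 4*sp.pi*sig*R**2)
# dB_b/dR = 0 gives the ordinary thin-wall radius R_b = 2 sigma_phi/Delta V
assert 2*sig/DV in sp.solve(sp.diff(B_b, R), R)

# B_lm differs from B_b by R-independent pieces (M_K, core term) and by the
# magnetic term -2 pi beta/(e^2 R); only the latter contributes at R = R_b
B_lm = B_b - 2*sp.pi*beta/(e**2*R)
slope_at_Rb = sp.diff(B_lm, R).subs(R, 2*sig/DV)
assert sp.simplify(slope_at_Rb - 2*sp.pi*beta/(e**2*(2*sig/DV)**2)) == 0
\end{verbatim}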
\section{Nucleation Rate and Evolution}
We begin this section by comparing the nucleation rates of the bounce and of
gauged monopole-bubbles. In the semiclassical approximation, the decay rate
per unit volume is estimated from the leading exponential factor given by
the Euclidean action of the bubble solution~\cite{Col}, multiplied by a
prefactor determined by the zero modes of the fluctuations~\cite{CC}. When a
continuous symmetry or a part of it is spontaneously broken, the number of
zero modes increases and the tunneling rate into the vacua is
enhanced~\cite{Kus}. In the present model, the SO(3) gauge symmetry is
broken to an SO(2) symmetry and we have two bubble solutions; the relative
decay rate between the bounce $\phi^{a}_{\rm b}$ and the gauged
monopole-bubble $\phi^{a}_{\rm lm}$ is the meaningful quantity:
\begin{eqnarray}\label{decay2}
\frac{\Gamma_{\rm lm}}{\Gamma_{\rm b}}\sim
\left(\frac{S(\phi^{a}_{\rm lm})}{S(\phi^{a}_{\rm b})}\right)^{3/2}
\left|\frac{\det'[S''(\phi^{a}_{\rm lm})]}{\det'[S''(\phi^{a}_{\rm b})]}
\right|^{-1/2}\frac{\int d^3x (\phi^{a}_{\rm lm})^2}{\int d^3x
(\phi^{a}_{\rm b})^2}e^{-[S(\phi^{a}_{\rm lm})-S(\phi^{a}_{\rm b})]},
\end{eqnarray}
where the primed determinants denote infinite products over the nonzero
eigenvalues, which are assumed here to be of order 1. The value of the
action in Eq.~(\ref{decay2}) is estimated after removing the constant vacuum
energy density of the metastable symmetric vacuum, i.e., $S(\phi=0)=0$ in
Eq.~(\ref{decay2}).

In order to see the leading effect, we plot in Fig.~\ref{fig3} the values of
the Euclidean actions for a bounce, for gauged monopole-bubbles ($e=0.3$ and
$0.91$), and for a global monopole-bubble ($e=0$). As expected, the bounce
always has the minimum action irrespective of the thickness of the bubble
wall (or, equivalently, of the parameter $\alpha$) (see Fig.~\ref{fig3}), so
that it forms the dominant decay channel (see Table~\ref{table1}). The
difference of the Euclidean actions is almost constant for a given gauge
coupling $e$, and it is roughly the monopole mass multiplied by the inverse
temperature. The values of the Euclidean action decrease as the gauge
coupling increases. If we recall the mass formula $4\pi v/e$ of the BPS
monopole of unit winding~\cite{Bog}, one easily sees that this gap decreases
in the strong coupling limit. This behavior can be understood by rescaling
the variables to dimensionless ones: $\tilde{t}_{E}=evt_{E}$, $\rho=evr$,
$\phi=vh$, $\tilde{V}=\tilde{\lambda}(h^{2}+\alpha)(h^{2}-1)^{2}$, and
$\tilde{\lambda}=\lambda/e^{2}$. Then the action is rewritten as
$S\sim(4\pi v\beta/e)\times ({\rm dimensionless}\;{\rm energy})$ (a worked
form of this rescaling is displayed below), which reflects the decrease of
the monopole mass at strong gauge coupling.

If we take into account the enhancement due to the prefactor in
Eq.~(\ref{decay2}), we may even obtain a strikingly large relative decay
rate in the high-temperature limit, e.g., $\Gamma_{\rm lm}/\Gamma_{\rm b} >
1$ when $v\beta=0.1$ and $\alpha=0.4$. However, since the transition becomes
only weakly first-order at such high temperatures, this result does not
imply that decay through a monopole-bubble can be the dominant decay
channel. Instead, it means that at high temperature ($v\beta\sim 1$) the
relative production rate is considerable for thick-wall bubbles
($\alpha \approx 0.4$) with strong gauge coupling ($e \sim 1$) (see
Table~\ref{table1}). For reference we also give the values of the Euclidean
action for the global monopole-bubbles at $e=0$; these are always larger
than those of the gauged monopole-bubbles because of the long energy tail
proportional to the radius of the global monopole-bubble.
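To make the rescaling referred to above explicit, substituting $\rho=evr$,
$\phi=vh$, and $V=e^{2}v^{4}\tilde{V}$ into the reduced action
(\ref{action2}) gives, as a straightforward worked form under the potential
conventions quoted above,
\begin{displaymath}
S=\frac{4\pi v\beta}{e}\int^{\infty}_{0}\rho^{2}d\rho
\left\{\frac12\left(\frac{dh}{d\rho}\right)^{2}
+\frac{K^{2}h^{2}}{\rho^{2}}+\tilde{V}
+\frac{(1-K^{2})^{2}}{2\rho^{4}}
+\frac{1}{\rho^{2}}\left(\frac{dK}{d\rho}\right)^{2}\right\},
\end{displaymath}
so the prefactor $4\pi v\beta/e$ multiplies a purely dimensionless energy.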
\begin{figure}
[Plot omitted: Euclidean actions $S/v\beta$ versus $\alpha$ for $S(\phi^{a}_{\rm gm})$, $S(\phi^{a}_{\rm lm};~e=0.3)$, $S(\phi^{a}_{\rm lm};~e=0.91)$, and $S(\phi^{a}_{\rm b})$.]
\caption{Plots of Euclidean actions $S/v\beta$ versus $\alpha$ for various bubble solutions.
The dashed line corresponds to a global monopole-bubble ($e=0$), the two solid lines to gauged monopole-bubbles with $e=0.3$ and $0.91$, and the dotted line to an ordinary bounce. Here $\lambda$ is set to 1.}
\label{fig3}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{c|ccc}
\hline\hline
$\alpha$ & 0.2 & 0.3 & 0.4\\
\hline\hline
$(\Gamma_{\rm gm}/\Gamma_{\rm b})|_{e=0}$ & $1\times 10^{-15}$ & $6\times 10^{-9}$ & $1\times 10^{-4}$ \\
$(\Gamma_{\rm lm}/\Gamma_{\rm b})|_{e=0.3}$ & $6\times 10^{-14}$ & $2\times 10^{-8}$ & $1\times 10^{-4}$ \\
$(\Gamma_{\rm lm}/\Gamma_{\rm b})|_{e=0.91}$ & $4\times 10^{-8}$ & $2\times 10^{-6}$ & $8\times 10^{-4}$\\
\hline
\end{tabular}
\end{center}
\caption{The values of the relative decay rates, $\Gamma_{\rm gm}/\Gamma_{\rm b}$ and $\Gamma_{\rm lm}/\Gamma_{\rm b}$, for various $\alpha$ with $\lambda=1$ and $v\beta=1$, when the ratio of determinant factors is set to 1.}
\label{table1}
\end{table}
Completion of the first-order phase transition is achieved by the growth and percolation of nucleated bubbles. The actual process is complicated because our O(3) bounces and gauged monopole-bubbles are generated at high temperature. However, a simple and presumably reliable approach is to analyze the time-dependent field equations with appropriate initial configurations. The obtained Euclidean solution is time-independent both in Euclidean and in Lorentzian spacetime, and therefore does not evolve by itself. Hence we add small radial fluctuations to the static solution as the initial configuration, and solve the time-dependent field equations numerically. If the bubble radius is smaller than the critical radius, both the bounce and the gauged monopole-bubble collapse, and so does the magnetic monopole at the center. On the other hand, as Fig.~\ref{fig4} shows, if the radius is larger than the critical radius, the bubble wall starts to grow and the magnetic monopole remains stable, undergoing a small damped oscillation; the thick wall of the monopole-bubble then develops into a thin wall. We also find that the velocity of the wall approaches the speed of light. Of course, the inclusion of radiation is needed to describe the bubble dynamics more precisely. Here we give only a short comment on this topic: since there are three massless gauge bosons outside the bubble wall but only one photon inside it, the resulting pressure difference may decrease the terminal velocity of the wall.
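The following is a minimal numerical sketch of this procedure (hypothetical illustration code, not the computation used for Fig.~\ref{fig4}): it integrates only the radial scalar equation in the dimensionless variables introduced above, omits the gauge field $K$, and replaces the static solution by a tanh-shaped trial wall; the grid sizes and parameter values are purely illustrative.
\begin{verbatim}
import numpy as np

# Leapfrog evolution of a spherical bubble wall (scalar sector only;
# the gauge field K is omitted).  Dimensionless variables as in the
# text: rho = e*v*r, tau = e*v*t, h = phi/v, lam = lambda/e^2, and
# V(h) = lam*(h**2 + alpha)*(h**2 - 1)**2.
lam, alpha = 1.0, 0.4
drho, dtau = 0.02, 0.01            # dtau < drho for stability
rho = np.arange(drho, 40.0, drho)

def dV(h):                         # dV/dh
    return lam*(2*h*(h**2 - 1)**2 + 4*h*(h**2 + alpha)*(h**2 - 1))

# tanh-shaped trial wall of radius R0 standing in for the static
# profile: h ~ 1 (broken phase) inside, h ~ 0 (symmetric) outside
R0, w = 6.0, 1.0
h = 0.5*(1.0 - np.tanh((rho - R0)/w))
h_old = h.copy()                   # zero initial velocity

for step in range(2000):           # evolve up to tau = 20
    lap = np.zeros_like(h)
    lap[1:-1] = ((h[2:] - 2*h[1:-1] + h[:-2])/drho**2
                 + (h[2:] - h[:-2])/(rho[1:-1]*drho))
    lap[0] = lap[1]                # crude regularity condition at rho=0
    h, h_old = 2*h - h_old + dtau**2*(lap - dV(h)), h

# the wall (h = 1/2 crossing) expands if R0 exceeds the critical
# radius and collapses otherwise
print("wall radius ~", rho[np.argmin(np.abs(h - 0.5))])
\end{verbatim}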
\begin{figure}
[Plots omitted: (a) scalar amplitude $\phi/v$ versus $vr$ and (b) gauge field $K$ versus $vr$, each shown at $vt=0$, $2$, and $4$.]
\caption{Evolution of a gauged monopole-bubble with a thick wall: (a) scalar amplitude $\phi/v$ and (b) gauge field $K$, where $\lambda=1$, $e=0.3$, and $\alpha=0.4$.}
\label{fig4}
\end{figure}
\section{Concluding Remarks}
We have considered a first-order phase transition in the high temperature limit of a gauge theory whose symmetry breaking pattern is SO(3)$\rightarrow$SO(2), and found a new gauged monopole-bubble solution. It is distinguished from the known Euclidean bounce by the production of a 't Hooft--Polyakov monopole at the center of the bubble the moment it is nucleated. The production rate of monopole-bubbles is smaller than that of the bounce, but it is considerable for thick-wall bubbles with strong gauge coupling in the high temperature limit.
When the size of a nucleated bubble is larger than the critical size, the bubble wall starts to move outward, so the magnetic monopole remains stable inside the bubble, at least until bubble collision. In a theoretical sense, the existence of such a gauged monopole-bubble solution clearly demonstrates a possible drastic effect of the gauge field on bubble nucleation. Although it is important to understand the evolution of the gauged monopole-bubble through its zero modes, a stability analysis including angular fluctuations is left for future work. Whether these monopole-bubbles can be realized in a real material is not yet known. \section*{Acknowledgments} The authors would like to thank Kyoungtae Kimm for discussions. Numerical computation for this work was carried out at the Yukawa Institute Computer Facility. Y.K. wishes to acknowledge the financial support of the Korea Research Foundation made in the program year of 1997. N.S. was supported by JSPS Research Fellowships for Young Scientists. This work was partially supported by the Grant-in-Aid for Scientific Research Fund of the Ministry of Education, Science, Sports and Culture (No.\ 9702603 and No.\ 09740334) and by the Waseda University Grant for Special Research Projects.
\section{Introduction} Consider a statistical decision problem in which $\mathcal{X}$ is a sample space, $\Theta$ is a parameter space, and $\mathcal{P}$ is a statistical model $\{P_{\theta}:\theta\in\Theta\}$ such that for each $\theta\in\Theta$, $P_{\theta}$ is a probability measure on $\mathcal{X}$. Let $\mathcal{A}$ be an action space and $\mathcal{D}$ a decision space, comprising the whole set of measurable functions from $\mathcal{X}$ to $\mathcal{A}$. Let $L$ be a loss function $\Theta\times \mathcal{A}\to\mathbb{R}\cup\{+\infty\}$ and $R$ the corresponding risk function defined by $R(\theta,\delta)=\int L(\theta,\delta(x)) \mathrm{d}P_{\theta}(x)$ for every $\theta\in\Theta$ and every $\delta\in\mathcal{D}$. Our focus is on the use of $\varepsilon$-admissibility. For $\varepsilon>0$, $\varepsilon$-admissibility is defined as follows: an estimator $\delta$ is $\varepsilon$-admissible if and only if there exists no estimator $\tilde{\delta}$ such that for every $\theta\in\Theta$, $R(\theta,\tilde{\delta})<R(\theta,\delta)-\varepsilon$. In other words, $\delta$ is $\varepsilon$-admissible if no other estimator improves on $\delta$ by more than $\varepsilon$ uniformly over $\Theta$. For $\delta\in\mathcal{D}$, the infimum of possible values of $\varepsilon$ such that $\delta$ is $\varepsilon$-admissible is denoted by $\mathcal{R}(\Theta,\delta)$: $$\mathcal{R}(\Theta,\delta):=\sup_{\tilde{\delta}\in\mathcal{D}}\inf_{\theta\in\Theta}[R(\theta,\delta)-R(\theta,\tilde{\delta})].$$ If $\mathcal{R}(\Theta,\delta)=\varepsilon>0$, then $\delta$ is $\varepsilon$-admissible, and if $\delta$ is $\varepsilon$-admissible, then $\mathcal{R}(\Theta,\delta)\leq \varepsilon$. A smaller value of $\mathcal{R}(\Theta,\delta)$ is preferable. For further details, see \citet{BlackwellandGirshick(1954)}, \citet{Farrell(1968)}, \citet{Ferguson_Book}, \citet{Hartigan_Book}, and \citet{HeathandSudderth(1978)}. Although the concept of $\varepsilon$-admissibility was once widely studied in statistical decision theory, it has long been abandoned as a research topic. In the present paper, we emphasize the use of $\varepsilon$-admissibility as a criterion for comparing estimators in high-dimensional and nonparametric statistical models. We show, through two important examples, that by adding a comparison using the value of $\mathcal{R}(\Theta,\delta)$ to that using the minimax rate of convergence, the performance of estimators in high-dimensional and nonparametric statistical models can be more successfully compared. In high-dimensional and nonparametric models, the minimax rate of convergence in an asymptotic regime has been used to measure the performance of an estimator. For example, the minimax rate of convergence in the asymptotics in which the dimension $d$ of the parameter space $\Theta$ grows to infinity is defined as $d^{\alpha}$, where $\alpha$ is the number satisfying $0<\lim_{d\to\infty}\inf_{\delta\in\mathcal{D}}\sup_{\theta\in\Theta}R(\theta,\delta)/d^{\alpha}<\infty$. When using the minimax approach, the key criterion is whether or not the rate of convergence of an estimator matches the minimax rate of convergence (\citealp{Tsybakov_Book} and \citealp{Wasserman_Book}). However, the minimax rate of convergence often fails to clearly distinguish between estimators. In such cases, adding a comparison using $\varepsilon$-admissibility can be helpful.
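To fix ideas, the following toy computation (a hypothetical sketch in Python; in the definition of $\mathcal{R}(\Theta,\delta)$ the supremum runs over all of $\mathcal{D}$ and the infimum over all of $\Theta$, whereas here both are replaced by small finite stand-ins) evaluates the sup--inf for three decision rules and two parameter points.
\begin{verbatim}
import numpy as np

# Toy evaluation of R(Theta, delta)
#   = sup_{dtilde} inf_{theta} [ R(theta, delta) - R(theta, dtilde) ]
# over finite stand-ins for the decision space and the parameter space.
# risk[k, j] = risk of rule k at parameter point j.
risk = np.array([[2.0, 3.0],   # rule 0: the estimator delta under study
                 [1.0, 2.0],   # rule 1: beats rule 0 by 1 at every theta
                 [0.0, 4.0]])  # rule 2: better at theta_1, worse at theta_2
delta = 0
eps = max(np.min(risk[delta] - risk[k]) for k in range(len(risk)))
print(eps)  # 1.0: within this family, rule 0 is eps-admissible
            # only for eps >= 1
\end{verbatim}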
In the present study, we investigate the use of $\varepsilon$-admissibility through two examples: estimation of the mean in a high-dimensional Poisson model and estimation of the mean in a Gaussian infinite sequence model. We first show, using estimation of the mean in the high-dimensional Poisson model, that $\varepsilon$-admissibility preserves the domination result from the finite-dimensional setting, in contrast with the minimax approach. Consider estimation of the mean in a $d$-dimensional Poisson model with an $\mathcal{L}^{1}$-constraint parameter space. This estimation arises in the discretization of an inhomogeneous Poisson point process model (see Appendix \ref{Appendix:Poissonpointprocess}). In the setting in which $d>2$ is fixed, it is known that the James--Stein type estimator $\hat{\theta}_{\mathrm{JS}}$ dominates the Bayes estimator $\hat{\theta}_{\mathrm{J}}$ based on Jeffreys' prior when using the divergence loss; see \citet{Komaki(2004)}, \citet{Komaki(2006)}, and \citet{Komaki(2015)} and see also \citet{GhoshandYang(1988)}. Here, $\delta$ is said to dominate $\tilde{\delta}$ if and only if $R(\theta,\delta)\leq R(\theta,\tilde{\delta})$ for all $\theta\in\Theta$ and there exists $\theta_{0}\in\Theta$ such that $R(\theta_{0},\delta)<R(\theta_{0},\tilde{\delta})$. Unfortunately, the minimax rate of convergence cannot determine whether $\hat{\theta}_{\mathrm{JS}}$ is superior to $\hat{\theta}_{\mathrm{J}}$ because the rates of convergence of both $\hat{\theta}_{\mathrm{J}}$ and a minimax estimator are $d$. In contrast, by applying $\varepsilon$-admissibility, we can decide that $\hat{\theta}_{\mathrm{JS}}$ is better than $\hat{\theta}_{\mathrm{J}}$ even in the asymptotic sense, because a simple calculation introduced in Section \ref{Section:PoissonL1} shows that $\lim_{d\to\infty}\mathcal{R}(\Theta,\hat{\theta}_{\mathrm{J}})/d>0$ and $\lim_{d\to\infty}\mathcal{R}(\Theta,\hat{\theta}_{\mathrm{JS}})< 2$. We further show, through estimation of the mean in a Gaussian infinite sequence model, that $\varepsilon$-admissibility can quantify the degree of preference of one asymptotically minimax estimator over another. Consider the estimation of the mean in a Gaussian infinite sequence model with a Sobolev-type constraint parameter space. This model is a canonical model in nonparametric statistics, and has been shown to be statistically equivalent to the nonparametric regression model (\citealp{Tsybakov_Book}, pp.~65--69). In this context, \citet{Zhao(2000)} demonstrated that any Gaussian prior whose Bayes estimator is asymptotically minimax places no mass on the parameter space. Zhao also constructed a prior whose Bayes estimator is asymptotically minimax and which puts strictly positive mass on the parameter space. See also \citet{ShenandWasserman(2001)}. However, the benefit of this strictly positive mass on the parameter space has not yet been quantified. We show that a modification of the prior discussed in \citet{Zhao(2000)} yields an asymptotically minimax Bayes estimator that, from the viewpoint of $\varepsilon$-admissibility, is superior to one based on the Gaussian prior. This is discussed in Section \ref{Section:Gaussiansequence}. Finally, we address the relationship to admissibility. Any Bayes estimator based on a prior on $\Theta$ is admissible and thus $\varepsilon$-admissible for any $\varepsilon>0$, so $\varepsilon$-admissibility cannot be used to compare such Bayes estimators.
However, in practice, estimators based on a prior that puts full mass on $\Theta$ are rarely used, as there are few settings in which full information on $\Theta$ is known in advance. The estimators discussed in Sections \ref{Section:PoissonL1} and \ref{Section:Gaussiansequence} do not depend on knowing the full structure of $\Theta$. The rest of the paper is organized as follows. In Section \ref{Section:properties}, we introduce the properties of $\varepsilon$-admissibility and discuss its relationship with a related concept introduced by \citet{Chatterjee(2014)}, known as $C$-admissibility. We also introduce the asymptotic notation. Section \ref{Section:Discussions} concludes the paper. An additional demonstration using the high-dimensional Gaussian sequence model with an $\mathcal{L}^{2}$-constraint parameter space is provided in Appendix \ref{Appendix:Additionalexample}. \section{Preliminaries}\label{Section:properties} \subsection{Bounds for $\varepsilon$-admissibility} In this subsection, we provide the general lower and upper bounds for $\mathcal{R}(\Theta,\delta)$ that are used in later sections. While these bounds are fundamental and have been widely used in the statistical decision theory literature (see, for example, Chapter 5 of \citet{LehmannandCasella_Book}), their proofs help clarify the concept of $\varepsilon$-admissibility. Throughout this subsection, we fix an estimator $\delta\in\mathcal{D}$. \begin{lem} \label{lower_maximin} For an estimator $\tilde{\delta}$ that dominates $\delta$, \begin{align*} \mathcal{R}(\Theta,\delta)\geq \inf_{\theta\in\Theta}[R(\theta,\delta)-R(\theta,\tilde{\delta})]\geq 0. \end{align*} \end{lem} \begin{proof} The first inequality follows immediately from the definition of $\mathcal{R}(\Theta,\delta)$. The second inequality holds since $\tilde{\delta}$ dominates $\delta$, so that $R(\theta,\delta)\geq R(\theta,\tilde{\delta})$ for every $\theta\in\Theta$. \end{proof} \begin{lem} \label{upper_maximin} For a probability measure $\Pi$ on $\Theta$, \begin{align*} \mathcal{R}(\Theta,\delta)\leq \int_{\Theta}[R(\theta,\delta)-R(\theta,\delta_{\Pi})]\mathrm{d}\Pi(\theta), \end{align*} where $\delta_{\Pi}$ is the Bayes solution with respect to $\Pi$, i.e., the minimizer of $\int_{\Theta}R(\theta,\delta)\mathrm{d}\Pi(\theta)$. \end{lem} \begin{proof} Since for a function $f$ of $\theta$ we have $\inf_{\theta\in\Theta} f(\theta)\leq \int_{\Theta} f(\theta)\mathrm{d}\Pi(\theta)$, it follows that \begin{align*} \mathcal{R}(\Theta,\delta)=&\sup_{\tilde{\delta}}\inf_{\theta}[R(\theta,\delta)-R(\theta,\tilde{\delta})] \nonumber\\ \leq&\sup_{\tilde{\delta}}\int_{\Theta}[R(\theta,\delta)-R(\theta,\tilde{\delta})]\mathrm{d}\Pi(\theta) \nonumber\\ =&\int_{\Theta}R(\theta,\delta)\mathrm{d}\Pi(\theta)-\inf_{\tilde{\delta}}\int_{\Theta}R(\theta,\tilde{\delta})\mathrm{d}\Pi(\theta) \nonumber\\ =&\int_{\Theta}R(\theta,\delta)\mathrm{d}\Pi(\theta)-\int_{\Theta}R(\theta,\delta_{\Pi})\mathrm{d}\Pi(\theta), \end{align*} where the last equality follows from the definition of the Bayes solution. \end{proof} Next, we describe the relationships between $\varepsilon$-admissibility and admissibility and between $\varepsilon$-admissibility and minimaxity. Although these relationships are not used in this paper, they also help clarify the nature of $\varepsilon$-admissibility. \begin{prop} If $\delta$ is admissible, then $\mathcal{R}(\Theta,\delta)=0$. If $\delta$ is minimax with a constant risk, then again $\mathcal{R}(\Theta,\delta)=0$.
\end{prop} \begin{proof} The first claim holds because, by the admissibility of $\delta$, we have $\inf_{\theta\in\Theta}[R(\theta,\delta)-R(\theta,\tilde{\delta})]\leq 0$ for any $\tilde{\delta}\in\mathcal{D}$ and thus $\sup_{\tilde{\delta}}\inf_{\theta\in\Theta}[R(\theta,\delta)-R(\theta,\tilde{\delta})]\leq 0$; combining this with $\mathcal{R}(\Theta,\delta)\geq 0$, obtained by taking $\tilde{\delta}=\delta$, gives $\mathcal{R}(\Theta,\delta)=0$. The second claim holds because the constant risk of the minimax estimator $\delta$ equals $c := \inf_{\tilde{\delta}} \sup_{\theta} R( \theta , \tilde{\delta} )$, which yields \begin{align*} \mathcal{R}(\Theta,\delta)=\sup_{\tilde{\delta}\in\mathcal{D}}\inf_{\theta\in\Theta}[R(\theta,\delta)-R(\theta,\tilde{\delta})] =\sup_{\tilde{\delta}\in\mathcal{D}}\inf_{\theta\in\Theta}[c-R(\theta,\tilde{\delta})] =c-\inf_{\tilde{\delta}}\sup_{\theta}R(\theta,\tilde{\delta}) =0. \end{align*} \end{proof} \subsection{Relationship to $C$-admissibility} The concept of $C$-admissibility has appeared in the recent literature on estimation under shape restrictions, and its connection to $\varepsilon$-admissibility should be noted. For $C>0$, an estimator $\delta$ is $C$-admissible if and only if for every other estimator $\tilde{\delta}$, there exists $\theta\in\Theta$ such that $C\times R(\theta,\delta)\leq R(\theta,\tilde{\delta})$. See \citet{Chatterjee(2014)} and \citet{Chenetal(2017)} for a discussion of this. The only difference between $\varepsilon$-admissibility and $C$-admissibility is that $\varepsilon$-admissibility is based on the risk difference, whereas $C$-admissibility is based on the risk ratio. \citet{Chenetal(2017)} argue that for a given estimator $\delta$, the largest value of $C$ for which $\delta$ is $C$-admissible has a minimax interpretation: \begin{align*} \sup\{C:\text{$\delta$ is $C$-admissible}\}=\inf_{\tilde{\delta}}\sup_{\theta\in\Theta}\frac{R(\theta,\tilde{\delta})}{R(\theta,\delta)}. \end{align*} Likewise, the negative of the smallest value of $\varepsilon$ for which $\delta$ is $\varepsilon$-admissible also has a minimax interpretation: \begin{align} -\inf\{\varepsilon:\text{$\delta$ is $\varepsilon$-admissible}\}=\inf_{\tilde{\delta}}\sup_{\theta\in\Theta}[R(\theta,\tilde{\delta})-R(\theta,\delta)]. \label{minimaxinterpretation} \end{align} The quantity (\ref{minimaxinterpretation}) is of interest in itself. \citet{OrlitskyandSuresh(2015)} conducted a regret analysis based on the quantity (\ref{minimaxinterpretation}) for a baseline estimator $\delta$. The difference between the present paper and those of \citet{Chatterjee(2014)} and \citet{Chenetal(2017)} is that the latter address a universal bound for $C$ irrespective of the dimension of the parameter space and the sample size, whereas we use the rate at which $\varepsilon$ shrinks as the dimension or the sample size grows to infinity for the performance comparison. \subsection{Asymptotic notation} In this subsection, we set out the asymptotic notation used in later sections. For positive functions $f(d)$ and $g(d)$, the relation $f(d)\lesssim g(d)$ as $d\to\infty$ means that $$\lim_{d\to\infty}f(d)/g(d)<\infty.$$ The relation $f(d)\asymp g(d)$ as $d\to\infty$ means that $f(d)\lesssim g(d)$ and $g(d)\lesssim f(d)$. \section{Poisson sequence model with $\mathcal{L}^{1}$-constraint parameter space} \label{Section:PoissonL1} In this section, we present further details of the first example discussed in the introduction. Let $\mathcal{X}=\mathbb{N}^{d}$, $\Theta=\{\theta=(\theta_{1},\ldots,\theta_{d}):\sum_{i=1}^{d}\theta_{i}/d\leq 1,\theta_{i}\geq 0, i=1,\ldots,d \}$, and $\mathcal{P}=\{P_{\theta}=\otimes_{i=1}^{d}\mathrm{Po}(\theta_{i}):\theta\in\Theta\}$, where $\mathrm{Po}(\lambda)$ is the Poisson distribution with mean $\lambda$.
Let $\mathcal{A}=\mathbb{R}^{d}_{+}$ with the corresponding decision space $\mathcal{D}$. Let $L(\theta,a)=D_{\mathrm{KL}}(P_{\theta}\mid\mid P_{a})$ with the corresponding risk function $R(\theta,\hat{\theta})$, where $D_{\mathrm{KL}}(P_{\theta}\mid\mid P_{\theta'})$ is the Kullback--Leibler divergence from $P_{\theta}$ to $P_{\theta'}$: \begin{align*} D_{\mathrm{KL}}(P_{\theta}\mid\mid P_{\theta'}):=\int \log\frac{\mathrm{d}P_{\theta}}{\mathrm{d}P_{\theta'}}\mathrm{d}P_{\theta} =\sum_{i=1}^{d}\left[\theta_{i}\log\frac{\theta_{i}}{\theta'_{i}}-\theta_{i}+\theta'_{i}\right]. \end{align*} We discuss the performance of the following two estimators from the viewpoint of the minimax rate of convergence and from that of $\varepsilon$-admissibility. Let \begin{align*} \hat{\theta}_{\mathrm{J},i}(X):=X_{i}+1/2,\quad i=1,\ldots,d, \end{align*} and let \begin{align*} \hat{\theta}_{\mathrm{JS},i}(X):=\frac{\sum_{j=1}^{d}X_{j}+1}{\sum_{j=1}^{d}X_{j}+d/2}(X_{i}+1/2),\quad i=1,\ldots,d. \end{align*} The estimator $\hat{\theta}_{\mathrm{J}}$ is the Bayes estimator based on Jeffreys' prior and the estimator $\hat{\theta}_{\mathrm{JS}}$ is the James--Stein type estimator used in Poisson sequence models. For further details, see \citet{Komaki(2004)} and \citet{Komaki(2006)}. \subsection{Main results for the Poisson sequence model} We first discuss the minimax rate of convergence. The following theorem shows that the minimax rate of convergence alone cannot determine whether $\hat{\theta}_{\mathrm{JS}}$ is better than $\hat{\theta}_{\mathrm{J}}$. \begin{thm}\label{thm:Poissonminimax} We have \begin{align*} \inf_{\hat{\theta}\in\mathcal{D}}\sup_{\theta\in\Theta}R(\theta,\hat{\theta}) \asymp \sup_{\theta\in\Theta}R(\theta,\hat{\theta}_{\mathrm{J}}) \asymp \sup_{\theta\in\Theta}R(\theta,\hat{\theta}_{\mathrm{JS}}) \asymp d \end{align*} as $d\to\infty$. \end{thm} Next, we discuss $\varepsilon$-admissibility. The following theorem shows that, from the viewpoint of $\varepsilon$-admissibility, the James--Stein type estimator is superior to the Bayes estimator based on Jeffreys' prior. \begin{thm}\label{thm:Poissonweakadmissibility} We have \begin{align*} \mathcal{R}(\Theta,\hat{\theta}_{\mathrm{JS}}) \lesssim 1\lesssim d\lesssim \mathcal{R}(\Theta,\hat{\theta}_{\mathrm{J}}) \end{align*} as $d\to\infty$. \end{thm} The proofs of the theorems are given in the next subsection. \subsection{Proofs of theorems} In this subsection, we give the proofs of Theorems \ref{thm:Poissonminimax} and \ref{thm:Poissonweakadmissibility}. \begin{proof}[Proof of Theorem \ref{thm:Poissonminimax}] This proof relies on the fact that a minimax risk is bounded below by a Bayes risk: for a probability distribution $\Pi$ on $\Theta$, we have \begin{align} \inf_{\hat{\theta}\in\mathcal{D}}\sup_{\theta\in\Theta}R(\theta,\hat{\theta}) \geq\int R(\theta,\hat{\theta}_{\Pi}) \mathrm{d}\Pi(\theta), \label{eq:Bayesriskbound} \end{align} where $\hat{\theta}_{\Pi}$ is the Bayes solution with respect to $\Pi$.
Let $\Pi$ be \begin{align*} \Pi(\mathrm{d}\theta)=\frac{1}{2}\delta_{0}(\mathrm{d}\theta)+ \frac{1}{2}\delta_{d}(\mathrm{d}\|\theta\|_{1})\otimes \mathrm{Dir}\left(\frac{1}{2},\cdots,\frac{1}{2}\right)\left(\mathrm{d}\frac{\theta_{1}}{\|\theta\|_{1}},\ldots,\mathrm{d}\frac{\theta_{d}}{\|\theta\|_{1}}\right), \end{align*} where $\delta_{x}$ is the Dirac measure with mass at $x$, $\mathrm{Dir} (1/2,\ldots,1/2) \allowbreak ( \mathrm{d}x_{1}, \ldots, \mathrm{d}x_{d} )$ is the Dirichlet distribution whose density is proportional to $ x_{1}^{1/2-1}\times \cdots \times x_{d}^{1/2-1}$, and $\| \theta \|_{1} := \sum_{i} | \theta_{i} |$. First, we show that the Bayes risk $b(\Pi):=\int R(\theta,\hat{\theta}_{\Pi})\mathrm{d}\Pi(\theta)$ is of order $d$ by assuming that the following two claims hold: \begin{enumerate} \setlength{\leftskip}{2cm} \item[Claim C1.] $b(\Pi)$ is given by \begin{align} b(\Pi)=\frac{d}{2}&\left\{\mathrm{e}^{-d}\log(1+\mathrm{e}^{d})+\psi(3/2)-\psi(d/2+1) \right. \nonumber\\ &\left.+\mathrm{E}_{X\sim\mathrm{Po}(d)}\log(d/2+X) \right. \nonumber\\ &\left.-\mathrm{E}_{b\sim\mathrm{Beta}(3/2,d/2-1)}\mathrm{E}_{X\sim\mathrm{Po}(bd)}\log(1/2+X)\right\}, \label{eq:Bayesrisk} \end{align} where $\psi(\cdot)$ is the digamma function, that is, the derivative of the logarithm of the gamma function; \item[Claim C2.] for any $\varepsilon\in(0,1)$, the asymptotic inequality \begin{align*}\mathrm{E}_{X\sim\mathrm{Po}(d)}\log(d/2+X)\geq\log(3d/2)+\log(1-\varepsilon)+\mathrm{o}_{\varepsilon}(1)\end{align*} holds, \end{enumerate} where $\mathrm{o}_{\varepsilon}(1)$ denotes an $\mathrm{o}(1)$ term depending on $\varepsilon$. Applying Jensen's inequality to $x\mapsto\log(1/2+x)$ yields \begin{align*} \mathrm{E}_{b\sim\mathrm{Beta}(3/2,d/2-1)}\mathrm{E}_{X\sim\mathrm{Po}(bd)}\log(1/2+X)\leq \mathrm{E}_{b\sim\mathrm{Beta}(3/2,d/2-1)}\log(1/2+bd). \end{align*} Since $b\leq 1$, we also have \begin{align} \mathrm{E}_{b\sim\mathrm{Beta}(3/2,d/2-1)}\log(1/2+bd)\leq\log(1/2+d)=\log d+\mathrm{o}(1). \label{eq:lastterm} \end{align} Thus, from Claims C1 and C2, from the asymptotic inequality (\ref{eq:lastterm}), and from the asymptotic relationship $\psi(d/2+1)=\log(d/2)+\mathrm{o}(1)$ (see \citet{NIST}), we have \begin{align*} b(\Pi)\geq\frac{d}{2}(1+\psi(3/2)+\log(1-\varepsilon)+\mathrm{o}_{\varepsilon}(1)). \end{align*} Taking $\varepsilon$ such that $1+\psi(3/2)+\log(1-\varepsilon)>0$, we conclude that the Bayes risk $b(\Pi)$ is of order $d$. \vspace{5mm} Next, we prove that Claim C1 holds. Let $\mathrm{S}:=\mathrm{Dir}(1/2,\ldots,1/2)$. For $i=1,\ldots,d$, we have \begin{align*} \hat{\theta}_{\Pi,i}(X)= \begin{cases} d\times \frac{\int w_{i}(\prod_{j=1}^{d}w_{j}^{x_{j}})\mathrm{d}\mathrm{S}(w_{1},\ldots,w_{d}) } { \int (\prod_{j=1}^{d}w_{j}^{x_{j}})\mathrm{d}\mathrm{S}(w_{1},\ldots,w_{d}) } & \text{ if $x_{j}\neq 0$ for some $j$}, \\ d\times \frac{\mathrm{e}^{-d}}{1+\mathrm{e}^{-d}}\int w_{i}\mathrm{d}\mathrm{S}(w_{1},\ldots,w_{d}) & \text{ if all $x_{j}$'s are $0$}.
\end{cases} \end{align*} Substituting the above expression of $\hat{\theta}_{\Pi}$ into $R(\theta,\hat{\theta}_{\Pi})$, we have \begin{align} R(\theta,\hat{\theta}_{\Pi})=& \sum_{i=1}^{d}\theta_{i}\log\frac{\theta_{i}}{d/d}+\left(\sum_{i=1}^{d}\theta_{i}\right)\mathrm{e}^{-\sum_{i=1}^{d}\theta_{i}}\log(1+\mathrm{e}^{d}) \nonumber\\ &-\left(\sum_{i=1}^{d}\theta_{i}\right)+d-d\mathrm{e}^{-\sum_{i=1}^{d}\theta_{i}}\frac{1}{1+\mathrm{e}^{-d}} \nonumber\\ &-\left(\sum_{i=1}^{d}\theta_{i}\right)\log d \nonumber\\ &+\left(\sum_{i=1}^{d}\theta_{i}\right)\mathrm{E}_{X\sim\mathrm{Po}(\sum_{i=1}^{d}\theta_{i})}\log(d/2+X) \nonumber\\ &-\sum_{i=1}^{d}\theta_{i}\mathrm{E}_{X_{i}\sim\mathrm{Po}(\theta_{i})}\log(1/2+X_{i}). \label{eq:risk_Pi} \end{align} Since \begin{align*} \int\sum_{i=1}^{d}\theta_{i}\log\theta_{i}\mathrm{d}\Pi(\theta)=&d\log d+d\int \sum_{i}^{d}w_{i}\log w_{i}\mathrm{d}S(w_{1},\ldots,w_{d}) \nonumber\\ =&d\log d+\psi(3/2)-\psi(d/2+1), \end{align*} taking the expectation of the right hand side of (\ref{eq:risk_Pi}) over $\theta$ with respect to $\Pi$ shows that Claim C1 holds. \vspace{5mm} Finally, we prove that Claim C2 holds. We have \begin{align*} \mathrm{E}_{X\sim\mathrm{Po}(d)}\log(d/2+X)&=\log(3d/2)+\mathrm{E}_{X\sim\mathrm{Po}(d)}\log\left(1+\frac{2}{3\sqrt{d}}\frac{X-d}{\sqrt{d}}\right). \end{align*} Since for a random variable $X$ distributed according to $\mathrm{Po}(d)$, $(X-d)/d$ converges to $0$ in probability as $d \to \infty$, we have, for any $\varepsilon\in(0,1)$, \begin{align*} \mathrm{E}_{X\sim\mathrm{Po}(d)}&\log\left(1+\frac{2}{3\sqrt{d}}\frac{X-d}{\sqrt{d}}\right) \nonumber\\ &\geq\log(1-\varepsilon)+\mathrm{E}_{X\sim\mathrm{Po}(d)}1_{|2(X-d)/d|>\varepsilon}\log\left(1+\frac{2}{3\sqrt{d}}\frac{X-d}{\sqrt{d}}\right) \nonumber\\ &\geq\log(1-\varepsilon)+\mathrm{Pr}(|2(X-d)/d|>\varepsilon) \log(1/3) \nonumber\\ &=\log(1-\varepsilon)+\mathrm{o}_{\varepsilon}(1), \end{align*} where $X$ is a random variable distributed according to $\mathrm{Po}(d)$. This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:Poissonweakadmissibility}] It suffices to show that for any $d\in\mathbb{N}$, \begin{align} \mathcal{R}(\Theta,\hat{\theta}_{\mathrm{J}}) \geq - d\log\left(1+(d/2-1)\frac{1-\mathrm{e}^{-d}}{d}\right) +d/2-1 \label{eq:risk_Jeffreys} \end{align} and that for any $d\in\mathbb{N}$, \begin{align} \mathcal{R}(\Theta,\hat{\theta}_{\mathrm{JS}}) \leq \frac{1}{2}. \label{eq:risk_JS} \end{align} \vspace{5mm} The proof of (\ref{eq:risk_Jeffreys}) follows the proof that $\hat{\theta}_{\mathrm{JS}}$ dominates $\hat{\theta}_{\mathrm{J}}$; see \citet{Komaki(2004)}. Applying Lemma \ref{upper_maximin} with $\tilde{\delta}=\hat{\theta}_{\mathrm{JS}}$ yields \begin{align*} \mathcal{R}(\Theta,\hat{\theta}_{\mathrm{J}}) \geq \inf_{\theta\in\Theta}[R(\theta,\hat{\theta}_{\mathrm{J}})-R(\theta,\hat{\theta}_{\mathrm{JS}})]. \end{align*} Since \begin{align*} R(\theta,\hat{\theta}_{\mathrm{J}})-R(\theta,\hat{\theta}_{\mathrm{JS}}) =\mathrm{E}_{X\sim P_{\theta}}\sum_{i=1}^{d}\left[\theta_{i}\log\frac{\hat{\theta}_{\mathrm{JS},i}(X) }{\hat{\theta}_{\mathrm{J},i}(X)} +\hat{\theta}_{\mathrm{J},i}(X) -\hat{\theta}_{\mathrm{JS},i}(X) \right], \end{align*} we have \begin{align*} \mathcal{R}(\Theta,\hat{\theta}_{\mathrm{J}}) \geq \inf_{\theta\in\Theta} \mathrm{E}_{X\sim P_{\theta}} \left[ \sum_{i=1}^{d}\theta_{i}\log\frac{\sum_{j=1}^{d}X_{j}+1}{\sum_{j=1}^{d}X_{j}+d/2}+(d/2-1) \right]. 
\end{align*}
Since the distribution of $Z=\sum_{i=1}^{d}X_{i}$ is $\mathrm{Po}(\mu)$ with $\mu=\sum_{i=1}^{d}\theta_{i}$, we have
\begin{align*}
\inf_{\theta\in\Theta} &\mathrm{E}_{X\sim P_{\theta}} \left[ \sum_{i=1}^{d}\theta_{i}\log\frac{\sum_{j=1}^{d}X_{j}+1}{\sum_{j=1}^{d}X_{j}+d/2}+(d/2-1) \right] \nonumber\\ &= \inf_{\mu\in[0,d]} \mathrm{E}_{Z\sim\mathrm{Po}(\mu)} \left[ -\mu\log\left(1+\frac{d/2-1}{Z+1} \right)+ (d/2-1) \right] \nonumber\\ &\geq \inf_{\mu\in[0,d]} \left\{-\mu \log\left(1+ \mathrm{E}_{Z\sim \mathrm{Po}(\mu)}\frac{d/2-1}{Z+1}\right)\right\} +(d/2-1) \nonumber\\ &= \inf_{\mu\in[0,d]} \left\{-\mu \log\left(1+ (d/2-1)\frac{1-\mathrm{e}^{-\mu}}{\mu}\right)\right\} +(d/2-1) \nonumber\\ &= -d\log\left(1+(d/2-1)\frac{1-\mathrm{e}^{-d}}{d}\right) + (d/2-1).
\end{align*}
Here the first inequality follows from Jensen's inequality and the second equality from the identity
\begin{align*}
\mathrm{E}_{Z\sim\mathrm{Po}(\mu)}\left[1/(Z+1)\right]=\{1-\mathrm{e}^{-\mu}\}/\mu.
\end{align*}
\vspace{5mm}
The proof of (\ref{eq:risk_JS}) immediately follows from Lemma \ref{upper_maximin} with $\Pi=\delta_{0}$, which completes the proof of Theorem \ref{thm:Poissonweakadmissibility}.
\end{proof}
\section{Gaussian infinite sequence model with Sobolev-type constraint parameter space}
\label{Section:Gaussiansequence}
In this section, we consider estimation of the mean in a Gaussian infinite sequence model with a Sobolev-type constraint parameter space. Let $\mathcal{X}=\mathbb{R}^{\infty}$. Let $\Theta=\{\theta=(\theta_{1},\theta_{2},\ldots)\in l^{2}:\sum_{i=1}^{\infty}i^{2\alpha}\theta_{i}^{2}\leq B\}$ with $\alpha>0$ and $B>0$, and let $\mathcal{P}=\{P_{\theta}=\otimes_{i=1}^{\infty}\mathcal{N}(\theta_{i},1/n):\theta\in\Theta\}$. The hyperparameter $\alpha$ controls the smoothness level of a true function in the nonparametric regression model and the hyperparameter $B$ controls the volume of the parameter space. In this paper, we assume that $\alpha$ is known. Even in the setting in which $\alpha$ is known, the results in this section are novel. Let $\mathcal{A}=\mathbb{R}^{\infty}$ with the corresponding decision space $\mathcal{D}$ and $L(\theta,a)=\|\theta-a\|^{2}$ with the corresponding risk function $R(\theta,\hat{\theta})$, where $\|b\|^{2}:=\sum_{i=1}^{\infty}b_{i}^{2}$ for $b\in\mathbb{R}^{\infty}$. In this section, we consider the asymptotics in which the sample size $n$ grows to $\infty$. Let $\hat{\theta}_{\mathrm{G}}$ be the Bayes estimator based on the Gaussian prior
$$\mathrm{G}=\otimes_{i=1}^{\infty}\mathcal{N}(0,i^{-2\alpha-1})$$
and $\hat{\theta}_{\mathrm{S}}$ be the Bayes estimator based on the prior
$$\mathrm{S}=\sum_{d=1}^{\infty}M(d)\left[\left\{\otimes_{i=1}^{d}\mathcal{N}(0,d^{2\alpha+1}i^{-2\alpha-1}/n)\right\} \otimes\left\{\otimes_{i=d+1}^{\infty}\mathcal{N}(0,0)\right\}\right],$$
where $M(d)=\mathrm{e}^{-ad}/\sum_{i=1}^{\infty}\mathrm{e}^{-ai}$.
\begin{rem}
The prior $\mathrm{S}$ is discussed in \citet{YanoandKomaki(2017)}, and is a modification of the compound prior in \citet{Zhao(2000)}. The compound prior $\mathrm{C}$ is given as follows:
$$\mathrm{C}=\sum_{d=1}^{\infty}M(d)\left[\left\{\otimes_{i=1}^{d}\mathcal{N}(0,i^{-2\alpha-1})\right\} \otimes\left\{\otimes_{i=d+1}^{\infty}\mathcal{N}(0,0)\right\}\right].$$
The modification is necessary to ensure that $\mathcal{R}(\Theta,\hat{\theta}_{\mathrm{S}})$ remains sufficiently small.
Roughly speaking, making $\mathcal{R}(\Theta,\hat{\theta}_{\mathrm{S}})$ sufficiently small requires a prior mass condition under which the prior puts nearly all of its mass on $\Theta$; for further details, see the proof of Theorem \ref{thm:Sobolevweakadmissibility_S} below. The mass placed on $\Theta$ by the compound prior $\mathrm{C}$ is strictly less than 1 even as $n \to \infty$, whereas that placed by the prior $\mathrm{S}$ grows to 1 as $n\to \infty$ for a fixed $B>0$; see Lemma \ref{PriorMeasureonEllipsoid}. To see that the compound prior $\mathrm{C}$ places a mass on $\Theta$ that is strictly less than 1 even as $n \to \infty$, note that $\mathrm{C}\left(\Theta\right)\leq \mathrm{Pr}(N^{2}\leq B)<1$, where $N$ is a one-dimensional standard normal random variable.
\end{rem}
\subsection{Existing result for the Gaussian sequence model}
The following existing result shows that, from the viewpoint of the minimax rate of convergence, $\hat{\theta}_{\mathrm{G}}$ and $\hat{\theta}_{\mathrm{S}}$ yield the same performance.
\begin{lem}[Theorem 5.1 in \citet{Zhao(2000)} and Theorem 2 in \citet{YanoandKomaki(2017)}]
For any $\alpha>0$ and any $B>0$, we have
\begin{align*}
\inf_{\hat{\theta}\in\mathcal{D}}\sup_{\theta\in\Theta}R(\theta,\hat{\theta}) \asymp \sup_{\theta\in\Theta}R(\theta,\hat{\theta}_{\mathrm{G}}) \asymp \sup_{\theta\in\Theta}R(\theta,\hat{\theta}_{\mathrm{S}}) \asymp \left(1/n\right)^{2\alpha/(2\alpha+1)}
\end{align*}
as $n\to \infty$.
\end{lem}
\subsection{Main results for the Gaussian sequence model}
The novel results presented in this subsection show that, from the viewpoint of $\varepsilon$-admissibility, $\hat{\theta}_{\mathrm{S}}$ is superior to $\hat{\theta}_{\mathrm{G}}$ in the case that $\alpha=1$ and $B=1$, or in the case that $B$ is sufficiently small. Numerical evaluations also show the superiority of $\hat{\theta}_{\mathrm{S}}$ over $\hat{\theta}_{\mathrm{G}}$ for any $\alpha>0$ and for any $B>0$. The results are based on two theorems: Theorem \ref{thm:Sobolevweakadmissibility_G} shows that $\mathcal{R}(\Theta,\hat{\theta}_{\mathrm{G}})$ is of the same order as the maximum risk of the estimator, which indicates that there exists an estimator $\hat{\theta}$ such that
$$R(\theta,\hat{\theta})+\mathrm{O}(n^{-2\alpha/(2\alpha+1)})<R(\theta,\hat{\theta}_{\mathrm{G}})<\mathrm{O}(n^{-2\alpha/(2\alpha+1)})$$
for all $\theta\in\Theta$. Theorem \ref{thm:Sobolevweakadmissibility_S} shows that $\mathcal{R}(\Theta,\hat{\theta}_{\mathrm{S}})$ decays exponentially.
\begin{thm}\label{thm:Sobolevweakadmissibility_G}
There exists a constant $c$ depending only on $B$ and $\alpha$ such that the inequality
\begin{align*}
\lim_{n\to\infty}\mathcal{R}(\Theta,\hat{\theta}_{\mathrm{G}})/n^{-2\alpha/(2\alpha+1)}>c
\end{align*}
holds. For $B=1$ and $\alpha=1$, $c$ can be taken to be strictly positive. For any $\alpha>0$, $c$ can be taken to be strictly positive if $B$ is sufficiently small.
\end{thm}
\begin{thm}\label{thm:Sobolevweakadmissibility_S}
We have
\begin{align*}
\mathcal{R}(\Theta,\hat{\theta}_{\mathrm{S}}) \lesssim \exp\left\{-\frac{a}{2}\left(nB\right)^{\frac{1}{4\alpha+2}}\right\}
\end{align*}
as $n\to \infty$.
\end{thm}
Proofs are provided in Subsection \ref{subsec:proof_Gaussian}.
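Before turning to the proofs, it may help to see the closed-form risk that drives the argument. Under $L(\theta,a)=\|\theta-a\|^{2}$, the Bayes estimator based on $\otimes_{i=1}^{\infty}\mathcal{N}(0,si^{-2\alpha-1})$ shrinks coordinatewise, $\hat{\theta}_{\mathrm{G}(s),i}(X)=X_{i}/(1+i^{2\alpha+1}/(ns))$, and its risk is the bias--variance sum appearing at the start of the proof below. A minimal numerical sketch (ours, assuming NumPy; the truncation level \texttt{imax} is an illustrative choice):
\begin{verbatim}
import numpy as np

def risk_G(theta, n, alpha, s=1.0, imax=None):
    # Exact risk of the coordinatewise shrinkage estimator
    # theta_hat_i = X_i / (1 + i^{2 alpha + 1}/(n s)) under squared error:
    # sum_i [(1 - c_i)^2 theta_i^2 + c_i^2 / n].
    if imax is None:
        imax = max(len(theta), int(20 * n ** (1 / (2 * alpha + 1))))
    i = np.arange(1, imax + 1, dtype=float)
    c = 1.0 / (1.0 + i ** (2 * alpha + 1) / (n * s))
    th = np.zeros(imax)
    th[: len(theta)] = theta
    return np.sum((1.0 - c) ** 2 * th ** 2 + c ** 2 / n)

alpha, B, n = 1.0, 1.0, 10 ** 4
k = int(n ** (1 / (2 * alpha + 1)))   # critical frequency scale
theta = np.zeros(k)
theta[-1] = np.sqrt(B) / k ** alpha   # on the boundary of the ellipsoid
rate = n ** (-2 * alpha / (2 * alpha + 1))
for s in (1.0, 0.9):                  # s = 1 is theta_hat_G itself
    print(s, risk_G(theta, n, alpha, s) / rate)
\end{verbatim}
The difference between $s=1$ and $s<1$, normalized by $n^{-2\alpha/(2\alpha+1)}$, is precisely the quantity bounded from below in the proof of Theorem \ref{thm:Sobolevweakadmissibility_G}.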
\begin{figure}[htb]
\begin{minipage}{0.4\hsize}
\begin{center}
\includegraphics[width=0.90\hsize]{Fig_alphaval.eps}
\caption{Possible choice of $c$ in Theorem \ref{thm:Sobolevweakadmissibility_G} with $B=1$}
\label{Fig:alphaval}
\end{center}
\end{minipage}
\begin{minipage}{0.4\hsize}
\begin{center}
\includegraphics[width=0.90\hsize]{Fig_Bval.eps}
\caption{Possible choice of $c$ in Theorem \ref{thm:Sobolevweakadmissibility_G} with $\alpha=1$}
\label{Fig:Bval}
\end{center}
\end{minipage}
\end{figure}
Although we do not provide a proof of the strict positivity of $c$ in Theorem \ref{thm:Sobolevweakadmissibility_G} for general settings, numerical evaluations (see Figures \ref{Fig:alphaval} and \ref{Fig:Bval}) show that we can assume strict positivity of $c$ in the case that $\alpha=1$ or in the case that $B=1$. Here, the choice of $c$ in Figures \ref{Fig:alphaval} and \ref{Fig:Bval} is described at the beginning of the proof of Theorem \ref{thm:Sobolevweakadmissibility_G}.
\subsection{Proofs of theorems}\label{subsec:proof_Gaussian}
\begin{proof}[Proof of Theorem \ref{thm:Sobolevweakadmissibility_G}]
Let $\hat{\theta}_{\mathrm{G}(s)}$ be the Bayes estimator based on $\otimes_{i=1}^{\infty}\mathcal{N}(0,si^{-2\alpha-1})$. For $s\in(0,1)$, we will show that
\begin{align}
\inf_{\theta\in\Theta}[R&(\theta,\hat{\theta}_{\mathrm{G}})-R(\theta,\hat{\theta}_{\mathrm{G}(s)})] \nonumber\\ \geq&n^{-\frac{2\alpha}{2\alpha+1}} \times\left[ B\left(\frac{2\alpha+2}{4\alpha+1}\right)^{\frac{2\alpha+2}{2\alpha+1}} \left\{\frac{1}{\{1+\frac{2\alpha+2}{4\alpha+1}\}^{2}}-\frac{s^{-2}}{\{1+\frac{2\alpha+2}{s(4\alpha+1)}\}^{2}}\right\} \right. \nonumber\\ &\left.\quad\quad\quad\quad+(1-s^{\frac{1}{2\alpha+1}})\int_{0}^{\infty}\frac{1}{(1+x^{2\alpha+1})^{2}}\mathrm{d}x-n^{-\frac{1}{2\alpha+1}}\right].
\end{align}
Taking $c$ to be the supremum over $s\in(0,1)$ of the bracketed expression on the right-hand side then completes the proof. The strict positivity of $c$ for the specific settings is proved in the last step.
\vspace{5mm}
First, a direct evaluation of the risks yields
\begin{align*}
R&(\theta,\hat{\theta}_{\mathrm{G}})-R(\theta,\hat{\theta}_{\mathrm{G}(s)}) \nonumber\\ =&\sum_{i=1}^{\infty}\theta_{i}^{2}\left\{\left(\frac{i^{2\alpha+1}/n}{1+i^{2\alpha+1}/n}\right)^{2} -\left(\frac{i^{2\alpha+1}/\{ns\}}{1+i^{2\alpha+1}/\{ns\}}\right)^{2}\right\} \nonumber\\ &+n^{-1}\sum_{i=1}^{\infty}\left\{\left(\frac{1}{1+i^{2\alpha+1}/n}\right)^{2}-\left(\frac{1}{1+i^{2\alpha+1}/\{ns\}}\right)^{2}\right\} \nonumber\\ =&\left(\sum_{j=1}^{\infty}j^{2\alpha}\theta_{j}^{2}\right)\sum_{i=1}^{\infty} \frac{i^{2\alpha}\theta_{i}^{2}}{\sum_{j=1}^{\infty}j^{2\alpha}\theta_{j}^{2}} \left\{\left(\frac{i^{2\alpha+1}/n}{1+i^{2\alpha+1}/n}\right)^{2} -\left(\frac{i^{2\alpha+1}/\{ns\}}{1+i^{2\alpha+1}/\{ns\}}\right)^{2}\right\} \nonumber\\ &+n^{-1}\sum_{i=1}^{\infty}\left\{\left(\frac{1}{1+i^{2\alpha+1}/n}\right)^{2}-\left(\frac{1}{1+i^{2\alpha+1}/\{ns\}}\right)^{2}\right\}.
\end{align*}
For $\theta=0$,
\begin{align*}
R(\theta,\hat{\theta}_{\mathrm{G}})-R(\theta,\hat{\theta}_{\mathrm{G}(s)}) \geq n^{-1}\sum_{i=1}^{\infty}\left\{\left(\frac{1}{1+i^{2\alpha+1}/n}\right)^{2}-\left(\frac{1}{1+i^{2\alpha+1}/\{ns\}}\right)^{2}\right\}.
\end{align*} For $\theta\neq 0$, a calculation in Appendix \ref{Appendix:details_firstterm} yields \begin{align*} R&(\theta,\hat{\theta}_{\mathrm{G}})-R(\theta,\hat{\theta}_{\mathrm{G}(s)}) \nonumber\\ \geq&B\left(\frac{2\alpha+2}{4\alpha+1}\right)^{\frac{2\alpha+2}{2\alpha+1}}n^{-\frac{2\alpha}{2\alpha+1}} \left[ \frac{1}{\{1+\frac{2\alpha+2}{4\alpha+1}\}^{2}}-\frac{s^{-2}}{\{1+\frac{2\alpha+2}{s(4\alpha+1)}\}^{2}} \right] \nonumber\\ &+n^{-1}\sum_{i=1}^{\infty}\left\{\left(\frac{1}{1+i^{2\alpha+1}/n}\right)^{2}-\left(\frac{1}{1+i^{2\alpha+1}/\{ns\}}\right)^{2}\right\}. \end{align*} Therefore, we have \begin{align} \inf_{\theta\in\Theta}[R&(\theta,\hat{\theta}_{\mathrm{G}})-R(\theta,\hat{\theta}_{\mathrm{G}(s)})] \nonumber\\ =&B\left(\frac{2\alpha+2}{4\alpha+1}\right)^{\frac{2\alpha+2}{2\alpha+1}}n^{-\frac{2\alpha}{2\alpha+1}} \left[ \frac{1}{\{1+\frac{2\alpha+2}{4\alpha+1}\}^{2}}-\frac{s^{-2}}{\{1+\frac{2\alpha+2}{s(4\alpha+1)}\}^{2}} \right] \nonumber\\ &+n^{-1}\sum_{i=1}^{\infty}\left\{\left(\frac{1}{1+i^{2\alpha+1}/n}\right)^{2}-\left(\frac{1}{1+i^{2\alpha+1}/\{ns\}}\right)^{2}\right\}. \label{eq:riskexp} \end{align} By convergence of the Riemann sum $\sum_{i=1}^{\infty}\{1+(i/N)^{2\alpha+1}\}^{-2}$ for a positive number $N$, we have \begin{align} \sum_{i=1}^{\infty}\frac{1}{(1+(i/N)^{2\alpha+1})^{2}} \leq N\int_{0}^{\infty}\frac{1}{(1+x^{2\alpha+1})^{2}}\mathrm{d}x \leq \sum_{i=1}^{\infty}\frac{1}{(1+(i/N)^{2\alpha+1})^{2}}+1. \label{eq:Riemannsum} \end{align} Combining (\ref{eq:Riemannsum}) with (\ref{eq:riskexp}) yields \begin{align} \inf_{\theta\in\Theta}[R&(\theta,\hat{\theta}_{\mathrm{G}})-R(\theta,\hat{\theta}_{\mathrm{G}(s)})] \nonumber\\ \geq&B\left(\frac{2\alpha+2}{4\alpha+1}\right)^{\frac{2\alpha+2}{2\alpha+1}}n^{-\frac{2\alpha}{2\alpha+1}} \left[ \frac{1}{\{1+\frac{2\alpha+2}{4\alpha+1}\}^{2}}-\frac{s^{-2}}{\{1+\frac{2\alpha+2}{s(4\alpha+1)}\}^{2}} \right] \nonumber\\ &+n^{-\frac{2\alpha}{2\alpha+1}}\{1-s^{\frac{1}{2\alpha+1}}\}\int_{0}^{\infty}\frac{1}{(1+x^{2\alpha+1})^{2}}\mathrm{d}x-n^{-1} \nonumber\\ =&n^{-\frac{2\alpha}{2\alpha+1}} \times\left[ B\left(\frac{2\alpha+2}{4\alpha+1}\right)^{\frac{2\alpha+2}{2\alpha+1}} \left\{\frac{1}{\{1+\frac{2\alpha+2}{4\alpha+1}\}^{2}}-\frac{s^{-2}}{\{1+\frac{2\alpha+2}{s(4\alpha+1)}\}^{2}}\right\} \right. \nonumber\\ &\left.\quad\quad\quad+(1-s^{\frac{1}{2\alpha+1}})\int_{0}^{\infty}\frac{1}{(1+x^{2\alpha+1})^{2}}\mathrm{d}x-n^{-\frac{1}{2\alpha+1}}\right]. \end{align} For $B=1$ and $\alpha=1$, taking $s=0.9$ confirms that the right hand side is positive. The choice of $s$ follows from the direct evaluation of the integral $\int_{0}^{\infty} (1+x^{2\alpha+1})^{-2} \mathrm{d}x$. For an arbitrary $s\in(0,1)$ and sufficiently small $B>0$, the right hand side is positive. This completes the proof. \end{proof} \vspace{5mm} \begin{proof}[Proof of Theorem \ref{thm:Sobolevweakadmissibility_S}] Let $T=\lfloor (nB)^{1/(4\alpha+2)} \rfloor$. Let $\widetilde{\mathrm{S}}$ be the probability distribution obtained by restricting $\mathrm{S}$ to $\Theta$. The corresponding Bayes estimator is denoted by $\hat{\theta}_{\widetilde{\mathrm{S}}}$. Let $\mathcal{D}^{*}:=\{\delta\in (l_{2})^{\mathbb{R}^{\infty}}:\delta(x)\in\Theta\}$ From Lemma \ref{upper_maximin}, it suffices to show that \begin{align} \inf_{\delta}\int R(\theta,\delta)\mathrm{d}\widetilde{\mathrm{S}}(\theta) +\mathrm{O}(\exp(-aT)) \geq \int R(\theta,\hat{\theta}_{\mathrm{S}})\mathrm{d}\widetilde{\mathrm{S}}(\theta). 
\label{eq:gammaBayes}
\end{align}
Since $\hat{\theta}_{\widetilde{\mathrm{S}}}$ is included in $\mathcal{D}^{*}$,
\begin{align*}
\inf_{\delta\in\mathcal{D}}\int R(\theta,\delta)\mathrm{d}\widetilde{\mathrm{S}}(\theta) = \inf_{\delta\in\mathcal{D}^{*}}\int R(\theta,\delta)\mathrm{d}\widetilde{\mathrm{S}}(\theta).
\end{align*}
Since $1_{\theta\in\Theta}=1_{\theta\in l_{2}}-1_{\theta\in\Theta^{\mathrm{c}}}$, we have
\begin{align*}
\inf_{\delta\in\mathcal{D}^{*}}&\int R(\theta,\delta)\mathrm{d}\widetilde{\mathrm{S}}(\theta) \nonumber\\ &\geq \frac{1}{\mathrm{S}(\Theta)} \left[ \inf_{\delta\in\mathcal{D}^{*}} \int R(\theta,\delta)\mathrm{d}\mathrm{S}(\theta) -\sup_{\delta\in\mathcal{D}^{*}} \int_{\theta\in\Theta^{\mathrm{c}}} R(\theta,\delta)\mathrm{d}\mathrm{S}(\theta) \right] \nonumber\\ &\geq \frac{1}{\mathrm{S}(\Theta)} \left[ \inf_{\delta} \int R(\theta,\delta)\mathrm{d}\mathrm{S}(\theta) -\sup_{\delta\in\mathcal{D}^{*}} \int_{\theta\in\Theta^{\mathrm{c}}} R(\theta,\delta)\mathrm{d}\mathrm{S}(\theta) \right].
\end{align*}
Since $||\theta-\delta(X)||^{2} \leq 2(B+||\theta||^{2})$ for $\delta\in\mathcal{D}^{*}$ and $\inf_{\delta}\int R(\theta,\delta)\mathrm{d}\mathrm{S}(\theta)=\int R(\theta,\hat{\theta}_{\mathrm{S}}) \mathrm{d}\mathrm{S}(\theta)$, we have
\begin{align*}
&\left[ \inf_{\delta} \int R(\theta,\delta)\mathrm{d}\mathrm{S}(\theta) -\sup_{\delta\in\mathcal{D}^{*}} \int_{\theta\in\Theta^{\mathrm{c}}} R(\theta,\delta)\mathrm{d}\mathrm{S}(\theta) \right]\nonumber\\ &\geq \left[\int 1_{\theta\in\Theta} R(\theta,\hat{\theta}_{\mathrm{S}})\mathrm{d}\mathrm{S}(\theta) -(2B\mathrm{S}(\Theta^{\mathrm{c}})+2\sqrt{\mathrm{E}_{\mathrm{S}}[\|\theta\|^{4}]}\mathrm{S}^{1/2}(\Theta^{\mathrm{c}})) \right] \nonumber\\ &\geq \left[\int 1_{\theta\in\Theta} R(\theta,\hat{\theta}_{\mathrm{S}})\mathrm{d}\mathrm{S}(\theta) -(2B\mathrm{S}(\Theta^{\mathrm{c}})+2cn^{-1}\mathrm{S}^{1/2}(\Theta^{\mathrm{c}})) \right],
\end{align*}
where $c:=\sum_{d=1}^{\infty}M(d)d^{2\alpha+1}<\infty$. To complete the proof, we use the following lemma.
\begin{lem}
\label{PriorMeasureonEllipsoid}
There exists a constant $c_{1}$ depending on $a>0$ such that for a sufficiently large $n\in\mathbb{N}$, the inequality
\begin{align*}
\mathrm{S}\left( \Theta^{\mathrm{c}}\right) \leq c_{1}\mathrm{e}^{-a \lfloor (nB)^{1/(4\alpha+2)}\rfloor }
\end{align*}
holds.
\end{lem}
The proof of Lemma \ref{PriorMeasureonEllipsoid} is given after completing the proof of Theorem \ref{thm:Sobolevweakadmissibility_S}. By Lemma \ref{PriorMeasureonEllipsoid}, for a sufficiently large $n$, we have
\begin{align*}
\inf_{\delta}\int R(\theta,\delta)\mathrm{d}\widetilde{\mathrm{S}}(\theta) \geq \left[\int R(\theta,\hat{\theta}_{\mathrm{S}})\mathrm{d}\widetilde{\mathrm{S}}(\theta) -\mathrm{O}\left(\exp\left\{-\frac{a}{2}T\right\}\right) \right],
\end{align*}
which demonstrates that inequality (\ref{eq:gammaBayes}) holds.
\end{proof}
\begin{proof}[Proof of Lemma \ref{PriorMeasureonEllipsoid}]
Let $T=\lfloor (nB)^{1/(4\alpha+2)} \rfloor$.
By definition,
\begin{align*}
\mathrm{S}\left( \Theta^{\mathrm{c}}\right) &= \sum_{d=1}^{\infty}M(d)\mathrm{Pr}\left(\sum_{i=1}^{d}i^{-1}|N_{i}|^{2} > \frac{nB}{d^{2\alpha+1}}\right) \nonumber\\ &\leq \sum_{d=1}^{T}M(d)\mathrm{Pr}\left(\sum_{i=1}^{d}i^{-1}|N_{i}|^{2} > \frac{nB}{d^{2\alpha+1}}\right) +\sum_{d=T+1}^{\infty}M(d) \nonumber\\ &\leq \sum_{d=1}^{T}M(d)\mathrm{Pr}\left(\sum_{i=1}^{d}|N_{i}|^{2} > \frac{nB}{T^{2\alpha+1}}\right) +\sum_{d=T+1}^{\infty}M(d),
\end{align*}
where $\{N_{i}\}_{i\geq 1}$ are independent random variables distributed according to $\mathcal{N}(0,1)$. We next apply an exponential inequality for chi-square statistics (a consequence of Lemma 1 in \citet{LaurentandMassart(2000)}): for any $x>\sqrt{d}$,
\begin{align*}
\mathrm{Pr}\left(\sum_{i=1}^{d}|N_{i}|^{2}\geq x^{2} \right)\leq \mathrm{e}^{-\frac{(x-\sqrt{d})^{2}}{2}}.
\end{align*}
Setting $x=(1/2)(nB)^{1/4}$, we have
\begin{align*}
\mathrm{S}\left( \Theta^{\mathrm{c}} \right) \leq \sum_{d=1}^{T}M(d)\mathrm{e}^{-\frac{1}{8}(nB)^{1/2}(1+\mathrm{o}(1)) } +\sum_{d=T+1}^{\infty}M(d).
\end{align*}
Taking $nB$ large enough that the $\mathrm{o}(1)$ term in the above inequality is less than $1/2$ in absolute value completes the proof.
\end{proof}
\section{Discussion and conclusions}\label{Section:Discussions}
In this paper, we have demonstrated the usefulness of $\varepsilon$-admissibility in high-dimensional and nonparametric statistical models by presenting two new results. These results suggest the use of $\varepsilon$-admissibility in conjunction with other criteria such as the minimax rate of convergence.
\section{\label{sec:introduction}Introduction}
Superstring perturbation theory instructs us to compute scattering amplitudes $\A_{g,n}$ as integrals of correlation functions of vertex operators over the moduli space $\mathcal{M}_{g,n}$ of genus-$g$ Riemann surfaces with $n$ punctures. Schematically, the formula encountered in textbooks on string theory is
\begin{equation}\label{eq:1.1}
\mathcal{A}_{g,n} \sim \int_{\mathcal{M}_{g,n}} \!\!\! \left< \mathcal{V}_1(z_1) \mathcal{V}_2(z_2) \cdots \mathcal{V}_n(z_n) \right>\, \d \mu_{g,n}\, ,
\end{equation}
where $\mathcal{V}_i(z_i)$ are vertex operators inserted at positions $z_i$ and $\d\mu_{g,n}$ denotes the measure on $\mathcal{M}_{g,n}$ involving $z_i$'s and the surface moduli \cite{Polyakov:1981rd, Polchinski:1998rq}. Recall that the moduli space $\mathcal{M}_{g,n}$ is $(3g{+}n{-}3)$-dimensional, real or complex depending on whether we deal with open or closed strings respectively.

It is well-known, albeit not often emphasized, that the above prescription is only approximately correct and \eqref{eq:1.1} is ill-defined. The problem with \eqref{eq:1.1} has a physical origin. It can be traced back to the fact that the target space is Lorentzian, while the worldsheet theory is Euclidean. Of course, the reason to insist on a Euclidean worldsheet is so that we can use the powerful tools of two-dimensional CFTs and avoid spurious singularities that would come with a Lorentzian worldsheet \cite{Mandelstam:1973jk}. The price we have to pay, however, is that certain ambiguities related to causal and unitary propagation of strings in space-time (which in quantum field theory are addressed by the Feynman $i\varepsilon$ prescription) remain unresolved. This can already be seen in explicit examples, such as $g=1$ and $n=4$, where \eqref{eq:1.1} is purely real and hence cannot be consistent with unitarity via the optical theorem. Recall that the question of unitarity in the target space is separate from that of unitarity of the worldsheet theory.

Witten proposed to cure this problem by zooming in on the boundaries of the moduli space $\mathcal{M}_{g,n}$ corresponding to Riemann surfaces degenerating to Feynman diagrams and resolving the aforementioned ambiguities by requiring consistency with the field-theory $i\varepsilon$ prescription \cite{Witten:2013pra}. Let us focus on the open-string case, where one can think of the open-string moduli space $\mathcal{M}_{g,n}$ as a contour embedded in its complexification $\mathcal{M}_{g,n}(\mathbb{C})$.\footnote{Here, $\mathcal{M}_{g,n}(\mathbb{C})$ denotes the complexification of the open string moduli space. It is in general a cover of the corresponding closed string moduli space.} The task is then to prescribe an integration contour that coincides approximately with $\mathcal{M}_{g,n}$ in the bulk of the moduli space, but is otherwise designed to implement the Witten $i\varepsilon$ near the boundaries. The subject of this paper is a concrete realization of this idea.

Problems with the integration contour are somewhat milder after viewing string theory as an effective field theory and committing to the $\alpha'$-expansion. In fact, virtually all computations of string amplitudes are done this way, see, e.g., \cite{Green:1987mn,DHoker:1988pdl, Schlotterer:2011psa, Gerken:2020xte,10.1007/978-3-030-37031-2_3,10.1007/978-3-030-37031-2_4,Berkovits:2022ivl,Mafra:2022wml} for reviews.
By contrast, in this work, we are interested in exploring intrinsically stringy properties of amplitudes and hence work at finite $\alpha'$, where we need to face the aforementioned difficulties. The simplest case in which \eqref{eq:1.1} needs to be corrected is already at genus zero (for any $n \geq 4$), but in a sense ``anything goes'' and the precise rerouting of the contour does not affect the final answer. This is related to fact that $\A_{0,n}$ is a tree-level amplitude and hence does not have any branch cuts. Likewise, at higher genus the part of the integration contour lying in the $z_i$ coordinates is easily fixable, but the part lying in the directions of the Riemann surface moduli needs additional work. The simplest interesting case is therefore $g=1$ and $n=4$, where $\mathcal{M}_{1,4}$ depends on the modular parameter $\tau$ in addition to the positions of the punctures. Describing and manipulating this contour will be the main results of this paper. We focus on the simplest case of open strings, including annulus and M\"obius strip topologies. \begin{figure} \centering \begin{tikzpicture} [scale=2.5] \begin{scope} \draw [light-gray] (1.1,0) -- (-1.1,0); \draw [light-gray] (0,0) -- (0,2); \draw [light-gray,dashed] (0,2) -- (0,2.5); \node at (-1,-0.15) {$-1$}; \node at (1,-0.15) {$1$}; \node at (0,-0.15) {$0$}; \node at (0.5,-0.15) {$\frac{1}{2}$}; \node at (-0.5,-0.15) {-$\frac{1}{2}$}; \node at (1.195,2.63) {$\tau$}; \draw (1.1,2.7) -- (1.1,2.55) -- (1.25,2.55); \draw [light-gray] (0,0) arc [radius=1, start angle=0, end angle= 90]; \draw [light-gray] (1,0) arc [radius=1, start angle=0, end angle= 180]; \draw [light-gray] (1,1) arc [radius=1, start angle=90, end angle= 180]; \draw [light-gray] (0.5,0) -- (0.5,0.866); \draw [light-gray] (-0.5,0) -- (-0.5,0.866); \draw [light-gray] (0.5,0.866) -- (0.5,2); \draw [light-gray] (-0.5,0.866) -- (-0.5,2); \draw [light-gray,dashed] (0.5,2) -- (0.5,2.5); \draw [light-gray,dashed] (-0.5,2) -- (-0.5,2.5); \draw [light-gray] (0.5,2.5) -- (0.5,2.7); \draw [light-gray] (-0.5,2.5) -- (-0.5,2.7); \draw [light-gray] (0,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (-0.334,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (0.666,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (0,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (-0.608,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (0.392,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (0,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (-0.716,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (0.284,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (0,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (-0.784,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (0.216,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (0.5,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (-0.5,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (0.744,0) arc [radius=0.122, start angle=0, end 
angle= 180]; \draw [light-gray] (-0.256,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (0.5,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (-0.5,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (0.620,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (-0.380,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (0.666,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (0.412,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (-0.588,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (-0.334,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (0.334,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw [light-gray] (0.790,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw [light-gray] (-0.666,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw [light-gray] (-0.210,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw[line width=0.5mm, Maroon] (0,0) -- (0,2.7); \draw[line width=0.5mm, Maroon, ->] (0,0.4) -- (0,1.5); \draw[line width=0.5mm, Maroon] (0.5,0) -- (0.5,2.7); \draw[line width=0.5mm, Maroon, ->] (0.5,2.7) -- (0.5,1.4); \end{scope} \end{tikzpicture} \qquad \begin{tikzpicture} [scale=2.5] \begin{scope} \draw [light-gray] (1.1,0) -- (-1.1,0); \draw [light-gray] (0,0) -- (0,2); \draw [light-gray,dashed] (0,2) -- (0,2.5); \node at (-1,-0.15) {$-1$}; \node at (1,-0.15) {$1$}; \node at (0,-0.15) {$0$}; \node at (0.5,-0.15) {$\frac{1}{2}$}; \node at (-0.5,-0.15) {-$\frac{1}{2}$}; \node at (1.195,2.63) {$\tau$}; \draw (1.1,2.7) -- (1.1,2.55) -- (1.25,2.55); \draw [light-gray] (0,0) arc [radius=1, start angle=0, end angle= 90]; \draw [light-gray] (1,0) arc [radius=1, start angle=0, end angle= 180]; \draw [light-gray] (1,1) arc [radius=1, start angle=90, end angle= 180]; \draw [light-gray] (0.5,0) -- (0.5,0.866); \draw [light-gray] (-0.5,0) -- (-0.5,0.866); \draw [light-gray] (0.5,0.866) -- (0.5,2); \draw [light-gray] (-0.5,0.866) -- (-0.5,2); \draw [light-gray,dashed] (0.5,2) -- (0.5,2.5); \draw [light-gray,dashed] (-0.5,2) -- (-0.5,2.5); \draw [light-gray] (0.5,2.5) -- (0.5,2.7); \draw [light-gray] (-0.5,2.5) -- (-0.5,2.7); \draw [light-gray] (0,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (-0.334,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (0.666,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (0,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (-0.608,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (0.392,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (0,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (-0.716,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (0.284,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (0,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (-0.784,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (0.216,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw 
[light-gray] (0.5,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (-0.5,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (0.744,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (-0.256,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (0.5,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (-0.5,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (0.620,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (-0.380,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (0.666,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (0.412,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (-0.588,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (-0.334,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (0.334,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw [light-gray] (0.790,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw [light-gray] (-0.666,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw [light-gray] (-0.210,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw[line width=0.5mm, Maroon] (0,0.4) -- (0,2); \draw[line width=0.5mm, Maroon, ->] (0,0.4) -- (0,1.5); \draw[line width=0.5mm, Maroon] (0,0) arc (-90:90:0.2); \draw[line width=0.5mm, Maroon] (0.5,0.4) -- (0.5,2); \draw[line width=0.5mm, Maroon, ->] (0.5,2) -- (0.5,1.4); \draw[line width=0.5mm, Maroon] (0.5,0) arc (-90:90:0.2); \draw[line width=0.5mm, Maroon] (0,2) -- (0.5,2); \draw[line width=0.5mm, Maroon, ->] (0,2) -- (0.3,2); \draw[line width=0.5mm, Maroon] (0,2.2) -- (0.5,2.2); \draw[line width=0.5mm, Maroon, <-] (0.2,2.2) -- (0.5,2.2); \draw[line width=0.5mm, Maroon] (0,2.2) -- (0,2.7); \draw[line width=0.5mm, Maroon, ->] (0,2.2) -- (0,2.5); \draw[line width=0.5mm, Maroon] (0.5,2.2) -- (0.5,2.7); \draw[line width=0.5mm, Maroon, ->] (0.5,2.7) -- (0.5,2.4); \node at (0.7,2.4) {$\color{Maroon}\gamma$}; \node at (0.7,1.4) {$\color{Maroon}\Gamma$}; \end{scope} \end{tikzpicture} \caption{\label{fig:Gamma}Contours of integration in the upper half-plane of the modular parameter $\tau$. \textbf{Left:} Textbook contour corresponding to integration over the annulus ($i\mathbb{R}$) and M\"obius strip ($\frac{1}{2} + i \mathbb{R}$) topologies. The gray lines carving out the fundamental domain and its images are there only to guide the eye. \textbf{Right:} The integration contour consistent with causality and unitarity. The part approaching infinity can be evaluated exactly and integrating over $\Gamma$ is the main challenge addressed in this paper.} \end{figure} Let us first recall the textbook definition of the integration contour used in \eqref{eq:1.1} for the planar one-loop open-string amplitude. The integrand of \eqref{eq:1.1} is known explicitly and will be reviewed later in Section~\ref{subsec:basic amplitudes}. As mentioned above, the most critical part of the contour lies in the $\tau$-plane, see Figure~\ref{fig:Gamma} (left). The integration over the annulus corresponds to the contour on the imaginary axis $i\mathbb{R}$ directed upwards. By itself, the integral over this contour is divergent as $\tau \to i\infty$, which corresponds to the annulus becoming thick and looking like a disk with a single closed-string emission at zero momentum. In type I superstring, this divergence is cured by the M\"obius strip topology. 
It turns out that it can be described by exactly the same integrand as the annulus, except that the contour needs to be shifted to $\frac{1}{2} + i\mathbb{R}$ and its orientation reversed. The two parts of the contour meet at infinity and cancel the divergence provided the gauge group is chosen to be $\SO(32)$.

The problems with the integration contour occur near $\tau =0$ and $\tau=\frac{1}{2}$, where the annulus and the M\"obius strip become very thin and look like Feynman diagrams. This is precisely the part of the contour that has to be appropriately modified. It turns out that the prescription consistent with the Witten $i\varepsilon$ is to choose the contour with two semi-circles illustrated in Figure~\ref{fig:Gamma} (right). Details behind this construction will be given in Section~\ref{subsec:integration contour}. Since the integrand is holomorphic in the $\tau$ upper half-plane, the resulting contour can be freely deformed. We used this fact to split it into the part $\gamma$ enclosing $i\infty$ and the rest, which we call $\Gamma$. The former can be easily evaluated in terms of derivatives of the tree-level amplitude, so the main focus will be on $\Gamma$. Note that the size of the semi-circles in $\Gamma$ does not matter. A similar contour can be designed for the non-planar scattering amplitudes, see Section~\ref{subsec:integration contour} for details.

We stress that this contour is \emph{not} related by a contour deformation to the original contour. The reason is that the correlation function (the integrand) has essential singularities at every rational $\tau$ and hence the usual Cauchy contour deformation arguments do not apply there. Instead, $\gamma \cup \Gamma$ should be treated as a new proposal for the integration contour in the $\tau$-space. A more precise description on the whole complex moduli space $\mathcal{M}_{1,n}(\mathbb{C})$ will be given in \cite{LorenzSebastian}. As a matter of fact, the essential singularities are another way of seeing that the contour on the left of Figure~\ref{fig:Gamma} could not have been the correct one: it simply gives a divergence close to the real axis because of the bad direction of approach at the singularities.

In a previous publication \cite{Eberhardt:2022zay}, we have already checked implicitly that the above contour is needed for consistency with unitarity at $n=4$. More precisely, if $\Gamma$ corresponds to the choice of the $+i\varepsilon$ prescription, the $-i\varepsilon$ version would be described by a similar contour with both half-circles bulging to the left. The imaginary part of the amplitude, which is proportional to the difference between the two choices, would be given by integrating over two circles anchored at $\tau = 0$ and $\tau=\frac{1}{2}$. After additional massaging, this contour gives rise to an explicit and convergent integral representation of the imaginary part $\Im \A_{1,4}$. We checked that this answer is in perfect agreement with computing the imaginary part using unitarity cuts. We refer the reader to \cite{Eberhardt:2022zay} for more details.

Direct integration over the $n$-dimensional contour involving $\Gamma$ and the $z_i$ moduli using generalizations of the Pochhammer contour will be presented elsewhere \cite{LorenzSebastian}. For $n=4$, this gives a $4$-dimensional contour. But the bottom line of the above discussion of the imaginary part is that, in a sense, integrating over the circles anchored at any rational $\tau$ is already a solved problem and it leads to $2$-dimensional integrals.
Computing $\A_{1,n}$ can therefore be made more efficient if we managed to deform $\Gamma$ to a collection of circles. A realization of this idea is inspired by the beautiful work of Rademacher \cite{Rademacher, RademacherZuckerman}, who employed a similar deformation to provide convergent infinite-sum representations of the Fourier coefficients of certain modular forms. Starting with $\Gamma$, an iterative process described in Section~\ref{subsec:Rademacher contour} allows us to deform it into the contour $\Gamma_\infty$ shown in Figure~\ref{fig:Rademacher}. We call it the \emph{Rademacher contour}.\footnote{Versions of the Rademacher contour appeared previously in many other areas of high-energy theory, see e.g.\ \cite{Dijkgraaf:2000fq,Moore:2004fg, Denef:2007vg, Alday:2019vdr} for an incomplete cross-section.} It consists of an infinite number of Ford circles $C_{a/c}$ touching the real axis at rational points $\tau = \frac{a}{c}$ for every such fraction between $0$ and $\frac{1}{2}$. Each circle has a radius $\frac{1}{2c^2}$. The point is that each of them can be manipulated into a convergent integral expression. Roughly speaking, the smaller the circle, the smaller its contribution. Therefore, however insane the Rademacher contour might look like, it leads to an explicit formula for computing the amplitude. \begin{figure} \centering \begin{tikzpicture} \begin{scope} \node at (10.2,4.9) {$\tau$}; \draw (10.4,4.7) -- (10.0,4.7) -- (10.0,5.1); \tikzmath{ int \p, \q; for \q in {2,...,50}{ for \p in {1,...,\q/2}{ if gcd(\p,\q) == 1 then { \f = 16*\p/\q; \r = 8/(\q*\q); { \draw[line width=0.5mm, black!30!white, Maroon] (\f,\r) circle(\r); }; }; }; }; } \draw[ultra thick, Maroon, ->] (3.52,0.39) arc (192.7:100:.5); \draw[ultra thick, Maroon, ->] (4.48,0.64) arc (196.3:100:0.888); \draw[ultra thick, Maroon, ->] (6.62,0.55) arc (226.4:100:2); \node at (0,-.4) {$0$}; \node at (8,-.4) {$\frac{1}{2}$}; \node at (5.33,-.4) {$\frac{1}{3}$}; \node at (4,-.4) {$\frac{1}{4}$}; \node at (3.2,-.4) {$\frac{1}{5}$}; \node at (6.4,-.4) {$\frac{2}{5}$}; \node at (2.67,-.4) {$\frac{1}{6}$}; \node at (2.28,-.4) {$\frac{1}{7}$}; \node at (4.57,-.4) {$\frac{2}{7}$}; \node at (6.86,-.4) {$\frac{3}{7}$}; \node at (1.2,-.4) {$\cdots$}; \node at (8,2) {$C_{1/2}$}; \node at (5.33,0.8) {$C_{1/3}$}; \node at (4,0.45) {\scalebox{0.8}{$C_{1/4}$}}; \node at (3.2,0.3) {\scalebox{0.6}{$C_{1/5}$}}; \node at (1.5,0.6) {$\color{Maroon}\Gamma_\infty$}; \end{scope} \end{tikzpicture} \caption{\label{fig:Rademacher}Rademacher contour $\Gamma_\infty$ in the $\tau$-plane is a sum of infinitely many Ford circles $C_{a/c}$ for all irreducible fractions $\frac{a}{c} \in (0,\frac{1}{2}]$. Each circle touches the real axis at $\tau = \frac{a}{c}$, has radius $\frac{1}{2c^2}$, and is oriented clockwise.} \end{figure} The analysis becomes rather complicated due to the infinite number of branches of the integrand. After the dust settles, the final answer for the planar type I superstring amplitude $A^{\text{p}}$ takes the following form: \begin{equation}\label{eq:1.2} A^{\text{p}}(s,t) = \Delta A^{\text{p}}(s,t) + \sum_{\substack{\mathrm{irreducible}\\ \mathrm{fractions}\\ 0 < \frac{a}{c} \leq \frac{1}{2}}} \sum_{\begin{subarray}{c}\mathrm{windings}\\ n_\L,n_\mathrm{D},n_\mathrm{R},n_\U \ge 0 \\ n_\L+n_\mathrm{D}+n_\mathrm{R}+n_\U=c-1 \end{subarray}}A^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}_{a/c}(s,t)\ . 
\end{equation} The first contribution is the result of integrating over $\gamma$ and can be written explicitly in terms of derivatives of the Veneziano amplitude, see eq.~\eqref{eq:Delta A planar}. The second term involves a sum over all irreducible fractions $\frac{a}{c}$ between $0$ and $\frac{1}{2}$, as dictated by the Rademacher contour $\Gamma_\infty$. For each of them, we sum over a finite number of integers $n_\L$, $n_\mathrm{D}$, $n_\mathrm{R}$, $n_\U$ that depend on how the four punctures are placed relative to one another. We can interpret these terms more physically as winding numbers in the following way. Close to the real axis of $\tau$, Riemann surfaces become very skinny and look like worldlines. Consider the $s$-channel, in which the external particles $1$ and $2$ are incoming and hence are placed at past infinity, and likewise $3$ and $4$ are at future infinity since they are outgoing. We can separate them by an imagined space-like cut surface. Embedding the worldline in spacetime, the color lines of the annulus (near $\tau = \frac{0}{1})$ would go through the cut surface twice: on the way in and out. However, those of the M\"obius strip (near $\tau = \frac{1}{2}$) would do it four times since it needs an extra winding. As a generalization, close to the point $\tau = \frac{a}{c}$, the color lines do exactly $c{-}1$ extra windings. Moreover, we can count how many extra windings occur between every pair of punctures, starting from the $(1,2)$ and ending on $(4,1)$. Let us call these numbers $(n_\L, n_\mathrm{D}, n_\mathrm{R}, n_\U)$. They have to add up to $c{-}1$. An example is given in Figure~\ref{fig:windings}. This gives an interpretation of every term in the sum \eqref{eq:1.2}. \begin{figure} \centering \vspace{-4em} \begin{tikzpicture}[ CoilColor/.store in=\coilcolor,CoilColor=black, Step/.store in=\Step,Step=0.1, Coil/.style={ double=black, draw=gray!50, decoration={ #1, segment length=3mm, coil }, decorate, }, Coil2/.style={ decorate, decoration={ markings, mark= between positions 0 and 1 step \Step with { \begin{scope}[yscale=#1] \draw[xshift=9.2,fill,\coilcolor!70!black] (0,0)++(-135: 0.2 and 0.4) .. controls +(-0.2,0) and +(-0.3,0) .. (90: 0.2 and 0.4) .. controls +(-0.33,0) and +(-0.23,0) .. (-135: 0.2 and 0.4); \draw[white,line width=2pt] (0,0)++(90: 0.2 and 0.4) .. controls +(0.3,0) and +(0.2,0) .. (-45: 0.2 and 0.4); \draw[fill=\coilcolor,\coilcolor] (0,0)++(90: 0.2 and 0.4) .. controls +(0.3,0) and +(0.2,0) .. (-45: 0.2 and 0.4) .. controls +(0.25,0) and +(0.35,0) .. (90: 0.2 and 0.4); \end{scope} } } } ] \draw[Coil2=3.5,CoilColor=black] (1,-4) -- ++ (0,3); \draw [line width=0.2mm, bend right=130, looseness=3] (-0.4,-0.98) to (-0.4,-3.99); \draw (-2.27,-1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.57,-1.5) {1}; \draw (-2.27,-3.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.57,-3.5) {2}; \draw (2,-3.55) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.3,-3.55) {3}; \draw (2,-1.45) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.3,-1.45) {4}; \draw[dashed, very thick, Maroon] (0.8,-0.5) to (0.8,-4.6); \end{tikzpicture} \vspace{-4em} \caption{\label{fig:windings}Example contribution to the sum \eqref{eq:1.2} with $(n_\L, n_\mathrm{D}, n_\mathrm{R}, n_\U) = (0,1,7,1)$ and hence $c=10$. 
Time flows to the right and the cut (dashed line) separates the punctures with incoming ($1$ and $2$) and outgoing ($3$ and $4$) momenta. The four integers $(n_\L, n_\mathrm{D}, n_\mathrm{R}, n_\U)$ count the number of extra windings across the cut between the punctures $1$ and $2$, $2$ and $3$, $3$ and $4$, $4$ and $1$, respectively.}
\end{figure}

The explicit expressions for $A^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}_{a/c}$ are given in Section~\ref{subsec:results}. In addition to the aforementioned winding numbers, they depend on the kinematic variables $(s,t)$ and we find they are proportional to $\frac{1}{c^5}$. Every such term is given by a convergent two-dimensional integral similar to the ones originating from the phase space integration for unitarity cuts. We argue that the whole sum in \eqref{eq:1.2} is convergent for $s>0$, although the convergence is sufficiently fast for useful numerical evaluation when $s \gtrsim \frac{1}{\alpha'}$. For $s \to 0$, convergence breaks down. To understand the origin of this behavior, consider the toy model $A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}(s,t)=\frac{1}{c^5}\mathrm{e}^{i \alpha' s \phi}$, where $\phi$ is a ``random'' phase that can depend on the winding numbers, $\frac{a}{c}$, and the kinematics. This is a good model for the actual formula for $A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}(s,t)$ that we derive in this paper, see eq.~\eqref{eq:planar four-point function s-channel}. As we take $\alpha' s \ll 1$, the phases stop mattering and the number of terms in the four-fold sum (over $a,n_\L, n_\mathrm{D}, n_\mathrm{R}, n_\U$ subject to one constraint) in \eqref{eq:1.2} is $\mathcal{O}(c^4)$. Therefore, in the strict $\alpha' = 0$ limit, we end up with a harmonic sum of the type $\sum_{c=1}^{\infty} \frac{1}{c}$, which indeed diverges. Increasing $s$ helps with convergence. For example, if the phases $\phi$ were sufficiently random (as they seem to be in practice), they would give an ${\mathcal O}(\frac{1}{\sqrt{c}})$ enhancement to each sum, thus making the series converge pretty well. Indeed, we prove that \eqref{eq:1.2} converges for every $s \in \mathbb{Z}_{>0}$. Analogous formulas can be derived in the non-planar case.

The formula \eqref{eq:1.2} allows us to compute $A^{\mathrm{p}}(s,t)$ for given $s$ and $t$ and finally plot the amplitude. As an example, the results in the forward limit, $t=0$, are shown in Figure~\ref{fig:Ap-forward}. We plotted $A^\text{p}(s,0) \sin^2(\pi s)$, where the additional factor removes double poles at every integer $s$ to make the plot readable. One can notice a few interesting features. The imaginary part dominates over the real part by around two orders of magnitude (in the figure, the former is multiplied by $\tfrac{1}{20}$ for readability). The imaginary part remains constant or slightly increasing, while the real part oscillates around zero. We cross-checked our results with the unitarity cut computation \cite{Eberhardt:2022zay}, explicit evaluation using the generalized Pochhammer contour \cite{LorenzSebastian}, and computations of mass shifts, finding agreement in all cases. Details of the computations that went into Figure~\ref{fig:Ap-forward} are given in Section~\ref{subsec:numerical}.

\begin{figure}
\centering
\includegraphics[scale=1.2]{figures/forward-data}
\caption{\label{fig:Ap-forward}Planar open-string amplitude $A^\text{p}(s,t)$ in the forward limit $t=0$, plotted as a function of $s$.
For the purpose of the plot, the amplitude is multiplied by $\sin^2(\pi s)$ in order to remove double poles. The real part is given in orange and the imaginary part (rescaled by $\tfrac{1}{20}$) is in blue. Faint vertical lines indicate values of $s$ at which a new threshold opens up. The error bars from the extrapolation $c \to \infty$ are smaller than the line widths.}
\end{figure}

This interpretation in terms of winding numbers allows us to predict possible singularities of $A^\text{p}$. For example, a pole in the $s$-channel can only happen when the punctures $1$ and $2$ (or $3$ and $4$) are brought together. But this can only happen when $n_\L = 0$ (or $n_\mathrm{R} = 0$). Hence if one is interested in averaged mass shifts and decay widths of strings, which can be read off from the coefficient of the double pole at integer $s$, the computation simplifies quite dramatically. We use this fact to compute mass shifts and decay widths up to $s \leq 16$, see Appendix~\ref{app:mass-shifts}. Moreover, we observe that mass shifts and decay widths provide a rough estimate for the average behavior of the amplitude.

As a practical application, we used them to compute the high-energy fixed-angle behavior. This limit was previously studied using saddle-point methods in \cite{Gross:1989ge} (see also \cite{Gross:1987kza,Gross:1987ar} for closed strings), who found evidence that the amplitude is suppressed as $\mathrm{e}^{-\alpha' S_{\mathrm{tree}}}$ as $\alpha' \to \infty$ with $s/t$ fixed, where $S_{\mathrm{tree}}$ is the tree-level on-shell action. But it is actually not possible to perform the saddle-point analysis correctly without knowing the original integration contour, so the discussion of \cite{Gross:1987kza,Gross:1987ar,Gross:1989ge} should be viewed only as a heuristic. It is thus interesting to study what the high-energy behavior looks like in practice, and now we have a great tool to do so.

\begin{figure}
\centering
\includegraphics[scale=1.2]{figures/fixed-angle-data}
\caption{\label{fig:fixed-angle-data}Exponential decay of the planar open-string amplitude in the high-energy fixed-angle limit. We plot $A^{\mathrm{p}}(s,-\tfrac{s}{4})\sin^2(\pi s)$ with the absolute values of the real and imaginary parts in orange and blue respectively. The data for $s \leq 16$ is computed using \eqref{eq:1.2} (with $c\leq 10$) and for all integer $s$ using mass shifts and decay widths (with $c \leq 1000$). Faint vertical lines indicate energies at which a new threshold opens up. The gray dashed lines correspond to the exponential suppression $\mathrm{e}^{-S_{\mathrm{tree}}}$ with $S_{\mathrm{tree}} = s \log(s) + t \log(-t) + u \log(-u) \approx 0.56 s$ and are plotted with two different constants to guide the eye. The data confirms the exponential decay.}
\end{figure}

In Figure \ref{fig:fixed-angle-data} we plot an example numerical evaluation of $A^{\mathrm{p}}(s,t) \sin^2(\pi s)$ at a $60^{\circ}$ scattering angle (translating to $t=-\tfrac{s}{4}$) for a range of energies $s$ on a logarithmic scale. The data spans roughly $8$ orders of magnitude. We plot a continuous curve up to $s \leq 16$ and also high-precision values at all integer $s$ (since the plot is logarithmic, the spikes indicate zeros of the amplitude). The latter are computed using mass shifts and decay widths. The gray dashed lines indicate the exponential decay $\mathrm{e}^{-\alpha' S_{\mathrm{tree}}}$ and are in perfect agreement with the numerical data.
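For orientation, the exponential envelope quoted in the captions is elementary to reproduce. Here is a minimal sketch (ours, assuming NumPy, with $\alpha'=1$) of the tree-level on-shell action and the resulting suppression at the $60^{\circ}$ angle of Figure~\ref{fig:fixed-angle-data}:
\begin{verbatim}
import numpy as np

def S_tree(s, t):
    # Tree-level on-shell action s log(s) + t log(-t) + u log(-u), u = -s - t;
    # exp(-S_tree) is the expected fixed-angle suppression (alpha' = 1).
    u = -s - t
    return s * np.log(s) + t * np.log(-t) + u * np.log(-u)

s = np.linspace(4.0, 16.0, 4)
print(S_tree(s, -s / 4) / s)       # constant slope ~0.562, cf. "0.56 s"
print(np.exp(-S_tree(s, -s / 4)))  # envelope spanning many orders of magnitude
\end{verbatim}
The printed slope, $\log 4 - \tfrac{3}{4}\log 3 \approx 0.562$, matches the value $S_{\mathrm{tree}} \approx 0.56\, s$ quoted in the caption of Figure~\ref{fig:fixed-angle-data}.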
We reaffirm this result by extracting the exponent of the exponential decay for a couple of scattering angles in Figure~\ref{fig:ratios}. Details of these computations are given in Section~\ref{subsec:numerical}. For the imaginary part, this behavior was already verified in \cite{Eberhardt:2022zay}, where the coefficient of the exponential was also proposed, explaining the departure from a pure exponential in Figure~\ref{fig:fixed-angle-data}. Despite following the exponential envelope, the data has a jagged behavior, which is a result of receiving contributions from an infinite number of saddle points with the same exponential decay but different phases. We leave a more careful study of the high-energy behavior of the string amplitudes, where the integration contour $\Gamma$ is taken as a starting point, to future work.

Many physical aspects of open-string amplitudes, including computing the cross-section, dominance of low partial-wave spins, and low-energy expansions, were already discussed in \cite{Eberhardt:2022zay}, to which we refer the reader for details.

\begin{figure}
\centering
\includegraphics[scale=1.2]{figures/ratios}
\caption{\label{fig:ratios}Coefficient of the exponential suppression for two different scattering angles, corresponding to setting $t = -\tfrac{s}{2}$ and $t = -\tfrac{s}{3}$. After normalizing the amplitude by $\sin^2(\pi s) \sqrt{-8\pi t u /s}$, we plot the real part of its exponent in the units of the tree-level action $S_\mathrm{tree}$. The data for $s \lesssim 14.5$ is computed using \eqref{eq:1.2} with $c \leq 10$ and for all integer $s$ with $c \leq 1000$ using mass shifts and decay widths. The gray dashed line indicates the exponential suppression $\mathrm{e}^{-S_\mathrm{tree}}$, with which we find perfect agreement.}
\end{figure}

\medskip

This paper is organized as follows. In Section~\ref{sec:review}, we review the definitions of one-loop open string amplitudes and how Riemann surface degenerations encode different singularities of the amplitudes. We advise readers familiar with the subject to skip directly ahead to Section~\ref{sec:summary}, where the main results of this paper are summarized, including the proposal for the integration contour, the Rademacher procedure, and the numerical calculations. The curious reader then hopefully wants to know how to derive these results, which we explain in the following three sections. In Section~\ref{sec:two-point function}, we study the warm-up example of the two-point function, which illustrates most of the ideas employed in the full computation in a simplified setting. In Section~\ref{sec:planar amplitude derivation}, we give details of the manipulations leading to the formula \eqref{eq:1.2} in the planar case and in Section~\ref{sec:non-planar} we treat the non-planar one. Computations of mass shifts and decay widths are given in Section~\ref{sec:mass-shifts}. Finally, we conclude with a list of future directions in Section~\ref{sec:conclusion}.

This paper comes with a number of appendices. In Appendix~\ref{app:Delta A planar} we review the computation of the cusp contribution to the planar amplitude. In Appendix~\ref{app:convergence}, we discuss the convergence of the Rademacher method. In Appendix~\ref{app:count number of solutions quadratic equation} we explain how to count the solutions to quadratic equations modulo prime powers, which is an ingredient in the computations of mass shifts.
In Appendix~\ref{app:mass-shifts} we tabulate the results for the numerical evaluation of mass shifts and decay widths. \section{Review: One-loop open string amplitudes}\label{sec:review} \subsection{\label{subsec:basic amplitudes}Annulus and M\"obius strip topologies} In this work, we will consider one-loop open string four-point amplitudes in type I superstring theory. There are three possible diagrams for the scattering of four gluons that are depicted in Figure~\ref{fig:open string diagrams}. The three basic cases are: the planar annulus diagram, where all four vertex operators are inserted on one boundary of the annulus, the (planar) M\"obius strip diagram, and the non-planar annulus diagram, where two vertex operators are on one boundary and the other two are on the other boundary of the annulus. Of course, all these diagrams also exist with the roles of the punctures $1$, $2$, $3$ and $4$ permuted. We should immediately mention that the non-planar annulus diagram with one vertex operator on one boundary and all three others on the other boundary vanishes identically. The reason is that one has to trace over the Chan--Paton group factors and a single vertex operator insertion on one boundary leads to the factor $\Tr(t^a)=0$, where $t^a$ are the adjoint generators for $\SO(N)$ with $N=32$. \begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw (20:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (1.8,.51) {4}; \draw (-20:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (1.8,-.51) {3}; \draw (160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,.51) {1}; \draw (-160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,-.51) {2}; \end{scope} \begin{scope}[shift={(5,0)}] \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw (20:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (1.8,.51) {4}; \draw (-20:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (1.8,-.51) {3}; \draw (160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,.51) {1}; \draw (-160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,-.51) {2}; \fill[white] (-.6,.5) rectangle (.6,1.6); \fill[black!10!white] (-.62,.53) to (0,1.2) to[bend right=30] (-.62,1.375); \fill[black!10!white] (.62,.53) to (0,1.2) to[bend left=30] (.62,1.375); \draw[very thick, out=54.3, in=154.3, looseness=.8] (-.65,.47) to (.65,1.35); \fill[white] (0,1.2) circle (.1); \draw[very thick, out=125.7, in=25.7, looseness=.8] (.65,.47) to (-.65,1.35); \end{scope} \begin{scope}[shift={(10,0)}] \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw (20:.8) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (.4,.3) {3}; \draw (-20:.8) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (.4,-.3) {4}; \draw (160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,.51) {1}; \draw (-160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,-.51) {2}; \end{scope} \end{tikzpicture} \caption{\label{fig:open string diagrams}Three open string topologies at the one-loop level. \textbf{Left:} Planar annulus. \textbf{Middle:} M\"obius strip. \textbf{Right:} Non-planar annulus.} \end{figure} The color structure of the planar amplitudes is hence of the form $\Tr(t^{a_1} t^{a_2} t^{a_3} t^{a_4})$, whereas the color structure of the non-planar amplitudes is $\Tr(t^{a_1}t^{a_2})\Tr(t^{a_3}t^{a_4})$. Note that, relative to the M\"obius strip, the annulus contribution has an additional factor of $N$ coming from the Chan--Paton trace $\Tr(\varnothing) = N$ on the empty boundary. The integrands for these open string amplitudes are well known \cite{Green:1981ya, Schwarz:1982jn}. Setting $\alpha'=1$, they are given respectively by \begin{subequations} \label{eq:integrands four point functions} \begin{align} \A_{\text{an}}^{\text{p}}& = 2^9 \pi^2 g_\text{s}^4 N \, t_8\, \Tr(t^{a_1}t^{a_2}t^{a_3}t^{a_4})\, \frac{(-i)}{32}\int_{\mathcal{M}_{1,4}^{\text{p,an}}} \!\!\!\!\! \d \tau\, \d z_1 \, \d z_2 \, \d z_3\, \prod_{j<i} \vartheta_1(z_{ij},\tau)^{-s_{ij}}\, , \label{eq:planar annulus four point function}\\ \A_{\text{M\"ob}}& =2^9 \pi^2 g_\text{s}^4 \, t_8\, \Tr(t^{a_1}t^{a_2}t^{a_3}t^{a_4})\, i\int_{\mathcal{M}_{1,4}^{\text{M\"ob}}} \!\!\! \d \tau\, \d z_1 \, \d z_2 \, \d z_3 \, \prod_{j<i} \vartheta_1(z_{ij},\tau)^{-s_{ij}} \ , \label{eq:Moebius strip four point function}\\ \A_{\text{an}}^{\text{n-p}}& =2^9 \pi^2 g_\text{s}^4 \, t_8 \Tr(t^{a_1}t^{a_2})\Tr(t^{a_3}t^{a_4})\, \frac{(-i)}{32}\int_{\mathcal{M}_{1,4}^{\text{n-p,an}}} \!\!\! \d \tau\, \d z_1 \, \d z_2 \, \d z_3 \, \prod_{j=1}^2\prod_{i=3}^4 \vartheta_4(z_{ij},\tau)^{-s_{ij}} \nonumber\\ &\hspace{7cm}\times \big(\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)\big)^{-s} \ . \label{eq:non-planar annulus four point function} \end{align} \end{subequations} The planar amplitude is a sum of the first two contributions. The amplitudes depend on the Mandelstam invariants $s_{ij} = -(p_i + p_j)^2$, where $p_i$ denotes the external momentum associated to the vertex operator at position $z_i$. Only two kinematic invariants, say $(s,t)$ with $s = s_{12}$ and $t = s_{23}$, are independent. We use the mostly-minus signature in which, e.g., the $s$-channel kinematics is described by $-s<t<0$. We denote $z_{ij} = z_i - z_j$. We use the following standard conventions for the Jacobi theta functions \begin{subequations} \begin{align} \vartheta_1(z,\tau)&=i \sum_{n \in \ZZ} (-1)^n \mathrm{e}^{2\pi i(n-\frac{1}{2}) z+\pi i (n-\frac{1}{2})^2 \tau}\ , \label{eq:definition theta1} \\ \vartheta_4(z,\tau)&= \sum_{n \in \ZZ} (-1)^n\mathrm{e}^{2\pi i n z+\pi i n^2 \tau}\ . \label{eq:definition theta4} \end{align} \end{subequations} Here, $g_\mathrm{s}$ is the string coupling constant. The prefactor $t_8$ depends on the polarizations and kinematics and will be spelled out below. The moduli spaces $\mathcal{M}_{1,4}$ are real four-dimensional and consist of the purely imaginary modular parameter $\tau$ (for the M\"obius strip we have $\tau \in \frac{1}{2}+i \RR$) and purely real $z_i$, which are appropriately ordered, i.e.,\ $0\le z_1 \le z_2 \le z_3 \le z_4=1$ in the planar case and $-1 \le z_1 \le z_2 \le 1$, $z_{21}\le 1$, $0 \le z_3 \le z_4=1$ in the non-planar case.
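As an aside, the theta sums \eqref{eq:definition theta1} and \eqref{eq:definition theta4} converge rapidly for $\Im(\tau)>0$ and are straightforward to evaluate numerically. The following minimal Python sketch (the truncation order and test values are arbitrary choices) implements them and checks the leading large-$\Im(\tau)$ behavior $\vartheta_1(z,\tau)\approx 2\,\mathrm{e}^{\pi i \tau/4}\sin(\pi z)$, which will be used repeatedly below:

\begin{verbatim}
import numpy as np

def theta1(z, tau, N=40):
    # theta_1(z, tau) via its defining sum, truncated to |n| <= N
    n = np.arange(-N, N + 1)
    return 1j * np.sum((-1.0)**n * np.exp(2j*np.pi*(n - 0.5)*z
                                          + 1j*np.pi*(n - 0.5)**2*tau))

def theta4(z, tau, N=40):
    # theta_4(z, tau) via its defining sum, truncated to |n| <= N
    n = np.arange(-N, N + 1)
    return np.sum((-1.0)**n * np.exp(2j*np.pi*n*z + 1j*np.pi*n**2*tau))

# Checks: theta_1 is odd in z, and for large Im(tau) it approaches
# its leading behavior 2 exp(pi*i*tau/4) sin(pi*z).
z, tau = 0.3, 2.0j
assert abs(theta1(-z, tau) + theta1(z, tau)) < 1e-12
print(theta1(z, tau))                            # ~ 0.3364
print(2*np.exp(1j*np.pi*tau/4)*np.sin(np.pi*z))  # ~ 0.3364
\end{verbatim}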
The $\U(1)$ isometry group of the annulus and the M\"obius strip allows us to pick the location of one puncture, say $z_4$, arbitrarily and we will choose $z_4=1$ in the following. The factor of $-i$ in front of the expressions compensates for our choice to integrate over purely imaginary $\tau$, which supplies a further $i$ from the Jacobian. As written, the amplitudes are hence real. This immediately tells us that the above description of $\mathcal{M}_{1,4}$ cannot be quite correct: scattering amplitudes at loop level need imaginary parts for consistency with unitarity. We will temporarily ignore this issue and come back to it in Section~\ref{subsec:integration contour}, where we will define a prescription for the integration contour similar to the causal Feynman $i\varepsilon$ in quantum field theory. Geometrically, these formulas arise as follows. For an open string worldsheet such as the annulus and the M\"obius strip, there is always a corresponding closed surface that double covers the original open surface. The covering is branched along the boundaries of the Riemann surface. It is given in both cases by the torus. It can be constructed by taking the orientation double cover of the original surface and gluing the boundaries. For example, the orientation double cover of the M\"obius strip is an annulus and the aforementioned torus is obtained by gluing its two boundaries. As an orientation double cover, the covering surface admits an orientation-reversing involution $\Phi$ such that the original surface is given by the respective quotient where one identifies $z \sim \Phi(z)$. In the case at hand, the torus is as usual realized by $\TT=\CC/\Lambda$ with $\Lambda=\langle 1,\tau \rangle$. Taking then $z \in \TT$, the orientation-reversing map can be chosen as $\Phi(z)=\overline{z}$. For this to yield a well-defined map on the torus, we need $\Lambda=\overline{\Lambda}$. This is true in two distinct cases: \begin{enumerate} \item $\tau \in i \RR_{>0}$. In this case, the resulting surface has two boundaries, namely the boundary corresponding to $z \in \RR+\ZZ\, \tau$ and the boundary corresponding to $z \in \RR+(\ZZ+\frac{1}{2})\, \tau$. The resulting geometry is an annulus. \item $\tau\in \frac{1}{2}+i \RR_{>0}$. In this case, there is only a single boundary given by the translates of the real line and we hence obtain a M\"obius strip as the quotient surface. \end{enumerate} In particular, vertex operators for the annulus can be either inserted on the real line, $z_j \in \RR$, or on the line $z_j \in \RR+\frac{\tau}{2}$. For the planar case, we will always choose the real line. The close connection to the torus explains the appearance of Jacobi theta functions in \eqref{eq:integrands four point functions}. In fact, the Green's function on the torus is given by \begin{equation} G(z_i,z_j)=\log\left|\frac{\vartheta_1(z_{ij},\tau)}{\vartheta_1'(\tau)}\right|^2-\frac{2\pi [\Im(z_{ij})]^2}{\Im(\tau)}\ . \end{equation} The non-holomorphic piece is necessary because of the constant function that is a zero mode of the Laplacian. Consequently, the Green's function satisfies \begin{equation} \Delta_{z_i}G(z_i,z_j)=2\pi \delta^2(z_i-z_j)-\frac{2\pi}{\Im(\tau)} \end{equation} and the right-hand side has a vanishing integral over the torus.
Now the free boson propagator on the quotient surface is simply given by \begin{equation} \frac{1}{2}G(z_i,z_j)=\begin{cases} \log \frac{\vartheta_1(z_{ji},\tau)}{\vartheta_1'(\tau)} \qquad & \text{if $z_i$ and $z_j$ are on the same boundary}\ , \\ \log \frac{\vartheta_4(z_{ji},\tau)}{\vartheta_1'(\tau)} \qquad & \text{if $z_i$ and $z_j$ are on different boundaries}\ . \end{cases} \end{equation} This explains the various $\vartheta_i$-factors appearing in \eqref{eq:integrands four point functions}. It also explains why the planar annulus amplitude has exactly the same form as the M\"obius strip amplitude, except that $\tau$ is shifted to $\tau+\frac{1}{2}$ in the latter. The relative overall normalization between the two diagrams requires a more careful discussion, see \cite{Green:1984ed}. These are the amplitudes for type I superstrings, which are traditionally derived in the RNS formalism, although the pure-spinor superstring is actually more effective in deriving these formulas \cite{Berkovits:2004px}. From the perspective of the RNS formalism, it is surprising that one ends up with a simple integral over the \emph{bosonic} moduli space of Riemann surfaces. In general, superstring amplitudes involve integrals over supermoduli space: a supermanifold of dimension $3g-3+n|2g-2+n$ (for $n$ NS-punctures). Hence for a four-point function, there are four fermionic integrals to be done. In general, there is no canonical way of performing integrals over the fermionic directions since there is no preferred choice of such ``fermionic directions''. Correspondingly, superstring theory usually does not give canonical integrands over the moduli space of Riemann surfaces. At genus one, one is however in luck, since there is a very natural choice for how to do the fermionic integrals. In the traditional formalisms, fermionic integrals are performed by inserting picture-changing operators and on a genus-one surface we need exactly $n$ of them. Hence we can consider the correlation function of picture 0 NS-sector vertex operators, which leads to a well-defined integrand on the reduced space of supermoduli: the moduli space of spin-curves.\footnote{Strictly speaking, this distribution of picture changing operators (PCOs) is not entirely consistent, but it seems that one can get away with it at genus 1.} One then has to sum over the non-trivial spin-structures of the respective surfaces to finally reduce the amplitude to an integral over ordinary moduli space. Through various miraculous cancellations, one ends up with the simple integrands given in \eqref{eq:integrands four point functions}. In particular, the one-loop determinants of the worldsheet fields together with the additional contributions from the vertex operators (beyond the Green's function explained above) essentially cancel out in the end. They produce the coordinate-independent factor $t_8$. It is given by \begin{align}\label{eq:t8 def} t_8 = &\;\tr_v(F_1 F_2 F_3 F_4) + \tr_v(F_1 F_3 F_2 F_4) + \tr_v(F_1 F_2 F_4 F_3) \\ &- \tfrac{1}{4} \Big( \tr_v(F_1 F_2) \tr_v(F_3 F_4) + \tr_v(F_1 F_3) \tr_v(F_2 F_4) + \tr_v(F_1 F_4) \tr_v(F_2 F_3) \Big)\ ,\nonumber \end{align} where the linearized field strengths are $F_i^{\mu\nu} = p_i^\mu \epsilon_i^\nu - \epsilon_i^\mu p_i^\nu$ with polarization vectors $\epsilon_i^\mu$ and the traces $\tr_v$ are taken over the Lorentz indices. It equals the Yang--Mills numerator and is the unique permutation-invariant structure consistent with gauge invariance and supersymmetry.
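Since \eqref{eq:t8 def} consists of plain Lorentz traces, it is straightforward to evaluate explicitly. The following minimal Python sketch computes $t_8$ in the mostly-minus signature; the sample momenta and polarizations are arbitrary choices restricted to a four-dimensional slice of the kinematics:

\begin{verbatim}
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # mostly-minus metric on the 4d slice

def field_strength(p, eps):
    # F^{mu nu} = p^mu eps^nu - eps^mu p^nu, with the second index
    # lowered by eta so that tr_v becomes an ordinary matrix trace
    return (np.outer(p, eps) - np.outer(eps, p)) @ eta

def tr(*Fs):
    out = Fs[0]
    for F in Fs[1:]:
        out = out @ F
    return np.trace(out)

def t8(F1, F2, F3, F4):
    return (tr(F1, F2, F3, F4) + tr(F1, F3, F2, F4) + tr(F1, F2, F4, F3)
            - 0.25*(tr(F1, F2)*tr(F3, F4) + tr(F1, F3)*tr(F2, F4)
                    + tr(F1, F4)*tr(F2, F3)))

# Sample all-incoming on-shell massless momenta and transverse
# (helicity-like) polarizations; the numerical values are arbitrary.
p = [np.array([1.0, 0, 0, 1.0]), np.array([1.0, 0, 0, -1.0]),
     np.array([-1.0, 0, 1.0, 0]), np.array([-1.0, 0, -1.0, 0])]
eps = [np.array([0, 1, 1j, 0])/np.sqrt(2), np.array([0, 1, -1j, 0])/np.sqrt(2),
       np.array([0, 1, 0, 1j])/np.sqrt(2), np.array([0, 1, 0, -1j])/np.sqrt(2)]

Fs = [field_strength(pi, ei) for pi, ei in zip(p, eps)]
print(t8(*Fs))
# Gauge invariance is manifest: eps_i -> eps_i + lambda*p_i leaves F_i invariant.
\end{verbatim}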
Consequently, the four-gluon amplitude at any genus is guaranteed to have this universal factor present. For convenience, we will strip off the prefactors from \eqref{eq:integrands four point functions} that do not affect the analysis and denote the resulting amplitudes with non-curly symbols. In particular, after simplifying the integrand, the planar annulus amplitude is given by \begin{equation} A_\text{an}^\text{p} = -i \int_{i\mathbb{R}_{\geq 0}} \!\!\! \mathrm{d}\tau \int_{0 \leq z_1 \leq z_2 \leq z_3 \leq 1}\!\!\!\!\!\!\!\!\!\!\!\!\! \mathrm{d}z_1\, \mathrm{d}z_2 \, \mathrm{d} z_3 \left( \frac{\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{\!-s} \!\left( \frac{\vartheta_1(z_{32},\tau)\vartheta_1(z_{41},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{\!-t} \end{equation} and similarly the M\"obius strip contribution $A_{\text{M\"ob}}$ is obtained by replacing the $\tau$ integration with $\frac{1}{2} + i\mathbb{R}_{\geq 0}$ and multiplying by $-1$. The total planar amplitude is then \begin{equation} A^{\text{p}} = A_{\text{an}}^{\text{p}} + A_{\text{M\"ob}}\ . \end{equation} We also set $N=32$, which is required for this combination to be well-defined. Finally, the non-planar amplitude is given by \begin{align} A^{\text{n-p}}& = \frac{-i}{32}\int_{i\mathbb{R}_{\geq 0}} \!\!\! \d \tau \int_{\begin{subarray}{c} \, 0 \leq z_1 \leq z_2 \leq 1 \\ 0 \leq z_3 \leq 1 \end{subarray}} \d z_1 \, \d z_2 \, \d z_3 \, \prod_{j=1}^2\prod_{i=3}^4 \vartheta_4(z_{ij},\tau)^{-s_{ij}} \big(\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)\big)^{-s} . \end{align} All conventions agree with those used in \cite{Eberhardt:2022zay}. \begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw (160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,.51) {1}; \draw (-160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,-.51) {2}; \draw[very thick, fill=black!10!white] (2.2,0) circle (.7); \draw (2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,1) {4}; \draw (2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,-1) {3}; \end{scope} \begin{scope}[shift={(7,0)}] \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw (180:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,0) {1}; \draw[very thick, fill=black!10!white] (2.2,0) circle (.7); \draw (2.9,0) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (3.2,0) {3}; \draw (2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,1) {4}; \draw (2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,-1) {2}; \end{scope} \begin{scope}[shift={(0,-4)}] \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw[very thick, fill=black!10!white] (2.2,0) circle (.7); \draw (2.81,.35) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (3.15,.35) 
{3}; \draw (2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,1) {4}; \draw (2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,-1) {1}; \draw (2.81,-.35) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (3.15,-.35) {2}; \end{scope} \begin{scope}[shift={(7,-4)}] \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw (20:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (1.8,.51) {4}; \draw (-20:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (1.8,-.51) {3}; \draw (160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,.51) {1}; \draw (-160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,-.51) {2}; \fill[white] (-.6,.5) rectangle (.6,1.6); \fill[black!10!white] (-.65,.47) to (0,1.05) to (0,1.15) to (-.65,1.35); \fill[black!10!white] (.65,.47) to (0,1.05) to (0,1.15) to (.65,1.35); \draw[very thick, out=54.3, in=270, looseness=1, fill=black!10!white] (-.65,.47) to (0,1.1); \draw[very thick, out=90, in=25.7, looseness=1, fill=black!10!white] (0,1.1) to (-.65,1.35); \draw[very thick, out=125.7, in=270, looseness=1, fill=black!10!white] (.65,.47) to (0,1.1); \draw[very thick, out=90, in=154.3, looseness=1, fill=black!10!white] (0,1.1) to (.65,1.35); \end{scope} \end{tikzpicture} \caption{Four basic degenerations of the planar annulus amplitude. \textbf{Top left:} Massive pole exchange. \textbf{Top right:} Wave function renormalization. \textbf{Bottom left:} Tadpole. \textbf{Bottom right:} Non-separating degeneration.} \label{fig:planar annulus single degeneration} \end{figure} \subsection{Singularities of open strings} \label{subsec:singularities integrands} As it stands, the integrals in eq.~\eqref{eq:integrands four point functions} are all divergent. There are relatively benign divergences such as the collision of $z_1$ and $z_2$. We are already used to dealing with such singularities in the tree-level disk amplitude. They lead to the poles of the string amplitude corresponding to the exchange of massive string modes, e.g., the collision of $z_1$ and $z_2$ gives poles at $s \in \mathbb{Z}_{\geq 0}$. However, the story becomes more complicated at one loop because the amplitude will also have discontinuities, which are reflected in more intricate singular behaviours. Before discussing them, we should recall some basic features of the moduli space of open Riemann surfaces. We will mostly just discuss the planar annulus case, since all other cases are similar. There are various degenerations of the surface that are familiar from the Deligne--Mumford compactification of the closed string moduli space. There is one additional type of boundary in the open string that appears when closing a hole without vertex operators, which we discuss below. The basic single degenerations of the planar annulus diagrams are depicted in Figure~\ref{fig:planar annulus single degeneration}. Of course, these also exist with punctures relabelled in all ways such that the original permutation is preserved along the outer boundary. Let us discuss the physical meaning and behaviour of the integrand near these degenerations in turn.
Near degenerations, the worldsheet develops a very long ``neck'' connecting the two parts of the surface. This situation is conformally equivalent to a pinched cycle as drawn in Figure~\ref{fig:planar annulus single degeneration}. Hence, near the degeneration, the string worldsheet collapses to a worldline and one makes contact with the effective field-theory description, at least for that part of the diagram. To fully reduce to field-theory amplitudes, one has to completely degenerate the surface, i.e., pinch four compatible cycles. \subsubsection{Massive pole exchange} For example, one such degeneration corresponding to a field-theory bubble diagram is depicted in Figure~\ref{fig:planar annulus maximal degeneration}. Such Feynman diagrams correspond to a field theory with only cubic vertices and every cubic vertex is identified with a three-punctured disk in the string worldsheet. In fact, this relation can be made precise for the open string via Witten's open cubic string field theory. \begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw[very thick, fill=black!10!white] (2.2,0) circle (.7); \draw[very thick, fill=black!10!white] (-2.2,0) circle (.7); \fill[white] (-.6,.5) rectangle (.6,1.6); \fill[black!10!white] (-.65,.47) to (0,1.05) to (0,1.15) to (-.65,1.35); \fill[black!10!white] (.65,.47) to (0,1.05) to (0,1.15) to (.65,1.35); \draw[very thick, out=54.3, in=270, looseness=1, fill=black!10!white] (-.65,.47) to (0,1.1); \draw[very thick, out=90, in=25.7, looseness=1, fill=black!10!white] (0,1.1) to (-.65,1.35); \draw[very thick, out=125.7, in=270, looseness=1, fill=black!10!white] (.65,.47) to (0,1.1); \draw[very thick, out=90, in=154.3, looseness=1, fill=black!10!white] (0,1.1) to (.65,1.35); \fill[white] (-.6,-.5) rectangle (.6,-1.6); \fill[black!10!white] (-.65,-.47) to (0,-1.05) to (0,-1.15) to (-.65,-1.35); \fill[black!10!white] (.65,-.47) to (0,-1.05) to (0,-1.15) to (.65,-1.35); \draw[very thick, out=-54.3, in=-270, looseness=1, fill=black!10!white] (-.65,-.47) to (0,-1.1); \draw[very thick, out=-90, in=-25.7, looseness=1, fill=black!10!white] (0,-1.1) to (-.65,-1.35); \draw[very thick, out=-125.7, in=-270, looseness=1, fill=black!10!white] (.65,-.47) to (0,-1.1); \draw[very thick, out=-90, in=-154.3, looseness=1, fill=black!10!white] (0,-1.1) to (.65,-1.35); \draw (-2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.4,1) {1}; \draw (-2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.4,-1) {2}; \draw (2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,1) {4}; \draw (2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,-1) {3}; \node at (4,0) {$\Longleftrightarrow$}; \end{scope} \begin{scope}[shift={(8,0)}] \draw[very thick] (-3,1) to (-2,0) to (-3,-1); \draw[very thick] (3,1) to (2,0) to (3,-1); \draw[very thick] (0,0) circle (1); \draw[very thick] (-1,0) to (-2,0); \draw[very thick] (1,0) to (2,0); \fill (-2,0) circle (.1); \fill (-1,0) circle (.1); \fill (1,0) circle (.1); \fill (2,0) circle (.1); \end{scope} \end{tikzpicture} \caption{The maximal degeneration corresponding to a bubble diagram in the $s$-channel.} \label{fig:planar annulus maximal degeneration} \end{figure} In this way, we obtain one 
field-theory-like propagator for every pinched cycle in the open string worldsheet, and the singularities produced by various regions in moduli space are expected to reproduce the standard behaviors in field theory, as reviewed below. In particular, the first picture in Figure~\ref{fig:planar annulus single degeneration} corresponds to the exchange of a massive intermediate particle. It leads to poles in the amplitude for $s \in \ZZ_{\ge 0}$ since there are physical particles in string theory that can go on-shell in this case. In fact, the amplitude will have double poles at $s \in \ZZ_{\ge 0}$ since we can consider the double degeneration where punctures $1$ and $2$ also split off and produce a further propagator which can go on-shell. As we shall discuss below, the existence of double poles at $s \in \ZZ_{\ge 0}$ is directly tied to the mass renormalization of the intermediate massive states. Since the mass of the massless particles is protected by gauge invariance, the double pole is actually absent for $s=0$. \subsubsection{Wave function renormalization} The second picture in Figure~\ref{fig:planar annulus single degeneration} corresponds to the one-loop wave-function renormalization of the disk four-point amplitude. Let us see this explicitly. The degeneration is obtained by changing the coordinates to \begin{equation} z_2=1-\lambda,\qquad z_3=1-\lambda x \end{equation} for $0<x<1$ and small $\lambda$. Then $x$ will become a cross-ratio on the disk that splits off from the annulus as depicted in the figure. We can see this directly at the level of the integrand. In this limit $\vartheta_1(z_{ij},\tau)\sim z_{ij}\, \vartheta_1'(\tau)$ for $i, j \in \{2,3,4\}$ and hence the amplitude becomes \begin{equation} A^\text{p}_\text{an}=-i \int_{i \RR_{\ge 0}} \mathrm{d}\tau \int_0^1 \mathrm{d}z_1 \int_0^\delta \mathrm{d}\lambda \int_0^1 \mathrm{d}x\ \lambda\ x^{-s} \, (1-x)^{-t}\ . \label{eq:wave function renormalization degeneration} \end{equation} This result is indeed proportional to the disk amplitude, which is obtained by integrating over $x$. We cut the integral over $\lambda$ off at some small positive $\delta$, which is where the approximation of the degeneration breaks down. For fixed $\tau$, $z_1$ and $x$, the resulting integral over $\lambda$ is convergent as $\delta \to 0$. This means that the integrand is non-singular as we approach the degeneration corresponding to the wave function renormalization and thus the wave function renormalization actually vanishes. This is a consequence of supersymmetry: we are considering the scattering of massless gauge bosons which sit in a $\frac{1}{2}$-BPS multiplet of the spacetime supersymmetry algebra. This protects them from wave function renormalization. The upshot of this discussion is that we do not have to worry about the degenerations corresponding to wave function renormalizations: nothing special is happening there. \subsubsection{Tadpoles} In a similar vein, the third diagram in Figure~\ref{fig:planar annulus single degeneration} represents the tadpole diagram of string theory. Consistency of the theory requires the vanishing of this diagram. This is indeed the case as one can see by a scaling similar to \eqref{eq:wave function renormalization degeneration}, now setting \begin{equation} z_1=1-\lambda,\qquad z_2=1-\lambda x_2,\qquad z_3=1-\lambda x_1 \end{equation} with $0<x_1<x_2<1$ being the two cross-ratios on the disk with five marked points.
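Indeed, in this limit all $z_{ij}$ scale linearly with $\lambda$, e.g., $z_{21}=\lambda(1-x_2)$ and $z_{41}=\lambda$, so the powers of $\lambda$ cancel between the numerators and denominators of the two theta-function ratios, while the measure contributes $\mathrm{d}z_1\, \mathrm{d}z_2\, \mathrm{d}z_3=\lambda^2\, \mathrm{d}\lambda\, \mathrm{d}x_1\, \mathrm{d}x_2$. Schematically,
\begin{equation}
A^\text{p}_\text{an}\sim -i \int_{i \RR_{\ge 0}} \mathrm{d}\tau \int_0^\delta \mathrm{d}\lambda\ \lambda^2 \int_0^1 \mathrm{d}x_2 \int_0^{x_2} \mathrm{d}x_1\ f(x_1,x_2)\ ,
\end{equation}
where $f(x_1,x_2)$ denotes the resulting disk five-point integrand in the cross-ratios, whose precise form we do not need, and the $\lambda$-integral is again manifestly convergent.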
Thus also the regions in moduli space corresponding to the tadpole do not deserve special attention. \subsubsection{Non-separating degenerations} Finally, the most subtle degeneration is the non-separating degeneration depicted in the last picture of Figure~\ref{fig:planar annulus single degeneration}. The name indicates that the resulting nodal surface is still connected: it is topologically a six-punctured disk with two punctures glued together. Non-separating degenerations cause the appearance of discontinuities and branch cuts in the string amplitude. In fact, the singularity structure near such a discontinuity is more complicated than the Deligne--Mumford compactification of moduli space makes one suspect. In particular, the string integrand does not actually extend to a smooth function over all of the compactified moduli space. The depicted degeneration corresponds to $\tau\to 0$ with $\frac{z_{ij}}{\tau}$ fixed, meaning that all $z_{ij}\to 0$ at the same rate. It will turn out that it is actually more convenient to just take $\tau\to 0$ and keep all $z_i$'s fixed. The string amplitude automatically singles out the correct degenerations. To investigate the behaviour of the integrand near this degeneration, one uses the modular covariance of the integrand: \begin{align} \left(\frac{\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{\!-s} &= \mathrm{e}^{-\pi i s \tilde{\tau} (z_{21}^2+z_{43}^2-z_{42}^2-z_{31}^2)} \left(\frac{\vartheta_1(z_{21}\tilde{\tau} ,\tilde{\tau})\vartheta_1(z_{43}\tilde{\tau},\tilde{\tau})}{\vartheta_1(z_{31}\tilde{\tau},\tilde{\tau})\vartheta_1(z_{42}\tilde{\tau},\tilde{\tau})}\right)^{-s} \\ &=\mathrm{e}^{2\pi i s \tilde{\tau} z_{32}z_{41}} \left(\frac{\vartheta_1(z_{21}\tilde{\tau} ,\tilde{\tau})\vartheta_1(z_{43}\tilde{\tau},\tilde{\tau})}{\vartheta_1(z_{31}\tilde{\tau},\tilde{\tau})\vartheta_1(z_{42}\tilde{\tau},\tilde{\tau})}\right)^{-s}\ , \end{align} where $\tilde{\tau}=-\frac{1}{\tau}$ and the second equality uses the identity $z_{21}^2+z_{43}^2-z_{42}^2-z_{31}^2=-2z_{32}z_{41}$. Since $z_{ij}\tilde{\tau}$ lies between $0$ and $\tilde{\tau}$ on the imaginary axis and $\tilde{\tau}$ has large imaginary part, the $n=0$ term in the definition of the Jacobi theta function \eqref{eq:definition theta1} dominates in this limit, i.e., $\vartheta_1(z_{ij} \tilde{\tau},\tilde{\tau}) \sim i \mathrm{e}^{-\pi i \tilde{\tau}z_{ij}+\frac{1}{4}\pi i \tilde{\tau}}$. This yields \begin{equation} A_\text{an}^\text{p} =-i \int_{i/\delta}^{i \infty} \frac{\mathrm{d}\tilde{\tau}}{-\tilde{\tau}^2} \int \mathrm{d}z_1 \, \mathrm{d}z_2\, \mathrm{d}z_3\ \tilde{q}^{-s(1-z_{41})z_{32}-t z_{43} z_{21}}+\text{higher order in $\tilde{q}$}\ , \label{eq:annulus non-separating degeneration leading singularity} \end{equation} where $\tilde{q}=\mathrm{e}^{2\pi i \tilde{\tau}}$. We again cut the integral over $\tau$ (and hence over $\tilde{\tau}$) off at small $\delta$, where our approximations break down. One notices that, e.g., in the $s$-channel with large $s>0$ and small $t<0$, the exponent of $\tilde{q}$ can be negative and hence the integral over $\tilde{\tau}$ is generically divergent. For fixed $s$ and $t$, there is a fixed number of terms in the expansion of the $\vartheta_1$-function that yield divergent contributions as $\tilde{\tau} \to i \infty$. These are precisely the terms that can contribute to the imaginary part of the amplitude, since positive exponents of $\tilde{q}$ come with manifestly real coefficients and hence cannot contribute to $\Im A^{\text{p}}_{\text{an}}$.
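As a cross-check, the modular transformation above can be verified numerically with the truncated theta sum of \eqref{eq:definition theta1}. The following minimal Python sketch (all test values are arbitrary) confirms the identity to machine precision:

\begin{verbatim}
import numpy as np

def theta1(z, tau, N=60):
    n = np.arange(-N, N + 1)
    return 1j*np.sum((-1.0)**n * np.exp(2j*np.pi*(n - 0.5)*z
                                        + 1j*np.pi*(n - 0.5)**2*tau))

def ratio(z1, z2, z3, z4, tau, s):
    # the s-dependent theta ratio appearing in the planar integrand
    r = (theta1(z2 - z1, tau)*theta1(z4 - z3, tau)
         / (theta1(z3 - z1, tau)*theta1(z4 - z2, tau)))
    return r**(-s)

s, tau = 0.37, 0.23j            # arbitrary test values; tau purely imaginary
tt = -1/tau                     # tilde tau = -1/tau
z1, z2, z3, z4 = 0.11, 0.38, 0.65, 1.0

lhs = ratio(z1, z2, z3, z4, tau, s)
rhs = (np.exp(2j*np.pi*s*tt*(z3 - z2)*(z4 - z1))
       * ratio(z1*tt, z2*tt, z3*tt, z4*tt, tt, s))
print(lhs, rhs)                 # the two agree to machine precision
\end{verbatim}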
Below, we will discuss how to properly deal with the divergent contributions. For now, let us note that the imaginary part of the amplitude is much simpler than the real part because only finitely many terms in the $\tilde{q}$-expansion contribute to it. This is of course expected physically because the imaginary part of the amplitude can in principle be computed by the optical theorem from tree-level amplitudes, which was discussed in detail in \cite{Eberhardt:2022zay}. Let us also mention that this discussion immediately implies that the string integrand does \emph{not} extend to a well-defined smooth function on the Deligne--Mumford compactification of moduli space. Indeed, which term in the $\tilde{q}$-expansion in \eqref{eq:annulus non-separating degeneration leading singularity} dominates for $\tilde{q}\to 0$ depends on the choice of the other moduli $z_i$ and consequently is not a smooth function of the $z_i$'s in this limit. Contrary to what is often stated, this means that the string integrands have a more complicated singularity structure than that predicted by the Deligne--Mumford compactification and to properly characterize them one would also need to consider a more complicated compactification of the moduli space. As far as we are aware, this has not been made precise in the literature. \subsubsection{Closed string pole} There is one final singularity of the integrand that is special to the open string. If one views the annulus as a cylinder, then one can make the cylinder very long and pinch the corresponding closed string cycle. For the non-planar diagram, this leads to two disks joined at a single node as illustrated in Figure~\ref{fig:closed string pole non-planar diagram}. \begin{figure} \centering \begin{tikzpicture} \draw[very thick, fill=black!10!white] (0,1.5) to (2.5,0) to (0,-1.5); \draw[very thick, fill=white] (0,0) ellipse (.5 and 1.5); \draw[very thick, fill=black!10!white] (5,1.5) to (2.5,0) to (5,-1.5); \draw[thick, fill=black!10!white, dashed] (5,0) ellipse (.5 and 1.5); \draw[very thick] (4.83,-1.42) arc(-110:110: .5 and 1.5); \draw (5.45,.7) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (5.8,.7) {3}; \draw (5.45,-.7) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (5.8,-.7) {4}; \draw (0.44,.7) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (0,.7) {1}; \draw (0.44,-.7) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (0,-.7) {2}; \end{tikzpicture} \caption{The degeneration leading to the closed string pole.} \label{fig:closed string pole non-planar diagram} \end{figure} This degeneration is actually of real codimension-2, in contrast with the codimension-1 cases encountered above, since it corresponds to a closed string degeneration and indeed each of the two disks has only one real modulus. Explicitly, this corresponds to taking $\tau \to i \infty$, where to leading order $\vartheta_4(z_{ij},\tau) \sim 1$ and $\vartheta_1(z_{ij},\tau)\sim 2 \mathrm{e}^{\frac{1}{4} \pi i \tau} \sin(\pi z_{ij})$. The amplitude becomes \begin{align} A^\text{n-p}\sim -i \int_{i/\delta}^{i \infty} \mathrm{d}\tau \int_{0\le z_1 \le z_2\le 1} \mathrm{d}z_1\, \mathrm{d}z_2 \int_0^1 \mathrm{d}z_3\ \left(4 \sin(\pi z_{21}) \sin(\pi z_{43})\right)^{-s} \mathrm{e}^{-\frac{1}{2} \pi i s \tau}\ .
\end{align} The integral over $\tau$ is again divergent, but can in this case be defined in an ad hoc manner, e.g., by analytic continuation in $s$. The fact that the integrand only depends on $z_{21}$ and $z_{43}$ reflects the fact that this degeneration is really codimension-2. We can hence fix $z_2=1$ and compute \begin{equation} A^\text{n-p}\sim -\frac{2}{\pi s} \left(\int_0^1 \mathrm{d}z\ \left(2 \sin(\pi z)\right)^{-s}\right)^{\!2}\ . \end{equation} The remaining integral corresponds to the disk two-point function with one closed-string vertex operator. Higher closed-string poles can be observed by keeping more terms in the $q$-expansion of the Jacobi theta functions. Since $\vartheta_4(z,\tau)$ has a $q$-expansion with half-integer powers of $q$, it leads to the following integrals over $\tau$: \begin{equation} -i \int_{i/\delta}^{i \infty} \mathrm{d}\tau \ \mathrm{e}^{-\frac{1}{2}\pi i s \tau+\pi i k \tau} \sim -\frac{2}{\pi(s-2k)} \end{equation} with integer $k$. Thus, the amplitude has a closed-string pole at every even integer. The factor of 2, compared to the normalization of the open-string spectrum, corresponds to the usual relative normalization between the open- and closed-string spectra. For the planar amplitudes, this region in the moduli space also exists. However, the corresponding closed-string exchange happens at zero momentum and thus corresponds to a closed string tadpole. Explicitly, we have for the planar annulus amplitude for $\tau \to i \infty$ \begin{equation} A^\text{p}_\text{an} \sim -i\int_{i/\delta}^{i \infty} \mathrm{d}\tau \int \mathrm{d}z_1\, \mathrm{d}z_2 \, \mathrm{d}z_3\ \left(\frac{\sin(\pi z_{21}) \sin(\pi z_{43})}{\sin(\pi z_{31}) \sin(\pi z_{42})} \right)^{-s} \left(\frac{\sin(\pi z_{32}) \sin(\pi z_{41})}{\sin(\pi z_{31}) \sin(\pi z_{42})} \right)^{-t}\ . \label{eq:tau to infinity behaviour planar amplitude} \end{equation} Here, we again replaced the Jacobi theta functions by their leading behaviour as $\tau \to i\infty$. The integrand becomes completely $\tau$-independent and hence the integral over $\tau$ diverges. Nevertheless, we can observe that the M\"obius strip has a similar divergence as $\tau \to i\infty$ and since $\tau$ and $\tau+\frac{1}{2}$ become indistinguishable for large $\Im(\tau)$, the integrands approach exactly the same value, except for the overall minus sign that is present in \eqref{eq:Moebius strip four point function} compared to \eqref{eq:planar annulus four point function} when $N=32$. Hence the two divergences can be cancelled against each other in the full planar string amplitude. This is known as the Fischler--Susskind--Polchinski mechanism \cite{Fischler:1986ci, Polchinski:1994fq}. In the next section, we will see an elegant way of naturally combining the annulus and M\"obius strip diagrams. \section{Summary of the results}\label{sec:summary} In this section, we explain all the conceptual ideas utilized in this paper without going into too many technical details. The full derivation of our formulas for the scattering amplitudes is given in Section~\ref{sec:planar amplitude derivation} and Section~\ref{sec:non-planar}, after a simpler warm-up example that we discuss in Section~\ref{sec:two-point function}. Consequences for the mass shifts of the string spectrum are explored in Section~\ref{sec:mass-shifts}. In this section, we also demonstrate the usefulness of the Rademacher expansion by explicitly evaluating it numerically.
\subsection{Integration contour} \label{subsec:integration contour} After having discussed various singularities of the open string integrands, we move on to making sense of the integrals near the degenerations. One obvious idea for evaluating string amplitudes is to chop up the integration domain, compute the individual pieces in (typically disjoint) kinematic regions where they converge, and define the final result via analytic continuation back to a single kinematic point. This approach was used in the old literature on string amplitudes, see, e.g., \cite{DHoker:1993hvl,DHoker:1993vpp,DHoker:1994gnm}, but is difficult to make practical beyond genus zero. The simple reason is that in order to perform analytic continuation, one needs an analytic expression to begin with, and these are nearly impossible to find for such an intricate object as a string scattering amplitude. Alternatively, one might define the analytic continuation using dispersive methods, such as the Mandelstam representation. But in contrast with quantum field theory, these are difficult to construct due to the exponential divergences at infinity. Likewise, the $\alpha'$-expansion is not an option because it would only provide an asymptotic series. Hence, we are led to the conclusion that in order to evaluate string amplitudes at finite $\alpha'$, one has to face the problem of constructing the correct integration contour. A physical picture for deciding how to construct the contour was proposed more recently by Witten \cite{Witten:2013pra}. He pointed out that the reason for the divergences is that we treat the worldsheet as a Euclidean 2D CFT, whereas the target space is Lorentzian. To get the correct causal structure of the amplitude, we would hence like to perform the computations on a Lorentzian worldsheet. While this is not possible globally on the moduli space without introducing spurious UV divergences, it is possible near the degenerations, where long tubes or strips develop on the worldsheet which can be endowed with a Lorentzian metric. For a codimension-$1$ degeneration, there is always a modulus $\tilde{q}$ that represents a good local coordinate on moduli space such that $\tilde{q}=0$ is the degenerate locus. For example, in the four degenerations depicted in Figure~\ref{fig:planar annulus single degeneration}, $\tilde{q}$ corresponds to: \begin{equation} z_{43},\quad z_{42},\quad z_{41},\quad \mathrm{and}\quad \mathrm{e}^{-\frac{2\pi i}{\tau}}, \end{equation} respectively. Here, $\tilde{q}$ measures the width of the neck connecting the two parts of the surface, which is conformally equivalent to saying that $\tilde{q}=\mathrm{e}^{-t}$, where $t$ is the proper Euclidean length of the tube. Thus the Lorentzian analytic continuation amounts to rotating $t$ into the upper half-plane after some large proper time $t_\ast \gg 1$: along the rotated part of the contour, $t=t_\ast+i t'$, the modulus $|\tilde{q}|=\mathrm{e}^{-t_\ast}$ stays fixed while the phase winds, tracing out the swirl contour in the $\tilde{q}$-plane, see Figure~\ref{fig:Euclidean to Lorentzian contour}.
\begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw[->, gray] (-0.5,0) -- (5,0); \draw[->, gray] (0,-0.5) -- (0,2); \draw[thick] (4.5,2) -- (4.5,1.5) -- (5,1.5); \node at (4.75,1.75) {$t$}; \draw[very thick, Maroon] (0,0) -- (3,0); \draw[->, very thick, Maroon] (0,0) -- (1.5,0); \draw[very thick, Maroon] (3,0) -- (3.3,2); \draw[->, very thick, Maroon] (3,0) -- (3.15,1); \fill (3,0) circle (0.07) node[below right] {$t_\ast \gg 1$}; \node (E) at (1,0.8) {\footnotesize Euclidean}; \node (L) at (1.5,1.5) {\footnotesize Lorentzian}; \draw[->] (E) to (1.8,0.2); \draw[->] (L) to (3,1.2); \end{scope} \begin{scope}[shift={(8,0)}] \draw[->, gray] (-0.5,0) -- (5,0); \draw[->, gray] (0,-0.5) -- (0,2); \draw[thick] (4.5,2) -- (4.5,1.5) -- (5,1.5); \node at (4.75,1.75) {$\tilde{q}$}; \draw[very thick, Maroon, variable=\t, domain=0:2400, samples=200, smooth] plot ({-\t}:{1-.0004*\t}); \draw[very thick, Maroon] (1,0) -- (4,0); \draw[->, very thick, Maroon] (4,0) -- (2.5,0); \fill (1,0) circle (0.07) node[below right] {$e^{-t_\ast}$}; \end{scope} \end{tikzpicture} \caption{\label{fig:Euclidean to Lorentzian contour}The integration contour in the neighborhood of the divisor at small $\tilde{q} = e^{-t}$. \textbf{Left:} In the $t$-plane, the Euclidean contour running along the real axis is rotated into a Lorentzian one after some large proper time $t_\ast$. The fact that the Lorentzian contour retains a small positive real part is equivalent to the Feynman $i\varepsilon$ prescription. \textbf{Right:} The image of the contour in the variable $\tilde{q}$ involves an infinite spiral onto the divisor at $\tilde{q}=0$ with radius $e^{-t_\ast}$.} \end{figure} This necessitates the definition of a complexification of the open-string moduli space $\mathcal{M}_{1,n}$. We already discussed this complexification in Section~\ref{subsec:basic amplitudes}. It is induced from the underlying complex moduli space of the torus that appears as the orientation double cover. In practice, it just corresponds to allowing complex $z_i$'s and arbitrary $\tau$ in the upper half-plane $\HH$. The contour of integration is most interesting for the $\tau$-part of the integral, since it leads to the discontinuities of the amplitude. Here, we describe it for any $n$. As we approach the $\tau \to 0$ region, the local parameter is given by $\tilde{q}=\mathrm{e}^{-\frac{2\pi i}{\tau}}$. From the above discussion, the relevant contour hence takes the form \begin{equation} \frac{2\pi i}{\tau}= t_\ast +i t \end{equation} for some large real constant $t_\ast$ and $t \in \RR_{\ge 0}$ describing the Wick-rotated part of the contour, i.e., \begin{equation} \tau=\frac{2\pi i }{t_\ast +i t}\ . \end{equation} This contour maps to a semi-circle in the complex $\tau$-plane with radius $\frac{\pi}{t_\ast}$, centered at $\frac{\pi i}{t_\ast}$ and bulging to the right, see Figure~\ref{fig:tau contours}. The precise shape of the contour is of course not important, since we can always deform it. The only important feature is that we approach $\tau=0$ from the right and not from the top. The direction in which we approach $\tau=0$ matters since $\tau=0$ is an essential singularity of the integrand.
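For concreteness, the semi-circle geometry claimed above follows from a one-line computation: for $t \ge 0$,
\begin{equation}
\left|\tau-\frac{\pi i}{t_\ast}\right|=\left|\frac{2\pi i}{t_\ast+it}-\frac{\pi i}{t_\ast}\right|=\frac{\pi \left|t_\ast-it\right|}{t_\ast \left|t_\ast+it\right|}=\frac{\pi}{t_\ast}\ , \qquad \Re(\tau)=\frac{2\pi t}{t_\ast^2+t^2}\ge 0\ ,
\end{equation}
so the Wick-rotated part of the contour indeed traces out the right half of this circle, approaching $\tau=0$ as $t \to \infty$.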
\begin{figure} \centering \qquad \begin{tikzpicture}[scale=1] \draw[line width=0.6mm, Maroon] (0,0) arc (-90:90:1); \draw[line width=0.6mm, Maroon] (4,0) arc (-90:90:1); \draw[line width=0.6mm, Maroon] (0,2) -- (0,6) -- (4,6) -- (4,2); \draw[line width=0.6mm, Maroon, ->] (0,2) -- (0,4.5); \draw[line width=0.6mm, Maroon, ->] (0,6) -- (2,6); \draw[line width=0.6mm, Maroon, ->] (4,6) -- (4,4.3); \fill (0,0) circle (.1) node[below] {$\tau=0$}; \fill (4,0) circle (.1) node[below, align=center] {$\tau=\frac{1}{2}$ {\footnotesize in the planar case} \\ $\tau=2$ {\footnotesize in the non-planar case}}; \fill (2,7) circle (.1) node[above, align=center] {{\footnotesize closed string pole} \\ \vspace{0.7em}$\tau \to i \infty$}; \draw[thick] (5.5,8) -- (5.5,7.5) -- (6,7.5); \node at (5.75,7.75) {$\tau$}; \node at (4.5,4) {$\textcolor{Maroon}{\Gamma}$}; \end{tikzpicture} \caption{Contour of integration in the $\tau$-plane for open string one-loop amplitudes. The endpoint of the contour is $\tau = \frac{1}{2}$ in the planar case and $\tau = 2$ in the non-planar case.} \label{fig:tau contours} \end{figure} The other interesting place of the $\tau$-integration is $\tau \to i\infty$, where in the planar case we expect the Fischler--Susskind--Polchinski mechanism to cancel the respective singularities, and where in the non-planar case the closed string pole originates. Let us start with the planar case. Recall that the M\"obius strip case is obtained by shifting $\tau \to \tau + \frac{1}{2}$ up to a minus sign compared to the annulus case, so the part of the contour close to $\tau = \frac{1}{2}$ looks the same as near $\tau = 0$, except for reversed orientation, see Figure~\ref{fig:tau contours}. Most of the two contours cancel out and the end result is simply that the annulus and M\"obius strip contours get connected, up to a contribution from $i\infty$. The resulting contour $\Gamma$ is displayed in Figure~\ref{fig:tau contours}. We should mention that there is a bit of choice involved in this contour. For example, we could also have declared that the M\"obius strip corresponds to $\tau \in \frac{3}{2}+i \RR_{\ge 0}$ (since the integrands are periodic in $\tau$) and hence the horizontal connection between the contours would have been longer. Another common definition is to take the principal value of the integral in $q = \mathrm{e}^{2\pi i \tau}$ that runs through the pole at $q=0$. This corresponds to putting no horizontal part in the contour. All these definitions differ by a multiple of the residue of the pole at infinity. More precisely, for the four-point amplitude, the principal value definition and the definition in terms of a closed contour differ by \begin{equation} \Delta A^\text{p} = \frac{i}{2} \int_{0 \leq z_1 \leq z_2 \leq z_3 \leq 1} \!\!\!\!\!\!\!\!\!\! \mathrm{d}z_1\, \mathrm{d}z_2\, \mathrm{d}z_3 \ \left(\frac{\sin(\pi z_{21}) \sin(\pi z_{43})}{\sin(\pi z_{31}) \sin(\pi z_{42})} \right)^{\!-s} \left(\frac{\sin(\pi z_{32}) \sin(\pi z_{41})}{\sin(\pi z_{31}) \sin(\pi z_{42})} \right)^{\!-t} , \label{eq:Delta A planar integral} \end{equation} as worked out in eq.~\eqref{eq:tau to infinity behaviour planar amplitude}. The remaining integral is the disk four-point function with an additional closed-string dilaton vertex operator at zero momentum inserted. This follows directly from the geometry of the degeneration: for large $\tau$, the hole of the annulus closes and is replaced by a puncture with no momentum inflow.
The leading term comes from the massless level and the Lorentz index structure only allows for a scalar, i.e., the dilaton. The dilaton vertex operator at zero momentum is in fact equal to the action itself and hence an insertion of this operator simply renormalizes $\alpha'$. As one can check, the residue is indeed proportional to the $\alpha'$-derivative of the tree-level amplitude, explicitly \begin{equation} \Delta A^\text{p}=\frac{i}{(2\pi)^2}\left[\frac{\mathrm{d}}{\mathrm{d}s} \left(\frac{\Gamma(1-s)\Gamma(-t)}{\Gamma(1-s-t)}\right)+\frac{\mathrm{d}}{\mathrm{d}t} \left(\frac{\Gamma(-s)\Gamma(1-t)}{\Gamma(1-s-t)}\right)\right] \ .\label{eq:Delta A planar} \end{equation} We check this in Appendix~\ref{app:Delta A planar}. This fact is known as the soft-dilaton theorem in string theory \cite{Ademollo:1975pf,Shapiro:1975cz}. In practice, we will hence compute with the closed contour $\Gamma$ as in Figure~\ref{fig:tau contours}, but will then subtract the contribution from the closed string pole in the end in order to obtain the actual amplitude with correct reality properties. We will, however, mostly suppress this in the following discussion. Thus, the combined planar amplitude is given by \begin{align} A^\text{p} &= \Delta A^\text{p} -i \int_{\Gamma} \mathrm{d}\tau \!\! \int \mathrm{d}z_1 \, \mathrm{d}z_2 \, \mathrm{d} z_3 \left( \frac{\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{\!-s} \left( \frac{\vartheta_1(z_{32},\tau)\vartheta_1(z_{41},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{\!-t} . \label{eq:planar amplitude} \end{align} We have not spelled out explicitly the appropriate contours for the $z_i$-integration; we will analyze them further below. A very similar contour can be defined for the non-planar case. In this case, the $\tau$-contour as defined via the $i \varepsilon$ prescription is the same as the one for the planar annulus. However, the integrand is quasi-periodic under $\tau \to \tau+2$, picking up the simple phase $\mathrm{e}^{-\pi i s}$. This means that we can compute the expression \begin{equation} (1-\mathrm{e}^{-\pi i s}) A^\text{n-p} \label{eq:non-planar pi i s prefactor} \end{equation} by subtracting from the original contour a contour that is shifted as $\tau \to \tau+2$. This then cancels the infinite horizontal tail that runs to the left as $\tau=it_\ast -t$. The contour is then essentially identical to the planar contour, except that it ends at $\tau=2$ instead of $\tau=\frac{1}{2}$, see Figure~\ref{fig:tau contours}. \subsection{Rademacher contour} \label{subsec:Rademacher contour} The Rademacher contour provides a general method for computing contour integrals of modular objects. It was historically used to derive an exact convergent expansion for the partition numbers, which appear as the Fourier coefficients of the Dedekind eta-function $\eta(\tau)^{-1}$, see \cite{RademacherParition}. Let us explain the basic method with the simple example $\eta(\tau)^{-24}$, which gives the bosonic open-string partition function and hence is interesting in its own right. We will consider the more complicated examples relevant for the open superstring amplitudes below.
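For later reference, the Fourier coefficients of $\eta(\tau)^{-24}$, which count the bosonic open-string states level by level, are easy to generate. The following is a minimal Python sketch using exact integer arithmetic (the truncation order is an arbitrary choice):

\begin{verbatim}
# q-expansion of 1/eta(tau)^24 = q^{-1} prod_{n>=1} (1 - q^n)^{-24},
# computed up to order N; the overall factor q^{-1} is kept implicit.
N = 8
coeffs = [0]*(N + 1)
coeffs[0] = 1
for n in range(1, N + 1):
    for _ in range(24):             # multiply by 1/(1 - q^n) 24 times
        for k in range(n, N + 1):
            coeffs[k] += coeffs[k - n]

print(coeffs)  # [1, 24, 324, 3200, 25650, ...] -> q^{-1} + 24 + 324 q + ...
\end{verbatim}

The first two terms, $q^{-1}+24$, are precisely the ones that will matter in the Rademacher analysis below.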
Suppose we want to compute \begin{equation} \int_{\Gamma} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}}\ , \end{equation} where the contour is identical to the one displayed in Figure~\ref{fig:tau contours} with the endpoint at $\tau=\frac{1}{2}$.\footnote{In the open bosonic string, we face the same problem as in the planar four-point function discussed above: we should really take the principal value at the pole $\tau=i\infty$ in order to get a physical result. For the bosonic string, there is a double pole at $\tau=i\infty$ and hence a correction term as in \eqref{eq:Delta A planar} does not exist. This makes the present computation unphysical.} \begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw[very thick, black!30!white] (0,0) arc (-90:90:4); \draw[very thick, black!30!white] (8,0) arc (-90:-270:4); \tikzmath{ int \p, \q; for \q in {2,...,20}{ for \p in {1,...,\q}{ if gcd(\p,\q) == 1 then { \f = 8*\p/\q; \r = 4/(\q*\q); { \draw[very thick, light-gray] (\f,\r) circle(\r); }; }; }; }; } \node at (0,-.4) {$0$}; \node at (8,-.4) {$1$}; \node at (4,-.4) {$\frac{1}{2}$}; \node at (2.67,-.4) {$\frac{1}{3}$}; \node at (5.33,-.4) {$\frac{2}{3}$}; \node at (2,-.4) {$\frac{1}{4}$}; \node at (6,-.4) {$\frac{3}{4}$}; \node at (1.6,-.4) {$\frac{1}{5}$}; \node at (3.2,-.4) {$\frac{2}{5}$}; \node at (4.8,-.4) {$\frac{3}{5}$}; \node at (6.4,-.4) {$\frac{4}{5}$}; \node at (1.33,-.4) {$\frac{1}{6}$}; \node at (6.67,-.4) {$\frac{5}{6}$}; \node at (1.14,-.4) {$\frac{1}{7}$}; \node at (2.285,-.4) {$\frac{2}{7}$}; \node at (3.43,-.4) {$\frac{3}{7}$}; \node at (4.57,-.4) {$\frac{4}{7}$}; \node at (5.715,-.4) {$\frac{5}{7}$}; \node at (6.86,-.4) {$\frac{6}{7}$}; \node at (0,4) {$C_{0/1}$}; \node at (4,0.9) {$C_{1/2}$}; \node at (8,4) {$C_{1/1}$}; \node at (2.67,0.4) {\scalebox{0.7}{$C_{1/3}$}}; \node at (5.33,0.4) {\scalebox{0.7}{$C_{2/3}$}}; \node at (2,1) {$\color{Maroon}\Gamma_2$}; \draw[ultra thick, Maroon] (0,0) arc (-90:-36.9:4); \draw[ultra thick, Maroon] (3.2,1.6) arc (143.1:-90:1); \draw[ultra thick, Maroon, ->] (4,2) -- (4.01,2); \end{scope} \end{tikzpicture} \caption{The Ford circles $C_{a/c}$ in the $\tau$ upper half-plane. The original contour of integration $\Gamma$ can be deformed to the second Rademacher contour $\Gamma_2$.} \label{fig:Ford circles} \end{figure} One deforms the contour in a series of steps as follows. Let us first recall the \emph{Farey sequence} $F_n$, which consists of all fractions $0<\frac{a}{c} \le 1$ such that $a$ and $c$ are coprime integers, $(a,c)=1$, and $c \le n$. By convention, we do not include 0, even though it is often included in the literature. The first few terms are \begin{subequations} \begin{align} F_1 &= ( \tfrac{1}{1} )\ ,\\ F_2 &= ( \tfrac{1}{2}, \tfrac{1}{1} )\ ,\\ F_3 &= ( \tfrac{1}{3}, \tfrac{1}{2}, \tfrac{2}{3}, \tfrac{1}{1} )\ ,\\ F_4 &= ( \tfrac{1}{4}, \tfrac{1}{3}, \tfrac{1}{2}, \tfrac{2}{3}, \tfrac{3}{4}, \tfrac{1}{1} )\ ,\\ F_5 &= ( \tfrac{1}{5}, \tfrac{1}{4}, \tfrac{1}{3}, \tfrac{2}{5}, \tfrac{1}{2},\tfrac{3}{5},\tfrac{2}{3}, \tfrac{3}{4}, \tfrac{4}{5}, \tfrac{1}{1} )\ .\label{eq:Farey5} \end{align} \end{subequations} It is a non-trivial fact that one can draw \emph{Ford circles} $C_{a/c}$ of radius $\frac{1}{2c^2}$ centered at the points $\tau = \frac{a}{c}+\frac{i}{2c^2}$ such that none of the circles overlap, and two of them touch precisely if they are neighbors in the Farey sequence $F_n$ for some $n$. We now construct a series of Rademacher contours $\Gamma_n$ as follows.
For $n=2$, we start with the contour that follows the arc of the Ford circle $C_{0/1}$ until the common point of $C_{0/1}$ and $C_{1/2}$ is reached, where we start following the arc of the Ford circle $C_{1/2}$ as depicted in Figure~\ref{fig:Ford circles}. The resulting contour is called $\Gamma_2$. It is equivalent to the original contour $\Gamma$ we described in Figure~\ref{fig:tau contours}. The contour $\Gamma_3$ is obtained from $\Gamma_2$ by the following modification: we follow the arc of $C_{0/1}$ only until it touches the circle $C_{1/3}$, which we then follow until we touch the circle $C_{1/2}$, which we then follow until $\tau=\frac{1}{2}$. We iteratively modify the contour further in the same way, so that the contour $\Gamma_n$ describes an arc of $C_{0/1}$ until we meet $C_{1/n}$. We then follow the arcs of all the Ford circles in the Farey sequence $F_n$ until we reach the endpoint at $\tau = \frac{1}{2}$. Hence all Ford circles $C_{a/c}$ with $\frac{a}{c} \le \frac{1}{2}$ and $c \le n$ appear in the contour $\Gamma_n$. For example, the contour $\Gamma_5$, following the circles listed in \eqref{eq:Farey5}, is depicted in Figure~\ref{fig:Rademacher contour 01}. The obvious idea is now to take the limiting contour $\Gamma_\infty$ that encircles every Ford circle $C_{a/c}$ with $0<\frac{a}{c} \le \frac{1}{2}$ precisely once and hence \begin{equation} \int_{\Gamma_\infty} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}} = \sum_{c=1}^\infty \sum_{\begin{subarray}{c} 1\le a\le \frac{c}{2} \\ (a,c)=1 \end{subarray} }\int_{C_{a/c}} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}}\ . \end{equation} The contour $\Gamma_\infty$ was already illustrated in Figure~\ref{fig:Rademacher}. Of course, it is not obvious from our discussion that this procedure converges, but we will find that it does in all the cases of interest. This can be proved rigorously by estimating the contributions to the integral from the remaining small arcs on the contour $\Gamma_n$. We do this in Appendix~\ref{app:convergence}. In fact, this procedure always converges when the modular weight of the integrand is negative.
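Both the Farey sequences and the tangency pattern of the Ford circles are easy to generate and check by computer. The following minimal Python sketch reproduces \eqref{eq:Farey5} and verifies that neighboring fractions correspond to tangent circles (the order $n=5$ is an arbitrary choice):

\begin{verbatim}
from math import gcd

def farey(n):
    # Farey fractions 0 < a/c <= 1 with gcd(a, c) = 1 and c <= n
    return sorted(((a, c) for c in range(1, n + 1)
                   for a in range(1, c + 1) if gcd(a, c) == 1),
                  key=lambda f: f[0]/f[1])

def tangent(f1, f2):
    # Ford circles C_{a/c}, C_{b/d} are tangent iff |b*c - a*d| = 1
    (a, c), (b, d) = f1, f2
    return abs(b*c - a*d) == 1

F5 = farey(5)
print([f"{a}/{c}" for a, c in F5])
# ['1/5', '1/4', '1/3', '2/5', '1/2', '3/5', '2/3', '3/4', '4/5', '1/1']
assert all(tangent(F5[i], F5[i + 1]) for i in range(len(F5) - 1))
\end{verbatim}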
\begin{figure} \centering \begin{tikzpicture} \begin{scope} \node at (10.2,4.9) {$\tau$}; \draw (10.4,4.7) -- (10.0,4.7) -- (10.0,5.1); \tikzmath{ int \p, \q; for \q in {2,...,20}{ for \p in {1,...,\q/2}{ if gcd(\p,\q) == 1 then { \f = 16*\p/\q; \r = 8/(\q*\q); { \draw[very thick, light-gray] (\f,\r) circle(\r); }; }; }; }; } \draw[ultra thick, Maroon] (0,0) arc (-90:-67.4:8); \draw[ultra thick, Maroon] (3.08,0.62) arc (112.6:12.7:0.32); \draw[ultra thick, Maroon] (3.52,0.39) arc (192.7:16.3:.5); \draw[ultra thick, Maroon] (4.48,0.64) arc (196.3:-28.1:0.888); \draw[ultra thick, Maroon] (6.12,0.47) arc (151.9:46.4:.32); \draw[ultra thick, Maroon] (6.62,0.55) arc (226.4:-90:2); \draw[ultra thick, Maroon, ->] (3.52,0.39) arc (192.7:100:.5); \draw[ultra thick, Maroon, ->] (4.48,0.64) arc (196.3:100:0.888); \draw[ultra thick, Maroon, ->] (6.62,0.55) arc (226.4:100:2); \node at (0,-.4) {$0$}; \node at (8,-.4) {$\frac{1}{2}$}; \node at (5.33,-.4) {$\frac{1}{3}$}; \node at (4,-.4) {$\frac{1}{4}$}; \node at (3.2,-.4) {$\frac{1}{5}$}; \node at (6.4,-.4) {$\frac{2}{5}$}; \node at (2.67,-.4) {$\frac{1}{6}$}; \node at (2.28,-.4) {$\frac{1}{7}$}; \node at (4.57,-.4) {$\frac{2}{7}$}; \node at (6.86,-.4) {$\frac{3}{7}$}; \node at (8,2) {$C_{1/2}$}; \node at (5.33,0.8) {$C_{1/3}$}; \node at (4,0.45) {\scalebox{0.8}{$C_{1/4}$}}; \node at (3.2,0.3) {\scalebox{0.6}{$C_{1/5}$}}; \node at (1.5,0.6) {$\color{Maroon}\Gamma_5$}; \end{scope} \end{tikzpicture} \caption{The fifth Rademacher contour $\Gamma_5$ obtained by following the Ford circles in the fifth Farey sequence \eqref{eq:Farey5} according to the rules given in the text.} \label{fig:Rademacher contour 01} \end{figure} So far, it may seem like this procedure has not gained us much. However, due to modular invariance, it is much simpler to compute the integral over the Ford circle than over the original contour. Indeed, consider the following modular transformation \begin{equation} \gamma(\tau)=\frac{a\tau+b}{c\tau+d}\ , \label{eq:modular transformation Ca/c} \end{equation} where $b$ and $d$ are chosen such that $ad-bc=1$. Then \begin{equation} \eta\left(\frac{a\tau+b}{c\tau+d}\right)^{24}=(c \tau+d)^{12}\, \eta(\tau)^{24}\ . \end{equation} We can use this modular transformation to change variables in the integral over $C_{a/c}$ and obtain \begin{equation} \int_{C_{a/c}} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}}=-\int_{\longrightarrow} \frac{\mathrm{d}\tau}{(c\tau+d)^{14} \, \eta(\tau)^{24}}\ . \end{equation} The additional two powers of $c\tau+d$ come from the Jacobian. Due to our judicious choice of the modular transformation, the new contour runs now horizontally, i.e., we mapped the circle touching the real axis at $\tau=\frac{a}{c}$ to the circle at $\tau=i\infty$. After the modular transformation, the contour starts at $i-\infty$ and runs to $i+\infty$. This is opposite to the natural orientation of the circle at $i\infty$ and leads to the additional minus sign. The new integrand is holomorphic in the upper half-plane, except for a singularity at $\tau=i\infty$. We may hence deform the contour from $i+\RR$ to $iL+\RR$ for arbitrarily large $L$. We will frequently denote such a horizontal contour by $\longrightarrow$. For large imaginary parts of $\tau$, it is then advantageous to use the Fourier expansion of $\eta(\tau)^{-24}$, which gives \begin{equation} \int_{C_{a/c}} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}}= -\int_{\longrightarrow} \frac{\mathrm{d}\tau}{(c\tau+d)^{14}} \left(\mathrm{e}^{-2\pi i \tau}+24+\mathcal{O}(\mathrm{e}^{2\pi i \tau})\right)\ . 
\end{equation} For large $L$, all the contributions coming from the $\mathcal{O}(\mathrm{e}^{2\pi i \tau})$ terms are exponentially suppressed and drop out. Similarly, the constant term 24 does not contribute since it is polynomially suppressed thanks to the prefactor $(c \tau+d)^{-14}$. We thus conclude that we have the exact equality \begin{equation} \int_{C_{a/c}} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}}= -\int_{\longrightarrow} \frac{\mathrm{d}\tau}{(c\tau+d)^{14}} \mathrm{e}^{-2\pi i \tau}\ . \end{equation} The fact that we can reduce integrals of modular objects along the Ford circles back to integrals over elementary functions in this way is at the heart of the power of the Rademacher method. After we have argued for the vanishing of the higher Fourier terms in the expansion, we can deform the contour back to finite $L$. In fact, we would like to deform it to large \emph{negative} values of $L$, since the integrand is exponentially suppressed there. The only obstruction to this procedure is the 14${}^\mathrm{th}$ order pole at $\tau=-\frac{d}{c}$ and hence its residue is the only remaining contribution. Thus we get \begin{equation} \int_{C_{a/c}} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}}=2\pi i \Res_{\tau=-\frac{d}{c}} \frac{\mathrm{e}^{-2\pi i \tau}}{(c\tau+d)^{14}}=\frac{(2\pi)^{14} \mathrm{e}^{\frac{2\pi i d}{c}}}{13! \, c^{14}}\ , \end{equation} where we recall that $d$ was determined by $a$ through $ad \equiv 1 \bmod c$. Let us write $d=a^*$ for the inverse $\bmod\; c$. Thus we find for the bosonic open string partition function \begin{equation} Z_\text{open}=-i \int_{\Gamma_\infty} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}}=\frac{-i (2\pi)^{14}}{13!} \sum_{c=1}^\infty \frac{1}{c^{14}}\sum_{\begin{subarray}{c} 1 \le a\le \frac{c}{2} \\ (a,c)=1 \end{subarray}}\mathrm{e}^{\frac{2\pi i a^*}{c}}\ . \end{equation} This is a very fast-converging infinite-sum representation of the partition function, and it is trivial to evaluate numerically to very high accuracy. \subsection{Results for the four-point amplitudes}\label{subsec:results} Using the same basic idea, one can also evaluate the integrals \begin{equation}\label{eq:Ap-Ford-circle} \int_{C_{a/c}}\hspace{-.3cm} \mathrm{d}\tau\, \mathrm{d}z_1 \, \mathrm{d}z_2 \, \mathrm{d} z_3\ \left( \frac{\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{-s} \left( \frac{\vartheta_1(z_{32},\tau)\vartheta_1(z_{41},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{-t} \end{equation} and similarly for the non-planar amplitude. Detailed derivations will be given in Section~\ref{sec:planar amplitude derivation} and Section~\ref{sec:non-planar}. Here, we simply present the results of this computation and highlight some of its features. \subsubsection{\label{subsec:planar-sle1}Planar amplitude in the \texorpdfstring{$s$}{s}-channel with \texorpdfstring{$s\le 1$}{s<1}} Let us first explain the simplest case of interest: the planar amplitude $A^\text{p}$ in the $s$-channel for $0 < s \leq 1$ and $t<0$. The amplitude, and hence also our formula, behaves discontinuously as we cross the normal thresholds of the string, which are located at \begin{equation} s=(\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \end{equation} for $m_\mathrm{D},\, m_\U \in \ZZ_{\ge 0}$ corresponding to mass levels of string states in units where $\alpha'=1$. Each threshold corresponds to a new two-particle exchange becoming kinematically allowed.
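For orientation, the threshold structure is straightforward to tabulate. The following minimal Python sketch (an illustration of ours, not part of the derivation) lists all thresholds below a given energy together with the mass levels $(m_\mathrm{D},m_\U)$ responsible for them:
\begin{verbatim}
from math import sqrt

def normal_thresholds(s_max, m_max=10):
    """Thresholds s = (sqrt(mD) + sqrt(mU))^2 <= s_max, with labels (mD, mU)."""
    ts = [((sqrt(mD) + sqrt(mU))**2, (mD, mU))
          for mD in range(m_max) for mU in range(mD, m_max)]
    # round to tame floating-point noise in, e.g., sqrt(2)**2
    return sorted((round(s, 10), lbl) for s, lbl in ts if s <= s_max + 1e-12)

print(normal_thresholds(4))
# [(0.0, (0, 0)), (1.0, (0, 1)), (2.0, (0, 2)), (3.0, (0, 3)),
#  (4.0, (0, 4)), (4.0, (1, 1))]
\end{verbatim}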
After the massless threshold, the next one appears at $s=1$ corresponding to $(m_\mathrm{D}, m_\U) = (0,1)$ or $(1,0)$. Hence the formula for the amplitude is going to be particularly simple when $s \leq 1$. Evaluating the integrals over the circle $C_{a/c}$ always leads to further sums which we label by integers $n_\L,\, n_\mathrm{D},\, n_\mathrm{R},\, n_\U$ satisfying the constraint $n_\L+n_\mathrm{D}+n_\mathrm{R}+n_\U=c-1$. These integers are associated with particular winding numbers which we explain further below. We hence write \begin{equation} A^{\text{p}} = \Delta A^{\text{p}} + \sum_{c=1}^\infty \sum_{\begin{subarray}{c} 1 \le a \le \frac{c}{2} \\ (a,c)=1 \end{subarray}} \sum_{\begin{subarray}{c} n_\L,n_\mathrm{D},n_\mathrm{R},n_\U \ge 0 \\ n_\L+n_\mathrm{D}+n_\mathrm{R}+n_\U=c-1 \end{subarray}}A^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}_{a/c}\ , \label{eq:planar amplitude decomposition} \end{equation} where $\Delta A^\text{p}$ is given by \eqref{eq:Delta A planar}. The individual contributions $A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}$ are given by \begin{multline} A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}=-\frac{16\pi i \, \mathrm{e}^{-\pi i\sum_{a=\L,\mathrm{R},\, b=\mathrm{D},\U} \big[s \sum_{m=n_a+1}^{n_a+n_b}+t \sum_{m=n_b+1}^{n_a+n_b}\big] \st{\frac{md}{c}}}}{15c^5 \sqrt{stu}} \int_{P > 0} \hspace{-0.4cm} \d t_\L \, \d t_\mathrm{R} \\ \times P(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}\left( \frac{\Gamma(-t_\L)\Gamma(s+t_\L)}{\Gamma(s)}\begin{cases}\mathrm{e}^{2\pi i t_\L \st{\frac{d n_\L}{c}}} \;\;\mathrm{if}\ n_\L>0 \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} \;\; \mathrm{if}\ n_\L=0 \end{cases} \right) \big(\L \leftrightarrow \mathrm{R} \big)\ . \label{eq:planar four-point function s-channel s<1 Rademacher} \end{multline} Let us dissect this formula piece by piece. The following number-theoretic (discontinuous) \emph{sawtooth} function makes an appearance: \begin{equation} \st{x}=\begin{cases} x-\lfloor x \rfloor -\frac{1}{2} \quad&\mathrm{if}\quad x \not \in \ZZ\ , \\ 0 \quad&\mathrm{if}\quad x \in \ZZ\ . \end{cases} \label{eq:st definition} \end{equation} As in the open-string partition function example discussed in Section~\ref{subsec:Rademacher contour}, $d$ denotes the inverse of $a$ mod $c$, i.e., $ad \equiv 1 \bmod c$. Here, $P(s,t,t_\L,t_\mathrm{R})$ is the following polynomial in $t_\L$ and $t_\mathrm{R}$, also known as the Baikov polynomial: \begin{equation} P(s,t,t_\L,t_\mathrm{R})=\frac{s^2 t^2 - 2 s^2 t t_\L + s^2 t_\L^2 - 2 s^2 t t_\mathrm{R} - 2 s^2 t_\L t_\mathrm{R} - 4 s t t_\L t_\mathrm{R} + s^2 t_\mathrm{R}^2}{4 s t (s + t)}\ . \end{equation} It measures the volume of the two-particle phase space: it is equal to $\ell_\perp^2$, where $\ell_\perp$ is the transverse part of the loop momentum that is orthogonal to all external momenta. The integration bounds are therefore imposed only by requiring $P>0$. Note that the two factors in the second line of this ``generalized Baikov representation'' are simply the tree-level Veneziano amplitudes decorated with extra phases. The left one depends only on ``left'' variables such as $t_\L$ and $n_\L$, while the other one depends only on the ``right'' variables. The reader may rightfully ask why the representation \eqref{eq:planar four-point function s-channel s<1 Rademacher} is better than the original integrals from \eqref{eq:integrands four point functions}, given that it still involves infinite sums over $a$, $c$, and $n_a$'s, as well as two integrals over $t_\L$ and $t_\mathrm{R}$.
In the regime $s<1$, the interest of \eqref{eq:planar four-point function s-channel s<1 Rademacher} is indeed of a rather theoretical nature. While we believe that the representation converges, it does so very slowly, and the convergence is not absolute. Indeed, it is precisely on the cusp of convergence. Let us understand why. The factor $A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}$ depends on $a$, $n_\L,\,n_\mathrm{D},\,n_\mathrm{R}$ and $n_\U$ only via phases (disregarding for the moment the case distinction in \eqref{eq:planar four-point function s-channel s<1 Rademacher}). Thus naively analyzing convergence of the whole sum \eqref{eq:planar amplitude decomposition} would lead one to the rough estimate \begin{align} |A^{\text{p}}-\Delta A^{\text{p}}|&\le F(s,t) \sum_{c=1}^\infty \sum_{\begin{subarray}{c} 1 \le a \le \frac{c}{2} \\ (a,c)=1 \end{subarray}} \sum_{\begin{subarray}{c} n_\L,n_\mathrm{D},n_\mathrm{R},n_\U \ge 0 \\ n_\L+n_\mathrm{D}+n_\mathrm{R}+n_\U=c-1 \end{subarray}} \frac{1}{c^5}\ . \end{align} The latter sum is logarithmically divergent because there are $\mathcal{O}(c^3)$ choices for $n_\L,\,n_\mathrm{D},\,n_\mathrm{R}$ and $n_\U$ and $\mathcal{O}(c)$ choices for $a$, and thus we end up with a harmonic series. However, at least heuristically, we expect that the sum converges, albeit not absolutely. The reasoning is that for very large values of $c$, the phases in \eqref{eq:planar four-point function s-channel s<1 Rademacher} look completely random and thus we expect that even though there are $\mathcal{O}(c^4)$ choices for $(a,n_\L,n_\mathrm{D},n_\mathrm{R},n_\U)$, each sum only leads to an ${\mathcal O}(\sqrt{c})$ enhancement. This would lead to a convergent sum. Since the phases involve $s$ and $t$, convergence becomes worse for small values of $s$ and $t$ and completely breaks down if we try to approach $s \to 0$. This is the manifestation of the massless branch cut in our formula. However, while convergence of \eqref{eq:planar four-point function s-channel s<1 Rademacher} for $s\le 1$ is slow, it becomes much faster for the generalization of the formula for $s \ge 1$ that we describe below. In this regime, it is actually very practical and allows one to evaluate the amplitude directly. Let us now explain the geometrical interpretation of $n_\L, \,n_\mathrm{D}, \,n_\mathrm{R}$ and $n_\U$ as winding numbers. For this we should recall that we analytically continued $\tau$ inside the complexified moduli space, which we can identify with the moduli space of a torus (without invariance under modular transformations). For $\tau\sim\frac{a}{c}$, the relevant torus becomes very thin and the $z_i$'s are all on a line that winds around the long cycle of the torus $c$ times. For the case $\frac{a}{c}=\frac{0}{1}$ and $\frac{a}{c}=\frac{1}{2}$, this is just the fact that the boundary of the annulus goes once around the annulus, while the boundary of the M\"obius strip winds twice around, see Figure~\ref{fig:open string diagrams}. Geometrically, we can hence think of a loop that winds $c$ times around itself with the four vertex operators on it. Every term $A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}$ corresponds to a consistent way of cutting this diagram in the $s$-channel. The integers $n_\L$, $n_\mathrm{D}$, $n_\mathrm{R}$ and $n_\U$ correspond to the number of windings that separate the four vertex operators. We displayed the four possibilities for $c=2$ in Figure~\ref{fig:windings c=2}.
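These cut configurations are simple enough to enumerate explicitly; the following minimal Python sketch (purely illustrative, with our own naming) lists them and checks the counting quoted below:
\begin{verbatim}
from itertools import product

def s_channel_cuts(c):
    """All winding assignments (nL, nD, nR, nU) with nL+nD+nR+nU = c-1."""
    return [n for n in product(range(c), repeat=4) if sum(n) == c - 1]

print(s_channel_cuts(2))   # the four cuts shown for c = 2
for c in range(1, 10):     # count matches c(c+1)(c+2)/6
    assert len(s_channel_cuts(c)) == c * (c + 1) * (c + 2) // 6
\end{verbatim}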
In general, there are $\frac{1}{6}c(c+1)(c+2)$ ways to distribute the four vertex operators like this. This also explains why the case $n_\L=0$ or $n_\mathrm{R}=0$ plays a special role in the formula \eqref{eq:planar four-point function s-channel s<1 Rademacher}. In this case, two vertex operators can collide, which manifests itself as poles of the amplitude at $s \in \ZZ_{>0}$. The amplitude has in fact double poles at every positive integer $s$ corresponding to the mass renormalization of massive states. We can read off the mass shifts from the prefactors of these double poles. The only diagrams that contribute to these prefactors are those with $n_\L=n_\mathrm{R}=0$. Thus the mass shifts are much simpler physical quantities than the full amplitude and we also analyze them extensively in this paper. Our results for them are discussed in Section~\ref{subsec:results mass shifts}. \begin{figure} \centering \begin{tikzpicture} \begin{scope}[scale=.85] \draw[domain=90+360:90+720, smooth, variable=\x, very thick, gray, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw[domain=90:90+360, smooth, variable=\x, very thick, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw (170:{1.5+.15*cos(85-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (170:1.9) {1}; \draw (190:{1.5-.15*cos(95-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (190:1.1) {2}; \draw (-10:{1.5+.15*cos(-5-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-10:1.9) {3}; \draw (10:{1.5+.15*cos(5-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (10:1.9) {4}; \draw[dashed, very thick, Maroon] (0,-1.8) to (0,1.8); \node at (0,.75) {$n_\L=1$}; \node at (0,.25) {$n_\mathrm{D}=0$}; \node at (0,-.25) {$n_\mathrm{R}=0$}; \node at (0,-.75) {$n_\U=0$}; \end{scope} \begin{scope}[shift={(3.8,0)}, scale=.85] \draw[domain=90+360:90+720, smooth, variable=\x, very thick, gray, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw[domain=90:90+360, smooth, variable=\x, very thick, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw (170:{1.5+.15*cos(85-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (170:1.9) {1}; \draw (190:{1.5+.15*cos(95-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (190:1.9) {2}; \draw (-10:{1.5+.15*cos(-5-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-10:1.9) {3}; \draw (10:{1.5+.15*cos(5-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (10:1.9) {4}; \draw[dashed, very thick, Maroon] (0,-1.8) to (0,1.8); \node at (0,.75) {$n_\L=0$}; \node at (0,.25) {$n_\mathrm{D}=1$}; \node at (0,-.25) {$n_\mathrm{R}=0$}; \node at (0,-.75) {$n_\U=0$}; \end{scope} \begin{scope}[shift={(7.6,0)}, scale=.85] \draw[domain=90+360:90+720, smooth, variable=\x, very thick, gray, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw[domain=90:90+360, smooth, variable=\x, very thick, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw (170:{1.5-.15*cos(190-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (170:1.9) {1}; \draw (190:{1.5-.15*cos(170-45)}) node[cross out, draw=black, ultra thick, minimum
size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (190:1.9) {2}; \draw (-10:{1.5-.15*cos(-10-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-10:1.1) {3}; \draw (10:{1.5+.15*cos(10-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (10:1.9) {4}; \draw[dashed, very thick, Maroon] (0,-1.8) to (0,1.8); \node at (0,.75) {$n_\L=0$}; \node at (0,.25) {$n_\mathrm{D}=0$}; \node at (0,-.25) {$n_\mathrm{R}=1$}; \node at (0,-.75) {$n_\U=0$}; \end{scope} \begin{scope}[shift={(11.4,0)}, scale=.85] \draw[domain=90+360:90+720, smooth, variable=\x, very thick, gray, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw[domain=90:90+360, smooth, variable=\x, very thick, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw (170:{1.5-.15*cos(190-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (170:1.9) {1}; \draw (190:{1.5-.15*cos(170-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (190:1.9) {2}; \draw (-10:{1.5-.15*cos(-10-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-10:1.1) {3}; \draw (10:{1.5-.15*cos(10-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (10:1.1) {4}; \draw[dashed, very thick, Maroon] (0,-1.8) to (0,1.8); \node at (0,.75) {$n_\L=0$}; \node at (0,.25) {$n_\mathrm{D}=0$}; \node at (0,-.25) {$n_\mathrm{R}=0$}; \node at (0,-.75) {$n_\U=1$}; \end{scope} \end{tikzpicture} \caption{Four possibilities for windings of vertex operators for $c=2$ that correspond to all the generalized $s$-channel cuts. For example, in the first case $(n_\L, n_\mathrm{D}, n_\mathrm{R}, n_\U)=(1,0,0,0)$, because going from the puncture $1$ to $2$ requires one winding and no windings are necessary to travel between the other pairs of punctures.} \label{fig:windings c=2} \end{figure} \subsubsection{Higher values of \texorpdfstring{$s$}{s}} Equation \eqref{eq:planar four-point function s-channel s<1 Rademacher} can be systematically extended to higher values of $s$. In this case, we get contributions from each mass-level that can be exchanged in the scattering process labelled by the integers $m_\mathrm{D}$ and $m_\U$ mentioned above. The generalization of \eqref{eq:planar four-point function s-channel s<1 Rademacher} now reads \begin{align} A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}&=-\frac{16\pi i \, \mathrm{e}^{-\pi i\sum_{a=\L,\mathrm{R},\, b=\mathrm{D},\U} \big[s \sum_{m=n_a+1}^{n_a+n_b}+t \sum_{m=n_b+1}^{n_a+n_b}\big] \st{\frac{md}{c}}}}{15c^5 \sqrt{stu}} \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \nonumber\\ &\times \mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n_\mathrm{D}+m_\U n_\U)}\int_{P_{m_\mathrm{D},m_\U} > 0} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}\, Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R}) \nonumber\\ &\times \left( \frac{\Gamma(-t_\L)\Gamma(s+t_\L-m_\mathrm{D}-m_\U)}{\Gamma(s)}\begin{cases}\mathrm{e}^{2\pi i t_\L \st{\frac{d n_\L}{c}}} &\text{if}\;\; n_\L>0 \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\text{if}\;\; n_\L=0 \end{cases} \right) \big(\L \leftrightarrow \mathrm{R}\big)\, . 
\label{eq:planar four-point function s-channel} \end{align} The polynomials $P_{m_\mathrm{D},m_\U}$ still have the purely kinematical interpretation as $\ell_\perp^2$. They are explicitly given by \begin{align} P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})=-\frac{1}{4stu}\det \begin{bmatrix} 0 & s & u & \!\! m_\U-s-t_\L \\ s & 0 & t & t_\L-m_\mathrm{D} \\ u & t & 0 & m_\mathrm{D}-t_\mathrm{R} \\ m_\U-s-t_\L & t_\L-m_\mathrm{D} & m_\mathrm{D}-t_\mathrm{R} & 2m_\mathrm{D} \end{bmatrix}\ , \end{align} where the determinant arises kinematically as a certain Gram determinant. Moreover, the factors $Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})$ are polynomials in the four arguments. Physically, they appear from the sum over polarizations and degeneracies of the internal states. They are defined as follows. Take \begin{align} Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t) &=[q_\L^{m_\L} q_\mathrm{D}^{m_\mathrm{D}}q_\mathrm{R}^{m_\mathrm{R}}q_\U^{m_\U}] \prod_{\ell=1}^\infty \prod_{a=\L,\mathrm{R}}(1-q^\ell q_a^{-1})^{-s}(1-q^\ell q_a)^{-s}\nonumber\\ &\qquad\times\prod_{a=\mathrm{D},\U}(1-q^\ell q_a^{-1})^{-t}(1-q^{\ell-1} q_a)^{-t}\nonumber\\ &\qquad\times \prod_{a=\L,\mathrm{R}}(1-q^{\ell}q_a^{-1} q_\mathrm{D}^{-1})^{-u} (1-q^{\ell-1}q_a q_\mathrm{D})^{-u}\ , \label{eq:QmL,mD,mR,mU definition} \end{align} where $q=q_\L q_\mathrm{D} q_\mathrm{R} q_\U$ and $[q_\L^{m_\L} q_\mathrm{D}^{m_\mathrm{D}}q_\mathrm{R}^{m_\mathrm{R}}q_\U^{m_\U}]$ denotes the coefficient of the relevant term in the series expansion around each $q_a=0$. We then have \begin{multline} Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R}) = \!\!\!\sum_{m_\L,m_\mathrm{R}=0}^{m_\mathrm{D}+m_\U} Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t) (-t_\L)_{m_\L}(-s-t_\L+m_\L+1)_{m_\mathrm{D}+m_\U-m_\L} \\ \times (-t_\mathrm{R})_{m_\mathrm{R}}(-s-t_\mathrm{R}+m_\mathrm{R}+1)_{m_\mathrm{D}+m_\U-m_\mathrm{R}}\ , \label{eq:definition Qm2,m4} \end{multline} where $(a)_n=a(a+1) \cdots (a+n-1)$ is the rising Pochhammer symbol. In practice, we computed all the polynomials $Q_{m_\mathrm{D},m_\U}$ with $(\sqrt{m_\mathrm{D}} + \sqrt{m_\U})^2 \leq s \leq 39$. They rapidly grow in the number of terms. In the ancillary file \texttt{Q.txt} we included all the ones needed to reproduce our results up to $s \leq 16$. In the language of the Rademacher expansion, the sum over $m_\mathrm{D}$ and $m_\U$ corresponds to the sum over so-called polar terms in the modular integrand. When crossing one of the production thresholds, a new polar term arises and contributes to the integral. \subsubsection{Imaginary part} As explained in Section~\ref{subsec:integration contour} and \cite{Eberhardt:2022zay}, the imaginary part is much simpler to compute. To be precise, we have \begin{equation} \Im A^\text{p} = -\frac{1}{2i} \bigg( A_{0/1}^{0,0,0,0}\; -\hspace{-0.6cm}\sum_{\begin{subarray}{c} n_\L,n_\mathrm{D},n_\mathrm{R},n_\U \ge 0 \\ n_\L+n_\mathrm{D}+n_\mathrm{R}+n_\U=1 \end{subarray}} \hspace{-0.6cm} A_{1/2}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U} \bigg)\ . \label{eq:planar amplitude imaginary part} \end{equation} The first term corresponds to the $s$-channel cut of the annulus computed as a circle anchored at $\tau = \frac{0}{1}$ (in this edge case, we set $a^\ast=0$). The other four terms correspond to the four possible ways that we can cut the M\"obius strip in the $s$-channel, computed using a Ford circle at $\tau = \frac{1}{2}$.
The overall minus sign comes about because of the orientation of the contour, and the factor $\frac{1}{2i}$ is the normalization extracting the imaginary part, see \cite{Eberhardt:2022zay} for details. It is a very non-trivial identity that the imaginary part of eq.~\eqref{eq:planar amplitude decomposition} indeed recovers eq.~\eqref{eq:planar amplitude imaginary part}. While our derivation shows that this indeed holds, we do not have a direct proof of this fact (see however below for some special cases). \subsubsection{Other channels and the non-planar case} We also derive the corresponding formulas for the $u$-channel of the planar amplitude and for the $s$- and $u$-channel of the non-planar amplitude. The reader can find the results in these three cases in eq.~\eqref{eq:planar four point function u-channel Rademacher}, \eqref{eq:non-planar four point function s-channel Rademacher} and \eqref{eq:non-planar four point function u-channel Rademacher}. The formulas are essentially all identical, except that the allowed range of $(n_\L,n_\mathrm{D},n_\mathrm{R},n_\U)$ is different and the phases that appear are slightly different. The $u$-channel formulas also do not exhibit poles because the corresponding vertex operators are not allowed to collide. We also remark again that the non-planar formula does not need a correction from the cusp, i.e.\ there is no $\Delta A^{\text{n-p}}$. For the non-planar amplitude, the range of fractions runs over $0<\frac{a}{c} \le 2$, since the endpoint of the integration contour is different, see Figure~\ref{fig:tau contours}. We should also remember that the non-planar amplitude comes with an additional prefactor, i.e.\ the full amplitude takes the form $(1-\mathrm{e}^{-\pi i s}) A^{\text{n-p}}$, see eq.~\eqref{eq:non-planar pi i s prefactor}. Thus naively, the amplitude has triple poles at every even integer $s$. One can however easily check that they cancel out of the final expression. \subsection{Results for the mass-shifts} \label{subsec:results mass shifts} As already mentioned, the above formulas allow us to compute mass-shifts in a convenient way. Recall that they originate from the worldsheet degenerations illustrated in Figure~\ref{fig:double pole degenerations}. Mass shifts are given by the coefficient of the double pole $\DRes_{s=s_\ast}$ in \eqref{eq:planar four-point function s-channel} at every positive integer $s_\ast$. Only the terms with $n_\L=n_\mathrm{R}=0$ contribute and for them we have \begin{align} \DRes_{s = s_\ast} A_{a/c}^{0,n_\mathrm{D},0,n_\U}&=-\frac{16\pi i\, \mathrm{e}^{\frac{2\pi i s_\ast d}{c} n_\mathrm{D} n_\U}}{15c^5 \sqrt{-s_\ast t(s_\ast+t)}\, \Gamma(s_\ast)^2}\sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s_\ast \end{subarray}} \mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n_\mathrm{D}+m_\U n_\U)} \nonumber\\ &\qquad\times \int_{P_{m_\mathrm{D},m_\U} > 0} \hspace{-1cm} \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s_\ast,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}\, Q_{m_\mathrm{D},m_\U}(s_\ast,t,t_\L,t_\mathrm{R}) \nonumber\\ &\qquad\qquad\times (t_\L+1)_{s_\ast-m_\mathrm{D}-m_\U-1}(t_\mathrm{R}+1)_{s_\ast-m_\mathrm{D}-m_\U-1}\ . \label{eq:mass-shifts} \end{align} For every mass-level, the integral over $t_\L$ and $t_\mathrm{R}$ can be explicitly evaluated and gives a polynomial of degree $s_\ast{-}1$ in $t$.
In particular, the simplest mass-shift, at $s=1$, takes the simple form \begin{equation} \DRes_{s=1} A^\text{p}=\frac{i}{(2\pi)^2}-\frac{\pi^2 i}{210}\sum_{c=1}^\infty \frac{1}{c^5}\sum_{\begin{subarray}{c} 1 \le a \le \frac{c}{2} \\ (a,c)=1 \end{subarray}}\sum_{n=0}^{c-1} \mathrm{e}^{-\frac{2\pi i n(n+1)a^*}{c}}\ , \label{eq:mass-shift s=1} \end{equation} where $d=a^*$ again denotes the inverse mod $c$. Such sums are classical objects in number theory. In particular, the sum over $n$ is known as a \emph{Gauss sum} and can be explicitly evaluated in terms of the Jacobi symbol (a generalization of the Legendre symbol). \begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw[very thick, fill=black!10!white] (2.2,0) circle (.7); \draw[very thick, fill=black!10!white] (-2.2,0) circle (.7); \draw (-2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.4,1) {1}; \draw (-2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.4,-1) {2}; \draw (2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,1) {4}; \draw (2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,-1) {3}; \end{scope} \begin{scope}[shift={(8,0)}] \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \fill[white] (-.6,.5) rectangle (.6,1.6); \fill[black!10!white] (-.62,.53) to (0,1.2) to[bend right=30] (-.62,1.375); \fill[black!10!white] (.62,.53) to (0,1.2) to[bend left=30] (.62,1.375); \draw[very thick, out=54.3, in=154.3, looseness=.8] (-.65,.47) to (.65,1.35); \fill[white] (0,1.2) circle (.1); \draw[very thick, out=125.7, in=25.7, looseness=.8] (.65,.47) to (-.65,1.35); \draw[very thick, fill=black!10!white] (2.2,0) circle (.7); \draw[very thick, fill=black!10!white] (-2.2,0) circle (.7); \draw (-2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.4,1) {1}; \draw (-2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.4,-1) {2}; \draw (2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,1) {4}; \draw (2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,-1) {3}; \end{scope} \end{tikzpicture} \caption{Worldsheet configurations leading to double poles at every $s \in \mathbb{Z}_{>0}$ for the planar annulus and M\"obius strip topologies respectively.} \label{fig:double pole degenerations} \end{figure} One can also perform strong tests on these formulas by invoking some further number theory. There are two ways to compute the imaginary part of the mass-shift: either take the imaginary part directly in \eqref{eq:mass-shifts} or take the imaginary part in \eqref{eq:planar amplitude imaginary part}. For the mass-shifts we can check the equality of the two formulas directly.
We show that \begin{equation} F_{s}^{m_\mathrm{D},m_\U}(c)=\frac{2}{c}\Im \bigg[i\sum_{\begin{subarray}{c} 1 \le a \le \frac{c}{2} \\ (a,c)=1 \end{subarray}}\sum_{\begin{subarray}{c} n_\mathrm{D},n_\U \ge 0 \\ n_\mathrm{D}+n_\U=c-1\end{subarray}} \mathrm{e}^{\frac{2\pi i d}{c} (sn_\mathrm{D} n_\U+m_\mathrm{D} n_\mathrm{D}+m_\U n_\U)}\bigg] \label{eq:definition F overview} \end{equation} is almost a multiplicative function, meaning that (suppressing other labels) \begin{equation} F(c)=F(p_1^{\ell_1}) \cdots F(p_k^{\ell_k}) \ ,\label{eq:multiplicative function} \end{equation} where $c=p_1^{\ell_1} \cdots p_k^{\ell_k}$ is the prime factorization of $c$. The identity \eqref{eq:multiplicative function} can fail for finitely many prime numbers, but these can be treated separately. For example, in the simplest case for the mass-shift at $s=1$, we have \begin{equation} F_1^{0,0}(c)=\begin{cases} 0 \quad &\text{if }c=1\text{ or }c \ge 3\text{ contains a square}\ , \\ 2 \quad &\text{if }c=2\ , \\ 1 \quad &\text{if }c \ge 3\text{ is square-free}\ , \end{cases} \end{equation} where we recall that square-free means that the number has no repeated prime factor. Using properties of Euler products, one has \begin{equation} \frac{105}{\pi^4}=\frac{\zeta(4)}{\zeta(8)}=\sum_{c=1}^\infty \frac{1}{c^4} \ \begin{cases} 0 \quad &\text{if }c \ge 3\text{ contains a square}\ , \\ 1 \quad &\text{if }c \ge 3\text{ is square-free\ ,} \end{cases} \label{eq:series evaluation square free} \end{equation} which leads to the exact evaluation \begin{equation} \Im\eqref{eq:mass-shift s=1}=\frac{\pi^2}{448}\ . \end{equation} This result agrees with the one obtained by directly extracting the double pole from \eqref{eq:planar amplitude imaginary part}. Similarly, one can check that the corresponding equalities hold for higher values of $s$, where they involve more non-trivial $L$-functions that generalize the Riemann zeta-function appearing in \eqref{eq:series evaluation square free}. This computation provides a completely independent check of (parts of) our formula \eqref{eq:planar four-point function s-channel}. The generalization to higher $s = s_\ast$ is quite straightforward. In the ancillary file \texttt{DRes.txt}, we included the expressions in terms of Gauss sums needed to compute them up to $s \leq 16$. To highlight the main result, we find \begin{subequations} \begin{align} \DRes_{s=1} A^\text{p} &= d_1 + \frac{\pi^2}{448}i\, ,\\ \DRes_{s=2} A^\text{p} &= (1+t)\left(d_2 + \frac{17\pi^2}{7560}i \right)\, ,\\ \DRes_{s=3} A^\text{p} &= (1+t)(2+t)\left(d_3 + \frac{167341 \pi^2}{143700480}i\right)\, . \end{align} \end{subequations} The imaginary parts can be evaluated exactly and the real parts $d_{s_\ast}$ are constants we can compute with arbitrary precision, but did not manage to express in terms of known quantities: \begin{equation} d_1 \approx 8.36799 \cdot 10^{-5}\, , \qquad d_2 \approx -1.61091 \cdot 10^{-4}\, , \qquad d_3 \approx -9.05359 \cdot 10^{-6}\, . \end{equation} The neat factorization pattern into $(1+t)(2+t)\cdots (s_\ast - 1 + t)$ breaks down starting with $s_\ast = 4$ (but as noticed in \cite{Eberhardt:2022zay}, it holds approximately for the imaginary parts). The reason is that the spectrum no longer consists of a single supermultiplet and particles of various spins at the same mass level get different mass shifts and decay widths. Numerical evaluation of higher $\DRes_{s=s_\ast}$ up to $c \leq 1000$ is given in App.~\ref{app:mass-shifts}.
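As a concrete illustration of these checks, the representation \eqref{eq:mass-shift s=1} is easy to evaluate numerically. A minimal Python sketch of ours (using \texttt{pow(a, -1, c)}, available in Python~3.8+, for the inverse $a^*$):
\begin{verbatim}
from math import gcd, pi
import cmath

def im_dres_s1(cmax):
    """Im of the s=1 double pole via the Gauss-sum formula, truncated at c <= cmax."""
    total = 0.0  # accumulates Re of the triple sum over (c, a, n)
    for c in range(1, cmax + 1):
        for a in range(1, c // 2 + 1):
            if gcd(a, c) != 1:
                continue
            astar = pow(a, -1, c)  # a* = inverse of a mod c
            for n in range(c):
                total += cmath.exp(-2j * pi * n * (n + 1) * astar / c).real / c**5
    return 1 / (4 * pi**2) - pi**2 / 210 * total

print(im_dres_s1(200), pi**2 / 448)  # both ~ 0.0220304
\end{verbatim}
Modest cutoffs in $c$ already reproduce $\pi^2/448$ to high accuracy, in line with the fast convergence at integer $s$ discussed in Section~\ref{subsec:numerical}.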
Since the mass shifts are easier to evaluate, we can also use them to explore the behaviour of the amplitude at high energies. They control the value of the amplitude at all integer values of $s$ and hence give an approximate idea of the high-energy behaviour. However, as we shall see, the real part of the amplitude oscillates, and thus knowing only the integer values can be somewhat misleading. \subsection{Numerical computations and convergence}\label{subsec:numerical} Let us explain how to use the Rademacher formula in practical computations and summarize our observations on the convergence in $c$. To take specific examples, we will discuss the type of manipulations that went into producing the plots shown in Section~\ref{sec:introduction}. The first step is to simplify the integration domain, which can be done by a change of variables from $(t_\L,t_\mathrm{R})$ to $(x,y)$ as follows: \begin{equation} t_{\L/\mathrm{R}}=\frac{\sqrt{\Delta_{m_\mathrm{D},m_\U}}}{2\sqrt{s}}(\sqrt{-u}x \pm \sqrt{-t} y)+\frac{1}{2}(m_\mathrm{D}+m_\U-s)\ , \end{equation} where we introduced \begin{equation} \Delta_{m_\mathrm{D},m_\U}(s) =\left[s-(\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2\right]\left[s-(\sqrt{m_\mathrm{D}}-\sqrt{m_\U})^2\right]\ . \end{equation} This change leads to the Jacobian $\sqrt{tu}\, \Delta_{m_\mathrm{D},m_\U}/2s$. In addition, the polynomial $P_{m_\mathrm{D},m_\U}$ simplifies to \begin{equation} P_{m_\mathrm{D},m_\U} = \frac{\Delta_{m_\mathrm{D},m_\U}(s)}{4s} (1-x^2-y^2)\, , \end{equation} and hence the overall powers of $\Delta_{m_\mathrm{D},m_\U}$ can be pulled out of the integral. We thus obtain the simplified formula for \eqref{eq:planar four point function Aa/c n1,n2,n3 final result}: \begin{align} A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}&=-\frac{\pi i \, \mathrm{e}^{-\pi i\sum_{a=\L,\mathrm{R},\, b=\mathrm{D},\U} \big[s \sum_{m=n_a+1}^{n_a+n_b}+t \sum_{m=n_b+1}^{n_a+n_b}\big] \st{\frac{md}{c}}}}{60 c^5 s^4 \Gamma^2(s) \sin^2(\pi s)} \!\!\!\!\sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \!\!\!\! \Delta_{m_\mathrm{D},m_\U}^{\frac{7}{2}}(s) \nonumber\\ &\times \mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n_\mathrm{D}+m_\U n_\U)}\int_{\mathbb{D}} \d x \, \d y\ (1{-}x^2{-}y^2)^{\frac{5}{2}}\, Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R}) \nonumber\\ &\times \!\left(\! \Gamma(-t_\L)\Gamma(s{+}t_\L{-}m_\mathrm{D}{-}m_\U) \begin{cases}\mathrm{e}^{2\pi i t_\L \st{\frac{d n_\L}{c}}} \sin(\pi s) &\!\text{if}\; n_\L>0 \\ \sin(\pi(s+t_\L)) &\!\text{if}\; n_\L=0 \end{cases} \right) \! \big(\L \leftrightarrow \mathrm{R}\big)\ . \end{align} The integration domain $\mathbb{D}$ is the unit disk, $x^2 + y^2 < 1$. Written this way, every integral in the sum is convergent and the only singularities come from the explicit factor of $\sin^2(\pi s)$ in front. This is the reason why, in all computations, we multiply by $\sin^2(\pi s)$ in order to remove these divergences from the plots. As mentioned before, these double poles at every positive integer $s$ are associated with worldsheet degenerations illustrated in Figure~\ref{fig:double pole degenerations} and they come from the terms $n_\L = 0$ and $n_\mathrm{R}=0$ where two punctures can collide. Analogous formulas can be derived in the other channels and in the non-planar case. In the forward limit, $t=0$, the expression undergoes some simplifications because $t_\L = t_\mathrm{R}$ and consequently the variable $y$ can be integrated out.
The result is \begin{align} A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}\Big|_{t=0}\!\!\!&=\!-\frac{i \pi^2 \mathrm{e}^{-\pi i s\sum_{a=\L,\mathrm{R},\, b=\mathrm{D},\U} \sum_{m=n_a+1}^{n_a+n_b} \st{\frac{md}{c}}}}{192 c^5 s^4 \Gamma^2(s) \sin^2(\pi s)} \hspace{-0.8cm} \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \hspace{-0.8cm} \Delta_{m_\mathrm{D},m_\U}^{\frac{7}{2}}(s)\, \mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n_\mathrm{D}+m_\U n_\U)}\nonumber\\ &\times \int_{-1}^{1} \d x\ (1{-}x^2)^{3}\, Q_{m_\mathrm{D},m_\U}(s,0,t_\L,t_\L)\, \Gamma^2(-t_\L)\, \Gamma^2(s{+}t_\L{-}m_\mathrm{D}{-}m_\U) \nonumber\\ &\times \left( \begin{cases}\mathrm{e}^{2\pi i t_\L \st{\frac{d n_\L}{c}}} \sin(\pi s) &\text{if}\quad n_\L>0 \\ \sin(\pi(s+t_\L)) &\text{if}\quad n_\L=0 \end{cases} \right) \big(\L \leftrightarrow \mathrm{R}\big)\ . \end{align} In practice, we perform the sums over the winding numbers $n_\L$, $n_\mathrm{D}$, $n_\mathrm{R}$, $n_\U$ and fractions $\frac{a}{c}$ within the integrand. Notice that $a$ never appears explicitly, so the sum can be expressed as one over $d$, running over the set $\{1,2,\dots,\lfloor\frac{c}{2}\rfloor\}^*$, where the star $*$ denotes the inverse mod $c$. To perform a computation in a finite amount of time, we need to truncate the sum in \eqref{eq:planar amplitude decomposition} at some $c$. As highlighted before, due to the oscillating terms in the sums, it is difficult to accurately estimate the truncation errors analytically. As an alternative, we can fit the dependence on $c$ and extrapolate the data to $c \to \infty$, thus obtaining some error bars on the amplitude computed using the Rademacher method. Let us first use fitting to get an estimate on the rate of convergence of the Rademacher method. This can be done quite reliably for the imaginary part, since in that case we can compute the exact value with arbitrary precision using \eqref{eq:planar amplitude imaginary part}. We then take the imaginary part of \eqref{eq:planar amplitude decomposition} computed up to $c \leq c_\ast$ and fit the result to the simple ansatz \begin{equation} \eqref{eq:planar amplitude imaginary part} - \alpha\, c_\ast^{-\beta}\, .\label{eq:fit} \end{equation} The exponent $\beta$ in principle depends on the kinematics. Note that it corresponds to the convergence of partial sums, not individual terms in the $c$-sum. Any positive $\beta$ indicates convergence. The ``random phase'' model explained in Section~\ref{subsec:planar-sle1} corresponds to $\beta \approx 2$. In Section~\ref{sec:mass-shifts}, we will prove that for positive integers $s$, we have $\beta = 3$. In Appendix~\ref{app:convergence}, we further argue that $\beta > 0$ for every $s>0$. The goal of the following discussion is to extract $\beta$ directly from the data. We focus on the forward limit case, $t=0$, corresponding to the amplitude plotted in Figure~\ref{fig:Ap-forward}. After taking a logarithm, \eqref{eq:fit} can be fitted using linear regression. In practice, it is difficult to control the systematic errors coming from the fact that we might not have reached the asymptotic regime in $c$. Concretely, we have access to data with $c_\ast \leq c_\mathrm{max}$, where $c_\mathrm{max}=68$ for data points with $s \leq 1$ and $c_\mathrm{max}=40$ for those with $s \leq 12$. It is beneficial to drop several data points at low $c_\ast$, but not so many as to degrade the statistics.
To find a balance, we scan over multiple cutoffs $c_\mathrm{min}$ such that fitting only the data $c_\mathrm{min} \leq c_\ast \leq c_\mathrm{max}$ gives the most accurate fit, quantified by the highest value of the adjusted coefficient of determination $\bar{R}^2$. We also discard all unreliable fits with $\bar{R}^2 < 0.99$. The resulting $\beta$'s together with the error bars are plotted in Figure~\ref{fig:ImA-fit}. \begin{figure} \centering \includegraphics[scale=1.2]{figures/ImA-fit} \caption{\label{fig:ImA-fit}Fitted values of the exponent $\beta$ in \eqref{eq:fit} together with their standard deviation error bars. The gray dashed line corresponds to $\beta=3$ expected at positive integers $s$.} \end{figure} As anticipated, at positive integers $s$, the convergence reaches the $\beta \approx 3$ predicted by the Gauss-sum formula. We observe that for $s$ just above an integer, the convergence rate drops drastically. As expected, when $s \to 0$, the value $\beta \approx 0$ is reached, indicating poorer and poorer convergence due to the lack of cancellations between terms in the Rademacher expansion. Across all energies, the results are consistent with $\beta > 0$. In order to illustrate why the jumps in convergence happen across integers, in Figure~\ref{fig:convergence-jump} we plot $-\alpha c_\ast^{-\beta}$ (the difference between truncated and exact values) for two values: $s=1$ and $s=1.1$. The former reaches the asymptotic behavior fairly soon. For example, keeping the data points up to $c_{\mathrm{max}} = 40$ leads to the fit $\beta = 3.07 \pm 0.09$, while taking the extended set $c_{\mathrm{max}} = 190$ gives $\beta = 3.013 \pm 0.012$. On the other hand, for $s=1.1$, the data set $c_{\mathrm{max}} = 40$ gives rise to the fit $\beta = 0.61 \pm 0.03$, while the set $c_{\mathrm{max}} = 190$ gives $\beta = 0.911 \pm 0.002$. The two values disagree, indicating the presence of a systematic error: the fact that for a given $c_{\mathrm{max}}$ the asymptotic regime might not have been reached. The qualitative difference between $s=1$ and $s=1.1$ can be observed in Figure~\ref{fig:convergence-jump}: the $s=1.1$ data develop a hump around $c_\ast \approx 20$ and settle down to the power-law behavior only for much larger values of $c_\ast$. Due to this feature, the error bars on the data points right above each positive integer in Figure~\ref{fig:ImA-fit} are underestimated. \begin{figure} \centering \includegraphics[scale=1.2]{figures/convergence-jump} \caption{\label{fig:convergence-jump}Two examples of the difference between $c_\ast$-truncated and exact values of $\Im A^\text{p}(s,0)$ as a function of $c_\ast$ for $s=1$ and $s=1.1$, illustrating a drastic change in convergence rates visible in Figure~\ref{fig:ImA-fit}.} \end{figure} Finally, we can analyze the convergence of the full amplitude, which is intrinsically more difficult, because we do not a priori know the exact value. We perform the same analysis as above, except using a $3$-parameter non-linear fit: \begin{equation}\label{eq:fit2} \gamma - \alpha c_\ast^{-\beta}\, . \end{equation} Here, $\gamma$ is the $c_\ast \to \infty$ extrapolated value of the amplitude. This quantity, together with its error bars, is plotted in Figure~\ref{fig:Ap-forward}. In Figure~\ref{fig:convergence} we plot the exponents $\beta$ obtained by fitting the real part of the amplitude. Overall, the uncertainties become much larger due to the more complicated fitting function.
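Schematically, this extrapolation step is a standard nonlinear least-squares problem. The following minimal sketch (function and variable names are ours, assuming \texttt{scipy} is available) implements the three-parameter model \eqref{eq:fit2} and returns the extrapolated value with $1\sigma$ errors; the linear-regression fit of \eqref{eq:fit} used for the imaginary part is the special case in which $\gamma$ is fixed to the exact value \eqref{eq:planar amplitude imaginary part}:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def extrapolate(cs, A_truncated):
    """Fit partial sums A(c_*) to gamma - alpha * c_*^(-beta)."""
    model = lambda c, gamma, alpha, beta: gamma - alpha * c**(-beta)
    p0 = (A_truncated[-1], 1.0, 2.0)  # seed: last partial sum, random-phase beta
    popt, pcov = curve_fit(model, np.asarray(cs, float), A_truncated, p0=p0)
    return popt, np.sqrt(np.diag(pcov))  # (gamma, alpha, beta) and 1-sigma errors
\end{verbatim}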
We employ the same procedure as before, except that, in addition to filtering by $\bar{R}^2$, we also discard data points for which the relative error on the central value $\gamma$ is bigger than $10\%$. Once again, we observe that $\beta \approx 0$ as $s \to 0$. For larger $s$, the convergence rate $\beta$ stabilizes to more or less a constant. For a range of values, it is consistent with the ``random phase'' model that would correspond to $\beta = 2$. Note that systematic errors, just as in the case of Figure~\ref{fig:ImA-fit}, are not taken into account. In the case of Figures~\ref{fig:fixed-angle-data} and \ref{fig:ratios}, no extrapolation to $c_\ast \to \infty$ is used. All the points are plotted with $c_\ast = 10$, with a subset at the slightly higher $c_\ast = 16$, and with integer $s$ at $c_\ast = 1000$. In practice, we found that the result is convergent to percent level already around $c_\ast \approx 10$ and, especially when plotted on a logarithmic scale, using higher cutoffs does not lead to a noticeable difference. The spacing between values of $s$ sampled is $\delta s = 0.01$. The vertical spikes on the plots are caused by sign changes of the amplitude. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figures/convergence} \caption{\label{fig:convergence}Estimated power-law exponent $\beta$ after fitting the data for the real part to \eqref{eq:fit2} for every value of $s$. The value of $\beta=2$ indicated by the dashed line would correspond to the ``random phase'' model explained in Section~\ref{subsec:planar-sle1}.} \end{figure} \section{Warm-up: Two-point amplitude} \label{sec:two-point function} Before diving directly into the derivation of the four-point function, we demonstrate some further features of the method on a simpler example, namely a two-point function. Let \begin{equation} I(s)=-i\int_{\Gamma} \mathrm{d}\tau \int_0^1 \mathrm{d}z\ \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s} \label{eq:two-point function integral} \end{equation} for $s> 0$.\footnote{The Rademacher procedure does not converge for $s\le 0$ in this case.} This is basically the two-point function in bosonic open-string theory. The contour $\Gamma$ runs as before from 0 to $\frac{1}{2}$. Of course, unless $s=0$, this two-point function is off-shell. This does not prevent us from computing this integral, however. This calculation contains all the main ideas that go into the computation of the four-point function below, but is still much lower in complexity and thus serves as a good toy example. Compared to the partition function analyzed using the Rademacher method in Section~\ref{subsec:Rademacher contour}, the new aspect we want to learn about from \eqref{eq:two-point function integral} is how to deal with branch cuts of the integrand. \subsection{Modular transformation} The general logic of the Rademacher contour posits that \begin{equation} I(s)=\sum_{c=1}^\infty \sum_{\begin{subarray}{c} 1 \le a \le \frac{c}{2} \\ (a,c)=1\end{subarray}} \int_{C_{a/c}} \mathrm{d}\tau \int_0^1 \mathrm{d}z\, \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s}\ . \end{equation} The $z$-contour is unaffected by this contour deformation. To compute the integral over the circle $C_{a/c}$, we use the modular properties of the integrand.
Notice that \begin{equation} \Phi(z,\tau)=\frac{\vartheta_1(z,\tau)^2}{\eta(\tau)^6} \end{equation} is a weak Jacobi form of index $1$ and weight $-2$, which means that it transforms as follows under modular transformations and shifts in $z$: \begin{subequations} \begin{align} \Phi\left(\frac{z}{c \tau+d},\frac{a \tau+b}{c \tau+d}\right)&=(c \tau+d)^{-2} \mathrm{e}^{\frac{2\pi i cz^2}{c\tau+d}} \Phi(z,\tau)\ , \\ \Phi(z+m \tau+n,\tau)&=\mathrm{e}^{-2\pi i (m^2 \tau+2 m z)} \Phi(z,\tau)\ . \end{align} \end{subequations} Conceptually, this transformation behaviour means that $\Phi(z,\tau)$ is a holomorphic section of a certain line bundle over the moduli space of two-punctured tori $\overline{\mathcal{M}}_{1,2}$. Proceeding as in \eqref{eq:modular transformation Ca/c}, we set \begin{equation} \tau=\frac{a \tau'+b}{c \tau'+d} \end{equation} for a new modular parameter $\tau'$. This modular transformation has the property that the line $\tau'\in i+\RR$ gets mapped to the circle $C_{a/c}$. We have \begin{subequations} \label{eq:modular transformation Phi} \begin{align} \Phi(z,\tau)&=\Phi\left(\frac{z(c \tau'+d)}{c \tau'+d}, \frac{a \tau'+b}{c \tau'+d}\right) \\ &=(c \tau'+d)^{-2} \mathrm{e}^{2\pi i c(c \tau'+d)z^2} \Phi(z (c \tau'+d),\tau')\ . \end{align} \end{subequations} Thus we obtain \begin{align} \int_{C_{a/c}} &\mathrm{d}\tau \int_0^1 \mathrm{d}z\, \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s} \nonumber\\ &=-\int_{\longrightarrow} \frac{\mathrm{d} \tau'}{(c \tau'+d)^{2+2s}} \int_0^1 \mathrm{d}z \ \mathrm{e}^{2\pi i c(c \tau'+d)z^2} \left(\frac{\vartheta_1(z (c \tau'+d),\tau')}{\eta(\tau')^3}\right)^{2s} \ . \label{eq:integral Ca/c two-point after modular transformation} \end{align} As before, $\longrightarrow$ denotes the contour parallel to the real axis. The minus sign appears because we reversed the orientation of the contour. Since we raised \eqref{eq:modular transformation Phi} to a fractional power, we need to be careful about branch cuts. In the integrand, by $(c \tau'+d)^{2+2s}$ we mean its principal branch. The correct branch on the right-hand side can be determined by considering the integrand for $z \to 0$. Using that $\vartheta_1'(0,\tau)=2\pi \eta(\tau)^3$, we have for the integrand before modular transformation: \begin{equation} \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s} \to (2\pi z)^{2s}\ , \end{equation} and after: \begin{equation} \frac{\mathrm{e}^{2\pi i c(c \tau'+d)z^2}}{(c \tau'+d)^{2s}} \left(\frac{\vartheta_1(z (c \tau'+d),\tau')}{\eta(\tau')^3}\right)^{2s} \to \frac{(2\pi z(c \tau'+d))^{2s}}{(c \tau'+d)^{2s}}=(2\pi z)^{2s}\ , \end{equation} where we use the principal branch throughout. Since these two expressions agree, we conclude that the branch on the right hand side of \eqref{eq:integral Ca/c two-point after modular transformation} is specified by taking the principal branch in the region $z \to 0$ and then following the branch smoothly for the other values of $z$. A similar argument could have been applied also for $z \to 1$ and the branch again becomes the principal branch in that region. Finally, we shift $\tau' \to \tau'-\frac{d}{c}$ since this will be more convenient in the following. We also rename $\tau' \to \tau$ for better readability.
Thus, in the end we have \begin{align} \int_{C_{a/c}} \mathrm{d}\tau \int_0^1 \mathrm{d}z\, \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s}&=-\int_{\longrightarrow} \frac{\mathrm{d} \tau}{(c \tau)^{2+2s}} \ q^{s c^2 z^2} \left(\frac{\vartheta_1(z c \tau,\tau-\frac{d}{c})}{\eta(\tau-\frac{d}{c})^3}\right)^{2s} \ , \label{eq:integral Ca/c two-point function} \end{align} where $q=\mathrm{e}^{2\pi i \tau}$. \subsection{Tropical behaviour} As in the example we explained in Section~\ref{subsec:Rademacher contour}, the main trick to evaluate the integral on the right hand side of \eqref{eq:integral Ca/c two-point function} explicitly is to push the horizontal contour up to very high values of $\Im \tau$. The result is then only sensitive to the singular behaviour of the integrand. Let us work out the leading singular behaviour of the integrand, which is controlled by its tropicalization. To leading order, the integrand goes like $q^\mathrm{Trop}$ as $\Im \tau \to \infty$, where the function $\mathrm{Trop}$ still depends on the other moduli and kinematics of the problem ($z$ and $s$ in this case). We are interested in the limit $q \to 0$, which is dominated by the most negative $\mathrm{Trop}$. We can work out $\mathrm{Trop}$ from the definition of the Jacobi theta function, \begin{equation} \vartheta_1(zc\tau,\tau-\tfrac{d}{c})=-i \sum_{n=-\infty}^{\infty} (-1)^n\, \mathrm{e}^{-\frac{\pi i d(2n-1)^2}{4c}} q^{\frac{1}{2}(n-\frac{1}{2})^2-(n-\frac{1}{2})zc}\ . \end{equation} The exponents $\frac{1}{2}(n-\tfrac{1}{2})^2 - (n-\frac{1}{2})zc$ grow when $n \to \pm \infty$ and thus there is a minimal exponent which controls the behaviour of the theta function near $q \to 0$. The minimum exponent appears for $n=\lfloor cz \rfloor+1$. Combining this fact with the leading behaviour of the other factors in the integrand, we get the behavior \begin{equation} q^{s c^2 z^2} q^{2s \left[ \frac{1}{2}(\frac{1}{2}+\lfloor cz \rfloor)^2 - (\frac{1}{2} + \lfloor cz \rfloor) cz \right]} q^{-\frac{s}{4}} = q^{\mathrm{Trop}}\, , \end{equation} and hence we conclude \begin{equation} \mathrm{Trop}=-s \, \{c z\}(1-\{c z\})\ , \label{eq:leading Trop two-point function} \end{equation} where $\{x\}$ denotes the fractional part, i.e., $\{x\}=x-\lfloor x \rfloor$. This function is periodic with period $\frac{1}{c}$. Moreover, we notice that it vanishes on the boundary of the segments $z \in [\frac{n}{c},\frac{n+1}{c}]$. This means that the boundaries of these segments do not contribute to the integral, since when $z=\frac{n}{c}$, we can take $\Im \tau \to \infty$, which makes the integrand arbitrarily small. This makes it natural to split up the integral into disjoint contributions. Let us set \begin{equation} z=\frac{n+\xi}{c} \label{eq:z xi change of variables 2-point function} \end{equation} with $n \in \{0,1,\dots,c-1\}$ and $\xi \in [0,1]$. \subsection{Branches of \texorpdfstring{$\log \vartheta_1$}{log theta1}} \label{subsec:branches log theta1} To continue, we should carefully discuss the branch of the integrand. It will be sufficient to study \begin{equation} \log \vartheta_1(cz\tau,\tau-\tfrac{d}{c}) \end{equation} with $z=\frac{n+\xi}{c}$. Recall that we specified the branch of the integrand by taking the principal branch near $z \to 0$. We want to determine the branch of the logarithm that is obtained by continuously following the branch as we vary $z$ from $0$ to our desired value.
We claim that for this branch, \begin{multline} \log \vartheta_1((n+\xi) \tau,\tau-\tfrac{d}{c})=-\pi i\tau (n+\xi)^2+\pi i\tau (\xi-\tfrac{1}{2})^2+\tfrac{\pi i}{2}-\tfrac{\pi i d}{4c}-2\pi i \sum_{m=1}^n \st{\tfrac{md}{c}}\\ +\log \prod_{\ell=1}^\infty\big(1-\mathrm{e}^{-\frac{2\pi i d \ell}{c}}q^\ell\big)\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell+n)}{c}} q^{\ell-\xi}\big)\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell-n-1)}{c}}q^{\ell+\xi-1}\big)\ .\label{eq:log theta1 branch} \end{multline} Recall the definition of the sawtooth function $\st{x}$ given in eq.~\eqref{eq:st definition}, which will play a very important role in the following. We use the product representation of the Jacobi theta function here since it is more convenient for the argument. One can easily prove this claim by induction over $n$, as will be done below. Notice that the function $\log \vartheta_1(cz\tau,\tau-\tfrac{d}{c})$ has branch points for \begin{equation} c z \tau \in \ZZ+\ZZ(\tau-\tfrac{d}{c})\, , \end{equation} which never lie on the interval $z \in (0,1)$. Thus the choice of branch is independent of $\tau$ and it will be convenient to choose $\Im \tau$ very large, i.e., $q=\mathrm{e}^{2\pi i \tau}$ small. For $n=0$, we use the product representation of $\vartheta_1$ to write \begin{align} \log \vartheta_1(\xi \tau,\tau-\tfrac{d}{c})&=\log \Big[ i\, q^{\frac{1}{8}-\frac{\xi}{2}}\mathrm{e}^{-\frac{\pi i d}{4c}} \prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d\ell}{c}} q^\ell) \nonumber \\ &\qquad\quad\times (1-\mathrm{e}^{-\frac{2\pi i d\ell}{c}} q^{\ell-\xi})(1-\mathrm{e}^{-\frac{2\pi i d(\ell-1)}{c}} q^{\ell+\xi-1}) \Big]\\ &=\tfrac{\pi i}{2} + \tfrac{\pi i\tau}{4}-\pi i \tau \xi-\tfrac{\pi i d}{4c} +\log \Big[ \prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d\ell}{c}} q^\ell) \nonumber \\ &\qquad\quad\times (1-\mathrm{e}^{-\frac{2\pi i d\ell}{c}} q^{\ell-\xi})(1-\mathrm{e}^{-\frac{2\pi i d(\ell-1)}{c}} q^{\ell+\xi-1}) \Big]\ . \end{align} In the last line, we chose the principal branch for $\Re \tau=\frac{d}{c}$ and large $\Im \tau$, since this was our prescription for determining the branch.\footnote{The choice $\Re \tau=\frac{d}{c}$ is not necessary if we look at the ratio $\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}$ since the leading exponent $q^{\frac{1}{8}}$ cancels out.} We next discuss the induction step $n \to n+1$. We can start with the formula \eqref{eq:log theta1 branch} for $n$ and then smoothly take $\xi$ from the range $[0,1]$ to the range $[1,2]$. We can discuss every factor in the infinite product separately. Since we are taking $q$ very small, the only dangerous factors are those where the exponent of $q$ can become less than zero, which only happens for the second term in the infinite product for $\ell=1$. Thus it is enough to discuss the branch of \begin{equation} \log \left(1-\mathrm{e}^{2\pi i \varphi} q^{1-\xi}\right) \end{equation} for a phase $\varphi$. For $\xi \in [0,1]$, the choice of branch in the function is clear. For $\xi>1$, the second term dominates. The correct branch is obtained by the following consideration. First note that for $\varphi=\frac{1}{2}$, the branch is trivial to choose and we have \begin{equation} \log (1+q^{1-\xi})=2\pi i \tau (1-\xi)+\log (1+q^{\xi-1})\ . \end{equation} For arbitrary $\varphi$, we have \begin{equation} \log (1-\mathrm{e}^{2\pi i \varphi} q^{1-\xi})=2\pi i \tau (1-\xi)+\pi i +2\pi i \varphi+2\pi i k +\log (1-\mathrm{e}^{-2\pi i \varphi} q^{\xi-1}) \end{equation} for some integer $k$.
Finally, we use that the phase can only jump for integer $\varphi$, since then the contour in $\xi$ hits the branch point. Thus the correct branch is \begin{equation} \log (1-\mathrm{e}^{2\pi i \varphi} q^{1-\xi})=2\pi i \tau (1-\xi)+2\pi i \st{\varphi} +\log (1-\mathrm{e}^{-2\pi i \varphi} q^{\xi-1}) , \end{equation} where we defined the sawtooth function $\st{\varphi}$ in \eqref{eq:st definition}. Applying this identity to \eqref{eq:log theta1 branch} with $\varphi=-\frac{d(n+1)}{c}$ gives \begin{align} \log \, &\vartheta_1((n+\xi) \tau,\tau-\tfrac{d}{c}) \nonumber\\ &=-\pi i\tau (n+\xi)^2+\pi i \tau (\xi-\tfrac{1}{2})^2+2\pi i \tau (1-\xi)+\tfrac{\pi i}{2}-\tfrac{\pi i d}{4c}-2\pi i \sum_{m=1}^{n+1} \st{\tfrac{md}{c}}\nonumber\\ &\qquad+\log \Big[ \prod_{\ell=1}^\infty\big(1-\mathrm{e}^{-\frac{2\pi i d \ell}{c}}q^\ell\big)\prod_{\ell=2}^\infty \big(1-\mathrm{e}^{-\frac{2\pi i d(\ell+n)}{c}} q^{\ell-\xi}\big)\nonumber\\ &\qquad\qquad\qquad\times \big(1-\mathrm{e}^{\frac{2\pi i d(1+n)}{c}} q^{\xi-1}\big)\prod_{\ell=1}^\infty\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell-n-1)}{c}}q^{\ell+\xi-1}\big) \Big]\, , \end{align} and after further massaging: \begin{align} \log \, &\vartheta_1((n+\xi) \tau,\tau-\tfrac{d}{c}) = -\pi i\tau (n+\xi)^2+\pi i \tau (\xi-\tfrac{3}{2})^2+\tfrac{\pi i}{2}-\tfrac{\pi i d}{4c}-2\pi i \sum_{m=1}^{n+1} \st{\tfrac{md}{c}}\nonumber\\ &\qquad+\log \Big[ \prod_{\ell=1}^\infty\big(1-\mathrm{e}^{-\frac{2\pi i d \ell}{c}}q^\ell\big)\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell+n+1)}{c}} q^{\ell+1-\xi}\big)\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell-n-2)}{c}}q^{\ell+\xi-2}\big) \Big]\ . \end{align} This is the claimed expression for $\log \vartheta_1((n+1+\xi) \tau,\tau-\tfrac{d}{c})$, but with $\xi$ replaced by $\xi-1$, showing that \eqref{eq:log theta1 branch} is the correct branch for the logarithm. For future reference, let us note that \begin{equation} \sum_{m=1}^{c-1} \bigst{\frac{m d}{c}}=0\ . \end{equation} This is because we are summing over all non-zero elements of the ring $\ZZ_c$. They are paired up as $md$ and $-md$ and hence cancel out pairwise thanks to $\st{\frac{md}{c}}+\st{-\frac{md}{c}}=0$. It continues to hold if $md\equiv-md \bmod c$, since then $\frac{md}{c}\equiv\frac{1}{2}$ and hence $\st{\frac{md}{c}}=0$. This means that the phase on the last segment $\frac{c-1}{c}<z<1$ is again trivial. This had to happen because we can either use $z \to 0$ or $z \to 1$ to fix the branch as described above. \subsection{Thresholds from \texorpdfstring{$q$}{q}-expansion} Let us now insert the branch \eqref{eq:log theta1 branch} into \eqref{eq:integral Ca/c two-point function}. We get \begin{multline} \int_{C_{a/c}} \mathrm{d}\tau \int_0^1 \mathrm{d}z\ \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s}=\sum_{n=0}^{c-1} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c(-ic\tau)^{2+2s}} \int_0^1 \mathrm{d}\xi\ q^{s \xi(\xi-1)}\mathrm{e}^{-4\pi i s \sum_{m=1}^n \st{\frac{md}{c}}} \\ \times \prod_{\ell=1}^\infty \frac{\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell+n)}{c}} q^{\ell-\xi}\big)^{2s}\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell-n-1)}{c}}q^{\ell+\xi-1}\big)^{2s}}{\big(1-\mathrm{e}^{-\frac{2\pi i d \ell}{c}} q^\ell\big)^{4s}} \, .\label{eq:two-point function Ca/c integral split up} \end{multline} We absorbed the phase $\mathrm{e}^{\pi i s}$ and the overall minus sign into the denominator. We also get another factor $\frac{1}{c}$ from the Jacobian of the change of variables \eqref{eq:z xi change of variables 2-point function}. Let us $q$-expand the integrand.
There are only finitely many terms in the $q$-expansion that can potentially contribute to the integral. Indeed, every term in the $q$-expansion is of the form $q^{\mathrm{Trop}_{m_\mathrm{D},m_\U}}$ with \begin{equation} \mathrm{Trop}_{m_\mathrm{D},m_\U}=s\xi(\xi-1)+m_\mathrm{D} \xi+m_\U(1-\xi) \label{eq:Trop two-point function} \end{equation} and $m_\mathrm{D},\, m_\U$ two non-negative integers. A term of this form can only contribute when $\mathrm{Trop}_{m_\mathrm{D},m_\U} < 0$ for some choice of $\xi \in [0,1]$. $\mathrm{Trop}_{m_\mathrm{D},m_\U}$ attains its minimum for \begin{equation} \xi_\text{min}=\frac{m_\U-m_\mathrm{D}+s}{2s}\ , \end{equation} which lies inside the unit interval for $|m_\U-m_\mathrm{D}|\le s$. In the other case where $|m_\mathrm{D}-m_\U| \ge s$, the minimum of $\mathrm{Trop}_{m_\mathrm{D},m_\U}$ is attained on the boundary of the interval, where $\mathrm{Trop}_{m_\mathrm{D},m_\U}$ is always non-negative. Thus $\mathrm{Trop}_{m_\mathrm{D},m_\U}$ can only become negative somewhere on the unit interval for $(m_\mathrm{D},m_\U)$ with $|m_\mathrm{D}-m_\U| \le s$. In this case we have \begin{equation} \min_{\xi \in [0,1]} \mathrm{Trop}_{m_\mathrm{D},m_\U} = -\frac{[s-(\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2][s-(\sqrt{m_\mathrm{D}}-\sqrt{m_\U})^2]}{4s}\, . \end{equation} This expression is negative, and the corresponding term hence contributes to the integral, when either \begin{equation} s \ge (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2\quad \text{or}\quad s \le (\sqrt{m_\mathrm{D}}-\sqrt{m_\U})^2\ . \end{equation} Since \begin{equation} s \ge | m_\mathrm{D}-m_\U|=|\sqrt{m_\mathrm{D}}-\sqrt{m_\U}| (\sqrt{m_\mathrm{D}}+\sqrt{m_\U}) \ge |\sqrt{m_\mathrm{D}}-\sqrt{m_\U}|^2\ , \end{equation} the latter is incompatible with the assumption $s \ge |m_\mathrm{D}-m_\U|$. Thus only terms with $s \ge (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2$ contribute to the integral. This is precisely how thresholds manifest themselves. Assuming now that $s \ge (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2$, we may extend the integration region $\xi \in [0,1]$ to $\xi \in \RR$, since the exponent $\mathrm{Trop}_{m_\mathrm{D},m_\U}$ is positive outside of the interval $[0,1]$ and hence the region outside of the unit interval also leads to a vanishing contribution to the integral. This extension of the integration contour will simplify the analysis later on. Next, let us note that the phase of a term of the form $q^{m_\mathrm{D} \xi+m_\U(1-\xi)}$ appearing in the $q$-expansion of the infinite product in \eqref{eq:two-point function Ca/c integral split up} is given by $\mathrm{e}^{\frac{2\pi id}{c}(n m_\mathrm{D} -(n+1)m_\U)}$. This becomes obvious if we set $q_\mathrm{D}=q^{\xi}$ and $q_\U=q^{1-\xi}$, so that $q_\mathrm{D} q_\U=q$. We can then write terms appearing in \eqref{eq:two-point function Ca/c integral split up} as \begin{subequations} \begin{align} \mathrm{e}^{-\frac{2\pi i d (\ell+n)}{c}} q^{\ell-\xi}&=\mathrm{e}^{\frac{2\pi i d}{c}(n (\ell-1) -(n+1)\ell)} q_\mathrm{D}^{\ell-1}q_\U^{\ell}\ , \\ \mathrm{e}^{-\frac{2\pi i d (\ell-n-1)}{c}} q^{\ell+\xi-1}&=\mathrm{e}^{\frac{2\pi i d}{c}(n \ell-(n+1)(\ell-1))} q_\mathrm{D}^\ell q_\U^{\ell-1}\ , \\ \mathrm{e}^{-\frac{2\pi i d \ell}{c}} q^\ell &= \mathrm{e}^{\frac{2\pi i d}{c}(n \ell-(n+1)\ell)} q_\mathrm{D}^\ell q_\U^{\ell}\ .
\end{align} \end{subequations} We can thus write \begin{multline} \prod_{\ell=1}^\infty \frac{\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell+n)}{c}} q^{\ell-\xi}\big)^{2s}\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell-n-1)}{c}}q^{\ell+\xi-1}\big)^{2s}}{\big(1-\mathrm{e}^{-\frac{2\pi i d \ell}{c}} q^\ell\big)^{4s}}\\ =\sum_{m_\mathrm{D},m_\U=0}^\infty Q_{m_\mathrm{D},m_\U}^{(2)}(s)\, \mathrm{e}^{\frac{2\pi id}{c}(n m_\mathrm{D} -(n+1)m_\U)}q^{m_\mathrm{D} \xi+m_\U(1-\xi)} \end{multline} with \begin{equation} Q_{m_\mathrm{D},m_\U}^{(2)}(s)=[q_\mathrm{D}^{m_\mathrm{D}} q_\U^{m_\U}] \prod_{\ell=1}^\infty \frac{(1-q_\mathrm{D}^{\ell-1}q_\U^\ell)^{2s}(1-q_\mathrm{D}^{\ell}q_\U^{\ell-1})^{2s}}{(1-q_\mathrm{D}^{\ell}q_\U^{\ell})^{4s}}\ . \end{equation} The superscript $(2)$ is intended to distinguish these coefficients from similar coefficients appearing in the four-point function case. We can insert this expansion in \eqref{eq:two-point function Ca/c integral split up} to get \begin{align} &\int_{C_{a/c}} \mathrm{d}\tau \int_0^1 \mathrm{d}z\, \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s}=\sum_{n=0}^{c-1} \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \mathrm{e}^{-4\pi i s \sum_{m=1}^n \st{\frac{md}{c}}+\frac{2\pi i (n m_\mathrm{D}-(n+1) m_\U) d}{c}} \nonumber\\ &\qquad\qquad\qquad \times Q_{m_\mathrm{D},m_\U}^{(2)}(s)\int_{\longrightarrow} \frac{\mathrm{d} \tau}{c(-ic \tau)^{2+2s}}\ \int_{-\infty}^\infty \mathrm{d}\xi \ q^{s \xi(\xi-1)+m_\mathrm{D} \xi+m_\U(1-\xi)} \ . \end{align} \subsection{\label{subsec:2-pt assembling}Assembling the result} The integral over $\xi$ is Gaussian and simple to perform. Let us denote \begin{equation} \Delta_{m_\mathrm{D},m_\U}(s) = [s-(\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2][s-(\sqrt{m_\mathrm{D}}-\sqrt{m_\U})^2]\ . \label{eq:definition Delta} \end{equation} Evaluating the Gaussian integral leads to \begin{multline} \int_{C_{a/c}} \mathrm{d}\tau \int_0^1 \mathrm{d}z\, \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s} =\sum_{\begin{subarray}{c} m_\mathrm{D},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}}\sum_{n=0}^{c-1} \mathrm{e}^{-4\pi i s \sum_{m=1}^n \st{\frac{md}{c}}+\frac{2\pi i (n m_\mathrm{D}-(n+1)m_\U) d}{c}} \\ \times Q_{m_\mathrm{D},m_\U}^{(2)}(s)\int_{\longrightarrow} \frac{\mathrm{d} \tau}{(-i\tau)^{\frac{5}{2}+2s}c^{3+2s}\sqrt{2s}}\ q^{-\frac{\Delta_{m_\mathrm{D},m_\U}}{4s}} \ . \end{multline} The integral over $\tau$ can be computed exactly. Let us consider \begin{equation} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{(-i \tau)^z} \mathrm{e}^{-2\pi i \tau a} \end{equation} for an arbitrary positive parameter $a$ and arbitrary exponent $z$. Changing the variables to $x=2\pi i \tau a$ gives \begin{equation} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{(-i \tau)^z} \mathrm{e}^{-2\pi i \tau a}=-i(2\pi a)^{z-1}\int_{\uparrow} \mathrm{d}x\ (-x)^{-z} \mathrm{e}^{-x}\ . \end{equation} Upon performing the change of variables from $\tau$ to $x$, the contour gets rotated by 90 degrees and now runs upwards, which we signified by the arrow $\uparrow$. We can deform this contour into the Hankel contour $\mathcal{H}$ which runs from $\infty+i \varepsilon$ to $\infty-i\varepsilon$ by surrounding the whole branch cut going from $x=0$ to $x=\infty$ along the real axis. We then use the Hankel representation of the Gamma function, \begin{equation} \int_{\mathcal{H}} \mathrm{d}x\ (-x)^{-z} \mathrm{e}^{-x}=-\frac{2\pi i}{\Gamma(z)}\ .
\end{equation} Therefore, we find \begin{equation} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{(-i \tau)^z} \mathrm{e}^{-2\pi i \tau a}=i(2\pi a)^{z-1}\int_{\mathcal{H}} \mathrm{d}x\ (-x)^{-z} \mathrm{e}^{-x}=\frac{2\pi(2\pi a)^{z-1}}{\Gamma(z)}\, . \label{eq:Hankel contour identity q integral} \end{equation} We can thus finish the computation as follows, \begin{multline} \int_{C_{a/c}} \mathrm{d}\tau \int_0^1 \mathrm{d}z\, \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s}=\frac{\pi^{\frac{5}{2}+2s}}{2^{1+2s}s^{2+2s}c^{3+2s}\Gamma(\frac{5}{2}+2s)} \\ \times \!\!\!\! \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \!\!\!\! Q_{m_\mathrm{D},m_\U}^{(2)}(s)\, \Delta_{m_\mathrm{D},m_\U}^{\frac{3}{2}+2s}(s) \sum_{n=0}^{c-1} \mathrm{e}^{-4\pi i s \sum_{m=1}^n \st{\frac{md}{c}}+\frac{2\pi i (n m_\mathrm{D}-(n+1)m_\U) d}{c}}\ . \end{multline} We can finally assemble all the circles of the Rademacher contour to obtain the final result for the integral \eqref{eq:two-point function integral}, \begin{multline} I(s)=-i \sum_{c=1}^\infty \frac{\pi^{\frac{5}{2}+2s}}{2^{1+2s}s^{2+2s}c^{3+2s}\Gamma(\frac{5}{2}+2s)}\sum_{\begin{subarray}{c} 1 \le a \le \frac{c}{2} \\ (a,c)=1 \end{subarray}} \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \!\!\!\! Q_{m_\mathrm{D},m_\U}^{(2)}(s)\, \Delta_{m_\mathrm{D},m_\U}^{\frac{3}{2}+2s}(s)\\ \times \sum_{n=0}^{c-1} \mathrm{e}^{-4\pi i s \sum_{m=1}^n \st{\frac{md}{c}}+\frac{2\pi i (n m_\mathrm{D}-(n+1)m_\U) d}{c}}\ . \label{eq:two-point function evaluated} \end{multline} As usual, $d$ denotes the inverse of $a \bmod c$. This is our final result for the two-point function. \subsection{Cross-checks} Given the number of non-trivial manipulations that went into this computation, it would be reassuring to perform a stress test. We now explain such a check. Let us change the computation slightly and compute instead the integral over the contour in \eqref{eq:two-point function integral} that runs from $0$ to $1$ (instead of $0$ to $\frac{1}{2}$). Let us denote the corresponding result by $\tilde{I}(s)$. The Rademacher logic still applies to this integral and \eqref{eq:two-point function evaluated} still holds, except that the summation range over $a$ gets extended to $1 \le a \le c$, reflecting the fact that we now need to use all the circles up to 1 in the contour. \begin{figure} \centering \includegraphics[scale=0.9]{figures/convergence-two-point-function.pdf} \caption{Convergence of the Rademacher method for the two-point function. We plot the relative error \eqref{eq:two-point function exact difference} on the $y$-axis, which decreases with larger cutoffs $c$ for any $s$.} \label{fig:two-point function check} \end{figure} The integral $\tilde{I}(s)$ over the new contour is very simple to evaluate analytically. Indeed, since the integrand is periodic in $\tau \to \tau+1$, we are simply extracting the leading Fourier coefficient in $\tau$. Since \begin{equation} \frac{\vartheta_1(z,\tau)}{\eta(\tau)^3} \overset{\Im \tau \to \infty}{\longrightarrow} 2 \sin(\pi z)\ , \end{equation} the constant Fourier coefficient of the integrand is $(2 \sin(\pi z))^{2s}$. We thus get \begin{equation} \tilde{I}(s)=-i \int_0^1 \mathrm{d}z \left(2 \sin(\pi z)\right)^{2s}=-i \, \frac{4^s \, \Gamma(s+\frac{1}{2})}{\sqrt{\pi}\, \Gamma(s+1)}\ .
\label{eq:two-point function alternative contour simple evaluation} \end{equation} We can easily check numerically whether this equals \eqref{eq:two-point function evaluated} with extended range over $a$. We plot in Figure~\ref{fig:two-point function check} the quantity \begin{equation} \left|\frac{\eqref{eq:two-point function evaluated}\text{ with extended range $1 \le a \le c$}}{\eqref{eq:two-point function alternative contour simple evaluation}}-1\right| \label{eq:two-point function exact difference} \end{equation} in the interval $0 \le s \le 5$ for larger and larger cutoffs $c$. Clearly, the error goes to zero as $c$ grows. Moreover, thanks to the presence of the factor $\frac{1}{c^{3+2s}}$ in \eqref{eq:two-point function evaluated}, convergence is much faster for large values of $s$. This check nicely tests the whole formula. As another remark, we note that the limit $s \to 0$ is rather subtle. The exact answer \eqref{eq:two-point function alternative contour simple evaluation} clearly goes to $-i$ as $s \to 0$. On the other hand, \eqref{eq:two-point function evaluated} for fixed $c$ goes to zero even if we extend the range to $1 \leq a \leq c$. This means that we are not allowed to commute the limit with the infinite sum over $c$ in \eqref{eq:two-point function evaluated}. We can see this explicitly by plotting \eqref{eq:two-point function evaluated} (with extended range $1 \le a \le c$) for different cutoffs $c$ near $s=0$. The results are plotted in Figure~\ref{fig:limit exchange two-point function}. The curve obtained from the Rademacher method converges everywhere to the exact answer, except at $s=0$. \begin{figure} \centering \includegraphics[scale=0.9]{figures/limit-exchange-two-point-function.pdf} \caption{The behaviour of the Rademacher formula near $s=0$. The gray dashed line is the exact answer \eqref{eq:two-point function alternative contour simple evaluation} (multiplied by $i$ to make it real), while the different curves are the answer obtained from \eqref{eq:two-point function evaluated} when truncating the sum at different maximal values of $c$. } \label{fig:limit exchange two-point function} \end{figure} \section{Four-point planar amplitude \texorpdfstring{$A^{\text{p}}(s,t)$}{Ap(s,t)}} \label{sec:planar amplitude derivation} We now derive the equation \eqref{eq:planar four-point function s-channel} for the planar amplitude in the $s$-channel. Many of the steps performed here are analogous to the steps for the two-point function and thus we keep the discussion of these steps brief. The formula for the amplitude in the $u$-channel will turn out to be similar as well. \subsection{Integrand on the Ford circle \texorpdfstring{$C_{a/c}$}{Cac}} We focus on the contribution of the Ford circle $C_{a/c}$ to the planar amplitude given by the integral \eqref{eq:Ap-Ford-circle}. Let us call it $A^\mathrm{p}_{a/c}$. The full planar amplitude $A^{\mathrm{p}}$ is the sum of $A^\mathrm{p}_{a/c}$ for all irreducible fractions $\frac{a}{c}$ plus the cusp contribution $\Delta A^{\mathrm{p}}$. As in our toy example above, we want to perform the change of variables \begin{equation} \tau=\frac{a\tau'+b}{c \tau'+d}\ .
\end{equation} Using the modular behaviour of the theta functions under this transformation, we have \begin{multline} A^{\mathrm{p}}_{a/c} = i \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^2 \tau^2} q^{c^2 s z_{41}z_{32}-c^2tz_{21}z_{43}} \left(\frac{\vartheta_1(z_{21}c \tau,\tau-\frac{d}{c})\vartheta_1(z_{43}c \tau,\tau-\frac{d}{c})}{\vartheta_1(z_{31}c \tau,\tau-\frac{d}{c})\vartheta_1(z_{42}c \tau,\tau-\frac{d}{c})}\right)^{-s} \\ \times \left(\frac{\vartheta_1(z_{32}c \tau,\tau-\frac{d}{c})\vartheta_1(z_{41}c \tau,\tau-\frac{d}{c})}{\vartheta_1(z_{31}c \tau,\tau-\frac{d}{c})\vartheta_1(z_{42}c \tau,\tau-\frac{d}{c})}\right)^{-t}\ . \label{eq:circle contribution Rademacher} \end{multline} Here we renamed $\tau' \to \tau-\frac{d}{c}$.\footnote{The original $\tau$ will not appear again and hopefully this does not lead to confusions.} We also set $q=\mathrm{e}^{2\pi i \tau}$. The overall sign changes again due to the choice of orientation for the contour. The branch of the right-hand side is determined as follows. We first note that the integrand of the original integral \eqref{eq:planar amplitude} simplifies for small $z_{ij}$ to \begin{equation} \left( \frac{\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{-s} \left( \frac{\vartheta_1(z_{32},\tau)\vartheta_1(z_{41},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{-t} \to \left(\frac{z_{21}z_{43}}{z_{31}z_{42}}\right)^{-s}\left(\frac{z_{32}z_{41}}{z_{31}z_{42}}\right)^{-t}\ , \end{equation} independently of $\tau$. This is compatible with the leading behaviour of the integrand \eqref{eq:circle contribution Rademacher} as $z_{ij} \to 0$. Thus we take the principal branch in \eqref{eq:circle contribution Rademacher} for small $z_{ij}$ and then follow the branch smoothly when varying $z_i$. \subsection{Tropicalization} We now again want to push the $\tau$ contour to large values of $\Im \tau$. The leading behaviour is controlled by the function $\mathrm{Trop}$, which appears as the leading exponent $q^\mathrm{Trop}$ as $q \to 0$. It is given by \begin{equation} \mathrm{Trop}=\frac{1}{2}\sum_{i>j} s_{ij} \{c z_{ij}\}(1-\{c z_{ij}\})\ , \end{equation} where we remind the reader that $\{x\}$ denotes the fractional part of $x$. We are again interested in the region with $\mathrm{Trop}<0$, since the regions with $\mathrm{Trop}>0$ give a vanishing contribution to the integral when we take the limit $\Im \tau \to \infty$. Clearly, $\mathrm{Trop}$ is a periodic function with period $\frac{1}{c}$ in all $z_i$'s. As a consequence, the regions where $\mathrm{Trop}<0$ will come in families, since we can always translate a region with $\mathrm{Trop}<0$ by a multiple of $\frac{1}{c}$ in the $z_i$'s to obtain a new region with $\mathrm{Trop}<0$. For example, the subregion of the parameter space $(z_1,z_2,z_3)$ with $\mathrm{Trop}<0$ in the $s$-channel is depicted in Figure~\ref{fig:Trop regions s-channel c=3}. Recall that we fix $z_4 = 1$. \begin{figure} \centering \includegraphics{figures/plot3s.png} \caption{The regions $\Gamma_{n_1,n_2,n_3}$ in parameter space where $\mathrm{Trop}<0$ for $c=3$.} \label{fig:Trop regions s-channel c=3} \end{figure} There are in total $\frac{1}{6}c(c+1)(c+2)$ regions in the $z_i$-parameter space, where $\mathrm{Trop}<0$. We will label them as $\Gamma_{n_1,n_2,n_3}$ with $1 \le n_1 \le n_2 \le n_3 \le c$.
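Both the counting and the stated properties of $\mathrm{Trop}$ are easy to probe numerically. The short Python sketch below (with $z_4=1$ and sample $s$-channel-like values $s>0>t$, $u=-s-t$; all numerical choices are ours) checks the exact $\frac{1}{c}$-periodicity, confirms that $\mathrm{Trop}$ does dip below zero inside the simplex $0\le z_1\le z_2\le z_3\le 1$, and reproduces the count of labels for $c=3$.
\begin{verbatim}
import math, random

def trop(z1, z2, z3, c, s, t):
    # Trop = (1/2) sum_{i>j} s_ij {c z_ij} (1 - {c z_ij}),  with z4 = 1
    u = -s - t
    z = {1: z1, 2: z2, 3: z3, 4: 1.0}
    sij = {(2, 1): s, (4, 3): s, (3, 2): t, (4, 1): t,
           (3, 1): u, (4, 2): u}
    frac = lambda x: x - math.floor(x)
    return 0.5*sum(v*frac(c*(z[i] - z[j]))*(1 - frac(c*(z[i] - z[j])))
                   for (i, j), v in sij.items())

random.seed(0)
c, s, t = 3, 3.2, -0.7

# periodicity with period 1/c in each z_i separately
z1, z2, z3 = sorted(random.random() for _ in range(3))
print(trop(z1, z2, z3, c, s, t) - trop(z1 + 1/c, z2, z3, c, s, t))  # ~ 0

# Trop < 0 occurs inside the simplex 0 <= z1 <= z2 <= z3 <= 1
print(min(trop(*sorted(random.random() for _ in range(3)), c, s, t)
          for _ in range(200000)))                                  # < 0

# number of labels (n1, n2, n3) with 1 <= n1 <= n2 <= n3 <= c
labels = [(n1, n2, n3) for n1 in range(1, c + 1)
          for n2 in range(n1, c + 1) for n3 in range(n2, c + 1)]
print(len(labels), c*(c + 1)*(c + 2)//6)                            # 10 10
\end{verbatim}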
Each such $\Gamma_{n_1,n_2,n_3}$ is fully contained in the following region \begin{subequations} \begin{align} \label{eq:region Rn1,n2,n3} \frac{n_{ij}-1}{c}&\le z_{ij}\le \frac{n_{ij}+1}{c}\ , \qquad ij \in \{21,\, 43\}\ , \\ \frac{n_{ij}}{c}&\le z_{ij}\le \frac{n_{ij}+1}{c}\ , \qquad ij \in \{31,\, 41,\, 32,\, 42\}\ . \end{align} \end{subequations} Here, $n_{ij} \equiv n_i-n_j$ and $n_4 \equiv c$. We denote the contribution from the region $\Gamma_{n_1,n_2,n_3}$ by $A_{a/c}^{n_1,n_2,n_3}$, so that \begin{equation} A_{a/c}^{\mathrm{p}} = \sum_{1 \le n_1 \le n_2 \le n_3 \le c} A_{a/c}^{n_1,n_2,n_3}\ . \end{equation} We then set \begin{equation} z_i=\frac{n_i+\xi_i}{c} \end{equation} on each of the individual regions so that the integration range of $\xi_i$ is always the same in each region. \subsection{Contributions with \texorpdfstring{$\mathrm{Trop}<0$}{Trop<0}} We can determine the correct branch of the Jacobi theta functions raised to the powers of $s$ or $t$ by the same logic as for the two-point function. Inserting eq.~\eqref{eq:log theta1 branch} for the correct branch gives immediately \begin{align} A_{a/c}^{n_1,n_2,n_3}&=i \, \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^5 \tau^2} \int \mathrm{d}\xi_1\, \mathrm{d}\xi_2\, \mathrm{d}\xi_3\ \prod_{i>j} q^{-\frac{1}{2}s_{ij} \xi_{ij}(\xi_{ij}-1)} \mathrm{e}^{2\pi i s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{md}{c}}} \nonumber\\ &\qquad\times \prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d (\ell+n_{ij})}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}(1-\mathrm{e}^{-\frac{2\pi i d (\ell-n_{ij}-1)}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}} \ .\label{eq:integral Aa/c n1,n2,n3} \end{align} The integration region over the $\xi_i$'s is such that both the inequalities \eqref{eq:region Rn1,n2,n3} and $0 \le z_1 \le z_2 \le z_3 \le 1$ are satisfied. This means that for a generic region $\Gamma_{n_1,n_2,n_3}$ we have \begin{equation} -1 \le \xi_{21},\, \xi_{43} \le 1\ , \qquad 0 \le \xi_{31},\, \xi_{32},\, \xi_{41},\, \xi_{42} \le 1\ . \end{equation} For the regions with $n_{21}=0$, we have the smaller integration region where $\xi_{21}\ge 0$ should be imposed. Similarly when $n_{43}=0$, the integration region is restricted by $\xi_{43} \ge 0$. The branch in this formula is defined for $\xi_{ij}>0$, where the constant factor in the infinite product dominates for small $q$. \subsection{Thresholds from the \texorpdfstring{$q$}{q}-expansion} As a next step, we $q$-expand the integrand in \eqref{eq:integral Aa/c n1,n2,n3}. As for the two-point function, it will turn out that for a given $s$, there are only finitely many terms that contribute to the integral. We have to be careful with the two factors $(1-\mathrm{e}^{\frac{2\pi i d n_{21}}{c}} q^{\xi_{21}})^{-s}$ and $(1-\mathrm{e}^{\frac{2\pi i d n_{43}}{c}} q^{\xi_{43}})^{-s}$ that are present in the infinite product in \eqref{eq:integral Aa/c n1,n2,n3}. Since $\xi_{21}$ and $\xi_{43}$ are allowed to go to zero in the integration region, we are not allowed to $q$-expand these factors, but have to leave them unexpanded. 
For the purpose of analyzing which term can dominate where, we notice that any term appearing in the $q$-expansion is of the form \begin{equation} q^{-\frac{1}{2}\sum_{i>j} s_{ij} \xi_{ij}(\xi_{ij}-1)+m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})} (1-\mathrm{e}^{\frac{2\pi i d n_{21}}{c}} q^{\xi_{21}})^{-s}(1- \mathrm{e}^{\frac{2\pi i d n_{43}}{c}}q^{\xi_{43}})^{-s} \label{eq:single term} \end{equation} for four non-negative integers that we denote by $m_\L$, $m_\mathrm{D}$, $m_\mathrm{R}$, and $m_\U$. The names indicate that they play the role of the (square of the) internal masses on the left, bottom, right and top part of a box Feynman diagram that approximates the worldsheet. It is easy to see that these integers satisfy the condition \begin{equation} 0 \le m_\L,\, m_\mathrm{R} \le m_\mathrm{D}+m_\U\ . \label{eq:restriction m1 m3} \end{equation} Let us work out the contribution from such a term to $A_{a/c}^{n_1,n_2,n_3}$. We first consider the leading exponent as $q \to 0$ \begin{multline} \mathrm{Trop}_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}=-\frac{1}{2}\sum_{i>j} s_{ij} \xi_{ij}(\xi_{ij}-1)+ \left(\begin{cases} m_\L \xi_{21} &\;\text{if}\quad \xi_{21}>0 \\ (m_\L-s) \xi_{21}\ &\;\text{if}\quad \xi_{21}<0 \end{cases}\right) + m_\mathrm{D} \xi_{32}\\ +\left(\begin{cases} m_\mathrm{R} \xi_{43} &\;\text{if}\quad \xi_{43}>0 \\ (m_\mathrm{R}-s)\xi_{43} &\;\text{if}\quad \xi_{43}<0 \end{cases}\right) +m_\U(1-\xi_{41})\ . \end{multline} The term contributes to the integral if $\mathrm{Trop}_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}$ becomes negative somewhere on the integration region. A straightforward analysis shows that $\mathrm{Trop}_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}$ attains its minimum at $\xi_{21}=0$ and $\xi_{43}=0$ (but is not differentiable there). Thus it suffices to restrict $\mathrm{Trop}_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}$ to this special case and analyze where it is negative. We have, setting $\xi_3 \equiv \xi$ and $\xi_1=0$, \begin{equation} \mathrm{Trop}_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}\Big|_{\begin{subarray}{c} \xi_{21}=0 \\ \xi_{43}=0 \end{subarray}} =s \xi(\xi-1)+m_\mathrm{D}\xi+m_\U(1-\xi)\ , \end{equation} which coincides with the Trop function in the two-point function case, see eq.~\eqref{eq:Trop two-point function}. Let us remark that this is not surprising from a field theory point of view. The $\xi_i$'s play the role of Schwinger parameters and taking $\xi_{21} \to 0$, $\xi_{43}\to 0$ essentially reduces the diagram to a bubble diagram with masses squared $m_\mathrm{D}$ and $m_\U$, which is what we analyzed in Section~\ref{sec:two-point function}. Thus the same conclusion as there holds and a term of the form \eqref{eq:single term} only contributes to the amplitude for $s \geq (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2$. \subsection{Evaluating a single term in the \texorpdfstring{$q$}{q}-expansion} \label{subsec:evaluating single term q expansion} We now focus on a single term in the $q$-expansion of the form \eqref{eq:single term}. Evaluating such a term is the only essentially new ingredient not present in the two-point function analysis of Section~\ref{sec:two-point function}. Let us change variables as follows \begin{equation} \xi_{21}=\alpha_\L\ , \qquad \xi_{43}=\alpha_\mathrm{R}\ , \qquad \xi_{31}=\frac{1}{s}(-m_\mathrm{D}+s+t_\L+u \alpha_\mathrm{R})\ .
\end{equation} We also insert unity in the form \begin{equation} 1=\sqrt{\frac{-i s \tau}{2 t u}}\int \d t_\mathrm{R} \ q^{\frac{1}{4stu}(s t_\mathrm{R}-(s+2t)t_\L-2tu \alpha_\mathrm{R}+(m_\mathrm{D}+m_\U)t-st)^2}\ . \end{equation} This yields the following contribution from a single term \eqref{eq:single term}, \begin{align} i \, &\int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^5 \tau^2} \int \mathrm{d}\xi_1\, \mathrm{d}\xi_2\, \mathrm{d}\xi_3\ q^{-\frac{1}{2}\sum_{i>j} s_{ij} \xi_{ij}(\xi_{ij}-1)+m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})} \nonumber \\ &\qquad\times (1-\mathrm{e}^{\frac{2\pi i d n_{21}}{c}} q^{\xi_{21}})^{-s}(1- \mathrm{e}^{\frac{2\pi i d n_{43}}{c}}q^{\xi_{43}})^{-s} \nonumber \\ &=i \int_{\longrightarrow} \frac{\d \tau}{c^5 \tau^2} \sqrt{\frac{-i \tau}{2stu}}\int \d t_\L \, \d t_\mathrm{R}\, \d \alpha_\L\, \d \alpha_\mathrm{R}\ q^{-\alpha_\L(t_\L-m_\L)-\alpha_\mathrm{R} (t_\mathrm{R}-m_\mathrm{R})-P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})} \nonumber\\ &\qquad\times (1-\mathrm{e}^{\frac{2\pi i d n_{21}}{c}} q^{\alpha_\L})^{-s}(1- \mathrm{e}^{\frac{2\pi i d n_{43}}{c}}q^{\alpha_\mathrm{R}})^{-s} \ . \label{eq:single contribution before changing integration region} \end{align} Here, the polynomial $P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})$ is given by \begin{subequations} \begin{align} P_{m_\mathrm{D},m_\U}&=-\frac{\det \mathcal{G}_{p_1p_2p_3\ell}}{\det \mathcal{G}_{p_1p_2p_3}}\\ &=-\frac{1}{4stu}\Big[s^2(t_\L-t_\mathrm{R})^2+2 s t(m_\mathrm{D}+m_\U-s)(t_\L+t_\mathrm{R})-4 s t t_\L t_\mathrm{R}\nonumber\\ &\qquad -4st m_\mathrm{D} m_\U+t^2(m_\mathrm{D}-m_\U)^2-s t^2 (2m_\mathrm{D}+2m_\U-s)\Big]\ . \end{align} \end{subequations} Here, $\mathcal{G}_{p_1p_2p_3}$ and $\mathcal{G}_{p_1p_2p_3\ell}$ denote the Gram determinants of the respective momenta (where $\ell$ is the field-theoretic loop momentum). As explained in \cite{Eberhardt:2022zay}, this polynomial is expected from field-theory considerations where it plays the role of the kernel in the Baikov representation \cite{Baikov:1996iu} of the imaginary part of the amplitude. The image of the integration region for the $\xi_i$'s is \begin{equation} \mathcal{R}=\Big\{(\alpha_\L,\alpha_\mathrm{R},t_\L,t_\mathrm{R}) \Big| \begin{array}{l} \;-1\, (0) \le \alpha_\L,\, \alpha_\mathrm{R} \le 1\, , \\ t_\L{-}m_\mathrm{D} \le -u \alpha_\mathrm{R},\, t \alpha_\mathrm{R},\, s\alpha_\L-u \alpha_\mathrm{R},\, s \alpha_\L+t \alpha_\mathrm{R} \le s{+}t_\L{-}m_\mathrm{D} \end{array}\Big\} \ , \label{eq:region R} \end{equation} with $t_\mathrm{R}$ unrestricted. The lower limit on $\alpha_\L$ is $0$ instead of $-1$ when $n_{21}=0$ and the lower limit of $\alpha_\mathrm{R}$ is $0$ when $n_{43}=0$ and we indicated these special cases with the values in parentheses. We claim that we can change the integration region to the following: \begin{equation} \tilde{\mathcal{R}}=\{(\alpha_\L,\alpha_\mathrm{R},t_\L,t_\mathrm{R})\, |\,\alpha_\L,\, \alpha_\mathrm{R}\ge -\infty \, (0) \, ,\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})\ge 0\} \label{eq:region Rtilde} \end{equation} without changing the value of the integral.
To see that this is possible, we need to check that the leading exponent \begin{multline} \mathrm{Trop}= -\alpha_\L(t_\L-m_\L)-\alpha_\mathrm{R}(t_\mathrm{R}-m_\mathrm{R})-P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})\\ -\min(\alpha_\L,0)s -\min(\alpha_\mathrm{R},0) s \label{eq:Trop R Rtilde regions} \end{multline} is everywhere positive on the difference $(\mathcal{R}^c \cap \tilde{\mathcal{R}}) \cup (\mathcal{R} \cap \tilde{\mathcal{R}}^c)$. For this statement to be true, one needs to use the fact that the range of $m_\L$ and $m_\mathrm{R}$ is bounded as in eq.~\eqref{eq:restriction m1 m3}. The statement is also only true if $s \ge (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2$, which is the range where the term contributes. In practice, we checked the correctness of this statement numerically. It would of course be nice to show this analytically, but the algebra involved very quickly becomes complicated. After changing the integration region in \eqref{eq:single contribution before changing integration region}, we can integrate out $\alpha_\L$ and $\alpha_\mathrm{R}$ analytically. In both cases, we need to compute an integral of the form \begin{equation} \int_{-\infty\, (0)}^\infty \mathrm{d}\alpha\ q^{-\alpha t} (1-\mathrm{e}^{2\pi i \varphi}q^\alpha)^{-s}=\frac{i}{2\pi \tau}\int_0^{\infty\, (1)} \mathrm{d}x \ x^{-t-1} \left(1-\mathrm{e}^{2\pi i \varphi}x \right)^{-s} \label{eq:Gamma function type integral} \end{equation} for some phase $\mathrm{e}^{2\pi i \varphi}$. The integration boundaries in parentheses apply when $\varphi \in \ZZ$. Let us first assume that $\tau \in i \RR$ so that $q$ is real. Since varying $\tau$ does not change the branch cut structure of the integrand, the result depends analytically on $\tau$ and we can obtain the general result by analytic continuation. On the right-hand side, we changed variables to $x=q^{\alpha}$. The boundary $\alpha \to \infty$ gets mapped to $x=0$, while the lower boundary gets mapped to $\infty$ and $1$ in the two cases. In the case $\varphi\in \ZZ$, we end up with the integral \begin{equation} \eqref{eq:Gamma function type integral}=\frac{i}{2\pi \tau} \int_0^1 \mathrm{d}x\ x^{-t-1} \left(1-x \right)^{-s}=\frac{i}{2\pi \tau} \frac{\Gamma(1-s)\Gamma(-t)}{\Gamma(1-s-t)}\ . \end{equation} Now assume that $\varphi \not \in \ZZ$. We rotate the contour by defining \begin{equation} y=-\mathrm{e}^{2\pi i \varphi}x=\mathrm{e}^{2\pi i \st{\varphi}} x\ . \end{equation} When rotating the contour, the arc at infinity gives a vanishing contribution to the integral in $s$-channel kinematics and can be discarded. Thus we have \begin{equation} \eqref{eq:Gamma function type integral}=\frac{i\, \mathrm{e}^{2\pi i t \st{\varphi}}}{2\pi \tau} \int_{0}^{\infty} \mathrm{d}y \ y^{-t-1}(1+y)^{-s} =\frac{i\, \mathrm{e}^{2\pi i t \st{\varphi}}}{2\pi \tau} \frac{\Gamma(-t)\Gamma(s+t)}{\Gamma(s)}\ . \end{equation} The choice of branch of $\mathrm{e}^{2\pi i t \st{\varphi}}$ is correct, as can be easily seen from the following two facts: (i) the branch can only jump when $\varphi \in \ZZ$, since then the branch point crosses the integration contour and (ii) this is the correct branch for $\varphi=\frac{1}{2}$, where we do not have to rotate the contour at all. As mentioned above, these results still hold for $\tau \not \in i \RR$ since by varying $\tau$ continuously, no branch points cross the integration contour.
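Both cases of \eqref{eq:Gamma function type integral} (after stripping the measure prefactor $\frac{i}{2\pi\tau}$) can be compared against the closed forms by direct quadrature. A possible sketch, with arbitrarily chosen sample values $s<1$, $t<0$, $s+t>0$ and a generic phase $\varphi$, is the following; both printed differences should vanish up to quadrature error.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

s, t, phi = 0.7, -0.4, 0.27   # sample values with s < 1, t < 0, s + t > 0

# phi not in Z: the principal branch of (1 - e^{2 pi i phi} x)^{-s} is the
# correct one along x > 0, since 1 - e^{2 pi i phi} x never crosses the
# negative real axis
f = lambda x: x**(-t - 1)*(1 - np.exp(2j*np.pi*phi)*x)**(-s)
val = (quad(lambda x: f(x).real, 0, np.inf, limit=400)[0]
       + 1j*quad(lambda x: f(x).imag, 0, np.inf, limit=400)[0])
st = phi - np.floor(phi) - 0.5   # sawtooth (phi is generic here)
print(abs(val - np.exp(2j*np.pi*t*st)*gamma(-t)*gamma(s + t)/gamma(s)))

# phi in Z: the x-integral only runs from 0 to 1
val0 = quad(lambda x: x**(-t - 1)*(1 - x)**(-s), 0, 1)[0]
print(abs(val0 - gamma(1 - s)*gamma(-t)/gamma(1 - s - t)))
\end{verbatim}
The restriction $s<1$ in this sketch is only needed so that the $\varphi\in\ZZ$ integral converges as written; for larger $s$ the Beta-function formula holds by analytic continuation.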
Coming back to \eqref{eq:single contribution before changing integration region}, we can now fully evaluate the contribution of a single term in the $q$-expansion: \begin{align} \eqref{eq:single contribution before changing integration region} &=i \int_{\longrightarrow} \frac{\d \tau}{c^5 \tau^2} \sqrt{\frac{- i \tau}{2stu}}\int_{P_{m_\mathrm{D},m_\U} > 0}\hspace{-1cm} \d t_\L \, \d t_\mathrm{R} \left(\frac{i}{2\pi \tau}\right)^2 q^{-P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})} \nonumber\\ &\qquad\times\! \left(\!\begin{cases}\mathrm{e}^{2\pi i (t_\L-m_\L) \st{\frac{d n_{21}}{c}}} &\;\text{if}\quad n_{21}>0 \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\;\text{if}\quad n_{21}=0 \end{cases}\right)\! \left(\!\begin{cases}\mathrm{e}^{2\pi i(t_\mathrm{R}-m_\mathrm{R}) \st{\frac{d n_{43}}{c}}} & \;\text{if}\quad n_{43}>0 \\ \frac{\sin(\pi(s+t_\mathrm{R}))}{\sin(\pi s)} &\;\text{if}\quad n_{43}=0 \end{cases}\right) \nonumber\\ &\qquad\times \frac{\Gamma(-t_\L+m_\L)\Gamma(-t_\mathrm{R}+m_\mathrm{R})\Gamma(s+t_\L-m_\L)\Gamma(s+t_\mathrm{R}-m_\mathrm{R})}{\Gamma(s)^2}\ . \end{align} In order to integrate out the $\tau$ variable, we can use the Hankel contour representation of the Gamma function, \begin{equation} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{(-i \tau)^{\frac{7}{2}}} \ \mathrm{e}^{-2\pi i \tau a}=-i\, (2\pi a)^{\frac{5}{2}}\int_{\uparrow} \mathrm{d}x \ (-x)^{-\frac{7}{2}}\ \mathrm{e}^{-x}=\frac{2\pi(2\pi a)^{\frac{5}{2}}}{\Gamma(\frac{7}{2})} \end{equation} which we already explained in Section~\ref{subsec:2-pt assembling}. The result is \begin{align} \eqref{eq:single contribution before changing integration region} &=-\frac{16\pi i}{15c^5 \sqrt{stu}} \int_{P_{m_\mathrm{D},m_\U} > 0}\hspace{-1cm} \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}\nonumber\\ &\qquad\times\!\left(\!\begin{cases}\mathrm{e}^{2\pi i (t_\L-m_\L) \st{\frac{d n_{21}}{c}}} &\;\text{if}\quad n_{21}>0 \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\;\text{if}\quad n_{21}=0 \end{cases}\right)\!\left(\! \begin{cases}\mathrm{e}^{2\pi i(t_\mathrm{R}-m_\mathrm{R}) \st{\frac{d n_{43}}{c}}} &\;\text{if}\quad n_{43}>0 \\ \frac{\sin(\pi(s+t_\mathrm{R}))}{\sin(\pi s)} &\;\text{if}\quad n_{43}=0 \end{cases}\right)\nonumber\\ &\qquad\times \frac{\Gamma(-t_\L+m_\L)\Gamma(-t_\mathrm{R}+m_\mathrm{R})\Gamma(s+t_\L-m_\L)\Gamma(s+t_\mathrm{R}-m_\mathrm{R})}{\Gamma(s)^2} \ .
\label{eq:contribution single q term} \end{align} \subsection{Assembling the result} We can now combine eq.~\eqref{eq:integral Aa/c n1,n2,n3} for the contribution of $A_{a/c}^{n_1,n_2,n_3}$ to the amplitude with our evaluation of a contribution of a single term in the $q$-expansion \eqref{eq:contribution single q term} to obtain \begin{align} A_{a/c}^{n_1,n_2,n_3}&=-\frac{16\pi i \, \mathrm{e}^{2\pi i \sum_{i>j} s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{md}{c}}}}{15c^5 \sqrt{stu}} \sum_{\begin{subarray}{c} m_\L,m_\mathrm{D},m_\mathrm{R},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} [q^{m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})}] \nonumber\\ &\quad \times \prod_{i>j}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d (\ell+n_{ij})}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}} \hspace{-0.6cm} \prod_{\ell=1+\delta_{ij,21}+\delta_{ij,43}}^\infty \hspace{-0.6cm} (1-\mathrm{e}^{-\frac{2\pi i d (\ell-n_{ij}-1)}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}} \nonumber\\ &\quad \times \int_{P_{m_\mathrm{D},m_\U} >0} \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}\nonumber\\ &\quad\times\!\left(\!\begin{cases}\mathrm{e}^{2\pi i (t_\L-m_\L) \st{\frac{d n_{21}}{c}}} &\,\text{if}\quad n_{21}>0 \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\,\text{if}\quad n_{21}=0 \end{cases}\right)\!\left(\! \begin{cases}\mathrm{e}^{2\pi i(t_\mathrm{R}-m_\mathrm{R}) \st{\frac{d n_{43}}{c}}} &\,\text{if}\quad n_{43}>0 \\ \frac{\sin(\pi(s+t_\mathrm{R}))}{\sin(\pi s)} &\,\text{if}\quad n_{43}=0 \end{cases}\right)\nonumber\\ &\quad\times \frac{\Gamma(-t_\L+m_\L)\Gamma(-t_\mathrm{R}+m_\mathrm{R})\Gamma(s+t_\L-m_\L)\Gamma(s+t_\mathrm{R}-m_\mathrm{R})}{\Gamma(s)^2}\ . \end{align} We can simplify this formula further. Let us look at the coefficients of the infinite product in more detail. We notice that the phase of a given term in the infinite product is entirely determined by the exponent and we can write \begin{align} &[q^{m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})}] \prod_{i>j}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d (\ell+n_{ij})}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}\nonumber\\ &\hspace{3.85cm}\times\prod_{\ell=1+\delta_{ij,21}+\delta_{ij,43}}^\infty \!\!\!\!\! (1-\mathrm{e}^{-\frac{2\pi i d (\ell-n_{ij}-1)}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}} \nonumber \\ &=\mathrm{e}^{\frac{2\pi i d}{c}(m_\L n_{21}+m_\mathrm{D} n_{32}+m_\mathrm{R} n_{43}-m_\U(n_{41}+1))} [q^{m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})}]\nonumber\\ &\qquad\times\prod_{i>j}\prod_{\ell=1}^\infty (1- q^{\ell-\xi_{ij}})^{-s_{ij}}\prod_{\ell=1+\delta_{ij,21}+\delta_{ij,43}}^\infty(1- q^{\ell+\xi_{ij}-1})^{-s_{ij}}\ . \end{align} Part of this phase combines with the phase that we obtained from evaluating the integrals over $\alpha_\L$ and $\alpha_\mathrm{R}$. We note that $\mathrm{e}^{\frac{2\pi i m_\L n_{21}d}{c}}\mathrm{e}^{-2\pi i m_\L\st{\frac{dn_{21}}{c}}}=(-1)^{m_\L}$. Setting \begin{equation} q_\L=q^{\xi_{21}},\qquad q_\mathrm{D}=q^{\xi_{32}},\qquad q_\mathrm{R}=q^{\xi_{43}},\qquad q_\U=q^{1-\xi_{41}} \end{equation} recovers the definition \eqref{eq:QmL,mD,mR,mU definition} for the polynomial coefficients $Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t)$.
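Since we do not reproduce \eqref{eq:QmL,mD,mR,mU definition} here, let us note that, as the display above shows, these coefficients can be generated directly from the phase-stripped product. A small \textsc{SymPy} sketch is given below; the truncation order, the sample coefficients and the monomial dictionary (which follows from $\xi_{41}=\xi_{21}+\xi_{32}+\xi_{43}$ together with $q=q_\L q_\mathrm{D} q_\mathrm{R} q_\U$) are our choices.
\begin{verbatim}
import sympy as sp

s, t = sp.symbols('s t')
u = -s - t
qvars = qL, qD, qR, qU = sp.symbols('qL qD qR qU')
q = qL*qD*qR*qU
M = 2                             # keep exponents <= M in every variable

def trunc(p):
    return sp.Add(*[tm for tm in sp.Add.make_args(sp.expand(p))
                    if all(sp.degree(tm, v) <= M for v in qvars)])

def factor_series(mon, e):        # truncated series of (1 - mon)^e
    return sum(sp.binomial(e, k)*(-mon)**k for k in range(M + 1))

# (s_ij, q^{1-xi_ij}, q^{xi_ij}, first factor of second product omitted?)
pairs = [(s, qD*qR*qU, qL, True), (t, qL*qR*qU, qD, False),
         (s, qL*qD*qU, qR, True), (u, qR*qU, qL*qD, False),
         (u, qL*qU, qD*qR, False), (t, qU, qL*qD*qR, False)]

prod = sp.Integer(1)
for sij, down, up, skip in pairs:
    for l in range(1, M + 2):
        prod = trunc(prod*factor_series(q**(l - 1)*down, -sij))
        if not (skip and l == 1):
            prod = trunc(prod*factor_series(q**(l - 1)*up, -sij))

def Qc(mL, mD, mR, mU):           # coefficient of qL^mL qD^mD qR^mR qU^mU
    cf = prod
    for v, m in zip(qvars, (mL, mD, mR, mU)):
        cf = cf.coeff(v, m)
    return sp.simplify(cf)

print(Qc(1, 1, 0, 0))   # -s - t
print(Qc(1, 0, 1, 0))   # 0, consistent with the bound m_L <= m_D + m_U
\end{verbatim}
The vanishing of the second coefficient illustrates \eqref{eq:restriction m1 m3}: the pure $q_\L$ and $q_\mathrm{R}$ factors were kept unexpanded, so $m_\L$ and $m_\mathrm{R}$ can only be generated in combination with $m_\mathrm{D}$ and $m_\U$.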
We thus have at this stage \begin{align} A_{a/c}^{n_1,n_2,n_3}&=-\frac{16\pi i \, \mathrm{e}^{2\pi i \sum_{i>j} s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{md}{c}}}}{15c^5 \sqrt{stu}} \sum_{\begin{subarray}{c} m_\L,m_\mathrm{D},m_\mathrm{R},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} (-1)^{m_\L+m_\mathrm{R}}\, Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t) \nonumber\\ &\qquad \times \mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n_{32}-m_\U(n_{41}+1))}\int_{P_{m_\mathrm{D},m_\U}> 0}\hspace{-1cm} \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}} \nonumber\\ &\qquad\times\left(\!\begin{cases}\mathrm{e}^{2\pi i t_\L \st{\frac{d n_{21}}{c}}} &\;\text{if}\quad n_{21}>0 \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\;\text{if}\quad n_{21}=0 \end{cases}\right)\left(\!\begin{cases}\mathrm{e}^{2\pi i t_\mathrm{R} \st{\frac{d n_{43}}{c}}} &\;\text{if}\quad n_{43}>0 \\ \frac{\sin(\pi(s+t_\mathrm{R}))}{\sin(\pi s)} &\;\text{if}\quad n_{43}=0 \end{cases}\right)\nonumber\\ &\qquad\times \frac{\Gamma(-t_\L+m_\L)\Gamma(-t_\mathrm{R}+m_\mathrm{R})\Gamma(s+t_\L-m_\L)\Gamma(s+t_\mathrm{R}-m_\mathrm{R})}{\Gamma(s)^2}\ . \end{align} We can now carry out the sum over $m_\L$ and $m_\mathrm{R}$. Recall the definition for the polynomials $Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})$ given in \eqref{eq:definition Qm2,m4}, which we reproduce here: \begin{multline} Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})\equiv \!\!\!\sum_{m_\L,m_\mathrm{R}=0}^{m_\mathrm{D}+m_\U} Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t) (-t_\L)_{m_\L}(-s-t_\L+m_\L+1)_{m_\mathrm{D}+m_\U-m_\L} \\ \times (-t_\mathrm{R})_{m_\mathrm{R}}(-s-t_\mathrm{R}+m_\mathrm{R}+1)_{m_\mathrm{D}+m_\U-m_\mathrm{R}}\ . \end{multline} We used the fact that the range of $m_\L$ and $m_\mathrm{R}$ is given by \eqref{eq:restriction m1 m3}.\footnote{This is almost what we called $Q_{m_\mathrm{D},m_\U}$ in our earlier paper \cite{Eberhardt:2022zay}. For $m_\mathrm{D}=m_\U$, the two are literally the same, while for $m_\mathrm{D}\neq m_\U$ it differs by a factor of $2$ from what we called $Q_{m_\mathrm{D},m_\U}$ before, since we include both $m_\mathrm{D} < m_\U$ and $m_\mathrm{D} > m_\U$ in the present sum.} We can hence write \begin{align} A_{a/c}^{n_1,n_2,n_3}&=-\frac{16\pi i \, \mathrm{e}^{2\pi i \sum_{i>j} s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{md}{c}}}}{15c^5 \sqrt{stu}} \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n_{32}-m_\U(n_{41}+1))}\nonumber\\ &\qquad \times \int_{P_{m_\mathrm{D},m_\U} > 0} \hspace{-1cm} \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}} \, Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})\nonumber\\ &\qquad\times\left(\begin{cases}\mathrm{e}^{2\pi i t_\L \st{\frac{d n_{21}}{c}}} &\;\text{if}\quad n_{21}>0 \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\;\text{if}\quad n_{21}=0 \end{cases}\right)\left(\begin{cases}\mathrm{e}^{2\pi i t_\mathrm{R} \st{\frac{d n_{43}}{c}}} &\;\text{if}\quad n_{43}>0 \\ \frac{\sin(\pi(s+t_\mathrm{R}))}{\sin(\pi s)} &\;\text{if}\quad n_{43}=0 \end{cases}\right)\nonumber\\ &\qquad\times \frac{\Gamma(-t_\L)\Gamma(-t_\mathrm{R})\Gamma(s+t_\L-m_\mathrm{D}-m_\U)\Gamma(s+t_\mathrm{R}-m_\mathrm{D}-m_\U)}{\Gamma(s)^2}\ . \label{eq:planar four point function Aa/c n1,n2,n3 final result} \end{align} \subsection{Renaming \texorpdfstring{$(n_1,n_2,n_3)$}{(n1,n2,n3)}} The final step is an aesthetic one.
We change our labelling of the contributions $A_{a/c}^{n_1,n_2,n_3}$ to make the final formula more symmetric. Let us introduce \begin{equation} n_\L=n_{21}\ , \qquad n_\mathrm{D}=n_{32}\ , \qquad n_\mathrm{R}=n_{43}\ , \qquad n_\U=n_1-1\ , \end{equation} so that $n_\L,\, n_\mathrm{D},\, n_\mathrm{R},\, n_\U \ge 0$ and their sum is constrained to $c-1$. In terms of these variables, the prefactor phase can be written as \begin{align} \sum_{i>j} s_{ij} \sum_{m=1}^{n_{ij}} \bigst{\frac{md}{c}}&=s \bigg[ \sum_{m=1}^{n_\L}+\sum_{m=1}^{n_\mathrm{R}}-\sum_{m=1}^{n_\mathrm{D}+n_\L}-\sum_{m=1}^{n_\mathrm{D}+n_\mathrm{R}} \bigg]\bigst{\frac{md}{c}} \nonumber\\ &\qquad+t \bigg[ \sum_{m=1}^{n_\mathrm{D}}+\sum_{m=1}^{c-1-n_\U}-\sum_{m=1}^{n_\mathrm{D}+n_\L}-\sum_{m=1}^{n_\mathrm{D}+n_\mathrm{R}} \bigg]\bigst{\frac{md}{c}} \\ &=-s \bigg[ \sum_{m=n_\L+1}^{n_\L+n_\mathrm{D}}+\sum_{m=n_\mathrm{R}+1}^{n_\mathrm{R}+n_\mathrm{D}}\bigg]\bigst{\frac{md}{c}}\nonumber\\ &\qquad-t \bigg[ \sum_{m=n_\mathrm{D}+1}^{n_\L+n_\mathrm{D}}-\sum_{m=n_\mathrm{D}+n_\mathrm{R}+1}^{c-1-n_\U}\bigg]\bigst{\frac{md}{c}}\ . \end{align} By using periodicity in $m$ mod $c$, we can rewrite the last term as follows: \begin{align} \sum_{m=n_\mathrm{D}+n_\mathrm{R}+1}^{c-1-n_\U} \bigst{\frac{md}{c}}=\sum_{m=n_\mathrm{D}+n_\mathrm{R}+1-c}^{-1-n_\U} \bigst{\frac{md}{c}}=-\sum_{m=n_\U+1}^{n_\U+n_\L} \bigst{\frac{md}{c}}\ , \end{align} where we shifted the summation domain by $c$ steps and then renamed $m \to -m$. Thus, \begin{align} \sum_{i>j} s_{ij} \sum_{m=1}^{n_{ij}} \bigst{\frac{md}{c}} &=-s \sum_{a=\L,\mathrm{R}} \sum_{m=n_a+1}^{n_a+n_\mathrm{D}}\bigst{\frac{md}{c}}-t \sum_{a=\mathrm{D},\U} \sum_{m=n_a+1}^{n_a+n_\L}\bigst{\frac{md}{c}}\ . \end{align} This still looks slightly asymmetric. However, we claim that \begin{equation} \sum_{a=\L,\mathrm{R}} \sum_{m=n_a+1}^{n_a+n_\mathrm{D}}\bigst{\frac{md}{c}}=\sum_{a=\L,\mathrm{R}} \sum_{m=n_a+1}^{n_a+n_\U}\bigst{\frac{md}{c}} \end{equation} and similarly the second term is invariant under $n_\L \to n_\mathrm{R}$, which makes it obvious that the expression has in fact the required symmetries. To prove this, we only need to use that $\st{x}$ is an odd function. We have \begin{align} &\sum_{a=\L,\mathrm{R}} \sum_{m=n_a+1}^{n_a+n_\mathrm{D}}\bigst{\frac{md}{c}}-\sum_{a=\L,\mathrm{R}} \sum_{m=n_a+1}^{n_a+n_\U}\bigst{\frac{md}{c}} \nonumber\\ &=\bigg[\sum_{m=n_\L+1}^{n_\L+n_\mathrm{D}} -\sum_{m=-n_\mathrm{R}-n_\mathrm{D}}^{-n_\mathrm{R}-1}-\sum_{m=n_\L+1}^{n_\L+n_\U}+\sum_{m=-n_\mathrm{R}-n_\U}^{-n_\mathrm{R}-1}\bigg] \bigst{\frac{md}{c}} \\ &=\bigg[\sum_{m=n_\L+1}^{n_\L+n_\mathrm{D}} -\sum_{m=n_\L+n_\U+1}^{n_\L+n_\mathrm{D}+n_\U}-\sum_{m=n_\L+1}^{n_\L+n_\U}+\sum_{m=n_\L+n_\mathrm{D}+1}^{n_\L+n_\mathrm{D}+n_\U}\bigg] \bigst{\frac{md}{c}}=0\ . \end{align} Here we sent $m \to -m$ in the second and fourth term and then shifted summation variables by $c$. The first and last term, as well as the second and third term then join up and the terms cancel. Thus we can write the phases fully symmetrically as follows, \begin{equation} \sum_{i>j} s_{ij} \sum_{m=1}^{n_{ij}} \bigst{\frac{md}{c}}=-\frac{1}{2}\sum_{\begin{subarray}{c} a=\L,\mathrm{R}\\ b=\mathrm{D},\U \end{subarray}} \bigg[s \sum_{m=n_a+1}^{n_a+n_b}+\, t \sum_{m=n_b+1}^{n_a+n_b}\bigg] \bigst{\frac{md}{c}}\ . \end{equation} Inserting this into \eqref{eq:planar four point function Aa/c n1,n2,n3 final result} then finally yields \eqref{eq:planar four-point function s-channel}.
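As a quick cross-check of these sawtooth manipulations, the fully symmetric form can be verified numerically for random values of $c$, $d$ and of the labels. A possible sketch is the following (the convention $\st{x}=0$ for $x\in\frac{1}{2}\ZZ$ matters here, since the summation ranges can hit integer arguments; the kinematic values are arbitrary):
\begin{verbatim}
import math, random

def st(x):
    # sawtooth, odd and 1-periodic, with st(x) = 0 for x in (1/2) Z
    return 0.0 if abs(2*x - round(2*x)) < 1e-9 else x - math.floor(x) - 0.5

def S(lo, hi, d, c):
    # sum_{m=lo}^{hi} st(m d / c)
    return sum(st(m*d/c) for m in range(lo, hi + 1))

random.seed(1)
s, t = 2.3, -0.6
u = -s - t
for _ in range(1000):
    c = random.randrange(2, 15)
    d = random.choice([x for x in range(1, c) if math.gcd(x, c) == 1])
    cuts = sorted(random.randint(0, c - 1) for _ in range(3))
    nL, nD = cuts[0], cuts[1] - cuts[0]
    nR, nU = cuts[2] - cuts[1], (c - 1) - cuts[2]
    n21, n32, n43 = nL, nD, nR
    n31, n42, n41 = nL + nD, nD + nR, nL + nD + nR
    lhs = (s*(S(1, n21, d, c) + S(1, n43, d, c))
           + t*(S(1, n32, d, c) + S(1, n41, d, c))
           + u*(S(1, n31, d, c) + S(1, n42, d, c)))
    rhs = -0.5*sum(s*S(na + 1, na + nb, d, c) + t*S(nb + 1, na + nb, d, c)
                   for na in (nL, nR) for nb in (nD, nU))
    assert abs(lhs - rhs) < 1e-9
print("identity verified on all samples")
\end{verbatim}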
This is our final formula for the planar open-string amplitude in the $s$-channel. \subsection{Results in the \texorpdfstring{$u$}{u}-channel} \label{subsec:planar u-channel} We now consider the $u$-channel contribution. As we shall see, we almost get it for free from the $s$-channel contribution. There are $\frac{1}{6}(c-1)c(c+1)$ regions in $(z_1,z_2,z_3)$-parameter space for which $\mathrm{Trop}<0$. We will denote them by $\Gamma_{n_1,n_2,n_3}$ where $1 \le n_1 \le n_2 < n_3 \le c$. They are cut out by the inequalities \begin{subequations} \label{eq:bounds regions u-channel} \begin{align} \frac{n_{ij}}{c}&\le z_{ij}\le \frac{n_{ij}+1}{c}\ , \qquad ij \in \{21,\, 41,\, 43\}\ , \\ \frac{n_{32}-1}{c}&\le z_{32}\le \frac{n_{32}}{c}\ , \\ \frac{n_{ij}-1}{c}&\le z_{ij}\le \frac{n_{ij}+1}{c}\ , \qquad ij \in \{31,\, 42\}\ . \end{align} \end{subequations} Here we defined again $n_4\equiv c$. Let us set $z_i=\frac{n_i+\xi_i}{c}$ as before. It follows that the contribution $A_{a/c}^{n_1,n_2,n_3}$ to the $u$-channel amplitude equals \begin{align} A_{a/c}^{n_1,n_2,n_3}&=i \, \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^5 \tau^2} \int \mathrm{d}\xi_1\, \mathrm{d}\xi_2\, \mathrm{d}\xi_3\ \prod_{i>j} q^{-\frac{1}{2}s_{ij} \xi_{ij}(\xi_{ij}-1)} \mathrm{e}^{2\pi i s_{ij} \sum_{m=1}^{n_{ij}-\delta_{ij,32}} \st{\frac{md}{c}}} \nonumber\\ &\qquad\times \prod_{\begin{subarray}{c} i>j\\ ij \ne 32 \end{subarray}}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d (\ell+n_{ij})}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}(1-\mathrm{e}^{-\frac{2\pi i d (\ell-n_{ij}-1)}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}} \nonumber\\ &\qquad\times \prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d (\ell+n_{32}-1)}{c}} q^{\ell-\xi_{32}-1})^{-s_{32}}(1-\mathrm{e}^{-\frac{2\pi i d (\ell-n_{32})}{c}} q^{\ell+\xi_{32}})^{-s_{32}}\ . \label{eq:planar u channel before swap} \end{align} This formula is straightforward to derive. To deal with the factor of the form $\vartheta_1(c \tau z_{32},\tau-\frac{d}{c})=\vartheta_1(\tau((n_{32}-1)+(\xi_{32}+1)), \tau - \frac{d}{c})$, we use eq.~\eqref{eq:log theta1 branch} with $n=n_{32}-1$ and then insert $\xi=\xi_{32}+1$, which is positive. This is almost identical to \eqref{eq:integral Aa/c n1,n2,n3}, except for a different integration region over the $\xi_i$'s and a slightly different phase. We also treated the factor with $ij=32$ separately in order to ensure that we land on the correct branch. We can relate this to the $s$-channel contributions as follows. Consider swapping $\xi_2$ with $\xi_3$, $n_2$ with $n_3$ and $s$ with $u$ (i.e.\ we swap all the labels 2 with the labels 3). This turns $\xi_{32} \to -\xi_{32}$, $n_{32} \to -n_{32}$, while the terms in the second line get permuted. We can then combine them with the terms in the third line to obtain \begin{align} A_{a/c}^{n_1,n_3,n_2}\Big|_{2 \leftrightarrow 3}&=i \, \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^5 \tau^2} \int \mathrm{d}\xi_1\, \mathrm{d}\xi_2\, \mathrm{d}\xi_3\ \prod_{i>j} q^{-\frac{1}{2}s_{ij} \xi_{ij}(\xi_{ij}-1)} \nonumber\\ &\qquad\times \mathrm{e}^{2\pi i \sum_{i>j,\, ij \ne 32}s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{md}{c}}+2\pi i t \sum_{m=1}^{-n_{32}-1}\st{\frac{md}{c}} } \nonumber\\ &\qquad\times \prod_{i>j}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d (\ell+n_{ij})}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}(1-\mathrm{e}^{-\frac{2\pi i d (\ell-n_{ij}-1)}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}}\ .
\end{align} In terms of the new $\xi_i$'s, the integration region is bounded by \begin{equation} -1 \le \xi_{21},\, \xi_{43} \le 1 \ , \qquad 0 \le \xi_{31},\, \xi_{41},\, \xi_{32},\, \xi_{42} \le 1\ , \end{equation} which coincides with the integration region in the $s$-channel. Up to the slightly different phase, this directly coincides with the $s$-channel $A_{a/c}^{n_1,n_2,n_3}$. Note that $n_{32}<0$, so a modification of this particular factor in the phase is expected. Note also that $n_{21}>0$ and $n_{43}>0$ on the right-hand side and thus only one of the cases in the $s$-channel formula appears in this case. Let us also express this result in terms of $n_\L=n_{31}$, $n_\mathrm{D}=-n_{32}$, $n_\mathrm{R}=n_{42}$ and $n_\U=c-1-n_{41}$, so that $n_\L+n_\mathrm{D}+n_\mathrm{R}+n_\U=c-1$. We have of course also the following inequalities: \begin{equation} n_\L>0\ ,\quad n_\mathrm{D}<0,\quad n_\mathrm{R}>0\ , \quad n_\U \ge 0\ , \quad n_\L+n_\mathrm{D} \ge 0\ , \quad n_\mathrm{R}+n_\mathrm{D} \ge 0\ . \end{equation} Taking \eqref{eq:planar four-point function s-channel} and exchanging all labels finally gives \begin{align} A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}&=-\frac{16\pi i \, \mathrm{e}^{-2\pi is \sum_{a=\L,\mathrm{R}}\sum_{m=n_a+n_\mathrm{D}+1}^{n_a} \st{\frac{md}{c}}+2\pi i t \big[\sum_{m=n_\L+1}^{n_\L+n_\mathrm{R}+n_\mathrm{D}}-\sum_{m=-n_\mathrm{D}}^{n_\mathrm{R}}\big] \st{\frac{md}{c}}}}{15c^5 \sqrt{stu}} \nonumber\\ &\qquad \times \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le u \end{subarray}}\hspace{-.5cm}\mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n_\mathrm{D}+m_\U n_\U)}\int_{P_{m_\mathrm{D},m_\U}(s,u,t_\L,t_\mathrm{R})\ge 0} \hspace{-2cm} \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,u,t_\L,t_\mathrm{R})^{\frac{5}{2}} \nonumber\\ &\qquad\times Q_{m_\mathrm{D},m_\U}(s,u,t_\L,t_\mathrm{R})\frac{\Gamma(-t_\L)\Gamma(u+t_\L-m_\mathrm{D}-m_\U)}{\Gamma(u)}\, \mathrm{e}^{2\pi i t_\L \st{\frac{d n_\L}{c}}}\nonumber\\ &\qquad\times (\L \leftrightarrow \mathrm{R})\ . \label{eq:planar four point function u-channel Rademacher} \end{align} \section{Four-point non-planar amplitude \texorpdfstring{$A^{\text{n-p}}(s,t)$}{Anp(s,t)}}\label{sec:non-planar} Let us derive the analogous formula for the non-planar annulus amplitude $A^{\text{n-p}}(s,t)$. Recall from Section~\ref{subsec:basic amplitudes} that the amplitude of the non-planar annulus was given by \begin{equation} A^{\text{n-p}} = \frac{-i}{32(1-\mathrm{e}^{-\pi i s})}\int_{\Gamma} \d \tau\, \d z_1 \, \d z_2 \, \d z_3 \, \prod_{j=1}^2\prod_{i=3}^4 \vartheta_4(z_{ij},\tau)^{-s_{ij}}\big(\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)\big)^{-s}\ . \label{eq:non-planar integrand} \end{equation} The integration contour $\Gamma$ in the $\tau$-plane is the one described in Figure~\ref{fig:tau contours} with the endpoints at $\tau=0$ and $\tau=2$. The prefactor $(1-\mathrm{e}^{-\pi i s})^{-1}$ came from the choice of the contour, but we will suppress it from now on for readability by introducing $\tilde{A}^{\text{n-p}} = (1-\mathrm{e}^{-\pi i s}) A^{\text{n-p}}$. The integration region in the $z_i$'s can be described by the inequalities \begin{equation} 0 \le z_{21},\, z_{43} \le 1\ , \quad 0 \le z_{42}\le 1\, . \end{equation} Note also that the integrand is periodic in $(z_1,z_2) \to (z_1+1,z_2+1)$, which corresponds to taking the punctures around the inner boundary.
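Both the periodicity just mentioned and the $\vartheta_4$-to-$\vartheta_1$ shift identity that is worked out in the next subsection are elementary properties of the theta functions; a short numerical confirmation is nevertheless cheap. The sketch below uses the series conventions written in the comments (our choice, consistent with the formulas in the text), at an arbitrary sample point:
\begin{verbatim}
import numpy as np

def theta1(z, tau, N=40):
    # theta_1(z,tau) = -i sum_n (-1)^n e^{i pi tau (n+1/2)^2
    #                                       + 2 pi i (n+1/2) z}
    n = np.arange(-N, N)
    return -1j*np.sum((-1.0)**n*np.exp(1j*np.pi*tau*(n + 0.5)**2
                                       + 2j*np.pi*(n + 0.5)*z))

def theta4(z, tau, N=40):
    # theta_4(z,tau) = sum_n (-1)^n e^{i pi tau n^2 + 2 pi i n z}
    n = np.arange(-N, N + 1)
    return np.sum((-1.0)**n*np.exp(1j*np.pi*tau*n**2 + 2j*np.pi*n*z))

z, tau = 0.23 + 0.11j, 0.31 + 1.3j
print(abs(theta4(z + 1, tau) - theta4(z, tau)))            # periodicity
print(abs(theta4(z + tau/2, tau)                           # shift identity
          - 1j*np.exp(-0.25j*np.pi*tau - 1j*np.pi*z)*theta1(z, tau)))
\end{verbatim}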
We again want to compute the contribution from the integral around the circles $C_{a/c}$, where now $0<\frac{a}{c} \le 2$. The treatment of the correct branches is more complicated in this case compared to the planar case and the generalization of the computation is not entirely straightforward. \subsection{Shifting \texorpdfstring{$z_i$}{zi}} Our strategy will be to recycle the computation of the planar annulus as much as possible. It is thus advantageous to relate $\vartheta_4$ to $\vartheta_1$. Let us set \begin{equation} z_1=y_1,\qquad z_2=y_2,\qquad z_3=y_3+\frac{\tau}{2},\qquad z_4=y_4+\frac{\tau}{2}\, . \end{equation} We do not fix $z_4=1$ for the moment. Then some of the arguments of the theta functions in \eqref{eq:non-planar integrand} become $\vartheta_4(y_{31}+\frac{\tau}{2},\tau)$ etc. Let us compute \begin{subequations} \begin{align} \vartheta_4(z,\tau)=\vartheta_4(y+\tfrac{\tau}{2},\tau)&=\sum_{n \in \ZZ} (-1)^n \mathrm{e}^{2\pi i n(y+\frac{\tau}{2})+\pi i n^2 \tau }\\ &=\mathrm{e}^{-\frac{\pi i \tau}{4}-\pi i y} \sum_{n \in \ZZ} (-1)^n \mathrm{e}^{2\pi i (n+\frac{1}{2}) y+\pi i (n+\frac{1}{2})^2 \tau} \\ &= i \mathrm{e}^{-\frac{\pi i \tau}{4}-\pi i y} \vartheta_1(y,\tau)\ . \end{align} \end{subequations} We can thus write \begin{equation} \tilde{A}^{\text{n-p}} =\frac{-i}{32}\int_{\Gamma} \d \tau\, \d y_1 \, \d y_2 \, \d y_3 \, \mathrm{e}^{\pi i s-\frac{\pi i s\tau}{2}-\pi i s(y_3+y_4-y_1-y_2)}\prod_{i>j} \vartheta_1(y_{ij},\tau)^{-s_{ij}} \ , \label{eq:non-planar annulus shifted y} \end{equation} where the integration contour in $y_i$ follows from the one in $z_i$ by shifting. The natural way to choose the branch in this expression is to take $y_i$ close to zero with $y_{ij}>0$; then the integrand simplifies to \begin{equation} \mathrm{e}^{\pi i s-\frac{\pi i s \tau}{2}} \prod_{i>j} y_{ij}^{-s_{ij}} \label{eq:theta1 choice of branch cut}\ . \end{equation} Since all $y_{ij}$'s are positive, the branch is the canonical one. We have to make sure that this choice follows from the original choice of branch in \eqref{eq:non-planar integrand} by analytic continuation, which was specified by taking all $z_i$'s close to zero by similar reasoning. It suffices to do this for one of the $\vartheta_4$'s. We can also assume that $\tau$ is purely imaginary and very large since the overall phase depends continuously on $\tau$. Consider \begin{equation} \log \vartheta_1(z+\tfrac{\sigma\tau}{2},\tau) \end{equation} for $\sigma \in [0,1]$ and take $z \in \RR$ small, but bigger than 0. For $\sigma=0$, the choice of branch is clear, since $\vartheta_1(z,\tau)\to z \vartheta_1'(0,\tau) \sim 2\pi z \mathrm{e}^{\frac{\pi i \tau}{4}}$ as $\Im \tau \to \infty$. It corresponds to the choice of branch in \eqref{eq:theta1 choice of branch cut}. We want to follow the branch smoothly from $\sigma=0$ to $\sigma=1$. For large $\Im \tau$ we have approximately \begin{equation} \log \vartheta_1(z+\tfrac{\sigma\tau}{2},\tau)\sim \frac{\pi i \tau }{4}+\log \left(-i\, \mathrm{e}^{\pi i (z+\frac{\sigma\tau}{2})}+i\, \mathrm{e}^{-\pi i (z+\frac{\sigma\tau}{2})}\right)\ . \end{equation} We took out $\frac{\pi i\tau}{4}$, since this term is real. For $\sigma=0$, both terms are equally relevant and as discussed above, we choose the principal branch. For $\sigma=1$, the second term dominates and is essentially purely imaginary.
However, we never cross the negative real axis (since the first term is always smaller in magnitude than the second term) and thus the principal branch of the logarithm gives the correct answer everywhere. In particular for $\sigma=1$, we get \begin{equation} \log \vartheta_1(z+\tfrac{\tau}{2},\tau) \sim -\frac{\pi i \tau}{4}+\frac{\pi i}{2}-\pi i z \sim -\frac{\pi i \tau}{4}+\frac{\pi i}{2}-\pi i z+\log \vartheta_4(z,\tau)\ . \end{equation} This shows that the branch in \eqref{eq:non-planar annulus shifted y} is the one that we get from the original expression \eqref{eq:non-planar integrand} by following the straight line from $z_{3,4} \sim 0$ to $z_{3,4} \sim \frac{\tau}{2}$. \subsection{Modular transformation} The next step is as before to set \begin{equation} \tau=\frac{a \tau'+b}{c \tau'+d}\ . \end{equation} Then $\tau' \in i+\RR$ gets mapped to the circle $C_{a/c}$. Hence, for the contribution from a single circle we get \begin{multline} \tilde{A}_{a/c}=\frac{i}{32} \int_{\longrightarrow} \frac{\mathrm{d}\tau'}{(c \tau'+d)^2} \int \mathrm{d}z_1\, \mathrm{d}z_2\, \mathrm{d}z_3\ \mathrm{e}^{\pi i s-\frac{\pi i s(a \tau'+b)}{2(c \tau'+d)}-\pi i s(y_3+y_4-y_1-y_2)} \\ \times \prod_{i>j} \mathrm{e}^{-\pi i c(c \tau'+d) s_{ij}y_{ij}^2} \vartheta_1((c \tau'+d)y_{ij},\tau')^{-s_{ij}}\ . \end{multline} We are guaranteed that this choice of branch is correct for $y_{ij} \to 0$, where the theta functions drastically simplify. Everywhere else, the branch is determined by analytic continuation. \subsection{Further shifts of \texorpdfstring{$z_i$}{zi}} To proceed further, we now want to re-express the result in terms of the original $z_i$. For this it is convenient to shift the $z_i$'s further. Set \begin{equation} z_{1,2}=\zeta_{1,2}\ , \qquad z_{3,4}=\zeta_{3,4}+\frac{a}{2c}\ . \end{equation} Contrary to the shift that led to $y_i$, this is a real shift. This does not change the integration region and we can still integrate over \begin{equation} 0 \le \zeta_{21},\, \zeta_{43} \le 1\ , \qquad 0 \le \zeta_{42} \le 1 \ .\label{eq:zeta integration region} \end{equation} We now have \begin{equation} y_{31}=z_{31}-\frac{a \tau'+b}{2(c \tau'+d)}=\zeta_{31}-\frac{a \tau'+b}{2(c \tau'+d)}+\frac{a}{2c}=\zeta_{31}+\frac{1}{2c(c \tau'+d)}\ . \end{equation} This is advantageous because $y_{31}$ and $\zeta_{31}$ are much closer together now. Consider now one of the factors for $ij \in \{31,\, 32,\, 41,\, 42\}$. For the purpose of discussing the branch, we look at \begin{equation} f(\zeta)= \log \vartheta_1\big((c \tau'+d) \zeta+\tfrac{1}{2c},\tau'\big)\ . \end{equation} We recall that the branch of the expression is determined from the behaviour near $y=0$ with $y>0$ small, i.e.\ $\zeta=-\frac{1}{2c(c \tau'+d)}+y$ with $y>0$ small. On the other hand, we can naturally determine a branch by taking $\zeta=0$, $\tau'$ large and purely imaginary. Since $0< \frac{1}{2c} \le \tfrac{1}{2}$, we have \begin{equation} \log \vartheta_1\big( \tfrac{1}{2c},\tau' \big)=\tfrac{\pi i \tau'}{4}+ \log \left[ 2\sin\big(\tfrac{\pi}{2c}\big) \right]\ , \end{equation} which also has a natural branch since $\sin\big(\tfrac{\pi}{2c}\big)> 0$. These two choices of branches are easily seen to be equivalent. Indeed, set $\zeta=-\frac{\sigma }{2c(c \tau'+d)}+y$ and follow the branch from $\sigma=0$ to $\sigma=1$. We can again take $\tau'$ purely imaginary and very large.
Then \begin{equation} \log \vartheta_1\big((c \tau'+d) y+ \tfrac{1-\sigma}{2c},\tau' \big)\sim \frac{\pi i \tau'}{4}+\log \left[ 2 \sin\pi \big((c \tau'+d) y+ \tfrac{1-\sigma}{2c}\big) \right]\ . \end{equation} Since the path $2 \sin\pi \big((c \tau'+d) y+ \tfrac{1-\sigma}{2c}\big)$ never crosses the negative real axis, we can just take the principal branch of the logarithm everywhere. We thus conclude that our alternative determination of the branch in terms of $\zeta$ is equally valid and will be more convenient in the following. To summarize our analytic continuations so far, consider Figure~\ref{fig:shifts of z}. It shows the paths of the two analytic continuations we have performed. First, we analytically continued the integrand from $z_3\sim 0$ to $z_3 \sim \frac{\tau}{2}$, then we analytically continued back to the real axis. In the picture, $\tau$ is close to $\frac{a}{c}$, so that $\Im \tau'$ is large. Clearly, this path of analytic continuation is equivalent to the horizontal path, since we do not surround any branch points of the integrand (represented by crosses in the picture). This confirms that we are still on the correct branch. The same comment applies to $z_4$. \begin{figure} \centering \begin{tikzpicture} \fill (0,0) circle (.06) node[below] {0}; \fill (5,0) circle (.06) node[below] {1}; \fill (6,1) circle (.06) node[above] {$\tau$}; \fill (1,1) circle (.06) node[above] {$\tau-1$}; \draw[very thick] (2.94,.44) to (3.06,.56); \draw[very thick] (2.94,.56) to (3.06,.44) node[above] {$\tfrac{\tau}{2}$}; \draw[very thick, Maroon,->] (.2,.03) to (3.3,.47); \draw[very thick, Maroon,->] (3.35,.45) to (3,0); \draw[thick] (7,1.4) to (7,1) to (7.4,1); \node at (7.24,1.24) {$z_3$}; \end{tikzpicture} \caption{The two analytic continuations in $z_3$.} \label{fig:shifts of z} \end{figure} We can plug this change of variables back into \eqref{eq:non-planar annulus shifted y} to obtain the following expression: \begin{multline} \tilde{A}_{a/c}=\frac{i}{32} \int_{\longrightarrow} \frac{\mathrm{d}\tau'}{(c \tau'+d)^2} \int_0^1 \mathrm{d}\zeta_1 \, \mathrm{d} \zeta_2\, \mathrm{d}\zeta_3 \ \mathrm{e}^{-\frac{\pi i s a}{2c}+\pi i s} \\ \times \prod_{i>j} \mathrm{e}^{-\pi i c(c \tau'+d) s_{ij}\zeta_{ij}^2} \vartheta_1\big((c \tau'+d)\zeta_{ij}+\tfrac{\delta(i,j)}{2c},\tau'\big)^{-s_{ij}} \end{multline} Here and in the following, we used the short-hand notation \begin{equation} \delta(i,j)=\begin{cases} 1\ , \quad & ij=31,\, 32,\, 41\text{ or }42\ , \\ 0\ , \quad & ij=21,\, 43\ . \end{cases} \end{equation} Finally, we shift $\tau' \to \tau-\frac{d}{c}$ and set $q=\mathrm{e}^{2\pi i \tau}$, yielding \begin{multline} \tilde{A}_{a/c}=\frac{i}{32} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^2 \tau^2} \int_0^1 \mathrm{d}\zeta_1 \, \mathrm{d} \zeta_2\, \mathrm{d}\zeta_3 \ \mathrm{e}^{-\frac{\pi i s a}{2c}+\pi i s} \\ \times \prod_{i>j} q^{-\frac{1}{2}s_{ij} c^2\zeta_{ij}^2} \vartheta_1\big(c \tau \zeta_{ij}+\tfrac{\delta(i,j)}{2c},\tau-\tfrac{d}{c}\big)^{-s_{ij}}\ . \label{eq:Aa/c non planar annulus} \end{multline} We recall that the branch in this expression is chosen by setting all $\zeta_{ij} \sim 0$ with $\zeta_{21}>0$, $\zeta_{43}>0$, $\Re \tau=\frac{d}{c}$ and $\Im \tau$ large and evaluating the powers on the principal branch. Everywhere else, the branch is defined by analytic continuation. Notice that the expression is quite similar to the planar expression at this point; it differs only by an overall phase depending on $a$ and $c$, as well as by the additional shifts in the theta function.
At this point, it is convenient to shift the integration contour in $\tau$ to large imaginary values of $\tau$ so that we can again take advantage of the tropicalization of the integration. \subsection{Subdividing the \texorpdfstring{$\zeta_i$}{zetai}-integration} We now continue the analysis for the $s$-channel. Since the integrand is the same as for the planar annulus up to phases, the contributions where $\mathrm{Trop}<0$ happen in the same region. In particular, \eqref{eq:region Rn1,n2,n3} still holds when we replace $z_i \to \zeta_i$. The integration region is, however, larger and thus the integers $(n_1,n_2,n_3)$ can take more values. From the last inequality in \eqref{eq:region Rn1,n2,n3}, we see that $0 \le n_3 \le c$. From the first inequality in \eqref{eq:region Rn1,n2,n3}, we see that $0 \le n_{21} \le c$. To cover the whole integration region, we can let $0 \le n_{32} \le c-1$. The upper and lower bounds here could be shifted, since the integrand is periodic. It is only important that we cover $c$ different values. We hence have overall \begin{equation} 0 \le n_{21} \le c\ , \qquad 0 \le n_{32} \le c-1\ , \qquad 0 \le n_{43} \le c\ . \end{equation} This defines the regions $\Gamma_{n_1,n_2,n_3}$. There are $c(c+1)^2$ such regions. For each of the regions, we set \begin{equation} \zeta_i=\frac{n_i+\xi_i}{c}\ . \end{equation} For $0<n_{21}<c$ and $0<n_{43}<c$, the integration region is given by \begin{equation} -1\le \xi_{21},\, \xi_{43}\le 1\ , \qquad 0\le \xi_{31},\, \xi_{32},\, \xi_{41},\, \xi_{42}\le 1\ . \end{equation} The cases $n_{21}=0$, $n_{21}=c$, $n_{43}=0$ or $n_{43}=c$ are special since in these cases the regions for $\xi_{21}$ and $\xi_{43}$ are modified. For $n_{21}=0$, we have $0\le \xi_{21}\le 1$, while for $n_{21}=c$, we have $-1\le \xi_{21}\le 0$, and similarly for $n_{43}$. \subsection{More branches of \texorpdfstring{$\log \vartheta_1$}{log theta1}} We next analyze the correct branch of the integrand in these regions. We have as before, see eq.~\eqref{eq:log theta1 branch}, \begin{multline} \log \vartheta_1((n+\xi) \tau,\tau-\tfrac{d}{c})=-\pi i\tau (n+\xi)^2+\pi i\tau (\xi-\tfrac{1}{2})^2+\tfrac{\pi i}{2}-\tfrac{\pi i d}{4c}-2\pi i \sum_{m=1}^n \st{\tfrac{md}{c}}\\ +\log \left[ \prod_{\ell=1}^\infty\big(1-\mathrm{e}^{-\frac{2\pi i d \ell}{c}}q^\ell\big)\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell+n)}{c}} q^{\ell-\xi}\big)\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell-n-1)}{c}}q^{\ell+\xi-1}\big) \right]\ , \end{multline} where the branch on the right hand side is the correct one that comes from smoothly following the branch between $\xi=-n$ and $0<\xi<1$. We similarly want to relate the branches of $\log \vartheta_1((n+\xi)\tau+\frac{1}{2c},\tau-\frac{d}{c})$ for different values of $n$. We claim \begin{multline} \log \vartheta_1((n+\xi) \tau+\tfrac{1}{2c},\tau-\tfrac{d}{c})=-\pi i\tau (n+\xi)^2+\pi i \tau (\xi-\tfrac{1}{2})^2+\tfrac{\pi i}{2}-\tfrac{\pi i (d+2)}{4c}-2\pi i \sum_{m=1}^n \st{\tfrac{2md+1}{2c}}\\ +\log \left[ \prod_{\ell=1}^\infty\big(1-\mathrm{e}^{-\frac{2\pi i d \ell}{c}}q^\ell\big)\big(1-\mathrm{e}^{-\frac{\pi i (2d(\ell+n)+1)}{c}} q^{\ell-\xi}\big)\big(1-\mathrm{e}^{-\frac{\pi i (2d(\ell-n-1)-1)}{c}}q^{\ell+\xi-1}\big) \right]\ . \label{eq:log theta1 1/2c shifted branch} \end{multline} The argument is by induction over $n$ and identical to the one given in Section~\ref{subsec:branches log theta1}.
Thus we will not repeat the argument here. Taking \eqref{eq:log theta1 branch} and \eqref{eq:log theta1 1/2c shifted branch} now gives the following formula for the contribution $\tilde{A}_{a/c}^{n_1,n_2,n_3}$ to the non-planar amplitude: \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}&=\frac{i}{32} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^5 \tau^2} \int \mathrm{d}\xi_{1} \, \mathrm{d}\xi_2\, \mathrm{d} \xi_3\ \mathrm{e}^{-\frac{\pi i s(a+2)}{2c}+\pi i s+2\pi i\sum_{i>j} s_{ij}\sum_{m=1}^{n_{ij}} \st{\frac{2md+\delta(i,j)}{2c}}} \nonumber\\ &\qquad \times\prod_{i>j}q^{-\frac{1}{2}s_{ij} \xi_{ij}(\xi_{ij}-1)}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{\pi i (2d(\ell+n_{ij})+\delta(i,j))}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}\nonumber\\ &\qquad\times(1-\mathrm{e}^{-\frac{\pi i (2d(\ell-n_{ij}-1)-\delta(i,j))}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}}\ . \end{align} \subsection{Evaluating the contribution from fixed \texorpdfstring{$(n_1,n_2,n_3)$}{(n1,n2,n3)} in the \texorpdfstring{$s$}{s}-channel} Next, we evaluate the contribution from a given $(n_1,n_2,n_3)$. This is very similar to the situation for the planar annulus. As discussed previously, we can just do the $q$-expansion of the integrand, except for the factors involving $q^{\xi_{21}}$ or $q^{\xi_{43}}$, since those may go to zero. As before, when extracting the coefficient of the term $q^{m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})}$, we get \begin{align} &[q^{m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})}]\prod_{i>j}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{\pi i (2d(\ell+n_{ij})+\delta(i,j))}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times \prod_{\ell=2-\delta(i,j)}^\infty (1-\mathrm{e}^{-\frac{\pi i (2d(\ell-n_{ij}-1)-\delta(i,j))}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}} \nonumber\\ &\qquad=\mathrm{e}^{\frac{\pi i}{c}(2m_\L d n_{21}+m_\mathrm{D}(2d n_{32}+1)+2m_\mathrm{R} d n_{43}-m_\U(2d(n_{41}+1)+1))} Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t)\ , \end{align} where we used the definition of $Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}$ given in \eqref{eq:QmL,mD,mR,mU definition}. We thus have \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}&=\frac{i}{32c^5} \mathrm{e}^{-\frac{\pi i s(a+2)}{2c}+\pi i s+2\pi i \sum_{i>j} s_{ij}\sum_{m=1}^{n_{ij}} \st{\frac{2md+\delta(i,j)}{2c}}} \hspace{-0.6cm} \sum_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U\ge 0} \hspace{-0.6cm} Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t) \nonumber\\ &\qquad\times \mathrm{e}^{\frac{\pi i}{c}(2m_\L d n_{21}+m_\mathrm{D}(2d n_{32}+1)+2m_\mathrm{R} d n_{43}-m_\U(2d(n_{41}+1)+1))} \nonumber\\ &\qquad\times \int_{\longrightarrow} \frac{\mathrm{d}\tau}{ \tau^2} \int \mathrm{d}\xi_{1} \, \mathrm{d}\xi_2\, \mathrm{d} \xi_3\ q^{-\sum_{i>j}\frac{1}{2}s_{ij} \xi_{ij}(\xi_{ij}-1)+m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})}\nonumber\\ &\qquad\qquad\qquad(1-\mathrm{e}^{\frac{2\pi i d n_{21}}{c}}q^{\xi_{21}})^{-s}(1-\mathrm{e}^{\frac{2\pi i d n_{43}}{c}}q^{\xi_{43}})^{-s}\ . \end{align} This expression is actually imprecise for $n_{21}=c$ or $n_{43}=c$, a case that we did not encounter in the planar amplitude. In this case, we have $\xi_{21}<0$ and thus we cannot specify the branch using the region $\xi_{21}>0$.
We can then follow the argument above on how to relate the two branches; the correct prescription is \begin{equation} (1-\mathrm{e}^{\frac{2\pi i d n_{21}}{c}}q^{\xi_{21}})^{-s}=\mathrm{e}^{-2\pi i s \st{\frac{d n_{21}}{c}}} q^{-s \xi_{21}} (1-\mathrm{e}^{-\frac{2\pi i d n_{21}}{c}}q^{-\xi_{21}})^{-s}\ . \end{equation} This cancels the corresponding phase factor in $\mathrm{e}^{-2\pi i s\sum_{m=1}^{n_{21}} \st{\frac{md}{c}}}$. For $n_{21}=c$, we have $\st{\frac{dn_{21}}{c}}=0$ according to our definition \eqref{eq:st definition}, and thus the prefactor does not need modification; we then use the right-hand side of this equation for the correct branch. A similar comment applies to the case $n_{43}=c$. We now continue as in the planar case in evaluating the integrals over the $\xi_i$'s. After introducing $\alpha_\L$, $\alpha_\mathrm{R}$, $t_\L$ and $t_\mathrm{R}$ as in Section~\ref{subsec:evaluating single term q expansion}, this is reduced to the computation of the integrals \begin{equation} \int_{-\infty\, (0)}^{\infty\, (0)} \mathrm{d}\alpha\ q^{-\alpha t}(1-\mathrm{e}^{2\pi i \varphi} q^\alpha)^{-s}\ . \end{equation} A new case now appears, corresponding to $n_{21}=c$ or $n_{43}=c$, in which the upper limit of the integral is $0$. In this case, we should change the specification of the branch as discussed above by first factoring out $\mathrm{e}^{-2\pi i s \st{\varphi}}q^{-\alpha s}$. We then have for $\varphi \in \ZZ$: \begin{subequations} \begin{align} \int_{-\infty}^0 \mathrm{d}\alpha\ q^{-\alpha (s+t)} (1-q^{-\alpha})^{-s}&=\frac{i}{2\pi \tau} \int_0^1 \mathrm{d}x\ x^{s+t-1}(1-x)^{-s}\\ &=\frac{i}{2\pi \tau} \frac{\Gamma(1-s)\Gamma(s+t)}{\Gamma(t+1)}\\ &=-\frac{\sin(\pi t)}{\sin(\pi s)} \frac{i}{2\pi \tau} \frac{\Gamma(-t)\Gamma(s+t)}{\Gamma(s)}\ . \end{align} \end{subequations} Thus we get \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}&=-\frac{\pi i \, \mathrm{e}^{-\frac{\pi i s(a+2)}{2c}+\pi i s+2\pi i \sum_{i>j} s_{ij}\sum_{m=1}^{n_{ij}} \st{\frac{2md+\delta(i,j)}{2c}}}}{30c^5\sqrt{stu}}\sum_{\begin{subarray}{c} m_\L,m_\mathrm{D},m_\mathrm{R},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \hspace{-0.6cm}Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t) \nonumber\\ &\quad\times (-1)^{m_\L+m_\mathrm{R}}\mathrm{e}^{\frac{\pi i}{c}(m_\mathrm{D} (2dn_{32}+1)-m_\U(2d(n_{41}+1)+1))}\int_{P_{m_\mathrm{D},m_\U}> 0}\hspace{-1.4cm} \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}} \nonumber\\ &\quad\times \left(\begin{cases} \mathrm{e}^{2\pi i t_\L \st{\frac{d n_{21}}{c}}} &\text{if}\;\; 0<n_{21}<c \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\text{if}\;\; n_{21}=0 \\ -\frac{\sin(\pi t_\L)}{\sin(\pi s)} &\text{if}\;\; n_{21}=c \end{cases}\right)\left(\begin{cases} \mathrm{e}^{2\pi i t_\mathrm{R} \st{\frac{d n_{43}}{c}}} &\text{if}\;\; 0<n_{43}<c \\ \frac{\sin(\pi(s+t_\mathrm{R}))}{\sin(\pi s)}&\text{if}\;\; n_{43}=0 \\ -\frac{\sin(\pi t_\mathrm{R})}{\sin(\pi s)}&\text{if}\;\; n_{43}=c \end{cases}\right) \nonumber\\ &\quad\times \frac{\Gamma(-t_\L+m_\L)\Gamma(-t_\mathrm{R}+m_\mathrm{R})\Gamma(s+t_\L-m_\L)\Gamma(s+t_\mathrm{R}-m_\mathrm{R})}{\Gamma(s)^2}\ . \end{align} We now recall the definition of \eqref{eq:definition Qm2,m4} to perform the sum over $m_\L$ and $m_\mathrm{R}$.
This gives \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}&=-\frac{\pi i \, \mathrm{e}^{-\frac{\pi i s(a+2)}{2c}+\pi i s+2\pi i \sum_{i>j} s_{ij}\sum_{m=1}^{n_{ij}} \st{\frac{2md+\delta(i,j)}{2c}}}}{30c^5\sqrt{stu}}\sum_{\begin{subarray}{c} m_\mathrm{D},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \mathrm{e}^{\frac{\pi i}{c}(m_\mathrm{D}-m_\U)} \nonumber\\ &\quad\times \mathrm{e}^{\frac{2\pi id}{c}(m_\mathrm{D} n_{32}-m_\U (n_{41}+1))} \int_{P_{m_\mathrm{D},m_\U}> 0} \hspace{-1.2cm} \d t_\L \, \d t_\mathrm{R}\, P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}} Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})\nonumber\\ &\quad\times \left(\begin{cases} \mathrm{e}^{2\pi i t_\L \st{\frac{d n_{21}}{c}}} &\text{if}\;\; 0<n_{21}<c \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\text{if}\;\; n_{21}=0 \\ -\frac{\sin(\pi t_\L)}{\sin(\pi s)} &\text{if}\;\; n_{21}=c \end{cases}\right)\left(\begin{cases} \mathrm{e}^{2\pi i t_\mathrm{R} \st{\frac{d n_{43}}{c}}} &\text{if}\;\; 0<n_{43}<c \\ \frac{\sin(\pi(s+t_\mathrm{R}))}{\sin(\pi s)}&\text{if}\;\; n_{43}=0 \\ -\frac{\sin(\pi t_\mathrm{R})}{\sin(\pi s)}&\text{if}\;\; n_{43}=c \end{cases}\right) \nonumber\\ &\quad\times\frac{\Gamma(-t_\L)\Gamma(-t_\mathrm{R})\Gamma(s+t_\L-m_\mathrm{D}-m_\U)\Gamma(s+t_\mathrm{R}-m_\mathrm{D}-m_\U)}{\Gamma(s)^2}\ . \label{eq:non-planar four point function s-channel Rademacher} \end{align} It does not seem particularly fruitful to express things in terms of $n_\L$, $n_\mathrm{D}$, $n_\mathrm{R}$ and $n_\U$ in this case. Their geometric meaning is also much less clear, since there are now two boundaries. Let us merely note that \begin{align} \sum_{m=0}^{c-1} \bigst{\frac{2md+1}{2c}}&=\sum_{m=0}^{c-1} \bigst{\frac{2m+1}{2c}} \\ &=\frac{1}{2}\sum_{m=0}^{c-1} \left[\bigst{\frac{2m+1}{2c}}+\bigst{\frac{2(-m-1)+1}{2c}}\right]=0 \end{align} and thus the overall phase only depends on $n_{31}$, $n_{32}$, $n_{42}$ and $n_{41} \bmod c$. \subsection{Results in the \texorpdfstring{$u$}{u}-channel} We finally treat the $u$-channel of the non-planar annulus diagram. We start again from \eqref{eq:Aa/c non planar annulus}. The regions $\Gamma_{n_1,n_2,n_3}$ are analogous to the $u$-channel regions of the planar annulus, because the two integrals only differ by phases. In particular, the region $\Gamma_{n_1,n_2,n_3}$ is also specified by \eqref{eq:bounds regions u-channel}, except that the $\zeta_i$'s now play the role of the $z_i$'s. There are $c^3$ regions in total. We can label them with \begin{equation} 0 \le n_{21} \le c-1\ , \qquad 0 \le n_{32} \le c-1\ , \qquad 0 \le n_{43} \le c-1\ .
\end{equation} Now essentially the same formula as \eqref{eq:planar u channel before swap} holds for the non-planar case as well, except that various phases are modified accordingly: \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}&=\frac{i}{32} \, \mathrm{e}^{-\frac{\pi i s(a+2)}{2c}+\pi i s}\int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^5 \tau^2} \int \mathrm{d}\xi_1\, \mathrm{d}\xi_2\, \mathrm{d}\xi_3\nonumber\\ &\qquad\times \prod_{i>j} q^{-\frac{1}{2}s_{ij} \xi_{ij}(\xi_{ij}-1)} \mathrm{e}^{2\pi i s_{ij} \sum_{m=1}^{n_{ij}-\delta_{ij,32}} \st{\frac{2md+\delta(i,j)}{2c}}} \nonumber\\ &\qquad\times \prod_{\begin{subarray}{c} i>j\\ ij \ne 32 \end{subarray}}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{\pi i (2d (\ell+n_{ij})+\delta(i,j))}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\times(1-\mathrm{e}^{-\frac{\pi i (2d(\ell-n_{ij}-1)-\delta(i,j))}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}} \nonumber\\ &\qquad\times \prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{\pi i (2d (\ell+n_{32}-1)+1)}{c}} q^{\ell-\xi_{32}-1})^{-s_{32}}\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\times(1-\mathrm{e}^{-\frac{\pi i (2d (\ell-n_{32})-1)}{c}} q^{\ell+\xi_{32}})^{-s_{32}}\ . \label{eq:non planar u channel before swap} \end{align} One can then give the same argument as in the planar case to relate this to the non-planar $s$-channel amplitude. Let us exchange all quantities labelled with 2 with the corresponding quantities labelled with 3. This maps $\delta(i,j)$ to \begin{equation} \tilde{\delta}(i,j)=\begin{cases} 1\ , \quad &ij \in \{21,\, 41,\, 43\}\ , \\ -1 \ , \quad &ij =32\ , \\ 0 \ , \quad &ij\in \{31,\, 42\}\ . \end{cases} \end{equation} One obtains \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}\Big|_{2 \leftrightarrow 3}&=\frac{i}{32}\, \mathrm{e}^{-\frac{\pi i u(a+2)}{2c}+\pi i u} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^5 \tau^2} \int \mathrm{d}\xi_1 \, \mathrm{d}\xi_2 \, \mathrm{d}\xi_3\ \prod_{i>j} q^{-\frac{1}{2}s_{ij} \xi_{ij}(\xi_{ij}-1)} \nonumber\\ &\qquad\times\mathrm{e}^{2\pi i \sum_{i>j,\,ij \ne 32} s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{2md+\tilde{\delta}(i,j)}{2c}}+2\pi i u \sum_{m=1}^{-n_{32}-1} \st{\frac{2md+1}{2c}}} \nonumber\\ &\qquad\times \prod_{i>j}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{\pi i (2d (\ell+n_{ij})+\tilde{\delta}(i,j))}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}\nonumber\\ &\qquad\qquad\qquad\qquad\times (1-\mathrm{e}^{-\frac{\pi i (2d(\ell-n_{ij}-1)-\tilde{\delta}(i,j))}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}}\ . \end{align} Up to slightly different phases, this coincides with the $s$-channel expression.
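As a quick cross-check, the relabelled shifts $\tilde{\delta}(i,j)$ follow mechanically from $\delta(i,j)$, the permutation $2 \leftrightarrow 3$, and the oddness of $\vartheta_1$, which flips the sign of the half-period shift whenever the ordering of a pair is reversed. A minimal sketch (in Python; the encoding is purely illustrative):
\begin{verbatim}
# delta(i,j) as defined in the text: 1 for 31, 32, 41, 42 and 0 for 21, 43
DELTA = {(3, 1): 1, (3, 2): 1, (4, 1): 1, (4, 2): 1, (2, 1): 0, (4, 3): 0}

def swapped_delta(i, j):
    # the pair (i,j) descends from (s(i), s(j)) under the exchange 2 <-> 3;
    # if the ordering flips, theta_1(-z) = -theta_1(z) turns the shift
    # +1/(2c) into -1/(2c), i.e. delta -> -delta
    s = {1: 1, 2: 3, 3: 2, 4: 4}
    a, b = s[i], s[j]
    sign = 1 if a > b else -1
    return sign * DELTA[(max(a, b), min(a, b))]

for pair in DELTA:
    print(pair, swapped_delta(*pair))
# reproduces tilde{delta}: 1 for 21, 41, 43; -1 for 32; 0 for 31, 42
\end{verbatim}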
We can then proceed as before and get \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}\Big|_{2 \leftrightarrow 3}&=-\frac{\pi i\, \mathrm{e}^{-\frac{\pi i u(a+2)}{2c}+\pi i u+2\pi i \sum_{i>j,\,ij \ne 32} s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{2md+\tilde{\delta}(i,j)}{2c}}+2\pi i u \sum_{m=1}^{-n_{32}-1} \st{\frac{2md+1}{2c}}}}{30 c^5 \sqrt{stu}}\nonumber\\ &\qquad\times \hspace{-.2cm}\sum_{\begin{subarray}{c} m_\L,m_\mathrm{D},m_\mathrm{R},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}}\hspace{-.3cm} \mathrm{e}^{\frac{\pi i}{c}(m_\L(2d n_{21}+1)+m_\mathrm{D}(2d n_{32}-1)+m_\mathrm{R}(2 d n_{43}+1)-m_\U(2d(n_{41}+1)+1))} \nonumber\\ &\qquad\times Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t) \int_{P_{m_\mathrm{D},m_\U} > 0} \hspace{-1cm} \d t_\L\, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}} \nonumber\\ &\qquad\times \mathrm{e}^{2\pi i (t_\L-m_\L)\st{\frac{2d n_{21}+1}{2c}}+2\pi i (t_\mathrm{R}-m_\mathrm{R})\st{\frac{2d n_{43}+1}{2c}}} \nonumber\\ &\qquad\times \frac{\Gamma(-t_\L+m_\L)\Gamma(-t_\mathrm{R}+m_\mathrm{R})\Gamma(s+t_\L-m_\L)\Gamma(s+t_\mathrm{R}-m_\mathrm{R})}{\Gamma(s)^2}\ . \end{align} The phases again partially cancel and we can perform the sum over $m_\L$ and $m_\mathrm{R}$. We obtain \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}\Big|_{2 \leftrightarrow 3}&=-\frac{\pi i\, \mathrm{e}^{-\frac{\pi i u(a+2)}{2c}+\pi i u+2\pi i \sum_{i>j,\,ij \ne 32} s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{2md+\tilde{\delta}(i,j)}{2c}}+2\pi i u \sum_{m=1}^{-n_{32}-1} \st{\frac{2md+1}{2c}}}}{30 c^5 \sqrt{stu}}\nonumber\\ &\qquad\times \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \mathrm{e}^{\frac{\pi i}{c}(-m_\mathrm{D}-m_\U)+\frac{2\pi id}{c}(m_\mathrm{D} n_{32}-m_\U(n_{41}+1))} \nonumber\\ &\qquad\times \int_{P_{m_\mathrm{D},m_\U} > 0} \hspace{-1cm} \d t_\L\, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}} Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R}) \nonumber\\ &\qquad\times \mathrm{e}^{2\pi i t_\L\st{\frac{2d n_{21}+1}{2c}}+2\pi i t_\mathrm{R}\st{\frac{2d n_{43}+1}{2c}}} \nonumber\\ &\qquad\times \frac{\Gamma(-t_\L)\Gamma(-t_\mathrm{R})\Gamma(s+t_\L-m_\mathrm{D}-m_\U)\Gamma(s+t_\mathrm{R}-m_\mathrm{D}-m_\U)}{\Gamma(s)^2}\ . \end{align} We can now swap the labels 2 and 3 back, which leads to \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}&=-\frac{\pi i\, \mathrm{e}^{-\frac{\pi i s(a+2)}{2c}+\pi i s+2\pi i \sum_{i>j} s_{ij} \sum_{m=1}^{n_{ij}-\delta_{ij,32}} \st{\frac{2md+\delta(i,j)}{2c}}}}{30 c^5 \sqrt{stu}}\nonumber\\ &\qquad\times \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \mathrm{e}^{\frac{\pi i}{c}(-m_\mathrm{D}-m_\U)+\frac{2\pi id}{c}(-m_\mathrm{D} n_{32}-m_\U(n_{41}+1))} \nonumber\\ &\qquad\times \int_{P_{m_\mathrm{D},m_\U}(u,t,t_\L,t_\mathrm{R}) > 0} \hspace{-2cm} \d t_\L\, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(u,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}\, Q_{m_\mathrm{D},m_\U}(u,t,t_\L,t_\mathrm{R}) \nonumber\\ &\qquad\times \mathrm{e}^{2\pi i t_\L\st{\frac{2d n_{31}+1}{2c}}+2\pi i t_\mathrm{R}\st{\frac{2d n_{42}+1}{2c}}} \nonumber\\ &\qquad\times \frac{\Gamma(-t_\L)\Gamma(-t_\mathrm{R})\Gamma(u+t_\L-m_\mathrm{D}-m_\U)\Gamma(u+t_\mathrm{R}-m_\mathrm{D}-m_\U)}{\Gamma(u)^2}\ .
\label{eq:non-planar four point function u-channel Rademacher} \end{align} \section{Mass shifts and decay widths}\label{sec:mass-shifts} As a first cross-check of the formulas we derived, we apply them to a simpler quantity: the mass shifts and decay widths. They appear as the double residue of the amplitude at integer $s$ in the $s$-channel. This brings in a fair amount of classical number theory. \subsection{Double residues in terms of Gauss sums} To illustrate the procedure, we first discuss the mass-shift at $s=1$. Only terms with $n_{\L}=n_{\mathrm{R}}=0$ can contribute to the double residue in \eqref{eq:planar four-point function s-channel}. It is straightforward to take the double residue and derive eq.~\eqref{eq:mass-shifts} from it. To obtain the mass-shift at $s=1$, we only need the term with $m_\mathrm{D}=m_\U=0$. It thus remains to compute the integral \begin{equation} \int_{P_{0,0} > 0}\hspace{-0.6cm} \d t_\L \, \d t_\mathrm{R}\ P_{0,0}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}\ . \end{equation} Since the region $P_{0,0} > 0$ is bounded by an ellipse, we can change coordinates to map it to the unit circle, which immediately gives \begin{equation} \int_{P_{0,0} > 0}\hspace{-0.6cm} \d t_\L \, \d t_\mathrm{R}\ P_{0,0}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}=\frac{s^4 \sqrt{stu}}{64} \int_{\mathbb{D}} \d x\, \d y\ (1-x^2-y^2)^{\frac{5}{2}}=\frac{\pi s^4 \sqrt{stu}}{224}\ , \end{equation} where $\mathbb{D} = \{x^2 + y^2 < 1\}$. We thus have simply \begin{equation} \DRes_{s=1} A^{0,n_\mathrm{D},0,n_\U}=-\frac{\pi^2 i \, \mathrm{e}^{\frac{2\pi i d}{c} n_\mathrm{D} n_\U}}{210 c^5}\ . \end{equation} Note that we can omit the sawtooth function $\st{x}$ in the exponent since $s$ is integer. Given that $n_\U=c-1-n_\mathrm{D}$, it is more convenient to write this entirely in terms of $n \equiv n_\mathrm{D}$. To obtain the double residue, we have to sum over $n \in \{0,\dots,c-1\}$, $d$ and $c$. The sum over $n$ is known as a Gauss sum: \begin{equation} G(-d,-d,c) \equiv \sum_{n=0}^{c-1} \mathrm{e}^{ - \frac{2\pi i n(n+1) d}{c}}\ . \end{equation} The general notation will be explained below. These are very classical objects in number theory. We also have \begin{equation} \DRes_{s=1} \Delta A^{\text{p}}=-\frac{i}{(2\pi)^2}\Res_{s=1} \frac{\Gamma(1-s)\Gamma(-t)}{\Gamma(1-s-t)}=\frac{i}{(2\pi)^2}\ . \end{equation} Hence we obtain \begin{align} \DRes_{s=1} A^{\text{p}} =\frac{i}{(2\pi)^2}-\sum_{c=1}^\infty \sum_{a=1,\, (a,c)=1}^{\frac{c}{2}} \frac{\pi^2 i\, G(-a^*,-a^*,c) }{210c^5}\ , \label{eq:mass shift sum} \end{align} where $a^*$ is the modular inverse of $a$ mod $c$, i.e., $a a^*=1 \bmod c$. More generally, we can evaluate the double residues at higher integer levels $s_\ast \in \mathbb{Z}_{>0}$. Their real and imaginary parts correspond to mass shifts and decay widths, respectively.
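Before turning to the general formula, the rapid convergence of the sum over $c$ in \eqref{eq:mass shift sum} is easy to see numerically. A minimal sketch (in Python; the Gauss sums are computed by brute force, the cutoffs are arbitrary, and we start at $c=3$ to sidestep the special cases $c=1,2$ discussed below):
\begin{verbatim}
from math import gcd, pi
import cmath

def gauss_sum_bf(a, b, c):
    # G(a,b,c) = sum_{n=0}^{c-1} e^{2 pi i (a n^2 + b n)/c}, by direct summation
    return sum(cmath.exp(2j * pi * (a*n*n + b*n) / c) for n in range(c))

def term(c):
    # the c-th term of the infinite sum in the s = 1 double residue above
    inner = sum(gauss_sum_bf(-pow(a, -1, c), -pow(a, -1, c), c)
                for a in range(1, c // 2 + 1) if gcd(a, c) == 1)
    return pi**2 * 1j * inner / (210 * c**5)

for C in (10, 20, 40, 80):
    print(C, sum(term(c) for c in range(3, C)))
# since |G(a,b,c)| <= sqrt(2c) for (a,c) = 1, the terms decay like c^{-7/2}
\end{verbatim}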
The general formula for arbitrary positive integer $s_\ast$ is \begin{align} \DRes_{s = s_\ast} A_{a/c}^{0,n,0,c-1-n}&=-\frac{\pi i \mathrm{e}^{-4\pi i s_\ast \sum_{m=1}^{n} \st{\frac{md}{c}} }}{60c^5 s_\ast^2 (s_\ast!)^2} \hspace{-0.6cm}\sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s_\ast \end{subarray}} \hspace{-0.6cm} \Delta_{m_\mathrm{D},m_\U}(s_\ast)^{\frac{7}{2}}\, \mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n-m_\U(n+1) )}\nonumber\\ &\qquad\times\int_{\DD} \d x\, \d y\ (1-x^2-y^2)^{\frac{5}{2}} Q_{m_\mathrm{D},m_\U} (s_\ast,t,t_\L,t_\mathrm{R}) \nonumber\\ &\qquad\qquad\times (t_\L+1)_{s_\ast-m_\mathrm{D}-m_\U-1}(t_\mathrm{R}+1)_{s_\ast-m_\mathrm{D}-m_\U-1}\ , \end{align} where \begin{equation} t_{\L,\mathrm{R}}=\frac{\sqrt{\Delta_{m_\mathrm{D},m_\U}(s_\ast)}}{2 \sqrt{s_\ast}}(\sqrt{s_\ast + t} x\pm \sqrt{-t} y)+\frac{1}{2} (m_\mathrm{D}+m_\U-s_\ast) \end{equation} and $\Delta_{m_\mathrm{D},m_\U}$ was defined in eq.~\eqref{eq:definition Delta}. Since $s_\ast$ is integer, we can simplify the sum of the sawtooth function: \begin{equation} \mathrm{e}^{-4\pi i s_\ast \sum_{m=1}^{n} \st{\frac{md}{c}}}=\mathrm{e}^{-\frac{2\pi i s_\ast d n(n+1)}{c}}\ . \end{equation} At this point, we can perform the sum over $n$. The resulting sums over $n$ are classical Gauss sums: \begin{align} G(-d s_\ast,d(m_\mathrm{D}-m_\U-s_\ast),c)=\sum_{n=0}^{c-1} \mathrm{e}^{-\frac{2\pi i d n(s_\ast n+s_\ast-m_\mathrm{D}+m_\U)}{c}}\ . \end{align} Putting everything together, we obtain \begin{align} \DRes_{s=s_\ast} A^{\text{p}} &= \frac{i}{(2\pi)^2}\frac{\Gamma(t+s_\ast)}{\Gamma(t+1)\Gamma(s_\ast)} -\sum_{c=1}^\infty \sum_{a=1,\, (a,c)=1}^{\frac{c}{2}} \frac{\pi i }{60c^5 s_\ast^2 (s_\ast!)^2} \hspace{-.3cm}\sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s_\ast \end{subarray}} \hspace{-.3cm} \Delta_{m_\mathrm{D},m_\U}(s_\ast)^{\frac{7}{2}} \nonumber\\ &\qquad\times \mathrm{e}^{-\frac{2\pi i s_\ast m_\U a^*}{c}} G(-a^*s_\ast,a^*(m_\mathrm{D}-m_\U-s_\ast),c)\int_{\DD} \d x\, \d y\ (1-x^2-y^2)^{\frac{5}{2}} \nonumber\\ &\qquad\times Q_{m_\mathrm{D},m_\U} (s_\ast,t,t_\L,t_\mathrm{R})\,(t_\L+1)_{s_\ast-m_\mathrm{D}-m_\U-1}(t_\mathrm{R}+1)_{s_\ast-m_\mathrm{D}-m_\U-1}\ . \end{align} The integrals for integer $s_\ast$ can always be performed analytically, and for a given mass level this expression can be further simplified.
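Since $Q_{m_\mathrm{D},m_\U}$ and the Pochhammer symbols are polynomial in $t_\L$ and $t_\mathrm{R}$, which are in turn linear in $x$ and $y$, these integrals reduce to polynomial moments of the disk measure $(1-x^2-y^2)^{5/2}$. Such moments are elementary; a minimal sketch (in Python/SymPy; \texttt{disk\_moment} is an illustrative helper, not part of our implementation):
\begin{verbatim}
import sympy as sp

def disk_moment(a, b):
    # int_D (1-x^2-y^2)^(5/2) x^a y^b dx dy over the unit disk, in polar coords
    r, phi = sp.symbols('r phi', positive=True)
    integrand = ((1 - r**2)**sp.Rational(5, 2)
                 * (r*sp.cos(phi))**a * (r*sp.sin(phi))**b * r)
    return sp.simplify(sp.integrate(integrand, (r, 0, 1), (phi, 0, 2*sp.pi)))

print(disk_moment(0, 0))   # 2*pi/7, reproducing the s = 1 integral above
print(disk_moment(2, 0))   # 2*pi/63
print(disk_moment(1, 0))   # 0: odd moments vanish by symmetry
\end{verbatim}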
For example, for the first few mass levels, we get \begin{subequations} \begin{align} \DRes_{s=2} A&=\frac{i (1+t)}{(2\pi)^2}-\frac{i\pi^2(1+t)}{3780} \sum_{c=1}^\infty \frac{1}{c^5}\sum_{a=1,\, (a,c)=1}^{\frac{c}{2}} \Big[G(-2a^*,-a^*,c)\nonumber\\ &\qquad\qquad+16 G(-2a^*,-2a^*,c)+ \mathrm{e}^{-\frac{2\pi i a^*}{c}} G(-2a^*,-a^*,c) \Big]\ , \\ \DRes_{s=3} A&=\frac{i(1+t)(2+t)}{8\pi^2}-\frac{i \pi^2(1+t)(2+t)}{4490640} \sum_{c=1}^\infty \frac{1}{c^5}\sum_{a=1,\, (a,c)=1}^{\frac{c}{2}} \nonumber\\ &\qquad\times\Big[113 G(-3 a^*,-a^*,c)+2048 G(-3 a^*,-2 a^*,c)+6561 G(-3 a^*,-3 a^*,c)\nonumber\\ &\qquad\qquad+2048 \mathrm{e}^{-\frac{2 i \pi a^*}{c}} G(-3 a^*,-4 a^*,c)+113 \mathrm{e}^{-\frac{4 i \pi a^*}{c}} G(-3 a^*,-5 a^*,c)\Big]\ , \\ \DRes_{s=4} A&=\frac{i(1+t)(2+t)(3+t)}{24\pi^2}-\frac{i \pi^2(2+t)}{39852933120} \sum_{c=1}^\infty \frac{1}{c^5}\sum_{a=1,\, (a,c)=1}^{\frac{c}{2}} \nonumber\\ &\qquad\times\Big[\left(103827 t^2+415308 t+309568\right) G(-4 a^*,-a^*,c)\nonumber\\ &\qquad\qquad+22528 \left(87 t^2+348 t+272\right) G(-4 a^*,-2 a^*,c)\nonumber\\ &\qquad\qquad+19683 \left(405 t^2+1620 t+1216\right) G(-4a^*,-3 a^*,c)\nonumber\\ &\qquad\qquad+524288 \left(24 t^2+96 t+71\right) G(-4 a^*,-4 a^*,c)\nonumber\\ &\qquad\qquad+19683 \left(405 t^2+1620 t+1216\right) \mathrm{e}^{-\frac{2 i \pi a^*}{c}} G(-4 a^*,-5 a^*,c)\nonumber\\ &\qquad\qquad+22528 \left(87 t^2+348 t+272\right) \mathrm{e}^{-\frac{4 i \pi a^*}{c}} G(-4 a^*,-6 a^*,c)\nonumber\\ &\qquad\qquad+\left(103827 t^2+415308 t+309568\right) \mathrm{e}^{-\frac{6 i \pi a^*}{c}} G(-4 a^*,-7 a^*,c) \Big]\ . \end{align}\label{eq:mass shifts}% \end{subequations} Here, \begin{equation} G(a,b,c)=\sum_{n=0}^{c-1} \mathrm{e}^{\frac{2\pi i (a n^2+b n)}{c}} \end{equation} denotes the general quadratic Gauss sum. As we will explain below, it can be efficiently calculated. The first three mass-levels are special since the degeneracy is not lifted and the double residues have a factorized form. Starting from $s=5$, the expressions also contain square roots, but they become too unwieldy to display here. Expressions up to $s_\ast \leq 16$ are provided in the ancillary file \texttt{DRes.txt}. We also evaluate them numerically to high precision, with the results shown in App.~\ref{app:mass-shifts}. \subsection{Gauss sums} \label{subsec:Gauss sums} To continue, it is useful to recall some elementary number theory that allows us to evaluate Gauss sums. We refer to any book on number theory for more details. Let us define the Legendre symbol for any odd prime $p$ as follows: \begin{equation} \jac{a}{p}=\begin{cases} 0 \ , \quad &a \equiv 0 \bmod p\ , \\ 1\ , \quad &\text{$a$ is a quadratic residue mod $p$}\ , \\ -1\ , \quad &\text{$a$ is not a quadratic residue mod $p$}\ . \end{cases} \end{equation} A quadratic residue mod $p$ is by definition an element of the finite field $\FF_p$ that has a square root. For example, in $\FF_5$, \begin{equation} 1^2=1\ , \quad 2^2=4\ , \quad 3^2=4\ , \quad 4^2=1\ . \end{equation} Hence $1$ and $4$ are quadratic residues mod 5 and correspondingly $\jac{1}{5}=\jac{4}{5}=1$, whereas $\jac{2}{5}=\jac{3}{5}=-1$. One then extends the definition to the Jacobi symbol as follows. For an odd integer $n$, let $n=p_1^{m_1} p_2^{m_2} \cdots p_k^{m_k}$ be its prime factorization. Then one defines \begin{equation} \jac{a}{n}=\prod_{i=1}^k\jac{a}{p_i}^{m_i}\ .
\end{equation} The Jacobi symbol is a multiplicative function in both the top and bottom argument, \begin{equation} \jac{ab}{n}=\jac{a}{n}\jac{b}{n}\ , \qquad \jac{a}{mn}=\jac{a}{m} \jac{a}{n}\ . \end{equation} Famously, it satisfies the law of quadratic reciprocity. For $a$ and $b$ odd coprime integers, one has \begin{equation} \jac{a}{b}\jac{b}{a}=(-1)^{\frac{(a-1)(b-1)}{4}}\ . \end{equation} The law of quadratic reciprocity can be exploited to give a fast algorithm to compute the Jacobi symbol (runtime $\mathcal{O}(\log a \log b)$). Let us recall the definition of the Gauss sum: \begin{align} G(a,b,c)=\sum_{n=0}^{c-1} \mathrm{e}^{\frac{2\pi i (a n^2+b n)}{c}}\ . \end{align} These sums can be evaluated in closed form in terms of Jacobi symbols. First, we can reduce to the case where $(a,c)=1$ as follows, \begin{align} G(a,b,c)&=\begin{cases} (a,c) \, G\big(\tfrac{a}{(a,c)},\tfrac{b}{(a,c)}, \tfrac{c}{(a,c)}\big)\ , \qquad & (a,c) \mid b\ , \\ 0\ , \qquad &\text{otherwise}\ . \end{cases} \end{align} Assuming $(a,c)=1$, we can next reduce to the case with $b=0$ by `completing the square' in the sum. For odd $c$ and even $b$, this is always possible. We first reduce to these cases by using \begin{align} G(a,b,c)&=\begin{cases} 0\ , \qquad & (a,c)=1,\, c \equiv 0 \bmod 4 \text{ and }b \text{ odd}\ , \\ 2 G(2a,b,\tfrac{c}{2})\ , \qquad & (a,c)=1,\, c \equiv 2 \bmod 4 \text{ and }b \text{ odd}\ . \end{cases} \end{align} We can now assume that $b$ is even or $c$ is odd. This always ensures the existence of a solution to the equation \begin{equation} 2am+b \equiv 0 \bmod c\ . \end{equation} Indeed, for $c$ odd, $2$ is invertible and the solution is given by $m=-2^* a^* b$, where ${}^*$ denotes the inverse mod $c$. For $b$ even, we can divide the equation by 2 and solve instead $am+\frac{b}{2} \equiv 0 \bmod c$, which always has a solution. We can then shift the summation variable by $m$ to eliminate the linear term and get \begin{equation} G(a,b,c)=\mathrm{e}^{\frac{2\pi i (a m^2+b m)}{c}} G(a,0,c)\ . \end{equation} Finally, $G(a,0,c)$ with $(a,c)=1$ is computed by the following classical formula, \begin{align} G(a,0,c)=\begin{cases} 0\ , \qquad &(a,c)=1\text{ and } c \equiv 2 \bmod 4\ , \\ \varepsilon(c) \sqrt{c} \jac{a}{c}\ , \qquad & (a,c)=1\text{ and } c\text{ odd}\ , \\ (1+i) \varepsilon(a)^{-1} \sqrt{c} \jac{c}{a}\ , \qquad & (a,c)=1\text{ and } c \equiv 0 \bmod 4\ . \end{cases} \end{align} We used the abbreviation \begin{equation} \varepsilon(c)=\begin{cases} 1 \ , \qquad & c \equiv 1 \bmod 4\ , \\ i \ , \qquad & c \equiv 3 \bmod 4\ . \end{cases} \end{equation} This explains how to efficiently compute the Gauss sums appearing in the formulas for the mass-shifts \eqref{eq:mass shifts}. We have implemented these formulae to generate the results in Appendix~\ref{app:mass-shifts}.
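A compact implementation of this algorithm, checked against brute force, could look as follows (in Python; a sketch assuming the conventions above, with \texttt{pow(a, -1, c)} computing modular inverses):
\begin{verbatim}
from math import gcd
import cmath, random

def gauss_sum_bf(a, b, c):
    return sum(cmath.exp(2j*cmath.pi*((a*n*n + b*n) % c)/c) for n in range(c))

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd positive n, via quadratic reciprocity
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def gauss_sum(a, b, c):
    # closed-form evaluation following the reduction steps in the text
    a, b = a % c, b % c
    g = gcd(a, c)
    if g > 1:                                   # reduce to (a,c) = 1
        return g * gauss_sum(a//g, b//g, c//g) if b % g == 0 else 0
    if b % 2 == 1 and c % 4 == 0:
        return 0
    if b % 2 == 1 and c % 4 == 2:
        return 2 * gauss_sum(2*a, b, c//2)
    # now b is even or c is odd: complete the square, 2am + b = 0 mod c
    m = (-pow(2*a, -1, c)*b) % c if c % 2 == 1 else (-pow(a, -1, c)*(b//2)) % c
    phase = cmath.exp(2j * cmath.pi * ((a*m*m + b*m) % c) / c)
    if c % 4 == 2:
        return 0
    eps = lambda k: 1 if k % 4 == 1 else 1j
    if c % 2 == 1:
        return phase * eps(c) * c**0.5 * jacobi(a, c)
    return phase * (1 + 1j) / eps(a) * c**0.5 * jacobi(c, a)   # c = 0 mod 4

for _ in range(300):                            # spot check against brute force
    a, b, c = random.randint(-9, 9), random.randint(-9, 9), random.randint(1, 50)
    assert abs(gauss_sum(a, b, c) - gauss_sum_bf(a, b, c)) < 1e-7
\end{verbatim}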
\subsection{Recovering the decay width} As a consistency check of our analysis, we will now demonstrate that the imaginary part of \eqref{eq:mass shifts} equals the expected value that is obtained by just computing the contribution from the annulus ($c=1$) and from the M\"obius strip ($c=2$). Let us first note that \begin{equation} \Re \mathrm{e}^{-\frac{2\pi i s m_\U a^*}{c}} G(-a^*s,a^*(m_\mathrm{D}-m_\U-s),c)=\Re \sum_{n=0}^{c-1} \mathrm{e}^{-\frac{2\pi i a^*(sn(n+1)-m_\mathrm{D} n+m_\U(n+1))}{c}} \end{equation} and this is obviously unchanged when replacing $a^* \to -a^*$. Since the modular inverse of $a$ mod $c$ satisfies $(-a)^*=-a^*$, we could hence sum over $a=1,\dots,c$ with $(a,c)=1$, as long as we compensate by a factor of $2$. The exception to this occurs for $c=1$ and $c=2$, since extending the summation range to $c$ would count $a=\frac{c}{2}$ only once. For $c \ge 3$, $a$ can never be equal to $\frac{c}{2}$, since it would either be non-integer or not be coprime to $c$. Thus, the imaginary part of \eqref{eq:mass shift sum} can be written as \begin{multline} \Im \DRes_{s=1} A^{\text{p}} = \frac{\pi^2\, G(-1,-1,1) }{420}-\frac{\pi^2 \, G(-1,-1,2) }{420 \cdot 2^5}+\frac{1}{4\pi^2}\\ -\frac{1}{2}\sum_{c=1}^\infty \sum_{a=1,\, (a,c)=1}^{c} \frac{\pi^2 G(-a^*,-a^*,c) }{210c^5}\ . \end{multline} The first two terms are precisely the contributions from the annulus and the M\"obius strip. As expected, the M\"obius strip contributes with a negative sign. Thus, it remains to show that the last two terms cancel. The same logic applies to the higher mass-shifts. Consistency with the previously computed imaginary part requires that when we extend the sum over $a$ up to $c$ and compensate by dividing by $2$, the infinite sum should precisely cancel the simple contribution that comes from $\Delta A^{\text{p}}$. Taking the modular inverse is unnecessary at this point because when $a$ runs over $\ZZ_c^\times$, then so does $a^*$. Here $\ZZ_c^\times$ is the set of units in the ring $\ZZ_c$ (i.e.\ all elements with $(a,c)=1$, since those have an inverse mod $c$). Let us hence denote $a^*=d$ in the following. To summarize, we need to show that \begin{subequations} \begin{align} \frac{1}{2\pi^2}&\overset{!}{=} \sum_{c=1}^\infty \sum_{d\in \ZZ_c^\times} \frac{\pi^2 G(-d,-d,c) }{210c^5}\ , \label{eq:L function identity s=1} \\ \frac{1+t}{2\pi^2} &\overset{!}{=}\frac{\pi^2(1+t)}{3780} \sum_{c=1}^\infty \frac{1}{c^5}\sum_{d \in \ZZ_c^\times} \Big[G(-2d,-d,c)\nonumber\\ &\qquad\qquad+16 G(-2d,-2d,c)+ \mathrm{e}^{-\frac{2\pi i d}{c}} G(-2d,-d,c) \Big]\ , \label{eq:L function identity s=2} \end{align}\label{eq:L function identity}% \end{subequations} and so on. We first demonstrate the equality explicitly for $s=1$ and then explain how it generalizes to higher values of $s$. \subsubsection{Case \texorpdfstring{$s=1$}{s=1}} Let us set \begin{align} F(c)=\frac{1}{c}\sum_{d \in \ZZ_c^\times} G(-d,-d,c)= \frac{1}{c}\sum_{d \in \ZZ_c^\times} \sum_{n=1}^c \mathrm{e}^{-\frac{2\pi i n(n-1)d}{c}}\ . \end{align} This definition agrees with \eqref{eq:definition F overview}, except for $c=1$ and $c=2$. Our aim is to determine $F$ explicitly. The result will be $F(c)=| \mu(c)|$, where $\mu(c)$ is the M\"obius function, defined by \begin{equation} \mu(n)=\begin{cases} 1\ , \quad &\text{$n$ has an even number of prime factors}\ , \\ -1\ , \quad &\text{$n$ has an odd number of prime factors}\ , \\ 0\ , \quad &\text{$n$ has a repeated prime factor}\ . \end{cases} \end{equation} We prove this in two steps. First, we show that $F(c)$ is multiplicative, i.e.\ for $c=c_1c_2$ with $(c_1,c_2)=1$ we have $F(c_1c_2)=F(c_1)F(c_2)$. The Chinese remainder theorem says that $\ZZ_{c_1c_2}^\times \cong \ZZ_{c_1}^\times \times \ZZ_{c_2}^\times$, i.e.\ $d \mapsto (d_1=d \bmod c_1,d_2=d \bmod c_2)$ is a group isomorphism. It is the restriction of the corresponding ring isomorphism $\ZZ_{c_1c_2} \cong \ZZ_{c_1} \times \ZZ_{c_2}$ to the group of units. We also notice that \begin{equation} (c_1+c_2,c_1c_2)=(c_1+c_2,c_1)(c_1+c_2,c_2)=(c_2,c_1)(c_1,c_2)=1\ , \end{equation} and hence $c_1+c_2$ is a unit.
Thus we may replace $d \in \ZZ_c^\times$ in the sum with $(c_1+c_2)d$, since both run over the units of $\ZZ_c$. We then get \begin{align} F(c_1c_2)&=\frac{1}{c_1c_2}\sum_{n\in \ZZ_{c_1c_2}} \sum_{d \in \ZZ_{c_1c_2}^\times} \mathrm{e}^{-\frac{2\pi i n(n-1) (c_1+c_2)d}{c_1c_2}} \\ &=\frac{1}{c_1c_2}\sum_{n\in \ZZ_{c_1c_2}} \sum_{d \in \ZZ_{c_1c_2}^\times} \mathrm{e}^{-\frac{2\pi i n(n-1) d}{c_1}}\mathrm{e}^{-\frac{2\pi i n(n-1) d}{c_2}}\ . \end{align} Now let $d_i=d \bmod c_i$ and $n_i=n \bmod c_i$. Then with the help of the Chinese remainder theorem we conclude \begin{align} F(c_1c_2)&=\frac{1}{c_1 c_2}\sum_{n_1\in \ZZ_{c_1}}\sum_{n_2\in \ZZ_{c_2}} \sum_{d_1 \in \ZZ_{c_1}^\times}\sum_{d_2 \in \ZZ_{c_2}^\times} \mathrm{e}^{-\frac{2\pi i n_1(n_1-1) d_1}{c_1}}\mathrm{e}^{-\frac{2\pi i n_2(n_2-1) d_2}{c_2}}=F(c_1)F(c_2)\ . \end{align} It then remains to evaluate $F(p^k)$ for $p$ prime and $k \ge 1$, since this will determine $F(c)$ completely. \begin{align} F(p^k)&=\frac{1}{p^k} \left(\sum_{n \in \ZZ_{p^k}}\sum_{d \in \ZZ_{p^k}} \mathrm{e}^{-\frac{2\pi i n(n-1) d}{p^k}}-\sum_{n \in \ZZ_{p^k}}\sum_{p\, \mid\, d \in \ZZ_{p^k}} \mathrm{e}^{-\frac{2\pi i n(n-1) d}{p^k}}\right) \\ &=\frac{1}{p^k}\left(\sum_{n \in \ZZ_{p^k}}\sum_{d \in \ZZ_{p^k}} \mathrm{e}^{-\frac{2\pi i n(n-1) d}{p^k}}-\sum_{n \in \ZZ_{p^k}}\sum_{d \in \ZZ_{p^{k-1}}} \mathrm{e}^{-\frac{2\pi i n(n-1) d}{p^{k-1}}}\right) \\ &=\frac{1}{p^k}\left(\sum_{n \in \ZZ_{p^k}} p^k \delta_{p^k \mid n(n-1)}-\sum_{n \in \ZZ_{p^k}} p^{k-1} \delta_{p^{k-1} \mid n(n-1)}\right)\ . \end{align} Now $p^k \mid n(n-1)$ precisely for $n=0$ or $n=1 \in \ZZ_{p^k}$. The same reasoning applies in the second term, where $p^{k-1} \mid n(n-1)$ when $n=r p^{k-1}$ or $n=r p^{k-1}+1$ with $r=0,\dots,p-1$. For $k \ge 2$ these are $2p$ possibilities, whereas for $k=1$, these are only $p$ possibilities. Thus \begin{equation} F(p^k)=\frac{1}{p^k}\left(2p^k-(2-\delta_{k,1})\, p \times p^{k-1}\right)= \delta_{k,1} = |\mu(p^k)|\ . \end{equation} Hence, by multiplicativity of $F$ and of the M\"obius function, we conclude \begin{equation} F(c)=|\mu(c)| \end{equation} for any positive integer $c$. We can then evaluate \begin{align} \sum_{c=1}^\infty \frac{|\mu(c)|}{c^4}&=\prod_{p \in \PP} \sum_{k=0}^\infty |\mu(p^k)| p^{-4k} \\ &=\prod_{p \in \PP} (1+p^{-4})\\ &=\prod_{p \in \PP} \frac{1-p^{-8}}{1-p^{-4}}=\frac{\zeta(4)}{\zeta(8)}=\frac{105}{\pi^4}\ , \label{eq:L function abs mu} \end{align} where we used multiplicativity of $|\mu(c)|$ and the fact that every integer can be uniquely written as the product of its prime factors. We also used the Euler product of the Riemann zeta-function, \begin{equation} \zeta(\sigma)=\prod_{p \in \PP} (1-p^{-\sigma})^{-1}\ . \end{equation} We thus get \begin{align} \sum_{c=1}^\infty \sum_{d \in \ZZ_c^\times} \frac{\pi^2 G(-d,-d,c)}{210 c^5}=\frac{\pi^2}{210} \sum_{c=1}^\infty \frac{|\mu(c)|}{c^4}=\frac{1}{2\pi^2}\ , \end{align} which demonstrates eq.~\eqref{eq:L function identity s=1}.
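Both steps of this argument are straightforward to confirm numerically. A minimal sketch (in Python; brute-force sums with illustrative cutoffs):
\begin{verbatim}
from math import gcd, pi
import cmath

def F(c):
    # F(c) = (1/c) sum_{d in Z_c^x} sum_{n=1}^{c} e^{-2 pi i n(n-1) d/c}
    total = sum(cmath.exp(-2j * pi * n * (n - 1) * d / c)
                for d in range(1, c + 1) if gcd(d, c) == 1
                for n in range(1, c + 1))
    return (total / c).real          # the sum is real

def mu_abs(c):
    # |mu(c)|: 1 if c is squarefree and 0 otherwise
    p, n = 2, c
    while p * p <= n:
        if n % (p * p) == 0:
            return 0
        if n % p == 0:
            n //= p
        p += 1
    return 1

assert all(abs(F(c) - mu_abs(c)) < 1e-8 for c in range(1, 80))
print(sum(mu_abs(c) / c**4 for c in range(1, 10000)), 105 / pi**4)
\end{verbatim}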
\subsubsection{\label{subsec:higher-values-of-s}Higher values of \texorpdfstring{$s$}{s}} For higher decay widths we can proceed similarly. We define more generally \begin{align} F_s^{m_\mathrm{D},m_\U}(c)&=\frac{1}{c} \sum_{d \in \ZZ_c^\times} \mathrm{e}^{-\frac{2\pi i m_\U d}{c}} G(-ds,d(m_\mathrm{D}-m_\U-s),c) \\ &= \frac{1}{c}\sum_{d \in \ZZ_c^\times}\sum_{n=0}^{c-1} \mathrm{e}^{-\frac{2\pi i d(sn(n+1)-m_\mathrm{D} n+m_\U(n+1))}{c}}\ , \end{align} which again agrees with the definition \eqref{eq:definition F overview} except for $c=1$ and $c=2$. The same argument as for $s=1$, $m_\mathrm{D}=m_\U=0$ shows that $F_s^{m_\mathrm{D},m_\U}(c)$ is a multiplicative function. Thus it suffices again to compute $F_s^{m_\mathrm{D},m_\U}(c)$ on prime powers. Proceeding as before, this gives \begin{align} F_s^{m_\mathrm{D},m_\U}(p^k)&=\frac{1}{p^k}\sum_{n \in \ZZ_{p^k}} \left(\sum_{d \in \ZZ_{p^k}}-\sum_{d \in \ZZ_{p^k},\, d \equiv 0 \bmod p}\right) \mathrm{e}^{-\frac{2\pi i d(sn(n+1)-m_\mathrm{D} n+m_\U(n+1))}{p^k}} \\ &=\sum_{n \in \ZZ_{p^k}} \left(\delta_{p^k \mid sn(n+1)-m_\mathrm{D} n+m_\U(n+1)} - \frac{1}{p} \delta_{p^{k-1} \mid sn(n+1)-m_\mathrm{D} n+m_\U(n+1)}\right)\\ &=\sum_{n \in \ZZ_{p^k}} \delta_{p^k \mid sn(n+1)-m_\mathrm{D} n+m_\U(n+1)}-\sum_{n \in \ZZ_{p^{k-1}}} \delta_{p^{k-1} \mid sn(n+1)-m_\mathrm{D} n+m_\U(n+1)}\ . \end{align} We hence need to count the number of solutions to the equation \begin{equation} sn^2+(s-m_\mathrm{D}+m_\U)n+m_\U \equiv 0 \bmod p^k\ . \label{eq:quadratic equation} \end{equation} This is done in Appendix~\ref{app:count number of solutions quadratic equation}. Let us note that the discriminant of this quadratic equation is given by \begin{equation} \Delta_{m_\mathrm{D},m_\U}=\big[s-(\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2\big]\big[s-(\sqrt{m_\mathrm{D}}-\sqrt{m_\U})^2\big]\ . \end{equation} Let us first consider the generic case, by which we mean that $\Delta_{m_\mathrm{D},m_\U} \not\equiv 0 \bmod p$, $p \ne 2$ and $s \not\equiv 0 \bmod p$. Let us denote the set of all the special primes for which this is not the case by $\PP_{s,m_\mathrm{D},m_\U}$. In the generic case, the number of solutions is independent of $k \ge 1$ and is given in terms of the Legendre symbol by \begin{equation} \jac{\Delta_{m_\mathrm{D},m_\U}}{p}+1\ . \end{equation} This implies that for a generic prime \begin{equation} F_s^{m_\mathrm{D},m_\U}(p^k)=\jac{\Delta_{m_\mathrm{D},m_\U}}{p} \delta_{k,1}=\jac{\Delta_{m_\mathrm{D},m_\U}}{p} |\mu(p^k)|\ . \end{equation} For the exceptional primes $\PP_{s,m_\mathrm{D},m_\U}$, the formula for the number of integer solutions is more complicated and explained in Appendix~\ref{app:count number of solutions quadratic equation}. It is sufficient here to know that since $\Delta_{m_\mathrm{D},m_\U} \ne 0$ by construction, the number of solutions always stabilizes for $k \ge k_0$ and thus $F_s^{m_\mathrm{D},m_\U}(p^k)=0$ for sufficiently high $k$. By multiplicativity, we can write the sum involved in the mass-shift in terms of an infinite product over primes, \begin{align} \sum_{c=1}^\infty \frac{F_s^{m_\mathrm{D},m_\U}(c)}{c^4}=\prod_{p \in \PP} \left(1+\jac{\Delta_{m_\mathrm{D},m_\U}}{p} p^{-4} \right) \!\!\prod_{p \in \PP_{s,m_\mathrm{D},m_\U}} \!\!\!\! \frac{\sum_{k\ge 0} F_s^{m_\mathrm{D},m_\U}(p^k)p^{-4k}}{1+\jac{\Delta_{m_\mathrm{D},m_\U}}{p} p^{-4}} \ .\label{eq:sum f to Dirichlet L function} \end{align} Since there are finitely many exceptions, the second factor on the right-hand side is easy to evaluate. It remains to evaluate the first factor. Here $\jac{\Delta}{c}$ appears with $c$ not necessarily odd. This constitutes a generalization of the Jacobi symbol known as the Kronecker symbol. Its definition in general involves several case distinctions, but we do not need all of them because $\Delta>0$ and $\Delta \equiv 0,\, 1 \bmod 4$. In this special case, the definition reads \begin{equation} \jac{\Delta}{c}=\jac{\Delta}{|c|}=\prod_{p \in \PP} \jac{\Delta}{p}^{k_p}\ , \end{equation} where $|c|=\prod_p p^{k_p}$ is the prime factorization of $|c|$.
Hence we only need to define \begin{equation} \jac{\Delta}{2}=\begin{cases} 0\ , \quad &\Delta \equiv 0 \bmod 2\ , \\ 1\ , \quad &\Delta \equiv \pm 1 \bmod 8\ , \\ -1\ , \quad &\Delta \equiv \pm 3 \bmod 8\ . \end{cases} \end{equation} The Kronecker symbol is periodic in $c$ with period $\Delta$ because of quadratic reciprocity (this requires $\Delta \equiv 0,\, 1 \bmod 4$). We evaluate for $\Delta \equiv 0,\, 1 \bmod 4$, \begin{align} \prod_{p \in \PP} \left(1-\jac{\Delta}{p} p^{-4} \right)^{-1} &= \sum_{c=1}^\infty \jac{\Delta}{c} \frac{1}{c^4} \\ &=\frac{1}{2} \sum_{c \in \ZZ \setminus \{0\}} \Res_{z=c}\frac{1}{z^4} \sum_{m=0}^{\Delta-1} \jac{\Delta}{m}\frac{\pi }{\Delta} \cot\left(\frac{\pi(z-m)}{\Delta}\right) \\ &= -\frac{1}{2}\Res_{z=0}\frac{1}{z^4} \sum_{m=1}^{\Delta-1} \jac{\Delta}{m} \frac{\pi }{\Delta}\cot\left(\frac{\pi(z-m)}{\Delta}\right) \\ &=\sum_{(m,\Delta)=1}\frac{\pi^4(2+\cos(\frac{2m \pi}{\Delta})) }{6 \Delta^4 \sin(\frac{m \pi}{\Delta})^4} \jac{\Delta}{m}\ . \end{align} We then finish the calculation by noting that \begin{align} \prod_{p \in \PP} \left(1+\jac{\Delta}{p} p^{-4}\right)&=\prod_{p,\, \Delta \not\equiv 0 \bmod p} \frac{1-p^{-8}}{1-\jac{\Delta}{p}p^{-4}} \\ &=\frac{1}{\zeta(8)}\prod_{p \, \mid\, \Delta} (1-p^{-8})^{-1}\sum_{(m,\Delta)=1}\!\!\frac{\pi^4(2+\cos(\frac{2m \pi}{\Delta})) }{6 \Delta^4 \sin(\frac{m \pi}{\Delta})^4} \jac{\Delta}{m}\ . \label{eq:evaluation Dirichlet L function} \end{align} Combining \eqref{eq:sum f to Dirichlet L function} and \eqref{eq:evaluation Dirichlet L function} allows us to compute the sums appearing in the imaginary part of the mass-shift. It is then simple to implement these formulas and check the required identities such as \eqref{eq:L function identity}. We checked that the corresponding identities hold up to $s=12$.
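For instance, the character sum can be compared directly against the finite closed form on the right-hand side. A minimal sketch (in Python; \texttt{legendre} and \texttt{kronecker} are illustrative helpers restricted to the case $\Delta>0$, $\Delta \equiv 0,1 \bmod 4$ relevant here):
\begin{verbatim}
from math import gcd, pi, sin, cos

def legendre(a, p):
    # Legendre symbol via Euler's criterion; p an odd prime
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def kronecker(D, c):
    # Kronecker symbol (D/c) for D > 0 with D = 0, 1 mod 4
    result, n, p = 1, c, 2
    while n > 1:
        if n % p == 0:
            n //= p
            if p == 2:
                if D % 2 == 0:
                    return 0
                result *= 1 if D % 8 in (1, 7) else -1
            else:
                result *= legendre(D, p)
        else:
            p += 1
    return result

D = 5      # a sample discriminant with D = 1 mod 4
lhs = sum(kronecker(D, c) / c**4 for c in range(1, 5000))
rhs = sum(pi**4 * (2 + cos(2*pi*m/D)) / (6 * D**4 * sin(pi*m/D)**4)
          for m in range(1, D) if gcd(m, D) == 1)
print(lhs, rhs)   # agree up to the truncation error of the c-sum
\end{verbatim}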
\section{Conclusions}\label{sec:conclusion} In this work, we revisited the formal expression \eqref{eq:1.1} describing scattering amplitudes of strings as integrals over the moduli space of Riemann surfaces with punctures. We converted it into a practical formula \eqref{eq:1.2} to compute one-loop four-point open-string amplitudes. It can be thought of as a sum over thin worldsheets with a given number of windings, where terms with more and more windings become more suppressed. This formula allowed us to compute the corresponding amplitudes at finite values of $\alpha'$ for the first time, as illustrated by the examples in Figures~\ref{fig:Ap-forward}, \ref{fig:fixed-angle-data}, and \ref{fig:ratios}. There are a couple of open questions that we were not able to fully resolve, as well as a number of future research directions, which we outline below. \paragraph{Convergence of the Rademacher expansion.} While we have provided, in our view, strong evidence for the convergence of the Rademacher expansion, we were unable to rigorously prove it. As we have also mentioned, the convergence properties deteriorate at low energies, since the phases in the Rademacher expansion tend to be close to unity and do not cancel out. At $s=t=0$ the convergence completely breaks down, which is the manifestation of the massless branch cut in our formula. In order to develop this formalism more systematically, it is of vital importance to understand the involved phases better. While the sawtooth function $\st{x}$ makes a frequent appearance in number theory, the sums in \eqref{eq:planar four-point function s-channel}, at least naively, are not easy to bound using standard number-theoretic techniques. One obviously would like to do better than simply arguing for the randomness of these phases for high values of $c$. We should mention that there are some cases in the literature where convergence of the Rademacher series for positive weight is established \cite{Cheng:2011ay}. \paragraph{Low-energy expansion.} A related issue is the important cross-check of making contact with the low-energy expansion of the amplitudes. There is a large body of literature studying the $\alpha'$ expansion of one-loop string amplitudes, see \cite{Broedel:2014vla, Broedel:2018izr, Mafra:2019ddf, Mafra:2019xms, Edison:2021ebi} for the open string in particular. It seems to be quite hard to extract the low-energy behaviour from the Rademacher formula, since this is the regime where convergence breaks down. It is also not possible to take $\alpha'$ derivatives of our formula and commute the derivatives with the infinite sum over $c$. Every such derivative makes the individual terms grow faster with $c$, and after a sufficient number of derivatives, the sum no longer converges. To better understand the involved subtlety, consider the infinite sum \begin{equation} \sum_{n\ne 0} \frac{1}{|n|} \mathrm{e}^{i n x}=- \log \big(4 \sin(\tfrac{x}{2})^2\big)\ , \label{eq:toy example sum} \end{equation} which has similar convergence properties to the sums that we encountered in this paper; the breakdown of convergence at $x=0$ also manifests itself as a logarithmic branch cut. Without knowing the right-hand side, it is equally challenging to extract the series expansion of the left-hand side around $x=0$, since the sum is divergent as soon as one takes a derivative in $x$ and commutes it with the infinite sum. \paragraph{Analytic continuation in $s$ and $t$.} The Rademacher expansion of the planar $s$-channel string amplitude given in eq.~\eqref{eq:planar four-point function s-channel} is only valid for physical kinematics. In fact, it does not even converge when we start to consider complex values of $s$ and $t$, since then the phases start to grow exponentially. However, as is well known, it is often fruitful to study the extension of the amplitude to complex Mandelstam invariants and study its analytic properties. In the context of string amplitudes, some analytic properties of the amplitude were studied in \cite{DHoker:1994gnm, Eberhardt:2022zay}, but a full understanding is missing. The fact that \eqref{eq:planar four-point function s-channel} cannot be easily analytically continued does not mean that such an analytic continuation does not exist. In fact, one might come to a similar conclusion in the toy example \eqref{eq:toy example sum}, but of course the right-hand side has a perfectly good analytic continuation with branch points at $x\in 2\pi \ZZ$, where the convergence of the sum breaks down. It would be very desirable to have a formula for the string amplitude that holds for arbitrary complex values of $s$ and $t$; or, short of that, a way to access the other branches of the amplitude for real values of $s$ and $t$. We expect that deforming the integration contour appropriately can achieve such a goal and plan to report on it elsewhere \cite{LorenzSebastian}. \paragraph{High-energy limit.} Since we now have explicit control over the amplitude at intermediate energies, it is natural to try to extract the asymptotics for very high energies from our formula.
The high-energy limit of string amplitudes was first analyzed by Gross and Mende \cite{Gross:1987kza} and Gross and Ma\~nes \cite{Gross:1989ge} in the case of the open string, where the integral over moduli space was evaluated using saddle-point techniques. As already mentioned in Section~\ref{sec:introduction}, performing this computation rigorously is currently out of reach and the results of \cite{Gross:1987kza,Gross:1989ge} should be viewed as a heuristic. As we saw in this work, the asymptotic formula of Gross and Ma\~nes seems to be true ``on average'', but there are a number of very complicated oscillations on top that seem hard to predict from a saddle-point point of view.\footnote{Incidentally, there seems to be a nice analogy with the recent discussion of wormholes in quantum gravity, where the saddle point gives the ``averaged'' contribution to some quantity such as the spectral form factor, while the true behaviour of the quantity has many erratic oscillations on top of this averaged smooth behaviour, see e.g.\ \cite{Cotler:2016fpe} and numerous follow-up works.} Our formula seems to open a different avenue to access the high-energy behaviour in detail, as we have already demonstrated numerically, see Figure~\ref{fig:fixed-angle-data}. A good understanding of the growth behaviour of the polynomials $Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})$ for large values of the parameters would give detailed analytic control over the high-energy limit of the amplitude and can hopefully make contact with the saddle-point evaluation. Additionally, the integration contour we proposed in this work can be taken as a starting point for a rigorous saddle-point analysis using methods of complex Morse theory. \paragraph{Other string topologies.} As an immediate question, one may ask how general the methods employed in this paper are. The logic generalizes straightforwardly to other open-string one-loop amplitudes with an arbitrary number of external vertex operators. However, the modular weight of the integrand is, at least naively, positive for $n \ge 5$ external vertex operators and convergence might become even more delicate than in the four-point case. We should however note that this might be misleading. Contrary to the four-point function, the five-point function does not admit a canonical integrand that should be integrated over $\mathcal{M}_{1,5}$, but instead several different representations that differ by total derivatives. It might be more illuminating to think of the integrand as living on the moduli space of super Riemann surfaces as described in \cite{Witten:2012bh, Witten:2013pra}, where the integrand has a canonical form. Since there is no non-trivial topology in the fermionic directions of moduli space, the contour deformation into the Rademacher contour straightforwardly extends to supermoduli space and its complexification. The extension to higher loops and closed strings at one loop is much less clear at this stage. One would expect the general logic, namely that one can derive an infinite sum representation for the string amplitude where every term is controlled by a degeneration in complexified moduli space, to continue to hold. To make such an expectation concrete, one needs a version of the Rademacher contour for other genera. For the open string at two loops, this seems to be possible.
In \cite{Cardoso:2021gfg, LopesCardoso:2022hvc}, the Rademacher expansion of $\Phi_{10}^{-1}$, the inverse of the Igusa cusp form at genus 2, was derived in the context of microstate counting of black holes. This is almost what we need for the partition function of the bosonic open string, in analogy to what we discussed in Section~\ref{subsec:Rademacher contour} at one loop. Indeed, the inverse of the Igusa cusp form is the partition function of 24 free bosons. However, for the closed string, even at genus 1, the mathematical technology needed for this computation is to our knowledge not available in the literature. In the simplest toy model for the partition function of the closed bosonic string, one wants to evaluate the integral \begin{equation} \int_{\mathcal{F}} \frac{\d^2 \tau}{(\Im \tau)^{14}\, |\eta(\tau)^{24}|^2} \label{eq:closed string partition function naive} \end{equation} over the fundamental domain. One again has to modify the contour near the cusp to implement the $i \varepsilon$ prescription. Let us set $\tau=x+y$ with $x \in \RR$ and $y \in i\RR$ on the real slice of moduli space. One then allows $x$ and $y$ to both be complex in order to pass to the complexification. The appropriate complexification of the moduli space is given by two copies of the upper half-plane modded out by a single diagonal modular group, $(\HH \times \HH)/\PSL(2,\ZZ)$. The proper contour for the integral \eqref{eq:closed string partition function naive}, analogous to the one discussed in Section~\ref{subsec:integration contour}, is then \begin{multline} \int_{\Gamma} \frac{\d^2 \tau}{(\Im \tau)^{14}\, |\eta(\tau)^{24}|^2}=\int_{\mathcal{F}_L} \frac{\d^2 \tau}{(\Im \tau)^{14}\, |\eta(\tau)^{24}|^2}\\ +\int_{-\frac{1}{2}}^{\frac{1}{2}} \d x \int_{ iL-\RR_{\ge 0}} \d y\ \frac{1}{y^{14} \eta(x+y)^{24}\eta(-x+y)^{24}}\ , \end{multline} which is indeed convergent. Here, $\mathcal{F}_L$ is the usual fundamental domain cut off at $\Im \tau=L$ that also often features in other regularizations of the integral over moduli space. While the integral can easily be evaluated numerically (and equals roughly $29399.1+98310i$), we are not aware of any exact analytic evaluation of it. See however \cite{Korpas:2019ava} for a term-by-term evaluation in the $q$-expansion, \cite{Lerche:1987qk} for the evaluation of integrals of holomorphic modular functions using the transformation property of the Eisenstein series $E_2$, and \cite{Angelantonj:2013eja, Florakis:2016boz} for the application of the Rankin--Selberg method to similar integrals.
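For orientation, the deformed contour can be evaluated with a few lines of code. The following rough sketch (in Python; the identification $\d^2\tau = \d x\, \d\tilde{y}$ with $y=i\tilde{y}$, the orientation of the cusp contour from $iL$ towards $iL-\infty$, and the coarse grids are all assumptions and illustrative choices that have to be matched against the conventions above) also shows why the cusp piece converges: along $y=iL-t$ the factor $y^{-14}$ supplies power-law decay, while $|q|=\mathrm{e}^{-2\pi L}$ stays fixed.
\begin{verbatim}
import numpy as np

def eta24(tau, nmax=80):
    # eta(tau)^24 = q * prod_{n>=1} (1 - q^n)^24 with q = e^{2 pi i tau}
    q = np.exp(2j * np.pi * tau)
    return q * np.prod((1 - q ** np.arange(1, nmax)) ** 24)

def trapz(vals, xs):
    vals, xs = np.asarray(vals), np.asarray(xs)
    return np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(xs))

L, xs = 2.0, np.linspace(-0.5, 0.5, 201)

# bulk piece: truncated fundamental domain {|x| <= 1/2, |tau| >= 1, Im tau <= L}
def bulk_slice(x):
    ys = np.linspace(np.sqrt(max(1 - x**2, 0)), L, 201)
    return trapz([1 / (y**14 * abs(eta24(x + 1j*y))**2) for y in ys], ys)

bulk = trapz([bulk_slice(x) for x in xs], xs)

# cusp piece along y = iL - t with t >= 0
ts = np.linspace(0, 40, 1601)
def cusp_slice(x):
    vals = [1 / ((1j*L - t)**14 * eta24(x + 1j*L - t) * eta24(-x + 1j*L - t))
            for t in ts]
    return -trapz(vals, ts)   # dy = -dt along the assumed orientation

cusp = trapz([cusp_slice(x) for x in xs], xs)
# to be compared (conventions permitting) with the quoted ~29399.1 + 98310i
print(bulk + cusp)
\end{verbatim}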
\paragraph{Oscillations and relation to chaos.} We have seen numerically that the real part of the one-loop amplitude features many seemingly erratic oscillations, see Figures \ref{fig:Ap-forward} and \ref{fig:fixed-angle-data}. The meaning of these oscillations from a scattering-amplitudes point of view is not entirely clear, since there are very few consistency checks that can be performed on the real part of the one-loop amplitude without knowing further data. In particular, it is not directly constrained by unitarity. Some constraints are imposed by the analytic structure, which allow one to compute dispersion relations. We will report on these elsewhere \cite{LorenzSebastian}. One perspective on these amplitudes is from the point of view of chaos. Going to stronger coupling will eventually make contact with black hole physics (although black hole effects are non-perturbative in the string coupling, of the form $\mathcal{O}(\mathrm{e}^{-1/g_\text{s}^2})$, and thus not necessarily visible in string perturbation theory). Such a view of tree-level scattering amplitudes with one or more heavy external states was advocated in \cite{Gross:2021gsj}. We believe that the one-loop amplitude is a much better probe of such chaotic behaviour since it involves arbitrarily massive internal states. It would be interesting to make this link more precise. \acknowledgments We thank Nima Arkani-Hamed, Pinaki Banerjee, Simon Caron-Huot, Eric D'Hoker, Aaron Hillman, Abhiram Kidambi, Juan Maldacena, Giulio Salvatori, Oliver Schlotterer, and Gabriele Veneziano for useful discussions. L.E. and S.M. are supported by the grant DE-SC0009988 from the U.S. Department of Energy. S.M. gratefully acknowledges funding provided by the Sivian Fund. \section{\label{sec:introduction}Introduction} Superstring perturbation theory instructs us to compute scattering amplitudes $\A_{g,n}$ as integrals of correlation functions of vertex operators over the moduli space $\mathcal{M}_{g,n}$ of genus-$g$ Riemann surfaces with $n$ punctures. Schematically, the formula encountered in textbooks on string theory is \begin{equation}\label{eq:1.1} \mathcal{A}_{g,n} \sim \int_{\mathcal{M}_{g,n}} \!\!\! \left< \mathcal{V}_1(z_1) \mathcal{V}_2(z_2) \cdots \mathcal{V}_n(z_n) \right>\, \d \mu_{g,n}\, , \end{equation} where $\mathcal{V}_i(z_i)$ are vertex operators inserted at positions $z_i$ and $\d\mu_{g,n}$ denotes the measure on $\mathcal{M}_{g,n}$ involving the $z_i$'s and the surface moduli \cite{Polyakov:1981rd, Polchinski:1998rq}. Recall that the moduli space $\mathcal{M}_{g,n}$ is $(3g{+}n{-}3)$-dimensional, real or complex depending on whether we deal with open or closed strings, respectively. It is well-known, albeit not often emphasized, that the above prescription is only approximately correct and \eqref{eq:1.1} is ill-defined. The problem with \eqref{eq:1.1} has a physical origin. It can be traced back to the fact that the target space is Lorentzian, while the worldsheet theory is Euclidean. Of course, the reason to insist on a Euclidean worldsheet is so that we can use the powerful tools of two-dimensional CFTs and avoid spurious singularities that would come with a Lorentzian worldsheet \cite{Mandelstam:1973jk}. The price we have to pay, however, is that certain ambiguities related to causal and unitary propagation of strings in space-time (which in quantum field theory are addressed by the Feynman $i\varepsilon$ prescription) remain unresolved. This can already be seen in explicit examples, such as $g=1$ and $n=4$, where \eqref{eq:1.1} is purely real and hence cannot be consistent with unitarity via the optical theorem. Recall that the question of unitarity in the target space is separate from that of unitarity of the worldsheet theory. Witten proposed to cure this problem by zooming in on the boundaries of the moduli space $\mathcal{M}_{g,n}$ corresponding to Riemann surfaces degenerating to Feynman diagrams and resolving the aforementioned ambiguities by requiring consistency with the field-theory $i\varepsilon$ prescription \cite{Witten:2013pra}.
Let us focus on the open-string case, where one can think of the open-string moduli space $\mathcal{M}_{g,n}$ as a contour embedded in its complexification $\mathcal{M}_{g,n}(\mathbb{C})$.\footnote{Here, $\mathcal{M}_{g,n}(\mathbb{C})$ denotes the complexification of the open string moduli space. It is in general a cover of the corresponding closed string moduli space.} The task is then to prescribe an integration contour that approximately coincides with $\mathcal{M}_{g,n}$ in the bulk of moduli space, but is otherwise designed to implement the Witten $i\varepsilon$ near the boundaries. The subject of this paper is a concrete realization of this idea. Problems with the integration contour are somewhat milder after viewing string theory as an effective field theory and committing to the $\alpha'$-expansion. In fact, virtually all computations of string amplitudes are done this way, see, e.g., \cite{Green:1987mn,DHoker:1988pdl, Schlotterer:2011psa, Gerken:2020xte,10.1007/978-3-030-37031-2_3,10.1007/978-3-030-37031-2_4,Berkovits:2022ivl,Mafra:2022wml} for reviews. By contrast, in this work, we are interested in exploring intrinsically stringy properties of amplitudes and hence work at finite $\alpha'$, where we need to face the aforementioned difficulties. The simplest case in which \eqref{eq:1.1} needs to be corrected is already at genus zero (for any $n \geq 4$), but in a sense ``anything goes'' and the precise rerouting of the contour does not affect the final answer. This is related to the fact that $\A_{0,n}$ is a tree-level amplitude and hence does not have any branch cuts. Likewise, at higher genus the part of the integration contour lying in the $z_i$ coordinates is easily fixable, but the part lying in the directions of the Riemann surface moduli needs additional work. The simplest interesting case is therefore $g=1$ and $n=4$, where $\mathcal{M}_{1,4}$ depends on the modular parameter $\tau$ in addition to the positions of the punctures. Describing and manipulating this contour constitute the main results of this paper. We focus on the simplest case of open strings, including annulus and M\"obius strip topologies.
\begin{figure} \centering \begin{tikzpicture} [scale=2.5] \begin{scope} \draw [light-gray] (1.1,0) -- (-1.1,0); \draw [light-gray] (0,0) -- (0,2); \draw [light-gray,dashed] (0,2) -- (0,2.5); \node at (-1,-0.15) {$-1$}; \node at (1,-0.15) {$1$}; \node at (0,-0.15) {$0$}; \node at (0.5,-0.15) {$\frac{1}{2}$}; \node at (-0.5,-0.15) {-$\frac{1}{2}$}; \node at (1.195,2.63) {$\tau$}; \draw (1.1,2.7) -- (1.1,2.55) -- (1.25,2.55); \draw [light-gray] (0,0) arc [radius=1, start angle=0, end angle= 90]; \draw [light-gray] (1,0) arc [radius=1, start angle=0, end angle= 180]; \draw [light-gray] (1,1) arc [radius=1, start angle=90, end angle= 180]; \draw [light-gray] (0.5,0) -- (0.5,0.866); \draw [light-gray] (-0.5,0) -- (-0.5,0.866); \draw [light-gray] (0.5,0.866) -- (0.5,2); \draw [light-gray] (-0.5,0.866) -- (-0.5,2); \draw [light-gray,dashed] (0.5,2) -- (0.5,2.5); \draw [light-gray,dashed] (-0.5,2) -- (-0.5,2.5); \draw [light-gray] (0.5,2.5) -- (0.5,2.7); \draw [light-gray] (-0.5,2.5) -- (-0.5,2.7); \draw [light-gray] (0,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (-0.334,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (0.666,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (0,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (-0.608,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (0.392,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (0,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (-0.716,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (0.284,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (0,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (-0.784,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (0.216,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (0.5,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (-0.5,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (0.744,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (-0.256,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (0.5,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (-0.5,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (0.620,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (-0.380,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (0.666,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (0.412,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (-0.588,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (-0.334,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (0.334,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw [light-gray] (0.790,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw [light-gray] (-0.666,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw [light-gray] (-0.210,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw[line 
width=0.5mm, Maroon] (0,0) -- (0,2.7); \draw[line width=0.5mm, Maroon, ->] (0,0.4) -- (0,1.5); \draw[line width=0.5mm, Maroon] (0.5,0) -- (0.5,2.7); \draw[line width=0.5mm, Maroon, ->] (0.5,2.7) -- (0.5,1.4); \end{scope} \end{tikzpicture} \qquad \begin{tikzpicture} [scale=2.5] \begin{scope} \draw [light-gray] (1.1,0) -- (-1.1,0); \draw [light-gray] (0,0) -- (0,2); \draw [light-gray,dashed] (0,2) -- (0,2.5); \node at (-1,-0.15) {$-1$}; \node at (1,-0.15) {$1$}; \node at (0,-0.15) {$0$}; \node at (0.5,-0.15) {$\frac{1}{2}$}; \node at (-0.5,-0.15) {-$\frac{1}{2}$}; \node at (1.195,2.63) {$\tau$}; \draw (1.1,2.7) -- (1.1,2.55) -- (1.25,2.55); \draw [light-gray] (0,0) arc [radius=1, start angle=0, end angle= 90]; \draw [light-gray] (1,0) arc [radius=1, start angle=0, end angle= 180]; \draw [light-gray] (1,1) arc [radius=1, start angle=90, end angle= 180]; \draw [light-gray] (0.5,0) -- (0.5,0.866); \draw [light-gray] (-0.5,0) -- (-0.5,0.866); \draw [light-gray] (0.5,0.866) -- (0.5,2); \draw [light-gray] (-0.5,0.866) -- (-0.5,2); \draw [light-gray,dashed] (0.5,2) -- (0.5,2.5); \draw [light-gray,dashed] (-0.5,2) -- (-0.5,2.5); \draw [light-gray] (0.5,2.5) -- (0.5,2.7); \draw [light-gray] (-0.5,2.5) -- (-0.5,2.7); \draw [light-gray] (0,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (-0.334,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (0.666,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.333, start angle=0, end angle= 180]; \draw [light-gray] (0,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (-0.608,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (0.392,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.196, start angle=0, end angle= 180]; \draw [light-gray] (0,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (-0.716,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (0.284,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.142, start angle=0, end angle= 180]; \draw [light-gray] (0,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (-0.784,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (0.216,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (1,0) arc [radius=0.108, start angle=0, end angle= 180]; \draw [light-gray] (0.5,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (-0.5,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (0.744,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (-0.256,0) arc [radius=0.122, start angle=0, end angle= 180]; \draw [light-gray] (0.5,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (-0.5,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (0.620,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (-0.380,0) arc [radius=0.060, start angle=0, end angle= 180]; \draw [light-gray] (0.666,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (0.412,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (-0.588,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (-0.334,0) arc [radius=0.039, start angle=0, end angle= 180]; \draw [light-gray] (0.334,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw [light-gray] (0.790,0) arc 
[radius=0.062, start angle=0, end angle= 180]; \draw [light-gray] (-0.666,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw [light-gray] (-0.210,0) arc [radius=0.062, start angle=0, end angle= 180]; \draw[line width=0.5mm, Maroon] (0,0.4) -- (0,2); \draw[line width=0.5mm, Maroon, ->] (0,0.4) -- (0,1.5); \draw[line width=0.5mm, Maroon] (0,0) arc (-90:90:0.2); \draw[line width=0.5mm, Maroon] (0.5,0.4) -- (0.5,2); \draw[line width=0.5mm, Maroon, ->] (0.5,2) -- (0.5,1.4); \draw[line width=0.5mm, Maroon] (0.5,0) arc (-90:90:0.2); \draw[line width=0.5mm, Maroon] (0,2) -- (0.5,2); \draw[line width=0.5mm, Maroon, ->] (0,2) -- (0.3,2); \draw[line width=0.5mm, Maroon] (0,2.2) -- (0.5,2.2); \draw[line width=0.5mm, Maroon, <-] (0.2,2.2) -- (0.5,2.2); \draw[line width=0.5mm, Maroon] (0,2.2) -- (0,2.7); \draw[line width=0.5mm, Maroon, ->] (0,2.2) -- (0,2.5); \draw[line width=0.5mm, Maroon] (0.5,2.2) -- (0.5,2.7); \draw[line width=0.5mm, Maroon, ->] (0.5,2.7) -- (0.5,2.4); \node at (0.7,2.4) {$\color{Maroon}\gamma$}; \node at (0.7,1.4) {$\color{Maroon}\Gamma$}; \end{scope} \end{tikzpicture} \caption{\label{fig:Gamma}Contours of integration in the upper half-plane of the modular parameter $\tau$. \textbf{Left:} Textbook contour corresponding to integration over the annulus ($i\mathbb{R}$) and M\"obius strip ($\frac{1}{2} + i \mathbb{R}$) topologies. The gray lines carving out the fundamental domain and its images are there only to guide the eye. \textbf{Right:} The integration contour consistent with causality and unitarity. The part approaching infinity can be evaluated exactly and integrating over $\Gamma$ is the main challenge addressed in this paper.} \end{figure} Let us first recall the textbook definition of the integration contour used in \eqref{eq:1.1} for the planar one-loop open-string amplitude. The integrand of \eqref{eq:1.1} is known explicitly and will be reviewed later in Section~\ref{subsec:basic amplitudes}. As mentioned above, the most critical part of the contour lies in the $\tau$-plane, see Figure~\ref{fig:Gamma} (left). The integration over the annulus corresponds to the contour on the imaginary axis $i\mathbb{R}$ directed upwards. By itself, the integral over this contour is divergent as $\tau \to i\infty$, which corresponds to the annulus becoming thick and looking like a disk with a single closed-string emission at zero momentum. In type I superstring, this divergence is cured by the M\"obius strip topology. It turns out that it can be described by exactly the same integrand as the annulus, except the contour needs to be shifted to $\frac{1}{2} + i\mathbb{R}$ and its orientation reversed. The two parts of the contour meet at infinity and cancel the divergence provided the gauge group is chosen to be $\SO(32)$. The problems with the integration contour occur near $\tau =0$ and $\tau=\frac{1}{2}$, where the annulus and the M\"obius strip become very thin and look like Feynman diagrams. This is precisely the part of the contour that has to be appropriately modified. It turns out that the prescription consistent with the Witten $i\varepsilon$ is to choose the contour with two semi-circles illustrated in Figure~\ref{fig:Gamma} (right). Details behind this construction will be given in Section~\ref{subsec:integration contour}. Since the integrand is holomorphic in the upper half-plane of $\tau$, the resulting contour can be freely deformed. We used this fact to split it into the part $\gamma$ enclosing $i\infty$ and the rest, which we call $\Gamma$.
The former can easily be evaluated in terms of derivatives of the tree-level amplitude, so the main focus will be on $\Gamma$. Note that the size of the semi-circles in $\Gamma$ does not matter. A similar contour can be designed for the non-planar scattering amplitudes, see Section~\ref{subsec:integration contour} for details. We stress that this contour is \emph{not} related by a contour deformation to the original contour. The reason is that the correlation function (the integrand) has essential singularities at every rational $\tau$ and hence the usual Cauchy contour deformation arguments do not apply there. Instead, $\gamma \cup \Gamma$ should be treated as a new proposal for the integration contour in the $\tau$-space. A more precise description of the contour on the whole complex moduli space $\mathcal{M}_{1,n}(\mathbb{C})$ will be given in \cite{LorenzSebastian}. As a matter of fact, the essential singularities are another way of seeing that the contour on the left of Figure~\ref{fig:Gamma} could not have been the correct one: it simply gives a divergence close to the real axis because of the bad direction of approach at the singularities. In a previous publication \cite{Eberhardt:2022zay}, we have already checked implicitly that the above contour is needed for consistency with unitarity at $n=4$. More precisely, if $\Gamma$ corresponds to the choice of $+i\varepsilon$ prescription, the $-i\varepsilon$ version would be described by a similar contour with both half-circles bulging to the left. The imaginary part of the amplitude, which is proportional to the difference between the two choices, would be given by integrating over two circles anchored at $\tau = 0$ and $\tau=\frac{1}{2}$. After additional massaging, this contour gives rise to an explicit and convergent integral representation of the imaginary part $\Im \A_{1,4}$. We checked that this answer is in perfect agreement with computing the imaginary part using unitarity cuts. We refer the reader to \cite{Eberhardt:2022zay} for more details. Direct integration over the $n$-dimensional contour involving $\Gamma$ and the $z_i$ moduli using generalizations of the Pochhammer contour will be presented elsewhere \cite{LorenzSebastian}. For $n=4$, this gives a $4$-dimensional contour. But the bottom line of the above discussion of the imaginary part is that, in a sense, integrating over the circles anchored at any rational $\tau$ is already a solved problem and it leads to $2$-dimensional integrals. Computing $\A_{1,n}$ can therefore be made more efficient if we manage to deform $\Gamma$ into a collection of circles. A realization of this idea is inspired by the beautiful work of Rademacher \cite{Rademacher, RademacherZuckerman}, who employed a similar deformation to provide convergent infinite-sum representations of the Fourier coefficients of certain modular forms. Starting with $\Gamma$, an iterative process described in Section~\ref{subsec:Rademacher contour} allows us to deform it into the contour $\Gamma_\infty$ shown in Figure~\ref{fig:Rademacher}. We call it the \emph{Rademacher contour}.\footnote{Versions of the Rademacher contour appeared previously in many other areas of high-energy theory, see e.g.\ \cite{Dijkgraaf:2000fq,Moore:2004fg, Denef:2007vg, Alday:2019vdr} for an incomplete cross-section.} It consists of an infinite number of Ford circles $C_{a/c}$ touching the real axis at rational points $\tau = \frac{a}{c}$ for every such fraction between $0$ and $\frac{1}{2}$. Each circle has a radius $\frac{1}{2c^2}$.
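To make the geometry of $\Gamma_\infty$ concrete before describing how it is used, the following minimal sketch (ours, purely illustrative) enumerates the circles entering the contour up to a cutoff on $c$:
\begin{verbatim}
# Enumerate the Ford circles C_{a/c} of the Rademacher contour: one circle for
# each irreducible fraction a/c in (0, 1/2], tangent to the real axis at a/c
# and of radius 1/(2c^2).
from math import gcd

def ford_circles(c_max):
    for c in range(2, c_max + 1):
        for a in range(1, c // 2 + 1):
            if gcd(a, c) == 1:
                radius = 1 / (2 * c**2)
                yield a, c, complex(a / c, radius), radius

for a, c, center, radius in ford_circles(5):
    print(f"C_{a}/{c}: center = {center}, radius = {radius}")
\end{verbatim}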
The point is that each of them can be manipulated into a convergent integral expression. Roughly speaking, the smaller the circle, the smaller its contribution. Therefore, however insane the Rademacher contour might look, it leads to an explicit formula for computing the amplitude. \begin{figure} \centering \begin{tikzpicture} \begin{scope} \node at (10.2,4.9) {$\tau$}; \draw (10.4,4.7) -- (10.0,4.7) -- (10.0,5.1); \tikzmath{ int \p, \q; for \q in {2,...,50}{ for \p in {1,...,\q/2}{ if gcd(\p,\q) == 1 then { \f = 16*\p/\q; \r = 8/(\q*\q); { \draw[line width=0.5mm, black!30!white, Maroon] (\f,\r) circle(\r); }; }; }; }; } \draw[ultra thick, Maroon, ->] (3.52,0.39) arc (192.7:100:.5); \draw[ultra thick, Maroon, ->] (4.48,0.64) arc (196.3:100:0.888); \draw[ultra thick, Maroon, ->] (6.62,0.55) arc (226.4:100:2); \node at (0,-.4) {$0$}; \node at (8,-.4) {$\frac{1}{2}$}; \node at (5.33,-.4) {$\frac{1}{3}$}; \node at (4,-.4) {$\frac{1}{4}$}; \node at (3.2,-.4) {$\frac{1}{5}$}; \node at (6.4,-.4) {$\frac{2}{5}$}; \node at (2.67,-.4) {$\frac{1}{6}$}; \node at (2.28,-.4) {$\frac{1}{7}$}; \node at (4.57,-.4) {$\frac{2}{7}$}; \node at (6.86,-.4) {$\frac{3}{7}$}; \node at (1.2,-.4) {$\cdots$}; \node at (8,2) {$C_{1/2}$}; \node at (5.33,0.8) {$C_{1/3}$}; \node at (4,0.45) {\scalebox{0.8}{$C_{1/4}$}}; \node at (3.2,0.3) {\scalebox{0.6}{$C_{1/5}$}}; \node at (1.5,0.6) {$\color{Maroon}\Gamma_\infty$}; \end{scope} \end{tikzpicture} \caption{\label{fig:Rademacher}Rademacher contour $\Gamma_\infty$ in the $\tau$-plane is a sum of infinitely many Ford circles $C_{a/c}$ for all irreducible fractions $\frac{a}{c} \in (0,\frac{1}{2}]$. Each circle touches the real axis at $\tau = \frac{a}{c}$, has radius $\frac{1}{2c^2}$, and is oriented clockwise.} \end{figure} The analysis becomes rather complicated due to the infinite number of branches of the integrand. After the dust settles, the final answer for the planar type I superstring amplitude $A^{\text{p}}$ takes the following form: \begin{equation}\label{eq:1.2} A^{\text{p}}(s,t) = \Delta A^{\text{p}}(s,t) + \sum_{\substack{\mathrm{irreducible}\\ \mathrm{fractions}\\ 0 < \frac{a}{c} \leq \frac{1}{2}}} \sum_{\begin{subarray}{c}\mathrm{windings}\\ n_\L,n_\mathrm{D},n_\mathrm{R},n_\U \ge 0 \\ n_\L+n_\mathrm{D}+n_\mathrm{R}+n_\U=c-1 \end{subarray}}A^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}_{a/c}(s,t)\ . \end{equation} The first contribution is the result of integrating over $\gamma$ and can be written explicitly in terms of derivatives of the Veneziano amplitude, see eq.~\eqref{eq:Delta A planar}. The second term involves a sum over all irreducible fractions $\frac{a}{c}$ between $0$ and $\frac{1}{2}$, as dictated by the Rademacher contour $\Gamma_\infty$. For each of them, we sum over a finite number of integers $n_\L$, $n_\mathrm{D}$, $n_\mathrm{R}$, $n_\U$ that depend on how the four punctures are placed relative to one another. We can interpret these terms more physically as winding numbers in the following way. Close to the real axis of $\tau$, Riemann surfaces become very skinny and look like worldlines. Consider the $s$-channel, in which the external particles $1$ and $2$ are incoming and hence are placed at past infinity, and likewise $3$ and $4$ are at future infinity since they are outgoing. We can separate them by an imagined space-like cut surface. Embedding the worldline in spacetime, the color lines of the annulus (near $\tau = \frac{0}{1}$) would go through the cut surface twice: on the way in and out.
However, those of the M\"obius strip (near $\tau = \frac{1}{2}$) would do it four times since it needs an extra winding. As a generalization, close to the point $\tau = \frac{a}{c}$, the color lines do exactly $c{-}1$ extra windings. Moreover, we can count how many extra windings occur between every pair of punctures, starting from the $(1,2)$ and ending on $(4,1)$. Let us call these numbers $(n_\L, n_\mathrm{D}, n_\mathrm{R}, n_\U)$. They have to add up to $c{-}1$. An example is given in Figure~\ref{fig:windings}. This gives an interpretation of every term in the sum \eqref{eq:1.2}. \begin{figure} \centering \vspace{-4em} \begin{tikzpicture}[ CoilColor/.store in=\coilcolor,CoilColor=black, Step/.store in=\Step,Step=0.1, Coil/.style={ double=black, draw=gray!50, decoration={ #1, segment length=3mm, coil }, decorate, }, Coil2/.style={ decorate, decoration={ markings, mark= between positions 0 and 1 step \Step with { \begin{scope}[yscale=#1] \draw[xshift=9.2,fill,\coilcolor!70!black] (0,0)++(-135: 0.2 and 0.4) .. controls +(-0.2,0) and +(-0.3,0) .. (90: 0.2 and 0.4) .. controls +(-0.33,0) and +(-0.23,0) .. (-135: 0.2 and 0.4); \draw[white,line width=2pt] (0,0)++(90: 0.2 and 0.4) .. controls +(0.3,0) and +(0.2,0) .. (-45: 0.2 and 0.4); \draw[fill=\coilcolor,\coilcolor] (0,0)++(90: 0.2 and 0.4) .. controls +(0.3,0) and +(0.2,0) .. (-45: 0.2 and 0.4) .. controls +(0.25,0) and +(0.35,0) .. (90: 0.2 and 0.4); \end{scope} } } } ] \draw[Coil2=3.5,CoilColor=black] (1,-4) -- ++ (0,3); \draw [line width=0.2mm, bend right=130, looseness=3] (-0.4,-0.98) to (-0.4,-3.99); \draw (-2.27,-1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.57,-1.5) {1}; \draw (-2.27,-3.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.57,-3.5) {2}; \draw (2,-3.55) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.3,-3.55) {3}; \draw (2,-1.45) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.3,-1.45) {4}; \draw[dashed, very thick, Maroon] (0.8,-0.5) to (0.8,-4.6); \end{tikzpicture} \vspace{-4em} \caption{\label{fig:windings}Example contribution to the sum \eqref{eq:1.2} with $(n_\L, n_\mathrm{D}, n_\mathrm{R}, n_\U) = (0,1,7,1)$ and hence $c=10$. Time flows to the right and the cut (dashed line) separates the punctures with incoming ($1$ and $2$) and outgoing ($3$ and $4$) momenta. The four integers $(n_\L, n_\mathrm{D}, n_\mathrm{R}, n_\U)$ count the number of extra windings across the cut between the punctures $1$ and $2$, $2$ and $3$, $3$ and $4$, $4$ and $1$, respectively.} \end{figure} The explicit expressions for $A^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}_{a/c}$ are given in Section~\ref{subsec:results}. In addition to the aforementioned winding numbers, they depend on the kinematic variables $(s,t)$ and we find they are proportional to $\frac{1}{c^5}$. Every such term is given by a convergent two-dimensional integral similar to the ones originating from the phase space integration for unitarity cuts. We argue that the whole sum in \eqref{eq:1.2} convergent for $s>0$, although the convergence is sufficiently fast for useful numerical evaluation when $s \gtrsim \frac{1}{\alpha'}$. For $s \to 0$, convergence breaks down. 
To understand the origin of this behavior, consider the toy model $A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}(s,t)=\frac{1}{c^5}\mathrm{e}^{i \alpha' s \phi}$, where $\phi$ is a ``random'' phase that can depend on the winding numbers, $\frac{a}{c}$, and the kinematics. This is a good model for the actual formula for $A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}(s,t)$ that we derive in this paper, see eq.~\eqref{eq:planar four-point function s-channel}. As we take $\alpha' s \ll 1$, the phases stop mattering and the number of terms in the four-fold sum (over $a,n_\L, n_\mathrm{D}, n_\mathrm{R}, n_\U$ subject to one constraint) in \eqref{eq:1.2} is $\mathcal{O}(c^4)$. Therefore, in the strict $\alpha' = 0$ limit, we end up with a harmonic sum of the type $\sum_{c=1}^{\infty} \frac{1}{c}$, which indeed diverges. Increasing $s$ helps with convergence. For example, if the phases $\phi$ were sufficiently random (as they seem to be in practice), they would give an ${\mathcal O}(\frac{1}{\sqrt{c}})$ suppression of each sum, thus making the series converge pretty well. Indeed, we prove that \eqref{eq:1.2} converges for every $s \in \mathbb{Z}_{>0}$.
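The dephasing mechanism behind this convergence is easy to see in a small numerical experiment (ours, purely illustrative): with all phases switched off, the partial sums grow like the harmonic series, while random phases cancel each block down to roughly the square root of its number of terms.
\begin{verbatim}
# Toy model for the convergence of the Rademacher sum: at each c there are
# O(c^4) terms of size 1/c^5. Coherent terms give a log-divergent harmonic sum;
# random phases suppress each block to ~ c^2/c^5 = 1/c^3.
import cmath
import math
import random

random.seed(0)

def partial_sum(c_max, random_phases):
    total = 0j
    for c in range(1, c_max + 1):
        block = sum(cmath.exp(1j * random.uniform(0, 2 * math.pi))
                    if random_phases else 1.0
                    for _ in range(c**4))
        total += block / c**5
    return total

print(abs(partial_sum(15, False)))  # grows like log(c_max): divergent
print(abs(partial_sum(15, True)))   # saturates: convergent
\end{verbatim}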
Analogous formulas can be derived in the non-planar case. The formula \eqref{eq:1.2} allows us to compute $A^{\mathrm{p}}(s,t)$ for given $s$ and $t$ and finally plot the amplitude. As an example, the results in the forward limit, $t=0$, are shown in Figure~\ref{fig:Ap-forward}. We plotted $A^\text{p}(s,0) \sin^2(\pi s)$ with the additional factor that removes double poles at every integer $s$ to make the plot readable. One can notice a few interesting features. The imaginary part dominates by around two orders of magnitude over the real part (in the figure, the former is multiplied by $\tfrac{1}{20}$ for readability). The imaginary part stays constant or increases slightly, while the real part oscillates around zero. We cross-checked our results with the unitarity cut computation \cite{Eberhardt:2022zay}, explicit evaluation using the generalized Pochhammer contour \cite{LorenzSebastian}, and computations of mass shifts, finding agreement in all cases. Details of the computations that went into Figure~\ref{fig:Ap-forward} are given in Section~\ref{subsec:numerical}. \begin{figure} \centering \includegraphics[scale=1.2]{figures/forward-data} \caption{\label{fig:Ap-forward}Planar open-string amplitude $A^\text{p}(s,t)$ in the forward limit $t=0$, plotted as a function of $s$. For the purpose of the plot, the amplitude is multiplied by $\sin^2(\pi s)$ in order to remove double poles. The real part is given in orange and the imaginary part (rescaled by $\tfrac{1}{20}$) is in blue. Faint vertical lines indicate values of $s$ at which a new threshold opens up. The error bars from extrapolation $c \to \infty$ are smaller than the line widths.} \end{figure} This interpretation in terms of winding numbers allows us to predict possible singularities of $A^\text{p}$. For example, a pole in the $s$-channel can only happen when the punctures $1$ and $2$ (or $3$ and $4$) are brought together. But this can only happen when $n_\L = 0$ (or $n_\mathrm{R} = 0$). Hence if one is interested in averaged mass shifts and decay widths of strings, which can be read off from the coefficient of the double pole at integer $s$, the computation simplifies quite dramatically. We use this fact to compute mass shifts and decay widths up to $s \leq 16$, see Appendix~\ref{app:mass-shifts}. Moreover, we observe that mass shifts and decay widths provide a rough estimate for the average behaviour of the amplitude. As a practical application, we used them to compute the high-energy fixed-angle behavior. This limit was previously studied using saddle-point methods in \cite{Gross:1989ge} (see also \cite{Gross:1987kza,Gross:1987ar} for closed strings), where evidence was found that the amplitude is suppressed as $\mathrm{e}^{-\alpha' S_{\mathrm{tree}}}$ as $\alpha' \to \infty$ with $s/t$ fixed, where $S_{\mathrm{tree}}$ is the tree-level on-shell action. But it is actually not possible to perform the saddle-point analysis correctly without knowing the original integration contour, so the discussion of \cite{Gross:1987kza,Gross:1987ar,Gross:1989ge} should be viewed only as a heuristic. It is thus interesting to study what the high-energy behavior looks like in practice, and now we have a great tool to do so. \begin{figure} \centering \includegraphics[scale=1.2]{figures/fixed-angle-data} \caption{\label{fig:fixed-angle-data}Exponential decay of the planar open-string amplitude in the high-energy fixed-angle limit. We plot $A^{\mathrm{p}}(s,-\tfrac{s}{4})\sin^2(\pi s)$ with the absolute values of real and imaginary parts in orange and blue respectively. The data for $s \leq 16$ is computed using \eqref{eq:1.2} (with $c\leq 10$) and for all integer $s$ using mass shifts and decay widths (with $c \leq 1000$). Faint vertical lines indicate energies at which a new threshold opens up. The gray dashed lines correspond to exponential suppression $\mathrm{e}^{-S_{\mathrm{tree}}}$ with $S_{\mathrm{tree}} = s \log(s) + t \log(-t) + u \log(-u) \approx 0.56 s$ and are plotted with two different constants to guide the eye. The data confirms exponential decay.} \end{figure} In Figure \ref{fig:fixed-angle-data} we plot an example numerical evaluation of $A^{\mathrm{p}}(s,t) \sin^2(\pi s)$ at $60^{\circ}$ scattering angle (translating to $t=-\tfrac{s}{4}$) for a range of energies $s$ on a logarithmic scale. The data spans roughly $8$ orders of magnitude. We plot a continuous curve up to $s \leq 16$ and also high-precision values at all integer $s$ (since the plot is logarithmic, the spikes indicate zeros of the amplitude). The latter are computed using mass shifts and decay widths. The gray dashed lines indicate the exponential decay $\mathrm{e}^{-\alpha' S_{\mathrm{tree}}}$ and are in perfect agreement with the numerical data. We reaffirm this result by extracting the exponent of the exponential decay for a couple of scattering angles in Figure~\ref{fig:ratios}. Details of these computations are given in Section~\ref{subsec:numerical}. For the imaginary part, this behavior was already verified in \cite{Eberhardt:2022zay}, where the coefficient of the exponential was also proposed, explaining the departure from a pure exponential in Figure~\ref{fig:fixed-angle-data}. Despite following the exponential envelope, the data has a jagged behavior, which is a result of receiving contributions from an infinite number of saddle points with the same exponential decay but different phases. We leave a more careful study of the high-energy behavior of the string amplitudes, where the integration contour $\Gamma$ is taken as a starting point, to future work. A lot of physical aspects of open-string amplitudes, including computing the cross-section, dominance of low partial-wave spins, and low-energy expansions, were already discussed in \cite{Eberhardt:2022zay}, to which we refer the reader for details.
\begin{figure} \centering \includegraphics[scale=1.2]{figures/ratios} \caption{\label{fig:ratios}Coefficient of the exponential suppression for two different scattering angles, corresponding to setting $t = -\tfrac{s}{2}$ and $t = -\tfrac{s}{3}$. After normalizing the amplitude by $\sin^2(\pi s) \sqrt{-8\pi t u /s}$, we plot the real part of its exponent in the units of the tree-level action $S_\mathrm{tree}$. The data for $s \lesssim 14.5$ is computed using \eqref{eq:1.2} with $c \leq 10$ and for all integer $s$ with $c \leq 1000$ using mass shifts and decay widths. The gray dashed line indicates the exponential suppression $\mathrm{e}^{-S_\mathrm{tree}}$ with which we find perfect agreement.} \end{figure} \medskip This paper is organized as follows. In Section~\ref{sec:review}, we review the definitions of one-loop open string amplitudes and how Riemann surface degenerations encode different singularities of the amplitudes. We advise readers familiar with the subject to skip directly ahead to Section~\ref{sec:summary}, where the main results of this paper are summarized, including the proposal for the integration contour, the Rademacher procedure, and the numerical calculations. The curious reader then hopefully wants to know how to derive these results, which we explain in the following three sections. In Section~\ref{sec:two-point function}, we study the warm-up example of the two-point function, which illustrates most of the ideas employed in the full computation in a simplified setting. In Section~\ref{sec:planar amplitude derivation}, we give details of the manipulations leading to the formula \eqref{eq:1.2} in the planar case and in Section~\ref{sec:non-planar} we treat the non-planar one. Computations of mass shifts and decay widths are given in Section~\ref{sec:mass-shifts}. Finally, we conclude with a list of future directions in Section~\ref{sec:conclusion}. This paper comes with a number of appendices. In Appendix~\ref{app:Delta A planar} we review the computation of the cusp contribution to the planar amplitude. In Appendix~\ref{app:convergence}, we discuss convergence of the Rademacher method. In Appendix~\ref{app:count number of solutions quadratic equation} we explain how to count the solutions to quadratic equations modulo prime powers, which is an ingredient in the computations of mass shifts. In Appendix~\ref{app:mass-shifts} we tabulate the results for the numerical evaluation of mass shifts and decay widths. \section{Review: One-loop open string amplitudes}\label{sec:review} \subsection{\label{subsec:basic amplitudes}Annulus and M\"obius strip topologies} In this work, we will consider one-loop open string four-point amplitudes in type I superstring theory. There are three possible diagrams for the scattering of four gluons, depicted in Figure~\ref{fig:open string diagrams}. The three basic cases are: the planar annulus diagram, where all four vertex operators are inserted on one boundary of the annulus, the (planar) M\"obius strip diagram, and the non-planar annulus diagram, where two vertex operators are on one boundary and the other two are on the other boundary of the annulus. Of course, all these diagrams also exist with the roles of the punctures $1$, $2$, $3$ and $4$ permuted. We should immediately mention that the non-planar annulus diagram with one vertex operator on one boundary and all three others on the other boundary vanishes identically.
The reason is that one has to trace over the Chan--Paton group factors and a single vertex operator insertion on one boundary leads to the factor $\Tr(t^a)=0$, where $t^a$ are the adjoint generators for $\SO(N)$ with $N=32$. \begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw (20:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (1.8,.51) {4}; \draw (-20:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (1.8,-.51) {3}; \draw (160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,.51) {1}; \draw (-160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,-.51) {2}; \end{scope} \begin{scope}[shift={(5,0)}] \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw (20:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (1.8,.51) {4}; \draw (-20:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (1.8,-.51) {3}; \draw (160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,.51) {1}; \draw (-160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,-.51) {2}; \fill[white] (-.6,.5) rectangle (.6,1.6); \fill[black!10!white] (-.62,.53) to (0,1.2) to[bend right=30] (-.62,1.375); \fill[black!10!white] (.62,.53) to (0,1.2) to[bend left=30] (.62,1.375); \draw[very thick, out=54.3, in=154.3, looseness=.8] (-.65,.47) to (.65,1.35); \fill[white] (0,1.2) circle (.1); \draw[very thick, out=125.7, in=25.7, looseness=.8] (.65,.47) to (-.65,1.35); \end{scope} \begin{scope}[shift={(10,0)}] \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw (20:.8) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (.4,.3) {3}; \draw (-20:.8) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (.4,-.3) {4}; \draw (160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,.51) {1}; \draw (-160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,-.51) {2}; \end{scope} \end{tikzpicture} \caption{\label{fig:open string diagrams}Three open string topologies at the one-loop level. \textbf{Left:} Planar annulus. \textbf{Middle:} M\"obius strip. \textbf{Right:} Non-planar annulus.} \end{figure} The color structure of the planar amplitudes is hence of the form $\Tr(t^{a_1} t^{a_2} t^{a_3} t^{a_4})$, whereas the color structure of the non-planar amplitudes is $\Tr(t^{a_1}t^{a_2})\Tr(t^{a_3}t^{a_4})$. Note that, relative to the M\"obius strip, the annulus contribution has an additional factor of $N$ coming from the Chan--Paton trace $\Tr(\varnothing) = N$ on the empty boundary. The integrands for these open string amplitudes are well-known \cite{Green:1981ya, Schwarz:1982jn}. 
Setting $\alpha'=1$, they are given respectively by \begin{subequations} \label{eq:integrands four point functions} \begin{align} \A_{\text{an}}^{\text{p}}& = 2^9 \pi^2 g_\text{s}^4 N \, t_8\, \Tr(t^{a_1}t^{a_2}t^{a_3}t^{a_4})\, \frac{(-i)}{32}\int_{\mathcal{M}_{1,4}^{\text{p,an}}} \!\!\!\!\! \d \tau\, \d z_1 \, \d z_2 \, \d z_3\, \prod_{j<i} \vartheta_1(z_{ij},\tau)^{-s_{ij}}\, , \label{eq:planar annulus four point function}\\ \A_{\text{M\"ob}}& =2^9 \pi^2 g_\text{s}^4 \, t_8\, \Tr(t^{a_1}t^{a_2}t^{a_3}t^{a_4})\, i\int_{\mathcal{M}_{1,4}^{\text{M\"ob}}} \!\!\! \d \tau\, \d z_1 \, \d z_2 \, \d z_3 \, \prod_{j<i} \vartheta_1(z_{ij},\tau)^{-s_{ij}} \ , \label{eq:Moebius strip four point function}\\ \A_{\text{an}}^{\text{n-p}}& =2^9 \pi^2 g_\text{s}^4 \, t_8 \Tr(t^{a_1}t^{a_2})\Tr(t^{a_3}t^{a_4})\, \frac{(-i)}{32}\int_{\mathcal{M}_{1,4}^{\text{n-p,an}}} \!\!\! \d \tau\, \d z_1 \, \d z_2 \, \d z_3 \, \prod_{j=1}^2\prod_{i=3}^4 \vartheta_4(z_{ij},\tau)^{-s_{ij}} \nonumber\\ &\hspace{7cm}\times \big(\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)\big)^{-s} \ . \label{eq:non-planar annulus four point function} \end{align} \end{subequations} The planar amplitude is a sum of the first two contributions. The amplitudes depend on the Mandelstam invariants $s_{ij} = -(p_i + p_j)^2$, where $p_i$ denotes the external momentum associated to the vertex operator at position $z_i$. Only two kinematic invariants, say $(s,t)$ with $s = s_{12}$ and $t = s_{23}$, are independent. We use the mostly-minus signature in which, e.g., the $s$-channel kinematics is described by $-s<t<0$. We denote $z_{ij} = z_i - z_j$. We use the following standard conventions for the Jacobi theta functions \begin{subequations} \begin{align} \vartheta_1(z,\tau)&=i \sum_{n \in \ZZ} (-1)^n \mathrm{e}^{2\pi i(n-\frac{1}{2}) z+\pi i (n-\frac{1}{2})^2 \tau}\ , \label{eq:definition theta1} \\ \vartheta_4(z,\tau)&= \sum_{n \in \ZZ} (-1)^n\mathrm{e}^{2\pi i n z+\pi i n^2 \tau}\ . \label{eq:definition theta4} \end{align} \end{subequations} Here, $g_\mathrm{s}$ is the string coupling constant. The prefactor $t_8$ depends on the polarizations and kinematics and will be spelled out below. The moduli spaces $\mathcal{M}_{1,4}$ are real four-dimensional and consist of the purely imaginary modular parameter $\tau$ (for the M\"obius strip we have $\tau \in \frac{1}{2}+i \RR$) and purely real $z_i$, which are appropriately ordered, i.e.,\ $0\le z_1 \le z_2 \le z_3 \le z_4=1$ in the planar case and $-1 \le z_1 \le z_2 \le 1$, $z_{21}\le 1$, $0 \le z_3 \le z_4=1$ in the non-planar case. The $\U(1)$ isometry group of the annulus and the M\"obius strip allows us to pick the location of one puncture, say $z_4$, arbitrarily and we will choose $z_4=1$ in the following. The factor of $-i$ in front of the expressions compensates for our choice to integrate over purely imaginary $\tau$, which supplies a further $i$ from the Jacobian. As written, the amplitudes are hence real. This immediately tells us that the above description of ${\cal M}_{1,4}$ cannot be quite correct: scattering amplitudes at loop level need imaginary parts for consistency with unitarity. We will temporarily ignore this issue and come back to it in Section~\ref{subsec:integration contour}, where we will define a prescription for the integration contour similar to the causal Feynman $i\varepsilon$ in quantum field theory. Geometrically, these formulas arise as follows.
For an open string worldsheet such as the annulus and the M\"obius strip, there is always a corresponding closed surface that double covers the original open surface. The covering is branched along the boundaries of the Riemann surface. It is given in both cases by the torus. It can be constructed by taking the orientation double cover of the original surface and gluing the boundaries. For example, the orientation double cover of the M\"obius strip is an annulus and the aforementioned torus is obtained by gluing its two boundaries. As an orientation double cover, the covering surface admits an orientation-reversing involution $\Phi$ such that the original surface is given by the respective quotient where one identifies $z \sim \Phi(z)$. In the case at hand, the torus is as usual realized by $\TT=\CC/\Lambda$ with $\Lambda=\langle 1,\tau \rangle$. Taking then $z \in \TT$, the orientation-reversing map can be chosen as $\Phi(z)=\overline{z}$. For this to yield a well-defined map on the torus, we need $\Lambda=\overline{\Lambda}$. This is true in two distinct cases: \begin{enumerate} \item $\tau \in i \RR_{>0}$. In this case, the resulting surface has two boundaries, namely the boundary corresponding to $z \in \RR+\ZZ \, \tau$ and the boundary corresponding to $z \in \RR+(\ZZ+\frac{1}{2})\, \tau$. The resulting geometry is an annulus. \item $\tau\in \frac{1}{2}+i \RR_{>0}$. In this case, there is only a single boundary given by the translates of the real line and we hence obtain a M\"obius strip as the quotient surface. \end{enumerate} In particular, vertex operators for the annulus can be either inserted on the real line, $z_j \in \RR$, or on the line $z_j \in \RR+\frac{\tau}{2}$. For the planar case, we will always choose the real line. The close connection to the torus explains the appearance of Jacobi theta functions in \eqref{eq:integrands four point functions}. In fact, the Green's function on the torus is given by \begin{equation} G(z_i,z_j)=\log\left|\frac{\vartheta_1(z_{ij},\tau)}{\vartheta_1'(\tau)}\right|^2-\frac{2\pi [\Im(z_{ij})]^2}{\Im(\tau)}\ . \end{equation} The non-holomorphic piece is necessary because of the constant function that is a zero mode of the Laplacian. Consequently, the Green's function satisfies \begin{equation} \Delta_{z_i}G(z_i,z_j)=2\pi \delta^2(z_i-z_j)-\frac{2\pi}{\Im(\tau)} \end{equation} and the right-hand side has a vanishing integral over the torus. Now the free boson propagator on the quotient surface is simply given by \begin{equation} \frac{1}{2}G(z_i,z_j)=\begin{cases} \log \frac{\vartheta_1(z_{ji},\tau)}{\vartheta_1'(\tau)} \qquad & \text{if $z_i$ and $z_j$ are on the same boundary}\ , \\ \log \frac{\vartheta_4(z_{ji},\tau)}{\vartheta_1'(\tau)} \qquad & \text{if $z_i$ and $z_j$ are on different boundaries}\ . \end{cases} \end{equation} This explains the various $\vartheta_i$-factors appearing in \eqref{eq:integrands four point functions}. It also explains why the planar annulus amplitude has exactly the same form as the M\"obius strip amplitude, except that $\tau$ is shifted to $\tau+\frac{1}{2}$ in the latter one. The relative overall normalization between the two diagrams requires a more careful discussion, see \cite{Green:1984ed}.
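As a quick numerical sanity check of these conventions (a sketch of ours; mpmath's \texttt{jtheta} is related to \eqref{eq:definition theta1} by $\vartheta_1(z,\tau)=\texttt{jtheta}(1,\pi z,\mathrm{e}^{i\pi\tau})$), one can verify that the non-holomorphic piece renders $G$ doubly periodic on the torus:
\begin{verbatim}
# Check double periodicity of the torus Green's function under z -> z + 1 and
# z -> z + tau; the -2 pi (Im z)^2 / Im(tau) term compensates the
# quasi-periodicity of theta_1.
import mpmath as mp

mp.mp.dps = 20

def theta1(z, tau):
    return mp.jtheta(1, mp.pi * z, mp.exp(1j * mp.pi * tau))

def green(z, tau):
    thp = mp.pi * mp.jtheta(1, 0, mp.exp(1j * mp.pi * tau), 1)  # theta_1'(0)
    return mp.log(abs(theta1(z, tau) / thp)**2) \
        - 2 * mp.pi * mp.im(z)**2 / mp.im(tau)

tau = mp.mpc(0.13, 0.42)
z = mp.mpc(0.31, 0.17)
print(green(z + 1, tau) - green(z, tau))    # ~ 0
print(green(z + tau, tau) - green(z, tau))  # ~ 0
\end{verbatim}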
These are the amplitudes for type I superstrings, which are traditionally derived in the RNS formalism, although the pure-spinor superstring is actually more effective in deriving these formulas \cite{Berkovits:2004px}. From the perspective of the RNS formalism, it is surprising that one ends up with a simple integral over the \emph{bosonic} moduli space of Riemann surfaces. In general, superstring amplitudes involve integrals over supermoduli space: a supermanifold of dimension $3g-3+n|2g-2+n$ (for $n$ NS-punctures). Hence for a four-point function, there are four fermionic integrals to be done. In general, there is no canonical way of performing integrals over the fermionic directions since there is no preferred choice of such ``fermionic directions''. Correspondingly, superstring theory usually does not give canonical integrands over the moduli space of Riemann surfaces. At genus one, one is however in luck, since there is a very natural choice for how to do the fermionic integrals. In the traditional formalisms, fermionic integrals are performed by inserting picture-changing operators and on a genus-one surface we need exactly $n$ of them. Hence we can consider the correlation function of picture-0 NS-sector vertex operators, which leads to a well-defined integrand on the reduced space of supermoduli: the moduli space of spin curves.\footnote{Strictly speaking this distribution of picture changing operators (PCOs) is not entirely consistent, but it seems that one can get away with it at genus 1.} One then has to sum over the non-trivial spin structures of the respective surfaces to finally reduce the amplitude to an integral over ordinary moduli space. Through various miraculous cancellations, one ends up with the simple integrands given in \eqref{eq:integrands four point functions}. In particular, the one-loop determinants of the worldsheet fields together with the additional contributions from the vertex operators (beyond the Green's function explained above) essentially cancel out in the end. They produce the coordinate-independent factor $t_8$. It is given by \begin{align}\label{eq:t8 def} t_8 = &\;\tr_v(F_1 F_2 F_3 F_4) + \tr_v(F_1 F_3 F_2 F_4) + \tr_v(F_1 F_2 F_4 F_3) \\ &- \tfrac{1}{4} \Big( \tr_v(F_1 F_2) \tr_v(F_3 F_4) + \tr_v(F_1 F_3) \tr_v(F_2 F_4) + \tr_v(F_1 F_4) \tr_v(F_2 F_3) \Big)\ ,\nonumber \end{align} where the linearized field strengths are $F_i^{\mu\nu} = p_i^\mu \epsilon_i^\nu - \epsilon_i^\mu p_i^\nu$ with polarization vectors $\epsilon_i^\mu$ and the traces $\tr_v$ are taken over the Lorentz indices. It equals the Yang--Mills numerator and is the unique permutation-invariant structure consistent with gauge invariance and supersymmetry. Consequently, the four-gluon amplitude at any genus is guaranteed to have this universal factor present. For convenience, we will strip off the prefactors from \eqref{eq:integrands four point functions} that do not affect the analysis and denote the resulting amplitudes with non-curly symbols. In particular, after simplifying the integrand, the planar annulus amplitude is given by \begin{equation} A_\text{an}^\text{p} = -i \int_{i\mathbb{R}_{\geq 0}} \!\!\! \mathrm{d}\tau \int_{0 \leq z_1 \leq z_2 \leq z_3 \leq 1}\!\!\!\!\!\!\!\!\!\!\!\!\! \mathrm{d}z_1\, \mathrm{d}z_2 \, \mathrm{d} z_3 \left( \frac{\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{\!-s} \!\left( \frac{\vartheta_1(z_{32},\tau)\vartheta_1(z_{41},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{\!-t} \end{equation} and similarly the M\"obius strip contribution $A_{\text{M\"ob}}$ is obtained by replacing the $\tau$ integration with $\frac{1}{2} + i\mathbb{R}_{\geq 0}$ and multiplying by $-1$.
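To illustrate how this expression is used in practice, the integrand can be evaluated pointwise on the (undeformed) annulus contour, where all theta ratios are positive real so that the powers are unambiguous. The helper below is ours and uses the same \texttt{jtheta} dictionary as in the previous sketch:
\begin{verbatim}
# Evaluate the planar annulus integrand at a sample point (gauge z4 = 1).
import mpmath as mp

def theta1(z, tau):
    return mp.jtheta(1, mp.pi * z, mp.exp(1j * mp.pi * tau))

def planar_integrand(z1, z2, z3, tau, s, t):
    z4 = 1
    den = theta1(z3 - z1, tau) * theta1(z4 - z2, tau)
    r_s = theta1(z2 - z1, tau) * theta1(z4 - z3, tau) / den
    r_t = theta1(z3 - z2, tau) * theta1(z4 - z1, tau) / den
    return r_s**(-s) * r_t**(-t)

print(planar_integrand(0.1, 0.35, 0.6, 0.8j, 2.0, -0.5))
\end{verbatim}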
The total planar amplitude is then \begin{equation} A^{\text{p}} = A_{\text{an}}^{\text{p}} + A_{\text{M\"ob}}\ . \end{equation} We also set $N=32$, which is required for this combination to be well-defined. Finally, the non-planar amplitude is given by \begin{align} A^{\text{n-p}}& = \frac{-i}{32}\int_{i\mathbb{R}_{\geq 0}} \!\!\! \d \tau \int_{\begin{subarray}{c} \, 0 \leq z_1 \leq z_2 \leq 1 \\ 0 \leq z_3 \leq 1 \end{subarray}} \d z_1 \, \d z_2 \, \d z_3 \, \prod_{j=1}^2\prod_{i=3}^4 \vartheta_4(z_{ij},\tau)^{-s_{ij}} \big(\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)\big)^{-s} . \end{align} All conventions agree with those used in \cite{Eberhardt:2022zay}. \begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw (160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,.51) {1}; \draw (-160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,-.51) {2}; \draw[very thick, fill=black!10!white] (2.2,0) circle (.7); \draw (2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,1) {4}; \draw (2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,-1) {3}; \end{scope} \begin{scope}[shift={(7,0)}] \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw (180:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,0) {1}; \draw[very thick, fill=black!10!white] (2.2,0) circle (.7); \draw (2.9,0) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (3.2,0) {3}; \draw (2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,1) {4}; \draw (2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,-1) {2}; \end{scope} \begin{scope}[shift={(0,-4)}] \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw[very thick, fill=black!10!white] (2.2,0) circle (.7); \draw (2.81,.35) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (3.15,.35) {3}; \draw (2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,1) {4}; \draw (2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,-1) {1}; \draw (2.81,-.35) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (3.15,-.35) {2}; \end{scope} \begin{scope}[shift={(7,-4)}] \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw (20:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (1.8,.51) {4}; \draw (-20:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (1.8,-.51) {3}; \draw (160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-1.8,.51) {1}; \draw (-160:1.5) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, 
outer sep=0pt] {}; \node at (-1.8,-.51) {2}; \fill[white] (-.6,.5) rectangle (.6,1.6); \fill[black!10!white] (-.65,.47) to (0,1.05) to (0,1.15) to (-.65,1.35); \fill[black!10!white] (.65,.47) to (0,1.05) to (0,1.15) to (.65,1.35); \draw[very thick, out=54.3, in=270, looseness=1, fill=black!10!white] (-.65,.47) to (0,1.1); \draw[very thick, out=90, in=25.7, looseness=1, fill=black!10!white] (0,1.1) to (-.65,1.35); \draw[very thick, out=125.7, in=270, looseness=1, fill=black!10!white] (.65,.47) to (0,1.1); \draw[very thick, out=90, in=154.3, looseness=1, fill=black!10!white] (0,1.1) to (.65,1.35); \end{scope} \end{tikzpicture} \caption{Four basic degenerations of the planar annulus amplitude. \textbf{Top left:} Massive pole exchange. \textbf{Top right:} Wave function renormalization. \textbf{Bottom left:} Tadpole. \textbf{Bottom right:} Non-separating degeneration.} \label{fig:planar annulus single degeneration} \end{figure} \subsection{Singularities of open strings} \label{subsec:singularities integrands} As it stands, the integrals in eq.~\eqref{eq:integrands four point functions} are all divergent. There are relatively benign divergences such as the collision of $z_1$ and $z_2$. We are already used to dealing with such singularities in the tree-level disk amplitude. They lead to the poles of the string amplitude corresponding to the exchange of massive string modes, e.g., the collision of $z_1$ and $z_2$ gives poles at $s \in \mathbb{Z}_{\geq 0}$. However, the story becomes more complicated at one loop because the amplitude will also have discontinuities which are reflected in more intricate singular behaviours. Before discussing them, we should recall some basic features of the moduli space of open Riemann surfaces. We will mostly just discuss the planar annulus case, since all other cases are similar. There are various degenerations of the surface that are familiar from the Deligne--Mumford compactification of the closed string moduli space. There is one additional type of boundary in the open string that appears when closing a hole without vertex operators, which we discuss below. The basic single degenerations of the planar annulus diagrams are depicted in Figure~\ref{fig:planar annulus single degeneration}. Of course, these also exist with punctures relabelled in all ways such that the original permutation is preserved along the outer boundary. Let us discuss the physical meaning and behaviour of the integrand near these degenerations in turn. Near degenerations, the worldsheet develops a very long ``neck'' connecting the two parts of the surface. This situation is conformally equivalent to a pinched cycle as drawn in Figure~\ref{fig:planar annulus single degeneration}. Hence, near the degeneration, the string worldsheet collapses to a worldline and one makes contact with the effective field-theory description, at least for that part of the diagram. To fully reduce to field-theory amplitudes, one has to completely degenerate the surface, i.e., pinch four compatible cycles. \subsubsection{Massive pole exchange} For example, one such degeneration corresponding to a field-theory bubble diagram is depicted in Figure~\ref{fig:planar annulus maximal degeneration}. Such Feynman diagrams correspond to a field theory with only cubic vertices and every cubic vertex is identified with a three-punctured disk in the string worldsheet. In fact, this relation can be made precise for the open string via Witten's cubic open string field theory.
\begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw[very thick, fill=black!10!white] (2.2,0) circle (.7); \draw[very thick, fill=black!10!white] (-2.2,0) circle (.7); \fill[white] (-.6,.5) rectangle (.6,1.6); \fill[black!10!white] (-.65,.47) to (0,1.05) to (0,1.15) to (-.65,1.35); \fill[black!10!white] (.65,.47) to (0,1.05) to (0,1.15) to (.65,1.35); \draw[very thick, out=54.3, in=270, looseness=1, fill=black!10!white] (-.65,.47) to (0,1.1); \draw[very thick, out=90, in=25.7, looseness=1, fill=black!10!white] (0,1.1) to (-.65,1.35); \draw[very thick, out=125.7, in=270, looseness=1, fill=black!10!white] (.65,.47) to (0,1.1); \draw[very thick, out=90, in=154.3, looseness=1, fill=black!10!white] (0,1.1) to (.65,1.35); \fill[white] (-.6,-.5) rectangle (.6,-1.6); \fill[black!10!white] (-.65,-.47) to (0,-1.05) to (0,-1.15) to (-.65,-1.35); \fill[black!10!white] (.65,-.47) to (0,-1.05) to (0,-1.15) to (.65,-1.35); \draw[very thick, out=-54.3, in=-270, looseness=1, fill=black!10!white] (-.65,-.47) to (0,-1.1); \draw[very thick, out=-90, in=-25.7, looseness=1, fill=black!10!white] (0,-1.1) to (-.65,-1.35); \draw[very thick, out=-125.7, in=-270, looseness=1, fill=black!10!white] (.65,-.47) to (0,-1.1); \draw[very thick, out=-90, in=-154.3, looseness=1, fill=black!10!white] (0,-1.1) to (.65,-1.35); \draw (-2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.4,1) {1}; \draw (-2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.4,-1) {2}; \draw (2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,1) {4}; \draw (2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,-1) {3}; \node at (4,0) {$\Longleftrightarrow$}; \end{scope} \begin{scope}[shift={(8,0)}] \draw[very thick] (-3,1) to (-2,0) to (-3,-1); \draw[very thick] (3,1) to (2,0) to (3,-1); \draw[very thick] (0,0) circle (1); \draw[very thick] (-1,0) to (-2,0); \draw[very thick] (1,0) to (2,0); \fill (-2,0) circle (.1); \fill (-1,0) circle (.1); \fill (1,0) circle (.1); \fill (2,0) circle (.1); \end{scope} \end{tikzpicture} \caption{The maximal degeneration corresponding to a bubble diagram in the $s$-channel.} \label{fig:planar annulus maximal degeneration} \end{figure} In this way, we obtain one field-theory-like propagator for every pinched cycle in the open string worldsheet and the singularities that various regions in moduli space produce are expected to reproduce the standard behaviors in field theory, as reviewed below. In particular, the first picture in Figure~\ref{fig:planar annulus single degeneration} corresponds to the exchange of a massive intermediate particle. It leads to poles in the amplitude for $s \in \ZZ_{\ge 0}$ since there are physical particles in string theory that can go on-shell in this case. In fact, the amplitude will have double poles at $s \in \ZZ_{\ge 0}$ since we can consider the double degeneration where also punctures $1$ and $2$ split off and produce a further propagator which can go on-shell. As we shall discuss below, the existence of double poles at $s \in \ZZ_{\ge 0}$ is directly tied to the mass renormalization of the intermediate massive states.
Since the mass of the massless particles is protected by gauge invariance, the double pole is actually absent for $s=0$. \subsubsection{Wave function renormalization} The second picture in Figure~\ref{fig:planar annulus single degeneration} corresponds to the one-loop wave-function renormalization of the disk four-point amplitude. Let us see this explicitly. The degeneration is obtained by changing the coordinates to \begin{equation} z_2=1-\lambda,\qquad z_3=1-\lambda x \end{equation} for $0<x<1$ and small $\lambda$. Then $x$ will become a cross-ratio on the disk that splits off from the annulus as depicted in the figure. We can see this directly at the level of the integrand. In this limit $\vartheta_1(z_{ij},\tau)\sim z_{ij} \vartheta_1'(0)$ for $i, j \in \{2,3,4\}$ and hence the amplitude becomes \begin{equation} A^\text{p}_\text{an}=-i \int_{i \RR_{\ge 0}} \mathrm{d}\tau \int_0^1 \mathrm{d}z_1 \int_0^\delta \mathrm{d}\lambda \int_0^1 \mathrm{d}x\ \lambda\ x^{-s} \, (1-x)^{-t}\ . \label{eq:wave function renormalization degeneration} \end{equation} This result is indeed proportional to the disk amplitude, which is obtained by integrating over $x$. We cut the integral over $\lambda$ off at some small positive $\delta$, which is where the approximation of the degeneration breaks down. For fixed $\tau$, $z_1$ and $x$, the integral over $\lambda$ converges at $\lambda=0$. This means that the integrand is non-singular as we approach the degeneration corresponding to the wave function renormalization, and thus the wave function renormalization actually vanishes. This is a consequence of supersymmetry: we are considering the scattering of massless gauge bosons which sit in a $\frac{1}{2}$-BPS multiplet of the spacetime supersymmetry algebra. This protects them from wave function renormalization. The upshot of this discussion is that we do not have to worry about the degenerations corresponding to wave function renormalizations: nothing special is happening there. \subsubsection{Tadpoles} In a similar vein, the third diagram in Figure~\ref{fig:planar annulus single degeneration} represents the tadpole diagram of string theory. Consistency of the theory requires the vanishing of this diagram. This is indeed the case, as one can see by a scaling similar to \eqref{eq:wave function renormalization degeneration}, now setting \begin{equation} z_1=1-\lambda,\qquad z_2=1-\lambda x_2,\qquad z_3=1-\lambda x_1 \end{equation} with $0<x_1<x_2<1$ being the two cross-ratios on the disk with five marked points. Thus the regions in moduli space corresponding to the tadpole also do not deserve special attention. \subsubsection{Non-separating degenerations} Finally, the most subtle degeneration is the non-separating degeneration depicted in the last picture of Figure~\ref{fig:planar annulus single degeneration}. The name indicates that the resulting nodal surface is still connected: it is topologically a six-punctured disk with two punctures glued together. Non-separating degenerations cause the appearance of discontinuities and branch cuts in the string amplitude. In fact, the singularity structure near such a discontinuity is more complicated than the Deligne--Mumford compactification of moduli space makes one suspect. In particular, the string integrand does not actually extend to a smooth function over all of the compactified moduli space. The depicted degeneration corresponds to $\tau\to 0$ with $\frac{z_{ij}}{\tau}$ fixed, meaning that all $z_{ij}\to 0$ at the same rate.
It will turn out that it is actually more convenient to just take $\tau\to 0$ and keep all $z_i$'s fixed. The string amplitude automatically singles out the correct degenerations. To investigate the behaviour of the integrand near this degeneration, one uses the modular covariance of the integrand: \begin{align} \left(\frac{\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{\!-s} &= \mathrm{e}^{-\pi i s \tilde{\tau} (z_{21}^2+z_{43}^2-z_{42}^2-z_{31}^2)} \left(\frac{\vartheta_1(z_{21}\tilde{\tau} ,\tilde{\tau})\vartheta_1(z_{43}\tilde{\tau},\tilde{\tau})}{\vartheta_1(z_{31}\tilde{\tau},\tilde{\tau})\vartheta_1(z_{42}\tilde{\tau},\tilde{\tau})}\right)^{-s} \\ &=\mathrm{e}^{2\pi i s \tilde{\tau} z_{32}z_{41}} \left(\frac{\vartheta_1(z_{21}\tilde{\tau} ,\tilde{\tau})\vartheta_1(z_{43}\tilde{\tau},\tilde{\tau})}{\vartheta_1(z_{31}\tilde{\tau},\tilde{\tau})\vartheta_1(z_{42}\tilde{\tau},\tilde{\tau})}\right)^{-s}\ , \end{align} where $\tilde{\tau}=-\frac{1}{\tau}$. Since $0<z_{ij}<1$ and $\tilde{\tau}$ has a large imaginary part, the $n=0$ term in the definition of the Jacobi theta function \eqref{eq:definition theta1} dominates in this limit, i.e., $\vartheta_1(z_{ij} \tilde{\tau},\tilde{\tau}) \sim i \mathrm{e}^{-\pi i \tilde{\tau}z_{ij}+\frac{1}{4}\pi i \tilde{\tau}}$. This yields \begin{equation} A_\text{an}^\text{p} =-i \int_{i/\delta}^{i \infty} \frac{\mathrm{d}\tilde{\tau}}{-\tilde{\tau}^2} \int \mathrm{d}z_1 \, \mathrm{d}z_2\, \mathrm{d}z_3\ \tilde{q}^{-s(1-z_{41})z_{32}-t z_{43} z_{21}}+\text{higher order in $\tilde{q}$}\ , \label{eq:annulus non-separating degeneration leading singularity} \end{equation} where $\tilde{q}=\mathrm{e}^{2\pi i \tilde{\tau}}$. We again cut the integral over $\tau$ (and hence over $\tilde{\tau}$) off at small $\delta$, where our approximations break down. One notices that, e.g., in the $s$-channel with large $s>0$ and small $t<0$, the exponent of $\tilde{q}$ can be negative and hence the integral over $\tilde{\tau}$ is generically divergent. For fixed $s$ and $t$, there is a fixed number of terms in the expansion of the $\vartheta_1$-function that yield divergent contributions as $\tilde{\tau} \to i \infty$. These are precisely the terms that can contribute to the imaginary part of the amplitude, since positive exponents of $\tilde{q}$ come with manifestly real coefficients and hence cannot contribute to $\Im A^{\text{p}}_{\text{an}}$. Below, we will discuss how to properly deal with the divergent contributions. For now, let us note that the imaginary part of the amplitude is much simpler than the real part because only finitely many terms in the $\tilde{q}$-expansion contribute to it. This is of course expected physically because the imaginary part of the amplitude can in principle be computed by the optical theorem from tree-level amplitudes, which was discussed in detail in \cite{Eberhardt:2022zay}. Let us also mention that this discussion immediately implies that the string integrand does \emph{not} extend to a well-defined smooth function on the Deligne--Mumford compactification of moduli space. Indeed, which term in the $\tilde{q}$-expansion in \eqref{eq:annulus non-separating degeneration leading singularity} dominates for $\tilde{q}\to 0$ depends on the choice of the other moduli $z_i$, and the limit is consequently not a smooth function of the $z_i$'s.
Contrary to what is often stated, this means that the string integrands have a more complicated singularity structure than that predicted by the Deligne--Mumford compactification, and to characterize them properly one would need to consider a more refined compactification of the moduli space. As far as we are aware, this has not been made precise in the literature. \subsubsection{Closed string pole} There is one final singularity of the integrand that is special to the open string. If one views the annulus as a cylinder, then one can make the cylinder very long and pinch the corresponding closed string cycle. For the non-planar diagram, this leads to two disks joined at a single node as illustrated in Figure~\ref{fig:closed string pole non-planar diagram}. \begin{figure} \centering \begin{tikzpicture} \draw[very thick, fill=black!10!white] (0,1.5) to (2.5,0) to (0,-1.5); \draw[very thick, fill=white] (0,0) ellipse (.5 and 1.5); \draw[very thick, fill=black!10!white] (5,1.5) to (2.5,0) to (5,-1.5); \draw[thick, fill=black!10!white, dashed] (5,0) ellipse (.5 and 1.5); \draw[very thick] (4.83,-1.42) arc(-110:110: .5 and 1.5); \draw (5.45,.7) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (5.8,.7) {3}; \draw (5.45,-.7) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (5.8,-.7) {4}; \draw (0.44,.7) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (0,.7) {1}; \draw (0.44,-.7) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (0,-.7) {2}; \end{tikzpicture} \caption{The degeneration leading to the closed string pole.} \label{fig:closed string pole non-planar diagram} \end{figure} This degeneration is actually of real codimension 2, in contrast with the codimension-1 cases encountered above, since it corresponds to a closed string degeneration and indeed each of the two disks has only one real modulus. Explicitly, this corresponds to taking $\tau \to i \infty$, where to leading order $\vartheta_4(z_{ij},\tau) \sim 1$ and $\vartheta_1(z_{ij},\tau)\sim-2 \mathrm{e}^{\frac{1}{4} \pi i \tau} \sin(\pi z_{ij})$. The amplitude becomes \begin{align} A^\text{n-p}\sim -i \int_{i/\delta}^{i \infty} \mathrm{d}\tau \int_{0\le z_1 \le z_2\le 1} \mathrm{d}z_1\, \mathrm{d}z_2 \int_0^1 \mathrm{d}z_3\ \left(4 \sin(\pi z_{21}) \sin(\pi z_{43})\right)^{-s} \mathrm{e}^{-\frac{1}{2} \pi i s \tau}\ . \end{align} The integral over $\tau$ is again divergent, but can in this case be defined in an ad hoc manner, e.g., by analytic continuation in $s$. The fact that the integrand only depends on $z_{21}$ and $z_{43}$ reflects the fact that this degeneration is really of codimension 2. We can hence fix $z_2=1$ and compute \begin{equation} A^\text{n-p}\sim -\frac{2}{\pi s} \left(\int_0^1 \mathrm{d}z\ \left(2 \sin(\pi z)\right)^{-s}\right)^{\!2}\ . \end{equation} The remaining integral corresponds to the disk two-point function with one closed-string vertex operator. Higher closed-string poles can be observed by keeping more terms in the $q$-expansion of the Jacobi theta functions. Since $\vartheta_4(z,\tau)$ has a $q$-expansion with half-integer powers of $q$, it leads to the following integrals over $\tau$: \begin{equation} -i \int_{i/\delta}^{i \infty} \mathrm{d}\tau \ \mathrm{e}^{-\frac{1}{2}\pi i s \tau+\pi i k \tau} \sim -\frac{2}{\pi(s-2k)} \end{equation} with integer $k$.
Thus, the amplitude has a closed-string pole at every even integer. The spacing of $2$, compared to the normalization of the open-string spectrum, corresponds to the usual relative normalization between the open- and closed-string spectra. For the planar amplitudes, this region in the moduli space also exists. However, the corresponding closed-string exchange happens at zero momentum and thus corresponds to a closed string tadpole. Explicitly, we have for the planar annulus amplitude for $\tau \to i \infty$ \begin{equation} A^\text{p}_\text{an} \sim -i\int_{i/\delta}^{i \infty} \mathrm{d}\tau \int \mathrm{d}z_1\, \mathrm{d}z_2 \, \mathrm{d}z_3\ \left(\frac{\sin(\pi z_{21}) \sin(\pi z_{43})}{\sin(\pi z_{31}) \sin(\pi z_{42})} \right)^{-s} \left(\frac{\sin(\pi z_{32}) \sin(\pi z_{41})}{\sin(\pi z_{31}) \sin(\pi z_{42})} \right)^{-t}\ . \label{eq:tau to infinity behaviour planar amplitude} \end{equation} Here, we again replaced the Jacobi theta functions by their leading behaviour as $\tau \to i\infty$. The integrand becomes completely $\tau$-independent and hence the integral over $\tau$ diverges. Nevertheless, we can observe that the M\"obius strip has a similar divergence as $\tau \to i\infty$, and since $\tau$ and $\tau+\frac{1}{2}$ become indistinguishable for large $\Im(\tau)$, the integrands approach exactly the same value, except for the overall minus sign that is present in \eqref{eq:Moebius strip four point function} compared to \eqref{eq:planar annulus four point function} when $N=32$. Hence the two divergences can be cancelled against each other in the full planar string amplitude. This is known as the Fischler--Susskind--Polchinski mechanism \cite{Fischler:1986ci, Polchinski:1994fq}. In the next section, we will see an elegant way of combining the annulus and M\"obius strip diagrams. \section{Summary of the results}\label{sec:summary} In this section, we explain all the conceptual ideas utilized in this paper without going too much into technical details. The full derivation of our formulas for the scattering amplitudes is given in Section~\ref{sec:planar amplitude derivation} and Section~\ref{sec:non-planar}, after a simpler warmup example that we discuss in Section~\ref{sec:two-point function}. Consequences for the mass-shifts of the string spectrum are explored in Section~\ref{sec:mass-shifts}. Here, we also demonstrate the usefulness of the Rademacher expansion by evaluating it explicitly in numerical examples. \subsection{Integration contour} \label{subsec:integration contour} After having discussed various singularities of the open string integrands, we move on to making sense of the integrals near the degenerations. One obvious idea for evaluating string amplitudes is to chop up the integration domain, compute the individual pieces in (typically disjoint) kinematic regions where they converge, and define the final result via analytic continuation back to a single kinematic point. This approach was used in the old literature on string amplitudes, see, e.g., \cite{DHoker:1993hvl,DHoker:1993vpp,DHoker:1994gnm}, but is difficult to make practical beyond genus zero. The simple reason is that in order to perform analytic continuation, one needs an analytic expression to begin with, and these are nearly impossible to find for an object as intricate as a string scattering amplitude. Alternatively, one might define the analytic continuation using dispersive methods, such as the Mandelstam representation.
But in contrast with quantum field theory, these are difficult to construct due to the exponential divergences at infinity. Likewise, the $\alpha'$-expansion is not an option because it would only provide an asymptotic series. Hence, we are led to the conclusion that in order to evaluate string amplitudes at finite $\alpha'$, one has to face the problem of constructing the correct integration contour. A physical picture for deciding how to construct the contour was proposed more recently by Witten \cite{Witten:2013pra}. He pointed out that the reason for the divergences is that we treat the worldsheet as a Euclidean 2D CFT, whereas the target space is Lorentzian. To get the correct causal structure of the amplitude, we would hence like to perform the computations on a Lorentzian worldsheet. While this is not possible globally on the moduli space without introducing spurious UV divergences, it is possible near the degenerations where long tubes or strips develop on the worldsheet, which can be endowed with a Lorentzian metric. For a codimension-$1$ degeneration, there is always a modulus $\tilde{q}$ that represents a good local coordinate on moduli space such that $\tilde{q}=0$ is the degenerate locus. For example, in the four degenerations depicted in Figure~\ref{fig:planar annulus single degeneration}, $\tilde{q}$ corresponds to: \begin{equation} z_{43},\quad z_{42},\quad z_{41},\quad \mathrm{and}\quad \mathrm{e}^{-\frac{2\pi i}{\tau}}, \end{equation} respectively. Here, $\tilde{q}$ measures the width of the neck connecting the two parts of the surface; conformally, $\tilde{q}=\mathrm{e}^{-t}$, where $t$ is the proper Euclidean length of the tube. Thus the Lorentzian analytic continuation corresponds to rotating into the upper half-plane after some large proper time $t_\ast \gg 1$, which maps to the swirl contour in the $\tilde{q}$-plane, see Figure~\ref{fig:Euclidean to Lorentzian contour}. \begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw[->, gray] (-0.5,0) -- (5,0); \draw[->, gray] (0,-0.5) -- (0,2); \draw[thick] (4.5,2) -- (4.5,1.5) -- (5,1.5); \node at (4.75,1.75) {$t$}; \draw[very thick, Maroon] (0,0) -- (3,0); \draw[->, very thick, Maroon] (0,0) -- (1.5,0); \draw[very thick, Maroon] (3,0) -- (3.3,2); \draw[->, very thick, Maroon] (3,0) -- (3.15,1); \fill (3,0) circle (0.07) node[below right] {$t_\ast \gg 1$}; \node (E) at (1,0.8) {\footnotesize Euclidean}; \node (L) at (1.5,1.5) {\footnotesize Lorentzian}; \draw[->] (E) to (1.8,0.2); \draw[->] (L) to (3,1.2); \end{scope} \begin{scope}[shift={(8,0)}] \draw[->, gray] (-0.5,0) -- (5,0); \draw[->, gray] (0,-0.5) -- (0,2); \draw[thick] (4.5,2) -- (4.5,1.5) -- (5,1.5); \node at (4.75,1.75) {$\tilde{q}$}; \draw[very thick, Maroon, variable=\t, domain=0:2400, samples=200, smooth] plot ({-\t}:{1-.0004*\t}); \draw[very thick, Maroon] (1,0) -- (4,0); \draw[->, very thick, Maroon] (4,0) -- (2.5,0); \fill (1,0) circle (0.07) node[below right] {$e^{-t_\ast}$}; \end{scope} \end{tikzpicture} \caption{\label{fig:Euclidean to Lorentzian contour}The integration contour in the neighborhood of the divisor at small $\tilde{q} = e^{-t}$. \textbf{Left:} In the $t$-plane, the Euclidean contour running along the real axis is rotated into a Lorentzian one after some large proper time $t_\ast$. The fact that the Lorentzian contour retains a small positive real part is equivalent to the Feynman $i\varepsilon$ prescription.
\textbf{Right:} The image of the contour in the variable $\tilde{q}$ involves an infinite spiral onto the divisor at $\tilde{q}=0$ with radius $e^{-t_\ast}$.} \end{figure} This necessitates the definition of a complexification of the open-string moduli space $\mathcal{M}_{1,n}$. We already discussed this complexification in Section~\ref{subsec:basic amplitudes}. It is induced from the underlying complex moduli space of the torus that appears as the orientation double cover. In practice, it just corresponds to allowing complex $z_i$'s and arbitrary $\tau$ in the upper half-plane $\HH$. The contour of integration is most interesting for the $\tau$-part of the integral, since it leads to the discontinuities of the amplitude. Here, we describe it for any $n$. As we approach the $\tau \to 0$ region, the local parameter is given by $\tilde{q}=\mathrm{e}^{-\frac{2\pi i}{\tau}}$. From the above discussion, the relevant contour hence takes the form \begin{equation} \frac{2\pi i}{\tau}= t_\ast +i t \end{equation} for some large real constant $t_\ast$ and $t \in \RR_{\ge 0}$ describing the Wick-rotated part of the contour, i.e., \begin{equation} \tau=\frac{2\pi i }{t_\ast +i t}\ . \end{equation} This contour maps to a semi-circle in the complex $\tau$-plane with radius $\frac{\pi}{t_\ast}$ and centered at $\frac{\pi i}{t_\ast}$, bulging to the right, see Figure~\ref{fig:tau contours}. The precise shape of the contour is of course not important, since we can always deform it. The only important feature is that we approach $\tau=0$ from the right and not from the top. The direction in which we approach $\tau=0$ matters since $\tau=0$ is an essential singularity of the integrand. \begin{figure} \centering \qquad \begin{tikzpicture}[scale=1] \draw[line width=0.6mm, Maroon] (0,0) arc (-90:90:1); \draw[line width=0.6mm, Maroon] (4,0) arc (-90:90:1); \draw[line width=0.6mm, Maroon] (0,2) -- (0,6) -- (4,6) -- (4,2); \draw[line width=0.6mm, Maroon, ->] (0,2) -- (0,4.5); \draw[line width=0.6mm, Maroon, ->] (0,6) -- (2,6); \draw[line width=0.6mm, Maroon, ->] (4,6) -- (4,4.3); \fill (0,0) circle (.1) node[below] {$\tau=0$}; \fill (4,0) circle (.1) node[below, align=center] {$\tau=\frac{1}{2}$ {\footnotesize in the planar case} \\ $\tau=2$ {\footnotesize in the non-planar case}}; \fill (2,7) circle (.1) node[above, align=center] {{\footnotesize closed string pole} \\ \vspace{0.7em}$\tau \to i \infty$}; \draw[thick] (5.5,8) -- (5.5,7.5) -- (6,7.5); \node at (5.75,7.75) {$\tau$}; \node at (4.5,4) {$\textcolor{Maroon}{\Gamma}$}; \end{tikzpicture} \caption{Contour of integration in the $\tau$-plane for open string one-loop amplitudes. The endpoint of the contour is $\tau = \frac{1}{2}$ in the planar case and $\tau = 2$ in the non-planar case.} \label{fig:tau contours} \end{figure} The other interesting region of the $\tau$-integration is $\tau \to i\infty$: in the planar case, we expect the Fischler--Susskind--Polchinski mechanism to cancel the respective singularities there, while in the non-planar case this is where the closed string pole originates. Let us start with the planar case. Recall that the M\"obius strip case is obtained by shifting $\tau \to \tau + \frac{1}{2}$ up to a minus sign compared to the annulus case, so the part of the contour close to $\tau = \frac{1}{2}$ looks the same as near $\tau = 0$, except for reversed orientation, see Figure~\ref{fig:tau contours}.
Most of the two contours cancel against each other, and the end result is simply that the annulus and M\"obius strip contours get connected, up to a contribution from $i\infty$. The resulting contour $\Gamma$ is displayed in Figure~\ref{fig:tau contours}. We should mention that there is a bit of choice involved in this contour. For example, we could also have declared that the M\"obius strip corresponds to $\tau \in \frac{3}{2}+i \RR_{\ge 0}$ (since the integrands are periodic in $\tau$) and hence the horizontal connection between the contours would have been longer. Another common definition is to take the principal value of the integral in $q = \mathrm{e}^{2\pi i \tau}$ that runs through the pole at $q=0$. This corresponds to putting no horizontal part in the contour. All these definitions differ by a multiple of the residue of the pole at infinity. More precisely, for the four-point amplitude, the principal value definition and the definition in terms of a closed contour differ by \begin{equation} \Delta A^\text{p} = \frac{i}{2} \int_{0 \leq z_1 \leq z_2 \leq z_3 \leq 1} \!\!\!\!\!\!\!\!\!\! \mathrm{d}z_1\, \mathrm{d}z_2\, \mathrm{d}z_3 \ \left(\frac{\sin(\pi z_{21}) \sin(\pi z_{43})}{\sin(\pi z_{31}) \sin(\pi z_{42})} \right)^{\!-s} \left(\frac{\sin(\pi z_{32}) \sin(\pi z_{41})}{\sin(\pi z_{31}) \sin(\pi z_{42})} \right)^{\!-t} , \label{eq:Delta A planar integral} \end{equation} as worked out in eq.~\eqref{eq:tau to infinity behaviour planar amplitude}. The remaining integral is the disk four-point function with an additional closed-string dilaton vertex operator at zero momentum inserted. This follows directly from the geometry of the degeneration: for large $\tau$, the hole of the annulus closes and is replaced by a puncture with no momentum inflow. The leading term comes from the massless level and the Lorentz index structure only allows for a scalar, i.e., the dilaton. The dilaton vertex operator at zero momentum is in fact equal to the action itself and hence an insertion of this operator simply renormalizes $\alpha'$. As one can check, the residue is indeed proportional to the $\alpha'$-derivative of the tree-level amplitude, explicitly \begin{equation} \Delta A^\text{p}=\frac{i}{(2\pi)^2}\left[\frac{\mathrm{d}}{\mathrm{d}s} \left(\frac{\Gamma(1-s)\Gamma(-t)}{\Gamma(1-s-t)}\right)+\frac{\mathrm{d}}{\mathrm{d}t} \left(\frac{\Gamma(-s)\Gamma(1-t)}{\Gamma(1-s-t)}\right)\right] \ .\label{eq:Delta A planar} \end{equation} We check this in Appendix~\ref{app:Delta A planar}. This fact is known as the soft-dilaton theorem in string theory \cite{Ademollo:1975pf,Shapiro:1975cz}. In practice, we will hence compute with the closed contour $\Gamma$ as in Figure~\ref{fig:tau contours}, but will then subtract the contribution from the closed string pole in the end in order to obtain the actual amplitude with correct reality properties. We will however mostly suppress this in the following discussion. Thus, the combined planar amplitude is given by \begin{align} A^\text{p} &= \Delta A^\text{p} -i \int_{\Gamma} \mathrm{d}\tau \!\! \int \mathrm{d}z_1 \, \mathrm{d}z_2 \, \mathrm{d} z_3 \left( \frac{\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{\!-s} \left( \frac{\vartheta_1(z_{32},\tau)\vartheta_1(z_{41},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{\!-t} . \label{eq:planar amplitude} \end{align} We did not spell out explicitly the appropriate contours for the $z_i$-integration. We will analyze them further below.
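To see in a toy model why this prescription produces sensible, $t_\ast$-independent answers, consider (schematically, with all prefactors suppressed; this toy computation is only an illustration) a single pinched cycle with Schwinger parameter $t$ and a state of mass $m$ flowing through it, i.e., the integral $\int \mathrm{d}t\ \mathrm{e}^{(s-m^2)t}$, which diverges on the Euclidean contour for $s>m^2$. On the contour of Figure~\ref{fig:Euclidean to Lorentzian contour}, running from $0$ to $t_\ast$ and then vertically to $t_\ast+i\infty$, one finds instead
\begin{equation}
\left(\int_0^{t_\ast}+\int_{t_\ast}^{t_\ast+i\infty}\right) \mathrm{d}t\ \mathrm{e}^{(s-m^2)t}=\frac{\mathrm{e}^{(s-m^2)t_\ast}-1}{s-m^2}-\frac{\mathrm{e}^{(s-m^2)t_\ast}}{s-m^2}=\frac{1}{m^2-s}\ ,
\end{equation}
where the boundary term at $t_\ast+i\infty$ was discarded using $s\to s+i\varepsilon$. The dependence on $t_\ast$ cancels, and the result is the Feynman propagator $\frac{1}{m^2-s-i\varepsilon}$, agreeing with the analytic continuation of the convergent Euclidean answer from the region $s<m^2$.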
A very similar contour can be defined for the non-planar case. In this case, the $\tau$-contour as defined via the $i \varepsilon$ prescription is the same as the one for the planar annulus. However, the integrand is quasi-periodic under $\tau \to \tau+2$: it only picks up the simple phase $\mathrm{e}^{-\pi i s}$. This means that we can compute the expression \begin{equation} (1-\mathrm{e}^{-\pi i s}) A^\text{n-p} \label{eq:non-planar pi i s prefactor} \end{equation} by subtracting from the original contour a contour that is shifted as $\tau \to \tau+2$. This cancels the infinite tail that runs horizontally to the left as $\tau=it_\ast -t$. The contour is then essentially identical to the planar contour, except that it ends at $\tau=2$ instead of $\tau=\frac{1}{2}$, see Figure~\ref{fig:tau contours}. \subsection{Rademacher contour} \label{subsec:Rademacher contour} The Rademacher contour provides a general method to compute contour integrals of modular objects. It was historically used to derive a convergent expansion for the partition numbers, which appear as the Fourier coefficients of the inverse Dedekind eta-function $\eta(\tau)^{-1}$, see \cite{RademacherParition}. Let us explain the basic method with the simple example $\eta(\tau)^{-24}$, which gives the bosonic open-string partition function and hence is interesting in its own right. We will consider the more complicated examples relevant for the open superstring amplitudes below. Suppose we want to compute \begin{equation} \int_{\Gamma} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}}\ , \end{equation} where the contour is identical to the one displayed in Figure~\ref{fig:tau contours} with the endpoint at $\tau=\frac{1}{2}$.\footnote{In the open bosonic string, we face the same problem as in the planar four-point function discussed above, namely that we should really take the principal value at the pole $\tau=i\infty$ in order to get a physical result. For the bosonic string, there is a double pole at $\tau=i\infty$ and hence a correction term as in \eqref{eq:Delta A planar} does not exist.
This makes the present computation unphysical.} \begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw[very thick, black!30!white] (0,0) arc (-90:90:4); \draw[very thick, black!30!white] (8,0) arc (-90:-270:4); \tikzmath{ int \p, \q; for \q in {2,...,20}{ for \p in {1,...,\q}{ if gcd(\p,\q) == 1 then { \f = 8*\p/\q; \r = 4/(\q*\q); { \draw[very thick, light-gray] (\f,\r) circle(\r); }; }; }; }; } \node at (0,-.4) {$0$}; \node at (8,-.4) {$1$}; \node at (4,-.4) {$\frac{1}{2}$}; \node at (2.67,-.4) {$\frac{1}{3}$}; \node at (5.33,-.4) {$\frac{2}{3}$}; \node at (2,-.4) {$\frac{1}{4}$}; \node at (6,-.4) {$\frac{3}{4}$}; \node at (1.6,-.4) {$\frac{1}{5}$}; \node at (3.2,-.4) {$\frac{2}{5}$}; \node at (4.8,-.4) {$\frac{3}{5}$}; \node at (6.4,-.4) {$\frac{4}{5}$}; \node at (1.33,-.4) {$\frac{1}{6}$}; \node at (6.67,-.4) {$\frac{5}{6}$}; \node at (1.14,-.4) {$\frac{1}{7}$}; \node at (2.285,-.4) {$\frac{2}{7}$}; \node at (3.43,-.4) {$\frac{3}{7}$}; \node at (4.57,-.4) {$\frac{4}{7}$}; \node at (5.715,-.4) {$\frac{5}{7}$}; \node at (6.86,-.4) {$\frac{6}{7}$}; \node at (0,4) {$C_{0/1}$}; \node at (4,0.9) {$C_{1/2}$}; \node at (8,4) {$C_{1/1}$}; \node at (2.67,0.4) {\scalebox{0.7}{$C_{1/3}$}}; \node at (5.33,0.4) {\scalebox{0.7}{$C_{2/3}$}}; \node at (2,1) {$\color{Maroon}\Gamma_2$}; \draw[ultra thick, Maroon] (0,0) arc (-90:-36.9:4); \draw[ultra thick, Maroon] (3.2,1.6) arc (143.1:-90:1); \draw[ultra thick, Maroon, ->] (4,2) -- (4.01,2); \end{scope} \end{tikzpicture} \caption{The Ford circles $C_{a/c}$ in the $\tau$ upper half-plane. The original contour of integration $\Gamma$ can be deformed to the second Rademacher contour $\Gamma_2$.} \label{fig:Ford circles} \end{figure} One deforms the contour in a series of steps as follows. Let us first recall the \emph{Farey sequence} $F_n$, which consists of all fractions $0<\frac{a}{c} \le 1$ such that $a$ and $c$ are coprime integers, $(a,c)=1$, and $c \le n$. By convention, we do not include 0, even though it is often included in the literature. The first few terms are \begin{subequations} \begin{align} F_1 &= ( \tfrac{1}{1} )\ ,\\ F_2 &= ( \tfrac{1}{2}, \tfrac{1}{1} )\ ,\\ F_3 &= ( \tfrac{1}{3}, \tfrac{1}{2}, \tfrac{2}{3}, \tfrac{1}{1} )\ ,\\ F_4 &= ( \tfrac{1}{4}, \tfrac{1}{3}, \tfrac{1}{2}, \tfrac{2}{3}, \tfrac{3}{4}, \tfrac{1}{1} )\ ,\\ F_5 &= ( \tfrac{1}{5}, \tfrac{1}{4}, \tfrac{1}{3}, \tfrac{2}{5}, \tfrac{1}{2},\tfrac{3}{5},\tfrac{2}{3}, \tfrac{3}{4}, \tfrac{4}{5}, \tfrac{1}{1} )\ .\label{eq:Farey5} \end{align} \end{subequations} It is a non-trivial fact that one can draw \emph{Ford circles} $C_{a/c}$ of radius $\frac{1}{2c^2}$ around the points $\tau = \frac{a}{c}+\frac{i}{2c^2}$ such that none of the circles overlap and two of them touch precisely if they are neighbors in the Farey sequence $F_n$ for some $n$. We now construct a series of Rademacher contours $\Gamma_n$ as follows. For $n=2$, we start with the contour that follows the arc of the Ford circle $C_{0/1}$ until the common point of $C_{0/1}$ and $C_{1/2}$ is reached, where we start following the arc of the Ford circle $C_{1/2}$ as depicted in Figure~\ref{fig:Ford circles}. The resulting contour is called $\Gamma_2$. It is equivalent to the original contour $\Gamma$ we described in Figure~\ref{fig:tau contours}. The contour $\Gamma_3$ is obtained from $\Gamma_2$ by the following modification: we follow the arc of $C_{0/1}$ only until it touches the circle $C_{1/3}$, which we then follow until we touch the circle $C_{1/2}$, which we then follow until $\tau=\frac{1}{2}$.
We iteratively modify the contour further in the same way so that the contour $\Gamma_n$ describes an arc of $C_{0/1}$ until we meet $C_{1/n}$. We then follow the arcs of all the Ford circles in the Farey sequence $F_n$ until we reach the endpoint at $\tau = \frac{1}{2}$. Hence all Ford circles $C_{a/c}$ with $\frac{a}{c} \le \frac{1}{2}$ and $c \le n$ appear in the contour $\Gamma_n$. For example, the contour $\Gamma_5$, following the circles listed in \eqref{eq:Farey5}, is depicted in Figure~\ref{fig:Rademacher contour 01}. The obvious idea is now to take the limiting contour $\Gamma_\infty$ that encircles every Ford circle $C_{a/c}$ with $0<\frac{a}{c} \le \frac{1}{2}$ precisely once and hence \begin{equation} \int_{\Gamma_\infty} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}} = \sum_{c=1}^\infty \sum_{\begin{subarray}{c} 1\le a\le \frac{c}{2} \\ (a,c)=1 \end{subarray} }\int_{C_{a/c}} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}}\ . \end{equation} The contour $\Gamma_\infty$ was already illustrated in Figure~\ref{fig:Rademacher}. Of course it is not obvious from our discussion that this procedure converges, but we will find that it does in all the cases of interest. This can be proved rigorously by estimating the contributions to the integral from the remaining small arcs on the contour $\Gamma_n$. We do this in Appendix~\ref{app:convergence}. In fact, this procedure always converges when the modular weight of the integrand is negative. \begin{figure} \centering \begin{tikzpicture} \begin{scope} \node at (10.2,4.9) {$\tau$}; \draw (10.4,4.7) -- (10.0,4.7) -- (10.0,5.1); \tikzmath{ int \p, \q; for \q in {2,...,20}{ for \p in {1,...,\q/2}{ if gcd(\p,\q) == 1 then { \f = 16*\p/\q; \r = 8/(\q*\q); { \draw[very thick, light-gray] (\f,\r) circle(\r); }; }; }; }; } \draw[ultra thick, Maroon] (0,0) arc (-90:-67.4:8); \draw[ultra thick, Maroon] (3.08,0.62) arc (112.6:12.7:0.32); \draw[ultra thick, Maroon] (3.52,0.39) arc (192.7:16.3:.5); \draw[ultra thick, Maroon] (4.48,0.64) arc (196.3:-28.1:0.888); \draw[ultra thick, Maroon] (6.12,0.47) arc (151.9:46.4:.32); \draw[ultra thick, Maroon] (6.62,0.55) arc (226.4:-90:2); \draw[ultra thick, Maroon, ->] (3.52,0.39) arc (192.7:100:.5); \draw[ultra thick, Maroon, ->] (4.48,0.64) arc (196.3:100:0.888); \draw[ultra thick, Maroon, ->] (6.62,0.55) arc (226.4:100:2); \node at (0,-.4) {$0$}; \node at (8,-.4) {$\frac{1}{2}$}; \node at (5.33,-.4) {$\frac{1}{3}$}; \node at (4,-.4) {$\frac{1}{4}$}; \node at (3.2,-.4) {$\frac{1}{5}$}; \node at (6.4,-.4) {$\frac{2}{5}$}; \node at (2.67,-.4) {$\frac{1}{6}$}; \node at (2.28,-.4) {$\frac{1}{7}$}; \node at (4.57,-.4) {$\frac{2}{7}$}; \node at (6.86,-.4) {$\frac{3}{7}$}; \node at (8,2) {$C_{1/2}$}; \node at (5.33,0.8) {$C_{1/3}$}; \node at (4,0.45) {\scalebox{0.8}{$C_{1/4}$}}; \node at (3.2,0.3) {\scalebox{0.6}{$C_{1/5}$}}; \node at (1.5,0.6) {$\color{Maroon}\Gamma_5$}; \end{scope} \end{tikzpicture} \caption{The fifth Rademacher contour $\Gamma_5$ obtained by following the Ford circles in the fifth Farey sequence \eqref{eq:Farey5} according to the rules given in the text.} \label{fig:Rademacher contour 01} \end{figure} So far, it may seem like this procedure has not gained us much. However, due to modular invariance, it is much simpler to compute the integral over the Ford circle than over the original contour. Indeed, consider the following modular transformation \begin{equation} \gamma(\tau)=\frac{a\tau+b}{c\tau+d}\ , \label{eq:modular transformation Ca/c} \end{equation} where $b$ and $d$ are chosen such that $ad-bc=1$. 
Then \begin{equation} \eta\left(\frac{a\tau+b}{c\tau+d}\right)^{24}=(c \tau+d)^{12}\, \eta(\tau)^{24}\ . \end{equation} We can use this modular transformation to change variables in the integral over $C_{a/c}$ and obtain \begin{equation} \int_{C_{a/c}} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}}=-\int_{\longrightarrow} \frac{\mathrm{d}\tau}{(c\tau+d)^{14} \, \eta(\tau)^{24}}\ . \end{equation} The additional two powers of $c\tau+d$ come from the Jacobian. Due to our judicious choice of the modular transformation, the new contour now runs horizontally, i.e., we mapped the circle touching the real axis at $\tau=\frac{a}{c}$ to the circle at $\tau=i\infty$. After the modular transformation, the contour starts at $-\infty+i$ and runs to $+\infty+i$. This is opposite to the natural orientation of the circle at $i\infty$ and leads to the additional minus sign. The new integrand is holomorphic in the upper half-plane, except for a singularity at $\tau=i\infty$. We may hence deform the contour from $i+\RR$ to $iL+\RR$ for arbitrarily large $L$. We will frequently denote such a horizontal contour by $\longrightarrow$. For large imaginary parts of $\tau$, it is then advantageous to use the Fourier expansion of $\eta(\tau)^{-24}$, which gives \begin{equation} \int_{C_{a/c}} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}}= -\int_{\longrightarrow} \frac{\mathrm{d}\tau}{(c\tau+d)^{14}} \left(\mathrm{e}^{-2\pi i \tau}+24+\mathcal{O}(\mathrm{e}^{2\pi i \tau})\right)\ . \end{equation} For large $L$, all the contributions coming from $\mathcal{O}(\mathrm{e}^{2\pi i \tau})$ are exponentially suppressed and drop out. Similarly, the constant term 24 does not contribute, since it is polynomially suppressed by the prefactor $(c \tau+d)^{-14}$. We thus conclude that we have the exact equality \begin{equation} \int_{C_{a/c}} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}}= -\int_{\longrightarrow} \frac{\mathrm{d}\tau}{(c\tau+d)^{14}} \mathrm{e}^{-2\pi i \tau}\ . \end{equation} The fact that we can reduce integrals of modular objects along the Ford circles back to integrals over elementary functions in this way is at the heart of the power of the Rademacher method. After we have argued for the vanishing of the higher Fourier terms in the expansion, we can deform the contour back to finite $L$. In fact, we would like to deform it to large \emph{negative} values of $L$, since the integrand is exponentially suppressed there. The only obstruction to this procedure is the 14${}^\mathrm{th}$ order pole at $\tau=-\frac{d}{c}$, and hence its residue gives the only contribution. Thus we get \begin{equation} \int_{C_{a/c}} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}}=2\pi i \Res_{\tau=-\frac{d}{c}} \frac{\mathrm{e}^{-2\pi i \tau}}{(c\tau+d)^{14}}=\frac{(2\pi)^{14} \mathrm{e}^{\frac{2\pi i d}{c}}}{13! \, c^{14}}\ , \end{equation} where we recall that $d$ was determined by $a$ through $ad \equiv 1 \bmod c$. Let us write $d=a^*$ for the inverse $\bmod\; c$. Thus we find for the bosonic open string partition function \begin{equation} Z_\text{open}=-i \int_{\Gamma_\infty} \frac{\mathrm{d}\tau}{\eta(\tau)^{24}}=\frac{-i (2\pi)^{14}}{13!} \sum_{c=1}^\infty \frac{1}{c^{14}}\sum_{\begin{subarray}{c} 1 \le a\le \frac{c}{2} \\ (a,c)=1 \end{subarray}}\mathrm{e}^{\frac{2\pi i a^*}{c}}\ . \end{equation} This is a very fast-converging infinite sum representation of the partition function and is trivial to evaluate numerically to very high accuracy.
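For concreteness, the following minimal Python sketch (ours; conventions as above, truncating the sum at $c\le c_{\max}$) evaluates this representation, computing $a^*$ as a modular inverse:
\begin{verbatim}
# Truncated Rademacher representation of the bosonic open-string
# partition function Z_open (conventions as in the text).
import cmath
from math import gcd, factorial, pi

def Z_open(cmax):
    total = 0j
    for c in range(2, cmax + 1):       # there is no 0 < a/c <= 1/2 for c = 1
        for a in range(1, c // 2 + 1):
            if gcd(a, c) == 1:
                astar = pow(a, -1, c)  # inverse of a mod c (Python >= 3.8)
                total += cmath.exp(2j * pi * astar / c) / c**14
    return -1j * (2 * pi)**14 / factorial(13) * total

print(Z_open(10))
print(Z_open(100))
\end{verbatim}
Since the tail of the truncated sum is bounded by $\sum_{c>c_{\max}} c^{-13}$, already $c_{\max}\sim 10$ yields many significant digits.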
\subsection{Results for the four-point amplitudes}\label{subsec:results} Using the same basic idea, one can also evaluate the integrals \begin{equation}\label{eq:Ap-Ford-circle} \int_{C_{a/c}}\hspace{-.3cm} \mathrm{d}\tau\, \mathrm{d}z_1 \, \mathrm{d}z_2 \, \mathrm{d} z_3\ \left( \frac{\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{-s} \left( \frac{\vartheta_1(z_{32},\tau)\vartheta_1(z_{41},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{-t} \end{equation} and similarly for the non-planar amplitude. Detailed derivations will be given in Section~\ref{sec:planar amplitude derivation} and Section~\ref{sec:non-planar}. Here, we simply present the results of this computation and highlight some of its features. \subsubsection{\label{subsec:planar-sle1}Planar amplitude in the \texorpdfstring{$s$}{s}-channel with \texorpdfstring{$s\le 1$}{s<=1}} Let us first explain the simplest case of interest: the planar amplitude $A^\text{p}$ in the $s$-channel for $0 < s \leq 1$ and $t<0$. The amplitude, and hence also our formula, behaves discontinuously as we cross the normal thresholds of the string, which are located at \begin{equation} s=(\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \end{equation} for integers $m_\mathrm{D},\, m_\U \in \ZZ_{\ge 0}$ corresponding to mass levels of string states in units where $\alpha'=1$. Each threshold corresponds to a new two-particle exchange becoming kinematically allowed. After the massless threshold, the next one appears at $s=1$ corresponding to $(m_\mathrm{D}, m_\U) = (0,1)$ or $(1,0)$. Hence the formula for the amplitude is going to be particularly simple when $s \leq 1$. Evaluating the integrals over the circle $C_{a/c}$ always leads to further sums which we label by integers $n_\L,\, n_\mathrm{D},\, n_\mathrm{R},\, n_\U$ satisfying the constraint $n_\L+n_\mathrm{D}+n_\mathrm{R}+n_\U=c-1$. These integers are associated with particular winding numbers, which we explain further below. We hence write \begin{equation} A^{\text{p}} = \Delta A^{\text{p}} + \sum_{c=1}^\infty \sum_{\begin{subarray}{c} 1 \le a \le \frac{c}{2} \\ (a,c)=1 \end{subarray}} \sum_{\begin{subarray}{c} n_\L,n_\mathrm{D},n_\mathrm{R},n_\U \ge 0 \\ n_\L+n_\mathrm{D}+n_\mathrm{R}+n_\U=c-1 \end{subarray}}A^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}_{a/c}\ , \label{eq:planar amplitude decomposition} \end{equation} where $\Delta A^\text{p}$ is given by \eqref{eq:Delta A planar}. The individual contributions $A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}$ are given by \begin{multline} A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}=-\frac{16\pi i \, \mathrm{e}^{-\pi i\sum_{a=\L,\mathrm{R},\, b=\mathrm{D},\U} \big[s \sum_{m=n_a+1}^{n_a+n_b}+t \sum_{m=n_b+1}^{n_a+n_b}\big] \st{\frac{md}{c}}}}{15c^5 \sqrt{stu}} \int_{P > 0} \hspace{-0.4cm} \d t_\L \, \d t_\mathrm{R} \\ \times P(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}\left( \frac{\Gamma(-t_\L)\Gamma(s+t_\L)}{\Gamma(s)}\begin{cases}\mathrm{e}^{2\pi i t_\L \st{\frac{d n_\L}{c}}} \;\;\mathrm{if}\ n_\L>0 \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} \;\; \mathrm{if}\ n_\L=0 \end{cases} \right) \big(\L \leftrightarrow \mathrm{R} \big)\ . \label{eq:planar four-point function s-channel s<1 Rademacher} \end{multline} Let us dissect this formula piece by piece. The following number-theoretic (discontinuous) \emph{sawtooth} function makes an appearance: \begin{equation} \st{x}=\begin{cases} x-\lfloor x \rfloor -\frac{1}{2} \quad&\mathrm{if}\quad x \not \in \ZZ\ , \\ 0 \quad&\mathrm{if}\quad x \in \ZZ\ .
\end{cases} \label{eq:st definition} \end{equation} As in the open-string partition function example discussed in Section~\ref{subsec:Rademacher contour}, $d$ denotes the inverse of $a$ mod $c$, i.e., $ad \equiv 1 \bmod c$. Here, $P(s,t,t_\L,t_\mathrm{R})$ is the following polynomial in $t_\L$ and $t_\mathrm{R}$, also known as the Baikov polynomial: \begin{equation} P(s,t,t_\L,t_\mathrm{R})=\frac{s^2 t^2 - 2 s^2 t t_\L + s^2 t_\L^2 - 2 s^2 t t_\mathrm{R} - 2 s^2 t_\L t_\mathrm{R} - 4 s t t_\L t_\mathrm{R} + s^2 t_\mathrm{R}^2}{4 s t (s + t)}\ . \end{equation} It measures the volume of the two-particle phase space: it is equal to $\ell_\perp^2$, where $\ell_\perp$ is the transverse part of the loop momentum that is orthogonal to all external momenta. The integration region is therefore specified simply by $P > 0$. Note that the two factors in the second line of this ``generalized Baikov representation'' are simply the tree-level Veneziano amplitudes decorated with extra phases. The left one depends only on ``left'' variables such as $t_\L$ and $n_\L$, while the other one only on the ``right'' variables. The reader may rightfully ask why the representation \eqref{eq:planar four-point function s-channel s<1 Rademacher} is better than the original integrals from \eqref{eq:integrands four point functions}, given that it still involves infinite sums over $a$, $c$, and $n_a$'s, as well as two integrals over $t_\L$ and $t_\mathrm{R}$. In the regime $s<1$, interest in \eqref{eq:planar four-point function s-channel s<1 Rademacher} is indeed mainly of a theoretical nature. While we believe that the representation converges, it does so very slowly, and the convergence is not absolute. Indeed, it is precisely on the cusp of convergence. Let us understand why. The factor $A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}$ depends on $a$, $n_\L,\,n_\mathrm{D},\,n_\mathrm{R}$ and $n_\U$ only via phases (disregarding for the moment the case distinction in \eqref{eq:planar four-point function s-channel s<1 Rademacher}). Thus naively analyzing convergence of the whole sum \eqref{eq:planar amplitude decomposition} would lead one to the rough estimate \begin{align} |A^{\text{p}}-\Delta A^{\text{p}}|&\le F(s,t) \sum_{c=1}^\infty \sum_{\begin{subarray}{c} 1 \le a \le \frac{c}{2} \\ (a,c)=1 \end{subarray}} \sum_{\begin{subarray}{c} n_\L,n_\mathrm{D},n_\mathrm{R},n_\U \ge 0 \\ n_\L+n_\mathrm{D}+n_\mathrm{R}+n_\U=c-1 \end{subarray}} \frac{1}{c^5}\ . \end{align} The latter sum is logarithmically divergent because there are $\mathcal{O}(c^3)$ choices for $n_\L,\,n_\mathrm{D},\,n_\mathrm{R}$ and $n_\U$ and $\mathcal{O}(c)$ choices for $a$ and thus we end up with a harmonic series. However, at least heuristically, we expect that the sum converges, albeit not absolutely. The reasoning is that for very large values of $c$, the phases in \eqref{eq:planar four-point function s-channel s<1 Rademacher} look completely random and thus we expect that even though there are $\mathcal{O}(c^4)$ choices for $(a,n_\L,n_\mathrm{D},n_\mathrm{R},n_\U)$, each sum only leads to a ${\mathcal O}(\sqrt{c})$ enhancement. This would lead to a convergent sum. Since the phases involve $s$ and $t$, convergence becomes worse for small values of $s$ and $t$ and completely breaks down if we try to approach $s \to 0$. This is the manifestation of the massless branch cut in our formula.
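This expected square-root cancellation is the statistics of a two-dimensional random walk and is easy to illustrate numerically. The following sketch (ours, using random phases rather than the actual phases of the amplitude) sums $N$ random unit phases:
\begin{verbatim}
import numpy as np

# |sum of N random unit phases| grows like sqrt(N), not like N
rng = np.random.default_rng(0)
for N in [10**3, 10**4, 10**5, 10**6]:
    z = np.exp(2j * np.pi * rng.random(N)).sum()
    print(N, abs(z) / np.sqrt(N))   # this ratio stays O(1)
\end{verbatim}
Applied to the $\mathcal{O}(c^4)$ terms at fixed $c$, this suggests an enhancement of order $\sqrt{c^4}=c^2$ rather than $c^4$, so that the summand behaves like $c^{-5}\cdot c^2=c^{-3}$ and the sum over $c$ converges.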
However, while convergence of \eqref{eq:planar four-point function s-channel s<1 Rademacher} for $s\le 1$ is slow, it becomes much faster for the generalization of the formula for $s \ge 1$ that we describe below. In this regime, it is actually very practical and allows one to evaluate the amplitude directly. Let us now explain the geometrical interpretation of $n_\L, \,n_\mathrm{D}, \,n_\mathrm{R}$ and $n_\U$ as winding numbers. For this, we should recall that we analytically continued $\tau$ inside the complexified moduli space, which we can identify with the moduli space of a torus (without invariance under modular transformations). For $\tau\sim\frac{a}{c}$, the relevant torus becomes very thin and the $z_i$'s are all on a line that winds around the long cycle of the torus $c$ times. For the cases $\frac{a}{c}=\frac{0}{1}$ and $\frac{a}{c}=\frac{1}{2}$, this is just the fact that the boundary of the annulus goes once around the annulus, while the boundary of the M\"obius strip winds around twice, see Figure~\ref{fig:open string diagrams}. Geometrically, we can hence think of a loop that winds $c$ times around itself with the four vertex operators on it. Every term $A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}$ corresponds to a consistent way of cutting this diagram in the $s$-channel. The integers $n_\L$, $n_\mathrm{D}$, $n_\mathrm{R}$ and $n_\U$ correspond to the number of windings that separate the four vertex operators. We displayed the four possibilities for $c=2$ in Figure~\ref{fig:windings c=2}. In general, there are $\frac{1}{6}c(c+1)(c+2)=\binom{c+2}{3}$ ways to distribute the four vertex operators like this, namely the number of non-negative integer solutions of $n_\L+n_\mathrm{D}+n_\mathrm{R}+n_\U=c-1$; see the short cross-check below. This also explains why the case $n_\L=0$ or $n_\mathrm{R}=0$ plays a special role in the formula \eqref{eq:planar four-point function s-channel s<1 Rademacher}. In this case, two vertex operators can collide, which manifests itself as poles at $s \in \ZZ_{>0}$ in the amplitude. In fact, the amplitude has double poles at every positive integer $s$, corresponding to the mass renormalization of the massive states. We can read off the mass shifts from the prefactors of these double poles. The only diagrams that contribute to these prefactors are those with $n_\L=n_\mathrm{R}=0$. Thus the mass shifts are much simpler physical quantities than the full amplitude and we also analyze them extensively in this paper. Our results for them are discussed in Section~\ref{subsec:results mass shifts}.
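As promised, here is a trivial enumeration sketch (ours, not part of the derivation) cross-checking the counting of winding configurations:
\begin{verbatim}
from itertools import product

# count tuples (nL, nD, nR, nU) >= 0 with nL + nD + nR + nU = c - 1
for c in range(1, 8):
    count = sum(1 for n in product(range(c), repeat=4) if sum(n) == c - 1)
    assert count == c * (c + 1) * (c + 2) // 6   # = binomial(c+2, 3)
    print(c, count)
\end{verbatim}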
\begin{figure} \centering \begin{tikzpicture} \begin{scope}[scale=.85] \draw[domain=90+360:90+720, smooth, variable=\x, very thick, gray, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw[domain=90:90+360, smooth, variable=\x, very thick, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw (170:{1.5+.15*cos(85-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (170:1.9) {1}; \draw (190:{1.5-.15*cos(95-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (190:1.1) {2}; \draw (-10:{1.5+.15*cos(-5-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-10:1.9) {3}; \draw (10:{1.5+.15*cos(5-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (10:1.9) {4}; \draw[dashed, very thick, Maroon] (0,-1.8) to (0,1.8); \node at (0,.75) {$n_\L=1$}; \node at (0,.25) {$n_\mathrm{D}=0$}; \node at (0,-.25) {$n_\mathrm{R}=0$}; \node at (0,-.75) {$n_\U=0$}; \end{scope} \begin{scope}[shift={(3.8,0)}, scale=.85] \draw[domain=90+360:90+720, smooth, variable=\x, very thick, gray, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw[domain=90:90+360, smooth, variable=\x, very thick, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw (170:{1.5+.15*cos(85-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (170:1.9) {1}; \draw (190:{1.5+.15*cos(95-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (190:1.9) {2}; \draw (-10:{1.5+.15*cos(-5-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-10:1.9) {3}; \draw (10:{1.5+.15*cos(5-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (10:1.9) {4}; \draw[dashed, very thick, Maroon] (0,-1.8) to (0,1.8); \node at (0,.75) {$n_\L=0$}; \node at (0,.25) {$n_\mathrm{D}=1$}; \node at (0,-.25) {$n_\mathrm{R}=0$}; \node at (0,-.75) {$n_\U=0$}; \end{scope} \begin{scope}[shift={(7.6,0)}, scale=.85] \draw[domain=90+360:90+720, smooth, variable=\x, very thick, gray, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw[domain=90:90+360, smooth, variable=\x, very thick, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw (170:{1.5-.15*cos(190-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (170:1.9) {1}; \draw (190:{1.5-.15*cos(170-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (190:1.9) {2}; \draw (-10:{1.5-.15*cos(-10-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-10:1.1) {3}; \draw (10:{1.5+.15*cos(10-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (10:1.9) {4}; \draw[dashed, very thick, Maroon] (0,-1.8) to (0,1.8); \node at (0,.75) {$n_\L=0$}; \node at (0,.25) {$n_\mathrm{D}=0$}; \node at (0,-.25) {$n_\mathrm{R}=1$}; \node at (0,-.75) {$n_\U=0$}; \end{scope} \begin{scope}[shift={(11.4,0)}, scale=.85] \draw[domain=90+360:90+720, smooth, variable=\x, very thick, gray, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw[domain=90:90+360, smooth, variable=\x, very thick, samples=100] plot ({\x}: {1.5+.15*cos(\x/2-45)}); \draw (170:{1.5-.15*cos(190-45)}) node[cross out, draw=black, 
ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (170:1.9) {1}; \draw (190:{1.5-.15*cos(170-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (190:1.9) {2}; \draw (-10:{1.5-.15*cos(-10-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-10:1.1) {3}; \draw (10:{1.5-.15*cos(10-45)}) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (10:1.1) {4}; \draw[dashed, very thick, Maroon] (0,-1.8) to (0,1.8); \node at (0,.75) {$n_\L=0$}; \node at (0,.25) {$n_\mathrm{D}=0$}; \node at (0,-.25) {$n_\mathrm{R}=0$}; \node at (0,-.75) {$n_\U=1$}; \end{scope} \end{tikzpicture} \caption{Four possibilities for windings of vertex operators for $c=2$ that correspond to all the generalized $s$-channel cuts. For example, in the first case $(n_\L, n_\mathrm{D}, n_\mathrm{R}, n_\U)=(1,0,0,0)$, because going from the puncture $1$ to $2$ requires one winding and no windings are necessary to travel between the other pairs of punctures.} \label{fig:windings c=2} \end{figure} \subsubsection{Higher values of \texorpdfstring{$s$}{s}} Equation \eqref{eq:planar four-point function s-channel s<1 Rademacher} can be systematically extended to higher values of $s$. In this case, we get contributions from each mass level that can be exchanged in the scattering process, labelled by the integers $m_\mathrm{D}$ and $m_\U$ mentioned above. The generalization of \eqref{eq:planar four-point function s-channel s<1 Rademacher} now reads \begin{align} A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}&=-\frac{16\pi i \, \mathrm{e}^{-\pi i\sum_{a=\L,\mathrm{R},\, b=\mathrm{D},\U} \big[s \sum_{m=n_a+1}^{n_a+n_b}+t \sum_{m=n_b+1}^{n_a+n_b}\big] \st{\frac{md}{c}}}}{15c^5 \sqrt{stu}} \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \nonumber\\ &\times \mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n_\mathrm{D}+m_\U n_\U)}\int_{P_{m_\mathrm{D},m_\U} > 0} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}\, Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R}) \nonumber\\ &\times \left( \frac{\Gamma(-t_\L)\Gamma(s+t_\L-m_\mathrm{D}-m_\U)}{\Gamma(s)}\begin{cases}\mathrm{e}^{2\pi i t_\L \st{\frac{d n_\L}{c}}} &\text{if}\;\; n_\L>0 \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\text{if}\;\; n_\L=0 \end{cases} \right) \big(\L \leftrightarrow \mathrm{R}\big)\, . \label{eq:planar four-point function s-channel} \end{align} The polynomials $P_{m_\mathrm{D},m_\U}$ still have the purely kinematical interpretation as $\ell_\perp^2$. They are explicitly given by \begin{align} P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})=-\frac{1}{4stu}\det \begin{bmatrix} 0 & s & u & \!\! m_\U-s-t_\L \\ s & 0 & t & t_\L-m_\mathrm{D} \\ u & t & 0 & m_\mathrm{D}-t_\mathrm{R} \\ m_\U-s-t_\L & t_\L-m_\mathrm{D} & m_\mathrm{D}-t_\mathrm{R} & 2m_\mathrm{D} \end{bmatrix}\ , \end{align} where the determinant arises kinematically as a certain Gram determinant. Moreover, the factors $Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})$ are polynomials in the four arguments. Physically, they appear from the sum over polarizations and degeneracies of the internal states. They are defined as follows.
Take \begin{align} Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t) &=[q_\L^{m_\L} q_\mathrm{D}^{m_\mathrm{D}}q_\mathrm{R}^{m_\mathrm{R}}q_\U^{m_\U}] \prod_{\ell=1}^\infty \prod_{a=\L,\mathrm{R}}(1-q^\ell q_a^{-1})^{-s}(1-q^\ell q_a)^{-s}\nonumber\\ &\qquad\times\prod_{a=\mathrm{D},\U}(1-q^\ell q_a^{-1})^{-t}(1-q^{\ell-1} q_a)^{-t}\nonumber\\ &\qquad\times \prod_{a=\L,\mathrm{R}}(1-q^{\ell}q_a^{-1} q_\mathrm{D}^{-1})^{-u} (1-q^{\ell-1}q_a q_\mathrm{D})^{-u}\ , \label{eq:QmL,mD,mR,mU definition} \end{align} where $q=q_\L q_\mathrm{D} q_\mathrm{R} q_\U$ and $[q_\L^{m_\L} q_\mathrm{D}^{m_\mathrm{D}}q_\mathrm{R}^{m_\mathrm{R}}q_\U^{m_\U}]$ denotes the coefficient of the relevant term in the series expansion around each $q_a=0$. We then have \begin{multline} Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R}) = \!\!\!\sum_{m_\L,m_\mathrm{R}=0}^{m_\mathrm{D}+m_\U} Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t) (-t_\L)_{m_\L}(-s-t_\L+m_\L+1)_{m_\mathrm{D}+m_\U-m_\L} \\ \times (-t_\mathrm{R})_{m_\mathrm{R}}(-s-t_\mathrm{R}+m_\mathrm{R}+1)_{m_\mathrm{D}+m_\U-m_\mathrm{R}}\ , \label{eq:definition Qm2,m4} \end{multline} where $(a)_n=a(a+1) \cdots (a+n-1)$ is the rising Pochhammer symbol. In practice, we computed all the polynomials $Q_{m_\mathrm{D},m_\U}$ with $(\sqrt{m_\mathrm{D}} + \sqrt{m_\U})^2 \leq s \leq 39$. Their number of terms grows rapidly. In the ancillary file \texttt{Q.txt} we included all the ones needed to reproduce our results up to $s \leq 16$. In the language of the Rademacher expansion, the sum over $m_\mathrm{D}$ and $m_\U$ corresponds to the sum over so-called polar terms in the modular integrand. When crossing one of the production thresholds, a new polar term arises and contributes to the integral. \subsubsection{Imaginary part} As explained in Section~\ref{subsec:integration contour} and \cite{Eberhardt:2022zay}, the imaginary part is much simpler to compute. To be precise, we have \begin{equation} \Im A^\text{p} = -\frac{1}{2i} \bigg( A_{0/1}^{0,0,0,0}\; -\hspace{-0.6cm}\sum_{\begin{subarray}{c} n_\L,n_\mathrm{D},n_\mathrm{R},n_\U \ge 0 \\ n_\L+n_\mathrm{D}+n_\mathrm{R}+n_\U=1 \end{subarray}} \hspace{-0.6cm} A_{1/2}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U} \bigg)\ . \label{eq:planar amplitude imaginary part} \end{equation} The first term corresponds to the $s$-channel cut of the annulus computed as a circle anchored at $\tau = \frac{0}{1}$ (in this edge case, we set $a^\ast=0$). The other four terms correspond to the four possible ways that we can cut the M\"obius strip in the $s$-channel, computed using a Ford circle at $\tau = \frac{1}{2}$. The overall minus sign comes about because of the orientation of the contour, and $\frac{1}{2i}$ is the normalization extracting the imaginary part, see \cite{Eberhardt:2022zay} for details. It is a very non-trivial identity that the imaginary part of eq.~\eqref{eq:planar amplitude decomposition} indeed recovers eq.~\eqref{eq:planar amplitude imaginary part}. While our derivation shows that this indeed holds, we do not have a direct proof of this fact (see, however, below for some special cases). \subsubsection{Other channels and the non-planar case} We also derive the corresponding formulas for the $u$-channel of the planar amplitude and for the $s$- and $u$-channel of the non-planar amplitude. The reader can find the results in these three cases in eq.~\eqref{eq:planar four point function u-channel Rademacher}, \eqref{eq:non-planar four point function s-channel Rademacher} and \eqref{eq:non-planar four point function u-channel Rademacher}.
The formulas are essentially all identical, except that the allowed range of $(n_\L,n_\mathrm{D},n_\mathrm{R},n_\U)$ is different and the phases that appear are slightly different. The $u$-channel formulas also do not exhibit poles because the corresponding vertex operators are not allowed to collide. We also remark again that the non-planar formula does not need a correction from the cusp, i.e.\ there is no $\Delta A^{\text{n-p}}$. For the non-planar amplitude, the fractions run over the range $0<\frac{a}{c} \le 2$, since the endpoint of the integration contour is different, see Figure~\ref{fig:tau contours}. We should also remember that our formula computes the combination $(1-\mathrm{e}^{-\pi i s}) A^{\text{n-p}}$, see eq.~\eqref{eq:non-planar pi i s prefactor}, so that the amplitude itself carries an additional prefactor $(1-\mathrm{e}^{-\pi i s})^{-1}$. Thus, naively, the amplitude has triple poles at every even integer $s$. One can however easily check that they cancel out of the final expression. \subsection{Results for the mass-shifts} \label{subsec:results mass shifts} As already mentioned, the above formulas allow us to compute mass-shifts in a convenient way. Recall that they originate from the worldsheet degenerations illustrated in Figure~\ref{fig:double pole degenerations}. Mass shifts are given by the coefficient of the double pole $\DRes_{s=s_\ast}$ in \eqref{eq:planar four-point function s-channel} at every positive integer $s_\ast$. Only the terms with $n_\L=n_\mathrm{R}=0$ contribute, and for them we have \begin{align} \DRes_{s = s_\ast} A_{a/c}^{0,n_\mathrm{D},0,n_\U}&=-\frac{16\pi i\, \mathrm{e}^{\frac{2\pi i s_\ast d}{c} n_\mathrm{D} n_\U}}{15c^5 \sqrt{-s_\ast t(s_\ast+t)}\, \Gamma(s_\ast)^2}\sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s_\ast \end{subarray}} \mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n_\mathrm{D}+m_\U n_\U)} \nonumber\\ &\qquad\times \int_{P_{m_\mathrm{D},m_\U} > 0} \hspace{-1cm} \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s_\ast,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}\, Q_{m_\mathrm{D},m_\U}(s_\ast,t,t_\L,t_\mathrm{R}) \nonumber\\ &\qquad\qquad\times (t_\L+1)_{s_\ast-m_\mathrm{D}-m_\U-1}(t_\mathrm{R}+1)_{s_\ast-m_\mathrm{D}-m_\U-1}\ . \label{eq:mass-shifts} \end{align} For every mass level, the integral over $t_\L$ and $t_\mathrm{R}$ can be explicitly evaluated and gives a polynomial of degree $s_\ast{-}1$ in $t$. In particular, the simplest mass-shift, at $s=1$, takes the simple form \begin{equation} \DRes_{s=1} A^\text{p}=\frac{i}{(2\pi)^2}-\frac{\pi^2 i}{210}\sum_{c=1}^\infty \frac{1}{c^5}\sum_{\begin{subarray}{c} 1 \le a \le \frac{c}{2} \\ (a,c)=1 \end{subarray}}\sum_{n=0}^{c-1} \mathrm{e}^{-\frac{2\pi i n(n+1)a^*}{c}}\ , \label{eq:mass-shift s=1} \end{equation} where $d=a^*$ again denotes the inverse mod $c$. Such sums are classical objects in number theory. In particular, the sum over $n$ is known as a \emph{Gauss sum} and can be explicitly evaluated in terms of the Jacobi symbol (a generalization of the Legendre symbol).
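Since the summand only involves roots of unity, \eqref{eq:mass-shift s=1} is also straightforward to evaluate numerically. A minimal sketch (ours, truncating at $c\le c_{\max}$; the $a$-sum is empty for $c=1$) reads:
\begin{verbatim}
import cmath
from math import gcd, pi

# truncated numerical evaluation of the s = 1 mass-shift formula
def dres_s1(cmax):
    total = 0j
    for c in range(2, cmax + 1):
        for a in range(1, c // 2 + 1):
            if gcd(a, c) == 1:
                astar = pow(a, -1, c)
                total += sum(cmath.exp(-2j * pi * n * (n + 1) * astar / c)
                             for n in range(c)) / c**5
    return 1j / (2 * pi)**2 - 1j * pi**2 / 210 * total

print(dres_s1(200))
\end{verbatim}
The truncation error falls off like a power of $c_{\max}$, and the result converges to the value $d_1+\frac{\pi^2}{448}i$ derived below.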
\begin{figure} \centering \begin{tikzpicture} \begin{scope} \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \draw[very thick, fill=black!10!white] (2.2,0) circle (.7); \draw[very thick, fill=black!10!white] (-2.2,0) circle (.7); \draw (-2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.4,1) {1}; \draw (-2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.4,-1) {2}; \draw (2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,1) {4}; \draw (2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,-1) {3}; \end{scope} \begin{scope}[shift={(8,0)}] \draw[very thick, fill=black!10!white] (0,0) circle (1.5); \draw[very thick, fill=white] (0,0) circle (.8); \fill[white] (-.6,.5) rectangle (.6,1.6); \fill[black!10!white] (-.62,.53) to (0,1.2) to[bend right=30] (-.62,1.375); \fill[black!10!white] (.62,.53) to (0,1.2) to[bend left=30] (.62,1.375); \draw[very thick, out=54.3, in=154.3, looseness=.8] (-.65,.47) to (.65,1.35); \fill[white] (0,1.2) circle (.1); \draw[very thick, out=125.7, in=25.7, looseness=.8] (.65,.47) to (-.65,1.35); \draw[very thick, fill=black!10!white] (2.2,0) circle (.7); \draw[very thick, fill=black!10!white] (-2.2,0) circle (.7); \draw (-2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.4,1) {1}; \draw (-2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (-2.4,-1) {2}; \draw (2.4,.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,1) {4}; \draw (2.4,-.67) node[cross out, draw=black, ultra thick, minimum size=5pt, inner sep=0pt, outer sep=0pt] {}; \node at (2.4,-1) {3}; \end{scope} \end{tikzpicture} \caption{Worldsheet configurations leading to double poles at every $s \in \mathbb{Z}_{>0}$ for the planar annulus and M\"obius strip topologies respectively.} \label{fig:double pole degenerations} \end{figure} One can also perform strong tests on these formulas by invoking some further number theory. There are two ways to compute the imaginary part of the mass-shift: either take the imaginary part directly in \eqref{eq:mass-shifts} or take the imaginary part in \eqref{eq:planar amplitude imaginary part}. For the mass-shifts we can check the equality of the two formulas directly. We show that \begin{equation} F_{s}^{m_\mathrm{D},m_\U}(c)=\frac{2}{c}\Im \bigg[i\sum_{\begin{subarray}{c} 1 \le a \le \frac{c}{2} \\ (a,c)=1 \end{subarray}}\sum_{\begin{subarray}{c} n_\mathrm{D},n_\U \ge 0 \\ n_\mathrm{D}+n_\U=c-1\end{subarray}} \mathrm{e}^{\frac{2\pi i d}{c} (sn_\mathrm{D} n_\U+m_\mathrm{D} n_\mathrm{D}+m_\U n_\U)}\bigg] \label{eq:definition F overview} \end{equation} is almost a multiplicative function, meaning that (suppressing other labels) \begin{equation} F(c)=F(p_1^{\ell_1}) \cdots F(p_k^{\ell_k}) \ ,\label{eq:multiplicative function} \end{equation} where $c=p_1^{\ell_1} \cdots p_k^{\ell_k}$ is the prime factorization of $c$. The identity \eqref{eq:multiplicative function} can fail for finitely many prime numbers, but they can be treated separately. 
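As a quick numerical sanity check of \eqref{eq:multiplicative function}, one can evaluate \eqref{eq:definition F overview} directly. A minimal Python sketch (the function mirrors $F_s^{m_\mathrm{D},m_\U}$; the implementation is ours) for $s=1$ and $m_\mathrm{D}=m_\U=0$:
\begin{verbatim}
# Direct evaluation of F from eq. (definition F overview); illustrative sketch.
import cmath
from math import gcd, pi

def F(c, s=1, mD=0, mU=0):
    tot = 0j
    for a in range(1, c // 2 + 1):
        if gcd(a, c) != 1:
            continue
        d = pow(a, -1, c)        # inverse of a mod c
        for nD in range(c):
            nU = c - 1 - nD      # enforces nD + nU = c - 1
            tot += cmath.exp(2j * pi * d * (s * nD * nU + mD * nD + mU * nU) / c)
    return 2 / c * (1j * tot).imag

print(F(15), F(3) * F(5))  # agree: multiplicativity holds here
print(F(6), F(2) * F(3))   # disagree: p = 2 is one of the exceptional primes
\end{verbatim}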
For example, in the simplest case for the mass-shift at $s=1$, we have \begin{equation} F_1^{0,0}(c)=\begin{cases} 0 \quad &\text{if }c=1\text{ or }c \ge 3\text{ contains a square}\ , \\ 2 \quad &\text{if }c=2\ , \\ 1 \quad &\text{if }c \ge 3\text{ is square-free}\ , \end{cases} \end{equation} where we recall that square-free means that the number has no repeated prime factor. Using properties of Euler products, one has \begin{equation} \frac{105}{\pi^4}=\frac{\zeta(4)}{\zeta(8)}=\sum_{c=1}^\infty \frac{1}{c^4} \ \begin{cases} 1 \quad &\text{if }c\text{ is square-free}\ , \\ 0 \quad &\text{if }c\text{ contains a square\ ,} \end{cases} \label{eq:series evaluation square free} \end{equation} which, after accounting for the special values of $F_1^{0,0}$ at $c=1$ and $c=2$, leads to the exact evaluation \begin{equation} \Im\eqref{eq:mass-shift s=1}=\frac{\pi^2}{448}\ . \end{equation} This result agrees with the one obtained by directly extracting the double pole from \eqref{eq:planar amplitude imaginary part}. Similarly, one can check that the corresponding equalities hold for higher values of $s$, where they involve more non-trivial $L$-functions that generalize the Riemann zeta-function appearing in \eqref{eq:series evaluation square free}. This computation provides a completely independent check of (parts of) our formula \eqref{eq:planar four-point function s-channel}. The generalization to higher $s = s_\ast$ is quite straightforward. In the ancillary file \texttt{DRes.txt}, we included the expressions in terms of Gauss sums needed to compute them up to $s \leq 16$. To highlight the main result, we find \begin{subequations} \begin{align} \DRes_{s=1} A^\text{p} &= d_1 + \frac{\pi^2}{448}i\, ,\\ \DRes_{s=2} A^\text{p} &= (1+t)\left(d_2 + \frac{17\pi^2}{7560}i \right)\, ,\\ \DRes_{s=3} A^\text{p} &= (1+t)(2+t)\left(d_3 + \frac{167341 \pi^2}{143700480}i\right)\, . \end{align} \end{subequations} The imaginary parts can be evaluated exactly and the real parts $d_{s_\ast}$ are constants we can compute with arbitrary precision, but did not manage to express in terms of known quantities: \begin{equation} d_1 \approx 8.36799 \cdot 10^{-5}\, , \qquad d_2 \approx -1.61091 \cdot 10^{-4}\, , \qquad d_3 \approx -9.05359 \cdot 10^{-6}\, . \end{equation} The neat factorization pattern into $(1+t)(2+t)\cdots (s_\ast - 1 + t)$ breaks down starting with $s_\ast = 4$ (but as noticed in \cite{Eberhardt:2022zay}, it holds approximately for the imaginary parts). The reason is that the spectrum no longer consists of a single supermultiplet and particles of various spins at the same mass level get different mass shifts and decay widths. Numerical evaluation of higher $\DRes_{s=s_\ast}$ up to $c \leq 1000$ is given in App.~\ref{app:mass-shifts}. Since the mass shifts are easier to evaluate, we can also use them to explore the behaviour of the amplitude at high energies. As they control the value of the amplitude at all integer values of $s$, they give an approximate idea of the high-energy behaviour. However, as we shall see, the real part of the amplitude oscillates and thus only knowing integer values can be somewhat misleading. \subsection{Numerical computations and convergence}\label{subsec:numerical} Let us explain how to use the Rademacher formula in practical computations and summarize our observations on the convergence in $c$. To take specific examples, we will discuss the type of manipulations that went into producing the plots shown in Section~\ref{sec:introduction}.
The first step is to simplify the integration domain, which can be done by a change of variables from $(t_\L,t_\mathrm{R})$ to $(x,y)$ as follows: \begin{equation} t_{\L/\mathrm{R}}=\frac{\sqrt{\Delta_{m_\mathrm{D},m_\U}}}{2\sqrt{s}}(\sqrt{-u}x \pm \sqrt{-t} y)+\frac{1}{2}(m_\mathrm{D}+m_\U-s)\ , \end{equation} where we introduced \begin{equation} \Delta_{m_\mathrm{D},m_\U}(s) =\left[s-(\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2\right]\left[s-(\sqrt{m_\mathrm{D}}-\sqrt{m_\U})^2\right]\ . \end{equation} This change leads to the Jacobian $\sqrt{tu}\, \Delta_{m_\mathrm{D},m_\U}/(2s)$. In addition, the polynomial $P_{m_\mathrm{D},m_\U}$ simplifies to \begin{equation} P_{m_\mathrm{D},m_\U} = \frac{\Delta_{m_\mathrm{D},m_\U}(s)}{4s} (1-x^2-y^2)\, , \end{equation} and hence the overall powers of $\Delta_{m_\mathrm{D},m_\U}$ can be pulled out of the integral. We thus obtain the simplified formula for \eqref{eq:planar four point function Aa/c n1,n2,n3 final result}: \begin{align} A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}&=-\frac{\pi i \, \mathrm{e}^{-\pi i\sum_{a=\L,\mathrm{R},\, b=\mathrm{D},\U} \big[s \sum_{m=n_a+1}^{n_a+n_b}+t \sum_{m=n_b+1}^{n_a+n_b}\big] \st{\frac{md}{c}}}}{60 c^5 s^4 \Gamma^2(s) \sin^2(\pi s)} \!\!\!\!\sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \!\!\!\! \Delta_{m_\mathrm{D},m_\U}^{\frac{7}{2}}(s) \nonumber\\ &\times \mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n_\mathrm{D}+m_\U n_\U)}\int_{\mathbb{D}} \d x \, \d y\ (1{-}x^2{-}y^2)^{\frac{5}{2}}\, Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R}) \nonumber\\ &\times \!\left(\! \Gamma(-t_\L)\Gamma(s{+}t_\L{-}m_\mathrm{D}{-}m_\U) \begin{cases}\mathrm{e}^{2\pi i t_\L \st{\frac{d n_\L}{c}}} \sin(\pi s) &\!\text{if}\; n_\L>0 \\ \sin(\pi(s+t_\L)) &\!\text{if}\; n_\L=0 \end{cases} \right) \! \big(\L \leftrightarrow \mathrm{R}\big)\ . \end{align} The integration domain $\mathbb{D}$ is the unit disk, $x^2 + y^2 < 1$. Written this way, every integral in the sum is convergent and the only singularities come from the explicit sine function in front. This is the reason why, in all computations, we multiply by $\sin^2(\pi s)$ in order to remove these divergences from the plots. As mentioned before, these double poles at every positive integer $s$ are associated with worldsheet degenerations illustrated in Figure~\ref{fig:double pole degenerations} and they come from the terms $n_\L = 0$ and $n_\mathrm{R}=0$ where two punctures can collide. Analogous formulas can be derived in other channels and the non-planar case. In the forward limit, $t=0$, the expression undergoes some simplifications because $t_\L = t_\mathrm{R}$ and consequently the variable $y$ can be integrated out.
The result is \begin{align} A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}\Big|_{t=0}\!\!\!&=\!-\frac{i \pi^2 \mathrm{e}^{-\pi i s\sum_{a=\L,\mathrm{R},\, b=\mathrm{D},\U} \sum_{m=n_a+1}^{n_a+n_b} \st{\frac{md}{c}}}}{192 c^5 s^4 \Gamma^2(s) \sin^2(\pi s)} \hspace{-0.8cm} \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \hspace{-0.8cm} \Delta_{m_\mathrm{D},m_\U}^{\frac{7}{2}}(s)\, \mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n_\mathrm{D}+m_\U n_\U)}\nonumber\\ &\times \int_{-1}^{1} \d x\ (1{-}x^2)^{3}\, Q_{m_\mathrm{D},m_\U}(s,0,t_\L,t_\L)\, \Gamma^2(-t_\L)\, \Gamma^2(s{+}t_\L{-}m_\mathrm{D}{-}m_\U) \nonumber\\ &\times \left( \begin{cases}\mathrm{e}^{2\pi i t_\L \st{\frac{d n_\L}{c}}} \sin(\pi s) &\text{if}\quad n_\L>0 \\ \sin(\pi(s+t_\L)) &\text{if}\quad n_\L=0 \end{cases} \right) \big(\L \leftrightarrow \mathrm{R}\big)\ . \end{align} In practice, we perform the sums over the winding numbers $n_\L$, $n_\mathrm{D}$, $n_\mathrm{R}$, $n_\U$ and fractions $\frac{a}{c}$ within the integrand. Notice that $a$ never appears explicitly, so the sum can be expressed as a sum over $d$, running over the range $\{1,2,\dots,\lfloor\frac{c}{2}\rfloor\}^*$, where the star $*$ denotes the inverse mod $c$. To perform a computation in a finite amount of time, we need to truncate the sum in \eqref{eq:planar amplitude decomposition} at some $c$. As highlighted before, due to the oscillating terms in the sums, it is difficult to accurately estimate the truncation errors analytically. As an alternative, we can fit the dependence on $c$ and extrapolate the data to $c \to \infty$, thus obtaining some error bars on the amplitude computed using the Rademacher method. Let us first use fitting to get an estimate on the rate of convergence of the Rademacher method. This can be done quite reliably for the imaginary part, since in that case we can compute the exact value with arbitrary precision using \eqref{eq:planar amplitude imaginary part}. We then take the imaginary part of \eqref{eq:planar amplitude decomposition} computed up to $c \leq c_\ast$ and fit the result to the simple ansatz \begin{equation} \eqref{eq:planar amplitude imaginary part} - \alpha\, c_\ast^{-\beta}\, .\label{eq:fit} \end{equation} The exponent $\beta$ in principle depends on the kinematics. Note that it corresponds to the convergence of partial sums, not individual terms in the $c$-sum. Any positive $\beta$ indicates convergence. The ``random phase'' model explained in Section~\ref{subsec:planar-sle1} corresponds to $\beta \approx 2$. In Section~\ref{sec:mass-shifts}, we will prove that for positive integers $s$, we have $\beta = 3$. In Appendix~\ref{app:convergence}, we further argue that $\beta > 0$ for every $s>0$. The goal of the following discussion is to extract $\beta$ directly from the data. We focus on the forward limit case, $t=0$, corresponding to the amplitude plotted in Figure~\ref{fig:Ap-forward}. After taking a logarithm, \eqref{eq:fit} can be fitted using linear regression. In practice, it is difficult to control the systematic errors coming from the fact that we might not have reached the asymptotic regime in $c$. We have access to data with $c_\ast \leq c_\mathrm{max}$, where $c_\mathrm{max}=68$ for data points with $s \leq 1$ and $c_\mathrm{max}=40$ for those with $s \leq 12$. It is beneficial to drop multiple data points at low $c_\ast$, but not so many as to degrade the statistics.
To find a balance, we scan over multiple cutoffs $c_\mathrm{min}$ and select the one for which fitting only the data $c_\mathrm{min} \leq c_\ast \leq c_\mathrm{max}$ gives the most accurate fit, quantified by the highest value of the adjusted coefficient of determination $\bar{R}^2$. We also discard all unreliable fits with $\bar{R}^2 < 0.99$. The resulting $\beta$'s together with the error bars are plotted in Figure~\ref{fig:ImA-fit}. \begin{figure} \centering \includegraphics[scale=1.2]{figures/ImA-fit} \caption{\label{fig:ImA-fit}Fitted values of the exponent $\beta$ in \eqref{eq:fit} together with their standard deviation error bars. The gray dashed line corresponds to $\beta=3$ expected at positive integers $s$.} \end{figure} As anticipated, at positive integers $s$, convergence reaches the value $\beta \approx 3$ predicted by the Gauss sum formula. We observe that for $s$ just above an integer, the convergence rate drops drastically. As expected, when $s \to 0$, the value $\beta \approx 0$ is reached, indicating poorer and poorer convergence due to the lack of cancellations between terms in the Rademacher expansion. Across all energies, the results are consistent with $\beta > 0$. In order to illustrate why the jumps in convergence happen across integers, in Figure~\ref{fig:convergence-jump} we plot $-\alpha c_\ast^{-\beta}$ (the difference between truncated and exact values) for two values: $s=1$ and $s=1.1$. The former reaches the asymptotic behavior fairly soon. For example, keeping the data points up to $c_{\mathrm{max}} = 40$ leads to the fit $\beta = 3.07 \pm 0.09$, while taking the extended set $c_{\mathrm{max}} = 190$ gives $\beta = 3.013 \pm 0.012$. On the other hand, for $s=1.1$, the data set $c_{\mathrm{max}} = 40$ gives rise to the fit $\beta = 0.61 \pm 0.03$, while the set $c_{\mathrm{max}} = 190$ gives $\beta = 0.911 \pm 0.002$. The two values disagree, indicating the presence of a systematic error: the fact that for a given $c_{\mathrm{max}}$ the asymptotic regime might not have been reached. The qualitative difference between $s=1$ and $s=1.1$ can be observed in Figure~\ref{fig:convergence-jump}: $s=1.1$ develops a hump around $c_\ast \approx 20$ and settles down to its power-law behavior only for much larger values of $c_\ast$. Due to this feature, the error bars on the data points right above each $\mathbb{Z}_{>0}$ in Figure~\ref{fig:ImA-fit} are underestimated. \begin{figure} \centering \includegraphics[scale=1.2]{figures/convergence-jump} \caption{\label{fig:convergence-jump}Two examples of the difference between $c_\ast$-truncated and exact values of $\Im A^\text{p}(s,0)$ as a function of $c_\ast$ for $s=1$ and $s=1.1$, illustrating a drastic change in convergence rates visible in Figure~\ref{fig:ImA-fit}.} \end{figure} Finally, we can analyze the convergence of the full amplitude, which is intrinsically more difficult, because we do not a priori know the exact value. We perform the same analysis as above, except using a $3$-parameter non-linear fit: \begin{equation}\label{eq:fit2} \gamma - \alpha c_\ast^{-\beta}\, . \end{equation} Here, $\gamma$ is the $c_\ast \to \infty$ extrapolated value of the amplitude. This quantity, together with its error bars, is plotted in Figure~\ref{fig:Ap-forward}. In Figure~\ref{fig:convergence} we plot the exponents $\beta$ obtained by fitting the real part of the amplitude. Overall, the uncertainties become much larger due to the more complicated fitting function.
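Concretely, the procedure can be summarized in the following schematic Python snippet, where for illustration we replaced the actual data (the $c_\ast$-truncated Rademacher sums) by synthetic values of the same shape, so all numbers below are illustrative:
\begin{verbatim}
# Schematic of the extrapolation fit gamma - alpha * c^(-beta) of eq. (fit2),
# scanning over c_min and keeping the window with the best adjusted R^2.
import numpy as np
from scipy.optimize import curve_fit

def model(c, gamma, alpha, beta):
    return gamma - alpha * c ** (-beta)

rng = np.random.default_rng(0)
cs = np.arange(3.0, 41.0)                  # c_* = 3, ..., c_max = 40
vals = model(cs, 0.5, 2.0, 1.3) + 1e-4 * rng.standard_normal(cs.size)

best = None
for c_min in cs[:-8]:                      # keep at least ~8 data points
    x, y = cs[cs >= c_min], vals[cs >= c_min]
    popt, _ = curve_fit(model, x, y, p0=(y[-1], 1.0, 1.0), maxfev=10_000)
    ss_res = np.sum((y - model(x, *popt)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    n, p = len(x), 3
    r2adj = 1 - (ss_res / ss_tot) * (n - 1) / (n - p - 1)
    if r2adj >= 0.99 and (best is None or r2adj > best[0]):
        best = (r2adj, popt)

print(best)  # (adjusted R^2, [gamma, alpha, beta]); beta ~ 1.3 for mock data
\end{verbatim}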
We employ the same procedure as before, except in addition to filtering out by $\bar{R}^2$, we also discard data points for which the relative error on the central value $\gamma$ is bigger than $10\%$. Once again, we observe that $\beta \approx 0$ as $s \to 0$. For larger $s$, the convergence rate $\beta$ stabilizes to a roughly constant value. For a range of values, it is consistent with the ``random phase'' model that would correspond to $\beta = 2$. Note that systematic errors, just as in the case of Figure~\ref{fig:ImA-fit}, are not taken into account. In the case of Figures~\ref{fig:fixed-angle-data} and \ref{fig:ratios}, no extrapolation to $c_\ast \to \infty$ is used. All the points are plotted with $c_\ast = 10$, with a subset at slightly higher $c_\ast = 16$, and integer values of $s$ with $c_\ast = 1000$. In practice, we found that the result converges at the percent level already around $c_\ast \approx 10$ and, especially when plotted on a logarithmic scale, using higher cutoffs does not lead to a noticeable difference. The spacing between values of $s$ sampled is $\delta s = 0.01$. The vertical spikes on the plots are caused by sign changes of the amplitude. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figures/convergence} \caption{\label{fig:convergence}Estimated power-law exponent $\beta$ after fitting the data for the real part to \eqref{eq:fit2} for every value of $s$. The value of $\beta=2$ indicated by the dashed line would correspond to the ``random phase'' model explained in Section~\ref{subsec:planar-sle1}.} \end{figure} \section{Warm-up: Two-point amplitude} \label{sec:two-point function} Before diving directly into the derivation of the four-point function, we demonstrate some further features of the method at a simpler example, namely a two-point function. Let \begin{equation} I(s)=-i\int_{\Gamma} \mathrm{d}\tau \int_0^1 \mathrm{d}z\ \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s} \label{eq:two-point function integral} \end{equation} for $s> 0$.\footnote{For $s\le 0$, the Rademacher procedure does not converge.} This is basically the two-point function in bosonic open-string theory. The contour $\Gamma$ runs as before from 0 to $\frac{1}{2}$. Of course, unless $s=0$, this two-point function is off-shell. This does not prevent us from computing this integral, however. This calculation contains all the main ideas that go into the computation of the four-point function below, but is still much lower in complexity and thus serves as a good toy example. Compared to the partition function analyzed using the Rademacher method in Section~\ref{subsec:Rademacher contour}, the new aspect we want to learn about from \eqref{eq:two-point function integral} is how to deal with branch cuts of the integrand. \subsection{Modular transformation} The general logic of the Rademacher contour posits that \begin{equation} I(s)=\sum_{c=1}^\infty \sum_{\begin{subarray}{c} 1 \le a \le \frac{c}{2} \\ (a,c)=1\end{subarray}} \int_{C_{a/c}} \mathrm{d}\tau \int_0^1 \mathrm{d}z\, \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s}\ . \end{equation} The $z$-contour is unaffected by this contour deformation. To compute the integral over the circle $C_{a/c}$, we use the modular properties of the integrand.
Notice that \begin{equation} \Phi(z,\tau)=\frac{\vartheta_1(z,\tau)^2}{\eta(\tau)^6} \end{equation} is a weak Jacobi form of index $1$ and weight $-2$, which means that it transforms as follows under modular transformations and shifts in $z$: \begin{subequations} \begin{align} \Phi\left(\frac{z}{c \tau+d},\frac{a \tau+b}{c \tau+d}\right)&=(c \tau+d)^{-2} \mathrm{e}^{\frac{2\pi i cz^2}{c\tau+d}} \Phi(z,\tau)\ , \\ \Phi(z+m \tau+n,\tau)&=\mathrm{e}^{-2\pi i (m^2 \tau+2 m z)} \Phi(z,\tau)\ . \end{align} \end{subequations} Conceptually, this transformation behaviour means that $\Phi(z,\tau)$ is a holomorphic section of a certain line bundle over the moduli space of two-punctured tori $\overline{\mathcal{M}}_{1,2}$. Proceeding as in \eqref{eq:modular transformation Ca/c}, we set \begin{equation} \tau=\frac{a \tau'+b}{c \tau'+d} \end{equation} for a new modular parameter $\tau'$. This modular transformation has the property that the line $\tau'\in i+\RR$ gets mapped to the circle $C_{a/c}$. We have \begin{subequations} \label{eq:modular transformation Phi} \begin{align} \Phi(z,\tau)&=\Phi\left(\frac{z(c \tau'+d)}{c \tau'+d}, \frac{a \tau'+b}{c \tau'+d}\right) \\ &=(c \tau'+d)^{-2} \mathrm{e}^{2\pi i c(c \tau'+d)z^2} \Phi(z (c \tau'+d),\tau')\ . \end{align} \end{subequations} Thus we obtain \begin{align} \int_{C_{a/c}} &\mathrm{d}\tau \int_0^1 \mathrm{d}z\, \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s} \nonumber\\ &=-\int_{\longrightarrow} \frac{\mathrm{d} \tau'}{(c \tau'+d)^{2+2s}} \int_0^1 \mathrm{d}z \ \mathrm{e}^{2\pi i c(c \tau'+d)z^2} \left(\frac{\vartheta_1(z (c \tau'+d),\tau')}{\eta(\tau')^3}\right)^{2s} \ . \label{eq:integral Ca/c two-point after modular transformation} \end{align} As before, $\longrightarrow$ denotes the contour parallel to the real axis. The minus sign appears because we turned around the orientation of the contour. Since we raised \eqref{eq:modular transformation Phi} to a fractional power, we need to be careful about branch cuts. In the integrand, by $(c \tau'+d)^{2+2s}$ we mean its principal branch. The correct branch on the right-hand side can be determined by considering the integrand for $z \to 0$. Using that $\vartheta_1'(0,\tau)=2\pi \eta(\tau)^3$, we have for the integrand before modular transformation: \begin{equation} \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s} \to (2\pi z)^{2s}\ , \end{equation} and after: \begin{equation} \frac{\mathrm{e}^{2\pi i c(c \tau'+d)z^2}}{(c \tau'+d)^{2s}} \left(\frac{\vartheta_1(z (c \tau'+d),\tau')}{\eta(\tau')^3}\right)^{2s} \to \frac{(2\pi z(c \tau'+d))^{2s}}{(c \tau'+d)^{2s}}=(2\pi z)^{2s}\ , \end{equation} where we use the principal branch throughout. Since these two expressions agree, we conclude that the branch on the right hand side of \eqref{eq:integral Ca/c two-point after modular transformation} is specified by taking the principal branch in the region $z \to 0$ and then following the branch smoothly for the other values of $z$. A similar argument could have been applied also for $z \to 1$ and the branch again becomes the principal branch in that region. Finally, we shift $\tau' \to \tau'-\frac{d}{c}$ since this will be more convenient in the following. We also rename $\tau' \to \tau$ for better readability.
Thus, in the end we have \begin{align} \int_{C_{a/c}} \mathrm{d}\tau \int_0^1 \mathrm{d}z\, \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s}&=-\int_{\longrightarrow} \frac{\mathrm{d} \tau}{(c \tau)^{2+2s}} \int_0^1 \mathrm{d}z\ q^{s c^2 z^2} \left(\frac{\vartheta_1(z c \tau,\tau-\frac{d}{c})}{\eta(\tau-\frac{d}{c})^3}\right)^{2s} \ , \label{eq:integral Ca/c two-point function} \end{align} where $q=\mathrm{e}^{2\pi i \tau}$. \subsection{Tropical behaviour} As in the example we explained in Section~\ref{subsec:Rademacher contour}, the main trick to evaluate the integral on the right hand side of \eqref{eq:integral Ca/c two-point function} explicitly is to push the horizontal contour up to very high values of $\Im \tau$. The result is then only sensitive to the singular behaviour of the integrand, which is controlled by its tropicalization. To leading order, the integrand goes like $q^\mathrm{Trop}$ as $\Im \tau \to \infty$, where the function $\mathrm{Trop}$ still depends on the other moduli and kinematics of the problem ($z$ and $s$ in this case). We are interested in the limit $q \to 0$, which is dominated by the most negative values of $\mathrm{Trop}$. We can work out $\mathrm{Trop}$ from the definition of the Jacobi theta function, \begin{equation} \vartheta_1(zc\tau,\tau-\tfrac{d}{c})=-i \sum_{n=-\infty}^{\infty} (-1)^n\, \mathrm{e}^{-\frac{\pi i d(2n-1)^2}{4c}} q^{\frac{1}{2}(n-\frac{1}{2})^2-(n-\frac{1}{2})zc}\ . \end{equation} The exponents $\frac{1}{2}(n-\tfrac{1}{2})^2 - (n-\frac{1}{2})zc$ grow when $n \to \pm \infty$ and thus there is a minimal exponent which controls the behaviour of the theta function near $q \to 0$. The minimum exponent appears for $n=\lfloor cz \rfloor+1$. Combining this fact with the leading behaviour of the other factors in the integrand, we get the behaviour \begin{equation} q^{s c^2 z^2} q^{2s \left[ \frac{1}{2}(\frac{1}{2}+\lfloor cz \rfloor)^2 - (\frac{1}{2} + \lfloor cz \rfloor) cz \right]} q^{-\frac{s}{4}} = q^{\mathrm{Trop}}\, , \end{equation} and hence, writing $cz=\lfloor cz \rfloor+\{cz\}$ and completing the square, we conclude \begin{equation} \mathrm{Trop}=-s \, \{c z\}(1-\{c z\})\ , \label{eq:leading Trop two-point function} \end{equation} where $\{x\}$ denotes the fractional part, i.e., $\{x\}=x-\lfloor x \rfloor$. This function is periodic with period $\frac{1}{c}$. Moreover, we notice that it vanishes on the boundary of the segments $z \in [\frac{n}{c},\frac{n+1}{c}]$. This means that the boundaries of these segments do not contribute to the integral, since when $z=\frac{n}{c}$, we can take $\Im \tau \to \infty$, which makes the integrand arbitrarily small. This makes it natural to split up the integral into disjoint contributions. Let us set \begin{equation} z=\frac{n+\xi}{c} \label{eq:z xi change of variables 2-point function} \end{equation} with $n \in \{0,1,\dots,c-1\}$ and $\xi \in [0,1]$. \subsection{Branches of \texorpdfstring{$\log \vartheta_1$}{log theta1}} \label{subsec:branches log theta1} To continue, we should carefully discuss the branch of the integrand. It will be sufficient to study \begin{equation} \log \vartheta_1(cz\tau,\tau-\tfrac{d}{c}) \end{equation} with $z=\frac{n+\xi}{c}$. Recall that we specified the branch of the integrand by taking the principal branch near $z \to 0$. We want to determine the branch of the logarithm that is obtained by continuously following the branch as we vary $z$ from $0$ to our desired value.
We claim that for this branch, \begin{multline} \log \vartheta_1((n+\xi) \tau,\tau-\tfrac{d}{c})=-\pi i\tau (n+\xi)^2+\pi i\tau (\xi-\tfrac{1}{2})^2+\tfrac{\pi i}{2}-\tfrac{\pi i d}{4c}-2\pi i \sum_{m=1}^n \st{\tfrac{md}{c}}\\ +\log \prod_{\ell=1}^\infty\big(1-\mathrm{e}^{-\frac{2\pi i d \ell}{c}}q^\ell\big)\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell+n)}{c}} q^{\ell-\xi}\big)\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell-n-1)}{c}}q^{\ell+\xi-1}\big)\ .\label{eq:log theta1 branch} \end{multline} Recall the definition of the sawtooth function $\st{x}$ given in eq.~\eqref{eq:st definition}, which will play a very important role in the following. We use the product representation of the Jacobi theta function for the argument since it is more convenient. One can easily prove this claim by induction over $n$, as will be done below. Notice that the function $\log \vartheta_1(cz\tau,\tau-\tfrac{d}{c})$ has branch points for \begin{equation} c z \tau \in \ZZ+\ZZ(\tau-\tfrac{d}{c})\, , \end{equation} which never lie on the interval $z \in (0,1)$. Thus the choice of branch is independent of $\tau$ and it will be convenient to choose $\Im \tau$ very large, i.e., $q=\mathrm{e}^{2\pi i \tau}$ small. For $n=0$, we use the product representation of $\vartheta_1$ to write \begin{align} \log \vartheta_1(\xi \tau,\tau-\tfrac{d}{c})&=\log \Big[ i\, q^{\frac{1}{8}-\frac{\xi}{2}}\mathrm{e}^{-\frac{\pi i d}{4c}} \prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d\ell}{c}} q^\ell) \nonumber \\ &\qquad\quad\times (1-\mathrm{e}^{-\frac{2\pi i d\ell}{c}} q^{\ell-\xi})(1-\mathrm{e}^{-\frac{2\pi i d(\ell-1)}{c}} q^{\ell+\xi-1}) \Big]\\ &=\tfrac{\pi i}{2} + \tfrac{\pi i\tau}{4}-\pi i \tau \xi-\tfrac{\pi i d}{4c} +\log \Big[ \prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d\ell}{c}} q^\ell) \nonumber \\ &\qquad\quad\times (1-\mathrm{e}^{-\frac{2\pi i d\ell}{c}} q^{\ell-\xi})(1-\mathrm{e}^{-\frac{2\pi i d(\ell-1)}{c}} q^{\ell+\xi-1}) \Big]\ . \end{align} In the last line, we chose the principal branch for $\Re \tau=\frac{d}{c}$ and large $\Im \tau$, since this was our prescription for determining the branch.\footnote{The choice $\Re \tau=\frac{d}{c}$ is not necessary if we look at the ratio $\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}$ since the leading exponent $q^{\frac{1}{8}}$ cancels out.} We next discuss the induction step $n \to n+1$. We can start with the formula \eqref{eq:log theta1 branch} for $n$ and then smoothly take $\xi$ from the range $[0,1]$ to the range $[1,2]$. We can discuss every factor in the infinite product separately. Since we are taking $q$ very small, the only dangerous factors are those where the exponent of $q$ can become less than zero, which only happens for the second factor in the infinite product at $\ell=1$. Thus it is enough to discuss the branch of \begin{equation} \log \left(1-\mathrm{e}^{2\pi i \varphi} q^{1-\xi}\right) \end{equation} for a phase $\varphi$. For $\xi \in [0,1]$, the choice of branch in the function is clear. For $\xi>1$, the second term dominates. The correct branch is obtained by the following consideration. First note that for $\varphi=\frac{1}{2}$, the branch is trivial to choose and we have \begin{equation} \log (1+q^{1-\xi})=2\pi i \tau (1-\xi)+\log (1+q^{\xi-1})\ . \end{equation} For arbitrary $\varphi$, we have \begin{equation} \log (1-\mathrm{e}^{2\pi i \varphi} q^{1-\xi})=2\pi i \tau (1-\xi)+\pi i +2\pi i \varphi+2\pi i k +\log (1-\mathrm{e}^{-2\pi i \varphi} q^{\xi-1}) \end{equation} for some integer $k$.
Finally, we use that the phase can only jump for integer $\varphi$, since then the contour in $\xi$ hits the branch point. Thus the correct branch is \begin{equation} \log (1-\mathrm{e}^{2\pi i \varphi} q^{1-\xi})=2\pi i \tau (1-\xi)+2\pi i \st{\varphi} +\log (1-\mathrm{e}^{-2\pi i \varphi} q^{\xi-1}) , \end{equation} where we defined the sawtooth function $\st{\varphi}$ in \eqref{eq:st definition}. Applying this identity to \eqref{eq:log theta1 branch} with $\varphi=-\frac{d(n+1)}{c}$ gives \begin{align} \log \, &\vartheta_1((n+\xi) \tau,\tau-\tfrac{d}{c}) \nonumber\\ &=-\pi i\tau (n+\xi)^2+\pi i \tau (\xi-\tfrac{1}{2})^2+2\pi i \tau (1-\xi)+\tfrac{\pi i}{2}-\tfrac{\pi i d}{4c}-2\pi i \sum_{m=1}^{n+1} \st{\tfrac{md}{c}}\nonumber\\ &\qquad+\log \Big[ \prod_{\ell=1}^\infty\big(1-\mathrm{e}^{-\frac{2\pi i d \ell}{c}}q^\ell\big)\prod_{\ell=2}^\infty \big(1-\mathrm{e}^{-\frac{2\pi i d(\ell+n)}{c}} q^{\ell-\xi}\big)\nonumber\\ &\qquad\qquad\qquad\times \big(1-\mathrm{e}^{\frac{2\pi i d(1+n)}{c}} q^{\xi-1}\big)\prod_{\ell=1}^\infty\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell-n-1)}{c}}q^{\ell+\xi-1}\big) \Big]\, , \end{align} and after further massaging: \begin{align} \log \, &\vartheta_1((n+\xi) \tau,\tau-\tfrac{d}{c}) = -\pi i\tau (n+\xi)^2+\pi i \tau (\xi-\tfrac{3}{2})^2+\tfrac{\pi i}{2}-\tfrac{\pi i d}{4c}-2\pi i \sum_{m=1}^{n+1} \st{\tfrac{md}{c}}\nonumber\\ &\qquad+\log \Big[ \prod_{\ell=1}^\infty\big(1-\mathrm{e}^{-\frac{2\pi i d \ell}{c}}q^\ell\big)\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell+n+1)}{c}} q^{\ell+1-\xi}\big)\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell-n-2)}{c}}q^{\ell+\xi-2}\big) \Big]\ . \end{align} This is the claimed expression for $\log \vartheta_1((n+1+\xi) \tau,\tau-\tfrac{d}{c})$, but with $\xi$ replaced with $\xi-1$, showing that \eqref{eq:log theta1 branch} is the correct branch for the logarithm. For future reference, let us note that \begin{equation} \sum_{m=1}^{c-1} \bigst{\frac{m d}{c}}=0\ . \end{equation} This is because we are summing over all non-zero elements of the ring $\ZZ_c$. They are paired up as $md$ and $-md$ and hence cancel out pairwise thanks to $\st{\frac{md}{c}}+\st{-\frac{md}{c}}=0$. It continues to hold if $md\equiv-md \bmod c$, since then $\frac{md}{c}\equiv\frac{1}{2}$ and hence $\st{\frac{md}{c}}=0$. This means that the phase on the last segment $\frac{c-1}{c}<z<1$ is again trivial. This had to happen because we can either use $z \to 0$ or $z \to 1$ to fix the branch as described above. \subsection{Thresholds from \texorpdfstring{$q$}{q}-expansion} Let us now insert the branch \eqref{eq:log theta1 branch} into \eqref{eq:integral Ca/c two-point function}. We get \begin{multline} \int_{C_{a/c}} \mathrm{d}\tau \int_0^1 \mathrm{d}z\ \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s}=\sum_{n=0}^{c-1} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c(-ic\tau)^{2+2s}} \int_0^1 \mathrm{d}\xi\ q^{s \xi(\xi-1)}\mathrm{e}^{-4\pi i s \sum_{m=1}^n \st{\frac{md}{c}}} \\ \times \prod_{\ell=1}^\infty \frac{\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell+n)}{c}} q^{\ell-\xi}\big)^{2s}\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell-n-1)}{c}}q^{\ell+\xi-1}\big)^{2s}}{\big(1-\mathrm{e}^{-\frac{2\pi i d \ell}{c}} q^\ell\big)^{4s}} \, .\label{eq:two-point function Ca/c integral split up} \end{multline} We absorbed the phase $\mathrm{e}^{\pi i s}$ and the overall minus sign into the denominator. We also get another factor $\frac{1}{c}$ from the Jacobian of the change of variables \eqref{eq:z xi change of variables 2-point function}. Let us $q$-expand the integrand.
There are only finitely many terms in the $q$-expansion that can potentially contribute to the integral. Indeed, every term in the $q$-expansion is of the form $q^{\mathrm{Trop}_{m_\mathrm{D},m_\U}}$ with \begin{equation} \mathrm{Trop}_{m_\mathrm{D},m_\U}=s\xi(\xi-1)+m_\mathrm{D} \xi+m_\U(1-\xi) \label{eq:Trop two-point function} \end{equation} and $m_\mathrm{D},\, m_\U$ two non-negative integers. A term of this form can only contribute when $\mathrm{Trop}_{m_\mathrm{D},m_\U} < 0$ for some choice of $\xi \in [0,1]$. The function $\mathrm{Trop}_{m_\mathrm{D},m_\U}$ attains its minimum at \begin{equation} \xi_\text{min}=\frac{m_\U-m_\mathrm{D}+s}{2s}\ , \end{equation} which lies inside the unit interval for $|m_\U-m_\mathrm{D}|\le s$. In the other case where $|m_\mathrm{D}-m_\U| \ge s$, the minimum of $\mathrm{Trop}_{m_\mathrm{D},m_\U}$ is attained on the boundary of the interval, where $\mathrm{Trop}_{m_\mathrm{D},m_\U}$ is always non-negative. Thus $\mathrm{Trop}_{m_\mathrm{D},m_\U}$ can potentially become negative somewhere on the unit interval only for $(m_\mathrm{D},m_\U)$ with $|m_\mathrm{D}-m_\U| \le s$. In this case we have \begin{equation} \min_{\xi \in [0,1]} \mathrm{Trop}_{m_\mathrm{D},m_\U} = -\frac{[s-(\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2][s-(\sqrt{m_\mathrm{D}}-\sqrt{m_\U})^2]}{4s}\, . \end{equation} This expression is negative, and the corresponding term hence contributes to the integral, when either \begin{equation} s \ge (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2\quad \text{or}\quad s \le (\sqrt{m_\mathrm{D}}-\sqrt{m_\U})^2\ . \end{equation} Since \begin{equation} s \ge | m_\mathrm{D}-m_\U|=|\sqrt{m_\mathrm{D}}-\sqrt{m_\U}| (\sqrt{m_\mathrm{D}}+\sqrt{m_\U}) \ge |\sqrt{m_\mathrm{D}}-\sqrt{m_\U}|^2\ , \end{equation} the latter is incompatible with the assumption $s \ge |m_\mathrm{D}-m_\U|$. Thus only terms with $s \ge (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2$ contribute to the integral. This is precisely how thresholds manifest themselves. Assuming now that $s \ge (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2$, we may extend the integration region $\xi \in [0,1]$ to $\xi \in \RR$, since the exponent $\mathrm{Trop}_{m_\mathrm{D},m_\U}$ is positive outside of the interval $[0,1]$ and hence the region outside of the unit interval also leads to a vanishing contribution to the integral. This extension of the integration contour will simplify the analysis later on. Next, let us note that the phase of a term of the form $q^{m_\mathrm{D} \xi+m_\U(1-\xi)}$ appearing in the $q$-expansion of the infinite product in \eqref{eq:two-point function Ca/c integral split up} is given by $\mathrm{e}^{\frac{2\pi id}{c}(n m_\mathrm{D} -(n+1)m_\U)}$. This becomes obvious if we set $q_\mathrm{D}=q^{\xi}$ and $q_\U=q^{1-\xi}$, so that $q_\mathrm{D} q_\U=q$. We can then write terms appearing in \eqref{eq:two-point function Ca/c integral split up} as \begin{subequations} \begin{align} \mathrm{e}^{-\frac{2\pi i d (\ell+n)}{c}} q^{\ell-\xi}&=\mathrm{e}^{\frac{2\pi i d}{c}(n (\ell-1) -(n+1)\ell)} q_\mathrm{D}^{\ell-1}q_\U^{\ell}\ , \\ \mathrm{e}^{-\frac{2\pi i d (\ell-n-1)}{c}} q^{\ell+\xi-1}&=\mathrm{e}^{\frac{2\pi i d}{c}(n \ell-(n+1)(\ell-1))} q_\mathrm{D}^\ell q_\U^{\ell-1}\ , \\ \mathrm{e}^{-\frac{2\pi i d \ell}{c}} q^\ell &= \mathrm{e}^{\frac{2\pi i d}{c}(n \ell-(n+1)\ell)} q_\mathrm{D}^\ell q_\U^{\ell}\ .
\end{align} \end{subequations} We can thus write \begin{multline} \prod_{\ell=1}^\infty \frac{\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell+n)}{c}} q^{\ell-\xi}\big)^{2s}\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell-n-1)}{c}}q^{\ell+\xi-1}\big)^{2s}}{\big(1-\mathrm{e}^{-\frac{2\pi i d \ell}{c}} q^\ell\big)^{4s}}\\ =\sum_{m_\mathrm{D},m_\U=0}^\infty Q_{m_\mathrm{D},m_\U}^{(2)}(s)\, \mathrm{e}^{\frac{2\pi id}{c}(n m_\mathrm{D} -(n+1)m_\U)}q^{m_\mathrm{D} \xi+m_\U(1-\xi)} \end{multline} with \begin{equation} Q_{m_\mathrm{D},m_\U}^{(2)}(s)=[q_\mathrm{D}^{m_\mathrm{D}} q_\U^{m_\U}] \prod_{\ell=1}^\infty \frac{(1-q_\mathrm{D}^{\ell-1}q_\U^\ell)^{2s}(1-q_\mathrm{D}^{\ell}q_\U^{\ell-1})^{2s}}{(1-q_\mathrm{D}^{\ell}q_\U^{\ell})^{4s}}\ . \end{equation} The superscript $(2)$ is intended to distinguish these coefficients from similar coefficients appearing in the four-point function case. We can insert this expansion in \eqref{eq:two-point function Ca/c integral split up} to get \begin{align} &\int_{C_{a/c}} \mathrm{d}\tau \int_0^1 \mathrm{d}z\, \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s}=\sum_{n=0}^{c-1} \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \mathrm{e}^{-4\pi i s \sum_{m=1}^n \st{\frac{md}{c}}+\frac{2\pi i (n m_\mathrm{D}-(n+1) m_\U) d}{c}} \nonumber\\ &\qquad\qquad\qquad \times Q_{m_\mathrm{D},m_\U}^{(2)}(s)\int_{\longrightarrow} \frac{\mathrm{d} \tau}{c(-ic \tau)^{2+2s}}\ \int_{-\infty}^\infty \mathrm{d}\xi \ q^{s \xi(\xi-1)+m_\mathrm{D} \xi+m_\U(1-\xi)} \ . \end{align} \subsection{\label{subsec:2-pt assembling}Assembling the result} The integral over $\xi$ is Gaussian and simple to perform. Let us denote \begin{equation} \Delta_{m_\mathrm{D},m_\U}(s) = [s-(\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2][s-(\sqrt{m_\mathrm{D}}-\sqrt{m_\U})^2]\ . \label{eq:definition Delta} \end{equation} Evaluating the Gaussian integral leads to \begin{multline} \int_{C_{a/c}} \mathrm{d}\tau \int_0^1 \mathrm{d}z\, \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s} =\sum_{\begin{subarray}{c} m_\mathrm{D},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}}\sum_{n=0}^{c-1} \mathrm{e}^{-4\pi i s \sum_{m=1}^n \st{\frac{md}{c}}+\frac{2\pi i (n m_\mathrm{D}-(n+1)m_\U) d}{c}} \\ \times Q_{m_\mathrm{D},m_\U}^{(2)}(s)\int_{\longrightarrow} \frac{\mathrm{d} \tau}{(-i\tau)^{\frac{5}{2}+2s}c^{3+2s}\sqrt{2s}}\ q^{-\frac{\Delta_{m_\mathrm{D},m_\U}}{4s}} \ . \end{multline} The integral over $\tau$ can be computed exactly. Let us consider \begin{equation} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{(-i \tau)^z} \mathrm{e}^{-2\pi i \tau a} \end{equation} for an arbitrary positive parameter $a$ and arbitrary exponent $z$. Changing variables to $x=2\pi i \tau a$ gives \begin{equation} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{(-i \tau)^z} \mathrm{e}^{-2\pi i \tau a}=-i(2\pi a)^{z-1}\int_{\uparrow} \mathrm{d}x\ (-x)^{-z} \mathrm{e}^{-x}\ . \end{equation} Upon performing the change of variables from $\tau$ to $x$, the contour gets rotated by 90 degrees and now runs upwards, which we signified by the arrow $\uparrow$. We can deform this contour into the Hankel contour $\mathcal{H}$ which runs from $\infty+i \varepsilon$ to $\infty-i\varepsilon$ by surrounding the whole branch cut running from $x=0$ to $x=\infty$ along the real axis. We then use the Hankel representation of the Gamma function, \begin{equation} \int_{\mathcal{H}} \mathrm{d}x\ (-x)^{-z} \mathrm{e}^{-x}=-\frac{2\pi i}{\Gamma(z)}\ .
\end{equation} Therefore, we find \begin{equation} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{(-i \tau)^z} \mathrm{e}^{-2\pi i \tau a}=i(2\pi a)^{z-1}\int_{\mathcal{H}} \mathrm{d}x\ (-x)^{-z} \mathrm{e}^{-x}=\frac{2\pi(2\pi a)^{z-1}}{\Gamma(z)}\, . \label{eq:Hankel contour identity q integral} \end{equation} We can thus finish the computation as follows, \begin{multline} \int_{C_{a/c}} \mathrm{d}\tau \int_0^1 \mathrm{d}z\, \left(\frac{\vartheta_1(z,\tau)}{\eta(\tau)^3}\right)^{2s}=\frac{\pi^{\frac{5}{2}+2s}}{2^{1+2s}s^{2+2s}c^{3+2s}\Gamma(\frac{5}{2}+2s)} \\ \times \!\!\!\! \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \!\!\!\! Q_{m_\mathrm{D},m_\U}^{(2)}(s)\, \Delta_{m_\mathrm{D},m_\U}^{\frac{3}{2}+2s}(s) \sum_{n=0}^{c-1} \mathrm{e}^{-4\pi i s \sum_{m=1}^n \st{\frac{md}{c}}+\frac{2\pi i (n m_\mathrm{D}-(n+1)m_\U) d}{c}}\ . \end{multline} We can finally assemble all the circles of the Rademacher contour to obtain the final result for the integral \eqref{eq:two-point function integral}, \begin{multline} I(s)=-i \sum_{c=1}^\infty \frac{\pi^{\frac{5}{2}+2s}}{2^{1+2s}s^{2+2s}c^{3+2s}\Gamma(\frac{5}{2}+2s)}\sum_{\begin{subarray}{c} 1 \le a \le \frac{c}{2} \\ (a,c)=1 \end{subarray}} \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \!\!\!\! Q_{m_\mathrm{D},m_\U}^{(2)}(s)\, \Delta_{m_\mathrm{D},m_\U}^{\frac{3}{2}+2s}(s)\\ \times \sum_{n=0}^{c-1} \mathrm{e}^{-4\pi i s \sum_{m=1}^n \st{\frac{md}{c}}+\frac{2\pi i (n m_\mathrm{D}-(n+1)m_\U) d}{c}}\ . \label{eq:two-point function evaluated} \end{multline} As usual, $d$ denotes the inverse of $a \bmod c$. This is our final result for the two-point function. \subsection{Cross-checks} Given the number of non-trivial manipulations that went into this computation, it would be reassuring to perform a stress test. We now explain such a check. Let us change the computation slightly and compute instead the integral over the contour in \eqref{eq:two-point function integral} that runs from $0$ to $1$ (instead of $0$ to $\frac{1}{2}$). Let us denote the corresponding result by $\tilde{I}(s)$. The Rademacher logic still applies to this integral and \eqref{eq:two-point function evaluated} still holds, except that the summation range over $a$ gets extended to $1 \le a \le c$, reflecting the fact that we now need to use all the circles up to 1 in the contour. \begin{figure} \centering \includegraphics[scale=0.9]{figures/convergence-two-point-function.pdf} \caption{Convergence of the Rademacher method for the two-point function. We plot the relative error \eqref{eq:two-point function exact difference} on the $y$-axis, which decreases with larger cutoffs $c$ for any $s$.} \label{fig:two-point function check} \end{figure} The integral $\tilde{I}(s)$ over the new contour is very simple to evaluate analytically. Indeed, since the integrand is periodic under $\tau \to \tau+1$, we are simply extracting the leading Fourier coefficient in $\tau$. Since \begin{equation} \frac{\vartheta_1(z,\tau)}{\eta(\tau)^3} \overset{\Im \tau \to \infty}{\longrightarrow} 2 \sin(\pi z)\ , \end{equation} the constant Fourier coefficient of the integrand is $(2 \sin(\pi z))^{2s}$. We thus get \begin{equation} \tilde{I}(s)=-i \int_0^1 \mathrm{d}z \left(2 \sin(\pi z)\right)^{2s}=-i \, \frac{4^s \, \Gamma(s+\frac{1}{2})}{\sqrt{\pi}\, \Gamma(s+1)}\ .
\label{eq:two-point function alternative contour simple evaluation} \end{equation} We can easily check numerically whether this equals \eqref{eq:two-point function evaluated} with extended range over $a$. We plot in Figure~\ref{fig:two-point function check} the quantity \begin{equation} \left|\frac{\eqref{eq:two-point function evaluated}\text{ with extended range $1 \le a \le c$}}{\eqref{eq:two-point function alternative contour simple evaluation}}-1\right| \label{eq:two-point function exact difference} \end{equation} in the interval $0 \le s \le 5$ for larger and larger cutoffs $c$. Clearly, the error goes to zero as $c$ grows. Moreover, thanks to the presence of the factor $\frac{1}{c^{3+2s}}$ in \eqref{eq:two-point function evaluated}, convergence is much faster for large values of $s$. This check nicely tests the whole formula. As another remark, we note that the limit $s \to 0$ is rather subtle. The exact answer \eqref{eq:two-point function alternative contour simple evaluation} clearly goes to $-i$ as $s \to 0$. On the other hand, \eqref{eq:two-point function evaluated} for fixed $c$ goes to zero even if we extend the range to $1 \leq a \leq c$. This means that we are not allowed to commute the limit with the infinite sum over $c$ in \eqref{eq:two-point function evaluated}. We can see this explicitly by plotting \eqref{eq:two-point function evaluated} (with extended range $1 \le a \le c$) for different cutoffs $c$ near $s=0$. The results are plotted in Figure~\ref{fig:limit exchange two-point function}. The curve obtained from the Rademacher method converges everywhere to the exact answer, except at $s=0$. \begin{figure} \centering \includegraphics[scale=0.9]{figures/limit-exchange-two-point-function.pdf} \caption{The behaviour of the Rademacher formula near $s=0$. The gray dashed line is the exact answer \eqref{eq:two-point function alternative contour simple evaluation} (multiplied by $i$ to make it real), while the different curves are the answers obtained from \eqref{eq:two-point function evaluated} when truncating the sum at different maximal values of $c$. } \label{fig:limit exchange two-point function} \end{figure} \section{Four-point planar amplitude \texorpdfstring{$A^{\text{p}}(s,t)$}{Ap(s,t)}} \label{sec:planar amplitude derivation} We now derive the equation \eqref{eq:planar four-point function s-channel} for the planar amplitude in the $s$-channel. Many of the steps performed here are analogous to the steps for the two-point function and thus we keep their discussion brief. The formula for the amplitude in the $u$-channel will turn out to be similar as well. \subsection{Integrand on the Ford circle \texorpdfstring{$C_{a/c}$}{Cac}} We focus on the contribution of the Ford circle $C_{a/c}$ to the planar amplitude given by the integral \eqref{eq:Ap-Ford-circle}. Let us call it $A^\mathrm{p}_{a/c}$. The full planar amplitude $A^{\mathrm{p}}$ is the sum of $A^\mathrm{p}_{a/c}$ for all irreducible fractions $\frac{a}{c}$ plus the cusp contribution $\Delta A^{\mathrm{p}}$. As in our toy example above, we want to perform the change of variables \begin{equation} \tau=\frac{a\tau'+b}{c \tau'+d}\ .
\end{equation} Using the modular transformation behaviour of the theta functions, we have \begin{multline} A^{\mathrm{p}}_{a/c} = i \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^2 \tau^2} q^{c^2 s z_{41}z_{32}-c^2tz_{21}z_{43}} \left(\frac{\vartheta_1(z_{21}c \tau,\tau-\frac{d}{c})\vartheta_1(z_{43}c \tau,\tau-\frac{d}{c})}{\vartheta_1(z_{31}c \tau,\tau-\frac{d}{c})\vartheta_1(z_{42}c \tau,\tau-\frac{d}{c})}\right)^{-s} \\ \times \left(\frac{\vartheta_1(z_{32}c \tau,\tau-\frac{d}{c})\vartheta_1(z_{41}c \tau,\tau-\frac{d}{c})}{\vartheta_1(z_{31}c \tau,\tau-\frac{d}{c})\vartheta_1(z_{42}c \tau,\tau-\frac{d}{c})}\right)^{-t}\ . \label{eq:circle contribution Rademacher} \end{multline} Here we renamed $\tau' \to \tau-\frac{d}{c}$.\footnote{The original $\tau$ will not appear again and hopefully this does not lead to confusion.} We also set $q=\mathrm{e}^{2\pi i \tau}$. The overall sign changes again due to the choice of orientation for the contour. The branch of the right-hand side is determined as follows. We first note that the integrand of the original integral \eqref{eq:planar amplitude} simplifies for small $z_{ij}$ to \begin{equation} \left( \frac{\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{-s} \left( \frac{\vartheta_1(z_{32},\tau)\vartheta_1(z_{41},\tau)}{\vartheta_1(z_{31},\tau)\vartheta_1(z_{42},\tau)}\right)^{-t} \to \left(\frac{z_{21}z_{43}}{z_{31}z_{42}}\right)^{-s}\left(\frac{z_{32}z_{41}}{z_{31}z_{42}}\right)^{-t}\ , \end{equation} independently of $\tau$. This is compatible with the leading behaviour of the integrand \eqref{eq:circle contribution Rademacher} as $z_{ij} \to 0$. Thus we take the principal branch in \eqref{eq:circle contribution Rademacher} for small $z_{ij}$ and then follow the branch smoothly when varying $z_i$. \subsection{Tropicalization} We now again want to push the $\tau$ contour to large values of $\Im \tau$. The leading behaviour is controlled by the function $\mathrm{Trop}$, which appears as the leading exponent $q^\mathrm{Trop}$ as $q \to 0$. It is given by \begin{equation} \mathrm{Trop}=\frac{1}{2}\sum_{i>j} s_{ij} \{c z_{ij}\}(1-\{c z_{ij}\})\ , \end{equation} where we remind the reader that $\{x\}$ denotes the fractional part of $x$. We are again interested in the region with $\mathrm{Trop}<0$, since the regions with $\mathrm{Trop}>0$ give a vanishing contribution to the integral when we take the limit $\Im \tau \to \infty$. Clearly, $\mathrm{Trop}$ is a periodic function with period $\frac{1}{c}$ in all $z_i$'s. As a consequence, the regions where $\mathrm{Trop}<0$ will come in families, since we can always translate a region with $\mathrm{Trop}<0$ by a multiple of $\frac{1}{c}$ in the $z_i$'s to obtain a new region with $\mathrm{Trop}<0$. For example, the subregion of the parameter space $(z_1,z_2,z_3)$ with $\mathrm{Trop}<0$ in the $s$-channel is depicted in Figure~\ref{fig:Trop regions s-channel c=3}. Recall that we fix $z_4 = 1$. \begin{figure} \centering \includegraphics{figures/plot3s.png} \caption{The regions $\Gamma_{n_1,n_2,n_3}$ in parameter space where $\mathrm{Trop}<0$ for $c=3$.} \label{fig:Trop regions s-channel c=3} \end{figure} There are in total $\frac{1}{6}c(c+1)(c+2)$ regions in the $z_i$-parameter space, where $\mathrm{Trop}<0$. We will label them as $\Gamma_{n_1,n_2,n_3}$ with $1 \le n_1 \le n_2 \le n_3 \le c$.
Each such $\Gamma_{n_1,n_2,n_3}$ is fully contained in the following region \begin{subequations} \begin{align} \label{eq:region Rn1,n2,n3} \frac{n_{ij}-1}{c}&\le z_{ij}\le \frac{n_{ij}+1}{c}\ , \qquad ij \in \{21,\, 43\}\ , \\ \frac{n_{ij}}{c}&\le z_{ij}\le \frac{n_{ij}+1}{c}\ , \qquad ij \in \{31,\, 41,\, 32,\, 42\}\ . \end{align} \end{subequations} Here, $n_{ij} \equiv n_i-n_j$ and $n_4 \equiv c$. We denote the contribution from the region $\Gamma_{n_1,n_2,n_3}$ by $A_{a/c}^{n_1,n_2,n_3}$, so that \begin{equation} A_{a/c}^{\mathrm{p}} = \sum_{1 \le n_1 \le n_2 \le n_3 \le c} A_{a/c}^{n_1,n_2,n_3}\ . \end{equation} We then set \begin{equation} z_i=\frac{n_i+\xi_i}{c} \end{equation} on each of the individual regions so that the integration range of $\xi_i$ is always the same in each region. \subsection{Contributions with \texorpdfstring{$\mathrm{Trop}<0$}{Trop<0}} We can determine the correct branch of the Jacobi theta functions raised to the powers of $s$ or $t$ by the same logic as for the two-point function. Inserting eq.~\eqref{eq:log theta1 branch} for the correct branch gives immediately \begin{align} A_{a/c}^{n_1,n_2,n_3}&=i \, \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^5 \tau^2} \int \mathrm{d}\xi_1\, \mathrm{d}\xi_2\, \mathrm{d}\xi_3\ \prod_{i>j} q^{-\frac{1}{2}s_{ij} \xi_{ij}(\xi_{ij}-1)} \mathrm{e}^{2\pi i s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{md}{c}}} \nonumber\\ &\qquad\times \prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d (\ell+n_{ij})}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}(1-\mathrm{e}^{-\frac{2\pi i d (\ell-n_{ij}-1)}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}} \ .\label{eq:integral Aa/c n1,n2,n3} \end{align} The integration region over the $\xi_i$'s is such that both the inequalities \eqref{eq:region Rn1,n2,n3} and $0 \le z_1 \le z_2 \le z_3 \le 1$ are satisfied. This means that for a generic region $\Gamma_{n_1,n_2,n_3}$ we have \begin{equation} -1 \le \xi_{21},\, \xi_{43} \le 1\ , \qquad 0 \le \xi_{31},\, \xi_{32},\, \xi_{41},\, \xi_{42} \le 1\ . \end{equation} For the regions with $n_{21}=0$, we have the smaller integration region where $\xi_{21}\ge 0$ should be imposed. Similarly when $n_{43}=0$, the integration region is restricted by $\xi_{43} \ge 0$. The branch in this formula is defined for $\xi_{ij}>0$, where the constant factor in the infinite product dominates for small $q$. \subsection{Thresholds from the \texorpdfstring{$q$}{q}-expansion} As a next step, we $q$-expand the integrand in \eqref{eq:integral Aa/c n1,n2,n3}. As for the two-point function, it will turn out that for a given $s$, there are only finitely many terms that contribute to the integral. We have to be careful with the two factors $(1-\mathrm{e}^{\frac{2\pi i d n_{21}}{c}} q^{\xi_{21}})^{-s}$ and $(1-\mathrm{e}^{\frac{2\pi i d n_{43}}{c}} q^{\xi_{43}})^{-s}$ that are present in the infinite product in \eqref{eq:integral Aa/c n1,n2,n3}. Since $\xi_{21}$ and $\xi_{43}$ are allowed to go to zero in the integration region, we are not allowed to $q$-expand these factors, but have to leave them unexpanded. 
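As an aside, the sawtooth sums $\sum_{m=1}^{n}\st{\frac{md}{c}}$ entering the phases of \eqref{eq:integral Aa/c n1,n2,n3} are cheap to tabulate. A minimal Python helper, assuming the convention $\st{x}=x-\lfloor x\rfloor-\frac{1}{2}$ for $x\notin\ZZ$ and $\st{x}=0$ for $x\in\ZZ$ of eq.~\eqref{eq:st definition}:
\begin{verbatim}
# Sawtooth function and its partial sums; illustrative helper.
from math import floor

def st(x):
    return 0.0 if x == floor(x) else x - floor(x) - 0.5

def st_sum(n, d, c):
    """sum_{m=1}^{n} ((m d / c)) entering the phase factors."""
    return sum(st(m * d / c) for m in range(1, n + 1))

# the full-period sum vanishes, cf. Section (branches of log theta1)
assert abs(st_sum(7 - 1, 3, 7)) < 1e-12
\end{verbatim}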
For the purpose of analyzing which term can dominate where, we notice that any term appearing in the $q$-expansion is of the form \begin{equation} q^{-\frac{1}{2}\sum_{i>j} s_{ij} \xi_{ij}(\xi_{ij}-1)+m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})} (1-\mathrm{e}^{\frac{2\pi i d n_{21}}{c}} q^{\xi_{21}})^{-s}(1- \mathrm{e}^{\frac{2\pi i d n_{43}}{c}}q^{\xi_{43}})^{-s} \label{eq:single term} \end{equation} for four non-negative integers that we denote by $m_\L$, $m_\mathrm{D}$, $m_\mathrm{R}$, and $m_\U$. The names indicate that they play the role of the (square of the) internal masses on the left, bottom, right and top part of a box Feynman diagram that approximates the worldsheet. It is easy to see that these integers satisfy the condition \begin{equation} 0 \le m_\L,\, m_\mathrm{R} \le m_\mathrm{D}+m_\U\ . \label{eq:restriction m1 m3} \end{equation} Let us work out the contribution from such a term to $A_{a/c}^{n_1,n_2,n_3}$. We first consider the leading exponent as $q \to 0$ \begin{multline} \mathrm{Trop}_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}=-\frac{1}{2}\sum_{i>j} s_{ij} \xi_{ij}(\xi_{ij}-1)+ \left(\begin{cases} m_\L \xi_{21} &\;\text{if}\quad \xi_{21}>0 \\ (m_\L-s) \xi_{21}\ &\;\text{if}\quad \xi_{21}<0 \end{cases}\right) + m_\mathrm{D} \xi_{32}\\ +\left(\begin{cases} m_\mathrm{R} \xi_{43} &\;\text{if}\quad \xi_{43}>0 \\ (m_\mathrm{R}-s)\xi_{43} &\;\text{if}\quad \xi_{43}<0 \end{cases}\right) +m_\U(1-\xi_{41})\ . \end{multline} The term contributes to the integral if $\mathrm{Trop}_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}$ becomes negative somewhere on the integration region. A straightforward analysis shows that $\mathrm{Trop}_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}$ attains its minimum at $\xi_{21}=0$ and $\xi_{43}=0$ (but is not differentiable there). Thus it suffices to restrict $\mathrm{Trop}_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}$ to this special case and analyze where it is negative. We have, setting $\xi_3 \equiv \xi$ and $\xi_1=0$, \begin{equation} \mathrm{Trop}_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}\Big|_{\begin{subarray}{c} \xi_{21}=0 \\ \xi_{43}=0 \end{subarray}} =s \xi(\xi-1)+m_\mathrm{D}\xi+m_\U(1-\xi)\ , \end{equation} which coincides with the Trop function in the two-point function case, see eq.~\eqref{eq:Trop two-point function}. Let us remark that this is not surprising from a field theory point of view. The $\xi_i$'s play the role of Schwinger parameters and taking $\xi_{21} \to 0$, $\xi_{43}\to 0$ essentially reduces the diagram to a bubble diagram with masses squared $m_\mathrm{D}$ and $m_\U$, which is what we analyzed in Section~\ref{sec:two-point function}. Thus the same conclusion as there holds and a term of the form \eqref{eq:single term} only contributes to the amplitude for $s \geq (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2$. \subsection{Evaluating a single term in the \texorpdfstring{$q$}{q}-expansion} \label{subsec:evaluating single term q expansion} We now focus on a single term in the $q$-expansion of the form \eqref{eq:single term}. Evaluating such a term is the only essentially new ingredient not present in the two-point function analysis of Section~\ref{sec:two-point function}. Let us change variables as follows \begin{equation} \xi_{21}=\alpha_\text{L}\ , \qquad \xi_{43}=\alpha_\text{R}\ , \qquad \xi_{31}=\frac{1}{s}(-m_\mathrm{D}+s+t_\text{L}+u \alpha_\text{R})\ . 
\end{equation} We also insert a factor of unity, \begin{equation} 1=\sqrt{\frac{-i s \tau}{2 t u}}\int \d t_\mathrm{R} \ q^{\frac{1}{4stu}(s t_\mathrm{R}-(s+2t)t_\L-2tu \alpha_\mathrm{R}+(m_\mathrm{D}+m_\U)t-st)^2}\ . \end{equation} This yields the following contribution from a single term \eqref{eq:single term}, \begin{align} i \, &\int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^5 \tau^2} \int \mathrm{d}\xi_1\, \mathrm{d}\xi_2\, \mathrm{d}\xi_3\ q^{-\frac{1}{2}\sum_{i>j} s_{ij} \xi_{ij}(\xi_{ij}-1)+m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})} \nonumber \\ &\qquad\times (1-\mathrm{e}^{\frac{2\pi i d n_{21}}{c}} q^{\xi_{21}})^{-s}(1- \mathrm{e}^{\frac{2\pi i d n_{43}}{c}}q^{\xi_{43}})^{-s} \nonumber \\ &=i \int_{\longrightarrow} \frac{\d \tau}{c^5 \tau^2} \sqrt{\frac{-i \tau}{2stu}}\int \d t_\L \, \d t_\mathrm{R}\, \d \alpha_\L\, \d \alpha_\mathrm{R}\ q^{-\alpha_\L(t_\L-m_\L)-\alpha_\mathrm{R} (t_\mathrm{R}-m_\mathrm{R})-P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})} \nonumber\\ &\qquad\times (1-\mathrm{e}^{\frac{2\pi i d n_{21}}{c}} q^{\alpha_\L})^{-s}(1- \mathrm{e}^{\frac{2\pi i d n_{43}}{c}}q^{\alpha_\mathrm{R}})^{-s} \ . \label{eq:single contribution before changing integration region} \end{align} Here, the polynomial $P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})$ is given by \begin{subequations} \begin{align} P_{m_\mathrm{D},m_\U}&=-\frac{\det \mathcal{G}_{p_1p_2p_3\ell}}{\det \mathcal{G}_{p_1p_2p_3}}\\ &=-\frac{1}{4stu}\Big[s^2(t_\L-t_\mathrm{R})^2+2 s t(m_\mathrm{D}+m_\U-s)(t_\L+t_\mathrm{R})-4 s t t_\L t_\mathrm{R}\nonumber\\ &\qquad -4st m_\mathrm{D} m_\U+t^2(m_\mathrm{D}-m_\U)^2-s t^2 (2m_\mathrm{D}+2m_\U-s)\Big]\ . \end{align} \end{subequations} Here, $\mathcal{G}_{p_1p_2p_3}$ and $\mathcal{G}_{p_1p_2p_3\ell}$ denote the Gram matrices of the respective momenta (where $\ell$ is the field-theoretic loop momentum). As explained in \cite{Eberhardt:2022zay}, this polynomial is expected from field-theory considerations where it plays the role of the kernel in the Baikov representation \cite{Baikov:1996iu} of the imaginary part of the amplitude. The image of the integration region for the $\xi_i$'s is \begin{equation} \mathcal{R}=\Big\{(\alpha_\L,\alpha_\mathrm{R},t_\L,t_\mathrm{R}) \Big| \begin{array}{l} \;-1\, (0) \le \alpha_\L,\, \alpha_\mathrm{R} \le 1\, , \\ t_\L{-}m_\mathrm{D} \le -u \alpha_\mathrm{R},\, t \alpha_\mathrm{R},\, s\alpha_\L-u \alpha_\mathrm{R},\, s \alpha_\L+t \alpha_\mathrm{R} \le s{+}t_\L{-}m_\mathrm{D} \end{array}\Big\} \ , \label{eq:region R} \end{equation} with $t_\mathrm{R}$ unrestricted. The lower limit on $\alpha_\L$ is $0$ instead of $-1$ when $n_{21}=0$, and the lower limit of $\alpha_\mathrm{R}$ is $0$ when $n_{43}=0$; we indicated these special cases with the values in parentheses. We claim that we can change the integration region to the following: \begin{equation} \tilde{\mathcal{R}}=\{(\alpha_\L,\alpha_\mathrm{R},t_\L,t_\mathrm{R})\, |\,\alpha_\L,\, \alpha_\mathrm{R}\ge -\infty \, (0) \, ,\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})\ge 0\} \label{eq:region Rtilde} \end{equation} without changing the value of the integral.
To see that this is possible, we need to check that the leading exponent \begin{multline} \mathrm{Trop}= -\alpha_\L(t_\L-m_\L)-\alpha_\mathrm{R}(t_\mathrm{R}-m_\mathrm{R})-P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})\\ -\min(\alpha_\L,0)s -\min(\alpha_\mathrm{R},0) s \label{eq:Trop R Rtilde regions} \end{multline} is everywhere positive on the symmetric difference $(\mathcal{R}^c \cap \tilde{\mathcal{R}}) \cup (\mathcal{R} \cap \tilde{\mathcal{R}}^c)$. For this statement to be true, one needs to use the fact that the range of $m_\L$ and $m_\mathrm{R}$ is bounded as in eq.~\eqref{eq:restriction m1 m3}. The statement is also only true if $s \ge (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2$, which is the range where the term contributes. In practice, we checked the correctness of this statement numerically. It would of course be nice to show this analytically, but the algebra involved unfortunately becomes complicated very quickly. After changing the integration region in \eqref{eq:single contribution before changing integration region}, we can integrate out $\alpha_\L$ and $\alpha_\mathrm{R}$ analytically. In both cases, we need to compute an integral of the form \begin{equation} \int_{-\infty\, (0)}^\infty \mathrm{d}\alpha\ q^{-\alpha t} (1-\mathrm{e}^{2\pi i \varphi}q^\alpha)^{-s}=\frac{i}{2\pi \tau}\int_0^{\infty\, (1)} \mathrm{d}x \ x^{-t-1} \left(1-\mathrm{e}^{2\pi i \varphi}x \right)^{-s} \label{eq:Gamma function type integral} \end{equation} for some phase $\mathrm{e}^{2\pi i \varphi}$. The integration boundaries in parentheses apply when $\varphi \in \ZZ$. Let us first assume that $\tau \in i \RR$ so that $q$ is real. Since varying $\tau$ does not change the branch cut structure of the integrand, the result depends analytically on $\tau$ and we can obtain the general result by analytic continuation. On the right-hand side, we changed variables to $x=q^{\alpha}$. The boundary $\alpha \to \infty$ gets mapped to $x=0$, while the lower boundary gets mapped to $\infty$ and $1$ in the two cases. In the case $\varphi\in \ZZ$, we end up with the integral \begin{equation} \eqref{eq:Gamma function type integral}=\frac{i}{2\pi \tau} \int_0^1 \mathrm{d}x\ x^{-t-1} \left(1-x \right)^{-s}=\frac{i}{2\pi \tau} \frac{\Gamma(1-s)\Gamma(-t)}{\Gamma(1-s-t)}\ . \end{equation} Now assume that $\varphi \not \in \ZZ$. We rotate the contour by defining \begin{equation} y=-\mathrm{e}^{2\pi i \varphi}x=\mathrm{e}^{2\pi i \st{\varphi}} x\ . \end{equation} When rotating the contour, the arc at infinity gives a vanishing contribution to the integral in $s$-channel kinematics and can be discarded. Thus we have \begin{equation} \eqref{eq:Gamma function type integral}=\frac{i\, \mathrm{e}^{2\pi i t \st{\varphi}}}{2\pi \tau} \int_{0}^{\infty} \mathrm{d}y \ y^{-t-1}(1+y)^{-s} \\ =\frac{i\, \mathrm{e}^{2\pi i t \st{\varphi}}}{2\pi \tau} \frac{\Gamma(-t)\Gamma(s+t)}{\Gamma(s)}\ . \end{equation} The choice of branch of $\mathrm{e}^{2\pi i t \st{\varphi}}$ is correct, as can be easily seen from the following two facts: (i) the branch can only jump when $\varphi \in \ZZ$, since then the branch point crosses the integration contour and (ii) this is the correct branch for $\varphi=\frac{1}{2}$, where we do not have to rotate the contour at all. As mentioned above, these results still hold for $\tau \not \in i \RR$ since by varying $\tau$ continuously, no branch points cross the integration contour.
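To give an idea of what the numerical verification of the region swap looks like in practice, the following minimal sketch (in Python) samples random points and counts violations of the positivity of \eqref{eq:Trop R Rtilde regions} on the symmetric difference of $\mathcal{R}$ and $\tilde{\mathcal{R}}$. The kinematic point, mass assignments and sampling boxes are arbitrary choices on our part, and the conventions for $P_{m_\mathrm{D},m_\U}$ and the two regions are transcribed from the displayed formulas above; it is an illustration of the kind of check we performed, not the check itself.
\begin{verbatim}
import random

# sample s-channel kinematics and internal masses (arbitrary choices);
# the threshold condition (sqrt(mD)+sqrt(mU))^2 <= s must hold
s, t = 5.3, -1.2
u = -s - t
mD, mU, mL, mR = 1, 1, 0, 2      # with 0 <= mL, mR <= mD + mU

def P(tL, tR):
    # Baikov-type kernel P_{mD,mU}(s,t,tL,tR) as displayed above
    return -(s**2*(tL - tR)**2 + 2*s*t*(mD + mU - s)*(tL + tR)
             - 4*s*t*tL*tR - 4*s*t*mD*mU + t**2*(mD - mU)**2
             - s*t**2*(2*mD + 2*mU - s))/(4*s*t*u)

def trop(aL, aR, tL, tR):
    # leading exponent of eq. (Trop R Rtilde regions)
    return (-aL*(tL - mL) - aR*(tR - mR) - P(tL, tR)
            - min(aL, 0)*s - min(aR, 0)*s)

def in_R(aL, aR, tL, tR):
    # region R in the generic case n21, n43 > 0
    vals = (-u*aR, t*aR, s*aL - u*aR, s*aL + t*aR)
    return (-1 <= aL <= 1 and -1 <= aR <= 1
            and all(tL - mD <= v <= s + tL - mD for v in vals))

def in_Rtilde(aL, aR, tL, tR):
    return P(tL, tR) >= 0

random.seed(0)
bad = 0
for _ in range(200000):
    pt = (random.uniform(-4, 4), random.uniform(-4, 4),
          random.uniform(-8, 8), random.uniform(-8, 8))
    if in_R(*pt) != in_Rtilde(*pt) and trop(*pt) <= 0:
        bad += 1
print("violations:", bad)        # expect 0
\end{verbatim}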
Coming back to \eqref{eq:single contribution before changing integration region}, we can now fully evaluate the contribution of a single term in the $q$-expansion: \begin{align} \eqref{eq:single contribution before changing integration region} &=i \int_{\longrightarrow} \frac{\d \tau}{c^5 \tau^2} \sqrt{\frac{- i \tau}{2stu}}\int_{P_{m_\mathrm{D},m_\U} > 0}\hspace{-1cm} \d t_\L \, \d t_\mathrm{R} \left(\frac{i}{2\pi \tau}\right)^2 q^{-P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})} \nonumber\\ &\qquad\times\! \left(\!\begin{cases}\mathrm{e}^{2\pi i (t_\L-m_\L) \st{\frac{d n_{21}}{c}}} &\;\text{if}\quad n_{21}>0 \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\;\text{if}\quad n_{21}=0 \end{cases}\right)\! \left(\!\begin{cases}\mathrm{e}^{2\pi i(t_\mathrm{R}-m_\mathrm{R}) \st{\frac{d n_{43}}{c}}} & \;\text{if}\quad n_{43}>0 \\ \frac{\sin(\pi(s+t_\mathrm{R}))}{\sin(\pi s)} &\;\text{if}\quad n_{43}=0 \end{cases}\right) \nonumber\\ &\qquad\times \frac{\Gamma(-t_\L+m_\L)\Gamma(-t_\mathrm{R}+m_\mathrm{R})\Gamma(s+t_\L-m_\L)\Gamma(s+t_\mathrm{R}-m_\mathrm{R})}{\Gamma(s)^2}\ . \end{align} In order to integrate out the $\tau$ variable, we can use the Hankel contour representation of the Gamma function, \begin{equation} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{(-i \tau)^{\frac{7}{2}}} \ \mathrm{e}^{-2\pi i \tau a}=-i\, (2\pi a)^{\frac{5}{2}}\int_{\uparrow} \mathrm{d}x \ (-x)^{-\frac{7}{2}}\ \mathrm{e}^{-x}=\frac{2\pi(2\pi a)^{\frac{5}{2}}}{\Gamma(\frac{7}{2})} \end{equation} that we already explained in Section~\ref{subsec:2-pt assembling}. The result is \begin{align} \eqref{eq:single contribution before changing integration region} &=-\frac{16\pi i}{15c^5 \sqrt{stu}} \int_{P_{m_\mathrm{D},m_\U} > 0}\hspace{-1cm} \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}\nonumber\\ &\qquad\times\!\left(\!\begin{cases}\mathrm{e}^{2\pi i (t_\L-m_\L) \st{\frac{d n_{21}}{c}}} &\;\text{if}\quad n_{21}>0 \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\;\text{if}\quad n_{21}=0 \end{cases}\right)\!\left(\! \begin{cases}\mathrm{e}^{2\pi i(t_\mathrm{R}-m_\mathrm{R}) \st{\frac{d n_{43}}{c}}} &\;\text{if}\quad n_{43}>0 \\ \frac{\sin(\pi(s+t_\mathrm{R}))}{\sin(\pi s)} &\;\text{if}\quad n_{43}=0 \end{cases}\right)\nonumber\\ &\qquad\times \frac{\Gamma(-t_\L+m_\L)\Gamma(-t_\mathrm{R}+m_\mathrm{R})\Gamma(s+t_\L-m_\L)\Gamma(s+t_\mathrm{R}-m_\mathrm{R})}{\Gamma(s)^2} \ .
\label{eq:contribution single q term} \end{align} \subsection{Assembling the result} We can now combine eq.~\eqref{eq:integral Aa/c n1,n2,n3} for the contribution of $A_{a/c}^{n_1,n_2,n_3}$ to the amplitude with our evaluation of a contribution of a single term in the $q$-expansion \eqref{eq:contribution single q term} to obtain \begin{align} A_{a/c}^{n_1,n_2,n_3}&=-\frac{16\pi i \, \mathrm{e}^{2\pi i \sum_{i>j} s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{md}{c}}}}{15c^5 \sqrt{stu}} \sum_{\begin{subarray}{c} m_\L,m_\mathrm{D},m_\mathrm{R},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} [q^{m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})}] \nonumber\\ &\quad \times \prod_{i>j}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d (\ell+n_{ij})}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}} \hspace{-0.6cm} \prod_{\ell=1+\delta_{ij,21}+\delta_{ij,43}}^\infty \hspace{-0.6cm} (1-\mathrm{e}^{-\frac{2\pi i d (\ell-n_{ij}-1)}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}} \nonumber\\ &\quad \times \int_{P_{m_\mathrm{D},m_\U} >0} \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}\nonumber\\ &\quad\times\!\left(\!\begin{cases}\mathrm{e}^{2\pi i (t_\L-m_\L) \st{\frac{d n_{21}}{c}}} &\,\text{if}\quad n_{21}>0 \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\,\text{if}\quad n_{21}=0 \end{cases}\right)\!\left(\! \begin{cases}\mathrm{e}^{2\pi i(t_\mathrm{R}-m_\mathrm{R}) \st{\frac{d n_{43}}{c}}} &\,\text{if}\quad n_{43}>0 \\ \frac{\sin(\pi(s+t_\mathrm{R}))}{\sin(\pi s)} &\,\text{if}\quad n_{43}=0 \end{cases}\right)\nonumber\\ &\quad\times \frac{\Gamma(-t_\L+m_\L)\Gamma(-t_\mathrm{R}+m_\mathrm{R})\Gamma(s+t_\L-m_\L)\Gamma(s+t_\mathrm{R}-m_\mathrm{R})}{\Gamma(s)^2}\ . \end{align} We can simplify this formula further. Let us look at the coefficients of the infinite product in more detail. We notice that the phase of a given term in the infinite product is entirely determined by the exponent and we can write \begin{align} &[q^{m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})}] \prod_{i>j}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d (\ell+n_{ij})}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}\nonumber\\ &\hspace{3.85cm}\times\prod_{\ell=1+\delta_{ij,21}+\delta_{ij,43}}^\infty \!\!\!\!\! (1-\mathrm{e}^{-\frac{2\pi i d (\ell-n_{ij}-1)}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}} \nonumber \\ &=\mathrm{e}^{\frac{2\pi i d}{c}(m_\L n_{21}+m_\mathrm{D} n_{32}+m_\mathrm{R} n_{43}-m_\U(n_{41}+1))} [q^{m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})}]\nonumber\\ &\qquad\times\prod_{i>j}\prod_{\ell=1}^\infty (1- q^{\ell-\xi_{ij}})^{-s_{ij}}\prod_{\ell=1+\delta_{ij,21}+\delta_{ij,43}}^\infty(1- q^{\ell+\xi_{ij}-1})^{-s_{ij}}\ . \end{align} Part of this phase combines with the phase that we obtained from evaluating the integrals over $\alpha_\L$ and $\alpha_\mathrm{R}$. We note that $\mathrm{e}^{\frac{2\pi i m_\L n_{21}d}{c}}\mathrm{e}^{-2\pi i m_\L\st{\frac{dn_{21}}{c}}}=(-1)^{m_\L}$. Setting \begin{equation} q_\L=q^{\xi_{21}},\qquad q_\mathrm{D}=q^{\xi_{32}},\qquad q_\mathrm{R}=q^{\xi_{43}},\qquad q_\U=q^{1-\xi_{41}} \end{equation} recovers the definition \eqref{eq:QmL,mD,mR,mU definition} for the coefficient polynomials $Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t)$.
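As a quick numerical sanity check, the phase identity $\mathrm{e}^{\frac{2\pi i m_\L n_{21}d}{c}}\mathrm{e}^{-2\pi i m_\L\st{\frac{dn_{21}}{c}}}=(-1)^{m_\L}$ used here can be verified with the following minimal Python sketch; it assumes the sawtooth convention $\st{x}=x-\lfloor x\rfloor-\tfrac{1}{2}$ for non-integer $x$ and $\st{x}=0$ for integer $x$, which is how we read \eqref{eq:st definition}.
\begin{verbatim}
import cmath, math

def st(x):
    # assumed sawtooth: x - floor(x) - 1/2 off the integers, 0 on them
    return 0.0 if x == int(x) else x - math.floor(x) - 0.5

for c in range(2, 9):
    for d in range(1, c):
        if math.gcd(d, c) != 1:
            continue
        for n in range(1, c):     # 0 < n21 < c, so d*n/c is not an integer
            for m in range(5):
                lhs = (cmath.exp(2j*math.pi*m*d*n/c)
                       * cmath.exp(-2j*math.pi*m*st(d*n/c)))
                assert abs(lhs - (-1)**m) < 1e-9
\end{verbatim}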
We thus have at this stage \begin{align} A_{a/c}^{n_1,n_2,n_3}&=-\frac{16\pi i \, \mathrm{e}^{2\pi i \sum_{i>j} s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{md}{c}}}}{15c^5 \sqrt{stu}} \sum_{\begin{subarray}{c} m_\L,m_\mathrm{D},m_\mathrm{R},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} (-1)^{m_\L+m_\mathrm{R}}\, Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t) \nonumber\\ &\qquad \times \mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n_{32}-m_\U(n_{41}+1))}\int_{P_{m_\mathrm{D},m_\U}> 0}\hspace{-1cm} \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}} \nonumber\\ &\qquad\times\left(\!\begin{cases}\mathrm{e}^{2\pi i t_\L \st{\frac{d n_{21}}{c}}} &\;\text{if}\quad n_{21}>0 \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\;\text{if}\quad n_{21}=0 \end{cases}\right)\left(\!\begin{cases}\mathrm{e}^{2\pi i t_\mathrm{R} \st{\frac{d n_{43}}{c}}} &\;\text{if}\quad n_{43}>0 \\ \frac{\sin(\pi(s+t_\mathrm{R}))}{\sin(\pi s)} &\;\text{if}\quad n_{43}=0 \end{cases}\right)\nonumber\\ &\qquad\times \frac{\Gamma(-t_\L+m_\L)\Gamma(-t_\mathrm{R}+m_\mathrm{R})\Gamma(s+t_\L-m_\L)\Gamma(s+t_\mathrm{R}-m_\mathrm{R})}{\Gamma(s)^2}\ . \end{align} We can now carry out the sum over $m_\L$ and $m_\mathrm{R}$. Recall the definition of the polynomials $Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})$ given in \eqref{eq:definition Qm2,m4}, which we reproduce here: \begin{multline} Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})\equiv \!\!\!\sum_{m_\L,m_\mathrm{R}=0}^{m_\mathrm{D}+m_\U} Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t) (-t_\L)_{m_\L}(-s-t_\L+m_\L+1)_{m_\mathrm{D}+m_\U-m_\L} \\ \times (-t_\mathrm{R})_{m_\mathrm{R}}(-s-t_\mathrm{R}+m_\mathrm{R}+1)_{m_\mathrm{D}+m_\U-m_\mathrm{R}}\ . \end{multline} We used the fact that the range of $m_\L$ and $m_\mathrm{R}$ is given by \eqref{eq:restriction m1 m3}.\footnote{This is almost what we called $Q_{m_\mathrm{D},m_\U}$ in our earlier paper \cite{Eberhardt:2022zay}. For $m_\mathrm{D}=m_\U$, the two are literally the same, while for $m_\mathrm{D} \neq m_\U$ it differs by a factor of $2$ from what we called $Q_{m_\mathrm{D},m_\U}$ before, since we include both $m_\mathrm{D} < m_\U$ and $m_\mathrm{D} > m_\U$ in the present sum.} We can hence write \begin{align} A_{a/c}^{n_1,n_2,n_3}&=-\frac{16\pi i \, \mathrm{e}^{2\pi i \sum_{i>j} s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{md}{c}}}}{15c^5 \sqrt{stu}} \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n_{32}-m_\U(n_{41}+1))}\nonumber\\ &\qquad \times \int_{P_{m_\mathrm{D},m_\U} > 0} \hspace{-1cm} \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}} \, Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})\nonumber\\ &\qquad\times\left(\begin{cases}\mathrm{e}^{2\pi i t_\L \st{\frac{d n_{21}}{c}}} &\;\text{if}\quad n_{21}>0 \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\;\text{if}\quad n_{21}=0 \end{cases}\right)\left(\begin{cases}\mathrm{e}^{2\pi i t_\mathrm{R} \st{\frac{d n_{43}}{c}}} &\;\text{if}\quad n_{43}>0 \\ \frac{\sin(\pi(s+t_\mathrm{R}))}{\sin(\pi s)} &\;\text{if}\quad n_{43}=0 \end{cases}\right)\nonumber\\ &\qquad\times \frac{\Gamma(-t_\L)\Gamma(-t_\mathrm{R})\Gamma(s+t_\L-m_\mathrm{D}-m_\U)\Gamma(s+t_\mathrm{R}-m_\mathrm{D}-m_\U)}{\Gamma(s)^2}\ . \label{eq:planar four point function Aa/c n1,n2,n3 final result} \end{align} \subsection{Renaming \texorpdfstring{$(n_1,n_2,n_3)$}{(n1,n2,n3)}} The final step is an aesthetic one.
We change our labelling of the contributions $A_{a/c}^{n_1,n_2,n_3}$ to make the final formula more symmetric. Let us introduce \begin{equation} n_\L=n_{21}\ , \qquad n_\mathrm{D}=n_{32}\ , \qquad n_\mathrm{R}=n_{43}\ , \qquad n_\U=n_1-1\ , \end{equation} so that $n_\L,\, n_\mathrm{D},\, n_\mathrm{R},\, n_\U \ge 0$ and their sum is constrained to equal $c-1$. In terms of these variables, the prefactor phase can be written as \begin{align} \sum_{i>j} s_{ij} \sum_{m=1}^{n_{ij}} \bigst{\frac{md}{c}}&=s \bigg[ \sum_{m=1}^{n_\L}+\sum_{m=1}^{n_\mathrm{R}}-\sum_{m=1}^{n_\mathrm{D}+n_\L}-\sum_{m=1}^{n_\mathrm{D}+n_\mathrm{R}} \bigg]\bigst{\frac{md}{c}} \nonumber\\ &\qquad+t \bigg[ \sum_{m=1}^{n_\mathrm{D}}+\sum_{m=1}^{c-1-n_\U}-\sum_{m=1}^{n_\mathrm{D}+n_\L}-\sum_{m=1}^{n_\mathrm{D}+n_\mathrm{R}} \bigg]\bigst{\frac{md}{c}} \\ &=-s \bigg[ \sum_{m=n_\L+1}^{n_\L+n_\mathrm{D}}+\sum_{m=n_\mathrm{R}+1}^{n_\mathrm{R}+n_\mathrm{D}}\bigg]\bigst{\frac{md}{c}}\nonumber\\ &\qquad-t \bigg[ \sum_{m=n_\mathrm{D}+1}^{n_\L+n_\mathrm{D}}-\sum_{m=n_\mathrm{D}+n_\mathrm{R}+1}^{c-1-n_\U}\bigg]\bigst{\frac{md}{c}}\ . \end{align} By using periodicity in $m$ mod $c$, we can rewrite the last term as follows: \begin{align} \sum_{m=n_\mathrm{D}+n_\mathrm{R}+1}^{c-1-n_\U} \bigst{\frac{md}{c}}=\sum_{m=n_\mathrm{D}+n_\mathrm{R}+1-c}^{-1-n_\U} \bigst{\frac{md}{c}}=-\sum_{m=n_\U+1}^{n_\U+n_\L} \bigst{\frac{md}{c}}\ , \end{align} where we shifted the summation domain by $c$ steps and then renamed $m \to -m$. Thus, \begin{align} \sum_{i>j} s_{ij} \sum_{m=1}^{n_{ij}} \bigst{\frac{md}{c}} &=-s \sum_{a=\L,\mathrm{R}} \sum_{m=n_a+1}^{n_a+n_\mathrm{D}}\bigst{\frac{md}{c}}-t \sum_{a=\mathrm{D},\U} \sum_{m=n_a+1}^{n_a+n_\L}\bigst{\frac{md}{c}}\ . \end{align} This still looks slightly asymmetric. However, we claim that \begin{equation} \sum_{a=\L,\mathrm{R}} \sum_{m=n_a+1}^{n_a+n_\mathrm{D}}\bigst{\frac{md}{c}}=\sum_{a=\L,\mathrm{R}} \sum_{m=n_a+1}^{n_a+n_\U}\bigst{\frac{md}{c}} \end{equation} and similarly the second term is invariant under $n_\L \to n_\mathrm{R}$, which makes it obvious that the expression in fact has the required symmetries. To prove this, we only need to use that $\st{x}$ is an odd function. We have \begin{align} &\sum_{a=\L,\mathrm{R}} \sum_{m=n_a+1}^{n_a+n_\mathrm{D}}\bigst{\frac{md}{c}}-\sum_{a=\L,\mathrm{R}} \sum_{m=n_a+1}^{n_a+n_\U}\bigst{\frac{md}{c}} \nonumber\\ &=\bigg[\sum_{m=n_\L+1}^{n_\L+n_\mathrm{D}} -\sum_{m=-n_\mathrm{R}-n_\mathrm{D}}^{-n_\mathrm{R}-1}-\sum_{m=n_\L+1}^{n_\L+n_\U}+\sum_{m=-n_\mathrm{R}-n_\U}^{-n_\mathrm{R}-1}\bigg] \bigst{\frac{md}{c}} \\ &=\bigg[\sum_{m=n_\L+1}^{n_\L+n_\mathrm{D}} -\sum_{m=n_\L+n_\U+1}^{n_\L+n_\mathrm{D}+n_\U}-\sum_{m=n_\L+1}^{n_\L+n_\U}+\sum_{m=n_\L+n_\mathrm{D}+1}^{n_\L+n_\mathrm{D}+n_\U}\bigg] \bigst{\frac{md}{c}}=0\ . \end{align} Here we sent $m \to -m$ in the second and fourth term and then shifted summation variables by $c$. The first and last term, as well as the second and third term then join up and the terms cancel. Thus we can write the phases fully symmetrically as follows, \begin{equation} \sum_{i>j} s_{ij} \sum_{m=1}^{n_{ij}} \bigst{\frac{md}{c}}=-\frac{1}{2}\sum_{\begin{subarray}{c} a=\L,\mathrm{R}\\ b=\mathrm{D},\U \end{subarray}} \bigg[s \sum_{m=n_a+1}^{n_a+n_b}+\, t \sum_{m=n_b+1}^{n_a+n_b}\bigg] \bigst{\frac{md}{c}}\ . \end{equation} Inserting this into \eqref{eq:planar four point function Aa/c n1,n2,n3 final result} then finally yields \eqref{eq:planar four-point function s-channel}.
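The symmetry property just proven is also easy to confirm by brute force. The following Python sketch checks it for all admissible quadruples $(n_\L,n_\mathrm{D},n_\mathrm{R},n_\U)$ at small $c$; as before, the sawtooth convention $\st{x}=x-\lfloor x\rfloor-\tfrac{1}{2}$ off the integers (and $0$ on them) is our assumption.
\begin{verbatim}
import math
from itertools import product

def st(x):
    # assumed sawtooth: x - floor(x) - 1/2 off the integers, 0 on them
    return 0.0 if x == int(x) else x - math.floor(x) - 0.5

def S(n, k, d, c):
    # sum_{m=n+1}^{n+k} st(m d / c)
    return sum(st(m*d/c) for m in range(n + 1, n + k + 1))

for c in range(2, 9):
    for d in range(1, c):
        if math.gcd(d, c) != 1:
            continue
        for nL, nD, nR in product(range(c), repeat=3):
            nU = c - 1 - nL - nD - nR
            if nU < 0:
                continue
            lhs = S(nL, nD, d, c) + S(nR, nD, d, c)
            rhs = S(nL, nU, d, c) + S(nR, nU, d, c)
            assert abs(lhs - rhs) < 1e-9
\end{verbatim}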
Equation~\eqref{eq:planar four-point function s-channel} is our final formula for the planar open-string amplitude in the $s$-channel. \subsection{Results in the \texorpdfstring{$u$}{u}-channel} \label{subsec:planar u-channel} We now consider the $u$-channel contribution. As we shall see, we almost get it for free from the $s$-channel contribution. There are $\frac{1}{6}(c-1)c(c+1)$ regions in $(z_1,z_2,z_3)$-parameter space for which $\mathrm{Trop}<0$. We will denote them by $\Gamma_{n_1,n_2,n_3}$ where $1 \le n_1 \le n_2 < n_3 \le c$. They are specified by the inequalities \begin{subequations} \label{eq:bounds regions u-channel} \begin{align} \frac{n_{ij}}{c}&\le z_{ij}\le \frac{n_{ij}+1}{c}\ , \qquad ij \in \{21,\, 41,\, 43\}\ , \\ \frac{n_{32}-1}{c}&\le z_{32}\le \frac{n_{32}}{c}\ , \\ \frac{n_{ij}-1}{c}&\le z_{ij}\le \frac{n_{ij}+1}{c}\ , \qquad ij \in \{31,\, 42\}\ . \end{align} \end{subequations} Here we defined again $n_4\equiv c$. Let us set $z_i=\frac{n_i+\xi_i}{c}$ as before. It follows that the contribution $A_{a/c}^{n_1,n_2,n_3}$ to the $u$-channel amplitude equals \begin{align} A_{a/c}^{n_1,n_2,n_3}&=i \, \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^5 \tau^2} \int \mathrm{d}\xi_1\, \mathrm{d}\xi_2\, \mathrm{d}\xi_3\ \prod_{i>j} q^{-\frac{1}{2}s_{ij} \xi_{ij}(\xi_{ij}-1)} \mathrm{e}^{2\pi i s_{ij} \sum_{m=1}^{n_{ij}-\delta_{ij,32}} \st{\frac{md}{c}}} \nonumber\\ &\qquad\times \prod_{\begin{subarray}{c} i>j\\ ij \ne 32 \end{subarray}}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d (\ell+n_{ij})}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}(1-\mathrm{e}^{-\frac{2\pi i d (\ell-n_{ij}-1)}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}} \nonumber\\ &\qquad\times \prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d (\ell+n_{32}-1)}{c}} q^{\ell-\xi_{32}-1})^{-s_{32}}(1-\mathrm{e}^{-\frac{2\pi i d (\ell-n_{32})}{c}} q^{\ell+\xi_{32}})^{-s_{32}}\ . \label{eq:planar u channel before swap} \end{align} This formula is straightforward to derive. To deal with the factor of the form $\vartheta_1(c \tau z_{32},\tau-\frac{d}{c})=\vartheta_1(\tau(n_{32}-1+\xi_{32}+1), \tau - \frac{d}{c})$, we use eq.~\eqref{eq:log theta1 branch} with $n=n_{32}-1$ and then insert $\xi=\xi_{32}+1$, which is positive. This is almost identical to \eqref{eq:integral Aa/c n1,n2,n3}, except for a different integration region over the $\xi_i$'s and a slightly different phase. We also treated the factor with $ij=32$ separately in order to ensure that we land on the correct branch. We can relate this to the $s$-channel contributions as follows. Consider swapping $\xi_2$ with $\xi_3$, $n_2$ with $n_3$ and $s$ with $u$ (i.e.\ we swap all the labels 2 with the labels 3). This turns $\xi_{32} \to -\xi_{32}$, $n_{32} \to -n_{32}$, while the terms in the second line get permuted. We can then combine them with the terms in the third line to obtain \begin{align} A_{a/c}^{n_1,n_3,n_2}\Big|_{2 \leftrightarrow 3}&=i \, \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^5 \tau^2} \int \mathrm{d}\xi_1\, \mathrm{d}\xi_2\, \mathrm{d}\xi_3\ \prod_{i>j} q^{-\frac{1}{2}s_{ij} \xi_{ij}(\xi_{ij}-1)} \nonumber\\ &\qquad\times \mathrm{e}^{2\pi i \sum_{i>j,\, ij \ne 32}s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{md}{c}}+2\pi i t \sum_{m=1}^{-n_{32}-1}\st{\frac{md}{c}} } \nonumber\\ &\qquad\times \prod_{i>j}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{2\pi i d (\ell+n_{ij})}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}(1-\mathrm{e}^{-\frac{2\pi i d (\ell-n_{ij}-1)}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}}\ .
\end{align} In terms of the new $\xi_i$'s, the integration region is bounded by \begin{equation} -1 \le \xi_{21},\, \xi_{43} \le 1 \ , \qquad 0 \le \xi_{31},\, \xi_{41},\, \xi_{32},\, \xi_{42} \le 1\ , \end{equation} which coincides with the integration region in the $s$-channel. Up to the slightly different phase, this directly coincides with the $s$-channel $A_{a/c}^{n_1,n_2,n_3}$. Note that $n_{32}<0$, so a modification of this particular factor in the phase is expected. Note also that $n_{21}>0$ and $n_{43}>0$ on the right-hand side and thus only one of the cases in the $s$-channel formula appears in this case. Let us also express this result in terms of $n_\L=n_{31}$, $n_\mathrm{D}=-n_{32}$, $n_\mathrm{R}=n_{42}$ and $n_\U=c-1-n_{41}$, so that $n_\L+n_\mathrm{D}+n_\mathrm{R}+n_\U=c-1$. Of course, we also have the following inequalities: \begin{equation} n_\L>0\ ,\quad n_\mathrm{D}<0\ ,\quad n_\mathrm{R}>0\ , \quad n_\U \ge 0\ , \quad n_\L+n_\mathrm{D} \ge 0\ , \quad n_\mathrm{R}+n_\mathrm{D} \ge 0\ . \end{equation} Taking \eqref{eq:planar four-point function s-channel} and exchanging all labels finally gives \begin{align} A_{a/c}^{n_\L,n_\mathrm{D},n_\mathrm{R},n_\U}&=-\frac{16\pi i \, \mathrm{e}^{-2\pi is \sum_{a=\L,\mathrm{R}}\sum_{m=n_a+n_\mathrm{D}+1}^{n_a} \st{\frac{md}{c}}+2\pi i t \big[\sum_{m=n_\L+1}^{n_\L+n_\mathrm{R}+n_\mathrm{D}}-\sum_{m=-n_\mathrm{D}}^{n_\mathrm{R}}\big] \st{\frac{md}{c}}}}{15c^5 \sqrt{stu}} \nonumber\\ &\qquad \times \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le u \end{subarray}}\hspace{-.5cm}\mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n_\mathrm{D}+m_\U n_\U)}\int_{P_{m_\mathrm{D},m_\U}(s,u,t_\L,t_\mathrm{R})\ge 0} \hspace{-2cm} \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,u,t_\L,t_\mathrm{R})^{\frac{5}{2}} \nonumber\\ &\qquad\times Q_{m_\mathrm{D},m_\U}(s,u,t_\L,t_\mathrm{R})\frac{\Gamma(-t_\L)\Gamma(u+t_\L-m_\mathrm{D}-m_\U)}{\Gamma(u)}\, \mathrm{e}^{2\pi i t_\L \st{\frac{d n_\L}{c}}}\nonumber\\ &\qquad\times (\L \leftrightarrow \mathrm{R})\ . \label{eq:planar four point function u-channel Rademacher} \end{align} \section{Four-point non-planar amplitude \texorpdfstring{$A^{\text{n-p}}(s,t)$}{Anp(s,t)}}\label{sec:non-planar} Let us derive the analogous formula for the non-planar annulus amplitude $A^{\text{n-p}}(s,t)$. Recall from Section~\ref{subsec:basic amplitudes} that the amplitude of the non-planar annulus was given by \begin{equation} A^{\text{n-p}} = \frac{-i}{32(1-\mathrm{e}^{-\pi i s})}\int_{\Gamma} \d \tau\, \d z_1 \, \d z_2 \, \d z_3 \, \prod_{j=1}^2\prod_{i=3}^4 \vartheta_4(z_{ij},\tau)^{-s_{ij}}\big(\vartheta_1(z_{21},\tau)\vartheta_1(z_{43},\tau)\big)^{-s}\ . \label{eq:non-planar integrand} \end{equation} The integration contour $\Gamma$ in the $\tau$-plane is the one described in Figure~\ref{fig:tau contours} with the endpoints at $\tau=0$ and $\tau=2$. The prefactor $(1-\mathrm{e}^{-\pi i s})^{-1}$ came from the choice of the contour, but we will suppress it from now on for readability by introducing $\tilde{A}^{\text{n-p}} = (1-\mathrm{e}^{-\pi i s}) A^{\text{n-p}}$. The integration region in the $z_i$'s can be described by the inequalities \begin{equation} 0 \le z_{21},\, z_{43} \le 1\ , \quad 0 \le z_{42}\le 1\, . \end{equation} Note also that the integrand is periodic in $(z_1,z_2) \to (z_1+1,z_2+1)$, which corresponds to taking the punctures around the inner boundary.
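Both this periodicity (which rests on the $1$-periodicity of $\vartheta_4$) and the half-period identity relating $\vartheta_4$ to $\vartheta_1$, which we derive and use in the next subsection, follow immediately from the series representations and are easy to confirm numerically. A minimal Python sketch (truncation order and sample points are arbitrary choices):
\begin{verbatim}
import cmath, math

def theta1(z, tau, N=25):
    # theta_1(z,tau) = -i sum_n (-1)^n e^{pi i (n+1/2)^2 tau + 2 pi i (n+1/2) z}
    return -1j*sum((-1)**n*cmath.exp(1j*math.pi*(n + 0.5)**2*tau
                                     + 2j*math.pi*(n + 0.5)*z)
                   for n in range(-N, N + 1))

def theta4(z, tau, N=25):
    # theta_4(z,tau) = sum_n (-1)^n e^{pi i n^2 tau + 2 pi i n z}
    return sum((-1)**n*cmath.exp(1j*math.pi*n**2*tau + 2j*math.pi*n*z)
               for n in range(-N, N + 1))

tau = 0.13 + 0.9j
for z in (0.17 + 0.05j, 0.41 - 0.02j):
    # 1-periodicity of theta_4, behind the (z1,z2) -> (z1+1,z2+1) symmetry
    assert abs(theta4(z + 1, tau) - theta4(z, tau)) < 1e-10
    # half-period relation used in the next subsection:
    # theta_4(z + tau/2, tau) = i e^{-pi i tau/4 - pi i z} theta_1(z, tau)
    lhs = theta4(z + tau/2, tau)
    rhs = 1j*cmath.exp(-1j*math.pi*tau/4 - 1j*math.pi*z)*theta1(z, tau)
    assert abs(lhs - rhs) < 1e-10
\end{verbatim}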
We again want to compute the contribution from the integral around the circles $C_{a/c}$, where now $0<\frac{a}{c} \le 2$. The treatment of the correct branches is more involved than in the planar case, and the generalization of the computation is not entirely straightforward. \subsection{Shifting \texorpdfstring{$z_i$}{zi}} Our strategy will be to recycle the computation of the planar annulus as much as possible. It is thus advantageous to relate $\vartheta_4$ to $\vartheta_1$. Let us set \begin{equation} z_1=y_1,\qquad z_2=y_2,\qquad z_3=y_3+\frac{\tau}{2},\qquad z_4=y_4+\frac{\tau}{2}\, . \end{equation} We do not fix $z_4=1$ for the moment. Then some of the arguments of the theta functions in \eqref{eq:non-planar integrand} become $\vartheta_4(y_{31}+\frac{\tau}{2},\tau)$ etc. Let us compute \begin{subequations} \begin{align} \vartheta_4(z,\tau)=\vartheta_4(y+\tfrac{\tau}{2},\tau)&=\sum_{n \in \ZZ} (-1)^n \mathrm{e}^{2\pi i n(y+\frac{\tau}{2})+\pi i n^2 \tau }\\ &=\mathrm{e}^{-\frac{\pi i \tau}{4}-\pi i y} \sum_{n \in \ZZ} (-1)^n \mathrm{e}^{2\pi i (n+\frac{1}{2}) y+\pi i (n+\frac{1}{2})^2 \tau} \\ &= i \mathrm{e}^{-\frac{\pi i \tau}{4}-\pi i y} \vartheta_1(y,\tau)\ . \end{align} \end{subequations} We can thus write \begin{equation} \tilde{A}^{\text{n-p}} =\frac{-i}{32}\int_{\Gamma} \d \tau\, \d y_1 \, \d y_2 \, \d y_3 \, \mathrm{e}^{\pi i s-\frac{\pi i s\tau}{2}-\pi i s(y_3+y_4-y_1-y_2)}\prod_{i>j} \vartheta_1(y_{ij},\tau)^{-s_{ij}} \ , \label{eq:non-planar annulus shifted y} \end{equation} where the integration contour in $y_i$ follows from the one in $z_i$ by shifting. The natural way to choose the branch in this expression is to take $y_i$ close to zero with $y_{ij}>0$; then the integrand simplifies to \begin{equation} \mathrm{e}^{\pi i s-\frac{\pi i s \tau}{2}} \prod_{i>j} y_{ij}^{-s_{ij}} \label{eq:theta1 choice of branch cut}\ . \end{equation} Since all $y_{ij}$'s are positive, the branch is the canonical one. We have to make sure that this choice follows from the original choice of branch in \eqref{eq:non-planar integrand} by analytic continuation, which was specified by taking all $z_i$'s close to zero by similar reasoning. It suffices to do this for one of the $\vartheta_4$'s. We can also assume that $\tau$ is purely imaginary and very large since the overall phase depends continuously on $\tau$. Consider \begin{equation} \log \vartheta_1(z+\tfrac{\sigma\tau}{2},\tau) \end{equation} for $\sigma \in [0,1]$ and take $z \in \RR$ small but positive. For $\sigma=0$, the choice of branch is clear, since $\vartheta_1(z,\tau)\to z \vartheta_1'(0,\tau) \sim 2\pi z \mathrm{e}^{\frac{\pi i \tau}{4}}$ as $\Im \tau \to \infty$. It corresponds to the choice of branch in \eqref{eq:theta1 choice of branch cut}. We want to follow the branch smoothly from $\sigma=0$ to $\sigma=1$. For large $\Im \tau$ we have approximately \begin{equation} \log \vartheta_1(z+\tfrac{\sigma\tau}{2},\tau)\sim \frac{\pi i \tau }{4}+\log \left(-i\, \mathrm{e}^{\pi i (z+\frac{\sigma\tau}{2})}+i\, \mathrm{e}^{-\pi i (z+\frac{\sigma\tau}{2})}\right)\ . \end{equation} We took out $\frac{\pi i\tau}{4}$, since this term is real. For $\sigma=0$, both terms are equally relevant and as discussed above, we choose the principal branch. For $\sigma=1$, the second term dominates and is essentially purely imaginary.
However, we never cross the negative real axis (since the first term is always smaller than the second term in magnitude) and thus the principal branch of the logarithm gives the correct answer everywhere. In particular for $\sigma=1$, we get \begin{equation} \log \vartheta_1(z+\tfrac{\tau}{2},\tau) \sim -\frac{\pi i \tau}{4}+\frac{\pi i}{2}-\pi i z \sim -\frac{\pi i \tau}{4}+\frac{\pi i}{2}-\pi i z+\log \vartheta_4(z,\tau)\ . \end{equation} This shows that the branch in \eqref{eq:non-planar annulus shifted y} is the one that we get from the original expression \eqref{eq:non-planar integrand} by following the straight line from $z_{3,4} \sim 0$ to $z_{3,4} \sim \frac{\tau}{2}$. \subsection{Modular transformation} The next step is as before to set \begin{equation} \tau=\frac{a \tau'+b}{c \tau'+d}\ . \end{equation} Then $\tau' \in i+\RR$ gets mapped to the circle $C_{a/c}$. Hence, for the contribution from a single circle we get \begin{multline} \tilde{A}_{a/c}=\frac{i}{32} \int_{\longrightarrow} \frac{\mathrm{d}\tau'}{(c \tau'+d)^2} \int \mathrm{d}y_1\, \mathrm{d}y_2\, \mathrm{d}y_3\ \mathrm{e}^{\pi i s-\frac{\pi i s(a \tau'+b)}{2(c \tau'+d)}-\pi i s(y_3+y_4-y_1-y_2)} \\ \times \prod_{i>j} \mathrm{e}^{-\pi i c(c \tau'+d) s_{ij}y_{ij}^2} \vartheta_1((c \tau'+d)y_{ij},\tau')^{-s_{ij}}\ . \end{multline} We are guaranteed that this choice of branch is correct for $y_{ij} \to 0$, where the theta functions drastically simplify. Everywhere else, the branch is determined by analytic continuation. \subsection{Further shifts of \texorpdfstring{$z_i$}{zi}} To proceed further, we now want to re-express the result in terms of the original $z_i$. For this it is convenient to shift the $z_i$'s further. Set \begin{equation} z_{1,2}=\zeta_{1,2}\ , \qquad z_{3,4}=\zeta_{3,4}+\frac{a}{2c}\ . \end{equation} Contrary to the shift that led to $y_i$, this is a real shift. This does not change the integration region and we can still integrate over \begin{equation} 0 \le \zeta_{21},\, \zeta_{43} \le 1\ , \qquad 0 \le \zeta_{42} \le 1 \ .\label{eq:zeta integration region} \end{equation} We now have \begin{equation} y_{31}=z_{31}-\frac{a \tau'+b}{2(c \tau'+d)}=\zeta_{31}-\frac{a \tau'+b}{2(c \tau'+d)}+\frac{a}{2c}=\zeta_{31}+\frac{1}{2c(c \tau'+d)}\ . \end{equation} This is advantageous because $y_{31}$ and $\zeta_{31}$ are much closer together now. Consider now one of the factors for $ij \in \{31,\, 32,\, 41,\, 42\}$. For the purpose of discussing the branch, we look at \begin{equation} f(\zeta)= \log \vartheta_1\big((c \tau'+d) \zeta+\tfrac{1}{2c},\tau'\big)\ . \end{equation} We recall that the branch of the expression is determined from the behaviour near $y=0$ with $y>0$ small, i.e.\ $\zeta=-\frac{1}{2c(c \tau'+d)}+y$ with $y>0$ small. On the other hand, we can naturally determine a branch by taking $\zeta=0$, $\tau'$ large and purely imaginary. Since $0< \frac{1}{2c} \le \tfrac{1}{2}$, we have \begin{equation} \log \vartheta_1\big( \tfrac{1}{2c},\tau' \big)=\tfrac{\pi i \tau'}{4}+ \log \left[ 2\sin\big(\tfrac{\pi}{2c}\big) \right]\ , \end{equation} which also has a natural branch since $\sin\big(\tfrac{\pi}{2c}\big)> 0$. These two choices of branches are easily seen to be equivalent. Indeed, set $\zeta=-\frac{\sigma }{2c(c \tau'+d)}+y$ and follow the branch from $\sigma=0$ to $\sigma=1$. We can again take $\tau'$ purely imaginary and very large.
Then \begin{equation} \log \vartheta_1\big((c \tau'+d) y+ \tfrac{1-\sigma}{2c},\tau' \big)\sim \frac{\pi i \tau'}{4}+\log \left[ 2 \sin\pi \big((c \tau'+d) y+ \tfrac{1-\sigma}{2c}\big) \right]\ . \end{equation} Since the path $2 \sin\pi \big((c \tau'+d) y+ \tfrac{1-\sigma}{2c}\big)$ never crosses the negative real axis, we can just take the principal branch of the logarithm everywhere. We thus conclude that our alternative determination of the branch in terms of $\zeta$ is equally valid and will be more convenient in the following. To summarize our analytic continuations so far, consider Figure~\ref{fig:shifts of z}. It shows the paths of the two analytic continuations we have performed. First, we analytically continued the integrand from $z_3\sim 0$ to $z_3 \sim \frac{\tau}{2}$, then we analytically continued back to the real axis. In the picture, $\tau$ is close to $\frac{a}{c}$, so that $\Im \tau'$ is large. Clearly, this path of analytic continuation is equivalent to the horizontal path, since we do not surround any branch points of the integrand (represented by crosses in the picture). This confirms that we are still on the correct branch. The same comment applies to $z_4$. \begin{figure} \centering \begin{tikzpicture} \fill (0,0) circle (.06) node[below] {0}; \fill (5,0) circle (.06) node[below] {1}; \fill (6,1) circle (.06) node[above] {$\tau$}; \fill (1,1) circle (.06) node[above] {$\tau-1$}; \draw[very thick] (2.94,.44) to (3.06,.56); \draw[very thick] (2.94,.56) to (3.06,.44) node[above] {$\tfrac{\tau}{2}$}; \draw[very thick, Maroon,->] (.2,.03) to (3.3,.47); \draw[very thick, Maroon,->] (3.35,.45) to (3,0); \draw[thick] (7,1.4) to (7,1) to (7.4,1); \node at (7.24,1.24) {$z_3$}; \end{tikzpicture} \caption{The two analytic continuations in $z_3$.} \label{fig:shifts of z} \end{figure} We can plug this change of variables back into \eqref{eq:non-planar annulus shifted y} to obtain the following expression: \begin{multline} \tilde{A}_{a/c}=\frac{i}{32} \int_{\longrightarrow} \frac{\mathrm{d}\tau'}{(c \tau'+d)^2} \int_0^1 \mathrm{d}\zeta_1 \, \mathrm{d} \zeta_2\, \mathrm{d}\zeta_3 \ \mathrm{e}^{-\frac{\pi i s a}{2c}+\pi i s} \\ \times \prod_{i>j} \mathrm{e}^{-\pi i c(c \tau'+d) s_{ij}\zeta_{ij}^2} \vartheta_1\big((c \tau'+d)\zeta_{ij}+\tfrac{\delta(i,j)}{2c},\tau'\big)^{-s_{ij}}\ . \end{multline} Here and in the following, we used the short-hand notation \begin{equation} \delta(i,j)=\begin{cases} 1\ , \quad & ij=31,\, 32,\, 41\text{ or }42\ , \\ 0\ , \quad & ij=21,\, 43\ . \end{cases} \end{equation} Finally, we shift $\tau' \to \tau-\frac{d}{c}$ and set $q=\mathrm{e}^{2\pi i \tau}$, yielding \begin{multline} \tilde{A}_{a/c}=\frac{i}{32} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^2 \tau^2} \int_0^1 \mathrm{d}\zeta_1 \, \mathrm{d} \zeta_2\, \mathrm{d}\zeta_3 \ \mathrm{e}^{-\frac{\pi i s a}{2c}+\pi i s} \\ \times \prod_{i>j} q^{-\frac{1}{2}s_{ij} c^2\zeta_{ij}^2} \vartheta_1\big(c \tau \zeta_{ij}+\tfrac{\delta(i,j)}{2c},\tau-\tfrac{d}{c}\big)^{-s_{ij}}\ . \label{eq:Aa/c non planar annulus} \end{multline} We recall that the branch in this expression is chosen by setting all $\zeta_{ij} \sim 0$ with $\zeta_{21}>0$, $\zeta_{43}>0$, $\Re \tau=\frac{d}{c}$ and $\Im \tau$ large and evaluating the powers on the principal branch. Everywhere else, the branch is defined by analytic continuation. Notice that at this point the expression is quite similar to the planar one; it differs only by an overall phase depending on $a$ and $c$, as well as by the additional shifts in the theta functions.
These will also only lead to phases. Finally, the integration region is different, see eq.~\eqref{eq:zeta integration region}. At this point, it is convenient to shift the integration contour in $\tau$ to large imaginary values of $\tau$ so that we can again take advantage of the tropicalization of the integration. \subsection{Subdividing the \texorpdfstring{$\zeta_i$}{zetai}-integration} We now continue the analysis for the $s$-channel. Since the integrand is the same as for the planar annulus up to phases, the contributions where $\mathrm{Trop}<0$ happen in the same region. In particular \eqref{eq:region Rn1,n2,n3} still holds when we replace $z_i \to \zeta_i$. The integration region is however larger and thus the integers $(n_1,n_2,n_3)$ can take more values. From the last inequality in \eqref{eq:region Rn1,n2,n3}, we see that $0 \le n_3 \le c$. From the first inequality in \eqref{eq:region Rn1,n2,n3}, we see that $0 \le n_{21} \le c$. To cover the whole integration region, we can let $0 \le n_{32} \le c-1$. The upper and lower bounds here could be shifted, since the integrand is periodic. It is only important that we cover $c$ different values. We hence have overall \begin{equation} 0 \le n_{21} \le c\ , \qquad 0 \le n_{32} \le c-1\ , \qquad 0 \le n_{43} \le c\ . \end{equation} This defines the regions $\Gamma_{n_1,n_2,n_3}$. There are $c(c+1)^2$ such regions. For each of the regions, we set \begin{equation} \zeta_i=\frac{n_i+\xi_i}{c}\ . \end{equation} The integration region for $0<n_{21}<c$ and $0<n_{43}<c$ is given by \begin{equation} -1\le \xi_{21},\, \xi_{43}\le 1\ , \qquad 0\le \xi_{31},\, \xi_{32},\, \xi_{41},\, \xi_{42}\le 1\ . \end{equation} The cases $n_{21}=0$, $n_{21}=c$, $n_{43}=0$ or $n_{43}=c$ are special since then the regions for $\xi_{21}$ and $\xi_{43}$ are modified. For $n_{21}=0$, we have $0\le \xi_{21}\le 1$, while for $n_{21}=c$, we have $-1\le \xi_{21}\le 0$ and similarly for $n_{43}$. \subsection{More branches of \texorpdfstring{$\log \vartheta_1$}{log theta1}} We next analyze the correct branch of the integrand in these regions. As before (see eq.~\eqref{eq:log theta1 branch}), we have \begin{multline} \log \vartheta_1((n+\xi) \tau,\tau-\tfrac{d}{c})=-\pi i\tau (n+\xi)^2+\pi i\tau (\xi-\tfrac{1}{2})^2+\tfrac{\pi i}{2}-\tfrac{\pi i d}{4c}-2\pi i \sum_{m=1}^n \st{\tfrac{md}{c}}\\ +\log \left[ \prod_{\ell=1}^\infty\big(1-\mathrm{e}^{-\frac{2\pi i d \ell}{c}}q^\ell\big)\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell+n)}{c}} q^{\ell-\xi}\big)\big(1-\mathrm{e}^{-\frac{2\pi i d(\ell-n-1)}{c}}q^{\ell+\xi-1}\big) \right]\ , \end{multline} where the branch on the right-hand side is the correct one that comes from smoothly following the branch between $\xi=-n$ and $0<\xi<1$. We similarly want to relate the branches of $\log \vartheta_1((n+\xi)\tau+\frac{1}{2c},\tau-\frac{d}{c})$ for different values of $n$. We claim \begin{multline} \log \vartheta_1((n+\xi) \tau+\tfrac{1}{2c},\tau-\tfrac{d}{c})=-\pi i\tau (n+\xi)^2+\pi i \tau (\xi-\tfrac{1}{2})^2+\tfrac{\pi i}{2}-\tfrac{\pi i (d+2)}{4c}-2\pi i \sum_{m=1}^n \st{\tfrac{2md+1}{2c}}\\ +\log \left[ \prod_{\ell=1}^\infty\big(1-\mathrm{e}^{-\frac{2\pi i d \ell}{c}}q^\ell\big)\big(1-\mathrm{e}^{-\frac{\pi i (2d(\ell+n)+1)}{c}} q^{\ell-\xi}\big)\big(1-\mathrm{e}^{-\frac{\pi i (2d(\ell-n-1)-1)}{c}}q^{\ell+\xi-1}\big) \right]\ . \label{eq:log theta1 1/2c shifted branch} \end{multline} The argument is by induction over $n$ and identical to the one given in Section~\ref{subsec:branches log theta1}.
Thus we will not repeat the argument here. Taking \eqref{eq:log theta1 branch} and \eqref{eq:log theta1 1/2c shifted branch} now gives the following formula for the contribution of $\tilde{A}_{a/c}^{n_1,n_2,n_3}$ to the non-planar amplitude: \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}&=\frac{i}{32} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^5 \tau^2} \int \mathrm{d}\xi_{1} \, \mathrm{d}\xi_2\, \mathrm{d} \xi_3\ \mathrm{e}^{-\frac{\pi i s(a+2)}{2c}+\pi i s+2\pi i \sum_{i>j} s_{ij}\sum_{m=1}^{n_{ij}} \st{\frac{2md+\delta(i,j)}{2c}}} \nonumber\\ &\qquad \times\prod_{i>j}q^{-\frac{1}{2}s_{ij} \xi_{ij}(\xi_{ij}-1)}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{\pi i (2d(\ell+n_{ij})+\delta(i,j))}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}\nonumber\\ &\qquad\times(1-\mathrm{e}^{-\frac{\pi i (2d(\ell-n_{ij}-1)-\delta(i,j))}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}}\ . \end{align} \subsection{Evaluating the contribution from fixed \texorpdfstring{$(n_1,n_2,n_3)$}{(n1,n2,n3)} in the \texorpdfstring{$s$}{s}-channel} Next, we evaluate the contribution from a given $(n_1,n_2,n_3)$. This is very similar to the situation for the planar annulus. As discussed repeatedly, we can just do the $q$-expansion of the integrand, except for the factors involving $q^{\xi_{21}}$ or $q^{\xi_{43}}$, since the exponents $\xi_{21}$ and $\xi_{43}$ may vanish, so that these factors are not uniformly small. As before, when extracting the coefficient of the term $q^{m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})}$, we get \begin{align} &[q^{m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})}]\prod_{i>j}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{\pi i (2d(\ell+n_{ij})+\delta(i,j))}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times \prod_{\ell=2-\delta(i,j)}^\infty (1-\mathrm{e}^{-\frac{\pi i (2d(\ell-n_{ij}-1)-\delta(i,j))}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}} \nonumber\\ &\qquad=\mathrm{e}^{\frac{\pi i}{c}(2m_\L d n_{21}+m_\mathrm{D}(2d n_{32}+1)+2m_\mathrm{R} d n_{43}-m_\U(2d(n_{41}+1)+1))} Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t)\ , \end{align} where we used the definition of $Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}$ given in \eqref{eq:QmL,mD,mR,mU definition}. We thus have \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}&=\frac{i}{32c^5} \mathrm{e}^{-\frac{\pi i s(a+2)}{2c}+\pi i s+2\pi i \sum_{i>j} s_{ij}\sum_{m=1}^{n_{ij}} \st{\frac{2md+\delta(i,j)}{2c}}} \hspace{-0.6cm} \sum_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U\ge 0} \hspace{-0.6cm} Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t) \nonumber\\ &\qquad\times \mathrm{e}^{\frac{\pi i}{c}(2m_\L d n_{21}+m_\mathrm{D}(2d n_{32}+1)+2m_\mathrm{R} d n_{43}-m_\U(2d(n_{41}+1)+1))} \nonumber\\ &\qquad\times \int_{\longrightarrow} \frac{\mathrm{d}\tau}{ \tau^2} \int \mathrm{d}\xi_{1} \, \mathrm{d}\xi_2\, \mathrm{d} \xi_3\ q^{-\sum_{i>j}\frac{1}{2}s_{ij} \xi_{ij}(\xi_{ij}-1)+m_\L \xi_{21}+m_\mathrm{D} \xi_{32}+m_\mathrm{R} \xi_{43}+m_\U(1-\xi_{41})}\nonumber\\ &\qquad\qquad\qquad(1-\mathrm{e}^{\frac{2\pi i d n_{21}}{c}}q^{\xi_{21}})^{-s}(1-\mathrm{e}^{\frac{2\pi i d n_{43}}{c}}q^{\xi_{43}})^{-s}\ . \end{align} This expression is actually imprecise for $n_{21}=c$ or $n_{43}=c$, a case that we did not encounter in the planar amplitude. In this case, we have $\xi_{21}<0$ and thus we cannot specify the branch using the region $\xi_{21}>0$.
In this case, we can follow the argument above for relating the two branches, and the correct prescription is \begin{equation} (1-\mathrm{e}^{\frac{2\pi i d n_{21}}{c}}q^{\xi_{21}})^{-s}=\mathrm{e}^{-2\pi i s \st{\frac{d n_{21}}{c}}} q^{-s \xi_{21}} (1-\mathrm{e}^{-\frac{2\pi i d n_{21}}{c}}q^{-\xi_{21}})^{-s}\ . \end{equation} This cancels the corresponding phase factor in $\mathrm{e}^{-2\pi i s\sum_{m=1}^{n_{21}} \st{\frac{md}{c}}}$. For $n_{21}=c$, we have $\st{\frac{dn_{21}}{c}}=0$ according to our definition \eqref{eq:st definition}, so the prefactor does not need modification, and we then use the right-hand side of this equation for the correct branch. A similar comment applies to the case $n_{43}=c$. We now continue as in the planar case and evaluate the integrals over the $\xi_i$'s. After introducing $\alpha_\L$, $\alpha_\mathrm{R}$, $t_\L$ and $t_\mathrm{R}$ as in Section~\ref{subsec:evaluating single term q expansion}, this is reduced to the computation of the integrals \begin{equation} \int_{-\infty\, (0)}^{\infty\, (0)} \mathrm{d}\alpha\ q^{-\alpha t}(1-\mathrm{e}^{2\pi i \varphi} q^\alpha)^{-s}\ . \end{equation} A new case now appears, corresponding to $n_{21}=c$ or $n_{43}=c$, in which the upper limit of the integral is 0. In this case, we should change the specification of the branch as discussed above by first factoring out $\mathrm{e}^{-2\pi i s \st{\varphi}}q^{-\alpha s}$. We then have for $\varphi \in \ZZ$: \begin{subequations} \begin{align} \int_{-\infty}^0 \mathrm{d}\alpha\ q^{-\alpha (s+t)} (1-q^{-\alpha})^{-s}&=\frac{i}{2\pi \tau} \int_0^1 \mathrm{d}x\ x^{s+t-1}(1-x)^{-s}\\ &=\frac{i}{2\pi \tau} \frac{\Gamma(1-s)\Gamma(s+t)}{\Gamma(t+1)}\\ &=-\frac{\sin(\pi t)}{\sin(\pi s)} \frac{i}{2\pi \tau} \frac{\Gamma(-t)\Gamma(s+t)}{\Gamma(s)}\ . \end{align} \end{subequations} Thus we get \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}&=-\frac{\pi i \, \mathrm{e}^{-\frac{\pi i s(a+2)}{2c}+\pi i s+2\pi i \sum_{i>j} s_{ij}\sum_{m=1}^{n_{ij}} \st{\frac{2md+\delta(i,j)}{2c}}}}{30c^5\sqrt{stu}}\sum_{\begin{subarray}{c} m_\L,m_\mathrm{D},m_\mathrm{R},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \hspace{-0.6cm}Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t) \nonumber\\ &\quad\times (-1)^{m_\L+m_\mathrm{R}}\mathrm{e}^{\frac{\pi i}{c}(m_\mathrm{D} (2dn_{32}+1)-m_\U(2d(n_{41}+1)+1))}\int_{P_{m_\mathrm{D},m_\U}> 0}\hspace{-1.4cm} \d t_\L \, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}} \nonumber\\ &\quad\times \left(\begin{cases} \mathrm{e}^{2\pi i t_\L \st{\frac{d n_{21}}{c}}} &\text{if}\;\; 0<n_{21}<c \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\text{if}\;\; n_{21}=0 \\ -\frac{\sin(\pi t_\L)}{\sin(\pi s)} &\text{if}\;\; n_{21}=c \end{cases}\right)\left(\begin{cases} \mathrm{e}^{2\pi i t_\mathrm{R} \st{\frac{d n_{43}}{c}}} &\text{if}\;\; 0<n_{43}<c \\ \frac{\sin(\pi(s+t_\mathrm{R}))}{\sin(\pi s)}&\text{if}\;\; n_{43}=0 \\ -\frac{\sin(\pi t_\mathrm{R})}{\sin(\pi s)}&\text{if}\;\; n_{43}=c \end{cases}\right) \nonumber\\ &\quad\times \frac{\Gamma(-t_\L+m_\L)\Gamma(-t_\mathrm{R}+m_\mathrm{R})\Gamma(s+t_\L-m_\L)\Gamma(s+t_\mathrm{R}-m_\mathrm{R})}{\Gamma(s)^2}\ . \end{align} We now recall the definition of \eqref{eq:definition Qm2,m4} to perform the sum over $m_\L$ and $m_\mathrm{R}$.
This gives \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}&=-\frac{\pi i \, \mathrm{e}^{-\frac{\pi i s(a+2)}{2c}+\pi i s+2\pi i \sum_{i>j} s_{ij}\sum_{m=1}^{n_{ij}} \st{\frac{2md+\delta(i,j)}{2c}}}}{30c^5\sqrt{stu}}\sum_{\begin{subarray}{c} m_\mathrm{D},m_\U\ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \mathrm{e}^{\frac{\pi i}{c}(m_\mathrm{D}-m_\U)} \nonumber\\ &\quad\times \mathrm{e}^{\frac{2\pi id}{c}(m_\mathrm{D} n_{32}-m_\U (n_{41}+1))} \int_{P_{m_\mathrm{D},m_\U}> 0} \hspace{-1.2cm} \d t_\L \, \d t_\mathrm{R}\, P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}} Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})\nonumber\\ &\quad\times \left(\begin{cases} \mathrm{e}^{2\pi i t_\L \st{\frac{d n_{21}}{c}}} &\text{if}\;\; 0<n_{21}<c \\ \frac{\sin(\pi(s+t_\L))}{\sin(\pi s)} &\text{if}\;\; n_{21}=0 \\ -\frac{\sin(\pi t_\L)}{\sin(\pi s)} &\text{if}\;\; n_{21}=c \end{cases}\right)\left(\begin{cases} \mathrm{e}^{2\pi i t_\mathrm{R} \st{\frac{d n_{43}}{c}}} &\text{if}\;\; 0<n_{43}<c \\ \frac{\sin(\pi(s+t_\mathrm{R}))}{\sin(\pi s)}&\text{if}\;\; n_{43}=0 \\ -\frac{\sin(\pi t_\mathrm{R})}{\sin(\pi s)}&\text{if}\;\; n_{43}=c \end{cases}\right) \nonumber\\ &\quad\times\frac{\Gamma(-t_\L)\Gamma(-t_\mathrm{R})\Gamma(s+t_\L-m_\mathrm{D}-m_\U)\Gamma(s+t_\mathrm{R}-m_\mathrm{D}-m_\U)}{\Gamma(s)^2}\ . \label{eq:non-planar four point function s-channel Rademacher} \end{align} It does not seem particularly fruitful to express things in terms of $n_\L$, $n_\mathrm{D}$, $n_\mathrm{R}$ and $n_\U$ in this case. Their geometric meaning is also much less clear, since there are now two boundaries. Let us merely note that \begin{align} \sum_{m=0}^{c-1} \bigst{\frac{2md+1}{2c}}&=\sum_{m=0}^{c-1} \bigst{\frac{2m+1}{2c}} \\ &=\frac{1}{2}\sum_{m=0}^{c-1} \left[\bigst{\frac{2m+1}{2c}}+\bigst{\frac{2(-m-1)+1}{2c}}\right]=0 \end{align} and thus the overall phase only depends on $n_{31}$, $n_{32}$, $n_{42}$ and $n_{41} \bmod c$. \subsection{Results in the \texorpdfstring{$u$}{u}-channel} We finally treat the $u$-channel of the non-planar annulus diagram. Start again with \eqref{eq:Aa/c non planar annulus}. The regions $\Gamma_{n_1,n_2,n_3}$ are analogous to the $u$-channel regions of the planar annulus, because the two integrals only differ by phases. In particular, the region $\Gamma_{n_1,n_2,n_3}$ is also specified by \eqref{eq:bounds regions u-channel}, except that the $\zeta_i$'s now play the role of $z_i$'s. There are $c^3$ regions in total. We can label them with \begin{equation} 0 \le n_{21} \le c-1\ , \qquad 0 \le n_{32} \le c-1\ , \qquad 0 \le n_{43} \le c-1\ .
\end{equation} Now essentially the same formula as \eqref{eq:planar u channel before swap} holds for the non-planar case as well, except that various phases are modified accordingly: \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}&=\frac{i}{32} \, \mathrm{e}^{-\frac{\pi i s(a+2)}{2c}+\pi i s}\int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^5 \tau^2} \int \mathrm{d}\xi_1\, \mathrm{d}\xi_2\, \mathrm{d}\xi_3\nonumber\\ &\qquad\times \prod_{i>j} q^{-\frac{1}{2}s_{ij} \xi_{ij}(\xi_{ij}-1)} \mathrm{e}^{2\pi i s_{ij} \sum_{m=1}^{n_{ij}-\delta_{ij,32}} \st{\frac{2md+\delta(i,j)}{2c}}} \nonumber\\ &\qquad\times \prod_{\begin{subarray}{c} i>j\\ ij \ne 32 \end{subarray}}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{\pi i (2d (\ell+n_{ij})+\delta(i,j))}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\times(1-\mathrm{e}^{-\frac{\pi i (2d(\ell-n_{ij}-1)-\delta(i,j))}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}} \nonumber\\ &\qquad\times \prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{\pi i (2d (\ell+n_{32}-1)+1)}{c}} q^{\ell-\xi_{32}-1})^{-s_{32}}\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\times(1-\mathrm{e}^{-\frac{\pi i (2d (\ell-n_{32})-1)}{c}} q^{\ell+\xi_{32}})^{-s_{32}}\ . \label{eq:non planar u channel before swap} \end{align} One can then give the same argument as in the planar case to relate this to the non-planar $s$-channel amplitude. Let us exchange all quantities labelled with 2 with the corresponding quantities labelled with 3. This maps $\delta(i,j)$ to \begin{equation} \tilde{\delta}(i,j)=\begin{cases} 1\ , \quad &ij \in \{21,\, 41,\, 43\}\ , \\ -1 \ , \quad &ij =32\ , \\ 0 \ , \quad &ij\in \{31,\, 42\}\ . \end{cases} \end{equation} One obtains \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}\Big|_{2 \leftrightarrow 3}&=\frac{i}{32}\, \mathrm{e}^{-\frac{\pi i u(a+2)}{2c}+\pi i u} \int_{\longrightarrow} \frac{\mathrm{d}\tau}{c^5 \tau^2} \int \mathrm{d}\xi_1 \, \mathrm{d}\xi_2 \, \mathrm{d}\xi_3\ \prod_{i>j} q^{-\frac{1}{2}s_{ij} \xi_{ij}(\xi_{ij}-1)} \nonumber\\ &\qquad\times\mathrm{e}^{2\pi i \sum_{i>j,\,ij \ne 32} s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{2md+\tilde{\delta}(i,j)}{2c}}+2\pi i u \sum_{m=1}^{-n_{32}-1} \st{\frac{2md+1}{2c}}} \nonumber\\ &\qquad\times \prod_{i>j}\prod_{\ell=1}^\infty (1-\mathrm{e}^{-\frac{\pi i (2d (\ell+n_{ij})+\tilde{\delta}(i,j))}{c}} q^{\ell-\xi_{ij}})^{-s_{ij}}\nonumber\\ &\qquad\qquad\qquad\qquad\times (1-\mathrm{e}^{-\frac{\pi i (2d(\ell-n_{ij}-1)-\tilde{\delta}(i,j))}{c}} q^{\ell+\xi_{ij}-1})^{-s_{ij}}\ . \end{align} Up to slightly different phases, this coincides with the $s$-channel expression.
In particular, we can proceed as before and get \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}\Big|_{2 \leftrightarrow 3}&=-\frac{\pi i\, \mathrm{e}^{-\frac{\pi i u(a+2)}{2c}+\pi i u+2\pi i \sum_{i>j,\,ij \ne 32} s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{2md+\tilde{\delta}(i,j)}{2c}}+2\pi i u \sum_{m=1}^{-n_{32}-1} \st{\frac{2md+1}{2c}}}}{30 c^5 \sqrt{stu}}\nonumber\\ &\qquad\times \hspace{-.2cm}\sum_{\begin{subarray}{c} m_\L,m_\mathrm{D},m_\mathrm{R},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}}\hspace{-.3cm} \mathrm{e}^{\frac{\pi i}{c}(m_\L(2d n_{21}+1)+m_\mathrm{D}(2d n_{32}-1)+m_\mathrm{R}(2 d n_{43}+1)-m_\U(2d(n_{41}+1)+1))} \nonumber\\ &\qquad\times Q_{m_\L,m_\mathrm{D},m_\mathrm{R},m_\U}(s,t) \int_{P_{m_\mathrm{D},m_\U} > 0} \hspace{-1cm} \d t_\L\, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}} \nonumber\\ &\qquad\times \mathrm{e}^{2\pi i (t_\L-m_\L)\st{\frac{2d n_{21}+1}{2c}}+2\pi i (t_\mathrm{R}-m_\mathrm{R})\st{\frac{2d n_{43}+1}{2c}}} \nonumber\\ &\qquad\times \frac{\Gamma(-t_\L+m_\L)\Gamma(-t_\mathrm{R}+m_\mathrm{R})\Gamma(s+t_\L-m_\L)\Gamma(s+t_\mathrm{R}-m_\mathrm{R})}{\Gamma(s)^2}\ . \end{align} The phases again partially cancel and we can perform the sum over $m_\L$ and $m_\mathrm{R}$. We obtain \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}\Big|_{2 \leftrightarrow 3}&=-\frac{\pi i\, \mathrm{e}^{-\frac{\pi i u(a+2)}{2c}+\pi i u+2\pi i \sum_{i>j,\,ij \ne 32} s_{ij} \sum_{m=1}^{n_{ij}} \st{\frac{2md+\tilde{\delta}(i,j)}{2c}}+2\pi i u \sum_{m=1}^{-n_{32}-1} \st{\frac{2md+1}{2c}}}}{30 c^5 \sqrt{stu}}\nonumber\\ &\qquad\times \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s \end{subarray}} \mathrm{e}^{\frac{\pi i}{c}(-m_\mathrm{D}-m_\U)+\frac{2\pi id}{c}(m_\mathrm{D} n_{32}-m_\U(n_{41}+1))} \nonumber\\ &\qquad\times \int_{P_{m_\mathrm{D},m_\U} > 0} \hspace{-1cm} \d t_\L\, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}} Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R}) \nonumber\\ &\qquad\times \mathrm{e}^{2\pi i t_\L\st{\frac{2d n_{21}+1}{2c}}+2\pi i t_\mathrm{R}\st{\frac{2d n_{43}+1}{2c}}} \nonumber\\ &\qquad\times \frac{\Gamma(-t_\L)\Gamma(-t_\mathrm{R})\Gamma(s+t_\L-m_\mathrm{D}-m_\U)\Gamma(s+t_\mathrm{R}-m_\mathrm{D}-m_\U)}{\Gamma(s)^2}\ . \end{align} We can now swap the labels 2 and 3 back, which leads to \begin{align} \tilde{A}_{a/c}^{n_1,n_2,n_3}&=-\frac{\pi i\, \mathrm{e}^{-\frac{\pi i s(a+2)}{2c}+\pi i s+2\pi i \sum_{i>j} s_{ij} \sum_{m=1}^{n_{ij}-\delta_{ij,32}} \st{\frac{2md+\delta(i,j)}{2c}}}}{30 c^5 \sqrt{stu}}\nonumber\\ &\qquad\times \sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le u \end{subarray}} \mathrm{e}^{\frac{\pi i}{c}(-m_\mathrm{D}-m_\U)+\frac{2\pi id}{c}(-m_\mathrm{D} n_{32}-m_\U(n_{41}+1))} \nonumber\\ &\qquad\times \int_{P_{m_\mathrm{D},m_\U}(u,t,t_\L,t_\mathrm{R}) > 0} \hspace{-2cm} \d t_\L\, \d t_\mathrm{R}\ P_{m_\mathrm{D},m_\U}(u,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}\, Q_{m_\mathrm{D},m_\U}(u,t,t_\L,t_\mathrm{R}) \nonumber\\ &\qquad\times \mathrm{e}^{2\pi i t_\L\st{\frac{2d n_{31}+1}{2c}}+2\pi i t_\mathrm{R}\st{\frac{2d n_{42}+1}{2c}}} \nonumber\\ &\qquad\times \frac{\Gamma(-t_\L)\Gamma(-t_\mathrm{R})\Gamma(u+t_\L-m_\mathrm{D}-m_\U)\Gamma(u+t_\mathrm{R}-m_\mathrm{D}-m_\U)}{\Gamma(u)^2}\ .
\label{eq:non-planar four point function u-channel Rademacher} \end{align} \section{Mass shifts and decay widths}\label{sec:mass-shifts} As a first cross-check of the formulas we derived, we apply them to a simpler observable: the mass shifts and decay widths. They appear as the double residue of the amplitude at integer $s$ in the $s$-channel. This brings in a good deal of classical number theory. \subsection{Double residues in terms of Gauss sums} To illustrate the procedure, we first discuss the mass shift at $s=1$. Only terms with $n_{\L}=n_{\mathrm{R}}=0$ can contribute to the double residue in \eqref{eq:planar four-point function s-channel}. It is straightforward to take the double residue and derive eq.~\eqref{eq:mass-shifts} from it. To obtain the mass shift at $s=1$, we only need the term with $m_\mathrm{D}=m_\U=0$. It thus remains to compute the integral \begin{equation} \int_{P_{0,0} > 0}\hspace{-0.6cm} \d t_\L \, \d t_\mathrm{R}\ P_{0,0}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}\ . \end{equation} Since the region $P_{0,0} > 0$ is bounded by an ellipse, we can change coordinates to map it to the unit circle, which gives immediately \begin{equation} \int_{P_{0,0} > 0}\hspace{-0.6cm} \d t_\L \, \d t_\mathrm{R}\ P_{0,0}(s,t,t_\L,t_\mathrm{R})^{\frac{5}{2}}=\frac{s^4 \sqrt{stu}}{64} \int_{\mathbb{D}} \d x\, \d y\ (1-x^2-y^2)^{\frac{5}{2}}=\frac{\pi s^4 \sqrt{stu}}{224}\ , \end{equation} where $\mathbb{D} = \{x^2 + y^2 < 1\}$. We thus have simply \begin{equation} \DRes_{s=1} A_{a/c}^{0,n_\mathrm{D},0,n_\U}=-\frac{\pi^2 i \, \mathrm{e}^{\frac{2\pi i d}{c} n_\mathrm{D} n_\U}}{210 c^5}\ . \end{equation} Note that we can omit the sawtooth function $\st{x}$ in the exponent since $s$ is an integer. Given that $n_\U=c-1-n_\mathrm{D}$, it is more convenient to write this entirely in terms of $n \equiv n_\mathrm{D}$. To obtain the double residue, we have to sum over $n \in \{0,\dots,c-1\}$, $a$ and $c$. The sum over $n$ is known as a Gauss sum: \begin{equation} G(-d,-d,c) \equiv \sum_{n=0}^{c-1} \mathrm{e}^{ - \frac{2\pi i n(n+1) d}{c}}\ . \end{equation} The general notation will be explained below. These are very classical objects in number theory. We also have \begin{equation} \DRes_{s=1} \Delta A^{\text{p}}=-\frac{i}{(2\pi)^2}\Res_{s=1} \frac{\Gamma(1-s)\Gamma(-t)}{\Gamma(1-s-t)}=\frac{i}{(2\pi)^2}\ . \end{equation} Hence we obtain \begin{align} \DRes_{s=1} A^{\text{p}} =\frac{i}{(2\pi)^2}-\sum_{c=1}^\infty \sum_{a=1,\, (a,c)=1}^{\frac{c}{2}} \frac{\pi^2 i\, G(-a^*,-a^*,c) }{210c^5}\ , \label{eq:mass shift sum} \end{align} where $a^*$ is the inverse of $a$ mod $c$, i.e.\ $a a^*\equiv 1 \bmod c$. More generally, we can evaluate the double residues at higher integer levels $s_\ast \in \mathbb{Z}_{>0}$. Their real and imaginary parts correspond to mass shifts and decay widths respectively.
The general formula is \begin{align} \DRes_{s = s_\ast} A_{a/c}^{0,n,0,c-1-n}&=-\frac{\pi i \mathrm{e}^{-4\pi i s_\ast \sum_{m=1}^{n} \st{\frac{md}{c}} }}{60c^5 s_\ast^2 (s_\ast!)^2} \hspace{-0.6cm}\sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s_\ast \end{subarray}} \hspace{-0.6cm} \Delta_{m_\mathrm{D},m_\U}(s_\ast)^{\frac{7}{2}}\, \mathrm{e}^{\frac{2\pi i d}{c}(m_\mathrm{D} n-m_\U(n+1) )}\nonumber\\ &\qquad\times\int_{\DD} \d x\, \d y\ (1-x^2-y^2)^{\frac{5}{2}} Q_{m_\mathrm{D},m_\U} (s_\ast,t,t_\L,t_\mathrm{R}) \nonumber\\ &\qquad\qquad\times (t_\L+1)_{s_\ast-m_\mathrm{D}-m_\U-1}(t_\mathrm{R}+1)_{s_\ast-m_\mathrm{D}-m_\U-1}\ , \end{align} where \begin{equation} t_{\L,\mathrm{R}}=\frac{\sqrt{\Delta_{m_\mathrm{D},m_\U}(s_\ast)}}{2 \sqrt{s_\ast}}(\sqrt{s_\ast + t} x\pm \sqrt{-t} y)+\frac{1}{2} (m_\mathrm{D}+m_\U-s_\ast) \end{equation} and $\Delta_{m_\mathrm{D},m_\U}$ was defined in eq.~\eqref{eq:definition Delta}. Since $s_\ast$ is an integer, we can simplify the sum of the sawtooth function: \begin{equation} \mathrm{e}^{-4\pi i s_\ast \sum_{m=1}^{n} \st{\frac{md}{c}}}=\mathrm{e}^{-\frac{2\pi i s_\ast d n(n+1)}{c}}\ . \end{equation} At this point, we can perform the sum over $n$; the resulting sums are classical Gauss sums: \begin{align} G(-d s_\ast,d(m_\mathrm{D}-m_\U-s_\ast),c)=\sum_{n=0}^{c-1} \mathrm{e}^{-\frac{2\pi i d n(s_\ast n+s_\ast-m_\mathrm{D}+m_\U)}{c}}\ . \end{align} Putting everything together, we obtain \begin{align} \DRes_{s=s_\ast} A^{\text{p}} &= \frac{i}{(2\pi)^2}\frac{\Gamma(t+s_\ast)}{\Gamma(t+1)\Gamma(s_\ast)} -\sum_{c=1}^\infty \sum_{a=1,\, (a,c)=1}^{\frac{c}{2}} \frac{\pi i }{60c^5 s_\ast^2 (s_\ast!)^2} \hspace{-.3cm}\sum_{\begin{subarray}{c} m_\mathrm{D},m_\U \ge 0 \\ (\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2 \le s_\ast \end{subarray}} \hspace{-.3cm} \Delta_{m_\mathrm{D},m_\U}(s_\ast)^{\frac{7}{2}} \nonumber\\ &\qquad\times \mathrm{e}^{-\frac{2\pi i m_\U a^*}{c}} G(-a^*s_\ast,a^*(m_\mathrm{D}-m_\U-s_\ast),c)\int_{\DD} \d x\, \d y\ (1-x^2-y^2)^{\frac{5}{2}} \nonumber\\ &\qquad\times Q_{m_\mathrm{D},m_\U} (s_\ast,t,t_\L,t_\mathrm{R})\,(t_\L+1)_{s_\ast-m_\mathrm{D}-m_\U-1}(t_\mathrm{R}+1)_{s_\ast-m_\mathrm{D}-m_\U-1}\ . \end{align} The integrals for integer $s_\ast$ can always be performed analytically and for a given mass level this expression can be further simplified.
For example, for the first few mass levels, we get \begin{subequations} \begin{align} \DRes_{s=2} A&=\frac{i (1+t)}{(2\pi)^2}-\frac{i\pi^2(1+t)}{3780} \sum_{c=1}^\infty \frac{1}{c^5}\sum_{a=1,\, (a,c)=1}^{\frac{c}{2}} \Big[G(-2a^*,-a^*,c)\nonumber\\ &\qquad\qquad+16 G(-2a^*,-2a^*,c)+ \mathrm{e}^{-\frac{2\pi i a^*}{c}} G(-2a^*,-a^*,c) \Big]\ , \\ \DRes_{s=3} A&=\frac{i(1+t)(2+t)}{8\pi^2}-\frac{i \pi^2(1+t)(2+t)}{4490640} \sum_{c=1}^\infty \frac{1}{c^5}\sum_{a=1,\, (a,c)=1}^{\frac{c}{2}} \nonumber\\ &\qquad\times\Big[113 G(-3 a^*,-a^*,c)+2048 G(-3 a^*,-2 a^*,c)+6561 G(-3 a^*,-3 a^*,c)\nonumber\\ &\qquad\qquad+2048 \mathrm{e}^{-\frac{2 i \pi a^*}{c}} G(-3 a^*,-4 a^*,c)+113 \mathrm{e}^{-\frac{4 i \pi a^*}{c}} G(-3 a^*,-5 a^*,c)\Big]\ , \\ \DRes_{s=4} A&=\frac{i(1+t)(2+t)(3+t)}{24\pi^2}-\frac{i \pi^2(2+t)}{39852933120} \sum_{c=1}^\infty \frac{1}{c^5}\sum_{a=1,\, (a,c)=1}^{\frac{c}{2}} \nonumber\\ &\qquad\times\Big[\left(103827 t^2+415308 t+309568\right) G(-4 a^*,-a^*,c)\nonumber\\ &\qquad\qquad+22528 \left(87 t^2+348 t+272\right) G(-4 a^*,-2 a^*,c)\nonumber\\ &\qquad\qquad+19683 \left(405 t^2+1620 t+1216\right) G(-4a^*,-3 a^*,c)\nonumber\\ &\qquad\qquad+524288 \left(24 t^2+96 t+71\right) G(-4 a^*,-4 a^*,c)\nonumber\\ &\qquad\qquad+19683 \left(405 t^2+1620 t+1216\right) \mathrm{e}^{-\frac{2 i \pi a^*}{c}} G(-4 a^*,-5 a^*,c)\nonumber\\ &\qquad\qquad+22528 \left(87 t^2+348 t+272\right) \mathrm{e}^{-\frac{4 i \pi a^*}{c}} G(-4 a^*,-6 a^*,c)\nonumber\\ &\qquad\qquad+\left(103827 t^2+415308 t+309568\right) \mathrm{e}^{-\frac{6 i \pi a^*}{c}} G(-4 a^*,-7 a^*,c) \Big]\ . \end{align}\label{eq:mass shifts}% \end{subequations} Here, \begin{equation} G(a,b,c)=\sum_{n=0}^{c-1} \mathrm{e}^{\frac{2\pi i (a n^2+b n)}{c}} \end{equation} denotes the general quadratic Gauss sum. As we will explain below, it can be efficiently calculated. The first three mass levels are special since the degeneracy is not lifted and the double residues have a factorized form. Starting from $s=5$, the expressions also contain square roots, but they become too unwieldy to display here. Expressions up to $s_\ast \leq 16$ are provided in the ancillary file \texttt{DRes.txt}. We also evaluate them numerically to high precision, with the results shown in App.~\ref{app:mass-shifts}. \subsection{Gauss sums} \label{subsec:Gauss sums} To continue, it is useful to recall some elementary number theory that allows us to evaluate Gauss sums. We refer to any standard book on number theory for more details. Let us define the Legendre symbol for any odd prime $p$ as follows: \begin{equation} \jac{a}{p}=\begin{cases} 0 \ , \quad &a \equiv 0 \bmod p\ , \\ 1\ , \quad &\text{$a$ is a quadratic residue mod $p$}\ , \\ -1\ , \quad &\text{$a$ is not a quadratic residue mod $p$}\ . \end{cases} \end{equation} A quadratic residue mod $p$ is by definition an element of the finite field $\FF_p$ that has a square root. For example, in $\FF_5$, \begin{equation} 1^2=1\ , \quad 2^2=4\ , \quad 3^2=4\ , \quad 4^2=1\ . \end{equation} Hence $1$ and $4$ are quadratic residues mod 5 and correspondingly $\jac{1}{5}=\jac{4}{5}=1$, whereas $\jac{2}{5}=\jac{3}{5}=-1$. One then extends the definition to the Jacobi symbol as follows. For an odd integer $n$, let $n=p_1^{m_1} p_2^{m_2} \cdots p_k^{m_k}$ be its prime factorization. Then one defines \begin{equation} \jac{a}{n}=\prod_{i=1}^k\jac{a}{p_i}^{m_i}\ .
\end{equation} The Jacobi symbol is a multiplicative function in both the top and bottom argument, \begin{equation} \jac{ab}{n}=\jac{a}{n}\jac{b}{n}\ , \qquad \jac{a}{mn}=\jac{a}{m} \jac{a}{n}\ . \end{equation} Famously, it satisfies the law of quadratic reciprocity. For $a$ and $b$ odd coprime integers, one has \begin{equation} \jac{a}{b}\jac{b}{a}=(-1)^{\frac{(a-1)(b-1)}{4}}\ . \end{equation} The law of quadratic reciprocity can be exploited to give a fast algorithm to compute the Jacobi symbol (runtime $\mathcal{O}(\log a \log b)$). Let us recall the definition of the Gauss sum: \begin{align} G(a,b,c)=\sum_{n=0}^{c-1} \mathrm{e}^{\frac{2\pi i (a n^2+b n)}{c}}\ . \end{align} These sums can be evaluated in closed form in terms of Jacobi symbols. First, we can reduce to the case where $(a,c)=1$ as follows, \begin{align} G(a,b,c)&=\begin{cases} (a,c) \, G\big(\tfrac{a}{(a,c)},\tfrac{b}{(a,c)}, \tfrac{c}{(a,c)}\big)\ , \qquad & (a,c) \mid b\ , \\ 0\ , \qquad &\text{otherwise}\ . \end{cases} \end{align} Assuming $(a,c)=1$, we can next reduce to the case with $b=0$ by `completing the square' in the sum. For odd $c$ and even $b$, this is always possible. We first reduce to these cases by using \begin{align} G(a,b,c)&=\begin{cases} 0\ , \qquad & (a,c)=1,\, c \equiv 0 \bmod 4 \text{ and }b \text{ odd}\ , \\ 2 G(2a,b,\tfrac{c}{2})\ , \qquad & (a,c)=1,\, c \equiv 2 \bmod 4 \text{ and }b \text{ odd}\ . \end{cases} \end{align} We can now assume that $b$ is even or $c$ is odd. This always ensures the existence of a solution to the equation \begin{equation} 2am+b \equiv 0 \bmod c\ . \end{equation} Indeed, for $c$ odd, $2$ is invertible and the solution is given by $m=-2^* a^* b$, where ${}^*$ denotes the inverse mod $c$. For $b$ even, we can divide the equation by 2 and solve instead $am+\frac{b}{2} \equiv 0 \bmod c$, which always has a solution. We can then shift the summation variable by $m$ to eliminate the linear term and get \begin{equation} G(a,b,c)=\mathrm{e}^{\frac{2\pi i (a m^2+b m)}{c}} G(a,0,c)\ . \end{equation} Finally, $G(a,0,c)$ with $(a,c)=1$ is computed by the following classical formula, \begin{align} G(a,0,c)=\begin{cases} 0\ , \qquad &(a,c)=1\text{ and } c \equiv 2 \bmod 4\ , \\ \varepsilon(c) \sqrt{c} \jac{a}{c}\ , \qquad & (a,c)=1\text{ and } c\text{ odd}\ , \\ (1+i) \varepsilon(a)^{-1} \sqrt{c} \jac{c}{a}\ , \qquad & (a,c)=1\text{ and } c \equiv 0 \bmod 4\ . \end{cases} \end{align} We used the abbreviation \begin{equation} \varepsilon(c)=\begin{cases} 1 \ , \qquad & c \equiv 1 \bmod 4\ , \\ i \ , \qquad & c \equiv 3 \bmod 4\ . \end{cases} \end{equation} This explains how to efficiently compute the Gauss sums appearing in the formulas for the mass shifts \eqref{eq:mass shifts}. We have implemented these formulae to generate the results in Appendix~\ref{app:mass-shifts}.
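For reference, a minimal Python sketch of these ingredients, restricted to the case of odd $c$ with $(a,c)=1$: the Jacobi symbol is computed with the standard reciprocity-based algorithm, and the closed form for $G(a,0,c)$ is checked against direct summation (the test values are arbitrary):
\begin{verbatim}
import cmath
from math import gcd, pi, sqrt

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:              # extract factors of two
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                    # reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def gauss_closed(a, c):
    # G(a,0,c) for odd c with (a,c) = 1: eps(c) sqrt(c) (a/c)
    eps = 1 if c % 4 == 1 else 1j
    return eps * sqrt(c) * jacobi(a, c)

def gauss_direct(a, c):
    return sum(cmath.exp(2j * pi * a * n * n / c) for n in range(c))

for a, c in [(1, 3), (2, 5), (3, 7), (5, 9), (4, 15)]:
    assert gcd(a, c) == 1
    assert abs(gauss_closed(a, c) - gauss_direct(a, c)) < 1e-9
\end{verbatim}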
\subsection{Recovering the decay width} As a consistency check of our analysis, we will now demonstrate that the imaginary part of \eqref{eq:mass shifts} equals the expected value obtained by computing just the contribution from the annulus ($c=1$) and from the M\"obius strip ($c=2$). Let us first note that \begin{equation} \Re \mathrm{e}^{-\frac{2\pi i s m_\U a^*}{c}} G(-a^*s,a^*(m_\mathrm{D}-m_\U-s),c)=\Re \sum_{n=0}^{c-1} \mathrm{e}^{-\frac{2\pi i a^*(sn(n+1)-m_\mathrm{D} n+m_\U(n+1))}{c}} \end{equation} and that this is unchanged under $a^* \to -a^*$. Since the modular inverse of $a$ mod $c$ satisfies $(-a)^*=-a^*$, we may hence sum over $a=1,\dots,c$ with $(a,c)=1$, as long as we compensate by a factor of $\frac{1}{2}$. The exception to this occurs for $c=1$ and $c=2$, since extending the summation range to $c$ would count $a=\frac{c}{2}$ only once. For $c \ge 3$, $a$ can never be equal to $\frac{c}{2}$, since it would either be non-integer or not be coprime to $c$. Thus, the imaginary part of \eqref{eq:mass shift sum} can be written as \begin{multline} \Im \DRes_{s=1} A^{\text{p}} = \frac{\pi^2\, G(-1,-1,1) }{420}-\frac{\pi^2 \, G(-1,-1,2) }{420 \cdot 2^5}+\frac{1}{4\pi^2}\\ -\frac{1}{2}\sum_{c=1}^\infty \sum_{a=1,\, (a,c)=1}^{c} \frac{\pi^2 G(-a^*,-a^*,c) }{210c^5}\ . \end{multline} The first two terms are precisely the contributions from the annulus and the M\"obius strip. As expected, the M\"obius strip contributes with a negative sign. Thus, it remains to show that the last two terms cancel. The same logic applies to the higher mass shifts. Consistency with the previously computed imaginary part requires that, when we extend the sum over $a$ up to $c$ and compensate by dividing by $2$, the infinite sum precisely cancels the simple contribution that comes from $\Delta A^{\text{p}}$. Taking the modular inverse is unnecessary at this point because when $a$ runs over $\ZZ_c^\times$, then so does $a^*$. Here $\ZZ_c^\times$ is the set of units in the ring $\ZZ_c$ (i.e.\ all elements with $(a,c)=1$, since those have an inverse mod $c$). Let us hence denote $a^*=d$ in the following. To summarize, we need to show that \begin{subequations} \begin{align} \frac{1}{2\pi^2}&\overset{!}{=} \sum_{c=1}^\infty \sum_{d\in \ZZ_c^\times} \frac{\pi^2 G(-d,-d,c) }{210c^5}\ , \label{eq:L function identity s=1} \\ \frac{1+t}{2\pi^2} &\overset{!}{=}\frac{\pi^2(1+t)}{3780} \sum_{c=1}^\infty \frac{1}{c^5}\sum_{d \in \ZZ_c^\times} \Big[G(-2d,-d,c)\nonumber\\ &\qquad\qquad+16 G(-2d,-2d,c)+ \mathrm{e}^{-\frac{2\pi i d}{c}} G(-2d,-d,c) \Big]\ , \label{eq:L function identity s=2} \end{align}\label{eq:L function identity}% \end{subequations} and so on. We first demonstrate the equality explicitly for $s=1$ and then explain how it generalizes to higher values of $s$. \subsubsection{Case \texorpdfstring{$s=1$}{s=1}} Let us set \begin{align} F(c)=\frac{1}{c}\sum_{d \in \ZZ_c^\times} G(-d,-d,c)= \frac{1}{c}\sum_{d \in \ZZ_c^\times} \sum_{n=1}^c \mathrm{e}^{-\frac{2\pi i n(n-1)d}{c}}\ . \end{align} This definition agrees with \eqref{eq:definition F overview}, except for $c=1$ and $c=2$. Our aim is to determine $F$ explicitly. The result will be $F(c)=| \mu(c)|$, where $\mu(c)$ is the M\"obius function, defined by \begin{equation} \mu(n)=\begin{cases} 1\ , \quad &\text{$n$ is squarefree with an even number of prime factors}\ , \\ -1\ , \quad &\text{$n$ is squarefree with an odd number of prime factors}\ , \\ 0\ , \quad &\text{$n$ has a repeated prime factor}\ . \end{cases} \end{equation} We prove this in two steps. First, we show that $F(c)$ is multiplicative, i.e.\ for $c=c_1c_2$ with $(c_1,c_2)=1$ we have $F(c_1c_2)=F(c_1)F(c_2)$. The Chinese remainder theorem says that $\ZZ_{c_1c_2}^\times \cong \ZZ_{c_1}^\times \times \ZZ_{c_2}^\times$, i.e.\ $d \mapsto (d_1=d \bmod c_1,d_2=d \bmod c_2)$ is a group isomorphism. It is the restriction of the corresponding ring isomorphism $\ZZ_{c_1c_2} \cong \ZZ_{c_1} \times \ZZ_{c_2}$ to the group of units. We also notice that \begin{equation} (c_1+c_2,c_1c_2)=(c_1+c_2,c_1)(c_1+c_2,c_2)=(c_2,c_1)(c_1,c_2)=1\ , \end{equation} and hence $c_1+c_2$ is a unit.
Thus we may replace $d \in \ZZ_c^\times$ in the sum with $(c_1+c_2)d$, since both run over the units of $\ZZ_c$. We then get \begin{align} F(c_1c_2)&=\frac{1}{c_1c_2}\sum_{n\in \ZZ_{c_1c_2}} \sum_{d \in \ZZ_{c_1c_2}^\times} \mathrm{e}^{-\frac{2\pi i n(n-1) (c_1+c_2)d}{c_1c_2}} \\ &=\frac{1}{c_1c_2}\sum_{n\in \ZZ_{c_1c_2}} \sum_{d \in \ZZ_{c_1c_2}^\times} \mathrm{e}^{-\frac{2\pi i n(n-1) d}{c_1}}\mathrm{e}^{-\frac{2\pi i n(n-1) d}{c_2}}\ . \end{align} Now let $d_i=d \bmod c_i$ and $n_i=n \bmod c_i$. Then, with the help of the Chinese remainder theorem, we conclude \begin{align} F(c_1c_2)&=\frac{1}{c_1 c_2}\sum_{n_1\in \ZZ_{c_1}}\sum_{n_2\in \ZZ_{c_2}} \sum_{d_1 \in \ZZ_{c_1}^\times}\sum_{d_2 \in \ZZ_{c_2}^\times} \mathrm{e}^{-\frac{2\pi i n_1(n_1-1) d_1}{c_1}}\mathrm{e}^{-\frac{2\pi i n_2(n_2-1) d_2}{c_2}}=F(c_1)F(c_2)\ . \end{align} It then remains to evaluate $F(p^k)$ for $p$ prime and $k \ge 1$, since this determines $F(c)$ completely. \begin{align} F(p^k)&=\frac{1}{p^k} \left(\sum_{n \in \ZZ_{p^k}}\sum_{d \in \ZZ_{p^k}} \mathrm{e}^{-\frac{2\pi i n(n-1) d}{p^k}}-\sum_{n \in \ZZ_{p^k}}\sum_{p\, \mid\, d \in \ZZ_{p^k}} \mathrm{e}^{-\frac{2\pi i n(n-1) d}{p^k}}\right) \\ &=\frac{1}{p^k}\left(\sum_{n \in \ZZ_{p^k}}\sum_{d \in \ZZ_{p^k}} \mathrm{e}^{-\frac{2\pi i n(n-1) d}{p^k}}-\sum_{n \in \ZZ_{p^k}}\sum_{d \in \ZZ_{p^{k-1}}} \mathrm{e}^{-\frac{2\pi i n(n-1) d}{p^{k-1}}}\right) \\ &=\frac{1}{p^k}\left(\sum_{n \in \ZZ_{p^k}} p^k \delta_{p^k \mid n(n-1)}-\sum_{n \in \ZZ_{p^k}} p^{k-1} \delta_{p^{k-1} \mid n(n-1)}\right)\ . \end{align} Now $p^k \mid n(n-1)$ precisely for $n=0$ or $n=1$ in $\ZZ_{p^k}$, since $n$ and $n-1$ are coprime. The same reasoning applies to the second term, where $p^{k-1} \mid n(n-1)$ when $n=r p^{k-1}$ or $n=r p^{k-1}+1$ with $r=0,\dots,p-1$. For $k \ge 2$ these are $2p$ possibilities, whereas for $k=1$ there are only $p$ possibilities. Thus \begin{equation} F(p^k)=\frac{1}{p^k}\left(2p^k-(2-\delta_{k,1})\, p \times p^{k-1}\right)= \delta_{k,1} = |\mu(p^k)|\ . \end{equation} Thus, by multiplicativity of $F$ and of the M\"obius function, we conclude that \begin{equation} F(c)=|\mu(c)| \end{equation} for any positive integer $c$. We can then evaluate \begin{align} \sum_{c=1}^\infty \frac{|\mu(c)|}{c^4}&=\prod_{p \in \PP} \sum_{k=0}^\infty |\mu(p^k)| p^{-4k} \\ &=\prod_{p \in \PP} (1+p^{-4})\\ &=\prod_{p \in \PP} \frac{1-p^{-8}}{1-p^{-4}}=\frac{\zeta(4)}{\zeta(8)}=\frac{105}{\pi^4}\ , \label{eq:L function abs mu} \end{align} where we used the multiplicativity of $|\mu(c)|$ and the fact that every integer can be uniquely written as the product of its prime factors. We also used the Euler product of the Riemann zeta-function, \begin{equation} \zeta(\sigma)=\prod_{p \in \PP} (1-p^{-\sigma})^{-1}\ . \end{equation} We thus get \begin{align} \sum_{c=1}^\infty \sum_{d \in \ZZ_c^\times} \frac{\pi^2 G(-d,-d,c)}{210 c^5}=\frac{\pi^2}{210} \sum_{c=1}^\infty \frac{|\mu(c)|}{c^4}=\frac{1}{2\pi^2}\ , \end{align} which demonstrates eq.~\eqref{eq:L function identity s=1}.
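Both the identity $F(c)=|\mu(c)|$ and the numerical value of the resulting sum are straightforward to confirm; a small Python sketch (the truncation orders are illustrative):
\begin{verbatim}
import cmath
from math import gcd, pi

def F(c):
    # F(c) = (1/c) * sum over units d of sum_{n=1}^{c} exp(-2 pi i n(n-1) d / c)
    units = [d for d in range(1, c + 1) if gcd(d, c) == 1]
    total = sum(cmath.exp(-2j * pi * n * (n - 1) * d / c)
                for d in units for n in range(1, c + 1))
    return (total / c).real           # the sum is real: d and c - d pair up

def mu_abs(c):
    # |mu(c)| = 1 iff c is squarefree
    return 0 if any(c % (p*p) == 0 for p in range(2, c + 1)) else 1

assert all(abs(F(c) - mu_abs(c)) < 1e-9 for c in range(1, 60))
print(sum(mu_abs(c) / c**4 for c in range(1, 4000)), 105 / pi**4)
\end{verbatim}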
\subsubsection{\label{subsec:higher-values-of-s}Higher values of \texorpdfstring{$s$}{s}} For higher decay widths we can proceed similarly. We define more generally \begin{align} F_s^{m_\mathrm{D},m_\U}(c)&=\frac{1}{c} \sum_{d \in \ZZ_c^\times} \mathrm{e}^{-\frac{2\pi i m_\U d}{c}} G(-ds,d(m_\mathrm{D}-m_\U-s),c) \\ &= \frac{1}{c}\sum_{d \in \ZZ_c^\times}\sum_{n=0}^{c-1} \mathrm{e}^{-\frac{2\pi i d(sn(n+1)-m_\mathrm{D} n+m_\U(n+1))}{c}}\ , \end{align} which again agrees with the definition \eqref{eq:definition F overview} except for $c=1$ and $c=2$. The same argument as for $s=1$, $m_\mathrm{D}=m_\U=0$ shows that $F_s^{m_\mathrm{D},m_\U}(c)$ is a multiplicative function. Thus it suffices again to compute $F_s^{m_\mathrm{D},m_\U}(c)$ on prime powers. Proceeding as before, this gives \begin{align} F_s^{m_\mathrm{D},m_\U}(p^k)&=\frac{1}{p^k}\sum_{n \in \ZZ_{p^k}} \left(\sum_{d \in \ZZ_{p^k}}-\sum_{d \in \ZZ_{p^k},\, d \equiv 0 \bmod p}\right) \mathrm{e}^{-\frac{2\pi i d(sn(n+1)-m_\mathrm{D} n+m_\U(n+1))}{p^k}} \\ &=\sum_{n \in \ZZ_{p^k}} \left(\delta_{p^k \mid sn(n+1)-m_\mathrm{D} n+m_\U(n+1)} - \frac{1}{p} \delta_{p^{k-1} \mid sn(n+1)-m_\mathrm{D} n+m_\U(n+1)}\right)\\ &=\sum_{n \in \ZZ_{p^k}} \delta_{p^k \mid sn(n+1)-m_\mathrm{D} n+m_\U(n+1)}-\sum_{n \in \ZZ_{p^{k-1}}} \delta_{p^{k-1} \mid sn(n+1)-m_\mathrm{D} n+m_\U(n+1)}\ . \end{align} We hence need to count the number of solutions to the equation \begin{equation} sn^2+(s-m_\mathrm{D}+m_\U)n+m_\U \equiv 0 \bmod p^k\ . \label{eq:quadratic equation} \end{equation} This is done in Appendix~\ref{app:count number of solutions quadratic equation}. Let us note that the discriminant of this quadratic equation is given by \begin{equation} \Delta_{m_\mathrm{D},m_\U}=\big[s-(\sqrt{m_\mathrm{D}}+\sqrt{m_\U})^2\big]\big[s-(\sqrt{m_\mathrm{D}}-\sqrt{m_\U})^2\big]\ . \end{equation} Let us first consider the generic case, by which we mean that $\Delta_{m_\mathrm{D},m_\U} \not\equiv 0 \bmod p$, $p \ne 2$ and $s \not\equiv 0 \bmod p$. Let us denote the set of all special primes for which this is not the case by $\PP_{s,m_\mathrm{D},m_\U}$. In the generic case, the number of solutions is independent of $k \ge 1$ and is given in terms of the Legendre symbol by \begin{equation} \jac{\Delta_{m_\mathrm{D},m_\U}}{p}+1\ . \end{equation} This implies that for a generic prime \begin{equation} F_s^{m_\mathrm{D},m_\U}(p^k)=\jac{\Delta_{m_\mathrm{D},m_\U}}{p} \delta_{k,1}=\jac{\Delta_{m_\mathrm{D},m_\U}}{p} |\mu(p^k)|\ . \end{equation} For the exceptional primes in $\PP_{s,m_\mathrm{D},m_\U}$, the formula for the number of solutions is more complicated and is explained in Appendix~\ref{app:count number of solutions quadratic equation}. It is sufficient here to know that, since $\Delta_{m_\mathrm{D},m_\U} \ne 0$ by construction, the number of solutions always stabilizes for $k \ge k_0$, and thus $F_s^{m_\mathrm{D},m_\U}(p^k)=0$ for sufficiently high $k$. By multiplicativity, we can write the sum involved in the mass shift as an infinite product over primes, \begin{align} \sum_{c=1}^\infty \frac{F_s^{m_\mathrm{D},m_\U}(c)}{c^4}=\prod_{p \in \PP} \left(1+\jac{\Delta_{m_\mathrm{D},m_\U}}{p} p^{-4} \right) \!\!\prod_{p \in \PP_{s,m_\mathrm{D},m_\U}} \!\!\!\! \frac{\sum_{k\ge 0} F_s^{m_\mathrm{D},m_\U}(p^k)p^{-4k}}{1+\jac{\Delta_{m_\mathrm{D},m_\U}}{p} p^{-4}} \ .\label{eq:sum f to Dirichlet L function} \end{align} Since there are finitely many exceptional primes, the second factor on the right-hand side is easy to evaluate. It remains to evaluate the first factor. Here $\jac{\Delta}{c}$ appears with $c$ not necessarily odd. This requires a generalization of the Jacobi symbol known as the Kronecker symbol. Its definition involves in general several case distinctions, but we do not need all of them because $\Delta>0$ and $\Delta \equiv 0,\, 1 \bmod 4$. In this special case, the definition reads \begin{equation} \jac{\Delta}{c}=\jac{\Delta}{|c|}=\prod_{p \in \PP} \jac{\Delta}{p}^{k_p}\ , \end{equation} where $|c|=\prod_p p^{k_p}$ is the prime factorization of $|c|$.
Hence we only need to define \begin{equation} \jac{\Delta}{2}=\begin{cases} 0\ , \quad &\Delta \equiv 0 \bmod 2\ , \\ 1\ , \quad &\Delta \equiv \pm 1 \bmod 8\ , \\ -1\ , \quad &\Delta \equiv \pm 3 \bmod 8\ . \end{cases} \end{equation} The Kronecker symbol is periodic in $c$ with period $\Delta$ because of quadratic reciprocity (this requires $\Delta \equiv 0,\, 1 \bmod 4$). We evaluate, for $\Delta \equiv 0,\, 1 \bmod 4$, \begin{align} \prod_{p \in \PP} \left(1-\jac{\Delta}{p} p^{-4} \right)^{-1} &= \sum_{c=1}^\infty \jac{\Delta}{c} \frac{1}{c^4} \\ &=\frac{1}{2} \sum_{c \in \ZZ \setminus \{0\}} \Res_{z=c}\frac{1}{z^4} \sum_{m=0}^{\Delta-1} \jac{\Delta}{m}\frac{\pi }{\Delta} \cot\left(\frac{\pi(z-m)}{\Delta}\right) \\ &= -\frac{1}{2}\Res_{z=0}\frac{1}{z^4} \sum_{m=1}^{\Delta-1} \jac{\Delta}{m} \frac{\pi }{\Delta}\cot\left(\frac{\pi(z-m)}{\Delta}\right) \\ &=\sum_{(m,\Delta)=1}\frac{\pi^4(2+\cos(\frac{2m \pi}{\Delta})) }{6 \Delta^4 \sin(\frac{m \pi}{\Delta})^4} \jac{\Delta}{m}\ . \end{align} We then finish the calculation by noting that \begin{align} \prod_{p \in \PP} \left(1+\jac{\Delta}{p} p^{-4}\right)&=\prod_{p,\, \Delta \not\equiv 0 \bmod p} \frac{1-p^{-8}}{1-\jac{\Delta}{p}p^{-4}} \\ &=\frac{1}{\zeta(8)}\prod_{p \, \mid\, \Delta} (1-p^{-8})^{-1}\sum_{(m,\Delta)=1}\!\!\frac{\pi^4(2+\cos(\frac{2m \pi}{\Delta})) }{6 \Delta^4 \sin(\frac{m \pi}{\Delta})^4} \jac{\Delta}{m}\ . \label{eq:evaluation Dirichlet L function} \end{align} Combining \eqref{eq:sum f to Dirichlet L function} and \eqref{eq:evaluation Dirichlet L function} allows us to compute the sums appearing in the imaginary part of the mass shifts. It is then simple to implement these formulas and check the required identities such as \eqref{eq:L function identity}. We checked that the corresponding identities hold up to $s=12$.
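As an illustration of the last step, the Dirichlet series identity above can be verified numerically. A Python sketch, reusing the \texttt{jacobi} routine from the previous snippet and assuming $\Delta>0$ with $\Delta\equiv 0,1 \bmod 4$ ($\Delta=5$ is an arbitrary example):
\begin{verbatim}
from math import cos, gcd, pi, sin

def kronecker(D, c):
    # (D/c) for c > 0, from the (D/2) rule above and the Jacobi symbol
    result = 1
    while c % 2 == 0:
        if D % 2 == 0:
            return 0
        result *= 1 if D % 8 in (1, 7) else -1
        c //= 2
    return result * jacobi(D, c)   # jacobi as defined in the previous sketch

D = 5    # example discriminant with D > 0 and D = 0, 1 mod 4
lhs = sum(kronecker(D, c) / c**4 for c in range(1, 20000))
rhs = sum(pi**4 * (2 + cos(2*m*pi/D)) / (6 * D**4 * sin(m*pi/D)**4) * kronecker(D, m)
          for m in range(1, D) if gcd(m, D) == 1)
print(lhs, rhs)    # the two values agree to the truncation accuracy
\end{verbatim}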
\section{Conclusions}\label{sec:conclusion} In this work, we revisited the formal expression \eqref{eq:1.1} describing scattering amplitudes of strings as integrals over the moduli space of Riemann surfaces with punctures. We converted it into a practical formula \eqref{eq:1.2} to compute one-loop four-point open-string amplitudes. It can be thought of as a sum over thin worldsheets with a given number of windings, where terms with more windings become increasingly suppressed. This formula allowed us to compute the corresponding amplitudes at finite values of $\alpha'$ for the first time, as illustrated on examples in Figures~\ref{fig:Ap-forward}, \ref{fig:fixed-angle-data}, and \ref{fig:ratios}. There are a couple of open questions that we were not able to fully resolve, as well as a number of future research directions, which we outline below. \paragraph{Convergence of the Rademacher expansion.} While we have provided, in our view, strong evidence for the convergence of the Rademacher contour, we were unable to prove it rigorously. As we have also mentioned, the convergence properties deteriorate at low energies, since the phases in the Rademacher expansion tend to be close to unity and do not cancel out. At $s=t=0$ the convergence breaks down completely, which is the manifestation of the massless branch cut in our formula. In order to develop this formalism more systematically, it is of vital importance to understand the phases involved better. While the sawtooth function $\st{x}$ makes a frequent appearance in number theory, the sums in \eqref{eq:planar four-point function s-channel} are, at least naively, not easy to bound using standard number-theoretic techniques. One would obviously like to do better than simply arguing for the randomness of these phases at high values of $c$. We should mention that there are some cases in the literature where convergence of the Rademacher series for positive weight has been established \cite{Cheng:2011ay}. \paragraph{Low-energy expansion.} A related issue is the important cross-check of making contact with the low-energy expansion of the amplitudes. There is a large body of literature studying the $\alpha'$ expansion of one-loop string amplitudes, see \cite{Broedel:2014vla, Broedel:2018izr, Mafra:2019ddf, Mafra:2019xms, Edison:2021ebi} for the open string in particular. It seems to be quite hard to extract the low-energy behaviour from the Rademacher formula, since this is the regime where convergence breaks down. It is also not possible to take $\alpha'$ derivatives of our formula and commute the derivatives with the infinite sum over $c$: every such derivative makes the individual terms grow faster with $c$, and after a sufficient number of derivatives the sum no longer converges. To better understand the subtlety involved, consider the infinite sum \begin{equation} \sum_{n\ne 0} \frac{1}{|n|} \mathrm{e}^{i n x}=- \log \big(4 \sin(\tfrac{x}{2})^2\big)\ , \label{eq:toy example sum} \end{equation} which has convergence properties similar to those of the sums encountered in this paper; the breakdown of convergence at $x=0$ likewise manifests itself as a logarithmic branch cut. Without knowing the right-hand side, it is equally challenging to extract the series expansion of the left-hand side around $x=0$, since the sum diverges as soon as one takes a derivative in $x$ and commutes it with the infinite sum. \paragraph{Analytic continuation in $s$ and $t$.} The Rademacher expansion of the planar $s$-channel string amplitude given in eq.~\eqref{eq:planar four-point function s-channel} is only valid for physical kinematics. In fact, it does not even converge when we start to consider complex values of $s$ and $t$, since then the phases start to grow exponentially. However, as is well known, it is often fruitful to extend the amplitude to complex Mandelstam invariants and study its analytic properties. In the context of string amplitudes, some analytic properties of the amplitude were studied in \cite{DHoker:1994gnm, Eberhardt:2022zay}, but a full understanding is missing. The fact that \eqref{eq:planar four-point function s-channel} cannot be easily analytically continued does not mean that such an analytic continuation does not exist. In fact, one might come to a similar conclusion in the toy example \eqref{eq:toy example sum}, but of course the right-hand side has a perfectly good analytic continuation with branch points at $x\in 2\pi \ZZ$, where the convergence of the sum breaks down. It would be very desirable to have a formula for the string amplitude that holds for arbitrary complex values of $s$ and $t$; or, short of that, a way to access the other branches of the amplitude for real values of $s$ and $t$. We expect that deforming the integration contour appropriately can achieve such a goal and plan to report on it elsewhere \cite{LorenzSebastian}. \paragraph{High-energy limit.} Since we now have explicit control over the amplitude at intermediate energies, it is natural to try to extract the asymptotics at very high energies from our formula.
The high-energy limit of string amplitudes was first analyzed by Gross and Mende \cite{Gross:1987kza} and Gross and Ma\~nes \cite{Gross:1989ge} in the case of the open string, where the integral over moduli space was evaluated using saddle-point techniques. As already mentioned in Section~\ref{sec:introduction}, performing this computation rigorously is currently out of reach, and the results of \cite{Gross:1987kza,Gross:1989ge} should be viewed as a heuristic. As we saw in this work, the asymptotic formula of Gross and Ma\~nes seems to hold ``on average'', but there are a number of very complicated oscillations on top of it that seem hard to predict from the saddle-point perspective.\footnote{Incidentally, there seems to be a nice analogy with the recent discussion of wormholes in quantum gravity, where the saddle point gives the ``averaged'' contribution to some quantity such as the spectral form factor, while the true behaviour of the quantity has many erratic oscillations on top of this averaged smooth behaviour, see e.g.\ \cite{Cotler:2016fpe} and numerous follow-up works.} Our formula seems to open a different avenue to access the high-energy behaviour in detail, as we have already demonstrated numerically; see Figure~\ref{fig:fixed-angle-data}. A good understanding of the growth behaviour of the polynomials $Q_{m_\mathrm{D},m_\U}(s,t,t_\L,t_\mathrm{R})$ for large values of the parameters would give detailed analytic control over the high-energy limit of the amplitude and could hopefully make contact with the saddle-point evaluation. Additionally, the integration contour we proposed in this work can be taken as a starting point for a rigorous saddle-point analysis using methods of complex Morse theory. \paragraph{Other string topologies.} As an immediate question, one may ask how general the methods employed in this paper are. The logic generalizes straightforwardly to other open-string one-loop amplitudes with an arbitrary number of external vertex operators. However, the modular weight of the integrand is, at least naively, positive for $n \ge 5$ external vertex operators, and convergence might become even more delicate than in the four-point case. We should note, however, that this might be misleading. Contrary to the four-point function, the five-point function does not admit a canonical integrand to be integrated over $\mathcal{M}_{1,5}$; instead, there are several different representations that differ by total derivatives. It might be more illuminating to think of the integrand as living on the moduli space of super Riemann surfaces as described in \cite{Witten:2012bh, Witten:2013pra}, where the integrand has a canonical form. Since there is no non-trivial topology in the fermionic directions of moduli space, the contour deformation into the Rademacher contour straightforwardly extends to supermoduli space and its complexification. The extension to higher loops and to closed strings at one loop is much less clear at this stage. One would expect that the general logic, namely that one can derive an infinite-sum representation for the string amplitude in which every term is controlled by a degeneration in complexified moduli space, continues to hold. To make such an expectation concrete, one needs a version of the Rademacher contour for other genera. For the open string at two loops, this seems to be possible.
In \cite{Cardoso:2021gfg, LopesCardoso:2022hvc}, the Rademacher expansion for the inverse of the Igusa cusp form $\Phi_{10}^{-1}$ at genus 2 was derived in the context of microstate counting of black holes. This is almost what we need for the partition function of the bosonic open string, in analogy with what we discussed in Section~\ref{subsec:Rademacher contour} at one loop. Indeed, the inverse of the Igusa cusp form is the partition function of 24 free bosons. However, for the closed string, even at genus 1, the mathematical technology needed for this computation is, to our knowledge, not available in the literature. In the simplest toy model for the partition function of the closed bosonic string, one wants to evaluate the integral \begin{equation} \int_{\mathcal{F}} \frac{\d^2 \tau}{(\Im \tau)^{14}\, |\eta(\tau)^{24}|^2} \label{eq:closed string partition function naive} \end{equation} over the fundamental domain. One again has to modify the contour near the cusp to implement the $i \varepsilon$ prescription. Let us set $\tau=x+y$ with $x \in \RR$ and $y \in i\RR$ on the real slice of moduli space. One then allows both $x$ and $y$ to be complex in order to pass to the complexification. The appropriate complexification of the moduli space is given by two copies of the upper half-plane modded out by a single diagonal modular group, $(\HH \times \HH)/\PSL(2,\ZZ)$. The proper contour for the integral \eqref{eq:closed string partition function naive} analogous to the one discussed in Section~\ref{subsec:integration contour} is then \begin{multline} \int_{\Gamma} \frac{\d^2 \tau}{(\Im \tau)^{14}\, |\eta(\tau)^{24}|^2}=\int_{\mathcal{F}_L} \frac{\d^2 \tau}{(\Im \tau)^{14}\, |\eta(\tau)^{24}|^2}\\ +\int_{-\frac{1}{2}}^{\frac{1}{2}} \d x \int_{ iL-\RR_{\ge 0}} \d y\ \frac{1}{y^{14} \eta(x+y)^{24}\eta(-x+y)^{24}}\ , \end{multline} which is indeed convergent. Here, $\mathcal{F}_L$ is the usual fundamental domain cut off at $\Im \tau=L$, which also often features in other regularizations of the integral over moduli space. While the integral can easily be evaluated numerically (and equals roughly $29399.1+98310i$), we are not aware of any exact analytic evaluation. See however \cite{Korpas:2019ava} for a term-by-term evaluation in the $q$-expansion, \cite{Lerche:1987qk} for the evaluation of integrals of holomorphic modular functions using the transformation property of the Eisenstein series $E_2$, and \cite{Angelantonj:2013eja, Florakis:2016boz} for the application of the Rankin--Selberg method to similar integrals. \paragraph{Oscillations and relation to chaos.} We have seen numerically that the real part of the one-loop amplitude features many seemingly erratic oscillations; see Figures \ref{fig:Ap-forward} and \ref{fig:fixed-angle-data}. The meaning of these oscillations from a scattering-amplitude point of view is not entirely clear, since there are very few consistency checks that can be performed on the real part of the one-loop amplitude without knowing further data. In particular, it is not directly constrained by unitarity. Some constraints are imposed by the analytic structure, which allow one to compute dispersion relations. We will report on these elsewhere \cite{LorenzSebastian}. One perspective on these amplitudes is from the point of view of chaos.
Going to stronger coupling will eventually make contact with black-hole physics (although black holes are non-perturbative in the string coupling, with effects of the form $\mathcal{O}(\mathrm{e}^{-1/g_\text{s}^2})$, and thus not necessarily visible in string perturbation theory). Such a view of tree-level scattering amplitudes with one or more heavy external states was advocated in \cite{Gross:2021gsj}. We believe that the one-loop amplitude is a much better probe of such chaotic behaviour, since it involves arbitrarily massive internal states. It would be interesting to make this link more precise. \acknowledgments We thank Nima Arkani-Hamed, Pinaki Banerjee, Simon Caron-Huot, Eric D'Hoker, Aaron Hillman, Abhiram Kidambi, Juan Maldacena, Giulio Salvatori, Oliver Schlotterer, and Gabriele Veneziano for useful discussions. L.E. and S.M. are supported by the grant DE-SC0009988 from the U.S. Department of Energy. S.M. gratefully acknowledges funding provided by the Sivian Fund.
\section{Introduction} Industrial processes such as thermal processes and transport-reaction processes can be modeled as distributed parameter systems (DPSs), whose inputs, outputs, and parameters can change in both the time and space domains; this behavior is referred to as spatio-temporal (S-T) dynamics~\cite{li2010modeling,wang2018incremental,wang2019reinforcement,wang2019dissimilarity,Xu2019,wang2018sliding,wang2019spatial,meng2018evolutionary}. Abnormal behaviors or events in DPSs may cause controller failure or undesired system responses, both of which are harmful to the safe and reliable operation of the system. Without loss of generality, these abnormal behaviors or events can be considered as the result of an unknown abnormal S-T source $f(z,t)$ in the system dynamics, which can equivalently be treated as an S-T fault in the process. The abnormal source term $f(z,t)$ collects unknown terms that may cause undesirable system behaviors, including faults (actuator or sensor faults) occurring in the system, disturbances or noise coupled into the system dynamics, etc. Identification of the abnormal source term has potential applications in chemical process monitoring~\cite{el2006integrated,el2007actuator}, fault diagnosis of lithium-ion batteries~\cite{wei2019lyapunov}, and control of vibrating single-link flexible manipulator systems~\cite{zhao2019boundary}. On the one hand, detection and identification of the abnormal source of DPSs are important for industrial applications but have not been fully investigated; on the other hand, disturbance observer-based control (DOBC) of nonlinear parabolic PDE systems was studied in~\cite{wu2016disturbance,wu2016finite,wang2016low}, where the disturbance was governed by a known ordinary differential equation (ODE)~\cite{wu2016disturbance,wu2016finite} or partial differential equation (PDE)~\cite{wang2016low} exosystem with unknown initial conditions. The abnormal S-T source is similar to the source term in inverse source estimation problems for the wave equation, where the authors introduced a modulating functions-based method~\cite{asiri2017modulating} to address it. However, this approach requires full state measurement, as well as the computation of the measurement's derivative with respect to time, both of which are difficult to realize in industrial applications~\cite{Fischer2018}. Over the past few decades, actuator/sensor fault detection and diagnosis of DPSs have attracted more and more attention, and some research efforts have been made, see~\cite{demetriou2002model,el2006integrated,el2007actuator,demetriou2007adaptive,armaou2008robust,ghantasala2009robust}. However, fault detection and diagnosis of DPSs are inherently less complex than abnormal S-T source detection and identification, since the spatial distribution characteristic of the actuator/sensor fault was not considered: \begin{itemize} \item In the actuator fault case, the spatial distribution function of the fault was assumed to be the same as that of the actuator and was known a priori; \item In the sensor fault case, the spatial distribution characteristic of the fault was not considered for the most commonly used point-wise measurement sensors. \end{itemize} Hence actuator/sensor fault detection and diagnosis of DPSs are conducted only in the time domain, while abnormal S-T source detection and identification are conducted in both the time and space domains.
Existing approaches for fault detection and diagnosis of DPSs can be roughly classified into two categories: one uses a finite-dimensional ODE representation of the DPS; the other is based on the original PDE system. For example, finite-dimensional residual generators for fault detection of linear DPSs were proposed in~\cite{deutscher2016fault}. A novel model-based fault detection approach was developed in \cite{cai2016model}, where the state observer was based on the original PDE system. By applying the modulating functions-based approach~\cite{shinbrot1954analysis} to the original PDE system, the actuator or sensor fault identification was expressed algebraically in~\cite{fischer2016algebraic,fischer2017fault,fischer2018modulating}. Despite these innovative results, studies on abnormal S-T source detection and identification of DPSs remain relatively scarce, although they are of great significance from the application viewpoint. Recently, the abnormal S-T source detection of DPSs was investigated using data-driven approaches for engineering applications~\cite{feng2018detection,feng2019dynamic}. However, the abnormal S-T source identification problem, i.e.\ estimating the unknown source term $f(z,t)$, was not considered, although it is important for removing the abnormal source and restoring the process. Moreover, compared to abnormal S-T source detection, abnormal S-T source identification is more involved, since it requires dynamic tracking of the abnormal S-T source rather than determining detection thresholds for the generated residuals~\cite{feng2018detection,feng2019dynamic}. Adaptive observers~\cite{wang1996actuator,jiang2002adaptive,jiang2006fault,zhang2008adaptive} are efficient tools for the identification of unknown terms in dynamical systems modeled by ODEs. However, owing to the infinite-dimensional nature of DPSs, whose dynamical behaviors are modeled by PDEs, the adaptive observer design methodologies for ODE systems cannot be applied to DPSs directly. As one of the most representative classes of DPSs, a parabolic DPS can be decomposed, according to the spectrum of its spatial differential operator, into a finite-dimensional slow subsystem and an infinite-dimensional fast subsystem~\cite{christofides2012nonlinear}. This characteristic of parabolic DPSs provides a potential path for applying adaptive observers to the abnormal S-T source identification problem. The advantages of using adaptive observers for the abnormal S-T source identification problem over modulating functions-based approaches~\cite{asiri2017modulating} include: \begin{itemize} \item First, no state measurement is needed, so fewer sensors are required in industrial applications; \item Second, neither the time nor the space derivative of the measurement is required, which improves the identification performance. \end{itemize} Motivated by the above considerations, we make the first attempt to investigate the abnormal S-T source identification problem for a class of linear parabolic DPSs in this paper. An inverse S-T model for abnormal source identification is developed based on separation of variables. The inverse S-T model consists of an adaptive state observer for source identification and an adaptive source estimation algorithm. Both theoretical analysis and numerical simulations are provided to validate the feasibility and effectiveness of the proposed method.
The rest of this paper is organized as follows: the problem description is provided in Section~\ref{sec:Preliminaries and problem statement}. In Section~\ref{sec:Adaptive fault diagnosis observer design}, the inverse S-T model design for abnormal S-T source identification is presented. Theoretical analysis and numerical simulations are provided in Section~\ref{sec:teeoretic analysis} and Section~\ref{sec:Numerical Simulation}, respectively. Finally, the paper is concluded in Section~\ref{sec:Conclusion}. \section{Preliminaries and problem statement}\label{sec:Preliminaries and problem statement} \subsection{System description} \begin{figure}[!h] \centering \includegraphics[width=5cm]{figs/problem.pdf} \caption{System description.} \label{fig:problem} \end{figure} Consider a class of linear DPSs modeled by the following parabolic PDE: \begin{equation} \label{e1} \begin{aligned} \frac{{\partial x(z,t)}} {{\partial t}} &= {a_1}\frac{{\partial x(z,t)}} {{\partial z}} + {a_2}\frac{{{\partial ^2} {x(z,t)}}} {{\partial {z^2}}}+a_3x(z,t)\\ &+ {k_u}\bm{b}_u^T(z)\bm{u}(t) + f(z,t), \hfill \\ \bm{y}(t) &= \int_{{\alpha _1}}^{{\alpha _2}} {\bm{c}(z)k_y x(z,t)dz}, \hfill \\ \end{aligned} \end{equation} subject to the following boundary conditions: \begin{equation} \label{e2} \begin{gathered} c_1 {x}({\alpha _1},t)+d_1\frac{{\partial {x}}} {{\partial z}}({\alpha _1},t) = {r_1}, \hfill \\ c_2 {x}({\alpha _2},t)+d_2\frac{{\partial {x}}} {{\partial z}}({\alpha _2},t) = {r_2}, \hfill \\ \end{gathered} \end{equation} and with the following initial condition: \begin{equation} \label{e3} x(z,0) = {{{x}}_0}(z), \end{equation} where $x(z,t)$ denotes the state variable; $\left[ {{\alpha _1},{\alpha _2}} \right] \subset \mathbb{R}$ is the spatial domain of the system; $z \in \left[ {{\alpha _1},{\alpha _2}} \right]$ is the spatial coordinate; $t \in \left[ {0,\infty } \right)$ denotes time; $\bm{u}(t) \in {\mathbb{R}^{{n_u}}}$ denotes the vector of manipulated inputs; $f(z,t) \in \mathbb{R}$ denotes the unknown abnormal S-T source in the DPS, which is the root cause of abnormal behaviors or events. The term \textbf{``abnormal source''} is introduced to distinguish it from the manipulated input $\bm{u}(t)$, which can be considered a \textbf{``normal source''}. The abnormal S-T source $f(z,t)$ is independent of the state variable $x(z,t)$ and cannot be measured; $\bm{y}(t) \in {\mathbb{R}^{{n_y}}}$ denotes the system output; $\partial x/\partial z$ and ${\partial ^2}x/\partial {z^2}$ denote the first- and second-order spatial derivatives of $x$, respectively; $a_1, a_2, a_3, k_u, k_y, c_1, c_2, d_1, d_2, r_1$, and $r_2$ are constant coefficients; the $i$th element of the known smooth function $\bm{b}_u(z)\in \mathbb{R}^{n_u}$ describes how the $i$th element of the control action $\bm{u}(t)$ is distributed in $\left[ {{\alpha _1},{\alpha _2}} \right]$; the $i$th element of the known smooth function $\bm{c}(z)\in \mathbb{R} ^ {n_y}$ is determined by the shape (point or distributed) of the $i$th measurement sensor; and ${{{x}}_0}(z)$ is the initial condition. A schematic of the system description is shown in Fig.~\ref{fig:problem}. The following Hilbert space is defined throughout this paper: \begin{equation*} {\mathcal{H}}\mathop = \limits^\Delta \mathcal{L}_2([{\alpha _1},{\alpha _2}];\mathbb{R}).
\end{equation*} In this Hilbert space, the inner product and norm are defined as: \begin{equation*} \begin{aligned} < {{ {x}}_1(\cdot)},{{{x}}_2(\cdot)} > \mathop = \limits^\Delta \int_{{\alpha _1}}^{{\alpha _2}} {{ {x}}_1(z)} { {x}}_2(z)dz,\\ {\left\| { {x_1(\cdot)}} \right\|_{2}}\mathop = \limits^\Delta < {{ {x}}_1(\cdot)},{{{x}}_1(\cdot)} > ^{1/2},\\ \end{aligned} \end{equation*} where ${x}_1(\cdot)$ and ${x}_2(\cdot)$ are two elements of ${\mathcal{H}}$. \subsection{Problem statement} The problem investigated in this paper can be summarized as follows: \begin{adjustwidth}{2em}{2em} Utilize the system output $\bm{y}(t)$ to design an inverse S-T model to identify the abnormal S-T source ${f}(z,t)$ of the system, \end{adjustwidth} subject to the system model in (\ref{e1}) with the boundary conditions in (\ref{e2}) and the initial condition in (\ref{e3}). \section{Inverse S-T model design for abnormal S-T source identification}\label{sec:Adaptive fault diagnosis observer design} In this section, an inverse S-T model is developed to infer the unknown source term $f(z,t)$ from the system output $\bm{y}(t)$, without any state measurement. \begin{figure}[!h] \centering \includegraphics[width=5cm]{figs/methodology.pdf} \caption{Framework of the inverse S-T model.} \label{fig:methodology} \end{figure} Motivated by adaptive observer theory~\cite{wang1996actuator,jiang2002adaptive,jiang2006fault,zhang2008adaptive}, the inverse S-T model in Fig.~\ref{fig:methodology} is built around an adaptive state observer for source identification. Considering the infinite-dimensional nature of the linear DPSs described in (\ref{e1})-(\ref{e3}), an approximate finite-dimensional model that captures the dominant dynamics of the original system is first derived for the adaptive state observer design: \begin{equation} \label{e11} \begin{aligned} {{\dot {\bm{x}}}_s}(t) &= {{\bm{A}}_s}{\bm{x}_s}(t)+ {\bm{B}_{u,s}}\bm{u}(t)+\bm{f}_s(t), \hfill \\ {\bm{y}}_s(t) &= {\bm{C}_{s}{\bm{x}_s}(t)}. \hfill \\ \end{aligned} \end{equation} The advantage is that finite-dimensional observers can then be applied to the abnormal S-T source identification rather than infinite-dimensional ones, avoiding extra design effort. In industrial applications, such dominant models are sufficient for satisfactory performance. Details of the model reduction can be found in the Appendix.
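To make the construction concrete, a minimal Python sketch of such a modal reduction is given below. It assumes $a_1=0$ and homogeneous Dirichlet boundary conditions on $[0,\pi]$, so that the eigenfunctions of the spatial operator are $\sqrt{2/\pi}\sin(iz)$; the coefficients and the spatial profiles of the actuator and sensor are illustrative placeholders, not taken from the simulation study of this paper:
\begin{verbatim}
import numpy as np

m = 5                                       # order of the slow subsystem
a2, a3, k_u, k_y = 1.0, 0.5, 1.0, 1.0       # placeholder coefficients
z = np.linspace(0.0, np.pi, 2001)
dz = z[1] - z[0]
quad = lambda f: np.sum(f) * dz             # simple quadrature on the grid

# eigenfunctions of a2*d^2/dz^2 + a3: phi_i(z) = sqrt(2/pi) sin(i z)
phi = [np.sqrt(2.0/np.pi) * np.sin((i + 1)*z) for i in range(m)]
b_u = [np.exp(-10.0*(z - np.pi/2)**2)]      # actuator profile (placeholder)
c   = [np.exp(-10.0*(z - 1.0)**2)]          # sensor shape (placeholder)

A_s  = np.diag([a3 - a2*(i + 1)**2 for i in range(m)])
B_us = k_u * np.array([[quad(phi[i]*b) for b in b_u] for i in range(m)])
C_s  = k_y * np.array([[quad(ci*phi[j]) for j in range(m)] for ci in c])
\end{verbatim}
Here $\bm{A}_s$ collects the $m$ dominant (slowest) eigenvalues, while $\bm{B}_{u,s}$ and $\bm{C}_s$ are obtained by projecting the actuator and sensor profiles onto the retained eigenfunctions.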
\begin{remark}\textbf{Time/space decoupled form of the abnormal S-T source}\\ \emph{ From the definition of $\bm{f}_s(t)$ and $\bm{f}_f(t)$ in (\ref{e8}), it is easy to obtain that: \begin{equation*} f(z,t) = {\bm{\phi} ^T}(z)\bm{f}(t) = \bm{\phi} _s^T(z){\bm{f}_s}(t) + \bm{\phi} _f^T(z){\bm{f}_f}(t). \end{equation*} In this manner, the unknown abnormal S-T source can be described by the products of the basis functions ${\bm{\phi}}(z)$ and the temporal coefficients $\bm{f}(t)$. Hence the abnormal S-T source identification is transformed into the identification of the temporal coefficients $\bm{f}(t)$.} \end{remark} Motivated by the adaptive fault diagnosis observers introduced in~\cite{wang1996actuator,jiang2002adaptive,jiang2006fault,zhang2008adaptive}, an adaptive state observer for (\ref{e11}) is constructed as follows: \begin{equation} \label{e12} \begin{aligned} {{\dot {\hat{\bm{x}}}_s}(t)} &= {{\bm{A}}_s}{\hat{\bm{x}}_s}(t)+ {\bm{B}_{u,s}}\bm{u}(t)+\hat{\bm{f}}_s(t)-\bm{L}(\hat{\bm{y}}_s(t)-\bm{y}(t)), \hfill \\ {\hat{\bm{y}}}_s(t) &= {\bm{C}_{s}{\hat{\bm{x}}_s}(t)}, \hfill \\ \end{aligned} \end{equation} where $\hat{{\bm{x}}}_s(t) \in \mathbb{R}^m$, $\hat{\bm{y}}_s(t)\in \mathbb{R}^{n_y}$, and $\hat{\bm{f}}_s(t)\in \mathbb{R}^{m}$ are the estimates of ${{\bm{x}}}_s(t)$, ${{\bm{y}}}_s(t)$, and ${{\bm{f}}}_s(t)$, respectively, and $\bm{L}\in \mathbb{R}^{m \times n_y}$ is the observer gain matrix. Define $\bm{e}_x(t)={\hat{\bm{x}}_s}(t)-{\bm{x}_s}(t)$, $\bm{e}_y(t)={\hat{\bm{y}}_s}(t)-{\bm{y}}(t)$, and $\bm{e}_f(t)={\hat{\bm{f}}_s}(t)-{\bm{f}_s}(t)$; the error dynamics are then obtained by combining (\ref{e11}) and (\ref{e12}): \begin{equation} \label{e13} \begin{aligned} {{\dot {{\bm{e}}}_x}(t)} &= ({{\bm{A}}_s}-\bm{L}\bm{C}_s){{\bm{e}}_x}(t)+{\bm{e}}_f(t)+\bm{L}\bm{y}_f(t), \hfill \\ {{\bm{e}}}_y(t) &= {\bm{C}_{s}{{\bm{e}}_x}(t)}-\bm{y}_f(t). \hfill \\ \end{aligned} \end{equation} \begin{remark}\emph{ Note that the output error $\bm{e}_y(t)$ used for the adaptive state observer design is defined as \begin{displaymath} \bm{e}_y(t)={\hat{\bm{y}}_s}(t)-{\bm{y}}(t) \end{displaymath} rather than as the error between the slow-subsystem output estimate ${\hat{\bm{y}}_s}(t)$ and the slow-subsystem output ${\bm{y}}_s(t)$. The reason is that the slow-subsystem output ${\bm{y}}_s(t)$ in (\ref{e11}) cannot be obtained directly, while the original PDE system output $\bm{y}(t)$ in (\ref{e1}) can be obtained from the measurement sensors. The price to pay is that $\bm{e}_y(t)\ne{\bm{C}_{s}{{\bm{e}}_x}(t)}$, unlike in~\cite{wang1996actuator,jiang2002adaptive,jiang2006fault,zhang2008adaptive}, where equality holds. Therefore, as shown in (\ref{e13}), the output of the fast subsystem $\bm{y}_f(t)$ must be accounted for in the adaptive state observer design, which introduces extra errors into the source identification. One practical remedy is to select a sufficiently high order $m$ for the slow subsystem.} \end{remark} Then the following adaptive source estimation algorithm is proposed: \begin{equation}\label{e17} {\dot {\hat{\bm{f}}}}_s(t) = - \bm{\Gamma F}({\dot{\bm{e}}_y}(t)+\sigma{{\bm{e}}_y}(t)), \end{equation} where $\bm{F}\in \mathbb{R} ^ {m \times n_y}$ and the symmetric positive definite matrix $\bm{\Gamma}\in \mathbb{R} ^ {m \times m}$ is the learning rate. \begin{remark}\emph{ The adaptive source estimation algorithm (\ref{e17}) consists of a proportional term and an integral term in ${\bm{e}}_y(t)$; up to initial conditions, it can be rewritten as: \begin{displaymath} {{\hat{\bm{f}}}}_s(t) = - \bm{\Gamma F}({{\bm{e}}_y}(t)+\sigma\int_0^t {{\bm{e}_y}(\tau )} d\tau).
\end{displaymath} The proportional term is introduced to speed up the abnormal source estimation.} \end{remark} \begin{figure}[!h] \centering \includegraphics[width=9cm]{figs/schematic.pdf} \caption{Schematic graph of the proposed inverse S-T model-based method for source identification.} \label{fig:schematic} \end{figure} Finally, the abnormal S-T source estimate is obtained by \textbf{time/space synthesis}: \begin{equation} \label{eqn:t/ssynthesis} \hat{f}(z,t)=\bm{\phi} _s^T(z){\hat{\bm{f}}_s}(t). \end{equation} \begin{remark}\emph{ Note that a general assumption on the abnormal S-T source $f(z,t)$ is that it can be written in the following basis expansion form: \begin{displaymath} f(z,t)=\sum\limits_{i = 1}^n {{f_i}(t){\upsilon _i}(z)},\; 1\leqslant n<\infty \end{displaymath} with a finite number of basis functions $\left\{ {{\upsilon _i}(z)} \right\}_{i = 1}^n$, where $n$ is unknown for the abnormal S-T source identification. Unlike the polynomial-type basis functions used in~\cite{asiri2017modulating}, the eigenfunctions $\left\{ {{\phi _i}(z)} \right\}_{i = 1}^n$ are used as the basis functions here, i.e.\ ${\upsilon _i}(z) = {\phi _i}(z),i = 1,\cdots, n.$ When the order of the slow subsystem satisfies $m\ge n$, one obtains $\bm{f}_f(t)=0$. Since the inverse S-T model is based on an approximate finite-dimensional model, $\bm{f}_f(t)$ is neglected in the abnormal S-T source identification. However, to further enhance the identification performance, one can select a sufficiently large $m$ under this assumption, which also reduces the influence of neglecting the fast subsystem.} \end{remark} A schematic of the proposed inverse S-T model-based method for abnormal S-T source identification is given in Fig.~\ref{fig:schematic}. The inverse S-T model consists of the adaptive state observer for source identification in (\ref{e12}) and the adaptive source estimation algorithm in (\ref{e17}). Note that no state measurement is used in the proposed inverse S-T model. \section{Theoretical analysis}\label{sec:teeoretic analysis} The following assumptions and lemmas are needed for the proposed inverse S-T model design. \begin{assumption}\emph{ The output of the $\bm{x}_f$-subsystem, $\bm{y}_f(t)$, and its time derivative ${{\dot{\bm{y}}_f}(t)}$ satisfy: \begin{equation*} \begin{aligned} \left\| {{\bm{y}_f}(t)} \right\|_{peak} &\triangleq \sup _{t}| | {\bm{y}_f}(t)| |<\infty, \hfill \\ {\left\| {{\dot{\bm{y}}_f}(t)} \right\|_{peak}} &\triangleq \sup _{t}| | {\dot{\bm{y}}_f}(t)| |<\infty,\; \forall t \geqslant 0 \hfill \\ \end{aligned} \end{equation*} where $\left\| \cdot \right\|_{peak}$ and $\left\| \cdot \right\|$ denote the so-called peak norm~\cite{ding2008model} and the Euclidean norm, respectively.} \end{assumption} \begin{assumption}\emph{ The time derivative of $\bm{f}_s(t)$ is norm-bounded, i.e.
\begin{equation*} \left\| {{\dot{\bm{f}}_s}(t)} \right\|^2 \leqslant {f_1},\;\forall t \ge 0 \end{equation*} where $f_1 \in \left[ {0,\infty } \right)$ is a constant.} \end{assumption} \begin{assumption}\emph{ The abnormal S-T source $f(z,t)$ satisfies: \begin{equation*} \left\| {f(z,t)} \right\|_2^2 = {\left\| {{\bm{f}_s}(t)} \right\|^2} + {\left\| {{\bm{f}_f}(t)} \right\|^2} \leqslant {f_2},\;\forall t \ge 0 \end{equation*} where $f_2 \in \left[ {0,\infty } \right)$ is a constant.} \end{assumption} \begin{assumption}\emph{ $(\bm{A}_s,\bm{C}_s)$ is observable and $\bm{C}_s$ is of full column rank.} \end{assumption} \begin{remark}\emph{ In Assumption 4, the requirement that $\bm{C}_s$ be of full column rank is commonly imposed in fault isolation; it is also known as the output separability condition~\cite{white1987detection,liu1997fault}. It can be met by selecting appropriate measurement sensors and dominant modes.} \end{remark} \begin{lemmax} \cite{jiang2002adaptive}\emph{ For a given positive scalar $\mu>0$ and a symmetric positive definite matrix $\bm{P}$, the following inequality holds: \begin{displaymath} 2{\bm{x}^T}\bm{y} \leqslant \frac{1}{\mu }{\bm{x}^T}\bm{Px} + \mu {\bm{y}^T}{\bm{P}^{ - 1}}\bm{y}, \;\bm{x}, \bm{y} \in {\mathbb{R}^n}. \end{displaymath}} \end{lemmax} \begin{lemmax} \cite{ioannou2012robust}\emph{ Let $V(t)$ and $g(t)$ be real functions. Then \begin{displaymath} \dot{V}(t) \leq-\alpha V(t)+g(t), \forall t \geq 0 \end{displaymath} implies that \begin{displaymath} V(t) \leq e^{-\alpha t} V(0)+\int_{0}^{t} e^{-\alpha(t-\tau)} g(\tau) d \tau, \quad \forall t \geq 0 \end{displaymath} for any finite constant $\alpha$. } \end{lemmax} \begin{theoremx}\emph{ Under Assumptions 1, 2, and 4, given scalars $\mu_1, \mu_2, \sigma>0$, if there exist symmetric positive definite matrices $\bm{P}\in \mathbb{R} ^ {m \times m}$, $\bm{G}_1\in \mathbb{R} ^ {m \times m}$, $\bm{G}_2\in \mathbb{R} ^ {m \times m}$, matrices $\bm{X}\in \mathbb{R} ^ {m \times n_y}$, $\bm{F}\in \mathbb{R} ^ {m \times n_y}$, and a positive constant $\varepsilon_1$ such that the following linear matrix inequality (LMI) is satisfied: \begin{equation} \label{e15} \begin{aligned} \bm{\Xi} \mathop {=}\limits^\Delta \left[ {\begin{array}{*{20}{c}} {\bm{\Xi}_{11}} & \;\; * & \;\; * \\ \frac{1}{\sigma}(\bm{X}\bm{C}_s-\bm{P}\bm{A}_s) & \;\; {\bm{\Xi}_{22}}& \;\;* \\ {{\bm{X}^T}} & \;\; \bm{F}^T-\frac{1}{\sigma}{{\bm{X}^T}}& \;\;\;\;\;\;-\varepsilon_1\bm{I} \\ \end{array} } \right]< 0, \end{aligned} \end{equation} where \begin{displaymath} \bm{\Xi}_{11}\mathop {=}\limits^\Delta \bm{P}{{\bm{A}}_s} + {\bm{A}}_s^T\bm{P} - \bm{X}\bm{C}_s - \bm{C}_s^T{\bm{X}^T}, \end{displaymath} \begin{displaymath} {\bm{\Xi}_{22}}\mathop {=}\limits^\Delta -\frac{2}{\sigma}\bm{P}+\frac{1}{\sigma \mu_1}\bm{G}_1+\frac{1}{\sigma\mu_2}\bm{G}_2, \end{displaymath} the observer gain is recovered from \begin{equation}\label{e16} \bm{X}=\bm{PL}, \end{equation} i.e.\ $\bm{L}=\bm{P}^{-1}\bm{X}$, and the following condition holds: \begin{equation}\label{e16+} \bm{P}=\bm{FC}_s, \end{equation} then the adaptive source estimation algorithm in (\ref{e17}), with the symmetric positive definite learning rate $\bm{\Gamma}\in \mathbb{R} ^ {m \times m}$, renders $\bm{e}_x(t)$ and $\bm{e}_f(t)$ uniformly ultimately bounded (UUB).} \end{theoremx} \textbf{Proof}. Choose the Lyapunov candidate as follows: \begin{equation}\label{e18} V(t)=\bm{e}^T_x(t)\bm{Pe}_x(t)+\frac{1}{\sigma}\bm{e}^T_f(t)\bm{\Gamma}^{-1}\bm{e}_f(t).
\end{equation} Combining (\ref{e13}), (\ref{e17}), (\ref{e16}), and (\ref{e16+}), the derivative of $V(t)$ with respect to time is: \begin{equation}\label{e20} \begin{aligned} \dot {V}(t) &= \bm{e}_x^T(t)(\bm{P}{{\bm{A}}_s} + {\bm{A}}_s^T\bm{P} - \bm{X}{\bm{C}_s} - \bm{C}_s^T{\bm{X}^T}){\bm{e}_x}(t)\\ &+ \frac{2}{\sigma}\bm{e}_f^T(t)(\bm{X}{\bm{C}_s}-\bm{PA}_s){\bm{e}_x}(t)+ 2\bm{y}_f(t)^T{\bm{X}^T}{\bm{e}_x}(t)\\ &+ 2\bm{y}_f^T(t)(\bm{F}^T-\frac{1}{\sigma}\bm{X}^T){\bm{e}_f}(t)- \frac{2}{\sigma}\bm{e}_f^T(t){\bm{\Gamma} ^{ - 1}}{\dot {\bm{f}}}_s(t)\\ &-\frac{2}{\sigma}\bm{e}_f^T(t)\bm{Pe}_f(t)+\frac{2}{\sigma}\bm{e}_f^T(t)\bm{F}\dot{\bm{y}}_f(t). \end{aligned} \end{equation} Combining Lemma 1, Assumption 1, and Assumption 2, it can be obtained that: \begin{equation}\label{e21} \begin{aligned} - \frac{2}{\sigma}\bm{e}_f^T(t){\bm{\Gamma} ^{ - 1}}{{\dot {\bm{f}}}_s}(t) &\leqslant \frac{1} {\sigma \mu_1 }\bm{e}_f^T(t)\bm{G}_1\bm{e}_f(t) \\ &+ \frac{\mu_1}{\sigma} \dot {\bm{f}}_s^T(t){\bm{\Gamma} ^{ - 1}}\bm{G}^{-1}_1{\bm{\Gamma} ^{ - 1}}{{\dot {\bm{f}}}_s}(t) \hfill \\ &\leqslant \frac{1}{\sigma \mu_1 }\bm{e}_f^T(t)\bm{G}_1\bm{e}_f(t)\\ &+ \frac{\mu_1}{\sigma} f_1{\lambda _{\max }}({\bm{\Gamma} ^{ - 1}}\bm{G}^{-1}_1{\bm{\Gamma} ^{ - 1}}). \hfill \\ \end{aligned} \end{equation} \begin{equation}\label{e21-} \begin{aligned} \frac{2}{\sigma}\bm{e}_f^T(t)\bm{F}\dot{\bm{y}}_f(t) &\leqslant \frac{1} {\sigma\mu_2 }\bm{e}_f^T(t)\bm{G}_2\bm{e}_f(t)\\ &+ \frac{\mu_2}{\sigma}\dot {\bm{y}}_f^T(t)\bm{F}^T\bm{G}_2^{-1}\bm{F}{{{\dot{\bm{y}}}}_f}(t) \hfill \\ &\leqslant \frac{1}{\sigma\mu_2 }\bm{e}_f^T(t)\bm{G}_2\bm{e}_f(t)\\ &+ \frac{\mu_2}{\sigma}{\lambda _{\max }}(\bm{F}^T\bm{G}_2^{-1}\bm{F}){\left\| {{\dot{\bm{y}}_f}(t)} \right\|_{peak}^2}. \hfill \\ \end{aligned} \end{equation} Substituting (\ref{e21}) and (\ref{e21-}) into (\ref{e20}) and considering Assumption 1 yields: \begin{equation} \label{e22} \dot {V}(t) \leqslant {\bm{\xi} ^T}(t)\bm{\Xi} \bm{\xi} (t)+\beta+{\varepsilon _1}{\left\| {{{\bm{y}}_f}(t)} \right\|_{peak}^2}+\varepsilon_2{\left\| {{\dot{\bm{y}}_f}(t)} \right\|_{peak}^2}, \end{equation} where \begin{equation*} \begin{aligned} \bm{\xi} (t) &\mathop {=}\limits^\Delta \left[ \begin{gathered} {\bm{e}_x}(t) \hfill \\ {\bm{e}_f}(t) \hfill \\ {\bm{y}_f}(t) \hfill \\ \end{gathered} \right], \\ \bm{\Xi} &\mathop {=}\limits^\Delta \left[ {\begin{array}{*{20}{c}} {\bm{\Xi}_{11}} & \;\; * & \;\; * \\ \frac{1}{\sigma}(\bm{X}\bm{C}_s-\bm{P}\bm{A}_s) & \;\; {\bm{\Xi}_{22}}& \;\;* \\ {{\bm{X}^T}} & \;\; \bm{F}^T-\frac{1}{\sigma}{{\bm{X}^T}}& \;\;\;\;\;\;-\varepsilon_1\bm{I} \\ \end{array} } \right],\\ \bm{\Xi}_{11}&\mathop {=}\limits^\Delta \bm{P}{{\bm{A}}_s} + {\bm{A}}_s^T\bm{P} - \bm{X}\bm{C}_s - \bm{C}_s^T{\bm{X}^T},\\ {\bm{\Xi}_{22}}&\mathop {=}\limits^\Delta -\frac{2}{\sigma}\bm{P}+\frac{1}{\sigma \mu_1}\bm{G}_1+\frac{1}{\sigma\mu_2}\bm{G}_2,\\ \beta&\mathop {=}\limits^\Delta \frac{\mu_1}{\sigma} f_1{\lambda _{\max }}({\bm{\Gamma} ^{ - 1}}\bm{G}^{-1}_1{\bm{\Gamma} ^{ - 1}}), \varepsilon_2\mathop {=}\limits^\Delta \frac{\mu_2}{\sigma}{\lambda _{\max }}(\bm{F}^T\bm{G}_2^{-1}\bm{F}).
\end{aligned} \end{equation*} Hence, when $\bm{\Xi}<0$, one can obtain that: \begin{equation} \label{e23} \begin{aligned} \dot V(t) &\leqslant - {\lambda _{\min }}( - \bm{\Xi} ){\left\| {\bm{\xi} (t)} \right\|^2} + \beta+{\varepsilon _1}{\left\| {{{\bm{y}}_f}(t)} \right\|_{peak}^2}\\ &+\varepsilon_2{\left\| {{\dot{\bm{y}}_f}(t)} \right\|_{peak}^2} \\ &= - {\lambda _{\min }}( - \bm{\Xi} )({\left\| {{\bm{e}_x}(t)} \right\|^2} + {\left\| {{\bm{e}_f}(t)} \right\|^2}+{\left\| {{\bm{y}_f}(t)} \right\|^2})\\ &+\beta+{\varepsilon _1}{\left\| {{{\bm{y}}_f}(t)} \right\|_{peak}^2}+\varepsilon_2{\left\| {{\dot{\bm{y}}_f}(t)} \right\|_{peak}^2}\\ &\leqslant- {\lambda _{\min }}( - \bm{\Xi} )({\left\| {{\bm{e}_x}(t)} \right\|^2} + {\left\| {{\bm{e}_f}(t)} \right\|^2})\\ & +\beta+{\varepsilon _1}{\left\| {{{\bm{y}}_f}(t)} \right\|_{peak}^2}+\varepsilon_2{\left\| {{\dot{\bm{y}}_f}(t)} \right\|_{peak}^2}. \end{aligned} \end{equation} According to the definition of $V(t)$ in (\ref{e18}), it can be derived that: \begin{equation} \label{e24} \begin{aligned} {V}(t) &\leqslant {\lambda _{\max }}(\bm{P}){\left\| {{\bm{e}_x}(t)} \right\|^2} + \frac{1}{\sigma}{\lambda _{\max }}({\bm{\Gamma} ^{ - 1}}){\left\| {{\bm{e}_f}(t)} \right\|^2} \\ &\leqslant \max \{ {\lambda _{\max }}(\bm{P}), \frac{1}{\sigma}{\lambda _{\max }}({\bm{\Gamma} ^{ - 1}})\} ({\left\| {{\bm{e}_x}(t)} \right\|^2} + {\left\| {{\bm{e}_f}(t)} \right\|^2}). \\ \end{aligned} \end{equation} Combining (\ref{e23}) and (\ref{e24}), it can be obtained that: \begin{equation} \label{e25} \dot{V}(t)\leqslant-\alpha V(t)+\beta+{\varepsilon _1}{\left\| {{{\bm{y}}_f}(t)} \right\|_{peak}^2}+\varepsilon_2{\left\| {{\dot{\bm{y}}_f}(t)} \right\|_{peak}^2}, \end{equation} where \begin{displaymath} \alpha=\frac{{\lambda _{\min }}( - \bm{\Xi} )}{\max \{ {\lambda _{\max }}(\bm{P}), \frac{1}{\sigma}{\lambda _{\max }}({\bm{\Gamma} ^{ - 1}})\}}.
\end{displaymath} By Lemma 2, it can be obtained that: \begin{equation} \label{e25+} \begin{aligned} {V}(t)&\leqslant e^{-\alpha t}V(0)+(\beta+{\varepsilon _1}{\left\| {{{\bm{y}}_f}(t)} \right\|_{peak}^2}+\varepsilon_2{\left\| {{\dot{\bm{y}}_f}(t)} \right\|_{peak}^2})\\ &\int_{0}^{t} e^{-\alpha(t-\tau)} d \tau\\ &=e^{-\alpha t}V(0)+(\beta+{\varepsilon _1}{\left\| {{{\bm{y}}_f}(t)} \right\|_{peak}^2}+\varepsilon_2{\left\| {{\dot{\bm{y}}_f}(t)} \right\|_{peak}^2})\\ &\frac{1}{\alpha}\left(1-e^{-\alpha t}\right)\\ &\leqslant e^{-\alpha t}V(0)+(\frac{\beta}{\alpha}+\frac{\varepsilon _1}{\alpha}{\left\| {{{\bm{y}}_f}(t)} \right\|_{peak}^2}+\frac{\varepsilon_2}{\alpha}{\left\| {{\dot{\bm{y}}_f}(t)} \right\|_{peak}^2})\\ &\sup _{t \in[0, \infty)}\left\{1-e^{-\alpha t}\right\}\\ &\leqslant e^{-\alpha t}V(0)+(\frac{\beta}{\alpha}+\frac{\varepsilon _1}{\alpha}{\left\| {{{\bm{y}}_f}(t)} \right\|_{peak}^2}+\frac{\varepsilon_2}{\alpha}{\left\| {{\dot{\bm{y}}_f}(t)} \right\|_{peak}^2})\\ &\leqslant e^{-\alpha t}V(0)+(\sqrt{\frac{\beta}{\alpha}}+\sqrt{\frac{\varepsilon _1}{\alpha}}{\left\| {{{\bm{y}}_f}(t)} \right\|_{peak}}\\ &+\sqrt{\frac{\varepsilon_2}{\alpha}}{\left\| {{\dot{\bm{y}}_f}(t)} \right\|_{peak}})^2.\\ \end{aligned} \end{equation} Meanwhile, considering (\ref{e18}), it can be derived that: \begin{equation} \label{e26-} \begin{aligned} {V}(t) &\ge {\lambda _{\min }}(\bm{P}){\left\| {{\bm{e}_x}(t)} \right\|^2} + \frac{1}{\sigma}{\lambda _{\min }}({\bm{\Gamma} ^{ - 1}}){\left\| {{\bm{e}_f}(t)} \right\|^2} \\ &\ge \min \{ {\lambda _{\min }}(\bm{P}), \frac{1}{\sigma}{\lambda _{\min }}({\bm{\Gamma} ^{ - 1}})\} ({\left\| {{\bm{e}_x}(t)} \right\|^2} + {\left\| {{\bm{e}_f}(t)} \right\|^2}). \\ \end{aligned} \end{equation} Combining (\ref{e25+}) with (\ref{e26-}), it can be derived that: \begin{equation} \label{e25++} \begin{aligned} {\left\| {{\bm{e}_x}(t)} \right\|^2} + {\left\| {{\bm{e}_f}(t)} \right\|^2}&\leqslant \frac{e^{-\alpha t}V(0)}{\min \{ {\lambda _{\min }}(\bm{P}), \frac{1}{\sigma}{\lambda _{\min }}({\bm{\Gamma} ^{ - 1}})\}}+\rho^2\\ &\leqslant (\sqrt{\frac{e^{-\alpha t}V(0)}{\min \{ {\lambda _{\min }}(\bm{P}), \frac{1}{\sigma}{\lambda _{\min }}({\bm{\Gamma} ^{ - 1}})\}}}+\rho)^2 \end{aligned} \end{equation} where \begin{equation} \begin{aligned} \label{e25+++} \rho&=\sqrt{\frac{1}{\min \{ {\lambda _{\min }}(\bm{P}), \frac{1}{\sigma}{\lambda _{\min }}({\bm{\Gamma} ^{ - 1}})\}}}(\sqrt{\frac{\beta}{\alpha}}+\sqrt{\frac{\varepsilon _1}{\alpha}}{\left\| {{{\bm{y}}_f}(t)} \right\|_{peak}}\\ &+\sqrt{\frac{\varepsilon_2}{\alpha}}{\left\| {{\dot{\bm{y}}_f}(t)} \right\|_{peak}}). \end{aligned} \end{equation} Hence it can be further derived that \begin{equation} \label{e25++++} \begin{aligned} {\left\| {{\bm{e}_x}(t)} \right\|}&\leqslant \sqrt{\frac{e^{-\alpha t}V(0)}{\min \{ {\lambda _{\min }}(\bm{P}), \frac{1}{\sigma}{\lambda _{\min }}({\bm{\Gamma} ^{ - 1}})\}}}+\rho,\\ {\left\| {{\bm{e}_f}(t)} \right\|}&\leqslant \sqrt{\frac{e^{-\alpha t}V(0)}{\min \{ {\lambda _{\min }}(\bm{P}), \frac{1}{\sigma}{\lambda _{\min }}({\bm{\Gamma} ^{ - 1}})\}}}+\rho. \end{aligned} \end{equation} Recalling Assumption 1, it follows that $0\leqslant\rho<\infty$. Since the exponential term in (\ref{e25++++}) vanishes as $t$ grows, for any $\epsilon>0$ there exists a finite time $T>0$ such that ${\left\| {{\bm{e}_x}(t)} \right\|}\leqslant \rho+\epsilon$ and ${\left\| {{\bm{e}_f}(t)} \right\|}\leqslant \rho+\epsilon$ for all $t>T$. That is to say, $\bm{e}_x(t)$ and $\bm{e}_f(t)$ are UUB with ultimate bound $\rho$. This completes the proof.\;\;$\blacksquare$
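For the interested reader, the comparison bound of Lemma 2, which underpins the step from (\ref{e25}) to (\ref{e25+}), can be sanity-checked numerically. The following Python sketch integrates a trajectory satisfying $\dot V(t) \leqslant -\alpha V(t)+g(t)$ and checks that it never exceeds the right-hand side of the lemma; the constants and the function $g(t)$ below are illustrative choices made for this check only, not values from this paper.
\begin{verbatim}
import numpy as np

# Illustrative check of Lemma 2: if V'(t) <= -alpha*V(t) + g(t), then
# V(t) <= exp(-alpha*t)*V(0) + int_0^t exp(-alpha*(t - tau)) g(tau) dtau.
alpha, V0, dt, T = 2.0, 5.0, 1e-3, 10.0
t = np.arange(0.0, T, dt)
g = 1.0 + 0.5 * np.sin(t)

# Forward-Euler trajectory of a strictly dissipative instance,
# V' = -alpha*V + 0.9*g(t) <= -alpha*V + g(t).
V = np.empty_like(t)
V[0] = V0
for k in range(len(t) - 1):
    V[k + 1] = V[k] + dt * (-alpha * V[k] + 0.9 * g[k])

# Right-hand side of Lemma 2, accumulated by the first-order recursion
# I(t + dt) = exp(-alpha*dt) * I(t) + g(t) * dt.
I = np.zeros_like(t)
for k in range(len(t) - 1):
    I[k + 1] = np.exp(-alpha * dt) * I[k] + dt * g[k]
bound = np.exp(-alpha * t) * V0 + I

print("bound violated:", bool(np.any(V > bound + 1e-6)))  # expected: False
\end{verbatim}
With these demonstration values the trajectory settles well below the bound, as the lemma predicts.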
\begin{corollaryx}\emph{ The abnormal S-T source estimation error of the original PDE system in (\ref{e1})-(\ref{e3}) can be obtained as \begin{equation*} {e_f}(z,t) = \hat f(z,t) - f(z,t), \end{equation*} where $\hat{f}(z,t)=\bm{\phi} _s^T(z){\hat{\bm{f}}_s}(t)$, as shown in (\ref{eqn:t/ssynthesis}). Hence the square of the norm for the abnormal S-T source estimation error in $\mathcal{H}$ can be obtained as \begin{equation*} \begin{aligned} \left\| {{e_f}(z,t)} \right\|_2^2 &= \left\| {\hat f(z,t) - f(z,t)} \right\|_2^2 \hfill \\ &= \left\| {\bm{\phi} _s^T(z){{\hat {\bm{f}}}_s}(t) - \bm{\phi} _s^T(z){\bm{f}_s}(t) - \bm{\phi} _f^T(z){\bm{f}_f}(t)} \right\|_2^2 \hfill \\ &= \left\| {\bm{\phi} _s^T(z){\bm{e}_f}(t) - \bm{\phi} _f^T(z){\bm{f}_f}(t)} \right\|_2^2 \hfill \\ &= {\left\| {{\bm{e}_f}(t)} \right\|^2} + {\left\| {{\bm{f}_f}(t)} \right\|^2}, \hfill \\ \end{aligned} \end{equation*} where the last equality follows from the orthonormality of the basis functions $\{\phi_j(z)\}$. Combining Theorem 1 with Assumption 3, it can be concluded that the abnormal S-T source estimation error ${e_f}(z,t)$ is UUB if the conditions in Theorem 1 are satisfied, with ultimate bound $\rho+\sqrt{f_2}$.} \end{corollaryx} \begin{remark}\emph{ The inequality (\ref{e15}) can be solved by the MATLAB LMI toolbox. However, the equality condition in (\ref{e16+}) makes the problem difficult to solve. Therefore, this condition is relaxed into the following problem: Minimize $\eta$ subject to (\ref{e15}), (\ref{e16}) and} \begin{equation} \label{e26+} \left[ {\begin{array}{*{20}{c}} {\eta \bm{I}} & * \\ {{{(\bm{P} - \bm{F}{\bm{C}_s})}^T}} & {\eta \bm{I}} \\ \end{array} } \right] > 0. \end{equation} \end{remark} \section{Numerical Simulation}\label{sec:Numerical Simulation} Consider a thin rod whose temperature distribution can be modeled by the following parabolic PDE: \begin{equation}\label{e26} \begin{aligned} \frac{{\partial x(z,t)}} {{\partial t}}= \frac{{{\partial ^2}x(z,t)}} {{\partial {z^2}}} + {\beta _U}(b_u(z)u(t) - x(z,t))+ f(z,t), \end{aligned} \end{equation} \begin{equation} \label{e27} \bm{y}(t) = \left[ \begin{gathered} {\int_0^\pi {\delta (z - \frac{\pi }{4})x(z,t)dz} } \hfill \\ {\int_0^\pi {\delta (z - \frac{3\pi }{4})x(z,t)dz} } \hfill \\ \end{gathered} \right] = \left[ \begin{gathered} {x(\frac{\pi }{4},t)} \hfill \\ {x(\frac{3\pi }{4},t)} \hfill \\ \end{gathered} \right], \end{equation} subject to the Dirichlet boundary conditions: \begin{equation} x(0,t) = 0,\;x(\pi ,t) = 0, \end{equation} where $x(z,t)$ denotes the dimensionless temperature of the rod; $\beta_U$ denotes a dimensionless heat transfer coefficient; $u(t)$ is the manipulated input (temperature of the cooling medium); $\bm{y}(t)$ collects the thermocouple measurements at the points $z =\pi/{4}$ and $z =3\pi/{4}$. The heat transfer coefficient is set to its typical value $\beta_U = 2$. The actuator distribution function is set as: \begin{displaymath} {b_u}(z) = \sqrt {\frac{2}{\pi }}\sin (z). \end{displaymath} The control input is selected as: \begin{displaymath} u(t)=1. \end{displaymath} The eigenvalue problem for the spatial differential operator: \begin{equation*} \begin{aligned} {\mathcal{A}}x &= \frac{{{\partial ^2}x}}{{\partial {z^2}}}-\beta_Ux,\;\\ x \in \mathcal{S}(\mathcal{A}) &= \{ x \in \mathcal{L}_2([{0},{\pi}];\mathbb{R}); x(0) = 0,\;x(\pi) = 0\} \end{aligned} \end{equation*} can be directly solved as: \begin{equation*} {\lambda _j} = - {j^2}-2,\;{\phi _j}(z) = \sqrt {\frac{2} {\pi }} \sin (jz),\;j = 1,\cdots,\infty. \end{equation*} Details of the solution procedure for this type of eigenvalue problem can be found in Chapter 4 of~\cite{ray1989control}. The first two eigenvalues are considered the dominant ones; thus $\varepsilon = \left| {{\lambda _1}} \right|/\left| {{\lambda _{3}}} \right| \approx 0.273$.
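As a numerical cross-check of the closed-form spectrum above, the following Python sketch (assuming only NumPy; the finite-difference discretization is an illustrative choice made here, not part of the proposed method) approximates the operator $\mathcal{A}$ on a uniform grid and compares its leading eigenvalues with $\lambda_j=-j^2-2$, recovering the ratio $\varepsilon \approx 0.273$ as well.
\begin{verbatim}
import numpy as np

# Finite-difference cross-check of A x = x'' - beta_U * x on (0, pi)
# with Dirichlet boundary conditions x(0) = x(pi) = 0.
beta_U, N = 2.0, 400
h = np.pi / (N + 1)                      # spacing of the interior grid

# Standard second-order Dirichlet Laplacian on the N interior points.
lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / h**2
A = lap - beta_U * np.eye(N)

eigs = np.sort(np.linalg.eigvalsh(A))[::-1]   # dominant (slow) modes first
for j in (1, 2, 3):
    print(j, round(eigs[j - 1], 4), -(j**2) - beta_U)  # ~ -3, -6, -11

print("eps =", abs(eigs[0]) / abs(eigs[2]))   # ~ 3/11 = 0.273
\end{verbatim}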
Using the procedures discussed in Section~\ref{sec:Adaptive fault diagnosis observer design}, the following $2$-dimensional slow subsystem is derived: \begin{equation} \label{e29} \begin{aligned} {{\dot {\bm{x}}}_s}(t) &= {{\bm{A}}_s}{\bm{x}_s}(t)+ {\bm{B}_{u,s}}\bm{u}(t)+\bm{f}_s(t), \hfill \\ \bm{y}_s(t) &= {\bm{C}_{s}{\bm{x}_s}(t)}, \hfill \\ \end{aligned} \end{equation} where \begin{equation*} \bm{x}_s(t)=\left[ \begin{gathered} x_1(t) \hfill \\ x_2(t) \hfill \\ \end{gathered} \right],\\ {\bm{A}}_s=\left[ {\begin{array}{*{20}{c}} { - 3} & 0 \\ 0 & { - 6} \\ \end{array} } \right], {\bm{B}_{u,s}}=\left[ \begin{gathered} 2 \hfill \\ 0 \hfill \\ \end{gathered} \right],\\ \end{equation*} and \begin{equation*} \bm{C}_s=\int_0^\pi \left[ \begin{gathered} {\delta (z - \frac{\pi }{4})} \hfill \\ {\delta (z - \frac{3\pi }{4})} \hfill \\ \end{gathered} \right] \left[\phi_1(z)\;\phi_2(z)\right] dz =\left[ {\begin{array}{*{20}{c}} \sqrt {\frac{1}{\pi}} & \sqrt {\frac{2}{\pi}} \\ \sqrt {\frac{1}{\pi}} & {-\sqrt {\frac{2}{\pi}}} \\ \end{array} } \right]. \end{equation*} It can be verified that Assumption 4 is satisfied; a short numerical check is sketched below. In this numerical simulation, consider the following abnormal S-T source according to Remark 4: \begin{displaymath} f(z,t)=[\phi_1(z)\;\phi_2(z)]\left[ \begin{gathered} {f}_{1}(t) \hfill \\ {f}_{2}(t) \hfill \\ \end{gathered} \right]. \end{displaymath} Abnormal source $f_1(t)$ can be interpreted as an actuator fault, since the actuator distribution is set to $b_u(z)=\phi_1(z)$. To evaluate the performance of the proposed abnormal S-T source identification method, the following two kinds of abnormal sources are considered: \begin{equation*} \begin{aligned} f_1(t) &= \left\{ \begin{gathered} 0,\;0 \leqslant t < 10(\sec ) \hfill \\ 2,\; 10 \leqslant t \leqslant 80(\sec ) \hfill \\ \end{gathered} \right. \hfill \\ f_2(t) &= \left\{ \begin{gathered} 0,\;0 \leqslant t < 40(\sec ) \hfill \\ 3,\; 40 \leqslant t \leqslant 80(\sec ) \hfill \\ \end{gathered} \right. \hfill \\ \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} f_1(t) &= \left\{ \begin{gathered} 0,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;0 \leqslant t < 10(\sec ) \hfill \\ 2-e^{-0.01(t-10)},\; 10 \leqslant t \leqslant 80(\sec ) \hfill \\ \end{gathered} \right. \hfill \\ f_2(t) &= \left\{ \begin{gathered} 0,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;0 \leqslant t < 40(\sec ) \hfill \\ 3-e^{-0.02(t-40)},\; 40 \leqslant t \leqslant 80(\sec ) \hfill \\ \end{gathered} \right. \hfill \\ \end{aligned} \end{equation*} The first is an abrupt source, while the second is an incipient one. Based on the definition of $\bm{f}_s(t)$ in (\ref{e8}), it can be obtained that: \begin{displaymath} \bm{f}_s(t)=\left[ \begin{gathered} {f}_{s1}(t) \hfill \\ {f}_{s2}(t) \hfill \\ \end{gathered} \right] =\left[ \begin{gathered} {f}_1(t) \hfill \\ {f}_2(t) \hfill \\ \end{gathered} \right]. \end{displaymath}
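The numerical check of Assumption 4 mentioned above takes only a few lines. The sketch below (assuming NumPy) assembles $\bm{A}_s$ and $\bm{C}_s$ from (\ref{e29}) and verifies both the observability of $(\bm{A}_s,\bm{C}_s)$ and the column rank of $\bm{C}_s$.
\begin{verbatim}
import numpy as np

# Check of Assumption 4 for the slow subsystem (29).
A_s = np.diag([-3.0, -6.0])
C_s = np.array([[np.sqrt(1 / np.pi),  np.sqrt(2 / np.pi)],
                [np.sqrt(1 / np.pi), -np.sqrt(2 / np.pi)]])

# Kalman observability matrix O = [C_s; C_s A_s] for m = 2 modes.
O = np.vstack([C_s, C_s @ A_s])
print("rank(O)   =", np.linalg.matrix_rank(O))    # 2 -> observable
print("rank(C_s) =", np.linalg.matrix_rank(C_s))  # 2 -> full column rank
\end{verbatim}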
According to Theorem 1, choosing $\mu_1=1, \mu_2=1$, and $\sigma=1$ and solving (\ref{e15}), (\ref{e16}), and (\ref{e26+}) by the MATLAB LMI toolbox, one can obtain: \begin{equation*} \begin{aligned} \eta&=9.4277\times 10^{-12},\\ \bm{P}&=\left[ {\begin{array}{*{20}{c}} {0.1774} & 0 \\ 0 & {0.0609} \\ \end{array} } \right], \bm{G}_1=\left[ {\begin{array}{*{20}{c}} {0.0193} & 0 \\ 0 & {0.0102} \\ \end{array} } \right],\\ \bm{G}_2&=\left[ {\begin{array}{*{20}{c}} {0.0193} & 0 \\ 0 & {0.0102} \\ \end{array} } \right], \bm{X}=\left[ {\begin{array}{*{20}{c}} {-0.1106}&-0.1106 \\ -0.1588&0.1588 \\ \end{array} } \right],\\ \bm{F}&=\left[ {\begin{array}{*{20}{c}} {0.1572}&0.1572 \\ 0.0382&-0.0382 \\ \end{array} } \right], \bm{L}=\left[ {\begin{array}{*{20}{c}} {-0.6231}&-0.6231 \\ -2.6069&2.6069 \\ \end{array} } \right]. \end{aligned} \end{equation*} Moreover, the learning rate is chosen as $\bm{\Gamma}=\mathrm{diag}(100,100)$.
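For readers without access to the MATLAB LMI toolbox, the synthesis step above can be reproduced with open-source tools. The following Python sketch is an assumed substitute based on the CVXPY package with the SCS solver (not the toolchain used in this paper); it encodes the LMI (\ref{e15}), the relaxation (\ref{e26+}), and the recovery of $\bm{L}$ from (\ref{e16}). The small margins that replace the strict inequalities are arbitrary numerical choices.
\begin{verbatim}
import cvxpy as cp
import numpy as np

# Sketch of the LMI synthesis of Theorem 1 for the example system (29).
m = ny = 2
mu1 = mu2 = sigma = 1.0
A_s = np.diag([-3.0, -6.0])
C_s = np.array([[np.sqrt(1 / np.pi),  np.sqrt(2 / np.pi)],
                [np.sqrt(1 / np.pi), -np.sqrt(2 / np.pi)]])

P  = cp.Variable((m, m), symmetric=True)
G1 = cp.Variable((m, m), symmetric=True)
G2 = cp.Variable((m, m), symmetric=True)
X  = cp.Variable((m, ny))
F  = cp.Variable((m, ny))
eps1 = cp.Variable(nonneg=True)
eta  = cp.Variable(nonneg=True)

Xi11 = P @ A_s + A_s.T @ P - X @ C_s - C_s.T @ X.T
Xi22 = (-2 * P + G1 / mu1 + G2 / mu2) / sigma
M21  = (X @ C_s - P @ A_s) / sigma
Xi = cp.bmat([[Xi11, M21.T, X],
              [M21,  Xi22,  F - X / sigma],
              [X.T,  (F - X / sigma).T, -eps1 * np.eye(ny)]])
# Symmetrize explicitly so CVXPY accepts the semidefinite constraints.
Xi = (Xi + Xi.T) / 2
relax = cp.bmat([[eta * np.eye(m), P - F @ C_s],
                 [(P - F @ C_s).T, eta * np.eye(m)]])
relax = (relax + relax.T) / 2

prob = cp.Problem(cp.Minimize(eta),
                  [Xi << -1e-6 * np.eye(2 * m + ny),  # strict form of (15)
                   relax >> 0,                        # relaxation (26+)
                   P  >> 1e-6 * np.eye(m),
                   G1 >> 1e-6 * np.eye(m),
                   G2 >> 1e-6 * np.eye(m),
                   eps1 >= 1e-6])
prob.solve(solver=cp.SCS)
L = np.linalg.solve(P.value, X.value)   # recover L from X = P L, cf. (16)
print("eta =", eta.value)
\end{verbatim}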
The original PDE system in (\ref{e26}) is solved numerically by the finite difference method (FDM) \cite{strikwerda2004finite} with the sampling time $\Delta t=0.01$ sec. By using the adaptive state observer (\ref{e12}) and the adaptive source estimation algorithm (\ref{e17}), the source estimation of the slow subsystem $\hat{\bm{f}}_s(t)=[\hat{f}_{s1}(t)\;\hat{f}_{s2}(t)]^T$ can be obtained, and the abnormal S-T source estimation for the original PDE system in (\ref{e26}) is given by: \begin{displaymath} \hat{f}(z,t)=[\phi_1(z)\;\phi_2(z)]\left[ \begin{gathered} \hat{f}_{s1}(t) \hfill \\ \hat{f}_{s2}(t) \hfill \\ \end{gathered} \right], \end{displaymath} as shown in (\ref{eqn:t/ssynthesis}). \begin{figure}[!h] \centering \subfigure[The first output of the original PDE system $y_1(t)$, the slow subsystem $y_{s1}(t)$, and its estimation $\hat{y}_{s1}(t)$]{ \includegraphics[width=3.8cm]{figs/figure001.pdf} \label{fig:2a} } \subfigure[The second output of the original PDE system $y_2(t)$, the slow subsystem $y_{s2}(t)$, and its estimation $\hat{y}_{s2}(t)$]{ \includegraphics[width=3.8cm]{figs/figure002.pdf} \label{fig:2b} } \subfigure[$\bm{f}_s(t)$ and its estimation $\hat{\bm{f}}_s(t)$]{ \includegraphics[width=3.8cm]{figs/figure003.pdf} \label{fig:2c}} \caption{Estimation results of abnormal source 1. (Abrupt source)} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=7cm]{figs/figure004.pdf} \caption{$e_f(z,t)$ of abnormal source 1. (Abrupt source).} \label{fig:ef1} \end{figure} \begin{figure}[!h] \centering \subfigure[The first output of the original PDE system $y_1(t)$, the slow subsystem $y_{s1}(t)$, and its estimation $\hat{y}_{s1}(t)$]{ \includegraphics[width=3.8cm]{figs/figure005.pdf} \label{fig:4a} } \subfigure[The second output of the original PDE system $y_2(t)$, the slow subsystem $y_{s2}(t)$, and its estimation $\hat{y}_{s2}(t)$]{ \includegraphics[width=3.8cm]{figs/figure006.pdf} \label{fig:4b} } \subfigure[$\bm{f}_s(t)$ and its estimation $\hat{\bm{f}}_s(t)$]{ \includegraphics[width=3.8cm]{figs/figure007.pdf} \label{fig:4c} } \caption{Estimation results of abnormal source 2. (Incipient source)} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=7cm]{figs/figure008.pdf} \caption{$e_f(z,t)$ of abnormal source 2. (Incipient source).} \label{fig:ef2} \end{figure} The simulation results for both abnormal sources are presented in Fig.~\ref{fig:2a}-Fig.~\ref{fig:ef1} and Fig.~\ref{fig:4a}-Fig.~\ref{fig:ef2}, respectively. From Fig.~\ref{fig:2a}, Fig.~\ref{fig:2b}, Fig.~\ref{fig:4a}, and Fig.~\ref{fig:4b}, it can be found that some minor errors exist between the slow subsystem output $\bm{y}_s(t)$ in (\ref{e29}) and the output of the original PDE system $\bm{y}(t)$ in (\ref{e27}), which are caused by neglecting the fast subsystem. Meanwhile, the output estimation $\hat{\bm{y}}_s(t)$ in (\ref{e12}) tracks the original PDE system output $\bm{y}(t)$ rapidly under both abnormal sources. In addition, there exist minor errors between $\bm{f}_s(t)$ and its estimation $\hat{\bm{f}}_s(t)$, which are also caused by the truncation error introduced by neglecting the fast subsystem. Moreover, the abnormal S-T source estimation $\hat{f}(z,t)$ tracks $f(z,t)$ with minor errors, as shown in Fig.~\ref{fig:ef1} and Fig.~\ref{fig:ef2}, and the occurrence times of the abnormal sources ($t=10$ sec and $t=40$ sec) can be detected for both kinds of sources. To better illustrate the performance of the proposed abnormal S-T source identification approach, the root mean squared error (RMSE) performance index is defined as: \begin{displaymath} \text{RMSE}=\left(\frac{\int {\sum\nolimits_t {{e_f}{{(z,t)}^2}}}\,dz}{\int dz\,{\sum\nolimits_t {\Delta t}}}\right)^{1/2}, \end{displaymath} where the sums run over the sampling instants (a short numerical sketch for evaluating this index is given below). The calculated RMSE values for the abrupt and incipient abnormal source estimations in Fig.~\ref{fig:ef1} and Fig.~\ref{fig:ef2} are $0.2007$ and $0.1919$, respectively. The main reason for the existence of the estimation error $e_f(z,t)$ is the neglect of the fast subsystem, as discussed in Remark 2. For simplicity of illustration, the above S-T source is generated using the eigenfunctions of the spatial operator $\mathcal{A}$ as the basis functions. To further study the effectiveness of the proposed method, consider a general S-T source $f(z,t)$ as follows: \begin{displaymath} f(z,t)=f(t)b_f(z), \end{displaymath} where \begin{equation*} \begin{aligned} b_f(z)&= H(z)-H(z-\pi/4),\\ f(t)&=\left\{\begin{array}{l}{0,\;0 \leqslant t<10(\sec )} \\ {2,\; 10 \leqslant t \leqslant 80(\mathrm{sec})}\end{array}\right. \end{aligned} \end{equation*} and $H(\cdot)$ denotes the standard Heaviside function. In this test, $n_y$ point-wise measurements are \textbf{uniformly distributed} over the spatial domain $[0,\pi]$, and the first $m$ eigenvalues are selected as the \textbf{dominant modes}. The abnormal source estimation results, evaluated with the RMSE index, are provided in Table~\ref{tab:S-T results}. \begin{table}[!h] \renewcommand{\arraystretch}{1} \caption{Identification results of a general S-T source} \centering \label{tab:S-T results} \setlength{\tabcolsep}{5.5mm}{ \begin{tabular}{l l l l} \hline\hline \\[-2mm] \multicolumn{1}{c}{$(m, n_y)$} & \multicolumn{1}{c}{$\bm{\Gamma}$} & \multicolumn{1}{c}{RMSE} & \multicolumn{1}{c}{Ideal RMSE} \\[1.6ex] \hline (2, 2) & $100\,\bm{I}_2$ & 0.7709 & 0.7497\\ (2, 3) & $100\,\bm{I}_2$ & 0.7517 & 0.7497 \\ (3, 3) & $100\,\bm{I}_3$ & 0.6377 & 0.5901\\ (2, 4) & $100\,\bm{I}_2$ & 0.7518 & 0.7497\\ (3, 4) & $100\,\bm{I}_3$ & 0.6454 & 0.5901 \\ (4, 4) & $100\,\bm{I}_4$ & 0.5102 & 0.4286\\ \hline\hline \end{tabular} } \end{table} As shown in Table~\ref{tab:S-T results}, the performance of the proposed method on this S-T source is not as good as on the S-T sources generated by the same basis functions. The reason is evident: for this kind of source, the truncation error is relatively large, and a larger number of basis functions is required for an accurate approximation.
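As a concrete reference for how the RMSE index defined above can be evaluated on the space--time grid, the following Python sketch transcribes the formula directly; the grids and the synthetic error field are placeholders chosen for illustration, not the simulation data of this paper.
\begin{verbatim}
import numpy as np

# Direct transcription of the RMSE index: the sums run over the sampling
# instants t_k (spacing dt) and the z-integrals are approximated on a
# uniform grid of spacing dz.
def rmse(e_f, dz, dt):
    """e_f: samples of e_f(z, t_k) with shape (n_z, n_t)."""
    n_z, n_t = e_f.shape
    num = np.sum(e_f ** 2) * dz        # int sum_t e_f(z, t)^2 dz
    den = (n_z * dz) * (n_t * dt)      # int dz * sum_t dt
    return np.sqrt(num / den)

z = np.linspace(0.0, np.pi, 101)
t = np.arange(0.0, 80.0, 0.01)
e = 0.1 * np.outer(np.sin(z), np.exp(-0.05 * t))  # placeholder e_f(z, t)
print("RMSE =", rmse(e, z[1] - z[0], t[1] - t[0]))
\end{verbatim}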
It can also be observed from this table that the RMSE decreases as the number of dominant modes $m$ increases, which is reasonable. To better illustrate the performance of the proposed method, another performance index, named ``Ideal RMSE'', is introduced as: \begin{equation*} \begin{aligned} e_{\text{ideal}f}(z,t)&=\boldsymbol{\phi}_{s}^{T}(z) \boldsymbol{f}_{s}(t)-f(z,t),\\ \boldsymbol{f}_{s}(t)&=\left\langle\boldsymbol{\phi}_{s}(z), b_f(z)\right\rangle f(t),\\ \text{Ideal RMSE}&=\left(\frac{\int \sum\nolimits_t e_{\text{ideal}f}(z, t)^{2}\, d z}{\int d z \sum\nolimits_t \Delta t}\right)^{1 / 2}, \end{aligned} \end{equation*} which gives a lower bound on the RMSE. Comparing the RMSE with the Ideal RMSE, it can be found that the proposed method attains acceptable results on this S-T source. For S-T sources generated by the same basis functions, the Ideal RMSE is clearly zero. In fact, for a given number of sensors, how to place them and how to choose the learning rate $\bm{\Gamma}$ so as to obtain better estimation performance is an interesting multi-objective optimization problem. In this manuscript, the focus is on providing a general framework for solving such an inverse source estimation problem rather than on achieving the best possible estimation performance, which is challenging in its own right. We will address such problems in future work. \begin{remark} Since the requirements of the modulating functions-based method are too restrictive, it is neither fair nor meaningful to compare the performance of the two methods on the same problem. To be more specific, on the one hand, if the requirements of the modulating functions-based method cannot be met, that method simply does not work; on the other hand, since this paper aims to relax these restrictions for industrial applications, the significance of the proposed method would be largely diminished if these requirements could be met, regardless of the comparison results. \end{remark} \section{Conclusion}\label{sec:Conclusion} In this paper, the abnormal S-T source identification problem for a class of linear parabolic DPSs is investigated for the first time. An inverse S-T model for abnormal source identification is developed, which consists of an adaptive state observer and an adaptive source estimation algorithm. Theoretical analysis is provided to guarantee the uniform ultimate boundedness of the abnormal S-T source estimation error. Finally, numerical simulations on a heated rod with an abnormal S-T source are presented to evaluate the performance of the proposed method. Future research will extend the proposed framework to abnormal S-T source identification for nonlinear DPSs.